Merge remote-tracking branch 'origin/master' into topic/robin/pktsrc

Conflicts:
	configure
	src/CMakeLists.txt
	src/Net.cc
	src/PacketSort.cc
	src/PacketSort.h
	src/RemoteSerializer.cc
	src/Sessions.cc
	src/Sessions.h
commit bf6dd2e9ca
Author: Robin Sommer
Date:   2014-08-22 15:38:24 -07:00

794 changed files with 119018 additions and 91558 deletions

.gitmodules

@ -16,9 +16,9 @@
[submodule "cmake"]
path = cmake
url = git://git.bro.org/cmake
[submodule "magic"]
path = magic
url = git://git.bro.org/bromagic
[submodule "src/3rdparty"]
path = src/3rdparty
url = git://git.bro.org/bro-3rdparty
[submodule "aux/plugins"]
path = aux/plugins
url = git://git.bro.org/bro-plugins

CHANGES

@ -1,4 +1,912 @@
2.3-121 | 2014-08-22 15:22:15 -0700
* Detect functions that try to bind variables from an outer scope
and raise an error saying that's not supported. Addresses
BIT-1233. (Jon Siwek)
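A minimal sketch of the now-rejected pattern (the name "make_adder"
and its parameters are illustrative, not from the release):

    function make_adder(n: count): function(m: count): count
        {
        # Bro now raises an error here: the anonymous function
        # tries to bind "n" from the enclosing scope, which is
        # not supported.
        return function(m: count): count { return m + n; };
        }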
2.3-116 | 2014-08-21 16:04:13 -0500
* Adding plugin testing to Makefile's test-all. (Robin Sommer)
* Converting log writers and input readers to plugins.
DataSeries and ElasticSearch plugins have moved to the new
bro-plugins repository, which is now a git submodule in the
aux/plugins directory. (Robin Sommer)
2.3-98 | 2014-08-19 11:03:46 -0500
* Silence some doc-related warnings when using `bro -e`.
Closes BIT-1232. (Jon Siwek)
* Fix possible null ptr derefs reported by Coverity. (Jon Siwek)
2.3-96 | 2014-08-01 14:35:01 -0700
* Small change to DHCP documentation. In server->client messages the
host name may differ from the one requested by the client.
(Johanna Amann)
* Split DHCP log writing from record creation. This allows users to
customize dhcp.log by changing the record in their own dhcp_ack
event. (Johanna Amann)
* Update PATH so that documentation btests can find bro-cut. (Daniel
Thayer)
* Remove gawk from list of optional packages in documentation.
(Daniel Thayer)
* Fix for redefining built-in constants. (Robin Sommer)
2.3-86 | 2014-07-31 14:19:58 -0700
* Fix for redefining built-in constants. (Robin Sommer)
* Adding missing check that a plugin's API version matches what Bro
defines. (Robin Sommer)
* Adding NEWS entry for plugins. (Robin Sommer)
2.3-83 | 2014-07-30 16:26:11 -0500
* Minor adjustments to plugin code/docs. (Jon Siwek)
* Dynamic plugin support. (Robin Sommer)
Bro now supports extending core functionality, like protocol and
file analysis, dynamically with external plugins in the form of
shared libraries. See doc/devel/plugins.rst for an overview of the
main functionality. Changes coming with this:
- Replacing the old Plugin macro magic with a new API.
- The plugin API changed to generally use std::strings instead
of const char*.
- There are a number of invocations of PLUGIN_HOOK_
{VOID,WITH_RESULT} across the code base, which allow plugins
to hook into the processing at those locations.
- A few new accessor methods to various classes to allow
plugins to get to that information.
- network_time cannot be just assigned to anymore; there's now a
function net_update_time() for that.
- Redoing how builtin variables are initialized, so that it
works for plugins as well. No more init_net_var(), but
instead bifcl-generated code that registers them.
- Various changes for adjusting to the now dynamic generation
of analyzer instances.
- same_type() gets an optional extra argument allowing record type
comparison to ignore if field names don't match. (Robin Sommer)
- Further unify file analysis API with the protocol analyzer API
(assigning IDs to analyzers; adding Init()/Done() methods;
adding subtypes). (Robin Sommer)
- A new command line option -Q that prints some basic execution
time stats. (Robin Sommer)
- Add support to the file analysis for activating analyzers by
MIME type. (Robin Sommer)
- File::register_for_mime_type(tag: Analyzer::Tag, mt:
string): Associates a file analyzer with a MIME type.
- File::add_analyzers_for_mime_type(f: fa_file, mtype:
string): Activates all analyzers registered for a MIME
type for the file.
- The default file_new() handler calls
File::add_analyzers_for_mime_type() with the file's MIME
type.
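A hedged sketch of the new registration API; the analyzer tag
Files::ANALYZER_MD5 is an assumed example value:

    event bro_init()
        {
        # Run the MD5 file analyzer on any file identified as PDF.
        # Files::ANALYZER_MD5 is an assumed example tag.
        File::register_for_mime_type(Files::ANALYZER_MD5, "application/pdf");
        }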
2.3-20 | 2014-07-22 17:41:02 -0700
* Updating submodule(s).
2.3-19 | 2014-07-22 17:29:19 -0700
* Implement bytestring_to_coils() in Modbus analyzer so that coils
gets passed to the corresponding events. (Hui Lin)
* Add length field to ModbusHeaders. (Hui Lin)
2.3-12 | 2014-07-10 19:17:37 -0500
* Include yield of vectors in Broxygen's type descriptions.
Addresses BIT-1217. (Jon Siwek)
2.3-11 | 2014-07-10 14:49:27 -0700
* Fixing DataSeries output. It was using a now illegal value as its
default compression level. (Robin Sommer)
2.3-7 | 2014-06-26 17:35:18 -0700
* Extending "make test-all" to include aux/bro-aux. (Robin Sommer)
2.3-6 | 2014-06-26 17:24:10 -0700
* DataSeries compilation issue fixed. (mlaterman)
* Fix a reference counting bug in ListVal ctor. (Jon Siwek)
2.3-3 | 2014-06-26 15:41:04 -0500
* Support tilde expansion when Bro tries to find its own path. (Jon
Siwek)
2.3-2 | 2014-06-23 16:54:15 -0500
* Remove references to line numbers in tutorial text. (Daniel Thayer)
2.3 | 2014-06-16 09:48:25 -0500
* Release 2.3.
2.3-beta-33 | 2014-06-12 11:59:28 -0500
* Documentation improvements/fixes. (Daniel Thayer)
2.3-beta-24 | 2014-06-11 15:35:31 -0500
* Fix SMTP state tracking when server response is missing.
(Robin Sommer)
2.3-beta-22 | 2014-06-11 12:31:38 -0500
* Fix doc/test that broke due to a Bro script change. (Jon Siwek)
* Remove unused --with-libmagic configure option. (Jon Siwek)
2.3-beta-20 | 2014-06-10 18:16:51 -0700
* Fix use-after-free in some cases of reassigning a table index.
Addresses BIT-1202. (Jon Siwek)
2.3-beta-18 | 2014-06-06 13:11:50 -0700
* Add two more SSL events, one triggered for each handshake message
and one triggered for the tls change cipherspec message. (Bernhard
Amann)
* Small SSL bug fix. In case SSL::disable_analyzer_after_detection
was set to false, the ssl_established event would fire after each
data packet once the session is established. (Bernhard Amann)
2.3-beta-16 | 2014-06-06 13:05:44 -0700
* Re-activate notice suppression for expiring certificates.
(Bernhard Amann)
2.3-beta-14 | 2014-06-05 14:43:33 -0700
* Add new TLS extension type numbers from IANA (Bernhard Amann)
* Switch to double hashing for Bloomfilters for better performance.
(Matthias Vallentin)
* Bugfix to use full digest length instead of just one byte for
Bloomfilter's universal hash function. Addresses BIT-1140.
(Matthias Vallentin)
* Make buffer for X509 certificate subjects larger. Addresses
BIT-1195 (Bernhard Amann)
2.3-beta-5 | 2014-05-29 15:34:42 -0500
* Fix misc/load-balancing.bro's reference to
PacketFilter::sampling_filter (Jon Siwek)
2.3-beta-4 | 2014-05-28 14:55:24 -0500
* Fix potential mem leak in remote function/event unserialization.
(Jon Siwek)
* Fix reference counting bug in table coercion expressions (Jon Siwek)
* Fix an "unused value" warning. (Jon Siwek)
* Remove a duplicate unit test baseline dir. (Jon Siwek)
2.3-beta | 2014-05-19 16:36:50 -0500
* Release 2.3-beta
* Clean up OpenSSL data structures on exit. (Bernhard Amann)
* Fixes for OCSP & x509 analysis memory leak issues. (Bernhard Amann)
* Remove remaining references to BROMAGIC (Daniel Thayer)
* Fix typos and formatting in event and BiF documentation (Daniel Thayer)
* Update intel framework plugin for ssl server_name extension API
changes. (Bernhard Amann, Justin Azoff)
* Fix expression errors in SSL/x509 scripts when unparseable data
is in certificate chain. (Bernhard Amann)
2.2-478 | 2014-05-19 15:31:33 -0500
* Change record ctors to only allow record-field-assignment
expressions. (Jon Siwek)
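A short sketch of the restriction; the record type "MyRec" is
illustrative:

    type MyRec: record { a: count; b: string; };
    event bro_init()
        {
        local r = MyRec($a = 1, $b = "x");  # allowed: field assignments only
        # local bad = MyRec(1, "x");        # now rejected: positional args
        }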
2.2-477 | 2014-05-19 14:13:00 -0500
* Fix X509::Result record's "result" field to be set internally as type int instead of type count. (Bernhard Amann)
* Fix a couple of doc build warnings (Daniel Thayer)
2.2-470 | 2014-05-16 15:16:32 -0700
* Add a new section "Cluster Configuration" to the docs that is
intended as a how-to for configuring a Bro cluster. Most of this
content was moved here from the BroControl doc (which is now
intended as more of a reference guide for more experienced users)
and the load balancing FAQ on the website. (Daniel Thayer)
* Update some doc tests and line numbers (Daniel Thayer)
2.2-457 | 2014-05-16 14:38:31 -0700
* New script policy/protocols/ssl/validate-ocsp.bro that adds OCSP
validation to ssl.log. The work is done by a new bif
x509_ocsp_verify(). (Bernhard Amann)
* STARTTLS support for POP3 and SMTP. The SSL analyzer takes over
when seen. smtp.log now logs when a connection switches to SSL.
(Bernhard Amann)
* Replace errors when parsing x509 certs with weirds. (Bernhard
Amann)
* Improved Heartbleed attack/scan detection. (Bernhard Amann)
* Let TLS analyzer fail better when no longer in sync with the data
stream. (Bernhard Amann)
2.2-444 | 2014-05-16 14:10:32 -0500
* Disable all default AppStat plugins except facebook. (Jon Siwek)
* Update for the active http test to force it to use ipv4. (Seth Hall)
2.2-441 | 2014-05-15 11:29:56 -0700
* A new RADIUS analyzer. (Vlad Grigorescu)
It produces a radius.log and generates two events:
event radius_message(c: connection, result: RADIUS::Message);
event radius_attribute(c: connection, attr_type: count, value: string);
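A handler sketch against the signatures above; treating attribute
type 1 as User-Name follows RFC 2865 and is an assumption, not taken
from this release:

    event radius_attribute(c: connection, attr_type: count, value: string)
        {
        # Attribute type 1 is User-Name per RFC 2865.
        if ( attr_type == 1 )
            print fmt("RADIUS User-Name from %s: %s", c$id$orig_h, value);
        }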
2.2-427 | 2014-05-15 13:37:23 -0400
* Fix dynamic SumStats update on clusters (Bernhard Amann)
2.2-425 | 2014-05-08 16:34:44 -0700
* Fix reassembly of data w/ sizes beyond 32-bit capacities. (Jon Siwek)
Reassembly code (e.g. for TCP) now uses int64/uint64 (signedness
is situational) data types in place of int types in order to
support delivering data to analyzers that pass 2GB thresholds.
There's also changes in logic that accompany the change in data
types, e.g. to fix TCP sequence space arithmetic inconsistencies.
Another significant change is in the Analyzer API: the *Packet and
*Undelivered methods now use a uint64 in place of an int for the
relative sequence space offset parameter.
Addresses BIT-348.
* Fixing compiler warnings. (Robin Sommer)
* Update SNMP analyzer's DeliverPacket method signature. (Jon Siwek)
2.2-417 | 2014-05-07 10:59:22 -0500
* Change handling of atypical OpenSSL error case in x509 verification. (Jon Siwek)
* Fix memory leaks in X509 certificate parsing/verification. (Jon Siwek)
* Fix new []/delete mismatch in input::reader::Raw::DoClose(). (Jon Siwek)
* Fix buffer over-reads in file_analysis::Manager::Terminate() (Jon Siwek)
* Fix buffer overflows in IP address masking logic. (Jon Siwek)
That could occur either in taking a zero-length mask on an IPv6 address
(e.g. [fe80::]/0) or a reverse mask of length 128 on any address (e.g.
via the remask_addr BuiltIn Function).
* Fix new []/delete mismatch in ~Base64Converter. (Jon Siwek)
2.2-410 | 2014-05-02 12:49:53 -0500
* Replace an unneeded OPENSSL_malloc call. (Jon Siwek)
2.2-409 | 2014-05-02 12:09:06 -0500
* Clean up and documentation for base SNMP script. (Jon Siwek)
* Update base SNMP script to now produce a snmp.log. (Seth Hall)
* Add DH support to SSL analyzer. When using DHE or DH-Anon, server
key parameters are now available in scriptland. Also add script to
alert on weak certificate keys or weak dh-params. (Bernhard Amann)
* Add a few more ciphers Bro did not know at all so far. (Bernhard Amann)
* Log chosen curve when using ec cipher suite in TLS. (Bernhard Amann)
2.2-397 | 2014-05-01 20:29:20 -0700
* Fix reference counting for lookup_ID() usages. (Jon Siwek)
2.2-395 | 2014-05-01 20:25:48 -0700
* Fix missing "irc-dcc-data" service field from IRC DCC connections.
(Jon Siwek)
* Correct a notice for heartbleed. The notice is thrown correctly,
just the message contained wrong values. (Bernhard Amann)
* Improve/standardize some malloc/realloc return value checks. (Jon
Siwek)
* Improve file analysis manager shutdown/cleanup. (Jon Siwek)
2.2-388 | 2014-04-24 18:38:07 -0700
* Fix decoding of MIME quoted-printable. (Mareq)
2.2-386 | 2014-04-24 18:22:29 -0700
* Do an Intel::ADDR lookup for host field if we find an IP address
there. (jshlbrd)
2.2-381 | 2014-04-24 17:08:45 -0700
* Add Java version to software framework. (Brian Little)
2.2-379 | 2014-04-24 17:06:21 -0700
* Remove unused Val::attribs member. (Jon Siwek)
2.2-377 | 2014-04-24 16:57:54 -0700
* A larger set of SSL improvements and extensions. Addresses
BIT-1178. (Bernhard Amann)
- Fixes TLS protocol version detection. It also should
bail-out correctly on non-tls-connections now
- Adds support for a few TLS extensions, including
server_name, alpn, and ec-curves.
- Adds support for the heartbeat events.
- Add Heartbleed detector script.
- Adds basic support for OCSP stapling.
* Fix parsing of DNS TXT RRs w/ multiple character-strings.
Addresses BIT-1156. (Jon Siwek)
2.2-353 | 2014-04-24 16:12:30 -0700
* Adapt HTTP partial content to cache file analysis IDs. (Jon Siwek)
* Adapt SSL analyzer to generate file analysis handles itself. (Jon
Siwek)
* Adapt more of HTTP analyzer to use cached file analysis IDs. (Jon
Siwek)
* Adapt IRC/FTP analyzers to cache file analysis IDs. (Jon Siwek)
* Refactor regex/signature AcceptingSet data structure and usages.
(Jon Siwek)
* Enforce data size limit when checking files for MIME matches. (Jon
Siwek)
* Refactor file analysis file ID lookup. (Jon Siwek)
2.2-344 | 2014-04-22 20:13:30 -0700
* Refactor various hex escaping code. (Jon Siwek)
2.2-341 | 2014-04-17 18:01:41 -0500
* Fix duplicate DNS log entries. (Robin Sommer)
2.2-341 | 2014-04-17 18:01:01 -0500
* Refactor initialization of ASCII log writer options. (Jon Siwek)
* Fix a memory leak in ASCII log writer. (Jon Siwek)
2.2-338 | 2014-04-17 17:48:17 -0500
* Disable input/logging threads setting their names on every
heartbeat. (Jon Siwek)
* Fix bug when clearing Bloom filter contents. Reported by
@colonelxc. (Matthias Vallentin)
2.2-335 | 2014-04-10 15:04:57 -0700
* Small logic fix for main SSL script. (Bernhard Amann)
* Update DPD signatures for detecting TLS 1.2. (Bernhard Amann)
* Remove unused data member of SMTP_Analyzer to silence a Coverity
warning. (Jon Siwek)
* Fix missing @load dependencies in some scripts. Also update the
unit test which is supposed to catch such errors. (Jon Siwek)
2.2-326 | 2014-04-08 15:21:51 -0700
* Add SNMP datagram parsing support. This supports parsing of SNMPv1
(RFC 1157), SNMPv2 (RFC 1901/3416), and SNMPv3 (RFC 3412). An
event is raised for each SNMP PDU type, though there are currently
no event handlers for them and no default snmp.log either.
However, the simple presence of SNMP is now visible in the conn.log
service field and in known_services.log. (Jon Siwek)
2.2-319 | 2014-04-03 15:53:25 -0700
* Improve __load__.bro creation for .bif.bro stubs. (Jon Siwek)
2.2-317 | 2014-04-03 10:51:31 -0400
* Add a uid field to the signatures.log. Addresses BIT-1171
(Anthony Verez)
2.2-315 | 2014-04-01 16:50:01 -0700
* Change logging's "#types" description of sets to "set". Addresses
BIT-1163 (Bernhard Amann)
2.2-313 | 2014-04-01 16:40:19 -0700
* Fix a couple nits reported by Coverity. (Jon Siwek)
* Fix potential memory leak in IP frag reassembly reported by
Coverity. (Jon Siwek)
2.2-310 | 2014-03-31 18:52:22 -0700
* Fix memory leak and unchecked dynamic cast reported by Coverity.
(Jon Siwek)
* Fix potential memory leak in x509 parser reported by Coverity.
(Bernhard Amann)
2.2-304 | 2014-03-30 23:05:54 +0200
* Replace libmagic w/ Bro signatures for file MIME type
identification. Addresses BIT-1143. (Jon Siwek)
Includes:
- libmagic is no longer used at all. All MIME type detection is
done through new Bro signatures, and there's no longer a means
to get verbose file type descriptions. The majority of the
default file magic signatures are derived from the default magic
database of libmagic ~5.17.
- File magic signatures consist of two new constructs in the
signature rule parsing grammar: "file-magic" gives a regular
expression to match against, and "file-mime" gives the MIME type
string of content that matches the magic and an optional strength
value for the match. (See the sketch after this list.)
- Modified signature/rule syntax for identifiers: they can no
longer start with a '-', which made for ambiguous syntax when
doing negative strength values in "file-mime". Also brought
syntax for Bro script identifiers in line with reality (they
can't start with numbers or include '-' at all).
- A new built-in function, "file_magic", can be used to get all
file magic matches and their corresponding strength against a
given chunk of data.
- The second parameter of the "identify_data" built-in function
can no longer be used to get verbose file type descriptions,
though it can still be used to get the strongest matching file
magic signature.
- The "file_transferred" event's "descr" parameter no longer
contains verbose file type descriptions.
- The BROMAGIC environment variable no longer changes any behavior
in Bro as magic databases are no longer used/installed.
- Removed "binary" and "octet-stream" mime type detections. They
don't provide any more information than an uninitialized
mime_type field which implicitly means no magic signature
matches and so the media type is unknown to Bro.
- The "fa_file" record now contains a "mime_types" field that
contains all magic signatures that matched the file content
(where the "mime_type" field is just a shortcut for the
strongest match).
- Reverted back to minimum requirement of CMake 2.6.3 from 2.8.0.
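As an illustration of the two constructs, a minimal file magic
signature; the exact pattern and strength value here are assumptions,
not copied from the shipped database:

    signature file-pdf {
        file-magic /^%PDF-/
        file-mime "application/pdf", 80
    }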
* The logic for adding file ids to {orig,resp}_fuids fields of the
http.log incorrectly depended on the state of
{orig,resp}_mime_types fields, so sometimes not all file ids
associated w/ the session were logged. (Jon Siwek)
* Fix MHR script's use of fa_file$mime_type before checking if it's
initialized. (Jon Siwek)
2.2-294 | 2014-03-30 22:08:25 +0200
* Rework and move X509 certificate processing from the SSL protocol
analyzer to a dedicated file analyzer. This will allow us to
examine X509 certificates from sources other than SSL in the
future. Furthermore, Bro now parses more fields and extensions
from the certificates (e.g. elliptic curve information, subject
alternative names, basic constraints). Certificate validation also
was improved, should be easier to use and exposes information like
the full verified certificate chain. (Bernhard Amann)
This update changes the format of ssl.log and adds a new x509.log
with certificate information. Furthermore all x509 events and
handling functions have changed.
2.2-271 | 2014-03-30 20:25:17 +0200
* Add unit tests covering vector/set/table ctors/inits. (Jon Siwek)
* Fix parsing of "local" named table constructors. (Jon Siwek)
* Improve type checking of records. Addresses BIT-1159. (Jon Siwek)
2.2-267 | 2014-03-30 20:21:43 +0200
* Improve documentation of Bro clusters. Addresses BIT-1160.
(Daniel Thayer)
2.2-263 | 2014-03-30 20:19:05 +0200
* Don't include locations into serialization when cloning values.
(Robin Sommer)
2.2-262 | 2014-03-30 20:12:47 +0200
* Refactor SerializationFormat::EndWrite and ChunkedIO::Chunk memory
management. (Jon Siwek)
* Improve SerializationFormat's write buffer growth strategy. (Jon
Siwek)
* Add --parse-only option to exit after parsing scripts. May be
useful for syntax-checking tools. (Jon Siwek)
2.2-256 | 2014-03-30 19:57:28 +0200
* For the summary statistics framework, change all &create_expire
attributes to &read_expire in the cluster part. (Bernhard Amann)
2.2-254 | 2014-03-30 19:55:22 +0200
* Update instructions on how to build Bro docs. (Daniel Thayer)
2.2-251 | 2014-03-28 08:37:37 -0400
* Quick fix to the ElasticSearch writer. (Seth Hall)
2.2-250 | 2014-03-19 17:20:55 -0400
* Improve performance of MHR script by reducing cloned Vals in
a "when" scope. (Jon Siwek)
2.2-248 | 2014-03-19 14:47:40 -0400
* Make SumStats work incrementally and non-blocking in non-cluster
mode, but force it to operate by blocking if Bro is shutting
down. (Seth Hall)
2.2-244 | 2014-03-17 08:24:17 -0700
* Fix compile error on FreeBSD caused by wrong include file order.
(Bernhard Amann)
2.2-240 | 2014-03-14 10:23:54 -0700
* Derive results of DNS lookups from input when in BRO_DNS_FAKE
mode. Addresses BIT-1134. (Jon Siwek)
* Fixing a few cases of undefined behaviour introduced by recent
formatter work.
* Fixing compiler error. (Robin Sommer)
* Fixing (very unlikely) double delete in HTTP analyzer when
decapsulating CONNECTs. (Robin Sommer)
2.2-235 | 2014-03-13 16:21:19 -0700
* The Ascii writer has a new option LogAscii::use_json for writing
out logs as JSON; see the sketch below. (Seth Hall)
* Ascii input reader now supports all config options as per-input
stream "config" values. (Seth Hall)
* Refactored formatters and updated the writers a bit. (Seth Hall)
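A one-line sketch of enabling the JSON option globally:

    redef LogAscii::use_json = T;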
2.2-229 | 2014-03-13 14:58:30 -0700
* Refactoring analyzer manager code to reuse
ApplyScheduledAnalyzers(). (Robin Sommer)
2.2-228 | 2014-03-13 14:25:53 -0700
* Teach async DNS lookup builtin-functions about BRO_DNS_FAKE.
Addresses BIT-1134. (Jon Siwek)
* Enable fake DNS mode for test suites.
* Improve analysis of TCP SYN/SYN-ACK reversal situations. (Jon
Siwek)
- Since it's just the handshake packets out of order, they're no
longer treated as partial connections, which some protocol analyzers
immediately refuse to look at.
- The TCP_Reassembler "is_orig" state failed to change, which led to
protocol analyzers sometimes using the wrong value for that.
- Add a unit test which exercises the Connection::FlipRoles() code
path (i.e. the SYN/SYN-ACK reversal situation).
Addresses BIT-1148.
* Fix bug in Connection::FlipRoles. It didn't swap address values
right and also didn't consider that analyzers might be scheduled
for the new connection tuple. Reported by Kevin McMahon. Addresses
BIT-1148. (Jon Siwek)
2.2-221 | 2014-03-12 17:23:18 -0700
* Teach configure script --enable-jemalloc, --with-jemalloc.
Addresses BIT-1128. (Jon Siwek)
2.2-218 | 2014-03-12 17:19:45 -0700
* Improve DBG_LOG macro (perf. improvement for --enable-debug mode).
(Jon Siwek)
* Silences some documentation warnings from Sphinx. (Jon Siwek)
2.2-215 | 2014-03-10 11:10:15 -0700
* Fix non-deterministic logging of unmatched DNS msgs. Addresses
BIT-1153 (Jon Siwek)
2.2-213 | 2014-03-09 08:57:37 -0700
* No longer accidentally attempting to parse NBSTAT RRs as SRV RRs
in DNS analyzer. (Seth Hall)
* Fix DNS SRV responses and a small issue with NBNS queries and
label length. (Seth Hall)
- DNS SRV responses never had the code written to actually
generate the dns_SRV_reply event. Adding this required
extending the event a bit to add extra information. SRV responses
now appear in the dns.log file correctly.
- Fixed an issue where some Microsoft NetBIOS Name Service lookups
would exceed the max label length for DNS and cause an incorrect
"DNS_label_too_long" weird.
2.2-210 | 2014-03-06 22:52:36 -0500
* Improve SSL logging so that connections are logged even when the
ssl_established event is not generated as well as other small SSL
fixes. (Bernhard Amann)
2.2-206 | 2014-03-03 16:52:28 -0800
* HTTP CONNECT proxy support. The HTTP analyzer now supports
handling HTTP CONNECT proxies. (Seth Hall)
* Expanding the HTTP methods used in the DPD signature to detect
HTTP traffic. (Seth Hall)
* Fixing removal of support analyzers. (Robin Sommer)
2.2-199 | 2014-03-03 16:34:20 -0800
* Allow iterating over bif functions with result type vector of any.
This changes the internal type that is used to signal that a
vector is unspecified from any to void. Addresses BIT-1144
(Bernhard Amann)
2.2-197 | 2014-02-28 15:36:58 -0800
* Remove test code. (Robin Sommer)
2.2-194 | 2014-02-28 14:50:53 -0800
* Remove packet sorter. Addresses BIT-700. (Bernhard Amann)
2.2-192 | 2014-02-28 09:46:43 -0800
* Update Mozilla root bundle. (Bernhard Amann)
2.2-190 | 2014-02-27 07:34:44 -0800
* Adjust timings of a few leak tests. (Bernhard Amann)
2.2-187 | 2014-02-25 07:24:42 -0800
* More Google TLS extensions that are being actively used. (Bernhard
Amann)
* Remove unused, and potentially unsafe, function
ListVal::IncludedInString. (Bernhard Amann)
2.2-184 | 2014-02-24 07:28:18 -0800
* New TLS constants from
https://tools.ietf.org/html/draft-bmoeller-tls-downgrade-scsv-01.
(Bernhard Amann)
2.2-180 | 2014-02-20 17:29:14 -0800
* New SSL alert descriptions from
https://tools.ietf.org/html/draft-ietf-tls-applayerprotoneg-04.
(Bernhard Amann)
* Update SQLite. (Bernhard Amann)
2.2-177 | 2014-02-20 17:27:46 -0800
* Update to libmagic version 5.17. Addresses BIT-1136. (Jon Siwek)
2.2-174 | 2014-02-14 12:07:04 -0800
* Support for MPLS over VLAN. (Chris Kanich)
2.2-173 | 2014-02-14 10:50:15 -0800
* Fix misidentification of SOCKS traffic that in particular seemed
to happen a lot with DCE/RPC traffic. (Vlad Grigorescu)
2.2-170 | 2014-02-13 16:42:07 -0800
* Refactor DNS script's state management to improve performance.
(Jon Siwek)
* Revert "Expanding the HTTP methods used in the signature to detect
HTTP traffic." (Robin Sommer)
2.2-167 | 2014-02-12 20:17:39 -0800
* Increase timeouts of some unit tests. (Jon Siwek)
* Fix memory leak in modbus analyzer. Would happen if there's a
'modbus_read_fifo_queue_response' event handler. (Jon Siwek)
* Add channel_id TLS extension number. This number is not IANA
defined, but we see it being actively used. (Bernhard Amann)
* Test baseline updates for DNS change. (Robin Sommer)
2.2-158 | 2014-02-09 23:45:39 -0500
* Change dns.log to include only standard DNS queries. (Jon Siwek)
* Improve DNS analysis. (Jon Siwek)
- Fix parsing of empty question sections (when QDCOUNT == 0). In this
case, the DNS parser would extract two 2-byte fields for use in either
"dns_query_reply" or "dns_rejected" events (dependent on value of
RCODE) as qclass and qtype parameters. This is not correct, because
such fields don't actually exist in the DNS message format when
QDCOUNT is 0. As a result, these events are no longer raised when
there's an empty question section. Scripts that depend on checking
for an empty question section can do that in the "dns_message" event.
- Add a new "dns_unknown_reply" event, for when Bro does not know how
to fully parse a particular resource record type. This helps fix a
problem in the default DNS scripts where the logic to complete
request-reply pair matching doesn't work because it's waiting on more
RR events to complete the reply. i.e. it expects ANCOUNT number of
dns_*_reply events and will wait until it gets that many before
completing a request-reply pair and logging it to dns.log. This could
cause bogus replies to match a previous request if they happen to
share a DNS transaction ID. (Jon Siwek)
- The previous method of matching queries with replies was still
unreliable in cases where the reply contains no answers. The new code
also takes extra measures to avoid pending state growing too large in
cases where the condition to match a query with a corresponding reply is
never met, but yet DNS messages continue to be exchanged over the same
connection 5-tuple (preventing cleanup of the pending state). (Jon Siwek)
* Updates to httpmonitor and mimestats documentation. (Jeannette Dopheide)
* Updates to Logs and Cluster documentation (Jeannette Dopheide)
2.2-147 | 2014-02-07 08:06:53 -0800
* Fix x509-extension test sometimes failing. (Bernhard Amann)
2.2-144 | 2014-02-06 20:31:18 -0800
* Fixing bug in POP3 analyzer. With certain input the analyzer could
end up trying to write to non-writable memory. (Robin Sommer)
2.2-140 | 2014-02-06 17:58:04 -0800
* Fixing memory leaks in input framework. (Robin Sommer)
* Add script to detect filtered TCP traces. Addresses BIT-1119. (Jon
Siwek)
2.2-137 | 2014-02-04 09:09:55 -0800
* Minor unified2 script documentation fix. (Jon Siwek)
2.2-135 | 2014-01-31 11:09:36 -0800
* Added some grammar and spelling corrections to Installation and
Quick Start Guide. (Jeannette Dopheide)
2.2-131 | 2014-01-30 16:11:11 -0800
* Extend file analysis API to allow file ID caching. This allows an
analyzer to either provide file IDs associated with some file
content or to cache a file ID that was already determined by
script-layer logic so that subsequent calls to the file analysis
interface can bypass costly detours through script-layer. This
can yield a decent performance improvement for analyzers that are
able to take advantage of it and deal with streaming content (like
HTTP, which has been adapted accordingly). (Jon Siwek)
2.2-128 | 2014-01-30 15:58:47 -0800
* Add leak test for Exec module. (Bernhard Amann)
* Fix file_over_new_connection event to trigger when entire file is
missed. (Jon Siwek)
* Improve TCP connection size reporting for half-open connections.
(Jon Siwek)
* Improve gap reporting in TCP connections that never see data. We
no longer accommodate SYN/FIN/RST-filtered traces by not reporting
missing data. The behavior can be reverted by redef'ing
"detect_filtered_trace". (Jon Siwek)
* Improve TCP FIN retransmission handling. (Jon Siwek)
2.2-120 | 2014-01-28 10:25:23 -0800
* Fix and extend x509_extension() event, which now actually returns
the extension. (Bernhard Amann)
New event signature:
event x509_extension(c: connection, is_orig: bool, cert: X509, extension: X509_extension_info)
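A handler sketch against this signature; the "name" field on
X509_extension_info is an assumption:

    event x509_extension(c: connection, is_orig: bool, cert: X509, extension: X509_extension_info)
        {
        # Assumes the extension record exposes a "name" field.
        print fmt("certificate extension: %s", extension$name);
        }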
2.2-117 | 2014-01-23 14:18:19 -0800
* Fixing initialization context in anonymous functions. (Robin Sommer)

CMakeLists.txt

@ -1,9 +1,8 @@
project(Bro C CXX)
# When changing the minimum version here, also adapt
# cmake/BroPluginDynamic and
# aux/bro-aux/plugin-support/skeleton/CMakeLists.txt
cmake_minimum_required(VERSION 2.8.0 FATAL_ERROR)
cmake_minimum_required(VERSION 2.6.3 FATAL_ERROR)
include(cmake/CommonCMakeConfig.cmake)
@ -22,19 +21,16 @@ get_filename_component(BRO_SCRIPT_INSTALL_PATH ${BRO_SCRIPT_INSTALL_PATH}
ABSOLUTE)
set(BRO_PLUGIN_INSTALL_PATH ${BRO_ROOT_DIR}/lib/bro/plugins CACHE STRING "Installation path for plugins" FORCE)
set(BRO_MAGIC_INSTALL_PATH ${BRO_ROOT_DIR}/share/bro/magic)
set(BRO_MAGIC_SOURCE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/magic/database)
configure_file(bro-path-dev.in ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev)
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.sh
"export BROPATH=`${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
"export BROMAGIC=\"${BRO_MAGIC_SOURCE_PATH}\"\n"
"export BRO_PLUGIN_PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src:${BRO_PLUGIN_INSTALL_PATH}\"\n"
"export PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.csh
"setenv BROPATH `${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
"setenv BROMAGIC \"${BRO_MAGIC_SOURCE_PATH}\"\n"
"setenv BRO_PLUGIN_PATH \"${CMAKE_CURRENT_BINARY_DIR}/src:${BRO_PLUGIN_INSTALL_PATH}\"\n"
"setenv PATH \"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
@ -48,32 +44,6 @@ set(VERSION_MAJ_MIN "${VERSION_MAJOR}.${VERSION_MINOR}")
########################################################################
## Dependency Configuration
include(ExternalProject)
# LOG_* options to ExternalProject_Add appear in CMake 2.8.3. If
# available, using them hides external project configure/build output.
if("${CMAKE_VERSION}" VERSION_GREATER 2.8.2)
set(EXTERNAL_PROJECT_LOG_OPTIONS
LOG_DOWNLOAD 1 LOG_UPDATE 1 LOG_CONFIGURE 1 LOG_BUILD 1 LOG_INSTALL 1)
else()
set(EXTERNAL_PROJECT_LOG_OPTIONS)
endif()
set(LIBMAGIC_PREFIX ${CMAKE_CURRENT_BINARY_DIR}/libmagic-prefix)
set(LIBMAGIC_INCLUDE_DIR ${LIBMAGIC_PREFIX}/include)
set(LIBMAGIC_LIB_DIR ${LIBMAGIC_PREFIX}/lib)
set(LIBMAGIC_LIBRARY ${LIBMAGIC_LIB_DIR}/libmagic.a)
ExternalProject_Add(libmagic
PREFIX ${LIBMAGIC_PREFIX}
URL ${CMAKE_CURRENT_SOURCE_DIR}/src/3rdparty/file-5.16.tar.gz
CONFIGURE_COMMAND ./configure --enable-static --disable-shared
--prefix=${LIBMAGIC_PREFIX}
--includedir=${LIBMAGIC_INCLUDE_DIR}
--libdir=${LIBMAGIC_LIB_DIR}
BUILD_IN_SOURCE 1
${EXTERNAL_PROJECT_LOG_OPTIONS}
)
include(FindRequiredPackage)
# Check cache value first to avoid displaying "Found sed" messages everytime
@ -100,6 +70,10 @@ if (NOT BinPAC_ROOT_DIR AND
endif ()
FindRequiredPackage(BinPAC)
if (ENABLE_JEMALLOC)
find_package(JeMalloc)
endif ()
if (MISSING_PREREQS)
foreach (prereq ${MISSING_PREREQ_DESCS})
message(SEND_ERROR ${prereq})
@ -112,8 +86,8 @@ include_directories(BEFORE
${OpenSSL_INCLUDE_DIR}
${BIND_INCLUDE_DIR}
${BinPAC_INCLUDE_DIR}
${LIBMAGIC_INCLUDE_DIR}
${ZLIB_INCLUDE_DIR}
${JEMALLOC_INCLUDE_DIR}
)
# Optional Dependencies
@ -153,33 +127,6 @@ if (GOOGLEPERFTOOLS_FOUND)
endif ()
endif ()
set(USE_DATASERIES false)
find_package(Lintel)
find_package(DataSeries)
find_package(LibXML2)
if (NOT DISABLE_DATASERIES AND
LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND)
set(USE_DATASERIES true)
include_directories(BEFORE ${Lintel_INCLUDE_DIR})
include_directories(BEFORE ${DataSeries_INCLUDE_DIR})
include_directories(BEFORE ${LibXML2_INCLUDE_DIR})
list(APPEND OPTLIBS ${Lintel_LIBRARIES})
list(APPEND OPTLIBS ${DataSeries_LIBRARIES})
list(APPEND OPTLIBS ${LibXML2_LIBRARIES})
endif()
set(USE_ELASTICSEARCH false)
set(USE_CURL false)
find_package(LibCURL)
if (NOT DISABLE_ELASTICSEARCH AND LIBCURL_FOUND)
set(USE_ELASTICSEARCH true)
set(USE_CURL true)
include_directories(BEFORE ${LibCURL_INCLUDE_DIR})
list(APPEND OPTLIBS ${LibCURL_LIBRARIES})
endif()
if (ENABLE_PERFTOOLS_DEBUG OR ENABLE_PERFTOOLS)
# Just a no op to prevent CMake from complaining about manually-specified
# ENABLE_PERFTOOLS_DEBUG or ENABLE_PERFTOOLS not being used if google
@ -191,8 +138,8 @@ set(brodeps
${PCAP_LIBRARY}
${OpenSSL_LIBRARIES}
${BIND_LIBRARY}
${LIBMAGIC_LIBRARY}
${ZLIB_LIBRARY}
${JEMALLOC_LIBRARIES}
${OPTLIBS}
)
@ -233,10 +180,6 @@ CheckOptionalBuildSources(aux/broctl Broctl INSTALL_BROCTL)
CheckOptionalBuildSources(aux/bro-aux Bro-Aux INSTALL_AUX_TOOLS)
CheckOptionalBuildSources(aux/broccoli Broccoli INSTALL_BROCCOLI)
install(DIRECTORY ./magic/database/
DESTINATION ${BRO_MAGIC_INSTALL_PATH}
)
########################################################################
## Packaging Setup
@ -281,10 +224,7 @@ message(
"\ngperftools found: ${HAVE_PERFTOOLS}"
"\n tcmalloc: ${USE_PERFTOOLS_TCMALLOC}"
"\n debugging: ${USE_PERFTOOLS_DEBUG}"
"\ncURL: ${USE_CURL}"
"\n"
"\nDataSeries: ${USE_DATASERIES}"
"\nElasticSearch: ${USE_ELASTICSEARCH}"
"\njemalloc: ${ENABLE_JEMALLOC}"
"\n"
"\n================================================================\n"
)

Makefile

@ -55,6 +55,8 @@ test:
test-all: test
test -d aux/broctl && ( cd aux/broctl && make test )
test -d aux/btest && ( cd aux/btest && make test )
test -d aux/bro-aux && ( cd aux/bro-aux && make test )
test -d aux/plugins && ( cd aux/plugins && make test-all )
configured:
@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )

NEWS

@ -4,18 +4,39 @@ release. For an exhaustive list of changes, see the ``CHANGES`` file
(note that submodules, such as BroControl and Broccoli, come with
their own ``CHANGES``.)
Bro 2.3
=======
[In progress]
Bro 2.4 (in progress)
=====================
Dependencies
------------
- Bro no longer requires a pre-installed libmagic (because it now
ships its own).
New Functionality
-----------------
- Compiling from source now needs a CMake version >= 2.8.0.
- Bro now has support for external plugins that can extend its core
functionality, like protocol/file analysis, via shared libraries.
Plugins can be developed and distributed externally, and will be
pulled in dynamically at startup. Currently, a plugin can provide
custom protocol analyzers, file analyzers, log writers[TODO], input
readers[TODO], packet sources[TODO], and new built-in functions. A
plugin can furthermore hook into Bro's processing at a number of
places to add custom logic.
See http://www.bro.org/sphinx-git/devel/plugins.html for more
information on writing plugins.
Changed Functionality
---------------------
- bro-cut has been rewritten in C, and is hence much faster.
Bro 2.3
=======
Dependencies
------------
- Libmagic is no longer a dependency.
New Functionality
-----------------
@ -25,6 +46,54 @@ New Functionality
parsing past the GRE header in between the delivery and payload IP
packets.
- The DNS analyzer now actually generates the dns_SRV_reply() event.
It had been documented before, yet was never raised.
- Bro now uses "file magic signatures" to identify file types. These
are defined via two new constructs in the signature rule parsing
grammar: "file-magic" gives a regular expression to match against,
and "file-mime" gives the MIME type string of content that matches
the magic and an optional strength value for the match. (See also
"Changed Functionality" below for changes due to switching from
using libmagic to such signatures.)
- A new built-in function, "file_magic", can be used to get all file
magic matches and their corresponding strength against a given chunk
of data.
- The SSL analyzer now supports heartbeats as well as a few
extensions, including server_name, alpn, and ec-curves.
- The SSL analyzer comes with a Heartbleed detector script in
protocols/ssl/heartbleed.bro; see the sketch after this list. Note
that loading this script changes the default value of
"SSL::disable_analyzer_after_detection" from true to false to
prevent encrypted heartbeats from being ignored.
- StartTLS is now supported for SMTP and POP3.
- The X509 analyzer can now perform OCSP validation.
- Bro now has analyzers for SNMP and Radius, which produce corresponding
snmp.log and radius.log output (as well as various events of course).
- BroControl has a new option "BroPort" which allows a user to specify
the starting port number for Bro.
- BroControl has a new option "StatsLogExpireInterval" which allows a
user to specify when entries in the stats.log file expire.
- BroControl has a new option "PFRINGClusterType" which allows a user
to specify a PF_RING cluster type.
- BroControl now supports PF_RING+DNA. There is also a new option
"PFRINGFirstAppInstance" that allows a user to specify the starting
application instance number for processes running on a DNA cluster.
See the BroControl documentation for more details.
- BroControl now warns a user to run "broctl install" if Bro has
been upgraded or if the broctl or node configuration has changed
since the most recent install.
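For the Heartbleed detector mentioned above, a one-line sketch of
loading the script:

    @load protocols/ssl/heartbleed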
Changed Functionality
---------------------
@ -36,6 +105,67 @@ Changed Functionality
- Notice::end_suppression() has been removed.
- Bro now parses X.509 extensions headers and, as a result, the
corresponding event got a new signature:
event x509_extension(c: connection, is_orig: bool, cert: X509, ext: X509_extension_info);
- In addition, there are several new, more specialized events for a
number of x509 extensions.
- Generally, all x509 events and handling functions have changed their
signatures.
- X509 certificate verification now returns the complete certificate
chain that was used for verification.
- Bro no longer special-cases SYN/FIN/RST-filtered traces by not
reporting missing data. Instead, if Bro never sees any data segments
for analyzed TCP connections, the new
base/misc/find-filtered-trace.bro script will log a warning in
reporter.log and to stderr. The old behavior can be reverted by
redef'ing "detect_filtered_trace".
- We have removed the packet sorter component.
- Bro no longer uses libmagic to identify file types but instead now
comes with its own signature library (which initially is still
derived from libmagic's database). This leads to a number of further
changes with regards to MIME types:
* The second parameter of the "identify_data" built-in function
can no longer be used to get verbose file type descriptions,
though it can still be used to get the strongest matching file
magic signature.
* The "file_transferred" event's "descr" parameter no longer
contains verbose file type descriptions.
* The BROMAGIC environment variable no longer changes any behavior
in Bro as magic databases are no longer used/installed.
* Removed "binary" and "octet-stream" mime type detections. They
don't provide any more information than an uninitialized
mime_type field.
* The "fa_file" record now contains a "mime_types" field that
contains all magic signatures that matched the file content
(where the "mime_type" field is just a shortcut for the
strongest match).
- dns_TXT_reply() now supports more than one string entry by receiving
a vector of strings.
- BroControl now runs the "exec" and "df" broctl commands only once
per host, instead of once per Bro node. The output of these
commands has been changed slightly to include both the host and
node names.
- Several performance improvements were made. Particular emphasis
was put on the File Analysis system, which generally will now emit
far fewer file handle request events due to protocol analyzers now
caching that information internally.
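For the filtered-trace change above, a minimal sketch of reverting to
the old behavior:

    # Accommodate SYN/FIN/RST-filtered traces again.
    redef detect_filtered_trace = T;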
Bro 2.2
=======

VERSION

@ -1 +1 @@
2.2-117
2.3-121

@ -1 +1 @@
Subproject commit 896ddedde55c48ec2163577fc258b49c418abb3e
Subproject commit 4e5969f5a40f5cc192a751375cb61131d32c0fc1

@ -1 +1 @@
Subproject commit 77234b4ba1c5ad0eda64554a7f16a0d79df9ca52
Subproject commit 181f084432e277f899140647d9b788059b3cccb1

@ -1 +1 @@
Subproject commit 17ec437752837fb4214abfb0a2da49df74668d5d
Subproject commit 6be54279bb7ecb5e03d8bcdc7660d323dc4de1bc

@ -1 +1 @@
Subproject commit 6e01d6972f02d68ee82d05f392d1a00725595b7f
Subproject commit f0e0efda05e4b20924efc1b826ad5d85c8b65f83

@ -1 +1 @@
Subproject commit 26c3136d56493017bc33c5a2f22ae393d585c2d9
Subproject commit 1efa4d10f943351efea96def68e598b053fd217a

aux/plugins Submodule

@ -0,0 +1 @@
Subproject commit 6de518922e5f89d52d831ea6fb6adb7fff94437e

cmake

@ -1 +1 @@
Subproject commit 67da63d43e734111b324d8ed045e188e0a28ebf2
Subproject commit aa15263ae39667e5e9bd73690b05aa4af9147ca3

configure

@ -32,14 +32,13 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--enable-perftools force use of Google perftools on non-Linux systems
(automatically on when perftools is present on Linux)
--enable-perftools-debug use Google's perftools for debugging
--enable-jemalloc link against jemalloc
--enable-ruby build ruby bindings for broccoli (deprecated)
--disable-broccoli don't build or install the Broccoli library
--disable-broctl don't install Broctl
--disable-auxtools don't build or install auxiliary tools
--disable-perftools don't try to build with Google Perftools
--disable-python don't try to build python bindings for broccoli
--disable-dataseries don't use the optional DataSeries log writer
--disable-elasticsearch don't use the optional ElasticSearch log writer
Required Packages in Non-Standard Locations:
--with-openssl=PATH path to OpenSSL install root
@ -49,11 +48,11 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-flex=PATH path to flex executable
--with-bison=PATH path to bison executable
--with-perl=PATH path to perl executable
--with-libmagic=PATH path to libmagic install root
Optional Packages in Non-Standard Locations:
--with-geoip=PATH path to the libGeoIP install root
--with-perftools=PATH path to Google Perftools install root
--with-jemalloc=PATH path to jemalloc install root
--with-python=PATH path to Python interpreter
--with-python-lib=PATH path to libpython
--with-python-inc=PATH path to Python headers
@ -61,10 +60,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-ruby-lib=PATH path to ruby library
--with-ruby-inc=PATH path to ruby headers
--with-swig=PATH path to SWIG executable
--with-dataseries=PATH path to DataSeries and Lintel libraries
--with-xml2=PATH path to libxml2 installation (for DataSeries)
--with-curl=PATH path to libcurl install root (for ElasticSearch)
--with-netmap=PATH path to netmap distribution
Packaging Options (for developers):
--binary-package toggle special logic for binary packaging
@ -106,6 +101,7 @@ append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
append_cache_entry ENABLE_DEBUG BOOL false
append_cache_entry ENABLE_PERFTOOLS BOOL false
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
append_cache_entry ENABLE_JEMALLOC BOOL false
append_cache_entry BinPAC_SKIP_INSTALL BOOL true
append_cache_entry BUILD_SHARED_LIBS BOOL true
append_cache_entry INSTALL_AUX_TOOLS BOOL true
@ -161,6 +157,9 @@ while [ $# -ne 0 ]; do
append_cache_entry ENABLE_PERFTOOLS BOOL true
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL true
;;
--enable-jemalloc)
append_cache_entry ENABLE_JEMALLOC BOOL true
;;
--disable-broccoli)
append_cache_entry INSTALL_BROCCOLI BOOL false
;;
@ -179,12 +178,6 @@ while [ $# -ne 0 ]; do
--enable-ruby)
append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
;;
--disable-dataseries)
append_cache_entry DISABLE_DATASERIES BOOL true
;;
--disable-elasticsearch)
append_cache_entry DISABLE_ELASTICSEARCH BOOL true
;;
--with-openssl=*)
append_cache_entry OpenSSL_ROOT_DIR PATH $optarg
;;
@ -206,15 +199,16 @@ while [ $# -ne 0 ]; do
--with-perl=*)
append_cache_entry PERL_EXECUTABLE PATH $optarg
;;
--with-libmagic=*)
append_cache_entry LibMagic_ROOT_DIR PATH $optarg
;;
--with-geoip=*)
append_cache_entry LibGeoIP_ROOT_DIR PATH $optarg
;;
--with-perftools=*)
append_cache_entry GooglePerftools_ROOT_DIR PATH $optarg
;;
--with-jemalloc=*)
append_cache_entry JEMALLOC_ROOT_DIR PATH $optarg
append_cache_entry ENABLE_JEMALLOC BOOL true
;;
--with-python=*)
append_cache_entry PYTHON_EXECUTABLE PATH $optarg
;;
@ -238,19 +232,6 @@ while [ $# -ne 0 ]; do
--with-swig=*)
append_cache_entry SWIG_EXECUTABLE PATH $optarg
;;
--with-dataseries=*)
append_cache_entry DataSeries_ROOT_DIR PATH $optarg
append_cache_entry Lintel_ROOT_DIR PATH $optarg
;;
--with-xml2=*)
append_cache_entry LibXML2_ROOT_DIR PATH $optarg
;;
--with-curl=*)
append_cache_entry LibCURL_ROOT_DIR PATH $optarg
;;
--with-netmap=*)
append_cache_entry NETMAP_ROOT_DIR PATH $optarg
;;
--binary-package)
append_cache_entry BINARY_PACKAGING_MODE BOOL true
;;


@ -14,8 +14,6 @@ if (NOT ${retval} EQUAL 0)
message(FATAL_ERROR "Problem setting BROPATH")
endif ()
set(BROMAGIC ${BRO_MAGIC_SOURCE_PATH})
# Configure the Sphinx config file (expand variables CMake might know about).
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in
${CMAKE_CURRENT_BINARY_DIR}/conf.py
@ -34,7 +32,6 @@ add_custom_target(sphinxdoc
${CMAKE_CURRENT_SOURCE_DIR}/ ${SPHINX_INPUT_DIR}
# Use Bro/Broxygen to dynamically generate reST for all Bro scripts.
COMMAND BROPATH=${BROPATH}
BROMAGIC=${BROMAGIC}
${CMAKE_BINARY_DIR}/src/bro
-X ${CMAKE_CURRENT_BINARY_DIR}/broxygen.conf
broxygen >/dev/null


@ -10,7 +10,7 @@ common/general documentation, style sheets, JavaScript, etc. The Sphinx
config file is produced from ``conf.py.in``, and can be edited to change
various Sphinx options.
There is also a custom Sphinx domain implemented in ``source/ext/bro.py``
There is also a custom Sphinx domain implemented in ``ext/bro.py``
which adds some reST directives and roles that aid in generating useful
index entries and cross-references. Other extensions can be added in
a similar fashion.
@ -19,7 +19,8 @@ The ``make doc`` target in the top-level Makefile can be used to locally
render the reST files into HTML. That target depends on:
* Python interpreter >= 2.5
* `Sphinx <http://sphinx.pocoo.org/>`_ >= 1.0.1
* `Sphinx <http://sphinx-doc.org/>`_ >= 1.0.1
* Doxygen (required only for building the Broccoli API doc)
After completion, HTML documentation is symlinked in ``build/html``.


@ -15,19 +15,19 @@ conditions specific to your particular case.
In the following sections, we present a few examples of common uses of
Bro as an IDS.
------------------------------------------------
Detecting an FTP Bruteforce attack and notifying
------------------------------------------------
-------------------------------------------------
Detecting an FTP Brute-force Attack and Notifying
-------------------------------------------------
For the purpose of this exercise, we define FTP bruteforcing as too many
For the purpose of this exercise, we define FTP brute-forcing as too many
rejected usernames and passwords occurring from a single address. We
start by defining a threshold for the number of attempts and a
monitoring interval in minutes as well as a new notice type.
start by defining a threshold for the number of attempts, a monitoring
interval (in minutes), and a new notice type.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
:lines: 9-25
Now, using the ftp_reply event, we check for error codes from the `500
Using the ftp_reply event, we check for error codes from the `500
series <http://en.wikipedia.org/wiki/List_of_FTP_server_return_codes>`_
for the "USER" and "PASS" commands, representing rejected usernames or
passwords. For this, we can use the :bro:see:`FTP::parse_ftp_reply_code`
@ -38,9 +38,9 @@ function to break down the reply code and check if the first digit is a
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
:lines: 52-60
Next, we use the SumStats framework to raise a notice of the attack of
the attack when the number of failed attempts exceeds the specified
threshold during the measuring interval.
Next, we use the SumStats framework to raise a notice of the attack when
the number of failed attempts exceeds the specified threshold during the
measuring interval.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
:lines: 28-50
@ -56,14 +56,14 @@ Below is the final code for our script.
As a final note, the :doc:`detect-bruteforcing.bro
</scripts/policy/protocols/ftp/detect-bruteforcing.bro>` script above is
include with Bro out of the box, so you only need to load it at startup
to instruct Bro to detect and notify of FTP bruteforce attacks.
included with Bro out of the box. Use this feature by loading this script
during startup.
-------------
Other Attacks
-------------
Detecting SQL Injection attacks
Detecting SQL Injection Attacks
-------------------------------
Checking files against known malware hashes
@ -76,5 +76,4 @@ list of known malware hashes. Bro simplifies this task by offering a
:doc:`detect-MHR.bro </scripts/policy/frameworks/files/detect-MHR.bro>`
script that creates and compares hashes against the `Malware Hash
Registry <https://www.team-cymru.org/Services/MHR/>`_ maintained by Team
Cymru. You only need to load this script along with your other scripts
at startup time.
Cymru. Use this feature by loading this script during startup.


@ -1,12 +1,19 @@
========================
Setting up a Bro Cluster
Bro Cluster Architecture
========================
Intro
------
Bro is not multithreaded, so once the limitations of a single processor core are reached, the only option currently is to spread the workload across many cores or even many physical computers. The cluster deployment scenario for Bro is the current solution to build these larger systems. The accompanying tools and scripts provide the structure to easily manage many Bro processes examining packets and doing correlation activities but acting as a singular, cohesive entity.
Bro is not multithreaded, so once the limitations of a single processor core
are reached the only option currently is to spread the workload across many
cores, or even many physical computers. The cluster deployment scenario for
Bro is the current solution to build these larger systems. The tools and
scripts that accompany Bro provide the structure to easily manage many Bro
processes examining packets and doing correlation activities but acting as
a singular, cohesive entity. This document describes the Bro cluster
architecture. For information on how to configure a Bro cluster,
see the documentation for
:doc:`BroControl <../components/broctl/README>`.
Architecture
---------------
@ -17,42 +24,97 @@ The figure below illustrates the main components of a Bro cluster.
Tap
***
This is a mechanism that splits the packet stream in order to make a copy
available for inspection. Examples include the monitoring port on a switch and
an optical splitter for fiber networks.
The tap is a mechanism that splits the packet stream in order to make a copy
available for inspection. Examples include the monitoring port on a switch
and an optical splitter on fiber networks.
Frontend
********
This is a discrete hardware device or on-host technique that will split your traffic into many streams or flows. The Bro binary does not do this job. There are numerous ways to accomplish this task, some of which are described below in `Frontend Options`_.
The frontend is a discrete hardware device or on-host technique that splits
traffic into many streams or flows. The Bro binary does not do this job.
There are numerous ways to accomplish this task, some of which are described
below in `Frontend Options`_.
Manager
*******
This is a Bro process which has two primary jobs. It receives log messages and notices from the rest of the nodes in the cluster using the Bro communications protocol. The result is that you will end up with single logs for each log instead of many discrete logs that you have to later combine in some manner with post processing. The manager also takes the opportunity to de-duplicate notices and it has the ability to do so since it's acting as the choke point for notices and how notices might be processed into actions such as emailing, paging, or blocking.
The manager is a Bro process that has two primary jobs. It receives log
messages and notices from the rest of the nodes in the cluster using the Bro
communications protocol. The result is a single log instead of many
discrete logs that you have to combine in some manner with post-processing.
The manager also takes the opportunity to de-duplicate notices, and it has the
ability to do so since it's acting as the choke point for notices and how
notices might be processed into actions (e.g., emailing, paging, or blocking).
The manager process is started first by BroControl and it only opens its designated port and waits for connections, it doesn't initiate any connections to the rest of the cluster. Once the workers are started and connect to the manager, logs and notices will start arriving to the manager process from the workers.
The manager process is started first by BroControl and it only opens its
designated port and waits for connections, it doesn't initiate any
connections to the rest of the cluster. Once the workers are started and
connect to the manager, logs and notices will start arriving to the manager
process from the workers.
Proxy
*****
This is a Bro process which manages synchronized state. Variables can be synchronized across connected Bro processes automatically in Bro and proxies will help the workers by alleviating the need for all of the workers to connect directly to each other.
The proxy is a Bro process that manages synchronized state. Variables can
be synchronized across connected Bro processes automatically. Proxies help
the workers by alleviating the need for all of the workers to connect
directly to each other.
Examples of synchronized state from the scripts that ship with Bro are things such as the full list of "known" hosts and services which are hosts or services which have been detected as performing full TCP handshakes or an analyzed protocol has been found on the connection. If worker A detects host 1.2.3.4 as an active host, it would be beneficial for worker B to know that as well so worker A shares that information as an insertion to a set <link to set documentation would be good here> which travels to the cluster's proxy and the proxy then sends that same set insertion to worker B. The result is that worker A and worker B have shared knowledge about host and services that are active on the network being monitored.
Examples of synchronized state from the scripts that ship with Bro include
the full list of "known" hosts and services (which are hosts or services
identified as performing full TCP handshakes) or an analyzed protocol has been
found on the connection. If worker A detects host 1.2.3.4 as an active host,
it would be beneficial for worker B to know that as well. So worker A shares
that information as an insertion to a set which travels to the cluster's
proxy and the proxy sends that same set insertion to worker B. The result
is that worker A and worker B have shared knowledge about host and services
that are active on the network being monitored.
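A minimal sketch of how such shared state looks in a script; the set
name is illustrative::

    # Inserts into (and deletes from) this set propagate automatically
    # to all connected peers via the proxies.
    global known_active_hosts: set[addr] &synchronized;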
The proxy model extends to having multiple proxies as well if necessary for performance reasons, it only adds one additional step for the Bro processes. Each proxy connects to another proxy in a ring and the workers are shared between them as evenly as possible. When a proxy receives some new bit of state, it will share that with its proxy which is then shared around the ring of proxies and down to all of the workers. From a practical standpoint, there are no rules of thumb established yet for the number of proxies necessary for the number of workers they are serving. Best is to start with a single proxy and add more if communication performance problems are found.
The proxy model extends to having multiple proxies when necessary for
performance reasons. It only adds one additional step for the Bro processes.
Each proxy connects to another proxy in a ring and the workers are shared
between them as evenly as possible. When a proxy receives some new bit of
state, it will share that with its neighboring proxy, which then shares
it around the ring of proxies and down to all of the workers. From a
practical standpoint,
there are no rules of thumb established for the number of proxies
necessary for the number of workers they are serving. It is best to start
with a single proxy and add more if communication performance problems are
found.
Bro processes acting as proxies don't tend to be extremely hard on CPU
or memory and users frequently run proxy processes on the same physical
host as the manager.
Worker
******
The worker is the Bro process that sniffs network traffic and does protocol
analysis on the reassembled traffic streams. Most of the work of an active
cluster takes place on the workers and as such, the workers typically
represent the bulk of the Bro processes that are running in a cluster.
The fastest memory and CPU core speed you can afford is recommended
since all of the protocol parsing and most analysis will take place here.
There are no particular requirements for the disks in workers since almost all
logging is done remotely to the manager, and normally very little is written
to disk.
The rule of thumb we have followed recently is to allocate approximately 1
core for every 80Mbps of traffic that is being analyzed. However, this
estimate could be extremely traffic mix-specific. It has generally worked
for mixed traffic with many users and servers. For example, if your traffic
peaks around 2Gbps (combined) and you want to handle traffic at peak load,
you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps
estimate works for your traffic, this could be handled by 3 physical hosts
dedicated to being workers with each one containing dual 6-core processors.
Once a flow-based load balancer is put into place this model is extremely
easy to scale. It is recommended that you estimate the amount of
hardware you will need to fully analyze your traffic. If more is needed it's
relatively easy to increase the size of the cluster in most cases.
Frontend Options
----------------
There are many options for setting up a frontend flow distributor. In many
cases it is beneficial to do multiple stages of flow distribution
on the network and on the host.
Discrete hardware flow balancers
********************************
cPacket
^^^^^^^
If you are monitoring one or more 10G physical interfaces, the recommended
solution is to use either a cFlow or cVu device from cPacket because they
are used successfully at a number of sites. These devices will perform
layer-2 load balancing by rewriting the destination Ethernet MAC address
to cause each packet associated with a particular flow to have the same
destination MAC. The packets can then be passed directly to a monitoring
host where each worker has a BPF filter to limit its visibility to only that
stream of flows, or onward to a commodity switch to split the traffic out to
multiple 1G interfaces for the workers. This greatly reduces
costs since workers can use relatively inexpensive 1G interfaces.
OpenFlow Switches
^^^^^^^^^^^^^^^^^
We are currently exploring the use of OpenFlow based switches to do flow-based
load balancing directly on the switch, which greatly reduces frontend
costs for many users. This document will be updated when we have more
information.
On host flow balancing
**********************
PF_RING
^^^^^^^
The PF_RING software for Linux has a "clustering" feature which will do
flow-based load balancing across a number of processes that are sniffing the
same interface. This allows you to easily take advantage of multiple
cores in a single physical host because Bro's main event loop is single
threaded and can't natively utilize all of the cores. If you want to use
PF_RING, see the documentation on `how to configure Bro with PF_RING
<http://bro.org/documentation/load-balancing.html>`_.
Netmap
^^^^^^
FreeBSD has an in-progress project named Netmap which will enable flow-based
load balancing as well. When it becomes viable for real world use, this
document will be updated.
Click! Software Router
^^^^^^^^^^^^^^^^^^^^^^
Click! can be used for flow based load balancing with a simple configuration.
This solution is not recommended on
Linux due to Bro's PF_RING support and only as a last resort on other
operating systems since it causes a lot of overhead due to context switching
back and forth between kernel and userland several times per packet.
# ----- Begin of BTest configuration. -----
btest = os.path.abspath("@CMAKE_SOURCE_DIR@/aux/btest")
brocut = os.path.abspath("@CMAKE_SOURCE_DIR@/aux/bro-aux/bro-cut")
brocut = os.path.abspath("@CMAKE_SOURCE_DIR@/build/aux/bro-aux/bro-cut")
bro = os.path.abspath("@CMAKE_SOURCE_DIR@/build/src")
os.environ["PATH"] += (":%s:%s/sphinx:%s:%s" % (btest, btest, bro, brocut))
bro_binary = os.path.abspath("@CMAKE_SOURCE_DIR@/build/src/bro")
broxygen_cache="@BROXYGEN_CACHE_DIR@"
os.environ["BROPATH"] = "@BROPATH@"
os.environ["BROMAGIC"] = "@BROMAGIC@"
# ----- End of Broxygen configuration. -----
# -- General configuration -----------------------------------------------------
doc/configuration/index.rst
.. _configuration:
=====================
Cluster Configuration
=====================
.. contents::
A *Bro Cluster* is a set of systems jointly analyzing the traffic of
a network link in a coordinated fashion. You can operate such a setup from
a central manager system easily using BroControl because BroControl
hides much of the complexity of the multi-machine installation.
This section gives examples of how to set up common cluster configurations
using BroControl. For a full reference on BroControl, see the
:doc:`BroControl <../components/broctl/README>` documentation.
Preparing to Set Up a Cluster
=============================
In this document we refer to the user account used to set up the cluster
as the "Bro user". When setting up a cluster the Bro user must be set up
on all hosts, and this user must have ssh access from the manager to all
machines in the cluster, and it must work without being prompted for a
password/passphrase (for example, using ssh public key authentication).
Also, on the worker nodes this user must have access to the target
network interface in promiscuous mode.
Additional storage must be available on all hosts under the same path,
which we will call the cluster's prefix path. We refer to this directory
as ``<prefix>``. If you build Bro from source, then ``<prefix>`` is
the directory specified with the ``--prefix`` configure option,
or ``/usr/local/bro`` by default. The Bro user must be able to either
create this directory or, where it already exists, must have write
permission inside this directory on all hosts.
When trying to decide how to configure the Bro nodes, keep in mind that
there can be multiple Bro instances running on the same host. For example,
it's possible to run a proxy and the manager on the same host. However, it is
recommended to run workers on a different machine than the manager because
workers can consume a lot of CPU resources. The maximum recommended
number of workers to run on a machine should be one or two less than
the number of CPU cores available on that machine. Using a load-balancing
method (such as PF_RING) along with CPU pinning can decrease the load on
the worker machines.
Basic Cluster Configuration
===========================
With all prerequisites in place, perform the following steps to set up
a Bro cluster (do this as the Bro user on the manager host only):
- Edit the BroControl configuration file, ``<prefix>/etc/broctl.cfg``,
and change the value of any BroControl options to be more suitable for
your environment. You will most likely want to change the value of
the ``MailTo`` and ``LogRotationInterval`` options. A complete
reference of all BroControl options can be found in the
:doc:`BroControl <../components/broctl/README>` documentation.
- Edit the BroControl node configuration file, ``<prefix>/etc/node.cfg``
to define where manager, proxies, and workers are to run. For a cluster
configuration, you must comment out (or remove) the standalone node
in that file, and either uncomment or add node entries for each node
in your cluster (manager, proxy, and workers). For example, if you wanted
to run four Bro nodes (two workers, one proxy, and a manager) on a cluster
consisting of three machines, your cluster configuration would look like
this::
[manager]
type=manager
host=10.0.0.10
[proxy-1]
type=proxy
host=10.0.0.10
[worker-1]
type=worker
host=10.0.0.11
interface=eth0
[worker-2]
type=worker
host=10.0.0.12
interface=eth0
For a complete reference of all options that are allowed in the ``node.cfg``
file, see the :doc:`BroControl <../components/broctl/README>` documentation.
- Edit the network configuration file ``<prefix>/etc/networks.cfg``. This
file lists all of the networks which the cluster should consider as local
to the monitored environment.
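  For example, the file might contain entries like these (the networks
  shown are purely illustrative)::

      10.0.0.0/8          Private IP space
      192.168.0.0/16      Private IP space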
- Install workers and proxies using BroControl::
> broctl install
- Some tasks need to be run on a regular basis. On the manager node,
insert a line like this into the crontab of the user running the
cluster::
0-59/5 * * * * <prefix>/bin/broctl cron
(Note: if you are editing the system crontab instead of a user's own
crontab, then you need to also specify the user which the command
will be run as. The username must be placed after the time fields
and before the broctl command.)
Note that on some systems (FreeBSD in particular), the default PATH
for cron jobs does not include the directories where bash and python
are installed (the symptoms of this problem would be that "broctl cron"
works when run directly by the user, but does not work from a cron job).
To solve this problem, you would either need to create symlinks
to bash and python in a directory that is in the default PATH for
cron jobs, or specify a new PATH in the crontab.
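For example, a PATH line at the top of the crontab might look like this
(the directories shown are illustrative; adjust them to your system)::

    PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin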
PF_RING Cluster Configuration
=============================
`PF_RING <http://www.ntop.org/products/pf_ring/>`_ allows speeding up the
packet capture process by installing a new type of socket in Linux systems.
It supports 10Gbit hardware packet filtering using standard network adapters,
and user-space DNA (Direct NIC Access) for fast packet capture/transmission.
Installing PF_RING
^^^^^^^^^^^^^^^^^^
1. Download and install PF_RING for your system following the instructions
`here <http://www.ntop.org/get-started/download/#PF_RING>`_. The following
commands will install the PF_RING libraries and kernel module (replace
the version number 5.6.2 in this example with the version that you
downloaded)::
cd /usr/src
tar xvzf PF_RING-5.6.2.tar.gz
cd PF_RING-5.6.2/userland/lib
./configure --prefix=/opt/pfring
make install
cd ../libpcap
./configure --prefix=/opt/pfring
make install
cd ../tcpdump-4.1.1
./configure --prefix=/opt/pfring
make install
cd ../../kernel
make install
modprobe pf_ring enable_tx_capture=0 min_num_slots=32768
Refer to the documentation for your Linux distribution on how to load the
pf_ring module at boot time. You will need to install the PF_RING
library files and kernel module on all of the workers in your cluster.
2. Download the Bro source code.
3. Configure and install Bro using the following commands::
./configure --with-pcap=/opt/pfring
make
make install
4. Make sure Bro is correctly linked to the PF_RING libpcap libraries::
ldd /usr/local/bro/bin/bro | grep pcap
libpcap.so.1 => /opt/pfring/lib/libpcap.so.1 (0x00007fa6d7d24000)
5. Configure BroControl to use PF_RING (explained below).
6. Run "broctl install" on the manager. This command will install Bro and
all required scripts to the other machines in your cluster.
Using PF_RING
^^^^^^^^^^^^^
In order to use PF_RING, you need to specify the correct configuration
options for your worker nodes in BroControl's node configuration file.
Edit the ``node.cfg`` file and specify ``lb_method=pf_ring`` for each of
your worker nodes. Next, use the ``lb_procs`` node option to specify how
many Bro processes you'd like that worker node to run, and optionally pin
those processes to certain CPU cores with the ``pin_cpus`` option (CPU
numbering starts at zero). The correct ``pin_cpus`` setting to use is
dependent on your CPU architecture (Intel and AMD systems enumerate
processors in different ways). Using the wrong ``pin_cpus`` setting
can cause poor performance. Here is what a worker node entry should
look like when using PF_RING and CPU pinning::
[worker-1]
type=worker
host=10.0.0.50
interface=eth0
lb_method=pf_ring
lb_procs=10
pin_cpus=2,3,4,5,6,7,8,9,10,11
Using PF_RING+DNA with symmetric RSS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You must have a PF_RING+DNA license in order to do this. You can sniff
each packet only once.
1. Load the DNA NIC driver (e.g., ixgbe) on each worker host.
2. Run "ethtool -L dna0 combined 10" (this will establish 10 RSS queues
on your NIC) on each worker host. Make sure to set the number of RSS
queues to the same value that you specify for the lb_procs option in
the node.cfg file.
3. On the manager, configure your worker(s) in node.cfg::
[worker-1]
type=worker
host=10.0.0.50
interface=dna0
lb_method=pf_ring
lb_procs=10
Using PF_RING+DNA with pfdnacluster_master
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You must have a PF_RING+DNA license and a libzero license in order to do
this. You can load balance between multiple applications and sniff the
same packets multiple times with different tools.
1. Load the DNA NIC driver (e.g., ixgbe) on each worker host.
2. Run "ethtool -L dna0 1" (this will establish 1 RSS queues on your NIC)
on each worker host.
3. Run the pfdnacluster_master command on each worker host. For example::
pfdnacluster_master -c 21 -i dna0 -n 10
Make sure that your cluster ID (21 in this example) matches the interface
name you specify in the node.cfg file. Also make sure that the number
of processes you're balancing across (10 in this example) matches
the lb_procs option in the node.cfg file.
4. If you are load balancing to other processes, you can use the
pfringfirstappinstance variable in broctl.cfg to set the first
application instance that Bro should use. For example, if you are running
pfdnacluster_master with "-n 10,4" you would set
pfringfirstappinstance=4. Unfortunately that's still a global setting
in broctl.cfg at the moment but we may change that to something you can
set in node.cfg eventually.
5. On the manager, configure your worker(s) in node.cfg::
[worker-1]
type=worker
host=10.0.0.50
interface=dnacluster:21
lb_method=pf_ring
lb_procs=10
Plugins can add the following functionality to Bro:
- Builtin functions/events/types for the scripting language.
- Protocol analyzers.
- File analyzers.
- Packet sources and packet dumpers. TODO: Not yet.
A plugin's functionality is available to the user just as if Bro had
the corresponding code built-in. Indeed, internally many of Bro's
pieces are structured as plugins as well; they are just statically
compiled into the binary rather than loaded dynamically at runtime.
Quick Start
===========
Writing a basic plugin is quite straightforward as long as one
follows a few conventions. In the following we walk through a simple
example plugin that adds a new built-in function (bif) to Bro: we'll
add ``rot13(s: string) : string``, a function that rotates every
character in a string by 13 places.
Generally, a plugin comes in the form of a directory following a
certain structure. To get started, Bro's distribution provides a
helper script ``aux/bro-aux/plugin-support/init-plugin`` that creates
a skeleton plugin that can then be customized. Let's use that::
# mkdir rot13-plugin
# cd rot13-plugin
for the plugin itself. Bro uses the combination of the two to identify
a plugin. The namespace serves to avoid naming conflicts between
plugins written by independent developers; pick, e.g., the name of
your organisation. The namespace ``Bro`` is reserved for functionality
distributed by the Bro Project. In our example, the plugin will be
called ``Demo::Rot13``.
The ``init-plugin`` script puts a number of files in place. The full
layout is described later. For now, all we need is
``scripts/functions.bif``. We add our new bif there as follows::
# cat scripts/functions.bif
module CaesarCipher;
function rot13%(s: string%) : string
%{
char* rot13 = copy_string(s->CheckString());
for ( char* p = rot13; *p; p++ )
{
char b = islower(*p) ? 'a' : 'A';
*p = (*p - b + 13) % 26 + b;
}
return new StringVal(new BroString(1, rot13, strlen(rot13)));
%}
The syntax of this file is just like any other ``*.bif`` file.
``lib/``
Directory with the plugin's compiled shared library; Bro will
load this in dynamically at run-time if OS and architecture match
the current platform.
``scripts/``
A directory with the plugin's custom Bro scripts. When the plugin
gets activated, this directory will be automatically added to
``BROPATH``, so that its scripts are found.

``scripts/__load__.bro``
A Bro script that will be loaded immediately when the plugin gets
activated. See below for more information on activating plugins.
``lib/bif/``
Directory with auto-generated Bro scripts that declare the plugin's
bif elements. The files here are produced by ``bifcl``.
By convention, a plugin should put its custom scripts into subfolders
of ``scripts/``, i.e., ``scripts/<script-namespace>/<script>.bro`` to
avoid conflicts. As usual, you can then put a ``__load__.bro`` in
there as well.
``init-plugin`` puts a basic plugin structure in place that follows
the above layout and augments it with a CMake build and installation
system. Plugins with this structure can be used both directly out of
their source directory (after ``make`` and setting Bro's
``BRO_PLUGIN_PATH``), and when installed alongside Bro (after ``make
install``).
``make install`` copies over the ``lib`` and ``scripts`` directories,
as well as the ``__bro_plugin__`` magic file and the ``README`` (which
you should customize). One can add further CMake ``install`` rules to
install additional files if needed.
``init-plugin`` will never overwrite existing files, so it's safe to
rerun in an existing plugin directory; it only puts files in place that
don't exist yet. That also provides a convenient way to revert a file
back to what ``init-plugin`` created originally: just delete it and
rerun.
Activating a Plugin
===================
By default, Bro will automatically activate all dynamic plugins found
in its search path ``BRO_PLUGIN_PATH``. However, in bare mode (``bro
-b``), no dynamic plugins will be activated by default; instead the
user can selectively enable individual plugins in scriptland using the
``@load-plugin <qualified-plugin-name>`` directive (e.g.,
``@load-plugin Demo::Rot13``). Alternatively, one can activate a
plugin from the command-line by specifying its full name
(``Demo::Rot13``), or set the environment variable
``BRO_PLUGIN_ACTIVATE`` to a list of comma(!)-separated names of
plugins to unconditionally activate, even in bare mode.
``bro -N`` shows activated plugins separately from found but not yet
activated plugins. Note that plugins compiled statically into Bro are
always activated, and hence show up as such even in bare mode.
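For example, assuming the ``Demo::Rot13`` plugin from above, either of
the following would activate it in bare mode (``rot13-test.bro`` is a
hypothetical script of yours)::

    # bro -b Demo::Rot13 rot13-test.bro
    # BRO_PLUGIN_ACTIVATE=Demo::Rot13 bro -b rot13-test.bro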
Plugin Component
================
A plugin can provide more than one type of component at a time, e.g.,
multiple protocol analyzers at once; or both a logging backend and
input reader at the same time.
of functionality (a *component*) through a plugin. We'll focus on
their interfaces to the plugin system, rather than specifics on
writing the corresponding logic (usually the best way to get going on
that is to start with an existing plugin providing a corresponding
component and adapt that). We'll also point out how the CMake
infrastructure put in place by the ``init-plugin`` helper script ties
the various pieces together.
Bro Scripts
-----------
Logging Writer
--------------
Not yet available as plugins.
Input Reader
------------
Not yet available as plugins.
Packet Sources
--------------
Not yet available as plugins.
Packet Dumpers
--------------
Not yet available as plugins.
Hooks
=====
TODO.
Testing Plugins
===============
A plugin should come with a test suite to exercise its functionality.
The ``init-plugin`` script puts in place a basic BTest setup
to start with. Initially, it comes with a single test that just checks
that Bro loads the plugin correctly. It won't have a baseline yet, so
let's get that in place::
# cd tests
# btest -d
[ 0%] plugin.loading ... failed
% 'btest-diff output' failed unexpectedly (exit code 100)
% cat .diag
== File ===============================
Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1.0)
[Function] CaesarCipher::rot13
== Error ===============================
test-diff: no baseline found.
=======================================
# btest -U
all 1 tests successful
# cd ..
# make test
make -C tests
make[1]: Entering directory `tests'
all 1 tests successful
make[1]: Leaving directory `tests'
Now let's add a custom test that ensures that our bif works
correctly::
# cd tests
# cat >plugin/rot13.bro
# @TEST-EXEC: bro %INPUT >output
# @TEST-EXEC: btest-diff output
event bro_init()
{
print CaesarCipher::rot13("Hello");
}
Check the output::
# btest -d plugin/rot13.bro
[ 0%] plugin.rot13 ... failed
% 'btest-diff output' failed unexpectedly (exit code 100)
% cat .diag
== File ===============================
Uryyb
== Error ===============================
test-diff: no baseline found.
=======================================
% cat .stderr
1 of 1 test failed
Install the baseline::
# btest -U plugin/rot13.bro
all 1 tests successful
Run the test-suite::
# btest
all 2 tests successful
Debugging Plugins
=================
Plugins can use Bro's standard debug logger by using the
``PLUGIN_DBG_LOG(<plugin>, <args>)`` macro (defined in
``DebugLogger.h``), where ``<plugin>`` is the ``Plugin`` instance and
``<args>`` are printf-style arguments, just as with Bro's standard
debugging macros.
At runtime, one then activates a plugin's debugging output with ``-B
plugin-<name>``, where ``<name>`` is the name of the plugin as
returned by its ``Configure()`` method, yet with the
namespace-separator ``::`` replaced with a simple dash. Example: If
the plugin is called ``Bro::Demo``, use ``-B plugin-Bro-Demo``. As
usual, the debugging output will be recorded to ``debug.log`` if Bro's
compiled in debug mode.
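As a hypothetical sketch, a plugin's C++ code could emit debug output
like this (the method and message are ours; only the macro comes from
``DebugLogger.h``)::

    #include "DebugLogger.h"

    void Plugin::ProcessChunk(int len)
        {
        // Only recorded when Bro was compiled in debug mode and runs
        // with the matching -B plugin-<name> flag.
        PLUGIN_DBG_LOG(*this, "processing %d bytes", len);
        }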
Documenting Plugins
===================
=============================
Binary Output with DataSeries
=============================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for storing and searching large volumes of data. As an
alternative, Bro comes with experimental support for `DataSeries
<http://www.hpl.hp.com/techreports/2009/HPL-2009-323.html>`_
output, an efficient binary format for recording structured bulk
data. DataSeries is developed and maintained at HP Labs.
.. contents::
Installing DataSeries
---------------------
To use DataSeries, its libraries must be available at compile-time,
along with the supporting *Lintel* package. Generally, both are
distributed on `HP Labs' web site
<http://tesla.hpl.hp.com/opensource/>`_. Currently, however, you need
to use recent development versions for both packages, which you can
download from github like this::
git clone http://github.com/dataseries/Lintel
git clone http://github.com/dataseries/DataSeries
To build and install the two into ``<prefix>``, do::
( cd Lintel && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=<prefix> .. && make && make install )
( cd DataSeries && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=<prefix> .. && make && make install )
Please refer to the packages' documentation for more information about
the installation process. In particular, there's more information on
required and optional `dependencies for Lintel
<https://raw.github.com/dataseries/Lintel/master/doc/dependencies.txt>`_
and `dependencies for DataSeries
<https://raw.github.com/dataseries/DataSeries/master/doc/dependencies.txt>`_.
For users on RedHat-style systems, you'll need the following::
yum install libxml2-devel boost-devel
Compiling Bro with DataSeries Support
-------------------------------------
Once you have installed DataSeries, Bro's ``configure`` should pick it
up automatically as long as it finds it in a standard system location.
Alternatively, you can specify the DataSeries installation prefix
manually with ``--with-dataseries=<prefix>``. Keep an eye on
``configure``'s summary output; if it looks like the following, Bro
found DataSeries and will compile in the support::
# ./configure --with-dataseries=/usr/local
[...]
====================| Bro Build Summary |=====================
[...]
DataSeries: true
[...]
================================================================
Activating DataSeries
---------------------
The direct way to use DataSeries is to switch *all* log files over to
the binary format. To do that, just add ``redef
Log::default_writer=Log::WRITER_DATASERIES;`` to your ``local.bro``.
For testing, you can also just pass that on the command line::
bro -r trace.pcap Log::default_writer=Log::WRITER_DATASERIES
With that, Bro will now write all its output into DataSeries files
``*.ds``. You can inspect these using DataSeries's set of command line
tools, which its installation process installs into ``<prefix>/bin``.
For example, to convert a file back into an ASCII representation::
$ ds2txt conn.ds
[... We skip a bunch of metadata here ...]
ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes
1300475167.096535 CRCC5OdDlXe 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0
1300475167.097012 o7XBsfvo3U1 fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0
1300475167.099816 pXPi1kPMgxb 141.142.220.50 5353 224.0.0.251 5353 udp 0.000000 0 0 S0 F 0 D 1 179 0 0
1300475168.853899 R7sOc16woCj 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 38 89 SF F 0 Dd 1 66 1 117
1300475168.854378 Z6dfHVmt0X7 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 52 99 SF F 0 Dd 1 80 1 127
1300475168.854837 k6T92WxgNAh 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF F 0 Dd 1 66 1 211
[...]
(``--skip-all`` suppresses the metadata.)
Note that the ASCII conversion is *not* equivalent to Bro's default
output format.
You can also switch only individual files over to DataSeries by adding
code like this to your ``local.bro``:
.. code:: bro
event bro_init()
{
local f = Log::get_filter(Conn::LOG, "default"); # Get default filter for connection log.
f$writer = Log::WRITER_DATASERIES; # Change writer type.
Log::add_filter(Conn::LOG, f); # Replace filter with adapted version.
}
Bro's DataSeries writer comes with a few tuning options, see
:doc:`/scripts/base/frameworks/logging/writers/dataseries.bro`.
Working with DataSeries
-----------------------
Here are a few examples of using DataSeries command line tools to work
with the output files.
* Printing CSV::
$ ds2txt --csv conn.ds
ts,uid,id.orig_h,id.orig_p,id.resp_h,id.resp_p,proto,service,duration,orig_bytes,resp_bytes,conn_state,local_orig,missed_bytes,history,orig_pkts,orig_ip_bytes,resp_pkts,resp_ip_bytes
1258790493.773208,ZTtgbHvf4s3,192.168.1.104,137,192.168.1.255,137,udp,dns,3.748891,350,0,S0,F,0,D,7,546,0,0
1258790451.402091,pOY6Rw7lhUd,192.168.1.106,138,192.168.1.255,138,udp,,0.000000,0,0,S0,F,0,D,1,229,0,0
1258790493.787448,pn5IiEslca9,192.168.1.104,138,192.168.1.255,138,udp,,2.243339,348,0,S0,F,0,D,2,404,0,0
1258790615.268111,D9slyIu3hFj,192.168.1.106,137,192.168.1.255,137,udp,dns,3.764626,350,0,S0,F,0,D,7,546,0,0
[...]
Add ``--separator=X`` to set a different separator.
* Extracting a subset of columns::
$ ds2txt --select '*' ts,id.resp_h,id.resp_p --skip-all conn.ds
1258790493.773208 192.168.1.255 137
1258790451.402091 192.168.1.255 138
1258790493.787448 192.168.1.255 138
1258790615.268111 192.168.1.255 137
1258790615.289842 192.168.1.255 138
[...]
* Filtering rows::
$ ds2txt --where '*' 'duration > 5 && id.resp_p > 1024' --skip-all conn.ds
1258790631.532888 V8mV5WLITu5 192.168.1.105 55890 239.255.255.250 1900 udp 15.004568 798 0 S0 F 0 D 6 966 0 0
1258792413.439596 tMcWVWQptvd 192.168.1.105 55890 239.255.255.250 1900 udp 15.004581 798 0 S0 F 0 D 6 966 0 0
1258794195.346127 cQwQMRdBrKa 192.168.1.105 55890 239.255.255.250 1900 udp 15.005071 798 0 S0 F 0 D 6 966 0 0
1258795977.253200 i8TEjhWd2W8 192.168.1.105 55890 239.255.255.250 1900 udp 15.004824 798 0 S0 F 0 D 6 966 0 0
1258797759.160217 MsLsBA8Ia49 192.168.1.105 55890 239.255.255.250 1900 udp 15.005078 798 0 S0 F 0 D 6 966 0 0
1258799541.068452 TsOxRWJRGwf 192.168.1.105 55890 239.255.255.250 1900 udp 15.004082 798 0 S0 F 0 D 6 966 0 0
[...]
* Calculate some statistics:
Mean/stddev/min/max over a column::
$ dsstatgroupby '*' basic duration from conn.ds
# Begin DSStatGroupByModule
# processed 2159 rows, where clause eliminated 0 rows
# count(*), mean(duration), stddev, min, max
2159, 42.7938, 1858.34, 0, 86370
[...]
Quantiles of total connection volume::
$ dsstatgroupby '*' quantile 'orig_bytes + resp_bytes' from conn.ds
[...]
2159 data points, mean 24616 +- 343295 [0,1.26615e+07]
quantiles about every 216 data points:
10%: 0, 124, 317, 348, 350, 350, 601, 798, 1469
tails: 90%: 1469, 95%: 7302, 99%: 242629, 99.5%: 1226262
[...]
The ``man`` pages for these tools show further options, and their
``-h`` option gives some more information (though either can be a bit
cryptic, unfortunately).
Deficiencies
------------
Due to limitations of the DataSeries format, one cannot inspect its
files before they have been fully written. In other words, when using
DataSeries, it's currently not possible to inspect the live log
files inside the spool directory before they are rotated to their
final location. It seems that this could be fixed with some effort,
and we will work with the DataSeries development team on that if the
format gains traction among Bro users.
Likewise, we're considering writing custom command line tools for
interacting with DataSeries files, making that a bit more convenient
than what the standard utilities provide.
=========================================
Indexed Logging Output with ElasticSearch
=========================================
.. rst-class:: opening
Bro's default ASCII log format is not exactly the most efficient
way for searching large volumes of data. ElasticSearch
is a new data storage technology for dealing with tons of data.
It's also a search engine built on top of Apache's Lucene
project. It scales very well, both for distributed indexing and
distributed searching.
.. contents::
Warning
-------
This writer plugin is still in testing and is not yet recommended for
production use! The approach to how logs are handled in the plugin is "fire
and forget" at this time; there is no error handling if the server fails to
respond successfully to the insertion request.
Installing ElasticSearch
------------------------
Download the latest version from: http://www.elasticsearch.org/download/.
Once extracted, start ElasticSearch with::
# ./bin/elasticsearch
For more detailed information, refer to the ElasticSearch installation
documentation: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html
Compiling Bro with ElasticSearch Support
----------------------------------------
First, ensure that you have libcurl installed, then run configure::
# ./configure
[...]
====================| Bro Build Summary |=====================
[...]
cURL: true
[...]
ElasticSearch: true
[...]
================================================================
Activating ElasticSearch
------------------------
The easiest way to enable ElasticSearch output is to load the
tuning/logs-to-elasticsearch.bro script. If you are using BroControl,
the following line in local.bro will enable it:
.. console::
@load tuning/logs-to-elasticsearch
With that, Bro will now write most of its logs into ElasticSearch in
addition to maintaining the ASCII logs as it does by default. That
script has some tunable options for choosing which logs to send to
ElasticSearch; refer to the autogenerated script documentation for
those options.
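For instance, one way to point the writer at a non-default server is a
redef in local.bro (check the autogenerated documentation for the
writer's exact option names; the values here are illustrative):

.. code:: bro

    redef LogElasticSearch::server_host = "10.0.0.5";
    redef LogElasticSearch::server_port = 9200;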
An interface named Brownian is being written specifically to integrate
with the data that Bro outputs into ElasticSearch. It can be found here::
https://github.com/grigorescu/Brownian
Tuning
------
A common problem encountered with ElasticSearch is too many files being held
open. The ElasticSearch website has some suggestions on how to increase the
open file limit.
- http://www.elasticsearch.org/tutorials/too-many-open-files/
TODO
----
Lots.
- Perform multicast discovery for server.
- Better error detection.
- Better defaults (don't index loaded-plugins, for instance).
Other Writers
-------------
Bro supports the following built-in output formats other than ASCII:
.. toctree::
:maxdepth: 1
logging-input-sqlite
Further formats are available as external plugins.
Bro will expect that signature file in the same directory as the Bro script. The
default extension of the file name is ``.sig``, and Bro appends that
automatically when necessary.
Signature Language for Network Traffic
======================================
Let's look at the format of a signature more closely. Each individual
signature has the format ``signature <id> { <attributes> }``. ``<id>``
connection (``"http"``, ``"ftp"``, etc.). This is used by Bro's
dynamic protocol detection to activate analyzers on the fly.
Signature Language for File Content
===================================
The signature framework can also be used to identify MIME types of files
irrespective of the network protocol/connection over which the file is
transferred. A special type of signature can be written for this
purpose and will be used automatically by the :doc:`Files Framework
<file-analysis>` or by Bro scripts that use the :bro:see:`file_magic`
built-in function.
Conditions
----------
File signatures use a single type of content condition in the form of a
regular expression:
``file-magic /<regular expression>/``
This is analogous to the ``payload`` content condition for the network
traffic signature language described above. The difference is that
``payload`` signatures are applied to payloads of network connections,
but ``file-magic`` can be applied to arbitrary data; it does not
have to be tied to a network protocol/connection.
Actions
-------
Upon matching a chunk of data, file signatures use the following action
to get information about that data's MIME type:
``file-mime <string> [, <integer>]``
The arguments include the MIME type string associated with the file
magic regular expression and an optional "strength" as a signed integer.
Since multiple file magic signatures may match against a given chunk of
data, the strength value may be used to help choose a "winner". Higher
values are considered stronger.
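Putting the condition and action together, a hypothetical file
signature could look like this (the pattern and strength are chosen
purely for illustration)::

    signature file-pdf-example {
        file-magic /^%PDF-/
        file-mime "application/pdf", 80
    }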
Things to keep in mind when writing signatures
==============================================
Bro records HTTP sessions to the http.log file. This file can then be used for analysis and auditing
purposes.
In the sections below we briefly explain the structure of the http.log
file, and then we show you how to perform basic HTTP traffic monitoring and
analysis tasks with Bro. Some of these ideas and techniques can later be
applied to monitor different protocols in a similar way.
Network administrators and security engineers, for instance, can use the
information in this log to understand the HTTP activity on the network
and troubleshoot network problems or search for anomalous activities. We must
stress that there is no single right way to perform an analysis. It will
depend on the expertise of the person performing the analysis and the
specific details of the task.
For more information about how to handle the HTTP protocol in Bro,
including a complete list of the fields available in http.log, see the
HTTP framework documentation.
Detecting a Proxy Server
------------------------
A proxy server is a device on your network configured to request a
service on behalf of a third system; one of the most common examples is
a Web proxy server. A client without Internet access connects to the
proxy and requests a web page; the proxy sends the request to the web
server, receives the response, and passes it to the original
client.
Proxies were conceived to help manage a network and provide better
encapsulation. Proxies by themselves are not a security threat, but a
misconfigured or unauthorized proxy can allow others, either inside or
outside the network, to access any web site and even conduct malicious
activities anonymously using the network's resources.
What Proxy Server traffic looks like
-------------------------------------
Introduction Section
====================

.. toctree::
:maxdepth: 2
intro/index.rst
cluster/index.rst
install/index.rst
quickstart/index.rst
configuration/index.rst
..
.. _using-bro:
Using Bro Section
=================
.. toctree::
:maxdepth: 2
httpmonitor/index.rst
broids/index.rst
mimestats/index.rst
scripting/index.rst
..
Reference Section
=================
.. toctree::
:maxdepth: 2
frameworks/index.rst
script-reference/index.rst
components/index.rst
.. _upgrade-guidelines:
==============
How to Upgrade
==============
If you're doing an upgrade install (rather than a fresh install),
there are two suggested approaches: either install Bro using the same
installation prefix directory as before, or pick a new prefix and copy
local customizations over. Regardless of which approach you choose,
if you are using BroControl, then after upgrading Bro you will need to
run "broctl check" (to verify that your new configuration is OK)
and "broctl install" to complete the upgrade process.
In the following we summarize general guidelines for upgrading; see
the :ref:`release-notes` for version-specific information.
Reusing Previous Install Prefix
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you choose to configure and install Bro with the same prefix
directory as before, local customizations and configuration files in
``$prefix/share/bro/site`` and ``$prefix/etc`` won't be overwritten
(``$prefix`` indicating the root of where Bro was installed). Also, logs
generated at run-time won't be touched by the upgrade. Backing up local
changes before upgrading is still recommended.
After upgrading, remember to check ``$prefix/share/bro/site`` and
``$prefix/etc`` for ``.example`` files, which indicate that the
distribution's version of the file differs from the local one, and therefore,
may include local changes. Review the differences and make adjustments
as necessary. Use the new version for differences that aren't a result of
a local change.
Using a New Install Prefix
~~~~~~~~~~~~~~~~~~~~~~~~~~
To install the newer version in a different prefix directory than before,
copy local customization and configuration files from ``$prefix/share/bro/site``
and ``$prefix/etc`` to the new location (``$prefix`` indicating the root of
where Bro was originally installed). Review the files for differences
before copying and make adjustments as necessary (use the new version for
differences that aren't a result of a local change). Of particular note,
the copied version of ``$prefix/etc/broctl.cfg`` is likely to need changes
to the ``SpoolDir`` and ``LogDir`` settings.
.. _Xcode: https://developer.apple.com/xcode/
.. _MacPorts: http://www.macports.org
.. _Fink: http://www.finkproject.org
.. _Homebrew: http://brew.sh
.. _bro downloads page: http://bro.org/download/index.html
.. _installing-bro:
To build Bro from source, the following additional dependencies are required:
* CMake 2.6.3 or greater (http://www.cmake.org)
* Make
* C/C++ compiler
* SWIG (http://www.swig.org)
Distributions of these dependencies can likely be obtained from your
preferred Mac OS X package management system (e.g. MacPorts_, Fink_,
or Homebrew_). Specifically for MacPorts, the ``cmake``, ``swig``,
and ``swig-python`` packages provide the required dependencies.
Optional Dependencies
=====================
Bro can make use of some optional libraries and tools if they are found at
build time:
* LibGeoIP (for geolocating IP addresses)
* sendmail (enables Bro and BroControl to send mail)
* curl (used by a Bro script that implements active HTTP)
* gperftools (tcmalloc is used to improve memory and CPU usage)
* ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)
The primary install prefix for binary packages is ``/opt/bro``.
Non-MacOS packages that include BroControl also put variable/runtime
data (e.g. Bro logs) in ``/var/opt/bro``.
Installing from Source
==========================
Bro releases are bundled into source packages for convenience and are
available on the `bro downloads page`_. Alternatively, the latest
Bro development version can be obtained through git repositories
hosted at ``git.bro.org``. See our `git development documentation
<http://bro.org/development/howtos/process.html>`_ for comprehensive
information on Bro's use of git revision control, but the short story
for downloading the full source code experience for Bro via git is:
OpenBSD users, please see our `FAQ
<http://www.bro.org/documentation/faq.html>`_ if you are having
problems installing Bro.
Finally, if you want to build the Bro documentation (not required, because
all of the documentation for the latest Bro release is available on the
Bro web site), there are instructions in ``doc/README`` in the source
distribution.
Configure the Run-Time Environment
==================================
Working with Log Files
======================
Generally, all of Bro's log files are produced by a corresponding
script that defines their individual structure. However, as each log
file flows through the Logging Framework, they share a set of
structural similarities. Without breaking into the scripting aspect of
Bro here, a bird's eye view of how the log files are produced
progresses as follows. The script's author defines the kinds of data,
such as the originating IP address or the duration of a connection,
which will make up the fields (i.e., columns) of the log file. The
author then decides what network activity should generate a single log
file entry (i.e., one line). For example, this could be a connection
having been completed or an HTTP ``GET`` request being issued by an
originator. When these behaviors are observed during operation, the
data is passed to the Logging Framework which adds the entry
to the appropriate log file.
As the fields of the log entries can be further customized by the
user, the header of each log file documents its exact structure: among
other things, it names the separator used to delimit the
data, the string ``(empty)`` as the indicator for an empty field and
the ``-`` character as the indicator for a field that hasn't been set.
The timestamp for when the file was created is included under
``#open``. The header then goes on to detail the fields being listed
in the file and the data types of those fields, in ``#fields`` and
``#types``, respectively. These two entries are often the two most
significant points of interest as they detail not only the field names
but the data types used. When navigating through the different log
definitions readily available saves the user some mental leg work. The
field names are also a key resource for using the :ref:`bro-cut
<bro-cut>` utility included with Bro; see below.
Next to the header follows the main content. In this example we see 7
connections with their key properties, such as originator and
responder IP addresses (note how Bro transparently handles both IPv4 and
IPv6), transport-layer ports, application-layer services (the
``service`` field is filled in as Bro determines a specific protocol to
be in use, independent of the connection's ports), payload size, and
more. See :bro:type:`Conn::Info` for a description of all fields.
In addition to ``conn.log``, Bro generates many further logs by
default, including:
``ftp.log``
A log of FTP session-level activity.
``files.log``
Summaries of files transferred over the network. This information
is aggregated from different protocols, including HTTP, FTP, and
SMTP.
``http.log``
A summary of all HTTP requests with their replies.
``weird.log``
A log of unexpected protocol-level activity. Whenever Bro's
protocol analysis encounters a situation it would not expect
(e.g., an RFC violation) it logs it in this file. Note that in
practice, real-world networks tend to exhibit a large number of
such "crud" that is usually not worth following up on.
Using ``bro-cut``
-----------------
The ``bro-cut`` utility can be used in place of other tools to build
terminal commands that remain flexible and accurate independent of
possible changes to the log file itself. It accomplishes this by parsing
the header in each file and allowing the user to refer to the specific
columnar data available (in contrast to tools like ``awk`` that
require the user to refer to fields referenced by their position).
For example, the following command extracts the originator address and
port, responder address, and connection duration from a ``conn.log``:
@TEST-EXEC: btest-rst-cmd -n 10 "cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h duration"
The corresponding ``awk`` command will look like this:
.. btest:: using_bro
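Judging from the fields used in the ``bro-cut`` example above, such an
``awk`` command would be roughly the following sketch (``/^[^#]/``
skips the header lines; the column positions follow the default
conn.log field order)::

    awk '/^[^#]/ {print $3, $4, $5, $9}' conn.log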
While tools like ``awk`` allow you to indicate the log file as a command
line option, bro-cut only takes input through redirection such as
``|`` and ``<``. There are a couple of ways to direct log file data
into ``bro-cut``, each dependent upon the type of log file you're
processing. A caveat of its use, however, is that all of the
header lines must be present.
.. note::
Log files are rotated on a regular basis,
moving the current log file into a directory with format
``YYYY-MM-DD`` and gzip compressing the file with a file format that
includes the log file type and time range of the file. In the case of
processing a compressed log file you simply adjust your command line
tools to use the complementary ``z*`` versions of commands such as ``cat``
(``zcat``) or ``grep`` (``zgrep``).
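For example, to pull a few columns out of a rotated, compressed
connection log (the file name is illustrative)::

    zcat conn.22:00:00-23:00:00.log.gz | bro-cut id.orig_h id.resp_h duration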
Working with Timestamps
-----------------------
``bro-cut`` accepts the flag ``-d`` to convert the epoch time values
in the log files to human-readable format. The following command
includes the human readable time stamp, the unique identifier, the
HTTP ``Host``, and HTTP ``URI`` as extracted from the ``http.log``
file:
.. btest:: using_bro
@ -218,7 +218,7 @@ See ``man strftime`` for more options for the format string.
Using UIDs
----------
While Bro can do signature based analysis, its primary focus is on
While Bro can do signature-based analysis, its primary focus is on
behavioral detection which alters the practice of log review from
"reactionary review" to a process a little more akin to a hunting
trip. A common progression of review includes correlating a session
@ -254,12 +254,13 @@ network.
-----------------------
Common Log Files
-----------------------
As a monitoring tool, Bro records a detailed view of the traffic inspected and the events generated in
a series of relevant log files. These files can later be reviewed for monitoring, auditing and troubleshooting
purposes.
As a monitoring tool, Bro records a detailed view of the traffic inspected
and the events generated in a series of relevant log files. These files can
later be reviewed for monitoring, auditing and troubleshooting purposes.
In this section we present a brief explanation of the most commonly used log files generated by Bro including links
to descriptions of some of the fields for each log type.
In this section we present a brief explanation of the most commonly used log
files generated by Bro including links to descriptions of some of the fields
for each log type.
+-----------------+---------------------------------------+------------------------------+
| Log File | Description | Field Descriptions |

View file

@ -6,19 +6,19 @@ MIME Type Statistics
====================
Files are constantly transmitted over HTTP on regular networks. These
files belong to a specific category (i.e., executable, text, image,
etc.) identified by a `Multipurpose Internet Mail Extension (MIME)
files belong to a specific category (e.g., executable, text, image)
identified by a `Multipurpose Internet Mail Extension (MIME)
<http://en.wikipedia.org/wiki/MIME>`_. Although MIME was originally
developed to identify the type of non-text attachments on email, it is
also used by Web browser to identify the type of files transmitted and
also used by a web browser to identify the type of files transmitted and
present them accordingly.
In this tutorial, we will show how to use the Sumstats Framework to
collect some statistics information based on MIME types, specifically
In this tutorial, we will demonstrate how to use the Sumstats Framework
to collect statistical information based on MIME types; specifically,
the total number of occurrences, size in bytes, and number of unique
hosts transmitting files over HTTP per each type. For instructions about
hosts transmitting files over HTTP for each type. For instructions on
extracting and creating a local copy of these files, visit :ref:`this
<http-monitor>` tutorial instead.
tutorial <http-monitor>`.
------------------------------------------------
MIME Statistics with Sumstats
@ -30,31 +30,31 @@ Observations, where the event is observed and fed into the framework.
(ii) Reducers, where observations are collected and measured. (iii)
Sumstats, where the main functionality is implemented.
So, we start by defining our observation along with a record to store
all statistics values and an observation interval. We are conducting our
observation on the :bro:see:`HTTP::log_http` event and we are interested
in the MIME type, size of the file ("response_body_len") and the
We start by defining our observation along with a record to store
all statistical values and an observation interval. We are conducting our
observation on the :bro:see:`HTTP::log_http` event and are interested
in the MIME type, size of the file ("response_body_len"), and the
originator host ("orig_h"). We use the MIME type as our key and create
observers for the other two values.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
:lines: 6-29, 54-64
Next, we create the reducers. The first one will accumulate file sizes
and the second one will make sure we only store a host ID once. Below is
Next, we create the reducers. The first will accumulate file sizes
and the second will make sure we only store a host ID once. Below is
the partial code from a :bro:see:`bro_init` handler.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
:lines: 34-37
In our final step, we create the SumStats where we check for the
observation interval and once it expires, we populate the record
observation interval. Once it expires, we populate the record
(defined above) with all the relevant data and write it to a log.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
:lines: 38-51
Putting everything together we end up with the following final code for
After putting the three pieces together we end up with the following final code for
our script.
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
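As a hedged illustration of the observation step described above, a
minimal, self-contained sketch might look like this (the stream name
``mime.bytes`` is illustrative, not the tutorial's actual identifier):

.. code:: bro

    @load base/frameworks/sumstats
    @load base/protocols/http

    event HTTP::log_http(rec: HTTP::Info)
        {
        # Key on the first MIME type seen in the response; observe its size.
        if ( rec?$resp_mime_types )
            SumStats::observe("mime.bytes",
                              [$str=rec$resp_mime_types[0]],
                              [$num=rec$response_body_len]);
        }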

View file

@ -12,8 +12,10 @@ Quick Start Guide
Bro works on most modern, Unix-based systems and requires no custom
hardware. It can be downloaded in either pre-built binary package or
source code forms. See :ref:`installing-bro` for instructions on how to
install Bro. Below, ``$PREFIX`` is used to reference the Bro
installation root directory, which by default is ``/usr/local/bro/`` if
install Bro.
In the examples below, ``$PREFIX`` is used to reference the Bro
installation root directory, which by default is ``/usr/local/bro`` if
you install from source.
Managing Bro with BroControl
@ -21,7 +23,10 @@ Managing Bro with BroControl
BroControl is an interactive shell for easily operating/managing Bro
installations on a single system or even across multiple systems in a
traffic-monitoring cluster.
traffic-monitoring cluster. This section explains how to use BroControl
to manage a stand-alone Bro installation. For instructions on how to
configure a Bro cluster, see the :doc:`Cluster Configuration
<../configuration/index>` documentation.
A Minimal Starting Configuration
--------------------------------
@ -155,7 +160,7 @@ changes we want to make:
attempt looks like it may have been successful, and we want email when
that happens, but only for certain servers.
So we've defined *what* we want to do, but need to know *where* to do it.
We've defined *what* we want to do, but need to know *where* to do it.
The answer is to use a script written in the Bro programming language, so
let's do a quick intro to Bro scripting.
@ -181,7 +186,7 @@ must explicitly choose if they want to load them.
The main entry point for the default analysis configuration of a standalone
Bro instance managed by BroControl is the ``$PREFIX/share/bro/site/local.bro``
script. So we'll be adding to that in the following sections, but first
script. We'll be adding to that in the following sections, but first
we have to figure out what to add.
Redefining Script Option Variables
@ -197,7 +202,7 @@ A redefinable constant might seem strange, but what that really means is that
the variable's value may not change at run-time, but its initial value can be
modified via the ``redef`` operator at parse-time.
So let's continue on our path to modify the behavior for the two SSL
Let's continue on our path to modify the behavior for the two SSL
and SSH notices. Looking at :doc:`/scripts/base/frameworks/notice/main.bro`,
we see that it advertises:
@ -211,7 +216,7 @@ we see that it advertises:
const ignored_types: set[Notice::Type] = {} &redef;
}
That's exactly what we want to do for the SSL notice. So add to ``local.bro``:
That's exactly what we want to do for the SSL notice. Add to ``local.bro``:
.. code:: bro
@ -229,7 +234,7 @@ is valid before installing it and then restarting the Bro instance:
.. console::
[BroControl] > check
bro is ok.
bro scripts are ok.
[BroControl] > install
removing old policies in /usr/local/bro/spool/policy/site ... done.
removing old policies in /usr/local/bro/spool/policy/auto ... done.
@ -245,15 +250,15 @@ is valid before installing it and then restarting the Bro instance:
Now that the SSL notice is ignored, let's look at how to send an email on
the SSH notice. The notice framework has a similar option called
``emailed_types``, but that can't differentiate between SSH servers and we
only want email for logins to certain ones. Then we come to the ``PolicyItem``
record and ``policy`` set and realize that those are actually what get used
to implement the simple functionality of ``ignored_types`` and
``emailed_types``, but using that would generate email for all SSH servers and
we only want email for logins to certain ones. There is a ``policy`` hook
that is actually what is used to implement the simple functionality of
``ignored_types`` and
``emailed_types``, but it's extensible such that the condition and action taken
on notices can be user-defined.
In ``local.bro``, let's add a new ``PolicyItem`` record to the ``policy`` set
that only takes the email action for SSH logins to a defined set of servers:
In ``local.bro``, let's define a new ``policy`` hook handler body
that takes the email action for SSH logins only for a defined set of servers:
.. code:: bro
@ -271,14 +276,14 @@ that only takes the email action for SSH logins to a defined set of servers:
You'll just have to trust the syntax for now, but what we've done is
first declare our own variable to hold a set of watched addresses,
``watched_servers``; then added a record to the policy that will generate
an email on the condition that the predicate function evaluates to true, which
is whenever the notice type is an SSH login and the responding host stored
``watched_servers``; then added a hook handler body to the policy that will
generate an email whenever the notice type is an SSH login and the responding
host stored
inside the ``Info`` record's connection field is in the set of watched servers.
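As a rough, hedged sketch (the set name and address below are
illustrative, and it assumes the SSH scripts are loaded), such a hook
handler body could look like:

.. code:: bro

    const watched_servers: set[addr] = { 192.168.1.100 } &redef;

    hook Notice::policy(n: Notice::Info)
        {
        if ( n$note == SSH::Interesting_Hostname_Login &&
             n?$conn && n$conn$id$resp_h in watched_servers )
            add n$actions[Notice::ACTION_EMAIL];
        }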
.. note:: record field member access is done with the '$' character
.. note:: Record field member access is done with the '$' character
instead of a '.' as might be expected from other languages, in
order to avoid ambiguity with the builtin address type's use of '.'
order to avoid ambiguity with the built-in address type's use of '.'
in IPv4 dotted decimal representations.
Remember, to finalize that configuration change perform the ``check``,
@ -292,9 +297,10 @@ tweak the most basic options. Here are some suggestions on what to explore next:
* We only looked at how to change options declared in the notice framework;
there are many more options to look at in other script packages.
* Continue reading with :ref:`using-bro` chapter which goes into more
depth on working with Bro; then look at :ref:`writing-scripts` for
learning how to start writing your own scripts.
* Continue reading with the :ref:`Using Bro <using-bro>` chapter, which goes
into more depth on working with Bro; then look at
:ref:`writing-scripts` to learn how to start writing your own
scripts.
* Look at the scripts in ``$PREFIX/share/bro/policy`` for further ones
you may want to load; you can browse their documentation at the
:ref:`overview of script packages <script-packages>`.
@ -407,7 +413,7 @@ logging) and adds SSL certificate validation.
You might notice that a script you load from the command line uses the
``@load`` directive in the Bro language to declare dependence on other scripts.
This directive is similar to the ``#include`` of C/C++, except the semantics
are "load this script if it hasn't already been loaded".
are, "load this script if it hasn't already been loaded."
.. note:: If one wants Bro to be able to load scripts that live outside the
default directories in Bro's installation root, the ``BROPATH`` environment
@ -420,7 +426,7 @@ Running Bro Without Installing
For developers that wish to run Bro directly from the ``build/``
directory (i.e., without performing ``make install``), they will have
to first adjust ``BROPATH`` and ``BROMAGIC`` to look for scripts and
to first adjust ``BROPATH`` to look for scripts and
additional files inside the build directory. Sourcing either
``build/bro-path-dev.sh`` or ``build/bro-path-dev.csh`` as appropriate
for the current shell accomplishes this and also augments your

View file

@ -1,5 +1,5 @@
@load base/protocols/conn
@load base/protocols/dns
@load base/protocols/http
event connection_state_remove(c: connection)
{

View file

@ -1,13 +1,19 @@
event bro_init()
{
# Declaration of the table.
local ssl_services: table[string] of port;
# Initialize the table.
ssl_services = table(["SSH"] = 22/tcp, ["HTTPS"] = 443/tcp);
# Insert one key-yield pair into the table.
ssl_services["IMAPS"] = 993/tcp;
# Check if the key "SMTPS" is not in the table.
if ( "SMTPS" !in ssl_services )
ssl_services["SMTPS"] = 587/tcp;
# Iterate over each key in the table.
for ( k in ssl_services )
print fmt("Service Name: %s - Common Port: %s", k, ssl_services[k]);
}

View file

@ -1,8 +1,10 @@
module Factor;
export {
# Append the value LOG to the Log::ID enumerable.
redef enum Log::ID += { LOG };
# Define a new type called Factor::Info.
type Info: record {
num: count &log;
factorial_num: count &log;
@ -20,6 +22,7 @@ function factorial(n: count): count
event bro_init()
{
# Create the logging stream.
Log::create_stream(LOG, [$columns=Info]);
}

View file

@ -2,7 +2,6 @@
@load base/protocols/ssh/
redef Notice::emailed_types += {
SSH::Interesting_Hostname_Login,
SSH::Login
SSH::Interesting_Hostname_Login
};

View file

@ -3,5 +3,4 @@
redef Notice::type_suppression_intervals += {
[SSH::Interesting_Hostname_Login] = 1day,
[SSH::Login] = 12hrs,
};

View file

@ -54,7 +54,8 @@ script and much more in following sections.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 4-6
Lines 3 to 5 of the script process the ``__load__.bro`` script in the
The first part of the script consists of ``@load`` directives which
process the ``__load__.bro`` script in the
respective directories being loaded. The ``@load`` directives are
often considered good practice or even just good manners when writing
Bro scripts to make sure they can be used on their own. While it's unlikely that in a
@ -78,29 +79,37 @@ of the :bro:id:`NOTICE` function to generate notices of type
``TeamCymruMalwareHashRegistry::Match`` as done in the next section. Notices
allow Bro to generate some kind of extra notification beyond its
default log types. Often times, this extra notification comes in the
form of an email generated and sent to a preconfigured address, but can be altered
depending on the needs of the deployment. The export section is finished off with
the definition of two constants that list the kind of files we want to match against and
the minimum percentage of detection threshold in which we are interested.
form of an email generated and sent to a preconfigured address, but can
be altered depending on the needs of the deployment. The export section
is finished off with the definition of a few constants that list the kinds
of files we want to match against and the minimum detection-rate
threshold in which we are interested.
Up until this point, the script has merely done some basic setup. With the next section,
the script starts to define instructions to take in a given event.
Up until this point, the script has merely done some basic setup. With
the next section, the script starts to define instructions to take in
a given event.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
:lines: 38-62
:lines: 38-71
The workhorse of the script is contained in the event handler for
``file_hash``. The :bro:see:`file_hash` event allows scripts to access
the information associated with a file for which Bro's file analysis framework has
generated a hash. The event handler is passed the file itself as ``f``, the type of digest
algorithm used as ``kind`` and the hash generated as ``hash``.
the information associated with a file for which Bro's file analysis
framework has generated a hash. The event handler is passed the
file itself as ``f``, the type of digest algorithm used as ``kind``
and the hash generated as ``hash``.
On line 3, an ``if`` statement is used to check for the correct type of hash, in this case
a SHA1 hash. It also checks for a mime type we've defined as being of interest as defined in the
constant ``match_file_types``. The comparison is made against the expression ``f$mime_type``, which uses
the ``$`` dereference operator to check the value ``mime_type`` inside the variable ``f``. Once both
values resolve to true, a local variable is defined to hold a string comprised of the SHA1 hash concatenated
with ``.malware.hash.cymru.com``; this value will be the domain queried in the malware hash registry.
In the ``file_hash`` event handler, there is an ``if`` statement that is used
to check for the correct type of hash, in this case
a SHA1 hash. It also checks for a mime type we've defined as
being of interest in the constant ``match_file_types``.
The comparison is made against the expression ``f$mime_type``, which uses
the ``$`` dereference operator to check the value ``mime_type``
inside the variable ``f``. If the entire expression evaluates to true,
then a helper function is called to do the rest of the work. In that
function, a local variable is defined to hold a string comprised of
the SHA1 hash concatenated with ``.malware.hash.cymru.com``; this
value will be the domain queried in the malware hash registry.
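Reduced to a hedged, self-contained sketch (the helper name
``do_lookup`` and the matched MIME type are illustrative, not the
script's actual code), that logic has roughly this shape:

.. code:: bro

    @load base/files/hash

    function do_lookup(hash: string)
        {
        # Build the domain to query, e.g. "<sha1>.malware.hash.cymru.com".
        local hash_domain = fmt("%s.malware.hash.cymru.com", hash);
        print hash_domain;  # stand-in for the actual registry lookup
        }

    event file_hash(f: fa_file, kind: string, hash: string)
        {
        if ( kind == "sha1" && f?$mime_type &&
             f$mime_type == "application/x-dosexec" )
            do_lookup(hash);
        }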
The rest of the script is contained within a ``when`` block. In
short, a ``when`` block is used when Bro needs to perform asynchronous
@ -111,24 +120,28 @@ this event continues and upon receipt of the values returned by
:bro:id:`lookup_hostname_txt`, the ``when`` block is executed. The
``when`` block splits the string returned into a portion for the date on which
the malware was first detected and the detection rate by splitting on a text space
and storing the values returned in a local table variable. In line 12, if the table
returned by ``split1`` has two entries, indicating a successful split, we store the detection
date in ``mhr_first_detected`` and the rate in ``mhr_detect_rate`` on lines 14 and 15 respectively
and storing the values returned in a local table variable.
In the ``do_mhr_lookup`` function, if the table
returned by ``split1`` has two entries, indicating a successful split, we
store the detection
date in ``mhr_first_detected`` and the rate in ``mhr_detect_rate``
using the appropriate conversion functions. From this point on, Bro knows it has seen a file
transmitted whose hash has been seen by the Team Cymru Malware Hash Registry; the rest
of the script is dedicated to producing a notice.
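The asynchronous pattern itself can be sketched as follows (the queried
domain and the output format are illustrative):

.. code:: bro

    event bro_init()
        {
        when ( local result = lookup_hostname_txt("sample.malware.hash.cymru.com") )
            {
            # The registry answers with "<epoch> <detection rate>".
            local parts = split1(result, / /);
            if ( |parts| == 2 )
                print fmt("first detected: %s, rate: %s", parts[1], parts[2]);
            }
        }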
On line 17, the detection time is processed into a string representation and stored in
``readable_first_detected``. The script then compares the detection rate against the
``notice_threshold`` that was defined earlier. If the detection rate is high enough, the script
creates a concise description of the notice on line 22, a possible URL to check the sample against
``virustotal.com``'s database, and makes the call to :bro:id:`NOTICE` to hand the relevant information
off to the Notice framework.
The detection time is processed into a string representation and stored in
``readable_first_detected``. The script then compares the detection rate
against the ``notice_threshold`` that was defined earlier. If the
detection rate is high enough, the script creates a concise description
of the notice and stores it in the ``message`` variable. It also
creates a possible URL to check the sample against
``virustotal.com``'s database, and makes the call to :bro:id:`NOTICE`
to hand the relevant information off to the Notice framework.
In approximately 25 lines of code, Bro provides an amazing
In a few dozen lines of code, Bro provides an amazing
utility that would be incredibly difficult to implement and deploy
with other products. In truth, claiming that Bro does this in 25
lines is a misdirection; there is a truly massive number of things
with other products. In truth, claiming that Bro does this in such a small
number of lines is a misdirection; there is a truly massive number of things
going on behind-the-scenes in Bro, but it is the inclusion of the
scripting language that gives analysts access to those underlying
layers in a succinct and well-defined manner.
@ -180,7 +193,7 @@ event definition used by Bro. As Bro detects DNS requests being
issued by an originator, it issues this event and any number of
scripts then have access to the data Bro passes along with the event.
In this example, Bro passes not only the message, the query, query
type and query class for the DNS request, but also a then record used
type and query class for the DNS request, but also a record used
for the connection itself.
The Connection Record Data Type
@ -210,8 +223,7 @@ into the connection record data type will be
:bro:id:`connection_state_remove` . As detailed in the in-line
documentation, Bro generates this event just before it decides to
remove this event from memory, effectively forgetting about it. Let's
take a look at a simple script, stored as
``connection_record_01.bro``, that will output the connection record
take a look at a simple example script that will output the connection record
for a single connection.
.. btest-include:: ${DOC_ROOT}/scripting/connection_record_01.bro
@ -232,7 +244,7 @@ overly populated.
.. btest:: connection-record-01
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_01.bro
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_01.bro
As you can see from the output, the connection record is something of
a jumble when printed on its own. Regularly taking a peek at a
@ -243,14 +255,14 @@ of reference for accessing data in a script.
Bro makes extensive use of nested data structures to store state and
information gleaned from the analysis of a connection as a complete
unit. To break down this collection of information, you will have to
make use of use Bro's field delimiter ``$``. For example, the
make use of Bro's field delimiter ``$``. For example, the
originating host is referenced by ``c$id$orig_h``, which if given a
narrative relates to ``orig_h``, which is a member of ``id``, which is
a member of the data structure referred to as ``c`` that was passed
into the event handler." Given that the responder port
(``c$id$resp_p``) is ``53/tcp``, it's likely that Bro's base DNS scripts
into the event handler. Given that the responder port
``c$id$resp_p`` is ``80/tcp``, it's likely that Bro's base HTTP scripts
can further populate the connection record. Let's load the
``base/protocols/dns`` scripts and check the output of our script.
``base/protocols/http`` scripts and check the output of our script.
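A minimal sketch of that kind of field access (the output format is
illustrative):

.. code:: bro

    event connection_state_remove(c: connection)
        {
        print fmt("%s:%s -> %s:%s", c$id$orig_h, c$id$orig_p,
                  c$id$resp_h, c$id$resp_p);
        }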
Bro uses the dollar sign as its field delimiter and a direct
correlation exists between the output of the connection record and the
@ -262,16 +274,16 @@ brackets, which would correspond to the ``$``-delimiter in a Bro script.
.. btest:: connection-record-02
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_02.bro
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_02.bro
The addition of the ``base/protocols/dns`` scripts populates the
``dns=[]`` member of the connection record. While Bro is doing a
The addition of the ``base/protocols/http`` scripts populates the
``http=[]`` member of the connection record. While Bro is doing a
massive amount of work in the background, it is in what is commonly
called "scriptland" that details are being refined and decisions
being made. Were we to continue running in "bare mode" we could slowly
keep adding infrastructure through ``@load`` statements. For example,
were we to ``@load base/frameworks/logging``, Bro would generate a
``conn.log`` and ``dns.log`` for us in the current working directory.
``conn.log`` and ``http.log`` for us in the current working directory.
As mentioned above, including the appropriate ``@load`` statements is
not only good practice, but can also help to indicate which
functionalities are being used in a script. Take a second to run the
@ -345,13 +357,13 @@ keyword. Unlike globals, constants can only be set or altered at
parse time if the ``&redef`` attribute has been used. Afterwards (in
runtime) the constants are unalterable. In most cases, re-definable
constants are used in Bro scripts as containers for configuration
options. For example, the configuration option to log password
options. For example, the configuration option to log passwords
decrypted from HTTP streams is stored in
``HTTP::default_capture_password`` as shown in the stripped down
:bro:see:`HTTP::default_capture_password` as shown in the stripped down
excerpt from :doc:`/scripts/base/protocols/http/main.bro` below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro
:lines: 8-10,19-21,120
:lines: 9-11,20-22,121
Because the constant was declared with the ``&redef`` attribute, if we
needed to turn this option on globally, we could do so by adding the
@ -384,9 +396,12 @@ which it was declared. Local variables tend to be used for values
that are only needed within a specific scope; once the processing
of a script passes beyond that scope and the value is no longer used, the
variable is deleted. Bro maintains names of locals separately from globally
visible ones, an example of which is illustrated below. The script
executes the event handler :bro:id:`bro_init` which in turn calls the
function ``add_two(i: count)`` with an argument of ``10``. Once Bro
visible ones, an example of which is illustrated below.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_local.bro
The script executes the event handler :bro:id:`bro_init` which in turn calls
the function ``add_two(i: count)`` with an argument of ``10``. Once Bro
enters the ``add_two`` function, it provisions a locally scoped
variable called ``added_two`` to hold the value of ``i+2``, in this
case, ``12``. The ``add_two`` function then prints the value of the
@ -398,8 +413,6 @@ processing the ``bro_init`` function, the variable called ``test`` is
no longer in scope and, since there exist no other references to the
value ``12``, the value is also deleted.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_local.bro
Data Structures
---------------
@ -506,20 +519,7 @@ Tables
A table in Bro is a mapping of a key to a value or yield. While the
values don't have to be unique, each key in the table must be unique
to preserve a one-to-one mapping of keys to values. In the example
below, we've compiled a table of SSL-enabled services and their common
ports. The explicit declaration and constructor for the table on
lines 5 and 7 lay out the data types of the keys (strings) and the
data types of the yields (ports) and then fill in some sample key and
yield pairs. Line 8 shows how to use a table accessor to insert one
key-yield pair into the table. When using the ``in`` operator on a table,
you are effectively working with the keys of the table. In the case
of an ``if`` statement, the ``in`` operator will check for membership among
the set of keys and return a true or false value. As seen on line 10,
we are checking if ``SMTPS`` is not in the set of keys for the
ssl_services table and if the condition holds true, we add the
key-yield pair to the table. Line 13 shows the use of a ``for`` statement
to iterate over each key currently in the table.
to preserve a one-to-one mapping of keys to values.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_table_declaration.bro
@ -527,6 +527,21 @@ to iterate over each key currently in the table.
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/data_struct_table_declaration.bro
In this example,
we've compiled a table of SSL-enabled services and their common
ports. The explicit declaration and constructor for the table are on
two different lines and lay out the data types of the keys (strings) and the
data types of the yields (ports) and then fill in some sample key and
yield pairs. You can also use a table accessor to insert one
key-yield pair into the table. When using the ``in``
operator on a table, you are effectively working with the keys of the table.
In the case of an ``if`` statement, the ``in`` operator will check for
membership among the set of keys and return a true or false value.
The example shows how to check if ``SMTPS`` is not in the set
of keys for the ``ssl_services`` table and, if the condition holds true,
add the key-yield pair to the table. Finally, the example shows how
to use a ``for`` statement to iterate over each key currently in the table.
Simple examples aside, tables can become extremely complex as the keys
and values for the table become more intricate. Tables can have keys
comprised of multiple data types and even a series of elements called
@ -535,9 +550,15 @@ Bro implies a cost in complexity for the person writing the scripts
but pays off in effectiveness given the power of Bro as a network
security platform.
The script below shows a sample table of strings indexed by two
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_table_complex.bro
.. btest:: data_struct_table_complex
@TEST-EXEC: btest-rst-cmd bro -b ${DOC_ROOT}/scripting/data_struct_table_complex.bro
This script shows a sample table of strings indexed by two
strings, a count, and a final string. With a tuple acting as an
aggregate key, the order is the important as a change in order would
aggregate key, the order is important as a change in order would
result in a new key. Here, we're using the table to track the
director, studio, year of release, and lead actor in a series of
samurai flicks. It's important to note that in the case of the ``for``
@ -546,14 +567,9 @@ iterate over, say, the directors; we have to iterate with the exact
format as the keys themselves. In this case, we need square brackets
surrounding four temporary variables to act as a collection for our
iteration. While this is a contrived example, we could easily have
had keys containing IP addresses (``addr``), ports (``port``) and even a ``string``
calculated as the result of a reverse hostname lookup.
had keys containing IP addresses (``addr``), ports (``port``) and even
a ``string`` calculated as the result of a reverse hostname lookup.
.. btest-include:: ${DOC_ROOT}/scripting/data_struct_table_complex.bro
.. btest:: data_struct_table_complex
@TEST-EXEC: btest-rst-cmd bro -b ${DOC_ROOT}/scripting/data_struct_table_complex.bro
Vectors
~~~~~~~
@ -657,7 +673,7 @@ using a 20 bit subnet mask.
Because this is a script that doesn't use any kind of network
analysis, we can handle the event :bro:id:`bro_init` which is always
generated by Bro's core upon startup. On lines six and seven, two
generated by Bro's core upon startup. In the example script, two
locally scoped vectors are created to hold our lists of subnets and IP
addresses respectively. Then, using a set of nested ``for`` loops, we
iterate over every subnet and every IP address and use an ``if``
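As a hedged sketch of that nested iteration and membership test (the
subnets and addresses are illustrative):

.. code:: bro

    event bro_init()
        {
        local subnets = vector(172.16.0.0/20, 10.0.0.0/8);
        local addrs = vector(172.16.4.56, 192.168.0.1);

        for ( s in subnets )
            for ( a in addrs )
                if ( addrs[a] in subnets[s] )
                    print fmt("%s belongs to subnet %s", addrs[a], subnets[s]);
        }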
@ -760,7 +776,7 @@ string against which it will be tested to be on the right.
In the sample above, two local variables are declared to hold our
sample sentence and regular expression. Our regular expression in
this case will return true if the string contains either the word
``quick`` or the word ``fox``. The ``if`` statement on line six uses
``quick`` or the word ``fox``. The ``if`` statement in the script uses
embedded matching and the ``in`` operator to check for the existence
of the pattern within the string. If the statement resolves to true,
:bro:id:`split` is called to break the string into separate pieces.
@ -768,8 +784,8 @@ of the pattern within the string. If the statement resolves to true,
table of strings indexed by a count. Each element of the table will
be the segments before and after any matches against the pattern but
excluding the actual matches. In this case, our pattern matches
twice, and results in a table with three entries. Lines 11 through 13
print the contents of the table in order.
twice, and results in a table with three entries. The ``print`` statements
in the script will print the contents of the table in order.
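A hedged sketch of that embedded match and split (the sentence and
pattern are illustrative):

.. code:: bro

    event bro_init()
        {
        local sentence = "the quick fox jumps over the lazy dog";
        local re = /quick|fox/;

        if ( re in sentence )
            # Yields a table of the segments around the two matches.
            print split(sentence, re);
        }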
.. btest:: data_type_pattern
@ -780,7 +796,7 @@ inequality operators through the ``==`` and ``!=`` operators
respectively. When used in this manner however, the string must match
entirely to resolve to true. For example, the script below uses two
ternary conditional statements to illustrate the use of the ``==``
operators with patterns. On lines 8 and 11 the output is altered based
operator with patterns. The output is altered based
on the result of the comparison between the pattern and the string.
.. btest-include:: ${DOC_ROOT}/scripting/data_type_pattern_02.bro
@ -915,11 +931,7 @@ through a contrived example of simply logging the digits 1 through 10
and their corresponding factorial to the default ASCII log writer.
It's always best to work through the problem once, simulating the
desired output with ``print`` and ``fmt`` before attempting to dive
into the Logging Framework. Below is a script that defines a
factorial function to recursively calculate the factorial of a
unsigned integer passed as an argument to the function. Using
``print`` and :bro:id:`fmt` we can ensure that Bro can perform these
calculations correctly as well get an idea of the answers ourselves.
into the Logging Framework.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro
@ -927,19 +939,28 @@ calculations correctly as well get an idea of the answers ourselves.
@TEST-EXEC: btest-rst-cmd bro ${DOC_ROOT}/scripting/framework_logging_factorial_01.bro
This script defines a factorial function to recursively calculate the
factorial of an unsigned integer passed as an argument to the function. Using
``print`` and :bro:id:`fmt` we can ensure that Bro performs these
calculations correctly, as well as get an idea of the answers ourselves.
The output of the script aligns with what we expect, so now it's time
to integrate the Logging Framework. As mentioned above we have to
perform a few steps before we can issue the :bro:id:`Log::write`
method and produce a logfile. As we are working within a namespace
and informing an outside entity of workings and data internal to the
namespace, we use an ``export`` block. First we need to inform Bro
to integrate the Logging Framework.
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_02.bro
As mentioned above, we have to perform a few steps before we can
issue the :bro:id:`Log::write` method and produce a logfile.
As we are working within a namespace and informing an outside
entity of workings and data internal to the namespace, we use
an ``export`` block. First we need to inform Bro
that we are going to be adding another Log Stream by adding a value to
the :bro:type:`Log::ID` enumerable. In line 6 of the script, we append the
the :bro:type:`Log::ID` enumerable. In this script, we append the
value ``LOG`` to the ``Log::ID`` enumerable; however, due to this being in
an export block, the value appended to ``Log::ID`` is actually
``Factor::LOG``. Next, we need to define the name and value pairs
that make up the data of our logs and dictate its format. Lines 8
through 11 define a new datatype called an ``Info`` record (actually,
that make up the data of our logs and dictate its format. This script
defines a new record datatype called ``Info`` (actually,
``Factor::Info``) with two fields, both unsigned integers. Each of the
fields in the ``Factor::Info`` record type includes the ``&log``
attribute, indicating that these fields should be passed to the
@ -947,16 +968,14 @@ Logging Framework when ``Log::write`` is called. Were there to be
any name value pairs without the ``&log`` attribute, those fields
would simply be ignored during logging but remain available for the
lifespan of the variable. The next step is to create the logging
stream with :bro:id:`Log::create_stream` which takes a Log::ID and a
record as its arguments. In this example, on line 25, we call the
stream with :bro:id:`Log::create_stream` which takes a ``Log::ID`` and a
record as its arguments. In this example, we call the
``Log::create_stream`` method and pass ``Factor::LOG`` and the
``Factor::Info`` record as arguments. From here on out, if we issue
the ``Log::write`` command with the correct ``Log::ID`` and a properly
formatted ``Factor::Info`` record, a log entry will be generated.
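At that point the write itself is a one-liner; a hedged sketch, assuming
the ``Factor`` module from this example is loaded (the field values are
illustrative):

.. code:: bro

    event bro_done()
        {
        Log::write(Factor::LOG, [$num=5, $factorial_num=120]);
        }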
.. btest-include:: ${DOC_ROOT}/scripting/framework_logging_factorial_02.bro
Now, if we run the new version of the script, instead of generating
Now, if we run this script, instead of generating
logging information to stdout, no output is created. Instead the
output is all in ``factor.log``, properly formatted and organized.
@ -1000,8 +1019,8 @@ filter can specify a function that returns a string to be used as the
filename for the current call to ``Log::write``. The definition for
this function has to take as its parameters a ``Log::ID`` called ``id``, a
string called ``path``, and the appropriate record type for the logs called
``rec``. You can see the definition of ``mod5`` used in this example on
line one conforms to that requirement. The function simply returns
``rec``. You can see that the definition of ``mod5`` used in this example
conforms to that requirement. The function simply returns
``factor-mod5`` if the factorial is evenly divisible by 5; otherwise, it
returns ``factor-non5``. In the additional ``bro_init`` event
handler, we define a locally scoped ``Log::Filter`` and assign it a
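A hedged sketch of such a path function and a filter using it, again
assuming the ``Factor`` module from the running example (the filter name
is illustrative):

.. code:: bro

    function mod5(id: Log::ID, path: string, rec: Factor::Info) : string
        {
        return rec$factorial_num % 5 == 0 ? "factor-mod5" : "factor-non5";
        }

    event bro_init()
        {
        local filter: Log::Filter = [$name="mod5-split", $path_func=mod5];
        Log::add_filter(Factor::LOG, filter);
        }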
@ -1074,7 +1093,8 @@ make a call to :bro:id:`NOTICE` supplying it with an appropriate
:bro:type:`Notice::Info` record. Often times the call to ``NOTICE``
includes just the ``Notice::Type``, and a concise message. There are
however, significantly more options available when raising notices as
seen in the table below. The only field in the table below whose
seen in the definition of :bro:type:`Notice::Info`. The only field in
``Notice::Info`` whose
attributes make it a required field is the ``note`` field. Still,
good manners are always important and including a concise message in
``$msg`` and, where necessary, the contents of the connection record
@ -1086,57 +1106,6 @@ that are commonly included, ``$identifier`` and ``$suppress_for`` are
built around the automated suppression feature of the Notice Framework
which we will cover shortly.
.. todo::
Once the link to ``Notice::Info`` work I think we should take out
the table. That's too easy to get out of date.
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| Field | Type | Attributes | Use |
+=====================+==================================================================+================+========================================+
| ts | time | &log &optional | The time of the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| uid | string | &log &optional | A unique connection ID |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| id | conn_id | &log &optional | A 4-tuple to identify endpoints |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| conn | connection | &optional | Shorthand for the uid and id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| iconn | icmp_conn | &optional | Shorthand for the uid and id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| proto | transport_proto | &log &optional | Transport protocol |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| note | Notice::Type | &log | The Notice::Type of the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| msg | string | &log &optional | Human readable message |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| sub | string | &log &optional | Human readable message |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| src | addr | &log &optional | Source address if no conn_id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| dst | addr | &log &optional | Destination addr if no conn_id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| p | port | &log &optional | Port if no conn_id |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| n | count | &log &optional | Count or status code |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| src_peer | event_peer | &log &optional | Peer that raised the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| peer_descr | string | &log &optional | Text description of the src_peer |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| actions | set[Notice::Action] | &log &optional | Actions applied to the notice |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| policy_items | set[count] | &log &optional | Policy items that have been applied |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| email_body_sections | vector | &optional | Body of the email for email notices. |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| email_delay_tokens | set[string] | &optional | Delay functionality for email notices. |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| identifier | string | &optional | A unique string identifier |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
| suppress_for | interval | &log &optional | Length of time to suppress a notice. |
+---------------------+------------------------------------------------------------------+----------------+----------------------------------------+
One of the default policy scripts raises a notice when an SSH login
has been heuristically detected and the originating hostname is one
that would raise suspicion. Effectively, the script attempts to
@ -1153,15 +1122,15 @@ possible while staying concise.
While much of the script relates to the actual detection, the parts
specific to the Notice Framework are actually quite interesting in
themselves. On line 18 the script's ``export`` block adds the value
themselves. The script's ``export`` block adds the value
``SSH::Interesting_Hostname_Login`` to the enumerable constant
``Notice::Type`` to indicate to the Bro core that a new type of notice
is being defined. The script then calls ``NOTICE`` and defines the
``$note``, ``$msg``, ``$sub`` and ``$conn`` fields of the
:bro:type:`Notice::Info` record. Line 42 also includes a ternary if
statement that modifies the ``$msg`` text depending on whether the
:bro:type:`Notice::Info` record. There are two ternary if
statements that modify the ``$msg`` text depending on whether the
host is a local address and whether it is the client or the server.
This use of :bro:id:`fmt` and a ternary operators is a concise way to
This use of :bro:id:`fmt` and ternary operators is a concise way to
lend readability to the notices that are generated without the need
for branching ``if`` statements that each raise a specific notice.
@ -1222,7 +1191,7 @@ from the connection relative to the behavior that has been observed by
Bro.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ssl/expiring-certs.bro
:lines: 60-63
:lines: 64-68
In the :doc:`/scripts/policy/protocols/ssl/expiring-certs.bro` script
which identifies when SSL certificates are set to expire and raises
@ -1302,9 +1271,9 @@ in the call to ``NOTICE``.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_01.bro
The Notice Policy shortcut above adds the ``Notice::Types`` of
SSH::Interesting_Hostname_Login and SSH::Login to the
Notice::emailed_types set while the shortcut below alters the length
The Notice Policy shortcut above adds the ``Notice::Type`` of
``SSH::Interesting_Hostname_Login`` to the
``Notice::emailed_types`` set while the shortcut below alters the length
of time for which those notices will be suppressed.
.. btest-include:: ${DOC_ROOT}/scripting/framework_notice_shortcuts_02.bro

1
magic

@ -1 +0,0 @@
Subproject commit 99c6b89230e2b9b0e781c42b0b9412d2ab4e14b2

View file

@ -7,10 +7,10 @@ module Unified2;
export {
redef enum Log::ID += { LOG };
## Directory to watch for Unified2 files.
## File to watch for Unified2 files.
const watch_file = "" &redef;
## File to watch for Unified2 records.
## Directory to watch for Unified2 records.
const watch_dir = "" &redef;
## The sid-msg.map file you would like to use for your alerts.

View file

@ -0,0 +1 @@
Support for X509 certificates with the file analysis framework.

View file

@ -0,0 +1 @@
@load ./main

View file

@ -0,0 +1,77 @@
@load base/frameworks/files
@load base/files/hash
module X509;
export {
redef enum Log::ID += { LOG };
type Info: record {
## Current timestamp.
ts: time &log;
## File id of this certificate.
id: string &log;
## Basic information about the certificate.
certificate: X509::Certificate &log;
## The opaque wrapping the certificate. Mainly used
## for the verify operations.
handle: opaque of x509;
## All extensions that were encountered in the certificate.
extensions: vector of X509::Extension &default=vector();
## Subject alternative name extension of the certificate.
san: X509::SubjectAlternativeName &optional &log;
## Basic constraints extension of the certificate.
basic_constraints: X509::BasicConstraints &optional &log;
};
## Event for accessing logged records.
global log_x509: event(rec: Info);
}
event bro_init() &priority=5
{
Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509]);
}
redef record Files::Info += {
## Information about X509 certificates. This is used to keep
## certificate information until all events have been received.
x509: X509::Info &optional;
};
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=5
{
f$info$x509 = [$ts=f$info$ts, $id=f$id, $certificate=cert, $handle=cert_ref];
}
event x509_extension(f: fa_file, ext: X509::Extension) &priority=5
{
if ( f$info?$x509 )
f$info$x509$extensions[|f$info$x509$extensions|] = ext;
}
event x509_ext_basic_constraints(f: fa_file, ext: X509::BasicConstraints) &priority=5
{
if ( f$info?$x509 )
f$info$x509$basic_constraints = ext;
}
event x509_ext_subject_alternative_name(f: fa_file, ext: X509::SubjectAlternativeName) &priority=5
{
if ( f$info?$x509 )
f$info$x509$san = ext;
}
event file_state_remove(f: fa_file) &priority=5
{
if ( ! f$info?$x509 )
return;
Log::write(LOG, f$info$x509);
}
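# A hedged usage sketch: a script loading this module could consume the
# stream's event, e.g. (output format illustrative):
#
#   event X509::log_x509(rec: X509::Info)
#       {
#       print fmt("logged certificate %s at %s", rec$id, rec$ts);
#       }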

View file

@ -1 +1,2 @@
@load ./main.bro
@load ./magic

View file

@ -0,0 +1,2 @@
@load-sigs ./general
@load-sigs ./libmagic

View file

@ -0,0 +1,11 @@
# General purpose file magic signatures.
signature file-plaintext {
file-magic /([[:print:][:space:]]{10})/
file-mime "text/plain", -20
}
signature file-tar {
file-magic /([[:print:]\x00]){100}(([[:digit:]\x00\x20]){8}){3}/
file-mime "application/x-tar", 150
}
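# A hedged example of a custom signature in the same style (the pattern
# and strength value are illustrative):
signature file-pdf-example {
    file-magic /%PDF-/
    file-mime "application/pdf", 80
}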

File diff suppressed because it is too large

View file

@ -41,15 +41,15 @@ export {
## If this file was transferred over a network
## connection this should show the host or hosts that
## the data sourced from.
tx_hosts: set[addr] &log;
tx_hosts: set[addr] &default=addr_set() &log;
## If this file was transferred over a network
## connection this should show the host or hosts that
## the data traveled to.
rx_hosts: set[addr] &log;
rx_hosts: set[addr] &default=addr_set() &log;
## Connection UIDs over which the file was transferred.
conn_uids: set[string] &log;
conn_uids: set[string] &default=string_set() &log;
## An identification of the source of the file data. E.g. it
## may be a network protocol over which it was transferred, or a
@ -63,12 +63,13 @@ export {
depth: count &default=0 &log;
## A set of analysis types done during the file analysis.
analyzers: set[string] &log;
analyzers: set[string] &default=string_set() &log;
## A mime type provided by libmagic against the *bof_buffer*
## field of :bro:see:`fa_file`, or in the cases where no
## buffering of the beginning of file occurs, an initial
## guess of the mime type based on the first data seen.
## A mime type provided by the strongest file magic signature
## match against the *bof_buffer* field of :bro:see:`fa_file`,
## or in the cases where no buffering of the beginning of file
## occurs, an initial guess of the mime type based on the first
## data seen.
mime_type: string &log &optional;
## A filename for the file if one is available from the source
@ -233,6 +234,42 @@ export {
## callback: Function to execute when the given file analyzer is being added.
global register_analyzer_add_callback: function(tag: Files::Tag, callback: function(f: fa_file, args: AnalyzerArgs));
## Registers a set of MIME types for an analyzer. If a future file with one of
## these types is seen, the analyzer will be automatically assigned to parsing it.
## The function *adds* to all MIME types already registered; it doesn't replace
## them.
##
## tag: The tag of the analyzer.
##
## mts: The set of MIME types, each in the form "foo/bar" (case-insensitive).
##
## Returns: True if the MIME types were successfully registered.
global register_for_mime_types: function(tag: Analyzer::Tag, mts: set[string]) : bool;
## Registers a MIME type for an analyzer. If a future file with this type is seen,
## the analyzer will be automatically assigned to parsing it. The function *adds*
## to all MIME types already registered; it doesn't replace them.
##
## tag: The tag of the analyzer.
##
## mt: The MIME type in the form "foo/bar" (case-insensitive).
##
## Returns: True if the MIME type was successfully registered.
global register_for_mime_type: function(tag: Analyzer::Tag, mt: string) : bool;
## Returns a set of all MIME types currently registered for a specific analyzer.
##
## tag: The tag of the analyzer.
##
## Returns: The set of MIME types.
global registered_mime_types: function(tag: Analyzer::Tag) : set[string];
## Returns a table of all MIME-type-to-analyzer mappings currently registered.
##
## Returns: A table mapping each analyzer to the set of MIME types registered for
## it.
global all_registered_mime_types: function() : table[Analyzer::Tag] of set[string];
## Event that can be handled to access the Info record as it is sent on
## to the logging framework.
global log_files: event(rec: Info);
@ -245,6 +282,9 @@ redef record fa_file += {
# Store the callbacks for protocol analyzers that have files.
global registered_protocols: table[Analyzer::Tag] of ProtoRegistration = table();
# Store the MIME type to analyzer mappings.
global mime_types: table[Analyzer::Tag] of set[string];
global analyzer_add_callbacks: table[Files::Tag] of function(f: fa_file, args: AnalyzerArgs) = table();
event bro_init() &priority=5
@ -369,6 +409,41 @@ function register_protocol(tag: Analyzer::Tag, reg: ProtoRegistration): bool
return result;
}
function register_for_mime_types(tag: Analyzer::Tag, mime_types: set[string]) : bool
{
local rc = T;
for ( mt in mime_types )
{
if ( ! register_for_mime_type(tag, mt) )
rc = F;
}
return rc;
}
function register_for_mime_type(tag: Analyzer::Tag, mt: string) : bool
{
if ( ! __register_for_mime_type(tag, mt) )
return F;
if ( tag !in mime_types )
mime_types[tag] = set();
add mime_types[tag][mt];
return T;
}
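# A hedged usage sketch (``my_tag`` stands in for whatever analyzer tag
# applies in a given deployment):
#
#   event bro_init()
#       {
#       Files::register_for_mime_type(my_tag, "application/x-dosexec");
#       }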
function registered_mime_types(tag: Analyzer::Tag) : set[string]
{
return tag in mime_types ? mime_types[tag] : set();
}
function all_registered_mime_types(): table[Analyzer::Tag] of set[string]
{
return mime_types;
}
function describe(f: fa_file): string
{
local tag = Analyzer::get_tag(f$source);

View file

@ -4,6 +4,17 @@
module Input;
export {
type Event: enum {
EVENT_NEW = 0,
EVENT_CHANGED = 1,
EVENT_REMOVED = 2,
};
type Mode: enum {
MANUAL = 0,
REREAD = 1,
STREAM = 2
};
## The default input reader used. Defaults to `READER_ASCII`.
const default_reader = READER_ASCII &redef;
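# A hedged usage sketch of the REREAD mode (the file name and record
# types are illustrative):
#
#   type Idx: record { ip: addr; };
#   type Val: record { reason: string; };
#   global blocklist: table[addr] of Val = table();
#
#   event bro_init()
#       {
#       Input::add_table([$source="blocklist.txt", $name="blocklist",
#                         $idx=Idx, $val=Val, $destination=blocklist,
#                         $mode=Input::REREAD]);
#       }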

View file

@ -1,7 +1,5 @@
@load ./main
@load ./postprocessors
@load ./writers/ascii
@load ./writers/dataseries
@load ./writers/sqlite
@load ./writers/elasticsearch
@load ./writers/none

View file

@ -5,9 +5,15 @@
module Log;
# Log::ID and Log::Writer are defined in types.bif due to circular dependencies.
export {
## Type that defines an ID unique to each log stream. Scripts creating new log
## streams need to redef this enum to add their own specific log ID. The log ID
## implicitly determines the default name of the generated log file.
type Log::ID: enum {
## Dummy place-holder.
UNKNOWN
};
## If true, local logging is by default enabled for all filters.
const enable_local_logging = T &redef;
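# A hedged sketch of how a script typically obtains its own stream ID
# (the module name is illustrative):
#
#   module MyModule;
#   export {
#       redef enum Log::ID += { LOG };   # becomes MyModule::LOG
#   }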

View file

@ -17,27 +17,51 @@ module LogAscii;
export {
## If true, output everything to stdout rather than
## into files. This is primarily for debugging purposes.
##
## This option is also available as a per-filter ``$config`` option.
const output_to_stdout = F &redef;
## If true, the default will be to write logs in a JSON format.
##
## This option is also available as a per-filter ``$config`` option.
const use_json = F &redef;
## Format of timestamps when writing out JSON. By default, the JSON
## formatter will use double values for timestamps which represent the
## number of seconds from the UNIX epoch.
const json_timestamps: JSON::TimestampFormat = JSON::TS_EPOCH &redef;
## If true, include lines with log meta information such as column names
## with types, the values of ASCII logging options that are in use, and
## the time when the file was opened and closed (the latter at the end).
##
## If writing in JSON format, this is implicitly disabled.
const include_meta = T &redef;
## Prefix for lines with meta information.
##
## This option is also available as a per-filter ``$config`` option.
const meta_prefix = "#" &redef;
## Separator between fields.
##
## This option is also available as a per-filter ``$config`` option.
const separator = Log::separator &redef;
## Separator between set elements.
##
## This option is also available as a per-filter ``$config`` option.
const set_separator = Log::set_separator &redef;
## String to use for empty fields. This should be different from
## *unset_field* to make the output unambiguous.
##
## This option is also available as a per-filter ``$config`` option.
const empty_field = Log::empty_field &redef;
## String to use for an unset &optional field.
##
## This option is also available as a per-filter ``$config`` option.
const unset_field = Log::unset_field &redef;
}

View file

@ -1,60 +0,0 @@
##! Interface for the DataSeries log writer.
module LogDataSeries;
export {
## Compression to use with the DS output file. Options are:
##
## 'none' -- No compression.
## 'lzf' -- LZF compression (very quick, but leads to larger output files).
## 'lzo' -- LZO compression (very fast decompression times).
## 'gz' -- GZIP compression (slower than LZF, but also produces smaller output).
## 'bz2' -- BZIP2 compression (slower than GZIP, but also produces smaller output).
const compression = "gz" &redef;
## The extent buffer size.
## Larger values here lead to better compression and more efficient writes,
## but also increase the lag between the time events are received and
## the time they are actually written to disk.
const extent_size = 65536 &redef;
## Should we dump the XML schema we use for this DS file to disk?
## If yes, the XML schema file shares the name of the log file, but
## with an .xml extension.
const dump_schema = F &redef;
## How many threads should DataSeries spawn to perform compression?
## Note that this dictates the number of threads per log stream. If
## you're using a lot of streams, you may want to keep this number
## relatively small.
##
## Default value is 1, which will spawn one thread / stream.
##
## Maximum is 128, minimum is 1.
const num_threads = 1 &redef;
## Should time be stored as an integer or a double?
## Storing time as a double leads to possible precision issues and
## can (significantly) increase the size of the resulting DS log.
## That said, timestamps stored in double form are consistent
## with the rest of Bro, including the standard ASCII log. Hence, we
## use them by default.
const use_integer_for_time = F &redef;
}
# Default function to postprocess a rotated DataSeries log file. It moves the
# rotated file to a new name that includes a timestamp with the opening time,
# and then runs the writer's default postprocessor command on it.
function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool
{
# Move file to name including both opening and closing time.
local dst = fmt("%s.%s.ds", info$path,
strftime(Log::default_rotation_date_format, info$open));
system(fmt("/bin/mv %s %s", info$fname, dst));
# Run default postprocessor.
return Log::run_rotation_postprocessor_cmd(info, dst);
}
redef Log::default_rotation_postprocessors += { [Log::WRITER_DATASERIES] = default_rotation_postprocessor_func };

View file

@ -1,48 +0,0 @@
##! Log writer for sending logs to an ElasticSearch server.
##!
##! Note: This module is in testing and is not yet considered stable!
##!
##! There is one known memory issue. If your ElasticSearch server is
##! running slowly and taking too long to return from bulk insert
##! requests, the message queue to the writer thread will continue
##! growing larger and larger, giving the appearance of a memory leak.
module LogElasticSearch;
export {
## Name of the ES cluster.
const cluster_name = "elasticsearch" &redef;
## ES server.
const server_host = "127.0.0.1" &redef;
## ES port.
const server_port = 9200 &redef;
## Name of the ES index.
const index_prefix = "bro" &redef;
## The ES type prefix comes before the name of the related log.
## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc.
const type_prefix = "" &redef;
## The time before an ElasticSearch transfer will timeout. Note that
## the fractional part of the timeout will be ignored. In particular,
## time specifications less than a second result in a timeout value of
## 0, which means "no timeout."
const transfer_timeout = 2secs;
## The batch size is the number of messages that will be queued up before
## they are sent to be bulk indexed.
const max_batch_size = 1000 &redef;
## The maximum amount of wall-clock time that is allowed to pass without
## finishing a bulk log send. This represents the maximum delay you
## would like to have with your logs before they are sent to ElasticSearch.
const max_batch_interval = 1min &redef;
## The maximum byte size for a buffered JSON string to send to the bulk
## insert API.
const max_byte_size = 1024 * 1024 &redef;
}

View file

@ -20,7 +20,8 @@ export {
## category along with the specific notice separating words with
## underscores and using leading capitals on each word except for
## abbreviations which are kept in all capitals. For example,
## SSH::Login is for heuristically guessed successful SSH logins.
## SSH::Password_Guessing is for hosts that have crossed a threshold of
## heuristically determined failed SSH logins.
type Type: enum {
## Notice reporting a count of how often a notice occurred.
Tally,
@ -206,6 +207,38 @@ export {
## The maximum amount of time a plugin can delay email from being sent.
const max_email_delay = 15secs &redef;
## Contains a portion of :bro:see:`fa_file` that's also contained in
## :bro:see:`Notice::Info`.
type FileInfo: record {
fuid: string; ##< File UID.
desc: string; ##< File description from e.g.
##< :bro:see:`Files::describe`.
mime: string &optional; ##< Strongest mime type match for file.
cid: conn_id &optional; ##< Connection tuple over which file is sent.
cuid: string &optional; ##< Connection UID over which file is sent.
};
## Creates a record containing a subset of a full :bro:see:`fa_file` record.
##
## f: record containing metadata about a file.
##
## Returns: record containing a subset of fields copied from *f*.
global create_file_info: function(f: fa_file): Notice::FileInfo;
## Populates file-related fields in a notice info record.
##
## f: record containing metadata about a file.
##
## n: a notice record that needs file-related fields populated.
global populate_file_info: function(f: fa_file, n: Notice::Info);
## Populates file-related fields in a notice info record.
##
## fi: record containing metadata about a file.
##
## n: a notice record that needs file-related fields populated.
global populate_file_info2: function(fi: Notice::FileInfo, n: Notice::Info);
## A log postprocessing function that implements emailing the contents
## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`.
## The rotated log is removed upon being sent.
@ -493,6 +526,42 @@ function execute_with_notice(cmd: string, n: Notice::Info)
#system_env(cmd, tags);
}
function create_file_info(f: fa_file): Notice::FileInfo
{
local fi: Notice::FileInfo = Notice::FileInfo($fuid = f$id,
$desc = Files::describe(f));
if ( f?$mime_type )
fi$mime = f$mime_type;
if ( f?$conns && |f$conns| == 1 )
for ( id in f$conns )
{
fi$cid = id;
fi$cuid = f$conns[id]$uid;
}
return fi;
}
function populate_file_info(f: fa_file, n: Notice::Info)
{
populate_file_info2(create_file_info(f), n);
}
function populate_file_info2(fi: Notice::FileInfo, n: Notice::Info)
{
if ( ! n?$fuid )
n$fuid = fi$fuid;
if ( ! n?$file_mime_type && fi?$mime )
n$file_mime_type = fi$mime;
n$file_desc = fi$desc;
n$id = fi$cid;
n$uid = fi$cuid;
}
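A minimal sketch of how a script might use these helpers; the choice of note type and event is hypothetical:
event file_new(f: fa_file)
    {
    local fi = Notice::create_file_info(f);
    # Passing $f lets apply_policy() fill in the file-related notice fields.
    NOTICE([$note=Notice::Tally,
            $msg=fmt("saw file %s (%s)", fi$fuid, fi$desc),
            $f=f]);
    }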
# This is run synchronously as a function before all of the other
# notice related functions and events. It also modifies the
# :bro:type:`Notice::Info` record in place.
@ -503,21 +572,7 @@ function apply_policy(n: Notice::Info)
n$ts = network_time();
if ( n?$f )
{
if ( ! n?$fuid )
n$fuid = n$f$id;
if ( ! n?$file_mime_type && n$f?$mime_type )
n$file_mime_type = n$f$mime_type;
n$file_desc = Files::describe(n$f);
if ( n$f?$conns && |n$f$conns| == 1 )
{
for ( id in n$f$conns )
n$conn = n$f$conns[id];
}
}
populate_file_info(n$f, n);
if ( n?$conn )
{

View file

@ -185,6 +185,7 @@ export {
["RPC_underflow"] = ACTION_LOG,
["RST_storm"] = ACTION_LOG,
["RST_with_data"] = ACTION_LOG,
["SSL_many_server_names"] = ACTION_LOG,
["simultaneous_open"] = ACTION_LOG_PER_CONN,
["spontaneous_FIN"] = ACTION_IGNORE,
["spontaneous_RST"] = ACTION_IGNORE,

View file

@ -70,6 +70,9 @@ export {
## The network time at which a signature matching type of event
## to be logged has occurred.
ts: time &log;
## A unique identifier of the connection which triggered the
## signature match event.
uid: string &log &optional;
## The host which triggered the signature match event.
src_addr: addr &log &optional;
## The host port on which the signature-matching activity
@ -192,6 +195,7 @@ event signature_match(state: signature_state, msg: string, data: string)
{
local info: Info = [$ts=network_time(),
$note=Sensitive_Signature,
$uid=state$conn$uid,
$src_addr=src_addr,
$src_port=src_port,
$dst_addr=dst_addr,

View file

@ -287,6 +287,13 @@ function parse_mozilla(unparsed_version: string): Description
if ( 2 in parts )
v = parse(parts[2])$version;
}
else if ( / Java\/[0-9]\./ in unparsed_version )
{
software_name = "Java";
parts = split_all(unparsed_version, /Java\/[0-9\._]*/);
if ( 2 in parts )
v = parse(parts[2])$version;
}
return [$version=v, $unparsed_version=unparsed_version, $name=software_name];
}

View file

@ -28,10 +28,6 @@ export {
## values for a sumstat.
global cluster_ss_request: event(uid: string, ss_name: string, cleanup: bool);
# Event sent by nodes that are collecting sumstats after receiving a
# request for the sumstat from the manager.
#global cluster_ss_response: event(uid: string, ss_name: string, data: ResultTable, done: bool, cleanup: bool);
## This event is sent by the manager in a cluster to initiate the
## collection of a single key value from a sumstat. It's typically used
## to get intermediate updates before the break interval triggers to
@ -62,7 +58,7 @@ export {
# Add events to the cluster framework to make this work.
redef Cluster::manager2worker_events += /SumStats::cluster_(ss_request|get_result|threshold_crossed)/;
redef Cluster::manager2worker_events += /SumStats::(get_a_key)/;
redef Cluster::worker2manager_events += /SumStats::cluster_(ss_response|send_result|key_intermediate_response)/;
redef Cluster::worker2manager_events += /SumStats::cluster_(send_result|key_intermediate_response)/;
redef Cluster::worker2manager_events += /SumStats::(send_a_key|send_no_key)/;
@if ( Cluster::local_node_type() != Cluster::MANAGER )
@ -74,7 +70,7 @@ global recent_global_view_keys: table[string, Key] of count &create_expire=1min
# Result tables indexed on a uid that are currently being sent to the
# manager.
global sending_results: table[string] of ResultTable = table() &create_expire=1min;
global sending_results: table[string] of ResultTable = table() &read_expire=1min;
# This is done on all non-manager node types in the event that a sumstat is
# being collected somewhere other than a worker.
@ -195,6 +191,19 @@ event SumStats::cluster_threshold_crossed(ss_name: string, key: SumStats::Key, t
threshold_tracker[ss_name][key] = thold_index;
}
# request_key is a no-op on the workers.
# It should only be called by the manager. Because we usually run the same scripts on the
# workers and the manager, it might also be called by the workers, so we just ignore it here.
#
# There is a small chance that people will try running it on events that are raised only on the workers.
# This does not work at the moment, and we cannot raise an error message because we cannot distinguish
# that case from the "script is running everywhere" case. But people should notice that they do not get
# results. Not entirely pretty, sorry :(
function request_key(ss_name: string, key: Key): Result
{
return Result();
}
@endif
@ -203,7 +212,7 @@ event SumStats::cluster_threshold_crossed(ss_name: string, key: SumStats::Key, t
# This variable is maintained by manager nodes as they collect and aggregate
# results.
# Index on a uid.
global stats_keys: table[string] of set[Key] &create_expire=1min
global stats_keys: table[string] of set[Key] &read_expire=1min
&expire_func=function(s: table[string] of set[Key], idx: string): interval
{
Reporter::warning(fmt("SumStat key request for the %s SumStat uid took longer than 1 minute and was automatically cancelled.", idx));
@ -215,17 +224,16 @@ global stats_keys: table[string] of set[Key] &create_expire=1min
# matches the number of peer nodes that results should be coming from, the
# result is written out and deleted from here.
# Indexed on a uid.
# TODO: add an &expire_func in case not all results are received.
global done_with: table[string] of count &create_expire=1min &default=0;
global done_with: table[string] of count &read_expire=1min &default=0;
# This variable is maintained by managers to track intermediate responses as
# they are getting a global view for a certain key.
# Indexed on a uid.
global key_requests: table[string] of Result &create_expire=1min;
global key_requests: table[string] of Result &read_expire=1min;
# Store uids for dynamic requests here to avoid cleanup on the uid.
# (This needs to be done differently!)
global dynamic_requests: set[string] &create_expire=1min;
global dynamic_requests: set[string] &read_expire=1min;
# This variable is maintained by managers to prevent overwhelming communication due
# to too many intermediate updates. Each sumstat is tracked separately so that

View file

@ -2,23 +2,59 @@
module SumStats;
event SumStats::process_epoch_result(ss: SumStat, now: time, data: ResultTable)
{
# TODO: is this the right processing group size?
local i = 50;
for ( key in data )
{
ss$epoch_result(now, key, data[key]);
delete data[key];
if ( |data| == 0 )
{
if ( ss?$epoch_finished )
ss$epoch_finished(now);
# Now that no data is left we can finish.
return;
}
i = i-1;
if ( i == 0 )
{
# TODO: is this the right interval?
schedule 0.01 secs { process_epoch_result(ss, now, data) };
break;
}
}
}
event SumStats::finish_epoch(ss: SumStat)
{
if ( ss$name in result_store )
{
local now = network_time();
if ( ss?$epoch_result )
{
local data = result_store[ss$name];
# TODO: don't block here.
local now = network_time();
if ( bro_is_terminating() )
{
for ( key in data )
ss$epoch_result(now, key, data[key]);
}
if ( ss?$epoch_finished )
ss$epoch_finished(now);
}
else
{
event SumStats::process_epoch_result(ss, now, data);
}
}
# We can reset here because we know that the reference
# to the data will be maintained by the process_epoch_result
# event.
reset(ss);
}
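The same chunk-then-reschedule pattern generalizes to any large table that shouldn't be drained in one shot; a minimal sketch with hypothetical names (kick it off from wherever the table gets filled):
global work: table[string] of count;
event drain_work()
    {
    local i = 50; # Process at most 50 entries per invocation.
    for ( k in work )
        {
        # ... handle work[k] here ...
        delete work[k];
        if ( |work| == 0 )
            return;
        i = i - 1;
        if ( i == 0 )
            {
            # Yield and continue shortly so we don't block the event loop.
            schedule 0.01 secs { drain_work() };
            break;
            }
        }
    }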

View file

@ -39,6 +39,14 @@ type count_set: set[count];
## directly and then remove this alias.
type index_vec: vector of count;
## A vector of any, used by some builtin functions to store a list of varying
## types.
##
## .. todo:: We need this type definition only for declaring builtin functions
## via ``bifcl``. We should extend ``bifcl`` to understand composite types
## directly and then remove this alias.
type any_vec: vector of any;
## A vector of strings.
##
## .. todo:: We need this type definition only for declaring builtin functions
@ -46,6 +54,13 @@ type index_vec: vector of count;
## directly and then remove this alias.
type string_vec: vector of string;
## A vector of x509 opaques.
##
## .. todo:: We need this type definition only for declaring builtin functions
## via ``bifcl``. We should extend ``bifcl`` to understand composite types
## directly and then remove this alias.
type x509_opaque_vector: vector of opaque of x509;
## A vector of addresses.
##
## .. todo:: We need this type definition only for declaring builtin functions
@ -67,6 +82,23 @@ type table_string_of_string: table[string] of string;
## directly and then remove this alias.
type files_tag_set: set[Files::Tag];
## A structure indicating a MIME type and strength of a match against
## file magic signatures.
##
## :bro:see:`file_magic`
type mime_match: record {
strength: int; ##< How strongly the signature matched. Used for
##< prioritization when multiple file magic signatures
##< match.
mime: string; ##< The MIME type of the file magic signature match.
};
## A vector of file magic signature matches, ordered by strength of
## the signature, strongest first.
##
## :bro:see:`file_magic`
type mime_matches: vector of mime_match;
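A hedged sketch of inspecting the matches returned by :bro:see:`file_magic`, assuming it takes a data buffer and returns ``mime_matches``:
event bro_init()
    {
    # Hypothetical buffer: the first bytes of a PNG file.
    local matches = file_magic("\x89PNG\x0d\x0a\x1a\x0a");
    for ( i in matches )
        print fmt("%s (strength %d)", matches[i]$mime, matches[i]$strength);
    }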
## A connection's transport-layer protocol. Note that Bro uses the term
## "connection" broadly, using flow semantics for ICMP and UDP.
type transport_proto: enum {
@ -378,10 +410,15 @@ type fa_file: record {
## This is also the buffer that's used for file/mime type detection.
bof_buffer: string &optional;
## A mime type provided by libmagic against the *bof_buffer*, or
## in the cases where no buffering of the beginning of file occurs,
## an initial guess of the mime type based on the first data seen.
## The mime type of the strongest file magic signature matches against
## the data chunk in *bof_buffer*, or in the cases where no buffering
## of the beginning of file occurs, an initial guess of the mime type
## based on the first data seen.
mime_type: string &optional;
## All mime types that matched file magic signatures against the data
## chunk in *bof_buffer*, in order of their strength value.
mime_types: mime_matches &optional;
} &redef;
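For instance, a script could inspect the ranked matches on a file record; a minimal, hypothetical handler:
event file_new(f: fa_file)
    {
    if ( f?$mime_types && |f$mime_types| > 0 )
        # Matches are ordered strongest first.
        print fmt("file %s looks like %s", f$id, f$mime_types[0]$mime);
    }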
## Fields of a SYN packet.
@ -1035,13 +1072,6 @@ const rpc_timeout = 24 sec &redef;
## means "forever", which resists evasion, but can lead to state accrual.
const frag_timeout = 0.0 sec &redef;
## Time window for reordering packets. This is used for dealing with timestamp
## discrepancy between multiple packet sources.
##
## .. note:: Setting this can have a major performance impact as now packets
## need to be potentially copied and buffered.
const packet_sort_window = 0 usecs &redef;
## If positive, indicates the encapsulation header size that should
## be skipped. This applies to all packets.
const encap_hdr_size = 0 &redef;
@ -2427,18 +2457,6 @@ global dns_skip_all_addl = T &redef;
## traffic and do not process it. Set to 0 to turn off this functionality.
global dns_max_queries = 5;
## An X509 certificate.
##
## .. bro:see:: x509_certificate
type X509: record {
version: count; ##< Version number.
serial: string; ##< Serial number.
subject: string; ##< Subject.
issuer: string; ##< Issuer.
not_valid_before: time; ##< Timestamp before when certificate is not valid.
not_valid_after: time; ##< Timestamp after when certificate is not valid.
};
## HTTP session statistics.
##
## .. bro:see:: http_stats
@ -2720,6 +2738,7 @@ type ModbusRegisters: vector of count;
type ModbusHeaders: record {
tid: count;
pid: count;
len: count;
uid: count;
function_code: count;
};
@ -2760,6 +2779,55 @@ export {
};
}
module X509;
export {
type Certificate: record {
version: count; ##< Version number.
serial: string; ##< Serial number.
subject: string; ##< Subject.
issuer: string; ##< Issuer.
not_valid_before: time; ##< Timestamp before when certificate is not valid.
not_valid_after: time; ##< Timestamp after when certificate is not valid.
key_alg: string; ##< Name of the key algorithm.
sig_alg: string; ##< Name of the signature algorithm.
key_type: string &optional; ##< Key type, if the key is parseable by OpenSSL (either rsa, dsa or ec).
key_length: count &optional; ##< Key length in bits.
exponent: string &optional; ##< Exponent, if an RSA certificate.
curve: string &optional; ##< Curve, if an EC certificate.
} &log;
type Extension: record {
name: string; ##< Long name of the extension; the OID if the name is not known.
short_name: string &optional; ##< Short name of the extension, if known.
oid: string; ##< OID of the extension.
critical: bool; ##< True if the extension is critical.
value: string; ##< Extension content parsed to a string for known extensions, raw data otherwise.
};
type BasicConstraints: record {
ca: bool; ##< CA flag set?
path_len: count &optional; ##< Maximum path length.
} &log;
type SubjectAlternativeName: record {
dns: string_vec &optional &log; ##< List of DNS entries in the SAN.
uri: string_vec &optional &log; ##< List of URI entries in the SAN.
email: string_vec &optional &log; ##< List of email entries in the SAN.
ip: addr_vec &optional &log; ##< List of IP entries in the SAN.
other_fields: bool; ##< True if the certificate contained other name fields that were not recognized or parsed.
};
## Result of an X509 certificate chain verification.
type Result: record {
## OpenSSL result code.
result: int;
## Result as a string.
result_string: string;
## References to the final certificate chain, if verification was successful. The end-host certificate is first.
chain_certs: vector of opaque of x509 &optional;
};
}
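A hedged sketch of consuming these records, assuming the ``x509_certificate`` event delivers an ``X509::Certificate`` alongside the file and an opaque certificate handle:
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate)
    {
    # key_type/key_length are only set when OpenSSL could parse the key.
    if ( cert?$key_type && cert$key_type == "rsa" && cert?$key_length )
        print fmt("%s uses a %d-bit RSA key", cert$subject, cert$key_length);
    }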
module SOCKS;
export {
## This record is for a SOCKS client or server to provide either a
@ -2769,6 +2837,148 @@ export {
name: string &optional;
} &log;
}
module RADIUS;
export {
type RADIUS::AttributeList: vector of string;
type RADIUS::Attributes: table[count] of RADIUS::AttributeList;
type RADIUS::Message: record {
## The type of message (Access-Request, Access-Accept, etc.).
code : count;
## The transaction ID.
trans_id : count;
## The "authenticator" string.
authenticator : string;
## Any attributes.
attributes : RADIUS::Attributes &optional;
};
}
module GLOBAL;
@load base/bif/plugins/Bro_SNMP.types.bif
module SNMP;
export {
## The top-level message data structure of an SNMPv1 datagram, not
## including the PDU data. See :rfc:`1157`.
type SNMP::HeaderV1: record {
community: string;
};
## The top-level message data structure of an SNMPv2 datagram, not
## including the PDU data. See :rfc:`1901`.
type SNMP::HeaderV2: record {
community: string;
};
## The ``ScopedPduData`` data structure of an SNMPv3 datagram, not
## including the PDU data (i.e. just the "context" fields).
## See :rfc:`3412`.
type SNMP::ScopedPDU_Context: record {
engine_id: string;
name: string;
};
## The top-level message data structure of an SNMPv3 datagram, not
## including the PDU data. See :rfc:`3412`.
type SNMP::HeaderV3: record {
id: count;
max_size: count;
flags: count;
auth_flag: bool;
priv_flag: bool;
reportable_flag: bool;
security_model: count;
security_params: string;
pdu_context: SNMP::ScopedPDU_Context &optional;
};
## A generic SNMP header data structure that may include data from
## any version of SNMP. The value of the ``version`` field
## determines which header field is initialized.
type SNMP::Header: record {
version: count;
v1: SNMP::HeaderV1 &optional; ##< Set when ``version`` is 0.
v2: SNMP::HeaderV2 &optional; ##< Set when ``version`` is 1.
v3: SNMP::HeaderV3 &optional; ##< Set when ``version`` is 3.
};
## A generic SNMP object value that may include any of the
## valid ``ObjectSyntax`` values from :rfc:`1155` or :rfc:`3416`.
## The value is decoded whenever possible and assigned to
## the appropriate field, which can be determined from the value
## of the ``tag`` field. For tags that can't be mapped to an
## appropriate type, the ``octets`` field holds the BER encoded
## ASN.1 content if there is any (though ``octets`` may also
## be used for other tags such as OCTET STRING or Opaque). Null
## values will only have their corresponding tag value set.
type SNMP::ObjectValue: record {
tag: count;
oid: string &optional;
signed: int &optional;
unsigned: count &optional;
address: addr &optional;
octets: string &optional;
};
# These aren't an enum because it's easier to type fields as count.
# That way we don't have to deal with type conversion, and it doesn't
# mislead readers into thinking these are the only valid tag values
# (it's just the set of known tags).
const SNMP::OBJ_INTEGER_TAG : count = 0x02; ##< Signed 64-bit integer.
const SNMP::OBJ_OCTETSTRING_TAG : count = 0x04; ##< An octet string.
const SNMP::OBJ_UNSPECIFIED_TAG : count = 0x05; ##< A NULL value.
const SNMP::OBJ_OID_TAG : count = 0x06; ##< An Object Identifier.
const SNMP::OBJ_IPADDRESS_TAG : count = 0x40; ##< An IP address.
const SNMP::OBJ_COUNTER32_TAG : count = 0x41; ##< Unsigned 32-bit integer.
const SNMP::OBJ_UNSIGNED32_TAG : count = 0x42; ##< Unsigned 32-bit integer.
const SNMP::OBJ_TIMETICKS_TAG : count = 0x43; ##< Unsigned 32-bit integer.
const SNMP::OBJ_OPAQUE_TAG : count = 0x44; ##< An octet string.
const SNMP::OBJ_COUNTER64_TAG : count = 0x46; ##< Unsigned 64-bit integer.
const SNMP::OBJ_NOSUCHOBJECT_TAG : count = 0x80; ##< A NULL value.
const SNMP::OBJ_NOSUCHINSTANCE_TAG: count = 0x81; ##< A NULL value.
const SNMP::OBJ_ENDOFMIBVIEW_TAG : count = 0x82; ##< A NULL value.
## The ``VarBind`` data structure from either :rfc:`1157` or
## :rfc:`3416`, which maps an Object Identifier to a value.
type SNMP::Binding: record {
oid: string;
value: SNMP::ObjectValue;
};
## A ``VarBindList`` data structure from either :rfc:`1157` or :rfc:`3416`.
## A sequence of :bro:see:`SNMP::Binding` records, each mapping an OID to a value.
type SNMP::Bindings: vector of SNMP::Binding;
## A ``PDU`` data structure from either :rfc:`1157` or :rfc:`3416`.
type SNMP::PDU: record {
request_id: int;
error_status: int;
error_index: int;
bindings: SNMP::Bindings;
};
## A ``Trap-PDU`` data structure from :rfc:`1157`.
type SNMP::TrapPDU: record {
enterprise: string;
agent: addr;
generic_trap: int;
specific_trap: int;
time_stamp: count;
bindings: SNMP::Bindings;
};
## A ``BulkPDU`` data structure from :rfc:`3416`.
type SNMP::BulkPDU: record {
request_id: int;
non_repeaters: count;
max_repititions: count;
bindings: SNMP::Bindings;
};
}
module GLOBAL;
@load base/bif/event.bif
@ -2856,6 +3066,12 @@ global load_sample_freq = 20 &redef;
## .. bro:see:: gap_report
const gap_report_freq = 1.0 sec &redef;
## Whether to attempt to automatically detect SYN/FIN/RST-filtered trace
## and not report missing segments for such connections.
## If this is enabled, then missing data at the end of connections may not
## be reported via :bro:see:`content_gap`.
const detect_filtered_trace = F &redef;
## Whether we want :bro:see:`content_gap` and :bro:see:`gap_report` for partial
## connections. A connection is partial if it is missing a full handshake. Note
## that gap reports for partial connections might not be reliable.
@ -3046,6 +3262,24 @@ const record_all_packets = F &redef;
## .. bro:see:: conn_stats
const ignore_keep_alive_rexmit = F &redef;
module JSON;
export {
type TimestampFormat: enum {
## Timestamps will be formatted as UNIX epoch doubles. This is
## the format in which Bro typically writes out timestamps.
TS_EPOCH,
## Timestamps will be formatted as unsigned integers that
## represent the number of milliseconds since the UNIX
## epoch.
TS_MILLIS,
## Timestamps will be formatted in the ISO8601 DateTime format.
## Subseconds are also included, which isn't actually part of the
## standard, but most consumers that parse ISO8601 seem to be able
## to cope with that.
TS_ISO8601,
};
}
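For example, as a hypothetical site tweak, these formats combine with the ASCII writer options shown earlier:
redef LogAscii::use_json = T;
redef LogAscii::json_timestamps = JSON::TS_ISO8601;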
module Tunnel;
export {
## The maximum depth of a tunnel to decapsulate until giving up.
@ -3130,9 +3364,6 @@ const global_hash_seed: string = "" &redef;
## The maximum is currently 128 bits.
const bits_per_uid: count = 96 &redef;
# Load BiFs defined by plugins.
@load base/bif/plugins
# Load these frameworks here because they use fairly deep integration with
# BiFs and script-land defined types.
@load base/frameworks/logging
@ -3141,3 +3372,7 @@ const bits_per_uid: count = 96 &redef;
@load base/frameworks/files
@load base/bif
# Load BiFs defined by plugins.
@load base/bif/plugins

View file

@ -47,6 +47,8 @@
@load base/protocols/irc
@load base/protocols/modbus
@load base/protocols/pop3
@load base/protocols/radius
@load base/protocols/snmp
@load base/protocols/smtp
@load base/protocols/socks
@load base/protocols/ssh
@ -57,6 +59,7 @@
@load base/files/hash
@load base/files/extract
@load base/files/unified2
@load base/files/x509
@load base/misc/find-checksum-offloading
@load base/misc/find-filtered-trace

View file

@ -0,0 +1,49 @@
##! Discovers trace files that contain TCP traffic consisting only of
##! control packets (e.g. it's been filtered to contain only SYN/FIN/RST
##! packets and no content). On finding such a trace, a warning is
##! emitted suggesting that the user may want to toggle the
##! :bro:see:`detect_filtered_trace` option if they do not want Bro
##! to report missing TCP segments.
module FilteredTraceDetection;
export {
## Flag to enable filtered trace file detection and warning message.
global enable: bool = T &redef;
}
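A user analyzing intentionally pre-filtered traces could disable the check entirely (hypothetical local tuning):
redef FilteredTraceDetection::enable = F;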
global saw_tcp_conn_with_data: bool = F;
global saw_a_tcp_conn: bool = F;
event connection_state_remove(c: connection)
{
if ( ! reading_traces() )
return;
if ( ! enable )
return;
if ( saw_tcp_conn_with_data )
return;
if ( ! is_tcp_port(c$id$orig_p) )
return;
saw_a_tcp_conn = T;
if ( /[Dd]/ in c$history )
saw_tcp_conn_with_data = T;
}
event bro_done()
{
if ( ! enable )
return;
if ( ! saw_a_tcp_conn )
return;
if ( ! saw_tcp_conn_with_data )
Reporter::warning("The analyzed trace file was determined to contain only TCP control packets, which may indicate it's been pre-filtered. By default, Bro reports the missing segments for this type of trace, but the 'detect_filtered_trace' option may be toggled if that's not desired.");
}

View file

@ -47,13 +47,13 @@ redef record connection += {
const ports = { 67/udp, 68/udp };
redef likely_server_ports += { 67/udp };
event bro_init()
event bro_init() &priority=5
{
Log::create_stream(DHCP::LOG, [$columns=Info, $ev=log_dhcp]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DHCP, ports);
}
event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string)
event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string) &priority=5
{
local info: Info;
info$ts = network_time();
@ -71,6 +71,9 @@ event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_lis
info$assigned_ip = c$id$orig_h;
c$dhcp = info;
}
event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr, host_name: string) &priority=-5
{
Log::write(DHCP::LOG, c$dhcp);
}
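Since the record is created at ``&priority=5`` and written at ``&priority=-5``, a site policy can modify ``c$dhcp`` in between; a hypothetical sketch that adds the client host name to the log:
redef record DHCP::Info += {
    ## Host name carried in the ACK (hypothetical extension field).
    client_host_name: string &log &optional;
};
event dhcp_ack(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list,
               lease: interval, serv_addr: addr, host_name: string) &priority=0
    {
    # Runs after record creation (+5) and before the log write (-5).
    c$dhcp$client_host_name = host_name;
    }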

View file

@ -63,15 +63,17 @@ export {
## The DNS query was rejected by the server.
rejected: bool &log &default=F;
## This value indicates if this request/response pair is ready
## to be logged.
ready: bool &default=F;
## The total number of resource records in a reply message's
## answer section.
total_answers: count &optional;
## The total number of resource records in a reply message's
## answer, authority, and additional sections.
total_replies: count &optional;
## Whether the full DNS query has been seen.
saw_query: bool &default=F;
## Whether the full DNS reply has been seen.
saw_reply: bool &default=F;
};
## An event that can be handled to access the :bro:type:`DNS::Info`
@ -90,7 +92,7 @@ export {
## ans: The general information of a RR response.
##
## reply: The specific response information according to RR type/class.
global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
global do_reply: hook(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
## A hook that is called whenever a session is being set.
## This can be used if additional initialization logic needs to happen
@ -103,17 +105,37 @@ export {
## is_query: Indicator for if this is being called for a query or a response.
global set_session: hook(c: connection, msg: dns_msg, is_query: bool);
## Yields a queue of :bro:see:`DNS::Info` objects for a given
## DNS message query/transaction ID.
type PendingMessages: table[count] of Queue::Queue;
## The amount of time that DNS queries or replies for a given
## query/transaction ID are allowed to be queued while waiting for
## a matching reply or query.
const pending_msg_expiry_interval = 2min &redef;
## Give up trying to match pending DNS queries or replies for a given
## query/transaction ID once this number of unmatched queries or replies
## is reached (this shouldn't happen unless either the DNS server/resolver
## is broken, Bro is not seeing all the DNS traffic, or an AXFR query
## response is ongoing).
const max_pending_msgs = 50 &redef;
## Give up trying to match pending DNS queries or replies across all
## query/transaction IDs once there is at least one unmatched query or
## reply across this number of different query IDs.
const max_pending_query_ids = 50 &redef;
## A record type which tracks the status of DNS queries for a given
## :bro:type:`connection`.
type State: record {
## Indexed by query id, returns Info record corresponding to
## query/response which haven't completed yet.
pending: table[count] of Queue::Queue;
## queries that haven't been matched with a response yet.
pending_queries: PendingMessages;
## This is the list of DNS responses that have completed based
## on the number of responses declared and the number received.
## The contents of the set are transaction IDs.
finished_answers: set[count];
## Indexed by query id, returns Info record corresponding to
## replies that haven't been matched with a query yet.
pending_replies: PendingMessages;
};
}
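Both thresholds are ``&redef``-able, so a deployment that legitimately sees deep query pipelining could raise them (hypothetical values):
redef DNS::max_pending_msgs = 200;
redef DNS::max_pending_query_ids = 100;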
@ -143,6 +165,66 @@ function new_session(c: connection, trans_id: count): Info
return info;
}
function log_unmatched_msgs_queue(q: Queue::Queue)
{
local infos: vector of Info;
Queue::get_vector(q, infos);
for ( i in infos )
{
event flow_weird("dns_unmatched_msg",
infos[i]$id$orig_h, infos[i]$id$resp_h);
Log::write(DNS::LOG, infos[i]);
}
}
function log_unmatched_msgs(msgs: PendingMessages)
{
for ( trans_id in msgs )
log_unmatched_msgs_queue(msgs[trans_id]);
clear_table(msgs);
}
function enqueue_new_msg(msgs: PendingMessages, id: count, msg: Info)
{
if ( id !in msgs )
{
if ( |msgs| > max_pending_query_ids )
{
event flow_weird("dns_unmatched_query_id_quantity",
msg$id$orig_h, msg$id$resp_h);
# Throw away all unmatched on assumption they'll never be matched.
log_unmatched_msgs(msgs);
}
msgs[id] = Queue::init();
}
else
{
if ( Queue::len(msgs[id]) > max_pending_msgs )
{
event flow_weird("dns_unmatched_msg_quantity",
msg$id$orig_h, msg$id$resp_h);
log_unmatched_msgs_queue(msgs[id]);
# Throw away all unmatched on assumption they'll never be matched.
msgs[id] = Queue::init();
}
}
Queue::put(msgs[id], msg);
}
function pop_msg(msgs: PendingMessages, id: count): Info
{
local rval: Info = Queue::get(msgs[id]);
if ( Queue::len(msgs[id]) == 0 )
delete msgs[id];
return rval;
}
hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
{
if ( ! c?$dns_state )
@ -151,29 +233,39 @@ hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
c$dns_state = state;
}
if ( msg$id !in c$dns_state$pending )
c$dns_state$pending[msg$id] = Queue::init();
local info: Info;
# If this is either a query or this is the reply but
# no Info records are in the queue (we missed the query?)
# we need to create an Info record and put it in the queue.
if ( is_query ||
Queue::len(c$dns_state$pending[msg$id]) == 0 )
{
info = new_session(c, msg$id);
Queue::put(c$dns_state$pending[msg$id], info);
}
if ( is_query )
# If this is a query, assign the newly created info variable
# so that the world looks correct to anything else handling
# this query.
c$dns = info;
{
if ( msg$id in c$dns_state$pending_replies &&
Queue::len(c$dns_state$pending_replies[msg$id]) > 0 )
{
# Match this DNS query w/ what's at head of pending reply queue.
c$dns = pop_msg(c$dns_state$pending_replies, msg$id);
}
else
# Peek at the next item in the queue for this trans_id and
# assign it to c$dns since this is a response.
c$dns = Queue::peek(c$dns_state$pending[msg$id]);
{
# Create a new DNS session and put it in the query queue so
# we can wait for a matching reply.
c$dns = new_session(c, msg$id);
enqueue_new_msg(c$dns_state$pending_queries, msg$id, c$dns);
}
}
else
{
if ( msg$id in c$dns_state$pending_queries &&
Queue::len(c$dns_state$pending_queries[msg$id]) > 0 )
{
# Match this DNS reply w/ what's at head of pending query queue.
c$dns = pop_msg(c$dns_state$pending_queries, msg$id);
}
else
{
# Create a new DNS session and put it in the reply queue so
# we can wait for a matching query.
c$dns = new_session(c, msg$id);
event conn_weird("dns_unmatched_reply", c, "");
enqueue_new_msg(c$dns_state$pending_replies, msg$id, c$dns);
}
}
if ( ! is_query )
{
@ -183,36 +275,36 @@ hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
if ( ! c$dns?$total_answers )
c$dns$total_answers = msg$num_answers;
if ( c$dns?$total_replies &&
c$dns$total_replies != msg$num_answers + msg$num_addl + msg$num_auth )
{
event conn_weird("dns_changed_number_of_responses", c,
fmt("The declared number of responses changed from %d to %d",
c$dns$total_replies,
msg$num_answers + msg$num_addl + msg$num_auth));
}
else
{
# Store the total number of responses expected from the first reply.
if ( ! c$dns?$total_replies )
c$dns$total_replies = msg$num_answers + msg$num_addl + msg$num_auth;
}
if ( msg$rcode != 0 && msg$num_queries == 0 )
c$dns$rejected = T;
}
}
event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) &priority=5
{
hook set_session(c, msg, is_orig);
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
hook set_session(c, msg, ! msg$QR);
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
{
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
if ( ! msg$QR )
# This is weird: the inquirer must also be providing answers in
# the request, which is not what we want to track.
return;
if ( ans$answer_type == DNS_ANS )
{
if ( ! c?$dns )
{
event conn_weird("dns_unmatched_reply", c, "");
hook set_session(c, msg, F);
}
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;
@ -226,29 +318,35 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
c$dns$TTLs = vector();
c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
}
if ( c$dns?$answers && c$dns?$total_answers &&
|c$dns$answers| == c$dns$total_answers )
{
# Indicate this request/reply pair is ready to be logged.
c$dns$ready = T;
}
}
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=-5
event dns_end(c: connection, msg: dns_msg) &priority=5
{
if ( c$dns$ready )
if ( ! c?$dns )
return;
if ( msg$QR )
c$dns$saw_reply = T;
else
c$dns$saw_query = T;
}
event dns_end(c: connection, msg: dns_msg) &priority=-5
{
if ( c?$dns && c$dns$saw_reply && c$dns$saw_query )
{
Log::write(DNS::LOG, c$dns);
# This record is logged and no longer pending.
Queue::get(c$dns_state$pending[c$dns$trans_id]);
delete c$dns;
}
}
event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
{
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
c$dns$RD = msg$RD;
c$dns$TC = msg$TC;
c$dns$qclass = qclass;
@ -261,64 +359,88 @@ event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qcla
# Note: I'm ignoring the name type for now. Not sure if this should be
# worked into the query/response in some fashion.
if ( c$id$resp_p == 137/udp )
{
query = decode_netbios_name(query);
if ( c$dns$qtype_name == "SRV" )
{
# The SRV RFC reused the ID that was used for NetBios Status RRs,
# so if this is NetBios Name Service we name it correctly.
c$dns$qtype_name = "NBSTAT";
}
}
c$dns$query = query;
}
event dns_unknown_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
{
hook DNS::do_reply(c, msg, ans, fmt("<unknown type=%s>", ans$qtype));
}
event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
{
event DNS::do_reply(c, msg, ans, fmt("%s", a));
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
}
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, str: string) &priority=5
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec) &priority=5
{
event DNS::do_reply(c, msg, ans, str);
local txt_strings: string = "";
for ( i in strs )
{
if ( i > 0 )
txt_strings += " ";
txt_strings += fmt("TXT %d %s", |strs[i]|, strs[i]);
}
hook DNS::do_reply(c, msg, ans, txt_strings);
}
event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
{
event DNS::do_reply(c, msg, ans, fmt("%s", a));
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
}
event dns_A6_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
{
event DNS::do_reply(c, msg, ans, fmt("%s", a));
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
}
event dns_NS_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_CNAME_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_MX_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string,
preference: count) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_PTR_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
{
event DNS::do_reply(c, msg, ans, name);
hook DNS::do_reply(c, msg, ans, name);
}
event dns_SOA_reply(c: connection, msg: dns_msg, ans: dns_answer, soa: dns_soa) &priority=5
{
event DNS::do_reply(c, msg, ans, soa$mname);
hook DNS::do_reply(c, msg, ans, soa$mname);
}
event dns_WKS_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
{
event DNS::do_reply(c, msg, ans, "");
hook DNS::do_reply(c, msg, ans, "");
}
event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer, target: string, priority: count, weight: count, p: count) &priority=5
{
event DNS::do_reply(c, msg, ans, "");
hook DNS::do_reply(c, msg, ans, target);
}
# TODO: figure out how to handle these
@ -339,6 +461,7 @@ event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
event dns_rejected(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
{
if ( c?$dns )
c$dns$rejected = T;
}
@ -347,16 +470,8 @@ event connection_state_remove(c: connection) &priority=-5
if ( ! c?$dns_state )
return;
# If Bro is expiring state, we should go ahead and log all unlogged
# request/response pairs now.
for ( trans_id in c$dns_state$pending )
{
local infos: vector of Info;
Queue::get_vector(c$dns_state$pending[trans_id], infos);
for ( i in infos )
{
Log::write(DNS::LOG, infos[i]);
# If Bro is expiring state, we should go ahead and log all unmatched
# queries and replies now.
log_unmatched_msgs(c$dns_state$pending_queries);
log_unmatched_msgs(c$dns_state$pending_replies);
}
}
}

View file

@ -1,6 +1,8 @@
# List of HTTP headers pulled from:
# http://annevankesteren.nl/2007/10/http-methods
signature dpd_http_client {
ip-proto == tcp
payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
payload /^[[:space:]]*(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PROPFIND|PROPPATCH|MKCOL|COPY|MOVE|LOCK|UNLOCK|VERSION-CONTROL|REPORT|CHECKOUT|CHECKIN|UNCHECKOUT|MKWORKSPACE|UPDATE|LABEL|MERGE|BASELINE-CONTROL|MKACTIVITY|ORDERPATCH|ACL|PATCH|SEARCH|BCOPY|BDELETE|BMOVE|BPROPFIND|BPROPPATCH|NOTIFY|POLL|SUBSCRIBE|UNSUBSCRIBE|X-MS-ENUMATTS|RPC_OUT_DATA|RPC_IN_DATA)[[:space:]]*/
tcp-state originator
}

View file

@ -72,7 +72,7 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
if ( f$is_orig )
{
if ( ! c$http?$orig_mime_types )
if ( ! c$http?$orig_fuids )
c$http$orig_fuids = string_vec(f$id);
else
c$http$orig_fuids[|c$http$orig_fuids|] = f$id;
@ -87,7 +87,7 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
}
else
{
if ( ! c$http?$resp_mime_types )
if ( ! c$http?$resp_fuids )
c$http$resp_fuids = string_vec(f$id);
else
c$http$resp_fuids[|c$http$resp_fuids|] = f$id;

View file

@ -4,6 +4,7 @@
@load base/utils/numbers
@load base/utils/files
@load base/frameworks/tunnels
module HTTP;
@ -217,6 +218,17 @@ event http_reply(c: connection, version: string, code: count, reason: string) &p
c$http$info_code = code;
c$http$info_msg = reason;
}
if ( c$http?$method && c$http$method == "CONNECT" && code == 200 )
{
# Copy this conn_id and set the orig_p to zero because, in the case of CONNECT
# proxies, there will potentially be many source ports, since a new proxy
# connection is established for each proxied connection. We treat this as a
# single "tunnel".
local tid = copy(c$id);
tid$orig_p = 0/tcp;
Tunnel::register([$cid=tid, $tunnel_type=Tunnel::HTTP]);
}
}
event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=5

View file

@ -76,7 +76,7 @@ event irc_dcc_message(c: connection, is_orig: bool,
dcc_expected_transfers[address, p] = c$irc;
}
event expected_connection_seen(c: connection, a: Analyzer::Tag) &priority=10
event scheduled_analyzer_applied(c: connection, a: Analyzer::Tag) &priority=10
{
local id = c$id;
if ( [id$resp_h, id$resp_p] in dcc_expected_transfers )

View file

@ -0,0 +1 @@
@load ./main

View file

@ -0,0 +1,231 @@
module RADIUS;
const msg_types: table[count] of string = {
[1] = "Access-Request",
[2] = "Access-Accept",
[3] = "Access-Reject",
[4] = "Accounting-Request",
[5] = "Accounting-Response",
[11] = "Access-Challenge",
[12] = "Status-Server",
[13] = "Status-Client",
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const attr_types: table[count] of string = {
[1] = "User-Name",
[2] = "User-Password",
[3] = "CHAP-Password",
[4] = "NAS-IP-Address",
[5] = "NAS-Port",
[6] = "Service-Type",
[7] = "Framed-Protocol",
[8] = "Framed-IP-Address",
[9] = "Framed-IP-Netmask",
[10] = "Framed-Routing",
[11] = "Filter-Id",
[12] = "Framed-MTU",
[13] = "Framed-Compression",
[14] = "Login-IP-Host",
[15] = "Login-Service",
[16] = "Login-TCP-Port",
[18] = "Reply-Message",
[19] = "Callback-Number",
[20] = "Callback-Id",
[22] = "Framed-Route",
[23] = "Framed-IPX-Network",
[24] = "State",
[25] = "Class",
[26] = "Vendor-Specific",
[27] = "Session-Timeout",
[28] = "Idle-Timeout",
[29] = "Termination-Action",
[30] = "Called-Station-Id",
[31] = "Calling-Station-Id",
[32] = "NAS-Identifier",
[33] = "Proxy-State",
[34] = "Login-LAT-Service",
[35] = "Login-LAT-Node",
[36] = "Login-LAT-Group",
[37] = "Framed-AppleTalk-Link",
[38] = "Framed-AppleTalk-Network",
[39] = "Framed-AppleTalk-Zone",
[40] = "Acct-Status-Type",
[41] = "Acct-Delay-Time",
[42] = "Acct-Input-Octets",
[43] = "Acct-Output-Octets",
[44] = "Acct-Session-Id",
[45] = "Acct-Authentic",
[46] = "Acct-Session-Time",
[47] = "Acct-Input-Packets",
[48] = "Acct-Output-Packets",
[49] = "Acct-Terminate-Cause",
[50] = "Acct-Multi-Session-Id",
[51] = "Acct-Link-Count",
[52] = "Acct-Input-Gigawords",
[53] = "Acct-Output-Gigawords",
[55] = "Event-Timestamp",
[56] = "Egress-VLANID",
[57] = "Ingress-Filters",
[58] = "Egress-VLAN-Name",
[59] = "User-Priority-Table",
[60] = "CHAP-Challenge",
[61] = "NAS-Port-Type",
[62] = "Port-Limit",
[63] = "Login-LAT-Port",
[64] = "Tunnel-Type",
[65] = "Tunnel-Medium-Type",
[66] = "Tunnel-Client-EndPoint",
[67] = "Tunnel-Server-EndPoint",
[68] = "Acct-Tunnel-Connection",
[69] = "Tunnel-Password",
[70] = "ARAP-Password",
[71] = "ARAP-Features",
[72] = "ARAP-Zone-Access",
[73] = "ARAP-Security",
[74] = "ARAP-Security-Data",
[75] = "Password-Retry",
[76] = "Prompt",
[77] = "Connect-Info",
[78] = "Configuration-Token",
[79] = "EAP-Message",
[80] = "Message Authenticator",
[81] = "Tunnel-Private-Group-ID",
[82] = "Tunnel-Assignment-ID",
[83] = "Tunnel-Preference",
[84] = "ARAP-Challenge-Response",
[85] = "Acct-Interim-Interval",
[86] = "Acct-Tunnel-Packets-Lost",
[87] = "NAS-Port-Id",
[88] = "Framed-Pool",
[89] = "CUI",
[90] = "Tunnel-Client-Auth-ID",
[91] = "Tunnel-Server-Auth-ID",
[92] = "NAS-Filter-Rule",
[94] = "Originating-Line-Info",
[95] = "NAS-IPv6-Address",
[96] = "Framed-Interface-Id",
[97] = "Framed-IPv6-Prefix",
[98] = "Login-IPv6-Host",
[99] = "Framed-IPv6-Route",
[100] = "Framed-IPv6-Pool",
[101] = "Error-Cause",
[102] = "EAP-Key-Name",
[103] = "Digest-Response",
[104] = "Digest-Realm",
[105] = "Digest-Nonce",
[106] = "Digest-Response-Auth",
[107] = "Digest-Nextnonce",
[108] = "Digest-Method",
[109] = "Digest-URI",
[110] = "Digest-Qop",
[111] = "Digest-Algorithm",
[112] = "Digest-Entity-Body-Hash",
[113] = "Digest-CNonce",
[114] = "Digest-Nonce-Count",
[115] = "Digest-Username",
[116] = "Digest-Opaque",
[117] = "Digest-Auth-Param",
[118] = "Digest-AKA-Auts",
[119] = "Digest-Domain",
[120] = "Digest-Stale",
[121] = "Digest-HA1",
[122] = "SIP-AOR",
[123] = "Delegated-IPv6-Prefix",
[124] = "MIP6-Feature-Vector",
[125] = "MIP6-Home-Link-Prefix",
[126] = "Operator-Name",
[127] = "Location-Information",
[128] = "Location-Data",
[129] = "Basic-Location-Policy-Rules",
[130] = "Extended-Location-Policy-Rules",
[131] = "Location-Capable",
[132] = "Requested-Location-Info",
[133] = "Framed-Management-Protocol",
[134] = "Management-Transport-Protection",
[135] = "Management-Policy-Id",
[136] = "Management-Privilege-Level",
[137] = "PKM-SS-Cert",
[138] = "PKM-CA-Cert",
[139] = "PKM-Config-Settings",
[140] = "PKM-Cryptosuite-List",
[141] = "PKM-SAID",
[142] = "PKM-SA-Descriptor",
[143] = "PKM-Auth-Key",
[144] = "DS-Lite-Tunnel-Name",
[145] = "Mobile-Node-Identifier",
[146] = "Service-Selection",
[147] = "PMIP6-Home-LMA-IPv6-Address",
[148] = "PMIP6-Visited-LMA-IPv6-Address",
[149] = "PMIP6-Home-LMA-IPv4-Address",
[150] = "PMIP6-Visited-LMA-IPv4-Address",
[151] = "PMIP6-Home-HN-Prefix",
[152] = "PMIP6-Visited-HN-Prefix",
[153] = "PMIP6-Home-Interface-ID",
[154] = "PMIP6-Visited-Interface-ID",
[155] = "PMIP6-Home-IPv4-HoA",
[156] = "PMIP6-Visited-IPv4-HoA",
[157] = "PMIP6-Home-DHCP4-Server-Address",
[158] = "PMIP6-Visited-DHCP4-Server-Address",
[159] = "PMIP6-Home-DHCP6-Server-Address",
[160] = "PMIP6-Visited-DHCP6-Server-Address",
[161] = "PMIP6-Home-IPv4-Gateway",
[162] = "PMIP6-Visited-IPv4-Gateway",
[163] = "EAP-Lower-Layer",
[164] = "GSS-Acceptor-Service-Name",
[165] = "GSS-Acceptor-Host-Name",
[166] = "GSS-Acceptor-Service-Specifics",
[167] = "GSS-Acceptor-Realm-Name",
[168] = "Framed-IPv6-Address",
[169] = "DNS-Server-IPv6-Address",
[170] = "Route-IPv6-Information",
[171] = "Delegated-IPv6-Prefix-Pool",
[172] = "Stateful-IPv6-Address-Pool",
[173] = "IPv6-6rd-Configuration"
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const nas_port_types: table[count] of string = {
[0] = "Async",
[1] = "Sync",
[2] = "ISDN Sync",
[3] = "ISDN Async V.120",
[4] = "ISDN Async V.110",
[5] = "Virtual",
[6] = "PIAFS",
[7] = "HDLC Clear Channel",
[8] = "X.25",
[9] = "X.75",
[10] = "G.3 Fax",
[11] = "SDSL - Symmetric DSL",
[12] = "ADSL-CAP - Asymmetric DSL, Carrierless Amplitude Phase Modulation",
[13] = "ADSL-DMT - Asymmetric DSL, Discrete Multi-Tone",
[14] = "IDSL - ISDN Digital Subscriber Line",
[15] = "Ethernet",
[16] = "xDSL - Digital Subscriber Line of unknown type",
[17] = "Cable",
[18] = "Wireless - Other",
[19] = "Wireless - IEEE 802.11"
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const service_types: table[count] of string = {
[1] = "Login",
[2] = "Framed",
[3] = "Callback Login",
[4] = "Callback Framed",
[5] = "Outbound",
[6] = "Administrative",
[7] = "NAS Prompt",
[8] = "Authenticate Only",
[9] = "Callback NAS Prompt",
[10] = "Call Check",
[11] = "Callback Administrative",
} &default=function(i: count): string { return fmt("unknown-%d", i); };
const framed_protocol_types: table[count] of string = {
[1] = "PPP",
[2] = "SLIP",
[3] = "AppleTalk Remote Access Protocol (ARAP)",
[4] = "Gandalf proprietary SingleLink/MultiLink protocol",
[5] = "Xylogics proprietary IPX/SLIP",
[6] = "X.75 Synchronous"
} &default=function(i: count): string { return fmt("unknown-%d", i); };

View file

@ -0,0 +1,126 @@
##! Implements base functionality for RADIUS analysis. Generates the radius.log file.
module RADIUS;
@load ./consts.bro
@load base/utils/addrs
export {
redef enum Log::ID += { LOG };
type Info: record {
## Timestamp for when the event happened.
ts : time &log;
## Unique ID for the connection.
uid : string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id : conn_id &log;
## The username, if present.
username : string &log &optional;
## MAC address, if present.
mac : string &log &optional;
## Remote IP address, if present.
remote_ip : addr &log &optional;
## Connect info, if present.
connect_info : string &log &optional;
## Successful or failed authentication.
result : string &log &optional;
## Whether this has already been logged and can be ignored.
logged : bool &optional;
};
## The amount of time we wait for an authentication response before
## expiring it.
const expiration_interval = 10secs &redef;
## Logs an authentication attempt if we didn't see a response in time.
##
## t: A table of Info records.
##
## idx: The index of the connection$radius table corresponding to the
## radius authentication about to expire.
##
## Returns: 0secs, which, when this function is used as an
## :bro:attr:`&expire_func`, indicates that the element at
## *idx* should be removed immediately.
global expire: function(t: table[count] of Info, idx: count): interval;
## Event that can be handled to access the RADIUS record as it is sent on
## to the logging framework.
global log_radius: event(rec: Info);
}
redef record connection += {
radius: table[count] of Info &optional &write_expire=expiration_interval &expire_func=expire;
};
const ports = { 1812/udp };
event bro_init() &priority=5
{
Log::create_stream(RADIUS::LOG, [$columns=Info, $ev=log_radius]);
Analyzer::register_for_ports(Analyzer::ANALYZER_RADIUS, ports);
}
event radius_message(c: connection, result: RADIUS::Message)
{
local info: Info;
if ( c?$radius && result$trans_id in c$radius )
info = c$radius[result$trans_id];
else
{
c$radius = table();
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
switch ( RADIUS::msg_types[result$code] ) {
case "Access-Request":
if ( result?$attributes ) {
# User-Name
if ( ! info?$username && 1 in result$attributes )
info$username = result$attributes[1][0];
# Calling-Station-Id (we expect this to be a MAC)
if ( ! info?$mac && 31 in result$attributes )
info$mac = normalize_mac(result$attributes[31][0]);
# Tunnel-Client-EndPoint (useful for VPNs)
if ( ! info?$remote_ip && 66 in result$attributes )
info$remote_ip = to_addr(result$attributes[66][0]);
# Connect-Info
if ( ! info?$connect_info && 77 in result$attributes )
info$connect_info = result$attributes[77][0];
}
break;
case "Access-Accept":
info$result = "success";
break;
case "Access-Reject":
info$result = "failed";
break;
}
if ( info?$result && ! info?$logged )
{
info$logged = T;
Log::write(RADIUS::LOG, info);
}
c$radius[result$trans_id] = info;
}
function expire(t: table[count] of Info, idx: count): interval
{
t[idx]$result = "unknown";
Log::write(RADIUS::LOG, t[idx]);
return 0secs;
}

View file

@ -48,6 +48,6 @@ event bro_init() &priority=5
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
{
if ( c?$smtp )
if ( c?$smtp && !c$smtp$tls )
c$smtp$fuids[|c$smtp$fuids|] = f$id;
}

View file

@ -50,6 +50,8 @@ export {
## Value of the User-Agent header from the client.
user_agent: string &log &optional;
## Indicates that the connection has switched to using TLS.
tls: bool &log &default=F;
## Indicates if the "Received: from" headers should still be
## processed.
process_received_from: bool &default=T;
@ -140,7 +142,10 @@ function set_smtp_session(c: connection)
function smtp_message(c: connection)
{
if ( c$smtp$has_client_activity )
{
Log::write(SMTP::LOG, c$smtp);
c$smtp = new_smtp_log(c);
}
}
event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &priority=5
@ -148,9 +153,6 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
set_smtp_session(c);
local upper_command = to_upper(command);
if ( upper_command != "QUIT" )
c$smtp$has_client_activity = T;
if ( upper_command == "HELO" || upper_command == "EHLO" )
{
c$smtp_state$helo = arg;
@ -162,12 +164,17 @@ event smtp_request(c: connection, is_orig: bool, command: string, arg: string) &
if ( ! c$smtp?$rcptto )
c$smtp$rcptto = set();
add c$smtp$rcptto[split1(arg, /:[[:blank:]]*/)[2]];
c$smtp$has_client_activity = T;
}
else if ( upper_command == "MAIL" && /^[fF][rR][oO][mM]:/ in arg )
{
# Flush last message in case we didn't see the server's acknowledgement.
smtp_message(c);
local partially_done = split1(arg, /:[[:blank:]]*/)[2];
c$smtp$mailfrom = split1(partially_done, /[[:blank:]]?/)[1];
c$smtp$has_client_activity = T;
}
}
@ -196,7 +203,6 @@ event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
event mime_one_header(c: connection, h: mime_header_rec) &priority=5
{
if ( ! c?$smtp ) return;
c$smtp$has_client_activity = T;
if ( h$name == "MESSAGE-ID" )
c$smtp$msg_id = h$value;
@ -276,6 +282,15 @@ event connection_state_remove(c: connection) &priority=-5
smtp_message(c);
}
event smtp_starttls(c: connection) &priority=5
{
if ( c?$smtp )
{
c$smtp$tls = T;
c$smtp$has_client_activity = T;
}
}
function describe(rec: Info): string
{
if ( rec?$mailfrom && rec?$rcptto )

View file

@ -0,0 +1 @@
Support for Simple Network Management Protocol (SNMP) analysis.

View file

@ -0,0 +1 @@
@load ./main

View file

@ -0,0 +1,182 @@
##! Enables analysis and logging of SNMP datagrams.
module SNMP;
export {
redef enum Log::ID += { LOG };
## Information tracked per SNMP session.
type Info: record {
## Timestamp of first packet belonging to the SNMP session.
ts: time &log;
## The unique ID for the connection.
uid: string &log;
## The connection's 5-tuple of addresses/ports (ports inherently
## include transport protocol information).
id: conn_id &log;
## The amount of time between the first packet belonging to
## the SNMP session and the latest one seen.
duration: interval &log &default=0secs;
## The version of SNMP being used.
version: string &log;
## The community string of the first SNMP packet associated with
## the session. This is used as part of SNMP's (v1 and v2c)
## administrative/security framework. See :rfc:`1157` or :rfc:`1901`.
community: string &log &optional;
## The number of variable bindings in GetRequest/GetNextRequest PDUs
## seen for the session.
get_requests: count &log &default=0;
## The number of variable bindings in GetBulkRequest PDUs seen for
## the session.
get_bulk_requests: count &log &default=0;
## The number of variable bindings in GetResponse/Response PDUs seen
## for the session.
get_responses: count &log &default=0;
## The number of variable bindings in SetRequest PDUs seen for
## the session.
set_requests: count &log &default=0;
## A system description of the SNMP responder endpoint.
display_string: string &log &optional;
## The time since which the SNMP responder endpoint claims it has
## been up.
up_since: time &log &optional;
};
## Maps an SNMP version integer to a human readable string.
const version_map: table[count] of string = {
[0] = "1",
[1] = "2c",
[3] = "3",
} &redef &default="unknown";
## Event that can be handled to access the SNMP record as it is sent on
## to the logging framework.
global log_snmp: event(rec: Info);
}
redef record connection += {
snmp: SNMP::Info &optional;
};
const ports = { 161/udp, 162/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Analyzer::register_for_ports(Analyzer::ANALYZER_SNMP, ports);
Log::create_stream(SNMP::LOG, [$columns=SNMP::Info, $ev=log_snmp]);
}
function init_state(c: connection, h: SNMP::Header): Info
{
if ( ! c?$snmp )
{
c$snmp = Info($ts=network_time(),
$uid=c$uid, $id=c$id,
$version=version_map[h$version]);
}
local s = c$snmp;
if ( ! s?$community )
{
if ( h?$v1 )
s$community = h$v1$community;
else if ( h?$v2 )
s$community = h$v2$community;
}
s$duration = network_time() - s$ts;
return s;
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$snmp )
Log::write(LOG, c$snmp);
}
event snmp_get_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_requests += |pdu$bindings|;
}
event snmp_get_bulk_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::BulkPDU) &priority=5
{
local s = init_state(c, header);
s$get_bulk_requests += |pdu$bindings|;
}
event snmp_get_next_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_requests += |pdu$bindings|;
}
event snmp_response(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$get_responses += |pdu$bindings|;
for ( i in pdu$bindings )
{
local binding = pdu$bindings[i];
if ( binding$oid == "1.3.6.1.2.1.1.1.0" && binding$value?$octets )
c$snmp$display_string = binding$value$octets;
else if ( binding$oid == "1.3.6.1.2.1.1.3.0" && binding$value?$unsigned )
{
local up_seconds = binding$value$unsigned / 100.0;
s$up_since = network_time() - double_to_interval(up_seconds);
}
}
}
event snmp_set_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
local s = init_state(c, header);
s$set_requests += |pdu$bindings|;
}
event snmp_trap(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::TrapPDU) &priority=5
{
init_state(c, header);
}
event snmp_inform_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_trapV2(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_report(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
{
init_state(c, header);
}
event snmp_unknown_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
{
init_state(c, header);
}
event snmp_unknown_scoped_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
{
init_state(c, header);
}
event snmp_encrypted_pdu(c: connection, is_orig: bool, header: SNMP::Header) &priority=5
{
init_state(c, header);
}
#event snmp_unknown_header_version(c: connection, is_orig: bool, version: count) &priority=5
# {
# }
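A small consumer sketch (editorial; the community strings checked are illustrative): handle the log_snmp event declared above to flag sessions that use a non-default community string.
event SNMP::log_snmp(rec: SNMP::Info)
	{
	if ( rec?$community && rec$community !in set("public", "private") )
		print fmt("unusual SNMP community \"%s\" on %s (v%s)",
		          rec$community, rec$uid, rec$version);
	}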

View file

@ -1,5 +1,6 @@
@load ./consts
@load ./main
@load ./mozilla-ca-list
@load ./files
@load-sigs ./dpd.sig

View file

@ -15,6 +15,32 @@ export {
[TLSv12] = "TLSv12",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## TLS content types:
const CHANGE_CIPHER_SPEC = 20;
const ALERT = 21;
const HANDSHAKE = 22;
const APPLICATION_DATA = 23;
const HEARTBEAT = 24;
const V2_ERROR = 300;
const V2_CLIENT_HELLO = 301;
const V2_CLIENT_MASTER_KEY = 302;
const V2_SERVER_HELLO = 304;
## TLS Handshake types:
const HELLO_REQUEST = 0;
const CLIENT_HELLO = 1;
const SERVER_HELLO = 2;
const SESSION_TICKET = 4; # RFC 5077
const CERTIFICATE = 11;
const SERVER_KEY_EXCHANGE = 12;
const CERTIFICATE_REQUEST = 13;
const SERVER_HELLO_DONE = 14;
const CERTIFICATE_VERIFY = 15;
const CLIENT_KEY_EXCHANGE = 16;
const FINISHED = 20;
const CERTIFICATE_URL = 21; # RFC 3546
const CERTIFICATE_STATUS = 22; # RFC 3546
## Mapping between numeric codes and human readable strings for alert
## levels.
const alert_levels: table[count] of string = {
@ -47,6 +73,7 @@ export {
[70] = "protocol_version",
[71] = "insufficient_security",
[80] = "internal_error",
[86] = "inappropriate_fallback",
[90] = "user_canceled",
[100] = "no_renegotiation",
[110] = "unsupported_extension",
@ -55,6 +82,7 @@ export {
[113] = "bad_certificate_status_response",
[114] = "bad_certificate_hash_value",
[115] = "unknown_psk_identity",
[120] = "no_application_protocol",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for SSL/TLS
@ -81,14 +109,64 @@ export {
[16] = "application_layer_protocol_negotiation",
[17] = "status_request_v2",
[18] = "signed_certificate_timestamp",
[19] = "client_certificate_type",
[20] = "server_certificate_type",
[21] = "padding", # temporary till 2015-03-12
[22] = "encrypt_then_mac", # temporary till 2015-06-05
[35] = "SessionTicket TLS",
[40] = "extended_random",
[13172] = "next_protocol_negotiation",
[13175] = "origin_bound_certificates",
[13180] = "encrypted_client_certificates",
[30031] = "channel_id",
[30032] = "channel_id_new",
[35655] = "padding",
[65281] = "renegotiation_info"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for SSL/TLS elliptic curves.
# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8
const ec_curves: table[count] of string = {
[1] = "sect163k1",
[2] = "sect163r1",
[3] = "sect163r2",
[4] = "sect193r1",
[5] = "sect193r2",
[6] = "sect233k1",
[7] = "sect233r1",
[8] = "sect239k1",
[9] = "sect283k1",
[10] = "sect283r1",
[11] = "sect409k1",
[12] = "sect409r1",
[13] = "sect571k1",
[14] = "sect571r1",
[15] = "secp160k1",
[16] = "secp160r1",
[17] = "secp160r2",
[18] = "secp192k1",
[19] = "secp192r1",
[20] = "secp224k1",
[21] = "secp224r1",
[22] = "secp256k1",
[23] = "secp256r1",
[24] = "secp384r1",
[25] = "secp521r1",
[26] = "brainpoolP256r1",
[27] = "brainpoolP384r1",
[28] = "brainpoolP512r1",
[0xFF01] = "arbitrary_explicit_prime_curves",
[0xFF02] = "arbitrary_explicit_char2_curves"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable strings for SSL/TLS EC point formats.
# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-9
const ec_point_formats: table[count] of string = {
[0] = "uncompressed",
[1] = "ansiX962_compressed_prime",
[2] = "ansiX962_compressed_char2"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
# SSLv2
const SSLv20_CK_RC4_128_WITH_MD5 = 0x010080;
const SSLv20_CK_RC4_128_EXPORT40_WITH_MD5 = 0x020080;
@ -262,6 +340,8 @@ export {
const TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C3;
const TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C4;
const TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C5;
# draft-bmoeller-tls-downgrade-scsv-01
const TLS_FALLBACK_SCSV = 0x5600;
# RFC 4492
const TLS_ECDH_ECDSA_WITH_NULL_SHA = 0xC001;
const TLS_ECDH_ECDSA_WITH_RC4_128_SHA = 0xC002;
@ -437,6 +517,10 @@ export {
const TLS_PSK_WITH_AES_256_CCM_8 = 0xC0A9;
const TLS_PSK_DHE_WITH_AES_128_CCM_8 = 0xC0AA;
const TLS_PSK_DHE_WITH_AES_256_CCM_8 = 0xC0AB;
const TLS_ECDHE_ECDSA_WITH_AES_128_CCM = 0xC0AC;
const TLS_ECDHE_ECDSA_WITH_AES_256_CCM = 0xC0AD;
const TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 = 0xC0AE;
const TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 = 0xC0AF;
# draft-agl-tls-chacha20poly1305-02
const TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC13;
const TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC14;
@ -628,6 +712,7 @@ export {
[TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256",
[TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256",
[TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256",
[TLS_FALLBACK_SCSV] = "TLS_FALLBACK_SCSV",
[TLS_ECDH_ECDSA_WITH_NULL_SHA] = "TLS_ECDH_ECDSA_WITH_NULL_SHA",
[TLS_ECDH_ECDSA_WITH_RC4_128_SHA] = "TLS_ECDH_ECDSA_WITH_RC4_128_SHA",
[TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA] = "TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA",
@ -799,6 +884,10 @@ export {
[TLS_PSK_WITH_AES_256_CCM_8] = "TLS_PSK_WITH_AES_256_CCM_8",
[TLS_PSK_DHE_WITH_AES_128_CCM_8] = "TLS_PSK_DHE_WITH_AES_128_CCM_8",
[TLS_PSK_DHE_WITH_AES_256_CCM_8] = "TLS_PSK_DHE_WITH_AES_256_CCM_8",
[TLS_ECDHE_ECDSA_WITH_AES_128_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM",
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM",
[TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8",
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8",
[TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
[TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
[TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
@ -813,42 +902,4 @@ export {
[TLS_EMPTY_RENEGOTIATION_INFO_SCSV] = "TLS_EMPTY_RENEGOTIATION_INFO_SCSV",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between the constants and string values for SSL/TLS errors.
const x509_errors: table[count] of string = {
[0] = "ok",
[1] = "unable to get issuer cert",
[2] = "unable to get crl",
[3] = "unable to decrypt cert signature",
[4] = "unable to decrypt crl signature",
[5] = "unable to decode issuer public key",
[6] = "cert signature failure",
[7] = "crl signature failure",
[8] = "cert not yet valid",
[9] = "cert has expired",
[10] = "crl not yet valid",
[11] = "crl has expired",
[12] = "error in cert not before field",
[13] = "error in cert not after field",
[14] = "error in crl last update field",
[15] = "error in crl next update field",
[16] = "out of mem",
[17] = "depth zero self signed cert",
[18] = "self signed cert in chain",
[19] = "unable to get issuer cert locally",
[20] = "unable to verify leaf signature",
[21] = "cert chain too long",
[22] = "cert revoked",
[23] = "invalid ca",
[24] = "path length exceeded",
[25] = "invalid purpose",
[26] = "cert untrusted",
[27] = "cert rejected",
[28] = "subject issuer mismatch",
[29] = "akid skid mismatch",
[30] = "akid issuer serial mismatch",
[31] = "keyusage no certsign",
[32] = "unable to get crl issuer",
[33] = "unhandled critical extension",
} &default=function(i: count):string { return fmt("unknown-%d", i); };
}
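All of these tables carry a &default function, so lookups never fail; unknown codes simply come back as "unknown-<code>". A quick sketch (editorial) using entries defined above:
event bro_init()
	{
	print SSL::ec_curves[23];        # "secp256r1"
	print SSL::ec_point_formats[0];  # "uncompressed"
	print SSL::ec_curves[99999];     # &default kicks in: "unknown-99999"
	}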

View file

@ -1,7 +1,7 @@
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
payload /^(\x16\x03[\x00\x01\x02]..\x02...\x03[\x00\x01\x02]|...?\x04..\x00\x02).*/
payload /^(\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
@ -10,6 +10,6 @@ signature dpd_ssl_server {
signature dpd_ssl_client {
ip-proto == tcp
# Client hello.
payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
payload /^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]|...?\x01[\x00\x03][\x00\x01\x02\x03]).*/
tcp-state originator
}

View file

@ -0,0 +1,137 @@
@load ./main
@load base/utils/conn-ids
@load base/frameworks/files
@load base/files/x509
module SSL;
export {
redef record Info += {
## Chain of certificates offered by the server to validate its
## complete signing chain.
cert_chain: vector of Files::Info &optional;
## An ordered vector of all certificate file unique IDs for the
## certificates offered by the server.
cert_chain_fuids: vector of string &optional &log;
## Chain of certificates offered by the client to validate its
## complete signing chain.
client_cert_chain: vector of Files::Info &optional;
## An ordered vector of all certificate file unique IDs for the
## certificates offered by the client.
client_cert_chain_fuids: vector of string &optional &log;
## Subject of the X.509 certificate offered by the server.
subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the
## server.
issuer: string &log &optional;
## Subject of the X.509 certificate offered by the client.
client_subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the
## client.
client_issuer: string &log &optional;
## Current number of certificates seen from either side. Used
## to create file handles.
server_depth: count &default=0;
client_depth: count &default=0;
};
## Default file handle provider for SSL.
global get_file_handle: function(c: connection, is_orig: bool): string;
## Default file describer for SSL.
global describe_file: function(f: fa_file): string;
}
function get_file_handle(c: connection, is_orig: bool): string
{
# Unused. File handles are generated in the analyzer.
return "";
}
function describe_file(f: fa_file): string
{
if ( f$source != "SSL" || ! f?$info || ! f$info?$x509 || ! f$info$x509?$certificate )
return "";
# It is difficult to reliably describe a certificate - especially since
# we do not know when this function is called (hence, if the data structures
# are already populated).
#
# Just return a bit of our connection information and hope that that is good enough.
for ( cid in f$conns )
{
if ( f$conns[cid]?$ssl )
{
local c = f$conns[cid];
return cat(c$id$resp_h, ":", c$id$resp_p);
}
}
return cat("Serial: ", f$info$x509$certificate$serial, " Subject: ",
f$info$x509$certificate$subject, " Issuer: ",
f$info$x509$certificate$issuer);
}
event bro_init() &priority=5
{
Files::register_protocol(Analyzer::ANALYZER_SSL,
[$get_file_handle = SSL::get_file_handle,
$describe = SSL::describe_file]);
}
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
{
if ( ! c?$ssl )
return;
if ( ! c$ssl?$cert_chain )
{
c$ssl$cert_chain = vector();
c$ssl$client_cert_chain = vector();
c$ssl$cert_chain_fuids = string_vec();
c$ssl$client_cert_chain_fuids = string_vec();
}
if ( is_orig )
{
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = f$info;
c$ssl$client_cert_chain_fuids[|c$ssl$client_cert_chain_fuids|] = f$id;
}
else
{
c$ssl$cert_chain[|c$ssl$cert_chain|] = f$info;
c$ssl$cert_chain_fuids[|c$ssl$cert_chain_fuids|] = f$id;
}
Files::add_analyzer(f, Files::ANALYZER_X509);
# Always calculate hashes. They are not necessary for the base scripts,
# but very useful for identification, and required for policy scripts.
Files::add_analyzer(f, Files::ANALYZER_MD5);
Files::add_analyzer(f, Files::ANALYZER_SHA1);
}
event ssl_established(c: connection) &priority=6
{
# update subject and issuer information
if ( c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 &&
c$ssl$cert_chain[0]?$x509 )
{
c$ssl$subject = c$ssl$cert_chain[0]$x509$certificate$subject;
c$ssl$issuer = c$ssl$cert_chain[0]$x509$certificate$issuer;
}
if ( c$ssl?$client_cert_chain && |c$ssl$client_cert_chain| > 0 &&
c$ssl$client_cert_chain[0]?$x509 )
{
c$ssl$client_subject = c$ssl$client_cert_chain[0]$x509$certificate$subject;
c$ssl$client_issuer = c$ssl$client_cert_chain[0]$x509$certificate$issuer;
}
}
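An editorial sketch of consuming the describer registered above, assuming the standard Files::describe() dispatcher: once a certificate file is finished, print its one-line description.
event file_state_remove(f: fa_file)
	{
	if ( f$source == "SSL" )
		print Files::describe(f);
	}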

View file

@ -19,45 +19,28 @@ export {
version: string &log &optional;
## SSL/TLS cipher suite that the server chose.
cipher: string &log &optional;
## Elliptic curve the server chose when using ECDH/ECDHE.
curve: string &log &optional;
## Value of the Server Name Indication (SNI) SSL/TLS extension. It
## indicates the server name that the client was requesting.
server_name: string &log &optional;
## Session ID offered by the client for session resumption.
session_id: string &log &optional;
## Subject of the X.509 certificate offered by the server.
subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the
## server.
issuer_subject: string &log &optional;
## NotValidBefore field value from the server certificate.
not_valid_before: time &log &optional;
## NotValidAfter field value from the server certificate.
not_valid_after: time &log &optional;
## Last alert that was seen during the connection.
last_alert: string &log &optional;
## Subject of the X.509 certificate offered by the client.
client_subject: string &log &optional;
## Subject of the signer of the X.509 certificate offered by the
## client.
client_issuer_subject: string &log &optional;
## Full binary server certificate stored in DER format.
cert: string &optional;
## Chain of certificates offered by the server to validate its
## complete signing chain.
cert_chain: vector of string &optional;
## Full binary client certificate stored in DER format.
client_cert: string &optional;
## Chain of certificates offered by the client to validate its
## complete signing chain.
client_cert_chain: vector of string &optional;
## The analyzer ID used for the analyzer instance attached
## to each connection. It is not used for logging since it's a
## meaningless arbitrary number.
analyzer_id: count &optional;
## Flag to indicate if this SSL session has been established
## successfully, or if it was aborted during the handshake.
established: bool &log &default=F;
## Flag to indicate if this record has already been logged, to
## prevent duplicates.
logged: bool &default=F;
};
## The default root CA bundle. By default, the mozilla-ca-list.bro
@ -108,8 +91,7 @@ event bro_init() &priority=5
function set_session(c: connection)
{
if ( ! c?$ssl )
c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector(),
$client_cert_chain=vector()];
c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id];
}
function delay_log(info: Info, token: string)
@ -127,9 +109,13 @@ function undelay_log(info: Info, token: string)
function log_record(info: Info)
{
if ( info$logged )
return;
if ( ! info?$delay_tokens || |info$delay_tokens| == 0 )
{
Log::write(SSL::LOG, info);
info$logged = T;
}
else
{
@ -146,11 +132,16 @@ function log_record(info: Info)
}
}
function finish(c: connection)
# The remove_analyzer flag is used to prevent disabling the analyzer for
# already-finished connections.
function finish(c: connection, remove_analyzer: bool)
{
log_record(c$ssl);
if ( disable_analyzer_after_detection && c?$ssl && c$ssl?$analyzer_id )
if ( remove_analyzer && disable_analyzer_after_detection && c?$ssl && c$ssl?$analyzer_id )
{
disable_analyzer(c$id, c$ssl$analyzer_id);
delete c$ssl$analyzer_id;
}
}
event ssl_client_hello(c: connection, version: count, possible_ts: time, client_random: string, session_id: string, ciphers: index_vec) &priority=5
@ -170,55 +161,23 @@ event ssl_server_hello(c: connection, version: count, possible_ts: time, server_
c$ssl$cipher = cipher_desc[cipher];
}
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=5
event ssl_server_curve(c: connection, curve: count) &priority=5
{
set_session(c);
# We aren't doing anything with client certificates yet.
if ( is_orig )
{
if ( chain_idx == 0 )
{
# Save the primary cert.
c$ssl$client_cert = der_cert;
# Also save other certificate information about the primary cert.
c$ssl$client_subject = cert$subject;
c$ssl$client_issuer_subject = cert$issuer;
}
else
{
# Otherwise, add it to the cert validation chain.
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = der_cert;
}
}
else
{
if ( chain_idx == 0 )
{
# Save the primary cert.
c$ssl$cert = der_cert;
# Also save other certificate information about the primary cert.
c$ssl$subject = cert$subject;
c$ssl$issuer_subject = cert$issuer;
c$ssl$not_valid_before = cert$not_valid_before;
c$ssl$not_valid_after = cert$not_valid_after;
}
else
{
# Otherwise, add it to the cert validation chain.
c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert;
}
}
c$ssl$curve = ec_curves[curve];
}
event ssl_extension(c: connection, is_orig: bool, code: count, val: string) &priority=5
event ssl_extension_server_name(c: connection, is_orig: bool, names: string_vec) &priority=5
{
set_session(c);
if ( is_orig && extensions[code] == "server_name" )
c$ssl$server_name = sub_bytes(val, 6, |val|);
if ( is_orig && |names| > 0 )
{
c$ssl$server_name = names[0];
if ( |names| > 1 )
event conn_weird("SSL_many_server_names", c, cat(names));
}
}
event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priority=5
@ -228,26 +187,36 @@ event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priori
c$ssl$last_alert = alert_descriptions[desc];
}
event ssl_established(c: connection) &priority=5
event ssl_established(c: connection) &priority=7
{
set_session(c);
c$ssl$established = T;
}
event ssl_established(c: connection) &priority=-5
{
finish(c);
finish(c, T);
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$ssl )
# Called in case an SSL connection terminates before being established.
finish(c, F);
}
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=5
{
# Check for the existence of the c$ssl record.
if ( c?$ssl && atype == Analyzer::ANALYZER_SSL )
if ( atype == Analyzer::ANALYZER_SSL )
{
set_session(c);
c$ssl$analyzer_id = aid;
}
}
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
reason: string) &priority=5
{
if ( c?$ssl )
finish(c);
finish(c, T);
}
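An editorial sketch of the delay-token mechanism above (the token name and the reverse-DNS lookup are illustrative). Since a "when" block deep-copies its frame, the record is parked in a global table and fetched back by UID inside the body:
global ssl_pending: table[string] of SSL::Info;
event ssl_established(c: connection) &priority=4
	{
	SSL::delay_log(c$ssl, "example-token");
	ssl_pending[c$uid] = c$ssl;
	local uid = c$uid;
	when ( local name = lookup_addr(c$id$resp_h) )
		{
		if ( uid in ssl_pending )
			{
			SSL::undelay_log(ssl_pending[uid], "example-token");
			delete ssl_pending[uid];
			}
		}
	timeout 5secs
		{
		if ( uid in ssl_pending )
			{
			SSL::undelay_log(ssl_pending[uid], "example-token");
			delete ssl_pending[uid];
			}
		}
	}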

File diff suppressed because one or more lines are too long

View file

@ -1,4 +1,4 @@
##! Functions for parsing and manipulating IP addresses.
##! Functions for parsing and manipulating IP and MAC addresses.
# Regular expressions for matching IP addresses in strings.
const ipv4_addr_regex = /[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}/;
@ -119,3 +119,30 @@ function addr_to_uri(a: addr): string
else
return fmt("[%s]", a);
}
## Given a string, extracts the hex digits and returns a MAC address in
## the format: 00:a0:32:d7:81:8f. If the string doesn't contain 12 or 16 hex
## digits, an empty string is returned.
##
## a: the string to normalize.
##
## Returns: a normalized MAC address, or an empty string in the case of an error.
function normalize_mac(a: string): string
{
local result = to_lower(gsub(a, /[^A-Fa-f0-9]/, ""));
local octets: string_vec;
if ( |result| == 12 )
{
octets = str_split(result, vector(2, 4, 6, 8, 10));
return fmt("%s:%s:%s:%s:%s:%s", octets[1], octets[2], octets[3], octets[4], octets[5], octets[6]);
}
if ( |result| == 16 )
{
octets = str_split(result, vector(2, 4, 6, 8, 10, 12, 14));
return fmt("%s:%s:%s:%s:%s:%s:%s:%s", octets[1], octets[2], octets[3], octets[4], octets[5], octets[6], octets[7], octets[8]);
}
return "";
}
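A quick worked example for the helper above (inputs are hypothetical):
event bro_init()
	{
	print normalize_mac("00-A0-32-D7-81-8F");  # 00:a0:32:d7:81:8f
	print normalize_mac("00a0.32d7.818f");     # 00:a0:32:d7:81:8f
	print normalize_mac("junk");               # empty string
	}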

View file

@ -35,28 +35,37 @@ export {
const notice_threshold = 10 &redef;
}
event file_hash(f: fa_file, kind: string, hash: string)
{
if ( kind=="sha1" && match_file_types in f$mime_type )
function do_mhr_lookup(hash: string, fi: Notice::FileInfo)
{
local hash_domain = fmt("%s.malware.hash.cymru.com", hash);
when ( local MHR_result = lookup_hostname_txt(hash_domain) )
{
# Data is returned as "<dateFirstDetected> <detectionRate>"
local MHR_answer = split1(MHR_result, / /);
if ( |MHR_answer| == 2 )
{
local mhr_first_detected = double_to_time(to_double(MHR_answer[1]));
local mhr_detect_rate = to_count(MHR_answer[2]);
local readable_first_detected = strftime("%Y-%m-%d %H:%M:%S", mhr_first_detected);
if ( mhr_detect_rate >= notice_threshold )
{
local mhr_first_detected = double_to_time(to_double(MHR_answer[1]));
local readable_first_detected = strftime("%Y-%m-%d %H:%M:%S", mhr_first_detected);
local message = fmt("Malware Hash Registry Detection rate: %d%% Last seen: %s", mhr_detect_rate, readable_first_detected);
local virustotal_url = fmt(match_sub_url, hash);
NOTICE([$note=Match, $msg=message, $sub=virustotal_url, $f=f]);
# We don't have the full fa_file record here in order to
# avoid the "when" statement cloning it (expensive!).
local n: Notice::Info = Notice::Info($note=Match, $msg=message, $sub=virustotal_url);
Notice::populate_file_info2(fi, n);
NOTICE(n);
}
}
}
}
event file_hash(f: fa_file, kind: string, hash: string)
{
if ( kind == "sha1" && f?$mime_type && match_file_types in f$mime_type )
do_mhr_lookup(hash, Notice::create_file_info(f));
}
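The pattern above is worth copying: extract the small Notice::FileInfo record before entering the "when" body, because "when" deep-copies the locals it captures and a full fa_file is expensive to clone. A stripped-down sketch (editorial; the lookup domain and the Notice::Tally note are illustrative stand-ins):
event file_hash(f: fa_file, kind: string, hash: string)
	{
	local fi = Notice::create_file_info(f);  # small record, cheap to clone
	when ( local txt = lookup_hostname_txt(fmt("%s.example.org", hash)) )
		{
		local n = Notice::Info($note=Notice::Tally, $msg=txt);
		Notice::populate_file_info2(fi, n);
		NOTICE(n);
		}
	}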

View file

@ -7,3 +7,4 @@
@load ./ssl
@load ./smtp
@load ./smtp-url-extraction
@load ./x509

View file

@ -9,6 +9,12 @@ event http_header(c: connection, is_orig: bool, name: string, value: string)
switch ( name )
{
case "HOST":
if ( is_valid_ip(value) )
Intel::seen([$host=to_addr(value),
$indicator_type=Intel::ADDR,
$conn=c,
$where=HTTP::IN_HOST_HEADER]);
else
Intel::seen([$indicator=value,
$indicator_type=Intel::DOMAIN,
$conn=c,

View file

@ -2,31 +2,9 @@
@load base/protocols/ssl
@load ./where-locations
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string)
event ssl_extension_server_name(c: connection, is_orig: bool, names: string_vec)
{
if ( chain_idx == 0 )
{
if ( /emailAddress=/ in cert$subject )
{
local email = sub(cert$subject, /^.*emailAddress=/, "");
email = sub(email, /,.*$/, "");
Intel::seen([$indicator=email,
$indicator_type=Intel::EMAIL,
$conn=c,
$where=(is_orig ? SSL::IN_CLIENT_CERT : SSL::IN_SERVER_CERT)]);
}
Intel::seen([$indicator=sha1_hash(der_cert),
$indicator_type=Intel::CERT_HASH,
$conn=c,
$where=(is_orig ? SSL::IN_CLIENT_CERT : SSL::IN_SERVER_CERT)]);
}
}
event ssl_extension(c: connection, is_orig: bool, code: count, val: string)
{
if ( is_orig && SSL::extensions[code] == "server_name" &&
c?$ssl && c$ssl?$server_name )
if ( is_orig && c?$ssl && c$ssl?$server_name )
Intel::seen([$indicator=c$ssl$server_name,
$indicator_type=Intel::DOMAIN,
$conn=c,

View file

@ -21,9 +21,8 @@ export {
SMTP::IN_REPLY_TO,
SMTP::IN_X_ORIGINATING_IP_HEADER,
SMTP::IN_MESSAGE,
SSL::IN_SERVER_CERT,
SSL::IN_CLIENT_CERT,
SSL::IN_SERVER_NAME,
SMTP::IN_HEADER,
X509::IN_CERT,
};
}

View file

@ -0,0 +1,16 @@
@load base/frameworks/intel
@load base/files/x509
@load ./where-locations
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate)
{
if ( /emailAddress=/ in cert$subject )
{
local email = sub(cert$subject, /^.*emailAddress=/, "");
email = sub(email, /,.*$/, "");
Intel::seen([$indicator=email,
$indicator_type=Intel::EMAIL,
$f=f,
$where=X509::IN_CERT]);
}
}
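A worked example of the two sub() calls above, on a hypothetical subject string:
event bro_init()
	{
	local subject = "CN=foo,emailAddress=admin@example.com,O=Bar";
	local email = sub(subject, /^.*emailAddress=/, "");
	email = sub(email, /,.*$/, "");
	print email;  # admin@example.com
	}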

View file

@ -1,6 +1,6 @@
@load ./facebook
@load ./gmail
@load ./google
@load ./netflix
@load ./pandora
@load ./youtube
#@load ./gmail
#@load ./google
#@load ./netflix
#@load ./pandora
#@load ./youtube

View file

@ -82,7 +82,7 @@ event bro_init() &priority=5
++lb_proc_track[that_node$ip, that_node$interface];
if ( total_lb_procs > 1 )
{
that_node$lb_filter = PacketFilter::sample_filter(total_lb_procs, this_lb_proc);
that_node$lb_filter = PacketFilter::sampling_filter(total_lb_procs, this_lb_proc);
Communication::nodes[no]$capture_filter = that_node$lb_filter;
}
}

View file

@ -19,12 +19,16 @@ export {
};
}
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=4
hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
{
# The "ready" flag will be set here. This causes the setting from the
# base script to be overridden since the base script will log immediately
# after all of the ANS replies have been seen.
c$dns$ready=F;
if ( msg$opcode != 0 )
# Currently only standard queries are tracked.
return;
if ( ! msg$QR )
# This is weird: the inquirer must also be providing answers in
# the request, which is not what we want to track.
return;
if ( ans$answer_type == DNS_AUTH )
{
@ -38,11 +42,4 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
c$dns$addl = set();
add c$dns$addl[reply];
}
if ( c$dns?$answers && c$dns?$auth && c$dns?$addl &&
c$dns$total_replies == |c$dns$answers| + |c$dns$auth| + |c$dns$addl| )
{
# *Now* all replies desired have been seen.
c$dns$ready = T;
}
}

View file

@ -1,22 +0,0 @@
##! Calculate MD5 sums for server DER formatted certificates.
@load base/protocols/ssl
module SSL;
export {
redef record Info += {
## MD5 sum of the raw server certificate.
cert_hash: string &log &optional;
};
}
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=4
{
# We aren't tracking client certificates yet and we are also only tracking
# the primary cert. Watch that this came from an SSL analyzed session too.
if ( is_orig || chain_idx != 0 || ! c?$ssl )
return;
c$ssl$cert_hash = md5_hash(der_cert);
}

View file

@ -3,11 +3,10 @@
##! certificate.
@load base/protocols/ssl
@load base/files/x509
@load base/frameworks/notice
@load base/utils/directions-and-hosts
@load protocols/ssl/cert-hash
module SSL;
export {
@ -35,30 +34,36 @@ export {
const notify_when_cert_expiring_in = 30days &redef;
}
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3
event ssl_established(c: connection) &priority=3
{
# If this isn't the host cert or we aren't interested in the server, just return.
if ( is_orig ||
chain_idx != 0 ||
! c$ssl?$cert_hash ||
! addr_matches_host(c$id$resp_h, notify_certs_expiration) )
# If there are no certificates or we are not interested in the server, just return.
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! addr_matches_host(c$id$resp_h, notify_certs_expiration) ||
! c$ssl$cert_chain[0]?$x509 || ! c$ssl$cert_chain[0]?$sha1 )
return;
local fuid = c$ssl$cert_chain_fuids[0];
local cert = c$ssl$cert_chain[0]$x509$certificate;
local hash = c$ssl$cert_chain[0]$sha1;
if ( cert$not_valid_before > network_time() )
NOTICE([$note=Certificate_Not_Valid_Yet,
$conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s isn't valid until %T", cert$subject, cert$not_valid_before),
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]);
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
else if ( cert$not_valid_after < network_time() )
NOTICE([$note=Certificate_Expired,
$conn=c, $suppress_for=1day,
$msg=fmt("Certificate %s expired at %T", cert$subject, cert$not_valid_after),
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]);
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
else if ( cert$not_valid_after - notify_when_cert_expiring_in < network_time() )
NOTICE([$note=Certificate_Expires_Soon,
$msg=fmt("Certificate %s is going to expire at %T", cert$subject, cert$not_valid_after),
$conn=c, $suppress_for=1day,
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]);
$identifier=cat(c$id$resp_h, c$id$resp_p, hash),
$fuid=fuid]);
}
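Both knobs referenced above are meant to be tuned via redef; an editorial example tightening the warning window and limiting checks to local servers:
redef SSL::notify_when_cert_expiring_in = 7days;
redef SSL::notify_certs_expiration = LOCAL_HOSTS;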

View file

@ -10,8 +10,8 @@
##!
@load base/protocols/ssl
@load base/files/x509
@load base/utils/directions-and-hosts
@load protocols/ssl/cert-hash
module SSL;
@ -23,41 +23,32 @@ export {
}
# This is an internally maintained variable to prevent relogging of
# certificates that have already been seen. It is indexed on an md5 sum of
# certificates that have already been seen. It is indexed on a SHA1 sum of
# the certificate.
global extracted_certs: set[string] = set() &read_expire=1hr &redef;
event ssl_established(c: connection) &priority=5
{
if ( ! c$ssl?$cert )
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! c$ssl$cert_chain[0]?$x509 )
return;
if ( ! addr_matches_host(c$id$resp_h, extract_certs_pem) )
return;
if ( c$ssl$cert_hash in extracted_certs )
local hash = c$ssl$cert_chain[0]$sha1;
local cert = c$ssl$cert_chain[0]$x509$handle;
if ( hash in extracted_certs )
# If we already extracted this cert, don't do it again.
return;
add extracted_certs[c$ssl$cert_hash];
add extracted_certs[hash];
local filename = Site::is_local_addr(c$id$resp_h) ? "certs-local.pem" : "certs-remote.pem";
local outfile = open_for_append(filename);
enable_raw_output(outfile);
print outfile, "-----BEGIN CERTIFICATE-----";
print outfile, x509_get_certificate_string(cert, T);
# Encode to base64 and format to fit 50 lines. Otherwise openssl won't like it later.
local lines = split_all(encode_base64(c$ssl$cert), /.{50}/);
local i = 1;
for ( line in lines )
{
if ( |lines[i]| > 0 )
{
print outfile, lines[i];
}
i+=1;
}
print outfile, "-----END CERTIFICATE-----";
print outfile, "";
close(outfile);
}
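A usage note (editorial): the host filter above is &redef-able, e.g. to dump certificates for every server seen rather than just local ones:
redef SSL::extract_certs_pem = ALL_HOSTS;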

View file

@ -0,0 +1,238 @@
##! Detect the TLS heartbleed attack. See http://heartbleed.com for more.
@load base/protocols/ssl
@load base/frameworks/notice
module Heartbleed;
export {
redef enum Notice::Type += {
## Indicates that a host performed a heartbleed attack or scan.
SSL_Heartbeat_Attack,
## Indicates that a host performing a heartbleed attack was probably successful.
SSL_Heartbeat_Attack_Success,
## Indicates we saw heartbeat requests with an odd length. Probably an attack or scan.
SSL_Heartbeat_Odd_Length,
## Indicates we saw many heartbeat requests without a reply. Might be an attack.
SSL_Heartbeat_Many_Requests
};
}
# Do not disable analyzers after detection - otherwise we will not notice
# encrypted attacks.
redef SSL::disable_analyzer_after_detection=F;
redef record SSL::Info += {
last_originator_heartbeat_request_size: count &optional;
last_responder_heartbeat_request_size: count &optional;
originator_heartbeats: count &default=0;
responder_heartbeats: count &default=0;
# For unencrypted connections: whether an exploit attempt has been detected yet.
heartbleed_detected: bool &default=F;
# Count the number of encrypted application-data packets and bytes exchanged so far.
enc_appdata_packages: count &default=0;
enc_appdata_bytes: count &default=0;
};
type min_length: record {
cipher: pattern;
min_length: count;
};
global min_lengths: vector of min_length = vector();
global min_lengths_tls11: vector of min_length = vector();
event bro_init()
{
# Minimum length a heartbeat packet must have for different cipher suites.
# Note - TLS 1.1 and later use different heartbeat lengths than TLS 1.0 :(
# This should be all cipher suites usually supported by vulnerable servers.
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_256_GCM_SHA384$/, $min_length=43];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_128_GCM_SHA256$/, $min_length=43];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA384$/, $min_length=96];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA256$/, $min_length=80];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA$/, $min_length=64];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA256$/, $min_length=80];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA$/, $min_length=64];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=64];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=48];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_256_CBC_SHA$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_128_CBC_SHA$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=48];
min_lengths[|min_lengths|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_DES_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=40];
min_lengths[|min_lengths|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
min_lengths[|min_lengths|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
min_lengths[|min_lengths|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
min_lengths[|min_lengths|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=40];
}
event ssl_heartbeat(c: connection, is_orig: bool, length: count, heartbeat_type: count, payload_length: count, payload: string)
{
if ( ! c?$ssl )
return;
if ( heartbeat_type == 1 )
{
local checklength: count = (length<(3+16)) ? length : (length - 3 - 16);
if ( payload_length > checklength )
{
c$ssl$heartbleed_detected = T;
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack,
$msg=fmt("An TLS heartbleed attack was detected! Record length %d. Payload length %d", length, payload_length),
$conn=c,
$identifier=cat(c$uid, length, payload_length)
]);
}
else if ( is_orig )
{
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack,
$msg=fmt("Heartbeat request before encryption. Probable Scan without exploit attempt. Message length: %d. Payload length: %d", length, payload_length),
$conn=c,
$n=length,
$identifier=cat(c$uid, length)
]);
}
}
if ( heartbeat_type == 2 && c$ssl$heartbleed_detected )
{
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack_Success,
$msg=fmt("An TLS heartbleed attack detected before was probably exploited. Message length: %d. Payload length: %d", length, payload_length),
$conn=c,
$identifier=c$uid
]);
}
}
event ssl_encrypted_heartbeat(c: connection, is_orig: bool, length: count)
{
if ( is_orig )
++c$ssl$originator_heartbeats;
else
++c$ssl$responder_heartbeats;
local duration = network_time() - c$start_time;
if ( c$ssl$enc_appdata_packages == 0 )
NOTICE([$note=SSL_Heartbeat_Attack,
$msg=fmt("Heartbeat before ciphertext. Probable attack or scan. Length: %d, is_orig: %d", length, is_orig),
$conn=c,
$n=length,
$identifier=fmt("%s%s", c$uid, "early")
]);
else if ( duration < 1min )
NOTICE([$note=SSL_Heartbeat_Attack,
$msg=fmt("Heartbeat within first minute. Possible attack or scan. Length: %d, is_orig: %d, time: %s", length, is_orig, duration),
$conn=c,
$n=length,
$identifier=fmt("%s%s", c$uid, "early")
]);
if ( c$ssl$originator_heartbeats > c$ssl$responder_heartbeats + 3 )
NOTICE([$note=SSL_Heartbeat_Many_Requests,
$msg=fmt("More than 3 heartbeat requests without replies from server. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
$conn=c,
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
]);
if ( c$ssl$responder_heartbeats > c$ssl$originator_heartbeats + 3 )
NOTICE([$note=SSL_Heartbeat_Many_Requests,
$msg=fmt("Server sending more heartbeat responses than requests seen. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
$conn=c,
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
]);
if ( is_orig && length < 19 )
NOTICE([$note=SSL_Heartbeat_Odd_Length,
$msg=fmt("Heartbeat message smaller than minimum required length. Probable attack or scan. Message length: %d. Cipher: %s. Time: %f", length, c$ssl$cipher, duration),
$conn=c,
$n=length,
$identifier=fmt("%s-weak-%d", c$uid, length)
]);
# Examine request lengths based on used cipher...
local min_length_choice: vector of min_length;
if ( (c$ssl$version == "TLSv11") || (c$ssl$version == "TLSv12") ) # tls 1.1+ have different lengths for CBC
min_length_choice = min_lengths_tls11;
else
min_length_choice = min_lengths;
for ( i in min_length_choice )
{
if ( min_length_choice[i]$cipher in c$ssl$cipher )
{
if ( length < min_length_choice[i]$min_length )
{
NOTICE([$note=SSL_Heartbeat_Odd_Length,
$msg=fmt("Heartbeat message smaller than minimum required length. Probable attack. Message length: %d. Required length: %d. Cipher: %s. Cipher match: %s", length, min_length_choice[i]$min_length, c$ssl$cipher, min_length_choice[i]$cipher),
$conn=c,
$n=length,
$identifier=fmt("%s-weak-%d", c$uid, length)
]);
}
break;
}
}
if ( is_orig )
{
if ( c$ssl?$last_responder_heartbeat_request_size )
{
# Server-originated heartbeat. Ignore & continue.
delete c$ssl$last_responder_heartbeat_request_size;
}
else
c$ssl$last_originator_heartbeat_request_size = length;
}
else
{
if ( c$ssl?$last_originator_heartbeat_request_size && c$ssl$last_originator_heartbeat_request_size < length )
{
NOTICE([$note=SSL_Heartbeat_Attack_Success,
$msg=fmt("An encrypted TLS heartbleed attack was probably detected! First packet client record length %d, first packet server record length %d. Time: %f",
c$ssl$last_originator_heartbeat_request_size, length, duration),
$conn=c,
$identifier=c$uid # only throw once per connection
]);
}
else if ( ! c$ssl?$last_originator_heartbeat_request_size )
c$ssl$last_responder_heartbeat_request_size = length;
if ( c$ssl?$last_originator_heartbeat_request_size )
delete c$ssl$last_originator_heartbeat_request_size;
}
}
event ssl_encrypted_data(c: connection, is_orig: bool, content_type: count, length: count)
{
if ( !c?$ssl )
return;
if ( content_type == SSL::HEARTBEAT )
event ssl_encrypted_heartbeat(c, is_orig, length);
else if ( (content_type == SSL::APPLICATION_DATA) && (length > 0) )
{
++c$ssl$enc_appdata_packages;
c$ssl$enc_appdata_bytes += length;
}
}
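One way to act on the notices above (editorial sketch using the standard notice policy hook): escalate successful-looking attacks to e-mail.
hook Notice::policy(n: Notice::Info)
	{
	if ( n$note == Heartbleed::SSL_Heartbeat_Attack_Success )
		add n$actions[Notice::ACTION_EMAIL];
	}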

View file

@ -3,7 +3,7 @@
@load base/utils/directions-and-hosts
@load base/protocols/ssl
@load protocols/ssl/cert-hash
@load base/files/x509
module Known;
@ -33,7 +33,7 @@ export {
## The set of all known certificates to store for preventing duplicate
## logging. It can also be used from other scripts to
## inspect if a certificate has been seen in use. The string value
## in the set is for storing the DER formatted certificate's MD5 hash.
## in the set is for storing the DER formatted certificate's SHA1 hash.
global certs: set[addr, string] &create_expire=1day &synchronized &redef;
## Event that can be handled to access the loggable record as it is sent
@ -46,16 +46,28 @@ event bro_init() &priority=5
Log::create_stream(Known::CERTS_LOG, [$columns=CertsInfo, $ev=log_known_certs]);
}
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3
event ssl_established(c: connection) &priority=3
{
# Make sure this is the server cert and we have a hash for it.
if ( is_orig || chain_idx != 0 || ! c$ssl?$cert_hash )
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| < 1 ||
! c$ssl$cert_chain[0]?$x509 )
return;
local host = c$id$resp_h;
if ( [host, c$ssl$cert_hash] !in certs && addr_matches_host(host, cert_tracking) )
local fuid = c$ssl$cert_chain_fuids[0];
if ( ! c$ssl$cert_chain[0]?$sha1 )
{
add certs[host, c$ssl$cert_hash];
Reporter::error(fmt("Certificate with fuid %s did not contain sha1 hash when checking for known certs. Aborting",
fuid));
return;
}
local hash = c$ssl$cert_chain[0]$sha1;
local cert = c$ssl$cert_chain[0]$x509$certificate;
local host = c$id$resp_h;
if ( [host, hash] !in certs && addr_matches_host(host, cert_tracking) )
{
add certs[host, hash];
Log::write(Known::CERTS_LOG, [$ts=network_time(), $host=host,
$port_num=c$id$resp_p, $subject=cert$subject,
$issuer_subject=cert$issuer,

View file

@ -0,0 +1,68 @@
##! When this script is loaded, only the host certificates (client and server)
##! will be logged to x509.log. Logging of all other certificates will be suppressed.
@load base/protocols/ssl
@load base/files/x509
module X509;
export {
redef record Info += {
# Logging is suppressed if field is set to F
logcert: bool &default=T;
};
}
# We need to modify both the X509::Info and the fa_file records.
# The only moment at which we have both the connection and the file
# available without having to loop is in the file_over_new_connection
# event.
# When that event is raised, the x509 record in f$info (which is the only
# record the logging framework gets) is not yet available. So we
# have to do this in two steps, sorry.
# Alternatively, we could place it into Files::Info first - but we would
# still have to copy it.
redef record fa_file += {
logcert: bool &default=T;
};
function host_certs_only(rec: X509::Info): bool
{
return rec$logcert;
}
event bro_init() &priority=2
{
local f = Log::get_filter(X509::LOG, "default");
Log::remove_filter(X509::LOG, "default"); # disable default logging
f$pred=host_certs_only; # and add our predicate
Log::add_filter(X509::LOG, f);
}
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=2
{
if ( ! c?$ssl )
return;
local chain: vector of string;
if ( is_orig )
chain = c$ssl$client_cert_chain_fuids;
else
chain = c$ssl$cert_chain_fuids;
if ( |chain| == 0 )
{
Reporter::warning(fmt("Certificate not in chain? (fuid %s)", f$id));
return;
}
# Check if this is the host certificate
if ( f$id != chain[0] )
f$logcert=F;
}
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=2
{
f$info$x509$logcert = f$logcert; # info record available, copy information.
}

View file

@ -16,7 +16,6 @@ export {
}
redef record SSL::Info += {
sha1: string &log &optional;
notary: Response &log &optional;
};
@ -38,14 +37,13 @@ function clear_waitlist(digest: string)
}
}
event x509_certificate(c: connection, is_orig: bool, cert: X509,
chain_idx: count, chain_len: count, der_cert: string)
event ssl_established(c: connection) &priority=3
{
if ( is_orig || chain_idx != 0 || ! c?$ssl )
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
! c$ssl$cert_chain[0]?$sha1 )
return;
local digest = sha1_hash(der_cert);
c$ssl$sha1 = digest;
local digest = c$ssl$cert_chain[0]$sha1;
if ( digest in notary_cache )
{

Some files were not shown because too many files have changed in this diff.