Mirror of https://github.com/zeek/zeek.git, synced 2025-10-02 14:48:21 +00:00
Merge remote-tracking branch 'origin/master' into topic/seth/files-tracking
Conflicts:
    src/Reassem.cc
    src/Reassem.h
    src/analyzer/protocol/tcp/TCP_Reassembler.cc
    testing/btest/Baseline/scripts.base.frameworks.file-analysis.bifs.set_timeout_interval/bro..stdout
    testing/btest/Baseline/scripts.base.frameworks.file-analysis.http.partial-content/b.out
    testing/btest/Baseline/scripts.base.frameworks.file-analysis.http.partial-content/c.out
    testing/btest/Baseline/scripts.base.frameworks.file-analysis.logging/files.log
commit fb0a658a7c
658 changed files with 21875 additions and 5792 deletions
3
.gitmodules
vendored
|
@ -16,9 +16,6 @@
|
|||
[submodule "cmake"]
|
||||
path = cmake
|
||||
url = git://git.bro.org/cmake
|
||||
[submodule "magic"]
|
||||
path = magic
|
||||
url = git://git.bro.org/bromagic
|
||||
[submodule "src/3rdparty"]
|
||||
path = src/3rdparty
|
||||
url = git://git.bro.org/bro-3rdparty
|
||||
|
|
746
CHANGES
|
@ -1,4 +1,750 @@
|
|||
|
||||
2.2-470 | 2014-05-16 15:16:32 -0700
|
||||
|
||||
* Add a new section "Cluster Configuration" to the docs that is
|
||||
intended as a how-to for configuring a Bro cluster. Most of this
|
||||
content was moved here from the BroControl doc (which is now
|
||||
intended as more of a reference guide for more experienced users)
|
||||
and the load balancing FAQ on the website. (Daniel Thayer)
|
||||
|
||||
* Update some doc tests and line numbers (Daniel Thayer)
|
||||
|
||||
2.2-457 | 2014-05-16 14:38:31 -0700
|
||||
|
||||
* New script policy/protocols/ssl/validate-ocsp.bro that adds OCSP
|
||||
validation to ssl.log. The work is done by a new bif
|
||||
x509_ocsp_verify(); a loading example follows this entry. (Bernhard Amann)
|
||||
|
||||
* STARTTLS support for POP3 and SMTP. The SSL analyzer takes over
|
||||
when seen. smtp.log now logs when a connection switches to SSL.
|
||||
(Bernhard Amann)
|
||||
|
||||
* Replace errors when parsing x509 certs with weirds. (Bernhard
|
||||
Amann)
|
||||
|
||||
* Improved Heartbleed attack/scan detection. (Bernhard Amann)
|
||||
|
||||
* Let TLS analyzer fail better when no longer in sync with the data
|
||||
stream. (Bernhard Amann)
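
Neither validate-ocsp.bro nor the Heartbleed policy script is loaded by
default; assuming the usual policy-script layout (paths inferred from the
entries above and the NEWS file), a local.bro can pull them in like this:

    # Enable OCSP validation results in ssl.log and Heartbleed notices.
    @load protocols/ssl/validate-ocsp
    @load protocols/ssl/heartbleed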
|
||||
|
||||
2.2-444 | 2014-05-16 14:10:32 -0500
|
||||
|
||||
* Disable all default AppStat plugins except facebook. (Jon Siwek)
|
||||
|
||||
* Update for the active http test to force it to use ipv4. (Seth Hall)
|
||||
|
||||
2.2-441 | 2014-05-15 11:29:56 -0700
|
||||
|
||||
* A new RADIUS analyzer. (Vlad Grigorescu)
|
||||
|
||||
It produces a radius.log and generates two events:
|
||||
|
||||
event radius_message(c: connection, result: RADIUS::Message);
|
||||
event radius_attribute(c: connection, attr_type: count, value: string);
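
A minimal handler for these events might look like the sketch below; the
attribute number for User-Name (1) comes from RFC 2865, not from this entry:

    # Sketch only: print the RADIUS User-Name attribute (type 1, RFC 2865)
    # together with the client address it was seen from.
    event radius_attribute(c: connection, attr_type: count, value: string)
        {
        if ( attr_type == 1 )
            print fmt("RADIUS User-Name %s from %s", value, c$id$orig_h);
        }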
|
||||
|
||||
2.2-427 | 2014-05-15 13:37:23 -0400
|
||||
|
||||
* Fix dynamic SumStats update on clusters (Bernhard Amann)
|
||||
|
||||
2.2-425 | 2014-05-08 16:34:44 -0700
|
||||
|
||||
* Fix reassembly of data w/ sizes beyond 32-bit capacities. (Jon Siwek)
|
||||
|
||||
Reassembly code (e.g. for TCP) now uses int64/uint64 (signedness
|
||||
is situational) data types in place of int types in order to
|
||||
support delivering data to analyzers that pass 2GB thresholds.
|
||||
There's also changes in logic that accompany the change in data
|
||||
types, e.g. to fix TCP sequence space arithmetic inconsistencies.
|
||||
|
||||
Another significant change is in the Analyzer API: the *Packet and
|
||||
*Undelivered methods now use a uint64 in place of an int for the
|
||||
relative sequence space offset parameter.
|
||||
|
||||
Addresses BIT-348.
|
||||
|
||||
* Fixing compiler warnings. (Robin Sommer)
|
||||
|
||||
* Update SNMP analyzer's DeliverPacket method signature. (Jon Siwek)
|
||||
|
||||
2.2-417 | 2014-05-07 10:59:22 -0500
|
||||
|
||||
* Change handling of atypical OpenSSL error case in x509 verification. (Jon Siwek)
|
||||
|
||||
* Fix memory leaks in X509 certificate parsing/verification. (Jon Siwek)
|
||||
|
||||
* Fix new []/delete mismatch in input::reader::Raw::DoClose(). (Jon Siwek)
|
||||
|
||||
* Fix buffer over-reads in file_analysis::Manager::Terminate() (Jon Siwek)
|
||||
|
||||
* Fix buffer overlows in IP address masking logic. (Jon Siwek)
|
||||
|
||||
That could occur either in taking a zero-length mask on an IPv6 address
|
||||
(e.g. [fe80::]/0) or a reverse mask of length 128 on any address (e.g.
|
||||
via the remask_addr BuiltIn Function).
|
||||
|
||||
* Fix new []/delete mismatch in ~Base64Converter. (Jon Siwek)
|
||||
|
||||
2.2-410 | 2014-05-02 12:49:53 -0500
|
||||
|
||||
* Replace an unneeded OPENSSL_malloc call. (Jon Siwek)
|
||||
|
||||
2.2-409 | 2014-05-02 12:09:06 -0500
|
||||
|
||||
* Clean up and documentation for base SNMP script. (Jon Siwek)
|
||||
|
||||
* Update base SNMP script to now produce a snmp.log. (Seth Hall)
|
||||
|
||||
* Add DH support to SSL analyzer. When using DHE or DH-Anon, server
|
||||
key parameters are now available in scriptland. Also add script to
|
||||
alert on weak certificate keys or weak dh-params. (Bernhard Amann)
|
||||
|
||||
* Add a few more ciphers Bro did not know at all so far. (Bernhard Amann)
|
||||
|
||||
* Log chosen curve when using ec cipher suite in TLS. (Bernhard Amann)
|
||||
|
||||
2.2-397 | 2014-05-01 20:29:20 -0700
|
||||
|
||||
* Fix reference counting for lookup_ID() usages. (Jon Siwek)
|
||||
|
||||
2.2-395 | 2014-05-01 20:25:48 -0700
|
||||
|
||||
* Fix missing "irc-dcc-data" service field from IRC DCC connections.
|
||||
(Jon Siwek)
|
||||
|
||||
* Correct a notice for heartbleed. The notice is thrown correctly,
|
||||
just the message contained wrong values. (Bernhard Amann)
|
||||
|
||||
* Improve/standardize some malloc/realloc return value checks. (Jon
|
||||
Siwek)
|
||||
|
||||
* Improve file analysis manager shutdown/cleanup. (Jon Siwek)
|
||||
|
||||
2.2-388 | 2014-04-24 18:38:07 -0700
|
||||
|
||||
* Fix decoding of MIME quoted-printable. (Mareq)
|
||||
|
||||
2.2-386 | 2014-04-24 18:22:29 -0700
|
||||
|
||||
* Do an Intel::ADDR lookup for host field if we find an IP address
|
||||
there. (jshlbrd)
|
||||
|
||||
2.2-381 | 2014-04-24 17:08:45 -0700
|
||||
|
||||
* Add Java version to software framework. (Brian Little)
|
||||
|
||||
2.2-379 | 2014-04-24 17:06:21 -0700
|
||||
|
||||
* Remove unused Val::attribs member. (Jon Siwek)
|
||||
|
||||
2.2-377 | 2014-04-24 16:57:54 -0700
|
||||
|
||||
* A larger set of SSL improvements and extensions. Addresses
|
||||
BIT-1178. (Bernhard Amann)
|
||||
|
||||
- Fixes TLS protocol version detection. It also should
|
||||
bail-out correctly on non-tls-connections now
|
||||
|
||||
- Adds support for a few TLS extensions, including
|
||||
server_name, alpn, and ec-curves.
|
||||
|
||||
- Adds support for the heartbeat events.
|
||||
|
||||
- Add Heartbleed detector script.
|
||||
|
||||
- Adds basic support for OCSP stapling.
|
||||
|
||||
* Fix parsing of DNS TXT RRs w/ multiple character-strings.
|
||||
Addresses BIT-1156. (Jon Siwek)
|
||||
|
||||
2.2-353 | 2014-04-24 16:12:30 -0700
|
||||
|
||||
* Adapt HTTP partial content to cache file analysis IDs. (Jon Siwek)
|
||||
|
||||
* Adapt SSL analyzer to generate file analysis handles itself. (Jon
|
||||
Siwek)
|
||||
|
||||
* Adapt more of HTTP analyzer to use cached file analysis IDs. (Jon
|
||||
Siwek)
|
||||
|
||||
* Adapt IRC/FTP analyzers to cache file analysis IDs. (Jon Siwek)
|
||||
|
||||
* Refactor regex/signature AcceptingSet data structure and usages.
|
||||
(Jon Siwek)
|
||||
|
||||
* Enforce data size limit when checking files for MIME matches. (Jon
|
||||
Siwek)
|
||||
|
||||
* Refactor file analysis file ID lookup. (Jon Siwek)
|
||||
|
||||
2.2-344 | 2014-04-22 20:13:30 -0700
|
||||
|
||||
* Refactor various hex escaping code. (Jon Siwek)
|
||||
|
||||
2.2-341 | 2014-04-17 18:01:41 -0500
|
||||
|
||||
* Fix duplicate DNS log entries. (Robin Sommer)
|
||||
|
||||
2.2-341 | 2014-04-17 18:01:01 -0500
|
||||
|
||||
* Refactor initialization of ASCII log writer options. (Jon Siwek)
|
||||
|
||||
* Fix a memory leak in ASCII log writer. (Jon Siwek)
|
||||
|
||||
2.2-338 | 2014-04-17 17:48:17 -0500
|
||||
|
||||
* Disable input/logging threads setting their names on every
|
||||
heartbeat. (Jon Siwek)
|
||||
|
||||
* Fix bug when clearing Bloom filter contents. Reported by
|
||||
@colonelxc. (Matthias Vallentin)
|
||||
|
||||
2.2-335 | 2014-04-10 15:04:57 -0700
|
||||
|
||||
* Small logic fix for main SSL script. (Bernhard Amann)
|
||||
|
||||
* Update DPD signatures for detecting TLS 1.2. (Bernhard Amann)
|
||||
|
||||
* Remove unused data member of SMTP_Analyzer to silence a Coverity
|
||||
warning. (Jon Siwek)
|
||||
|
||||
* Fix missing @load dependencies in some scripts. Also update the
|
||||
unit test which is supposed to catch such errors. (Jon Siwek)
|
||||
|
||||
2.2-326 | 2014-04-08 15:21:51 -0700
|
||||
|
||||
* Add SNMP datagram parsing support. This supports parsing of SNMPv1
|
||||
(RFC 1157), SNMPv2 (RFC 1901/3416), and SNMPv3 (RFC 3412). An
|
||||
event is raised for each SNMP PDU type, though there are not
|
||||
currently any event handlers for them, and no default snmp.log
|
||||
either. However, simple presence of SNMP is currently visible now
|
||||
in conn.log service field and known_services.log. (Jon Siwek)
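
As a sketch of how the new PDU events can be consumed (the event name and
signature below are assumptions based on the SNMP analyzer's BIF events,
not quoted from this entry):

    # Assumed signature; check the generated SNMP event documentation.
    event snmp_get_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU)
        {
        print fmt("SNMP GetRequest between %s and %s", c$id$orig_h, c$id$resp_h);
        }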
|
||||
|
||||
2.2-319 | 2014-04-03 15:53:25 -0700
|
||||
|
||||
* Improve __load__.bro creation for .bif.bro stubs. (Jon Siwek)
|
||||
|
||||
2.2-317 | 2014-04-03 10:51:31 -0400
|
||||
|
||||
* Add a uid field to the signatures.log. Addresses BIT-1171
|
||||
(Anthony Verez)
|
||||
|
||||
2.2-315 | 2014-04-01 16:50:01 -0700
|
||||
|
||||
* Change logging's "#types" description of sets to "set". Addresses
|
||||
BIT-1163 (Bernhard Amann)
|
||||
|
||||
2.2-313 | 2014-04-01 16:40:19 -0700
|
||||
|
||||
* Fix a couple of nits reported by Coverity. (Jon Siwek)
|
||||
|
||||
* Fix potential memory leak in IP frag reassembly reported by
|
||||
Coverity. (Jon Siwek)
|
||||
|
||||
2.2-310 | 2014-03-31 18:52:22 -0700
|
||||
|
||||
* Fix memory leak and unchecked dynamic cast reported by Coverity.
|
||||
(Jon Siwek)
|
||||
|
||||
* Fix potential memory leak in x509 parser reported by Coverity.
|
||||
(Bernhard Amann)
|
||||
|
||||
2.2-304 | 2014-03-30 23:05:54 +0200
|
||||
|
||||
* Replace libmagic w/ Bro signatures for file MIME type
|
||||
identification. Addresses BIT-1143. (Jon Siwek)
|
||||
|
||||
Includes:
|
||||
|
||||
- libmagic is no longer used at all. All MIME type detection is
|
||||
done through new Bro signatures, and there's no longer a means
|
||||
to get verbose file type descriptions. The majority of the
|
||||
default file magic signatures are derived from the default magic
|
||||
database of libmagic ~5.17.
|
||||
|
||||
- File magic signatures consist of two new constructs in the
|
||||
signature rule parsing grammar: "file-magic" gives a regular
|
||||
expression to match against, and "file-mime" gives the MIME type
|
||||
string of content that matches the magic and an optional strength
|
||||
value for the match.
|
||||
|
||||
- Modified signature/rule syntax for identifiers: they can no
|
||||
longer start with a '-', which made for ambiguous syntax when
|
||||
doing negative strength values in "file-mime". Also brought
|
||||
syntax for Bro script identifiers in line with reality (they
|
||||
can't start with numbers or include '-' at all).
|
||||
|
||||
- A new built-in function, "file_magic", can be used to get all
|
||||
file magic matches and their corresponding strength against a
|
||||
given chunk of data (a short sketch follows this entry).
|
||||
|
||||
- The second parameter of the "identify_data" built-in function
|
||||
can no longer be used to get verbose file type descriptions,
|
||||
though it can still be used to get the strongest matching file
|
||||
magic signature.
|
||||
|
||||
- The "file_transferred" event's "descr" parameter no longer
|
||||
contains verbose file type descriptions.
|
||||
|
||||
- The BROMAGIC environment variable no longer changes any behavior
|
||||
in Bro as magic databases are no longer used/installed.
|
||||
|
||||
- Removed "binary" and "octet-stream" mime type detections. They
|
||||
don't provide any more information than an uninitialized
|
||||
mime_type field which implicitly means no magic signature
|
||||
matches and so the media type is unknown to Bro.
|
||||
|
||||
- The "fa_file" record now contains a "mime_types" field that
|
||||
contains all magic signatures that matched the file content
|
||||
(where the "mime_type" field is just a shortcut for the
|
||||
strongest match).
|
||||
|
||||
- Reverted back to minimum requirement of CMake 2.6.3 from 2.8.0.
|
||||
|
||||
* The logic for adding file ids to {orig,resp}_fuids fields of the
|
||||
http.log incorrectly depended on the state of
|
||||
{orig,resp}_mime_types fields, so sometimes not all file ids
|
||||
associated w/ the session were logged. (Jon Siwek)
|
||||
|
||||
* Fix MHR script's use of fa_file$mime_type before checking if it's
|
||||
initialized. (Jon Siwek)
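
The "file_magic" function mentioned above can be exercised directly from
script land; this is a sketch only, and the exact shape of the returned
match records is an assumption rather than something stated in this entry:

    event bro_init()
        {
        # Any chunk of file content works; a PDF header is used here.
        local data = "%PDF-1.5 sample content";
        local matches = file_magic(data);

        # Each element is assumed to pair a MIME type with a match strength.
        for ( i in matches )
            print matches[i];
        }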
|
||||
|
||||
2.2-294 | 2014-03-30 22:08:25 +0200
|
||||
|
||||
* Rework and move X509 certificate processing from the SSL protocol
|
||||
analyzer to a dedicated file analyzer. This will allow us to
|
||||
examine X509 certificates from sources other than SSL in the
|
||||
future. Furthermore, Bro now parses more fields and extensions
|
||||
from the certificates (e.g. elliptic curve information, subject
|
||||
alternative names, basic constraints). Certificate validation also
|
||||
was improved, should be easier to use and exposes information like
|
||||
the full verified certificate chain. (Bernhard Amann)
|
||||
|
||||
This update changes the format of ssl.log and adds a new x509.log
|
||||
with certificate information. Furthermore all x509 events and
|
||||
handling functions have changed.
|
||||
|
||||
2.2-271 | 2014-03-30 20:25:17 +0200
|
||||
|
||||
* Add unit tests covering vector/set/table ctors/inits. (Jon Siwek)
|
||||
|
||||
* Fix parsing of "local" named table constructors. (Jon Siwek)
|
||||
|
||||
* Improve type checking of records. Addresses BIT-1159. (Jon Siwek)
|
||||
|
||||
2.2-267 | 2014-03-30 20:21:43 +0200
|
||||
|
||||
* Improve documentation of Bro clusters. Addresses BIT-1160.
|
||||
(Daniel Thayer)
|
||||
|
||||
2.2-263 | 2014-03-30 20:19:05 +0200
|
||||
|
||||
* Don't include locations into serialization when cloning values.
|
||||
(Robin Sommer)
|
||||
|
||||
2.2-262 | 2014-03-30 20:12:47 +0200
|
||||
|
||||
* Refactor SerializationFormat::EndWrite and ChunkedIO::Chunk memory
|
||||
management. (Jon Siwek)
|
||||
|
||||
* Improve SerializationFormat's write buffer growth strategy. (Jon
|
||||
Siwek)
|
||||
|
||||
* Add --parse-only option to exit after parsing scripts. May be
|
||||
useful for syntax-checking tools. (Jon Siwek)
|
||||
|
||||
2.2-256 | 2014-03-30 19:57:28 +0200
|
||||
|
||||
* For the summary statistics framework, change all &create_expire
|
||||
attributes to &read_expire in the cluster part. (Bernhard Amann)
|
||||
|
||||
2.2-254 | 2014-03-30 19:55:22 +0200
|
||||
|
||||
* Update instructions on how to build Bro docs. (Daniel Thayer)
|
||||
|
||||
2.2-251 | 2014-03-28 08:37:37 -0400
|
||||
|
||||
* Quick fix to the ElasticSearch writer. (Seth Hall)
|
||||
|
||||
2.2-250 | 2014-03-19 17:20:55 -0400
|
||||
|
||||
* Improve performance of MHR script by reducing cloned Vals in
|
||||
a "when" scope. (Jon Siwek)
|
||||
|
||||
2.2-248 | 2014-03-19 14:47:40 -0400
|
||||
|
||||
* Make SumStats work incrementally and non-blocking in non-cluster
|
||||
mode, but force it to operate by blocking if Bro is shutting
|
||||
down. (Seth Hall)
|
||||
|
||||
2.2-244 | 2014-03-17 08:24:17 -0700
|
||||
|
||||
* Fix compile error on FreeBSD caused by wrong include file order.
|
||||
(Bernhard Amann)
|
||||
|
||||
2.2-240 | 2014-03-14 10:23:54 -0700
|
||||
|
||||
* Derive results of DNS lookups from input when in BRO_DNS_FAKE
|
||||
mode. Addresses BIT-1134. (Jon Siwek)
|
||||
|
||||
* Fixing a few cases of undefined behaviour introduced by recent
|
||||
formatter work.
|
||||
|
||||
* Fixing compiler error. (Robin Sommer)
|
||||
|
||||
* Fixing (very unlikely) double delete in HTTP analyzer when
|
||||
decapsulating CONNECTs. (Robin Sommer)
|
||||
|
||||
2.2-235 | 2014-03-13 16:21:19 -0700
|
||||
|
||||
* The Ascii writer has a new option LogAscii::use_json for writing
|
||||
out logs as JSON; a one-line example follows this entry. (Seth Hall)
|
||||
|
||||
* Ascii input reader now supports all config options as per-input
|
||||
stream "config" values. (Seth Hall)
|
||||
|
||||
* Refactored formatters and updated the writers a bit. (Seth
|
||||
Hall)
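
Enabling the JSON output mentioned in the first item is a single redef;
this sketch flips the ASCII writer globally (per-stream filter
configuration is also possible but not shown here):

    # Write all ASCII logs as JSON instead of tab-separated columns.
    redef LogAscii::use_json = T;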
|
||||
|
||||
2.2-229 | 2014-03-13 14:58:30 -0700
|
||||
|
||||
* Refactoring analyzer manager code to reuse
|
||||
ApplyScheduledAnalyzers(). (Robin Sommer)
|
||||
|
||||
2.2-228 | 2014-03-13 14:25:53 -0700
|
||||
|
||||
* Teach async DNS lookup builtin-functions about BRO_DNS_FAKE.
|
||||
Addresses BIT-1134. (Jon Siwek)
|
||||
|
||||
* Enable fake DNS mode for test suites.
|
||||
|
||||
* Improve analysis of TCP SYN/SYN-ACK reversal situations. (Jon
|
||||
Siwek)
|
||||
|
||||
- Since it's just the handshake packets out of order, they're no
|
||||
longer treated as partial connections, which some protocol analyzers
|
||||
immediately refuse to look at.
|
||||
|
||||
- The TCP_Reassembler "is_orig" state failed to change, which led to
|
||||
protocol analyzers sometimes using the wrong value for that.
|
||||
|
||||
- Add a unit test which exercises the Connection::FlipRoles() code
|
||||
path (i.e. the SYN/SYN-ACK reversal situation).
|
||||
|
||||
Addresses BIT-1148.
|
||||
|
||||
* Fix bug in Connection::FlipRoles. It didn't swap address values
|
||||
right and also didn't consider that analyzers might be scheduled
|
||||
for the new connection tuple. Reported by Kevin McMahon. Addresses
|
||||
BIT-1148. (Jon Siwek)
|
||||
|
||||
2.2-221 | 2014-03-12 17:23:18 -0700
|
||||
|
||||
* Teach configure script --enable-jemalloc, --with-jemalloc.
|
||||
Addresses BIT-1128. (Jon Siwek)
|
||||
|
||||
2.2-218 | 2014-03-12 17:19:45 -0700
|
||||
|
||||
* Improve DBG_LOG macro (perf. improvement for --enable-debug mode).
|
||||
(Jon Siwek)
|
||||
|
||||
* Silences some documentation warnings from Sphinx. (Jon Siwek)
|
||||
|
||||
2.2-215 | 2014-03-10 11:10:15 -0700
|
||||
|
||||
* Fix non-deterministic logging of unmatched DNS msgs. Addresses
|
||||
BIT-1153 (Jon Siwek)
|
||||
|
||||
2.2-213 | 2014-03-09 08:57:37 -0700
|
||||
|
||||
* No longer accidentally attempting to parse NBSTAT RRs as SRV RRs
|
||||
in DNS analyzer. (Seth Hall)
|
||||
|
||||
* Fix DNS SRV responses and a small issue with NBNS queries and
|
||||
label length. (Seth Hall)
|
||||
|
||||
- DNS SRV responses never had the code written to actually
|
||||
generate the dns_SRV_reply event. Adding this required
|
||||
extending the event a bit to add extra information. SRV responses
|
||||
now appear in the dns.log file correctly.
|
||||
|
||||
- Fixed an issue where some Microsoft NetBIOS Name Service lookups
|
||||
would exceed the max label length for DNS and cause an incorrect
|
||||
"DNS_label_too_long" weird.
|
||||
|
||||
2.2-210 | 2014-03-06 22:52:36 -0500
|
||||
|
||||
* Improve SSL logging so that connections are logged even when the
|
||||
ssl_established event is not generated as well as other small SSL
|
||||
fixes. (Bernhard Amann)
|
||||
|
||||
2.2-206 | 2014-03-03 16:52:28 -0800
|
||||
|
||||
* HTTP CONNECT proxy support. The HTTP analyzer now supports
|
||||
handling HTTP CONNECT proxies. (Seth Hall)
|
||||
|
||||
* Expanding the HTTP methods used in the DPD signature to detect
|
||||
HTTP traffic. (Seth Hall)
|
||||
|
||||
* Fixing removal of support analyzers. (Robin Sommer)
|
||||
|
||||
2.2-199 | 2014-03-03 16:34:20 -0800
|
||||
|
||||
* Allow iterating over bif functions with result type vector of any.
|
||||
This changes the internal type that is used to signal that a
|
||||
vector is unspecified from any to void. Addresses BIT-1144
|
||||
(Bernhard Amann)
|
||||
|
||||
2.2-197 | 2014-02-28 15:36:58 -0800
|
||||
|
||||
* Remove test code. (Robin Sommer)
|
||||
|
||||
2.2-194 | 2014-02-28 14:50:53 -0800
|
||||
|
||||
* Remove packet sorter. Addresses BIT-700. (Bernhard Amann)
|
||||
|
||||
2.2-192 | 2014-02-28 09:46:43 -0800
|
||||
|
||||
* Update Mozilla root bundle. (Bernhard Amann)
|
||||
|
||||
2.2-190 | 2014-02-27 07:34:44 -0800
|
||||
|
||||
* Adjust timings of a few leak tests. (Bernhard Amann)
|
||||
|
||||
2.2-187 | 2014-02-25 07:24:42 -0800
|
||||
|
||||
* More Google TLS extensions that are being actively used. (Bernhard
|
||||
Amann)
|
||||
|
||||
* Remove unused, and potentially unsafe, function
|
||||
ListVal::IncludedInString. (Bernhard Amann)
|
||||
|
||||
2.2-184 | 2014-02-24 07:28:18 -0800
|
||||
|
||||
* New TLS constants from
|
||||
https://tools.ietf.org/html/draft-bmoeller-tls-downgrade-scsv-01.
|
||||
(Bernhard Amann)
|
||||
|
||||
2.2-180 | 2014-02-20 17:29:14 -0800
|
||||
|
||||
* New SSL alert descriptions from
|
||||
https://tools.ietf.org/html/draft-ietf-tls-applayerprotoneg-04.
|
||||
(Bernhard Amann)
|
||||
|
||||
* Update SQLite. (Bernhard Amann)
|
||||
|
||||
2.2-177 | 2014-02-20 17:27:46 -0800
|
||||
|
||||
* Update to libmagic version 5.17. Addresses BIT-1136. (Jon Siwek)
|
||||
|
||||
2.2-174 | 2014-02-14 12:07:04 -0800
|
||||
|
||||
* Support for MPLS over VLAN. (Chris Kanich)
|
||||
|
||||
2.2-173 | 2014-02-14 10:50:15 -0800
|
||||
|
||||
* Fix misidentification of SOCKS traffic that in particular seemed
|
||||
to happen a lot with DCE/RPC traffic. (Vlad Grigorescu)
|
||||
|
||||
2.2-170 | 2014-02-13 16:42:07 -0800
|
||||
|
||||
* Refactor DNS script's state management to improve performance.
|
||||
(Jon Siwek)
|
||||
|
||||
* Revert "Expanding the HTTP methods used in the signature to detect
|
||||
HTTP traffic." (Robin Sommer)
|
||||
|
||||
2.2-167 | 2014-02-12 20:17:39 -0800
|
||||
|
||||
* Increase timeouts of some unit tests. (Jon Siwek)
|
||||
|
||||
* Fix memory leak in modbus analyzer. Would happen if there's a
|
||||
'modbus_read_fifo_queue_response' event handler. (Jon Siwek)
|
||||
|
||||
* Add channel_id TLS extension number. This number is not IANA
|
||||
defined, but we see it being actively used. (Bernhard Amann)
|
||||
|
||||
* Test baseline updates for DNS change. (Robin Sommer)
|
||||
|
||||
2.2-158 | 2014-02-09 23:45:39 -0500
|
||||
|
||||
* Change dns.log to include only standard DNS queries. (Jon Siwek)
|
||||
|
||||
* Improve DNS analysis. (Jon Siwek)
|
||||
|
||||
- Fix parsing of empty question sections (when QDCOUNT == 0). In this
|
||||
case, the DNS parser would extract two 2-byte fields for use in either
|
||||
"dns_query_reply" or "dns_rejected" events (dependent on value of
|
||||
RCODE) as qclass and qtype parameters. This is not correct, because
|
||||
such fields don't actually exist in the DNS message format when
|
||||
QDCOUNT is 0. As a result, these events are no longer raised when
|
||||
there's an empty question section. Scripts that depend on checking
|
||||
for an empty question section can do that in the "dns_message" event.
|
||||
|
||||
- Add a new "dns_unknown_reply" event (handler sketch after this entry), for when Bro does not know how
|
||||
to fully parse a particular resource record type. This helps fix a
|
||||
problem in the default DNS scripts where the logic to complete
|
||||
request-reply pair matching doesn't work because it's waiting on more
|
||||
RR events to complete the reply. i.e. it expects ANCOUNT number of
|
||||
dns_*_reply events and will wait until it gets that many before
|
||||
completing a request-reply pair and logging it to dns.log. This could
|
||||
cause bogus replies to match a previous request if they happen to
|
||||
share a DNS transaction ID. (Jon Siwek)
|
||||
|
||||
- The previous method of matching queries with replies was still
|
||||
unreliable in cases where the reply contains no answers. The new code
|
||||
also takes extra measures to avoid pending state growing too large in
|
||||
cases where the condition to match a query with a corresponding reply is
|
||||
never met, but yet DNS messages continue to be exchanged over the same
|
||||
connection 5-tuple (preventing cleanup of the pending state). (Jon Siwek)
|
||||
|
||||
* Updates to httpmonitor and mimestats documentation. (Jeannette Dopheide)
|
||||
|
||||
* Updates to Logs and Cluster documentation (Jeannette Dopheide)
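
A handler for the new "dns_unknown_reply" event could look like the sketch
below; the parameter list mirrors the other dns_*_reply events and is an
assumption, not quoted from this entry:

    event dns_unknown_reply(c: connection, msg: dns_msg, ans: dns_answer)
        {
        # Report resource record types the parser could not fully decode.
        print fmt("unparsed RR type %d in reply for %s", ans$qtype, ans$query);
        }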
|
||||
|
||||
2.2-147 | 2014-02-07 08:06:53 -0800
|
||||
|
||||
* Fix x509-extension test sometimes failing. (Bernhard Amann)
|
||||
|
||||
2.2-144 | 2014-02-06 20:31:18 -0800
|
||||
|
||||
* Fixing bug in POP3 analyzer. With certain input the analyzer could
|
||||
end up trying to write to non-writable memory. (Robin Sommer)
|
||||
|
||||
2.2-140 | 2014-02-06 17:58:04 -0800
|
||||
|
||||
* Fixing memory leaks in input framework. (Robin Sommer)
|
||||
|
||||
* Add script to detect filtered TCP traces. Addresses BIT-1119. (Jon
|
||||
Siwek)
|
||||
|
||||
2.2-137 | 2014-02-04 09:09:55 -0800
|
||||
|
||||
* Minor unified2 script documentation fix. (Jon Siwek)
|
||||
|
||||
2.2-135 | 2014-01-31 11:09:36 -0800
|
||||
|
||||
* Added some grammar and spelling corrections to Installation and
|
||||
Quick Start Guide. (Jeannette Dopheide)
|
||||
|
||||
2.2-131 | 2014-01-30 16:11:11 -0800
|
||||
|
||||
* Extend file analysis API to allow file ID caching. This allows an
|
||||
analyzer to either provide file IDs associated with some file
|
||||
content or to cache a file ID that was already determined by
|
||||
script-layer logic so that subsequent calls to the file analysis
|
||||
interface can bypass costly detours through script-layer. This
|
||||
can yield a decent performance improvement for analyzers that are
|
||||
able to take advantage of it and deal with streaming content (like
|
||||
HTTP, which has been adapted accordingly). (Jon Siwek)
|
||||
|
||||
2.2-128 | 2014-01-30 15:58:47 -0800
|
||||
|
||||
* Add leak test for Exec module. (Bernhard Amann)
|
||||
|
||||
* Fix file_over_new_connection event to trigger when entire file is
|
||||
missed. (Jon Siwek)
|
||||
|
||||
* Improve TCP connection size reporting for half-open connections.
|
||||
(Jon Siwek)
|
||||
|
||||
* Improve gap reporting in TCP connections that never see data. We
|
||||
no longer accommodate SYN/FIN/RST-filtered traces by not reporting
|
||||
missing data. The behavior can be reverted by redef'ing
|
||||
"detect_filtered_trace". (Jon Siwek)
|
||||
|
||||
* Improve TCP FIN retransmission handling. (Jon Siwek)
|
||||
|
||||
2.2-120 | 2014-01-28 10:25:23 -0800
|
||||
|
||||
* Fix and extend x509_extension() event, which now actually returns
|
||||
the extension. (Bernhard Amann)
|
||||
|
||||
New event signature:
|
||||
|
||||
event x509_extension(c: connection, is_orig: bool, cert: X509, extension: X509_extension_info)
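
A handler against this signature is sketched below; note that the X509
rework recorded in entry 2.2-294 above changes the x509 events again, so
treat this purely as an illustration of the signature listed here:

    event x509_extension(c: connection, is_orig: bool, cert: X509, extension: X509_extension_info)
        {
        # Print the raw extension record alongside the responder address.
        print c$id$resp_h, is_orig, extension;
        }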
|
||||
|
||||
2.2-117 | 2014-01-23 14:18:19 -0800
|
||||
|
||||
* Fixing initialization context in anonymous functions. (Robin
|
||||
Sommer)
|
||||
|
||||
2.2-115 | 2014-01-22 12:11:18 -0800
|
||||
|
||||
* Add unit tests for new Bro Manual docs. (Jon Siwek)
|
||||
|
||||
* New content for the "Using Bro" section of the manual. (Rafael
|
||||
Bonilla/Jon Siwek)
|
||||
|
||||
2.2-105 | 2014-01-20 12:16:48 -0800
|
||||
|
||||
* Support GRE tunnel decapsulation, including enhanced GRE headers.
|
||||
GRE tunnels are treated just like IP-in-IP tunnels by parsing past
|
||||
the GRE header in between the delivery and payload IP packets.
|
||||
Addresses BIT-867. (Jon Siwek)
|
||||
|
||||
* Simplify FragReassembler memory management. (Jon Siwek)
|
||||
|
||||
2.2-102 | 2014-01-20 12:00:29 -0800
|
||||
|
||||
* Include file information (MIME type and description) into notice
|
||||
emails if available. (Justin Azoff)
|
||||
|
||||
2.2-100 | 2014-01-20 11:54:58 -0800
|
||||
|
||||
* Fix caching of recently validated SSL certificates. (Justin Azoff)
|
||||
|
||||
2.2-98 | 2014-01-20 11:50:32 -0800
|
||||
|
||||
* For notice suppression, instead of storing the entire notice in
|
||||
Notice::suppressing, just store the time the notice should be
|
||||
suppressed until. This saves significant memory but can no longer
|
||||
raise end_suppression, which has been removed. (Justin Azoff)
|
||||
|
||||
2.2-96 | 2014-01-20 11:41:07 -0800
|
||||
|
||||
* Integrate libmagic 5.16. Bro now always relies on
|
||||
builtin/shipped magic library/database. (Jon Siwek)
|
||||
|
||||
* Bro now requires CMake 2.8.x, but no longer a pre-installed
|
||||
libmagic. (Jon Siwek)
|
||||
|
||||
2.2-93 | 2014-01-13 09:16:51 -0800
|
||||
|
||||
* Fixing compile problems with some versions of libc++. Reported by
|
||||
Craig Leres. (Robin Sommer)
|
||||
|
||||
2.2-91 | 2014-01-13 01:33:28 -0800
|
||||
|
||||
* Improve GeoIP City database support. When trying to open a city
|
||||
database, it now considers both the "REV0" and "REV1" versions of
|
||||
the city database instead of just the former. (Jon Siwek)
|
||||
|
||||
* Broxygen init fixes. Addresses BIT-1110. (Jon Siwek)
|
||||
|
||||
- Don't check mtime of bro binary if BRO_DISABLE_BROXYGEN env var set.
|
||||
|
||||
- Fix failure to locate bro binary if invoking from a relative
|
||||
path and '.' isn't in PATH.
|
||||
|
||||
* Fix for packet writing to make it use the global snap length.
|
||||
(Seth Hall)
|
||||
|
||||
* Fix for traffic with TCP segmentation offloading with IP header
|
||||
len field being set to zero. (Seth Hall)
|
||||
|
||||
* Canonify output of a unit test. (Jon Siwek)
|
||||
|
||||
* A set of documentation updates. (Daniel Thayer)
|
||||
|
||||
- Fix typo in Bro 2.2 NEWS on string indexing.
|
||||
- Fix typo in the Quick Start Guide, and clarified the
|
||||
instructions about modifying crontab.
|
||||
- Add/fix documentation for missing/misnamed event parameters.
|
||||
- Fix typos in BIF documentation of hexstr_to_bytestring.
|
||||
- Update the documentation of types and attributes.
|
||||
- Documented the new substring extraction functionality.
|
||||
- Clarified the description of "&priority" and "void".
|
||||
|
||||
2.2-75 | 2013-12-18 08:36:50 -0800
|
||||
|
||||
* Fixing segfault with mismatching set &default in record fields.
|
||||
|
|
|
@ -16,17 +16,12 @@ endif ()
|
|||
get_filename_component(BRO_SCRIPT_INSTALL_PATH ${BRO_SCRIPT_INSTALL_PATH}
|
||||
ABSOLUTE)
|
||||
|
||||
set(BRO_MAGIC_INSTALL_PATH ${BRO_ROOT_DIR}/share/bro/magic)
|
||||
set(BRO_MAGIC_SOURCE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/magic/database)
|
||||
|
||||
configure_file(bro-path-dev.in ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev)
|
||||
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.sh
|
||||
"export BROPATH=`${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
|
||||
"export BROMAGIC=\"${BRO_MAGIC_SOURCE_PATH}\"\n"
|
||||
"export PATH=\"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
|
||||
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.csh
|
||||
"setenv BROPATH `${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev`\n"
|
||||
"setenv BROMAGIC \"${BRO_MAGIC_SOURCE_PATH}\"\n"
|
||||
"setenv PATH \"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
|
||||
|
||||
file(STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/VERSION" VERSION LIMIT_COUNT 1)
|
||||
|
@ -57,7 +52,6 @@ FindRequiredPackage(BISON)
|
|||
FindRequiredPackage(PCAP)
|
||||
FindRequiredPackage(OpenSSL)
|
||||
FindRequiredPackage(BIND)
|
||||
FindRequiredPackage(LibMagic)
|
||||
FindRequiredPackage(ZLIB)
|
||||
|
||||
if (NOT BinPAC_ROOT_DIR AND
|
||||
|
@ -66,6 +60,10 @@ if (NOT BinPAC_ROOT_DIR AND
|
|||
endif ()
|
||||
FindRequiredPackage(BinPAC)
|
||||
|
||||
if (ENABLE_JEMALLOC)
|
||||
find_package(JeMalloc)
|
||||
endif ()
|
||||
|
||||
if (MISSING_PREREQS)
|
||||
foreach (prereq ${MISSING_PREREQ_DESCS})
|
||||
message(SEND_ERROR ${prereq})
|
||||
|
@ -73,19 +71,13 @@ if (MISSING_PREREQS)
|
|||
message(FATAL_ERROR "Configuration aborted due to missing prerequisites")
|
||||
endif ()
|
||||
|
||||
set(libmagic_req 5.04)
|
||||
if ( LibMagic_VERSION VERSION_LESS ${libmagic_req} )
|
||||
message(FATAL_ERROR "libmagic of at least version ${libmagic_req} required "
|
||||
"(found ${LibMagic_VERSION})")
|
||||
endif ()
|
||||
|
||||
include_directories(BEFORE
|
||||
${PCAP_INCLUDE_DIR}
|
||||
${OpenSSL_INCLUDE_DIR}
|
||||
${BIND_INCLUDE_DIR}
|
||||
${BinPAC_INCLUDE_DIR}
|
||||
${LibMagic_INCLUDE_DIR}
|
||||
${ZLIB_INCLUDE_DIR}
|
||||
${JEMALLOC_INCLUDE_DIR}
|
||||
)
|
||||
|
||||
# Optional Dependencies
|
||||
|
@ -163,8 +155,8 @@ set(brodeps
|
|||
${PCAP_LIBRARY}
|
||||
${OpenSSL_LIBRARIES}
|
||||
${BIND_LIBRARY}
|
||||
${LibMagic_LIBRARY}
|
||||
${ZLIB_LIBRARY}
|
||||
${JEMALLOC_LIBRARIES}
|
||||
${OPTLIBS}
|
||||
)
|
||||
|
||||
|
@ -201,10 +193,6 @@ CheckOptionalBuildSources(aux/broctl Broctl INSTALL_BROCTL)
|
|||
CheckOptionalBuildSources(aux/bro-aux Bro-Aux INSTALL_AUX_TOOLS)
|
||||
CheckOptionalBuildSources(aux/broccoli Broccoli INSTALL_BROCCOLI)
|
||||
|
||||
install(DIRECTORY ./magic/database/
|
||||
DESTINATION ${BRO_MAGIC_INSTALL_PATH}
|
||||
)
|
||||
|
||||
########################################################################
|
||||
## Packaging Setup
|
||||
|
||||
|
@ -249,6 +237,7 @@ message(
|
|||
"\ngperftools found: ${HAVE_PERFTOOLS}"
|
||||
"\n tcmalloc: ${USE_PERFTOOLS_TCMALLOC}"
|
||||
"\n debugging: ${USE_PERFTOOLS_DEBUG}"
|
||||
"\njemalloc: ${ENABLE_JEMALLOC}"
|
||||
"\ncURL: ${USE_CURL}"
|
||||
"\n"
|
||||
"\nDataSeries: ${USE_DATASERIES}"
|
||||
|
|
88
NEWS
|
@ -9,9 +9,44 @@ Bro 2.3
|
|||
|
||||
[In progress]
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
- Libmagic is no longer a dependency.
|
||||
|
||||
New Functionality
|
||||
-----------------
|
||||
|
||||
- Support for GRE tunnel decapsulation, including enhanced GRE
|
||||
headers. GRE tunnels are treated just like IP-in-IP tunnels by
|
||||
parsing past the GRE header in between the delivery and payload IP
|
||||
packets.
|
||||
|
||||
- The DNS analyzer now actually generates the dns_SRV_reply() event.
|
||||
It had been documented before, yet was never raised.
|
||||
|
||||
- Bro now uses "file magic signatures" to identify file types. These
|
||||
are defined via two new constructs in the signature rule parsing
|
||||
grammar: "file-magic" gives a regular expression to match against,
|
||||
and "file-mime" gives the MIME type string of content that matches
|
||||
the magic and an optional strength value for the match. (See also
|
||||
"Changed Functionality" below for changes due to switching from
|
||||
using libmagic to such signatures; an example signature follows this list.)
|
||||
|
||||
- A new built-in function, "file_magic", can be used to get all file
|
||||
magic matches and their corresponding strength against a given chunk
|
||||
of data.
|
||||
|
||||
- The SSL analyzer now has support for heartbeats as well as for a few
|
||||
extensions, including server_name, alpn, and ec-curves.
|
||||
|
||||
- The SSL analyzer comes with a Heartbleed detector script in
|
||||
protocols/ssl/heartbleed.bro.
|
||||
|
||||
- The X509 analyzer can now perform OCSP validation.
|
||||
|
||||
- Bro now has analyzers for SNMP and RADIUS, which produce corresponding
|
||||
snmp.log and radius.log output (as well as various events of course).
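
A minimal file magic signature in the new grammar might look like the
following; the identifier, pattern, and strength value are illustrative,
not taken from the shipped signature set:

    signature file-magic-pdf-example {
        file-mime "application/pdf", 80
        file-magic /^%PDF-/
    }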
|
||||
|
||||
Changed Functionality
|
||||
---------------------
|
||||
|
@ -22,6 +57,55 @@ Changed Functionality
|
|||
- ssl_client_hello() now receives a vector of ciphers, instead of a
|
||||
set, to preserve their order.
|
||||
|
||||
- Notice::end_suppression() has been removed.
|
||||
|
||||
- Bro now parses X.509 extension headers and, as a result, the
|
||||
corresponding event got a new signature:
|
||||
|
||||
event x509_extension(c: connection, is_orig: bool, cert: X509, ext: X509_extension_info);
|
||||
|
||||
- Generally, all x509 events and handling functions have changed their
|
||||
signatures.
|
||||
|
||||
- Bro no longer special-cases SYN/FIN/RST-filtered traces by not
|
||||
reporting missing data. Instead, if Bro never sees any data segments
|
||||
for analyzed TCP connections, the new
|
||||
base/misc/find-filtered-trace.bro script will log a warning in
|
||||
reporter.log and to stderr.
|
||||
|
||||
The old behavior can be reverted by redef'ing
|
||||
"detect_filtered_trace".
|
||||
|
||||
- We have removed the packet sorter component.
|
||||
|
||||
- Bro no longer uses libmagic to identify file types but instead now
|
||||
comes with its own signature library (which initially is still
|
||||
derived from libmagic's database). This leads to a number of further
|
||||
changes with regards to MIME types:
|
||||
|
||||
* The second parameter of the "identify_data" built-in function
|
||||
can no longer be used to get verbose file type descriptions,
|
||||
though it can still be used to get the strongest matching file
|
||||
magic signature.
|
||||
|
||||
* The "file_transferred" event's "descr" parameter no longer
|
||||
contains verbose file type descriptions.
|
||||
|
||||
* The BROMAGIC environment variable no longer changes any behavior
|
||||
in Bro as magic databases are no longer used/installed.
|
||||
|
||||
* Removed "binary" and "octet-stream" mime type detections. They
|
||||
don't provide any more information than an uninitialized
|
||||
mime_type field.
|
||||
|
||||
* The "fa_file" record now contains a "mime_types" field that
|
||||
contains all magic signatures that matched the file content
|
||||
(where the "mime_type" field is just a shortcut for the
|
||||
strongest match).
|
||||
|
||||
- dns_TXT_reply() now supports more than one string entry by receiving
|
||||
a vector of strings.
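
A handler for the new vector form could look like this sketch (the
"string_vec" parameter type is an assumption based on the description
above):

    event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec)
        {
        # Each TXT RR may now carry several character-strings.
        for ( i in strs )
            print fmt("TXT part %d of %s: %s", i, ans$query, strs[i]);
        }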
|
||||
|
||||
Bro 2.2
|
||||
=======
|
||||
|
||||
|
@ -198,9 +282,9 @@ New Functionality
|
|||
global s = MySet([$c=1], [$c=2]);
|
||||
|
||||
- Strings now support the subscript operator to extract individual
|
||||
characters and substrings (e.g., ``s[4]``, ``s[1,5]``). The index
|
||||
characters and substrings (e.g., ``s[4]``, ``s[1:5]``). The index
|
||||
expression can take up to two indices for the start and end index of
|
||||
the substring to return (e.g. ``mystring[1,3]``).
|
||||
the substring to return (e.g. ``mystring[1:3]``).
|
||||
|
||||
- Functions now support default parameters, e.g.::
|
||||
|
||||
|
|
2
VERSION
|
@ -1 +1 @@
|
|||
2.2-75
|
||||
2.2-470
|
||||
|
|
|
@ -1 +1 @@
|
|||
Subproject commit 54b321009b750268526419bdbd841f421c839313
|
||||
Subproject commit b0877edc68af6ae08face528fc411c8ce21f2e30
|
|
@ -1 +1 @@
|
|||
Subproject commit ebf9c0d88ae8230845b91f15755156f93ff21aa8
|
||||
Subproject commit 6dfc648d22d234d2ba4b1cb0fc74cda2eb023d1e
|
|
@ -1 +1 @@
|
|||
Subproject commit e02ccc0a27e64b147f01e4c7deb5b897864d59d5
|
||||
Subproject commit 561ccdd6edec4ac5540f3d5565aefb59e7510634
|
|
@ -1 +1 @@
|
|||
Subproject commit 2e07720b4f129802e07ca99498e2aff4542c737a
|
||||
Subproject commit 73f4307742bb8841017ee1b4eb5927674bc5f792
|
|
@ -1 +1 @@
|
|||
Subproject commit 26c3136d56493017bc33c5a2f22ae393d585c2d9
|
||||
Subproject commit 4e2ec35917acb883c7d2ab19af487f3863c687ae
|
2
cmake
|
@ -1 +1 @@
|
|||
Subproject commit e7a46cb82ee10aa522c4d88115baf10181277d20
|
||||
Subproject commit 0f301aa08a970150195a2ea5b3ed43d2d98b35b3
|
10
configure
vendored
|
@ -32,6 +32,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
|
|||
--enable-perftools force use of Google perftools on non-Linux systems
|
||||
(automatically on when perftools is present on Linux)
|
||||
--enable-perftools-debug use Google's perftools for debugging
|
||||
--enable-jemalloc link against jemalloc
|
||||
--enable-ruby build ruby bindings for broccoli (deprecated)
|
||||
--disable-broccoli don't build or install the Broccoli library
|
||||
--disable-broctl don't install Broctl
|
||||
|
@ -54,6 +55,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
|
|||
Optional Packages in Non-Standard Locations:
|
||||
--with-geoip=PATH path to the libGeoIP install root
|
||||
--with-perftools=PATH path to Google Perftools install root
|
||||
--with-jemalloc=PATH path to jemalloc install root
|
||||
--with-python=PATH path to Python interpreter
|
||||
--with-python-lib=PATH path to libpython
|
||||
--with-python-inc=PATH path to Python headers
|
||||
|
@ -105,6 +107,7 @@ append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
|
|||
append_cache_entry ENABLE_DEBUG BOOL false
|
||||
append_cache_entry ENABLE_PERFTOOLS BOOL false
|
||||
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
|
||||
append_cache_entry ENABLE_JEMALLOC BOOL false
|
||||
append_cache_entry BinPAC_SKIP_INSTALL BOOL true
|
||||
append_cache_entry BUILD_SHARED_LIBS BOOL true
|
||||
append_cache_entry INSTALL_AUX_TOOLS BOOL true
|
||||
|
@ -160,6 +163,9 @@ while [ $# -ne 0 ]; do
|
|||
append_cache_entry ENABLE_PERFTOOLS BOOL true
|
||||
append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL true
|
||||
;;
|
||||
--enable-jemalloc)
|
||||
append_cache_entry ENABLE_JEMALLOC BOOL true
|
||||
;;
|
||||
--disable-broccoli)
|
||||
append_cache_entry INSTALL_BROCCOLI BOOL false
|
||||
;;
|
||||
|
@ -214,6 +220,10 @@ while [ $# -ne 0 ]; do
|
|||
--with-perftools=*)
|
||||
append_cache_entry GooglePerftools_ROOT_DIR PATH $optarg
|
||||
;;
|
||||
--with-jemalloc=*)
|
||||
append_cache_entry JEMALLOC_ROOT_DIR PATH $optarg
|
||||
append_cache_entry ENABLE_JEMALLOC BOOL true
|
||||
;;
|
||||
--with-python=*)
|
||||
append_cache_entry PYTHON_EXECUTABLE PATH $optarg
|
||||
;;
|
||||
|
|
|
@ -14,8 +14,6 @@ if (NOT ${retval} EQUAL 0)
|
|||
message(FATAL_ERROR "Problem setting BROPATH")
|
||||
endif ()
|
||||
|
||||
set(BROMAGIC ${BRO_MAGIC_SOURCE_PATH})
|
||||
|
||||
# Configure the Sphinx config file (expand variables CMake might know about).
|
||||
configure_file(${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in
|
||||
${CMAKE_CURRENT_BINARY_DIR}/conf.py
|
||||
|
@ -34,7 +32,6 @@ add_custom_target(sphinxdoc
|
|||
${CMAKE_CURRENT_SOURCE_DIR}/ ${SPHINX_INPUT_DIR}
|
||||
# Use Bro/Broxygen to dynamically generate reST for all Bro scripts.
|
||||
COMMAND BROPATH=${BROPATH}
|
||||
BROMAGIC=${BROMAGIC}
|
||||
${CMAKE_BINARY_DIR}/src/bro
|
||||
-X ${CMAKE_CURRENT_BINARY_DIR}/broxygen.conf
|
||||
broxygen >/dev/null
|
||||
|
|
|
@ -10,7 +10,7 @@ common/general documentation, style sheets, JavaScript, etc. The Sphinx
|
|||
config file is produced from ``conf.py.in``, and can be edited to change
|
||||
various Sphinx options.
|
||||
|
||||
There is also a custom Sphinx domain implemented in ``source/ext/bro.py``
|
||||
There is also a custom Sphinx domain implemented in ``ext/bro.py``
|
||||
which adds some reST directives and roles that aid in generating useful
|
||||
index entries and cross-references. Other extensions can be added in
|
||||
a similar fashion.
|
||||
|
@ -19,7 +19,8 @@ The ``make doc`` target in the top-level Makefile can be used to locally
|
|||
render the reST files into HTML. That target depends on:
|
||||
|
||||
* Python interpreter >= 2.5
|
||||
* `Sphinx <http://sphinx.pocoo.org/>`_ >= 1.0.1
|
||||
* `Sphinx <http://sphinx-doc.org/>`_ >= 1.0.1
|
||||
* Doxygen (required only for building the Broccoli API doc)
|
||||
|
||||
After completion, HTML documentation is symlinked in ``build/html``.
|
||||
|
||||
|
|
9
doc/_static/basic.css
vendored
|
@ -439,8 +439,17 @@ td.linenos pre {
|
|||
color: #aaa;
|
||||
}
|
||||
|
||||
.highlight-guess {
|
||||
overflow:auto;
|
||||
}
|
||||
|
||||
.highlight-none {
|
||||
overflow:auto;
|
||||
}
|
||||
|
||||
table.highlighttable {
|
||||
margin-left: 0.5em;
|
||||
overflow:scroll;
|
||||
}
|
||||
|
||||
table.highlighttable td {
|
||||
|
|
79
doc/broids/index.rst
Normal file
|
@ -0,0 +1,79 @@
|
|||
|
||||
.. _bro-ids:
|
||||
|
||||
=======
|
||||
Bro IDS
|
||||
=======
|
||||
|
||||
An Intrusion Detection System (IDS) allows you to detect suspicious
|
||||
activities happening on your network as a result of a past or active
|
||||
attack. Because of its programming capabilities, Bro can easily be
|
||||
configured to behave like traditional IDSs and detect common attacks
|
||||
with well known patterns, or you can create your own scripts to detect
|
||||
conditions specific to your particular case.
|
||||
|
||||
In the following sections, we present a few examples of common uses of
|
||||
Bro as an IDS.
|
||||
|
||||
-------------------------------------------------
|
||||
Detecting an FTP Brute-force Attack and Notifying
|
||||
-------------------------------------------------
|
||||
|
||||
For the purpose of this exercise, we define FTP brute-forcing as too many
|
||||
rejected usernames and passwords occurring from a single address. We
|
||||
start by defining a threshold for the number of attempts, a monitoring
|
||||
interval (in minutes), and a new notice type.
|
||||
|
||||
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
|
||||
:lines: 9-25
|
||||
|
||||
Using the ftp_reply event, we check for error codes from the `500
|
||||
series <http://en.wikipedia.org/wiki/List_of_FTP_server_return_codes>`_
|
||||
for the "USER" and "PASS" commands, representing rejected usernames or
|
||||
passwords. For this, we can use the :bro:see:`FTP::parse_ftp_reply_code`
|
||||
function to break down the reply code and check if the first digit is a
|
||||
"5" or not. If true, we then use the :ref:`Summary Statistics Framework
|
||||
<sumstats-framework>` to keep track of the number of failed attempts.
|
||||
|
||||
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
|
||||
:lines: 52-60
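
In rough outline (the authoritative version is the snippet included above),
the failed-login check looks like this; the observation key mirrors the
shipped script and is a paraphrase rather than a verbatim copy::

    event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool)
        {
        local cmd = c$ftp$cmdarg$cmd;

        if ( cmd == "USER" || cmd == "PASS" )
            {
            # A 5yz reply to USER or PASS counts as one failed attempt.
            if ( FTP::parse_ftp_reply_code(code)$x == 5 )
                SumStats::observe("ftp.failed_auth", [$host=c$id$orig_h],
                                  [$str=cat(c$id$resp_h)]);
            }
        }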
|
||||
|
||||
Next, we use the SumStats framework to raise a notice of the attack when
|
||||
the number of failed attempts exceeds the specified threshold during the
|
||||
measuring interval.
|
||||
|
||||
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
|
||||
:lines: 28-50
|
||||
|
||||
Below is the final code for our script.
|
||||
|
||||
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/protocols/ftp/detect-bruteforcing.bro
|
||||
|
||||
.. btest:: ftp-bruteforce
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/ftp/bruteforce.pcap protocols/ftp/detect-bruteforcing.bro
|
||||
@TEST-EXEC: btest-rst-include notice.log
|
||||
|
||||
As a final note, the :doc:`detect-bruteforcing.bro
|
||||
</scripts/policy/protocols/ftp/detect-bruteforcing.bro>` script above is
|
||||
included with Bro out of the box. Use this feature by loading this script
|
||||
during startup.
|
||||
|
||||
-------------
|
||||
Other Attacks
|
||||
-------------
|
||||
|
||||
Detecting SQL Injection Attacks
|
||||
-------------------------------
|
||||
|
||||
Checking files against known malware hashes
|
||||
-------------------------------------------
|
||||
|
||||
Files transmitted on your network could either be completely harmless or
|
||||
contain viruses and other threats. One possible action against this
|
||||
threat is to compute the hashes of the files and compare them against a
|
||||
list of known malware hashes. Bro simplifies this task by offering a
|
||||
:doc:`detect-MHR.bro </scripts/policy/frameworks/files/detect-MHR.bro>`
|
||||
script that creates and compares hashes against the `Malware Hash
|
||||
Registry <https://www.team-cymru.org/Services/MHR/>`_ maintained by Team
|
||||
Cymru. Use this feature by loading this script during startup.
|
|
@ -1,12 +1,19 @@
|
|||
|
||||
========================
|
||||
Setting up a Bro Cluster
|
||||
Bro Cluster Architecture
|
||||
========================
|
||||
|
||||
Intro
|
||||
------
|
||||
|
||||
Bro is not multithreaded, so once the limitations of a single processor core are reached, the only option currently is to spread the workload across many cores or even many physical computers. The cluster deployment scenario for Bro is the current solution to build these larger systems. The accompanying tools and scripts provide the structure to easily manage many Bro processes examining packets and doing correlation activities but acting as a singular, cohesive entity.
|
||||
Bro is not multithreaded, so once the limitations of a single processor core
|
||||
are reached the only option currently is to spread the workload across many
|
||||
cores, or even many physical computers. The cluster deployment scenario for
|
||||
Bro is the current solution to build these larger systems. The tools and
|
||||
scripts that accompany Bro provide the structure to easily manage many Bro
|
||||
processes examining packets and doing correlation activities but acting as
|
||||
a singular, cohesive entity. This document describes the Bro cluster
|
||||
architecture. For information on how to configure a Bro cluster,
|
||||
see the documentation for
|
||||
:doc:`BroControl <../components/broctl/README>`.
|
||||
|
||||
Architecture
|
||||
---------------
|
||||
|
@ -17,42 +24,97 @@ The figure below illustrates the main components of a Bro cluster.
|
|||
|
||||
Tap
|
||||
***
|
||||
This is a mechanism that splits the packet stream in order to make a copy
|
||||
available for inspection. Examples include the monitoring port on a switch and
|
||||
an optical splitter for fiber networks.
|
||||
The tap is a mechanism that splits the packet stream in order to make a copy
|
||||
available for inspection. Examples include the monitoring port on a switch
|
||||
and an optical splitter on fiber networks.
|
||||
|
||||
Frontend
|
||||
********
|
||||
This is a discrete hardware device or on-host technique that will split your traffic into many streams or flows. The Bro binary does not do this job. There are numerous ways to accomplish this task, some of which are described below in `Frontend Options`_.
|
||||
The frontend is a discrete hardware device or on-host technique that splits
|
||||
traffic into many streams or flows. The Bro binary does not do this job.
|
||||
There are numerous ways to accomplish this task, some of which are described
|
||||
below in `Frontend Options`_.
|
||||
|
||||
Manager
|
||||
*******
|
||||
This is a Bro process which has two primary jobs. It receives log messages and notices from the rest of the nodes in the cluster using the Bro communications protocol. The result is that you will end up with single logs for each log instead of many discrete logs that you have to later combine in some manner with post processing. The manager also takes the opportunity to de-duplicate notices and it has the ability to do so since it’s acting as the choke point for notices and how notices might be processed into actions such as emailing, paging, or blocking.
|
||||
The manager is a Bro process that has two primary jobs. It receives log
|
||||
messages and notices from the rest of the nodes in the cluster using the Bro
|
||||
communications protocol. The result is a single log instead of many
|
||||
discrete logs that you have to combine in some manner with post-processing.
|
||||
The manager also takes the opportunity to de-duplicate notices, and it has the
|
||||
ability to do so since it's acting as the choke point for notices and how
|
||||
notices might be processed into actions (e.g., emailing, paging, or blocking).
|
||||
|
||||
The manager process is started first by BroControl and it only opens it’s designated port and waits for connections, it doesn’t initiate any connections to the rest of the cluster. Once the workers are started and connect to the manager, logs and notices will start arriving to the manager process from the workers.
|
||||
The manager process is started first by BroControl and it only opens its
|
||||
designated port and waits for connections, it doesn't initiate any
|
||||
connections to the rest of the cluster. Once the workers are started and
|
||||
connect to the manager, logs and notices will start arriving to the manager
|
||||
process from the workers.
|
||||
|
||||
Proxy
|
||||
*****
|
||||
This is a Bro process which manages synchronized state. Variables can be synchronized across connected Bro processes automatically in Bro and proxies will help the workers by alleviating the need for all of the workers to connect directly to each other.
|
||||
The proxy is a Bro process that manages synchronized state. Variables can
|
||||
be synchronized across connected Bro processes automatically. Proxies help
|
||||
the workers by alleviating the need for all of the workers to connect
|
||||
directly to each other.
|
||||
|
||||
Examples of synchronized state from the scripts that ship with Bro are things such as the full list of “known” hosts and services which are hosts or services which have been detected as performing full TCP handshakes or an analyzed protocol has been found on the connection. If worker A detects host 1.2.3.4 as an active host, it would be beneficial for worker B to know that as well so worker A shares that information as an insertion to a set <link to set documentation would be good here> which travels to the cluster’s proxy and the proxy then sends that same set insertion to worker B. The result is that worker A and worker B have shared knowledge about host and services that are active on the network being monitored.
|
||||
Examples of synchronized state from the scripts that ship with Bro include
|
||||
the full list of "known" hosts and services (which are hosts or services
|
||||
identified as performing full TCP handshakes, or on which an analyzed protocol
|
||||
has been found). If worker A detects host 1.2.3.4 as an active host,
|
||||
it would be beneficial for worker B to know that as well. So worker A shares
|
||||
that information as an insertion to a set which travels to the cluster's
|
||||
proxy and the proxy sends that same set insertion to worker B. The result
|
||||
is that worker A and worker B have shared knowledge about host and services
|
||||
that are active on the network being monitored.
|
||||
|
||||
The proxy model extends to having multiple proxies as well if necessary for performance reasons, it only adds one additional step for the Bro processes. Each proxy connects to another proxy in a ring and the workers are shared between them as evenly as possible. When a proxy receives some new bit of state, it will share that with it’s proxy which is then shared around the ring of proxies and down to all of the workers. From a practical standpoint, there are no rules of thumb established yet for the number of proxies necessary for the number of workers they are serving. Best is to start with a single proxy and add more if communication performance problems are found.
|
||||
The proxy model extends to having multiple proxies when necessary for
|
||||
performance reasons. It only adds one additional step for the Bro processes.
|
||||
Each proxy connects to another proxy in a ring and the workers are shared
|
||||
between them as evenly as possible. When a proxy receives some new bit of
|
||||
state it will share that with its proxy, which is then shared around the
|
||||
ring of proxies, and down to all of the workers. From a practical standpoint,
|
||||
there are no rules of thumb established for the number of proxies
|
||||
necessary for the number of workers they are serving. It is best to start
|
||||
with a single proxy and add more if communication performance problems are
|
||||
found.
|
||||
|
||||
Bro processes acting as proxies don’t tend to be extremely intense to CPU or memory and users frequently run proxy processes on the same physical host as the manager.
|
||||
Bro processes acting as proxies don't tend to be extremely hard on CPU
|
||||
or memory and users frequently run proxy processes on the same physical
|
||||
host as the manager.
|
||||
|
||||
Worker
|
||||
******
|
||||
This is the Bro process that sniffs network traffic and does protocol analysis on the reassembled traffic streams. Most of the work of an active cluster takes place on the workers and as such, the workers typically represent the bulk of the Bro processes that are running in a cluster. The fastest memory and CPU core speed you can afford is best here since all of the protocol parsing and most analysis will take place here. There are no particular requirements for the disks in workers since almost all logging is done remotely to the manager and very little is normally written to disk.
|
||||
The worker is the Bro process that sniffs network traffic and does protocol
|
||||
analysis on the reassembled traffic streams. Most of the work of an active
|
||||
cluster takes place on the workers and as such, the workers typically
|
||||
represent the bulk of the Bro processes that are running in a cluster.
|
||||
The fastest memory and CPU core speed you can afford is recommended
|
||||
since all of the protocol parsing and most analysis will take place here.
|
||||
There are no particular requirements for the disks in workers since almost all
|
||||
logging is done remotely to the manager, and normally very little is written
|
||||
to disk.
|
||||
|
||||
The rule of thumb we have followed recently is to allocate approximately 1 core for every 80Mbps of traffic that is being analyzed, however this estimate could be extremely traffic mix specific. It has generally worked for mixed traffic with many users and servers. For example, if your traffic peaks around 2Gbps (combined) and you want to handle traffic at peak load, you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps estimate works for your traffic, this could be handled by 3 physical hosts dedicated to being workers with each one containing dual 6-core processors.
|
||||
The rule of thumb we have followed recently is to allocate approximately 1
|
||||
core for every 80Mbps of traffic that is being analyzed. However, this
|
||||
estimate could be extremely traffic mix-specific. It has generally worked
|
||||
for mixed traffic with many users and servers. For example, if your traffic
|
||||
peaks around 2Gbps (combined) and you want to handle traffic at peak load,
|
||||
you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps
|
||||
estimate works for your traffic, this could be handled by 3 physical hosts
|
||||
dedicated to being workers with each one containing dual 6-core processors.
|
||||
|
||||
Once a flow based load balancer is put into place this model is extremely easy to scale as well so it’s recommended that you guess at the amount of hardware you will need to fully analyze your traffic. If it turns out that you need more, it’s relatively easy to increase the size of the cluster in most cases.
|
||||
Once a flow-based load balancer is put into place this model is extremely
|
||||
easy to scale. It is recommended that you estimate the amount of
|
||||
hardware you will need to fully analyze your traffic. If more is needed it's
|
||||
relatively easy to increase the size of the cluster in most cases.
|
||||
|
||||
Frontend Options
|
||||
----------------
|
||||
|
||||
There are many options for setting up a frontend flow distributor and in many cases it may even be beneficial to do multiple stages of flow distribution on the network and on the host.
|
||||
There are many options for setting up a frontend flow distributor. In many
|
||||
cases it is beneficial to do multiple stages of flow distribution
|
||||
on the network and on the host.
|
||||
|
||||
Discrete hardware flow balancers
|
||||
********************************
|
||||
|
@ -60,12 +122,24 @@ Discrete hardware flow balancers
|
|||
cPacket
|
||||
^^^^^^^
|
||||
|
||||
If you are monitoring one or more 10G physical interfaces, the recommended solution is to use either a cFlow or cVu device from cPacket because they are currently being used very successfully at a number of sites. These devices will perform layer-2 load balancing by rewriting the destination ethernet MAC address to cause each packet associated with a particular flow to have the same destination MAC. The packets can then be passed directly to a monitoring host where each worker has a BPF filter to limit its visibility to only that stream of flows or onward to a commodity switch to split the traffic out to multiple 1G interfaces for the workers. This can ultimately greatly reduce costs since workers can use relatively inexpensive 1G interfaces.
|
||||
If you are monitoring one or more 10G physical interfaces, the recommended
|
||||
solution is to use either a cFlow or cVu device from cPacket because they
|
||||
are used successfully at a number of sites. These devices will perform
|
||||
layer-2 load balancing by rewriting the destination Ethernet MAC address
|
||||
to cause each packet associated with a particular flow to have the same
|
||||
destination MAC. The packets can then be passed directly to a monitoring
|
||||
host where each worker has a BPF filter to limit its visibility to only that
|
||||
stream of flows, or onward to a commodity switch to split the traffic out to
|
||||
multiple 1G interfaces for the workers. This greatly reduces
|
||||
costs since workers can use relatively inexpensive 1G interfaces.
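
As a rough sketch of what such a per-worker BPF filter could look like (this
assumes the ``restrict_filters`` table from Bro's packet filter framework is
available; the MAC address is a placeholder for whatever destination MAC the
cFlow/cVu assigns to this worker's share of the flows)::

    # Hypothetical per-worker filter: only accept frames whose destination
    # MAC was rewritten by the load balancer to this worker's address.
    redef restrict_filters += { ["cpacket-lb"] = "ether dst 00:1b:21:aa:bb:01" };
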
|
||||
|
||||
OpenFlow Switches
|
||||
^^^^^^^^^^^^^^^^^
|
||||
|
||||
We are currently exploring the use of OpenFlow based switches to do flow based load balancing directly on the switch which can greatly reduce frontend costs for many users. This document will be updated when we have more information.
|
||||
We are currently exploring the use of OpenFlow based switches to do flow-based
|
||||
load balancing directly on the switch, which greatly reduces frontend
|
||||
costs for many users. This document will be updated when we have more
|
||||
information.
|
||||
|
||||
On host flow balancing
|
||||
**********************
|
||||
|
@ -73,14 +147,26 @@ On host flow balancing
|
|||
PF_RING
|
||||
^^^^^^^
|
||||
|
||||
The PF_RING software for Linux has a “clustering” feature which will do flow based load balancing across a number of processes that are sniffing the same interface. This will allow you to easily take advantage of multiple cores in a single physical host because Bro’s main event loop is single threaded and can’t natively utilize all of the cores. More information about Bro with PF_RING can be found here: (someone want to write a quick Bro/PF_RING tutorial to link to here? document installing kernel module, libpcap wrapper, building Bro with the --with-pcap configure option)
|
||||
The PF_RING software for Linux has a "clustering" feature which will do
|
||||
flow-based load balancing across a number of processes that are sniffing the
|
||||
same interface. This allows you to easily take advantage of multiple
|
||||
cores in a single physical host because Bro's main event loop is single
|
||||
threaded and can't natively utilize all of the cores. If you want to use
|
||||
PF_RING, see the documentation on `how to configure Bro with PF_RING
|
||||
<http://bro.org/documentation/load-balancing.html>`_.
|
||||
|
||||
Netmap
|
||||
^^^^^^
|
||||
|
||||
FreeBSD has an in-progress project named Netmap which will enable flow based load balancing as well. When it becomes viable for real world use, this document will be updated.
|
||||
FreeBSD has an in-progress project named Netmap which will enable flow-based
|
||||
load balancing as well. When it becomes viable for real world use, this
|
||||
document will be updated.
|
||||
|
||||
Click! Software Router
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Click! can be used for flow based load balancing with a simple configuration. (link to an example for the config). This solution is not recommended on Linux due to Bro’s PF_RING support and only as a last resort on other operating systems since it causes a lot of overhead due to context switching back and forth between kernel and userland several times per packet.
|
||||
Click! can be used for flow based load balancing with a simple configuration.
|
||||
This solution is not recommended on
|
||||
Linux due to Bro's PF_RING support and only as a last resort on other
|
||||
operating systems since it causes a lot of overhead due to context switching
|
||||
back and forth between kernel and userland several times per packet.
|
||||
|
|
263
doc/configuration/index.rst
Normal file
|
@ -0,0 +1,263 @@
|
|||
|
||||
.. _configuration:
|
||||
|
||||
=====================
|
||||
Cluster Configuration
|
||||
=====================
|
||||
|
||||
.. contents::
|
||||
|
||||
A *Bro Cluster* is a set of systems jointly analyzing the traffic of
|
||||
a network link in a coordinated fashion. You can operate such a setup from
|
||||
a central manager system easily using BroControl because BroControl
|
||||
hides much of the complexity of the multi-machine installation.
|
||||
|
||||
This section gives examples of how to setup common cluster configurations
|
||||
using BroControl. For a full reference on BroControl, see the
|
||||
:doc:`BroControl <../components/broctl/README>` documentation.
|
||||
|
||||
|
||||
Preparing to Setup a Cluster
|
||||
============================
|
||||
|
||||
In this document we refer to the user account used to set up the cluster
|
||||
as the "Bro user". When setting up a cluster the Bro user must be set up
|
||||
on all hosts, and this user must have ssh access from the manager to all
|
||||
machines in the cluster, and it must work without being prompted for a
|
||||
password/passphrase (for example, using ssh public key authentication).
|
||||
Also, on the worker nodes this user must have access to the target
|
||||
network interface in promiscuous mode.
|
||||
|
||||
Additional storage must be available on all hosts under the same path,
|
||||
which we will call the cluster's prefix path. We refer to this directory
|
||||
as ``<prefix>``. If you build Bro from source, then ``<prefix>`` is
|
||||
the directory specified with the ``--prefix`` configure option,
|
||||
or ``/usr/local/bro`` by default. The Bro user must be able to either
|
||||
create this directory or, where it already exists, must have write
|
||||
permission inside this directory on all hosts.
|
||||
|
||||
When trying to decide how to configure the Bro nodes, keep in mind that
|
||||
there can be multiple Bro instances running on the same host. For example,
|
||||
it's possible to run a proxy and the manager on the same host. However, it is
|
||||
recommended to run workers on a different machine than the manager because
|
||||
workers can consume a lot of CPU resources. The maximum recommended
|
||||
number of workers to run on a machine should be one or two less than
|
||||
the number of CPU cores available on that machine. Using a load-balancing
|
||||
method (such as PF_RING) along with CPU pinning can decrease the load on
|
||||
the worker machines.
|
||||
|
||||
|
||||
Basic Cluster Configuration
|
||||
===========================
|
||||
|
||||
With all prerequisites in place, perform the following steps to setup
|
||||
a Bro cluster (do this as the Bro user on the manager host only):
|
||||
|
||||
- Edit the BroControl configuration file, ``<prefix>/etc/broctl.cfg``,
|
||||
and change the value of any BroControl options to be more suitable for
|
||||
your environment. You will most likely want to change the value of
|
||||
the ``MailTo`` and ``LogRotationInterval`` options. A complete
|
||||
reference of all BroControl options can be found in the
|
||||
:doc:`BroControl <../components/broctl/README>` documentation.
|
||||
|
||||
- Edit the BroControl node configuration file, ``<prefix>/etc/node.cfg``
|
||||
to define where manager, proxies, and workers are to run. For a cluster
|
||||
configuration, you must comment-out (or remove) the standalone node
|
||||
in that file, and either uncomment or add node entries for each node
|
||||
in your cluster (manager, proxy, and workers). For example, if you wanted
|
||||
to run four Bro nodes (two workers, one proxy, and a manager) on a cluster
|
||||
consisting of three machines, your cluster configuration would look like
|
||||
this::
|
||||
|
||||
[manager]
|
||||
type=manager
|
||||
host=10.0.0.10
|
||||
|
||||
[proxy-1]
|
||||
type=proxy
|
||||
host=10.0.0.10
|
||||
|
||||
[worker-1]
|
||||
type=worker
|
||||
host=10.0.0.11
|
||||
interface=eth0
|
||||
|
||||
[worker-2]
|
||||
type=worker
|
||||
host=10.0.0.12
|
||||
interface=eth0
|
||||
|
||||
For a complete reference of all options that are allowed in the ``node.cfg``
|
||||
file, see the :doc:`BroControl <../components/broctl/README>` documentation.
|
||||
|
||||
- Edit the network configuration file ``<prefix>/etc/networks.cfg``. This
|
||||
file lists all of the networks which the cluster should consider as local
|
||||
to the monitored environment.
|
||||
|
||||
- Install workers and proxies using BroControl::
|
||||
|
||||
> broctl install
|
||||
|
||||
- Some tasks need to be run on a regular basis. On the manager node,
|
||||
insert a line like this into the crontab of the user running the
|
||||
cluster::
|
||||
|
||||
0-59/5 * * * * <prefix>/bin/broctl cron
|
||||
|
||||
(Note: if you are editing the system crontab instead of a user's own
|
||||
crontab, then you need to also specify the user which the command
|
||||
will be run as. The username must be placed after the time fields
|
||||
and before the broctl command; an example is shown below, after these notes.)
|
||||
|
||||
Note that on some systems (FreeBSD in particular), the default PATH
|
||||
for cron jobs does not include the directories where bash and python
|
||||
are installed (the symptoms of this problem would be that "broctl cron"
|
||||
works when run directly by the user, but does not work from a cron job).
|
||||
To solve this problem, you would either need to create symlinks
|
||||
to bash and python in a directory that is in the default PATH for
|
||||
cron jobs, or specify a new PATH in the crontab.
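
For reference, the system crontab entry mentioned in the note above might
look like this (assuming the Bro user is named ``bro``; the username column
only exists in the system crontab, not in a user's own crontab)::

    0-59/5 * * * * bro <prefix>/bin/broctl cron
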
|
||||
|
||||
|
||||
PF_RING Cluster Configuration
|
||||
=============================
|
||||
|
||||
`PF_RING <http://www.ntop.org/products/pf_ring/>`_ allows speeding up the
|
||||
packet capture process by installing a new type of socket in Linux systems.
|
||||
It supports 10Gbit hardware packet filtering using standard network adapters,
|
||||
and user-space DNA (Direct NIC Access) for fast packet capture/transmission.
|
||||
|
||||
Installing PF_RING
|
||||
^^^^^^^^^^^^^^^^^^
|
||||
|
||||
1. Download and install PF_RING for your system following the instructions
|
||||
`here <http://www.ntop.org/get-started/download/#PF_RING>`_. The following
|
||||
commands will install the PF_RING libraries and kernel module (replace
|
||||
the version number 5.6.2 in this example with the version that you
|
||||
downloaded)::
|
||||
|
||||
cd /usr/src
|
||||
tar xvzf PF_RING-5.6.2.tar.gz
|
||||
cd PF_RING-5.6.2/userland/lib
|
||||
./configure --prefix=/opt/pfring
|
||||
make install
|
||||
|
||||
cd ../libpcap
|
||||
./configure --prefix=/opt/pfring
|
||||
make install
|
||||
|
||||
cd ../tcpdump-4.1.1
|
||||
./configure --prefix=/opt/pfring
|
||||
make install
|
||||
|
||||
cd ../../kernel
|
||||
make install
|
||||
|
||||
modprobe pf_ring enable_tx_capture=0 min_num_slots=32768
|
||||
|
||||
Refer to the documentation for your Linux distribution on how to load the
|
||||
pf_ring module at boot time. You will need to install the PF_RING
|
||||
library files and kernel module on all of the workers in your cluster.
|
||||
|
||||
2. Download the Bro source code.
|
||||
|
||||
3. Configure and install Bro using the following commands::
|
||||
|
||||
./configure --with-pcap=/opt/pfring
|
||||
make
|
||||
make install
|
||||
|
||||
4. Make sure Bro is correctly linked to the PF_RING libpcap libraries::
|
||||
|
||||
ldd /usr/local/bro/bin/bro | grep pcap
|
||||
libpcap.so.1 => /opt/pfring/lib/libpcap.so.1 (0x00007fa6d7d24000)
|
||||
|
||||
5. Configure BroControl to use PF_RING (explained below).
|
||||
|
||||
6. Run "broctl install" on the manager. This command will install Bro and
|
||||
all required scripts to the other machines in your cluster.
|
||||
|
||||
Using PF_RING
|
||||
^^^^^^^^^^^^^
|
||||
|
||||
In order to use PF_RING, you need to specify the correct configuration
|
||||
options for your worker nodes in BroControl's node configuration file.
|
||||
Edit the ``node.cfg`` file and specify ``lb_method=pf_ring`` for each of
|
||||
your worker nodes. Next, use the ``lb_procs`` node option to specify how
|
||||
many Bro processes you'd like that worker node to run, and optionally pin
|
||||
those processes to certain CPU cores with the ``pin_cpus`` option (CPU
|
||||
numbering starts at zero). The correct ``pin_cpus`` setting to use is
|
||||
dependent on your CPU architecture (Intel and AMD systems enumerate
|
||||
processors in different ways). Using the wrong ``pin_cpus`` setting
|
||||
can cause poor performance. Here is what a worker node entry should
|
||||
look like when using PF_RING and CPU pinning::
|
||||
|
||||
[worker-1]
|
||||
type=worker
|
||||
host=10.0.0.50
|
||||
interface=eth0
|
||||
lb_method=pf_ring
|
||||
lb_procs=10
|
||||
pin_cpus=2,3,4,5,6,7,8,9,10,11
|
||||
|
||||
|
||||
Using PF_RING+DNA with symmetric RSS
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
You must have a PF_RING+DNA license in order to do this. You can sniff
|
||||
each packet only once.
|
||||
|
||||
1. Load the DNA NIC driver (i.e. ixgbe) on each worker host.
|
||||
|
||||
2. Run "ethtool -L dna0 combined 10" (this will establish 10 RSS queues
|
||||
on your NIC) on each worker host. You must make sure that you set the
|
||||
number of RSS queues to the same as the number you specify for the
|
||||
lb_procs option in the node.cfg file.
|
||||
|
||||
3. On the manager, configure your worker(s) in node.cfg::
|
||||
|
||||
[worker-1]
|
||||
type=worker
|
||||
host=10.0.0.50
|
||||
interface=dna0
|
||||
lb_method=pf_ring
|
||||
lb_procs=10
|
||||
|
||||
|
||||
Using PF_RING+DNA with pfdnacluster_master
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
You must have a PF_RING+DNA license and a libzero license in order to do
|
||||
this. You can load balance between multiple applications and sniff the
|
||||
same packets multiple times with different tools.
|
||||
|
||||
1. Load the DNA NIC driver (i.e. ixgbe) on each worker host.
|
||||
|
||||
2. Run "ethtool -L dna0 1" (this will establish 1 RSS queues on your NIC)
|
||||
on each worker host.
|
||||
|
||||
3. Run the pfdnacluster_master command on each worker host. For example::
|
||||
|
||||
pfdnacluster_master -c 21 -i dna0 -n 10
|
||||
|
||||
Make sure that your cluster ID (21 in this example) matches the interface
|
||||
name you specify in the node.cfg file. Also make sure that the number
|
||||
of processes you're balancing across (10 in this example) matches
|
||||
the lb_procs option in the node.cfg file.
|
||||
|
||||
4. If you are load balancing to other processes, you can use the
|
||||
pfringfirstappinstance variable in broctl.cfg to set the first
|
||||
application instance that Bro should use. For example, if you are running
|
||||
pfdnacluster_master with "-n 10,4" you would set
|
||||
pfringfirstappinstance=4. Unfortunately that's still a global setting
|
||||
in broctl.cfg at the moment but we may change that to something you can
|
||||
set in node.cfg eventually.
|
||||
|
||||
5. On the manager, configure your worker(s) in node.cfg::
|
||||
|
||||
[worker-1]
|
||||
type=worker
|
||||
host=10.0.0.50
|
||||
interface=dnacluster:21
|
||||
lb_method=pf_ring
|
||||
lb_procs=10
|
||||
|
|
@ -1,3 +1,6 @@
|
|||
|
||||
.. _file-analysis-framework:
|
||||
|
||||
=============
|
||||
File Analysis
|
||||
=============
|
||||
|
|
|
@ -1,4 +1,6 @@
|
|||
|
||||
.. _notice-framework:
|
||||
|
||||
Notice Framework
|
||||
================
|
||||
|
||||
|
|
|
@ -64,8 +64,8 @@ expect that signature file in the same directory as the Bro script. The
|
|||
default extension of the file name is ``.sig``, and Bro appends that
|
||||
automatically when necessary.
|
||||
|
||||
Signature language
|
||||
==================
|
||||
Signature Language for Network Traffic
|
||||
======================================
|
||||
|
||||
Let's look at the format of a signature more closely. Each individual
|
||||
signature has the format ``signature <id> { <attributes> }``. ``<id>``
|
||||
|
@ -286,6 +286,44 @@ two actions defined:
|
|||
connection (``"http"``, ``"ftp"``, etc.). This is used by Bro's
|
||||
dynamic protocol detection to activate analyzers on the fly.
|
||||
|
||||
Signature Language for File Content
|
||||
===================================
|
||||
|
||||
The signature framework can also be used to identify MIME types of files
|
||||
irrespective of the network protocol/connection over which the file is
|
||||
transferred. A special type of signature can be written for this
|
||||
purpose and will be used automatically by the :doc:`Files Framework
|
||||
<file-analysis>` or by Bro scripts that use the :bro:see:`file_magic`
|
||||
built-in function.
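
For example, a script might call it directly on a buffer of data (just a
sketch: the byte string below is an illustrative PNG header, and we assume
the function returns a vector of matches carrying ``mime`` and ``strength``
fields)::

    event bro_init()
        {
        # Ask the file-magic signatures which MIME types match this data.
        local matches = file_magic("\x89PNG\x0d\x0a\x1a\x0a");

        for ( i in matches )
            print matches[i]$mime, matches[i]$strength;
        }
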
|
||||
|
||||
Conditions
|
||||
----------
|
||||
|
||||
File signatures use a single type of content condition in the form of a
|
||||
regular expression:
|
||||
|
||||
``file-magic /<regular expression>/``
|
||||
|
||||
This is analogous to the ``payload`` content condition for the network
|
||||
traffic signature language described above. The difference is that
|
||||
``payload`` signatures are applied to payloads of network connections,
|
||||
but ``file-magic`` can be applied to any arbitrary data, it does not
|
||||
have to be tied to a network protocol/connection.
|
||||
|
||||
Actions
|
||||
-------
|
||||
|
||||
Upon matching a chunk of data, file signatures use the following action
|
||||
to get information about that data's MIME type:
|
||||
|
||||
``file-mime <string> [, <integer>]``
|
||||
|
||||
The arguments include the MIME type string associated with the file
|
||||
magic regular expression and an optional "strength" as a signed integer.
|
||||
Since multiple file magic signatures may match against a given chunk of
|
||||
data, the strength value may be used to help choose a "winner". Higher
|
||||
values are considered stronger.
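
Putting a condition and an action together, a complete file signature could
look like the following sketch (the regular expression and the strength value
are illustrative, not taken from Bro's shipped signature set)::

    signature file-magic-example-png {
        file-magic /^\x89PNG\x0d\x0a\x1a\x0a/
        file-mime "image/png", 70
    }
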
|
||||
|
||||
Things to keep in mind when writing signatures
|
||||
==============================================
|
||||
|
||||
|
|
|
@ -1,3 +1,6 @@
|
|||
|
||||
.. _sumstats-framework:
|
||||
|
||||
==================
|
||||
Summary Statistics
|
||||
==================
|
||||
|
|
24
doc/httpmonitor/file_extraction.bro
Normal file
|
@ -0,0 +1,24 @@
|
|||
|
||||
global mime_to_ext: table[string] of string = {
|
||||
["application/x-dosexec"] = "exe",
|
||||
["text/plain"] = "txt",
|
||||
["image/jpeg"] = "jpg",
|
||||
["image/png"] = "png",
|
||||
["text/html"] = "html",
|
||||
};
|
||||
|
||||
event file_new(f: fa_file)
|
||||
{
|
||||
if ( f$source != "HTTP" )
|
||||
return;
|
||||
|
||||
if ( ! f?$mime_type )
|
||||
return;
|
||||
|
||||
if ( f$mime_type !in mime_to_ext )
|
||||
return;
|
||||
|
||||
local fname = fmt("%s-%s.%s", f$source, f$id, mime_to_ext[f$mime_type]);
|
||||
print fmt("Extracting file %s", fname);
|
||||
Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
|
||||
}
|
5
doc/httpmonitor/http_proxy_01.bro
Normal file
|
@ -0,0 +1,5 @@
|
|||
event http_reply(c: connection, version: string, code: count, reason: string)
|
||||
{
|
||||
if ( /^[hH][tT][tT][pP]:/ in c$http$uri && c$http$status_code == 200 )
|
||||
print fmt("A local server is acting as an open proxy: %s", c$id$resp_h);
|
||||
}
|
26
doc/httpmonitor/http_proxy_02.bro
Normal file
|
@ -0,0 +1,26 @@
|
|||
|
||||
module HTTP;
|
||||
|
||||
export {
|
||||
|
||||
global success_status_codes: set[count] = {
|
||||
200,
|
||||
201,
|
||||
202,
|
||||
203,
|
||||
204,
|
||||
205,
|
||||
206,
|
||||
207,
|
||||
208,
|
||||
226,
|
||||
304
|
||||
};
|
||||
}
|
||||
|
||||
event http_reply(c: connection, version: string, code: count, reason: string)
|
||||
{
|
||||
if ( /^[hH][tT][tT][pP]:/ in c$http$uri &&
|
||||
c$http$status_code in HTTP::success_status_codes )
|
||||
print fmt("A local server is acting as an open proxy: %s", c$id$resp_h);
|
||||
}
|
31
doc/httpmonitor/http_proxy_03.bro
Normal file
|
@ -0,0 +1,31 @@
|
|||
|
||||
@load base/utils/site
|
||||
|
||||
redef Site::local_nets += { 192.168.0.0/16 };
|
||||
|
||||
module HTTP;
|
||||
|
||||
export {
|
||||
|
||||
global success_status_codes: set[count] = {
|
||||
200,
|
||||
201,
|
||||
202,
|
||||
203,
|
||||
204,
|
||||
205,
|
||||
206,
|
||||
207,
|
||||
208,
|
||||
226,
|
||||
304
|
||||
};
|
||||
}
|
||||
|
||||
event http_reply(c: connection, version: string, code: count, reason: string)
|
||||
{
|
||||
if ( Site::is_local_addr(c$id$resp_h) &&
|
||||
/^[hH][tT][tT][pP]:/ in c$http$uri &&
|
||||
c$http$status_code in HTTP::success_status_codes )
|
||||
print fmt("A local server is acting as an open proxy: %s", c$id$resp_h);
|
||||
}
|
40
doc/httpmonitor/http_proxy_04.bro
Normal file
|
@ -0,0 +1,40 @@
|
|||
@load base/utils/site
|
||||
@load base/frameworks/notice
|
||||
|
||||
redef Site::local_nets += { 192.168.0.0/16 };
|
||||
|
||||
module HTTP;
|
||||
|
||||
export {
|
||||
|
||||
redef enum Notice::Type += {
|
||||
Open_Proxy
|
||||
};
|
||||
|
||||
global success_status_codes: set[count] = {
|
||||
200,
|
||||
201,
|
||||
202,
|
||||
203,
|
||||
204,
|
||||
205,
|
||||
206,
|
||||
207,
|
||||
208,
|
||||
226,
|
||||
304
|
||||
};
|
||||
}
|
||||
|
||||
event http_reply(c: connection, version: string, code: count, reason: string)
|
||||
{
|
||||
if ( Site::is_local_addr(c$id$resp_h) &&
|
||||
/^[hH][tT][tT][pP]:/ in c$http$uri &&
|
||||
c$http$status_code in HTTP::success_status_codes )
|
||||
NOTICE([$note=HTTP::Open_Proxy,
|
||||
$msg=fmt("A local server is acting as an open proxy: %s",
|
||||
c$id$resp_h),
|
||||
$conn=c,
|
||||
$identifier=cat(c$id$resp_h),
|
||||
$suppress_for=1day]);
|
||||
}
|
162
doc/httpmonitor/index.rst
Normal file
|
@ -0,0 +1,162 @@
|
|||
|
||||
.. _http-monitor:
|
||||
|
||||
================================
|
||||
Monitoring HTTP Traffic with Bro
|
||||
================================
|
||||
|
||||
Bro can be used to log the entire HTTP traffic from your network to the
|
||||
http.log file. This file can then be used for analysis and auditing
|
||||
purposes.
|
||||
|
||||
In the sections below we briefly explain the structure of the http.log
|
||||
file, then we show you how to perform basic HTTP traffic monitoring and
|
||||
analysis tasks with Bro. Some of these ideas and techniques can later be
|
||||
applied to monitor different protocols in a similar way.
|
||||
|
||||
----------------------------
|
||||
Introduction to the HTTP log
|
||||
----------------------------
|
||||
|
||||
The http.log file contains a summary of all HTTP requests and responses
|
||||
sent over a Bro-monitored network. Here are the first few columns of
|
||||
``http.log``::
|
||||
|
||||
# ts uid orig_h orig_p resp_h resp_p
|
||||
1311627961.8 HSH4uV8KVJg 192.168.1.100 52303 192.150.187.43 80
|
||||
|
||||
Every single line in this log starts with a timestamp, a unique
|
||||
connection identifier (UID), and a connection 4-tuple (originator
|
||||
host/port and responder host/port). The UID can be used to identify all
|
||||
logged activity (possibly across multiple log files) associated with a
|
||||
given connection 4-tuple over its lifetime.
|
||||
|
||||
The remaining columns detail the activity that's occurring. For
|
||||
example, the columns on the line below (shortened for brevity) show a
|
||||
request to the root of the Bro website::
|
||||
|
||||
# method host uri referrer user_agent
|
||||
GET bro.org / - <...>Chrome/12.0.742.122<...>
|
||||
|
||||
Network administrators and security engineers, for instance, can use the
|
||||
information in this log to understand the HTTP activity on the network
|
||||
and troubleshoot network problems or search for anomalous activities. We must
|
||||
stress that there is no single right way to perform an analysis. It will
|
||||
depend on the expertise of the person performing the analysis and the
|
||||
specific details of the task.
|
||||
|
||||
For more information about how to handle the HTTP protocol in Bro,
|
||||
including a complete list of the fields available in http.log, go to
|
||||
Bro's :doc:`HTTP script reference
|
||||
</scripts/base/protocols/http/main.bro>`.
|
||||
|
||||
------------------------
|
||||
Detecting a Proxy Server
|
||||
------------------------
|
||||
|
||||
A proxy server is a device on your network configured to request a
|
||||
service on behalf of a third system; one of the most common examples is
|
||||
a Web proxy server. A client without Internet access connects to the
|
||||
proxy and requests a web page, the proxy sends the request to the web
|
||||
server, which receives the response, and passes it to the original
|
||||
client.
|
||||
|
||||
Proxies were conceived to help manage a network and provide better
|
||||
encapsulation. Proxies by themselves are not a security threat, but a
|
||||
misconfigured or unauthorized proxy can allow others, either inside or
|
||||
outside the network, to access any web site and even conduct malicious
|
||||
activities anonymously using the network's resources.
|
||||
|
||||
What Proxy Server traffic looks like
|
||||
-------------------------------------
|
||||
|
||||
In general, when a client starts talking with a proxy server, the
|
||||
traffic consists of two parts: (i) a GET request, and (ii) an HTTP/
|
||||
reply::
|
||||
|
||||
Request: GET http://www.bro.org/ HTTP/1.1
|
||||
Reply: HTTP/1.0 200 OK
|
||||
|
||||
This will differ from traffic between a client and a normal Web server
|
||||
because GET requests should not include "http:" in the URI. So we can
|
||||
use this to identify a proxy server.
|
||||
|
||||
We can write a basic script in Bro to handle the http_reply event and
|
||||
detect a reply for a ``GET http://`` request.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/httpmonitor/http_proxy_01.bro
|
||||
|
||||
.. btest:: http_proxy_01
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/http/proxy.pcap ${DOC_ROOT}/httpmonitor/http_proxy_01.bro
|
||||
|
||||
Basically, the script is checking for a "200 OK" status code on a reply
|
||||
for a request that includes "http:" (case insensitive). In reality, the
|
||||
HTTP protocol defines several success status codes other than 200, so we
|
||||
will extend our basic script to also consider the additional codes.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/httpmonitor/http_proxy_02.bro
|
||||
|
||||
.. btest:: http_proxy_02
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/http/proxy.pcap ${DOC_ROOT}/httpmonitor/http_proxy_02.bro
|
||||
|
||||
Next, we will make sure that the responding proxy is part of our local
|
||||
network.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/httpmonitor/http_proxy_03.bro
|
||||
|
||||
.. btest:: http_proxy_03
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/http/proxy.pcap ${DOC_ROOT}/httpmonitor/http_proxy_03.bro
|
||||
|
||||
.. note::
|
||||
|
||||
The redefinition of :bro:see:`Site::local_nets` is only done inside
|
||||
this script to make it a self-contained example. It's typically
|
||||
redefined somewhere else.
|
||||
|
||||
Finally, our goal should be to generate an alert when a proxy has been
|
||||
detected instead of printing a message on the console output. For that,
|
||||
we will tag the traffic accordingly and define a new ``Open_Proxy``
|
||||
``Notice`` type to alert on all tagged communications. Once a
|
||||
notification has been fired, we will further suppress it for one day.
|
||||
Below is the complete script.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/httpmonitor/http_proxy_04.bro
|
||||
|
||||
.. btest:: http_proxy_04
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/http/proxy.pcap ${DOC_ROOT}/httpmonitor/http_proxy_04.bro
|
||||
@TEST-EXEC: btest-rst-include notice.log
|
||||
|
||||
Note that this script only logs the presence of the proxy to
|
||||
``notice.log``, but if an additional email is desired (and email
|
||||
functionality is enabled), then that's done simply by redefining
|
||||
:bro:see:`Notice::emailed_types` to add the ``Open_Proxy`` notice type
|
||||
to it.
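
That one-line change could look like this (assuming email delivery is already
configured for your installation)::

    redef Notice::emailed_types += { HTTP::Open_Proxy };
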
|
||||
|
||||
----------------
|
||||
Inspecting Files
|
||||
----------------
|
||||
|
||||
Files are often transmitted on regular HTTP conversations between a
|
||||
client and a server. Most of the time these files are harmless, just
|
||||
images and some other multimedia content, but there are also types of
|
||||
files, especially executable files, that can damage your system. We can
|
||||
instruct Bro to create a copy of all files of certain types that it sees
|
||||
using the :ref:`File Analysis Framework <file-analysis-framework>`
|
||||
(introduced with Bro 2.2):
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/httpmonitor/file_extraction.bro
|
||||
|
||||
.. btest:: file_extraction
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd -n 5 bro -r ${TRACES}/http/bro.org.pcap ${DOC_ROOT}/httpmonitor/file_extraction.bro
|
||||
|
||||
Here, the ``mime_to_ext`` table serves two purposes. It defines which
|
||||
mime types to extract and also the file suffix of the extracted files.
|
||||
Extracted files are written to a new ``extract_files`` subdirectory.
|
||||
Also note that the first conditional in the :bro:see:`file_new` event
|
||||
handler can be removed to make this behavior generic to other protocols
|
||||
besides HTTP.
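
If the default ``extract_files`` output location is not what you want, the
extraction analyzer's output prefix can likely be redefined as well (a sketch,
assuming the ``FileExtract::prefix`` option; the path is a placeholder)::

    # Hypothetical example: write extracted files under a dedicated data disk.
    redef FileExtract::prefix = "/data/extract_files/";
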
|
|
@ -1,23 +1,52 @@
|
|||
|
||||
.. Bro documentation master file
|
||||
|
||||
=================
|
||||
Bro Documentation
|
||||
=================
|
||||
==========
|
||||
Bro Manual
|
||||
==========
|
||||
|
||||
Introduction Section
|
||||
====================
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
intro/index.rst
|
||||
cluster/index.rst
|
||||
install/index.rst
|
||||
quickstart/index.rst
|
||||
using/index.rst
|
||||
configuration/index.rst
|
||||
|
||||
..
|
||||
|
||||
.. _using-bro:
|
||||
|
||||
Using Bro Section
|
||||
=================
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
logs/index.rst
|
||||
httpmonitor/index.rst
|
||||
broids/index.rst
|
||||
mimestats/index.rst
|
||||
|
||||
..
|
||||
|
||||
Reference Section
|
||||
=================
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
scripting/index.rst
|
||||
frameworks/index.rst
|
||||
cluster/index.rst
|
||||
script-reference/index.rst
|
||||
components/index.rst
|
||||
|
||||
..
|
||||
|
||||
* :ref:`General Index <genindex>`
|
||||
* :ref:`search`
|
||||
|
||||
|
|
|
@ -1,43 +1,47 @@
|
|||
|
||||
.. _upgrade-guidelines:
|
||||
|
||||
==================
|
||||
General Guidelines
|
||||
==================
|
||||
==============
|
||||
How to Upgrade
|
||||
==============
|
||||
|
||||
If you're doing an upgrade install (rather than a fresh install),
|
||||
there's two suggested approaches: either install Bro using the same
|
||||
installation prefix directory as before, or pick a new prefix and copy
|
||||
local customizations over. In the following we summarize general
|
||||
guidelines for upgrading, see the :ref:`release-notes` for
|
||||
version-specific information.
|
||||
local customizations over. Regardless of which approach you choose,
|
||||
if you are using BroControl, then after upgrading Bro you will need to
|
||||
run "broctl check" (to verify that your new configuration is OK)
|
||||
and "broctl install" to complete the upgrade process.
|
||||
|
||||
Re-Using Previous Install Prefix
|
||||
In the following we summarize general guidelines for upgrading, see
|
||||
the :ref:`release-notes` for version-specific information.
|
||||
|
||||
|
||||
Reusing Previous Install Prefix
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you choose to configure and install Bro with the same prefix
|
||||
directory as before, local customization and configuration to files in
|
||||
``$prefix/share/bro/site`` and ``$prefix/etc`` won't be overwritten
|
||||
(``$prefix`` indicating the root of where Bro was installed). Also, logs
|
||||
generated at run-time won't be touched by the upgrade. (But making
|
||||
a backup of local changes before upgrading is still recommended.)
|
||||
generated at run-time won't be touched by the upgrade. Backing up local
|
||||
changes before upgrading is still recommended.
|
||||
|
||||
After upgrading, remember to check ``$prefix/share/bro/site`` and
|
||||
``$prefix/etc`` for ``.example`` files, which indicate the
|
||||
distribution's version of the file differs from the local one, which may
|
||||
include local changes. Review the differences, and make adjustments
|
||||
as necessary (for differences that aren't the result of a local change,
|
||||
use the new version's).
|
||||
``$prefix/etc`` for ``.example`` files, which indicate that the
|
||||
distribution's version of the file differs from the local one, and therefore,
|
||||
may include local changes. Review the differences and make adjustments
|
||||
as necessary. Use the new version for differences that aren't a result of
|
||||
a local change.
|
||||
|
||||
Using a New Install prefix
|
||||
Using a New Install Prefix
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
If you want to install the newer version in a different prefix
|
||||
directory than before, you can just copy local customization and
|
||||
configuration files from ``$prefix/share/bro/site`` and ``$prefix/etc``
|
||||
to the new location (``$prefix`` indicating the root of where Bro was
|
||||
originally installed). Make sure to review the files for difference
|
||||
before copying and make adjustments as necessary (for differences that
|
||||
aren't the result of a local change, use the new version's). Of
|
||||
particular note, the copied version of ``$prefix/etc/broctl.cfg`` is
|
||||
likely to need changes to the ``SpoolDir`` and ``LogDir`` settings.
|
||||
To install the newer version in a different prefix directory than before,
|
||||
copy local customization and configuration files from ``$prefix/share/bro/site``
|
||||
and ``$prefix/etc`` to the new location (``$prefix`` indicating the root of
|
||||
where Bro was originally installed). Review the files for differences
|
||||
before copying and make adjustments as necessary (use the new version for
|
||||
differences that aren't a result of a local change). Of particular note,
|
||||
the copied version of ``$prefix/etc/broctl.cfg`` is likely to need changes
|
||||
to the ``SpoolDir`` and ``LogDir`` settings.
|
||||
|
|
|
@ -3,7 +3,7 @@
|
|||
.. _Xcode: https://developer.apple.com/xcode/
|
||||
.. _MacPorts: http://www.macports.org
|
||||
.. _Fink: http://www.finkproject.org
|
||||
.. _Homebrew: http://mxcl.github.com/homebrew
|
||||
.. _Homebrew: http://brew.sh
|
||||
.. _bro downloads page: http://bro.org/download/index.html
|
||||
|
||||
.. _installing-bro:
|
||||
|
@ -29,7 +29,6 @@ before you begin:
|
|||
* Libpcap (http://www.tcpdump.org)
|
||||
* OpenSSL libraries (http://www.openssl.org)
|
||||
* BIND8 library
|
||||
* Libmagic 5.04 or greater
|
||||
* Libz
|
||||
* Bash (for BroControl)
|
||||
* Python (for BroControl)
|
||||
|
@ -44,7 +43,6 @@ To build Bro from source, the following additional dependencies are required:
|
|||
* Flex (Fast Lexical Analyzer)
|
||||
* Libpcap headers (http://www.tcpdump.org)
|
||||
* OpenSSL headers (http://www.openssl.org)
|
||||
* libmagic headers
|
||||
* zlib headers
|
||||
* Perl
|
||||
|
||||
|
@ -55,13 +53,13 @@ that ``bash`` and ``python`` are in your ``PATH``):
|
|||
|
||||
.. console::
|
||||
|
||||
sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig zlib-devel file-devel
|
||||
sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig zlib-devel
|
||||
|
||||
* DEB/Debian-based Linux:
|
||||
|
||||
.. console::
|
||||
|
||||
sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev libmagic-dev
|
||||
sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev
|
||||
|
||||
* FreeBSD:
|
||||
|
||||
|
@ -78,15 +76,11 @@ that ``bash`` and ``python`` are in your ``PATH``):
|
|||
then going through its "Preferences..." -> "Downloads" menus to
|
||||
install the "Command Line Tools" component.
|
||||
|
||||
Lion (10.7) and Mountain Lion (10.8) come with all required
|
||||
dependencies except for CMake_, SWIG_, and ``libmagic``.
|
||||
|
||||
OS X comes with all required dependencies except for CMake_ and SWIG_.
|
||||
Distributions of these dependencies can likely be obtained from your
|
||||
preferred Mac OS X package management system (e.g. MacPorts_, Fink_,
|
||||
or Homebrew_).
|
||||
|
||||
Specifically for MacPorts, the ``cmake``, ``swig``,
|
||||
``swig-python`` and ``file`` packages provide the required dependencies.
|
||||
or Homebrew_). Specifically for MacPorts, the ``cmake``, ``swig``,
|
||||
and ``swig-python`` packages provide the required dependencies.
|
||||
|
||||
|
||||
Optional Dependencies
|
||||
|
@ -95,7 +89,7 @@ Optional Dependencies
|
|||
Bro can make use of some optional libraries and tools if they are found at
|
||||
build time:
|
||||
|
||||
* LibGeoIP (for geo-locating IP addresses)
|
||||
* LibGeoIP (for geolocating IP addresses)
|
||||
* sendmail (enables Bro and BroControl to send mail)
|
||||
* gawk (enables all features of bro-cut)
|
||||
* curl (used by a Bro script that implements active HTTP)
|
||||
|
@ -143,14 +137,14 @@ The primary install prefix for binary packages is ``/opt/bro``.
|
|||
Non-MacOS packages that include BroControl also put variable/runtime
|
||||
data (e.g. Bro logs) in ``/var/opt/bro``.
|
||||
|
||||
Installing From Source
|
||||
Installing from Source
|
||||
==========================
|
||||
|
||||
Bro releases are bundled into source packages for convenience and
|
||||
available from the `bro downloads page`_. Alternatively, the latest
|
||||
Bro releases are bundled into source packages for convenience and are
|
||||
available on the `bro downloads page`_. Alternatively, the latest
|
||||
Bro development version can be obtained through git repositories
|
||||
hosted at ``git.bro.org``. See our `git development documentation
|
||||
<http://bro.org/development/process.html>`_ for comprehensive
|
||||
<http://bro.org/development/howtos/process.html>`_ for comprehensive
|
||||
information on Bro's use of git revision control, but the short story
|
||||
for downloading the full source code experience for Bro via git is:
|
||||
|
||||
|
@ -190,6 +184,11 @@ OpenBSD users, please see our `FAQ
|
|||
<http://www.bro.org/documentation/faq.html>`_ if you are having
|
||||
problems installing Bro.
|
||||
|
||||
Finally, if you want to build the Bro documentation (not required, because
|
||||
all of the documentation for the latest Bro release is available on the
|
||||
Bro web site), there are instructions in ``doc/README`` in the source
|
||||
distribution.
|
||||
|
||||
Configure the Run-Time Environment
|
||||
==================================
|
||||
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
|
||||
.. _using-bro:
|
||||
.. _bro-logging:
|
||||
|
||||
=========
|
||||
Using Bro
|
||||
=========
|
||||
===========
|
||||
Bro Logging
|
||||
===========
|
||||
|
||||
.. contents::
|
||||
|
||||
|
@ -24,17 +24,17 @@ Working with Log Files
|
|||
|
||||
Generally, all of Bro's log files are produced by a corresponding
|
||||
script that defines their individual structure. However, as each log
|
||||
file flows through the Logging Framework, there share a set of
|
||||
file flows through the Logging Framework, they share a set of
|
||||
structural similarities. Without breaking into the scripting aspect of
|
||||
Bro here, a bird's eye view of how the log files are produced would
|
||||
progress as follows. The script's author defines the kinds of data,
|
||||
Bro here, a bird's eye view of how the log files are produced
|
||||
progresses as follows. The script's author defines the kinds of data,
|
||||
such as the originating IP address or the duration of a connection,
|
||||
which will make up the fields (i.e., columns) of the log file. The
|
||||
author then decides what network activity should generate a single log
|
||||
file entry (i.e., one line); that could, e.g., be a connection having
|
||||
been completed or an HTTP ``GET`` method being issued by an
|
||||
file entry (i.e., one line). For example, this could be a connection
|
||||
having been completed or an HTTP ``GET`` request being issued by an
|
||||
originator. When these behaviors are observed during operation, the
|
||||
data is passed to the Logging Framework which, in turn, adds the entry
|
||||
data is passed to the Logging Framework which adds the entry
|
||||
to the appropriate log file.
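
In script terms, that flow boils down to three pieces: a record type that
defines the columns, a log stream created from that record, and writes
against the stream. The following is only an illustrative sketch (the module,
stream, and field names are made up for the example)::

    module Example;

    export {
        redef enum Log::ID += { LOG };

        type Info: record {
            ts:  time   &log;
            msg: string &log;
        };
    }

    event bro_init()
        {
        # Register the stream; its columns come from the Info record.
        Log::create_stream(Example::LOG, [$columns=Info]);

        # Write one entry; in practice this happens from protocol events.
        Log::write(Example::LOG, [$ts=current_time(), $msg="example entry"]);
        }
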
|
||||
|
||||
As the fields of the log entries can be further customized by the
|
||||
|
@ -57,7 +57,7 @@ data, the string ``(empty)`` as the indicator for an empty field and
|
|||
the ``-`` character as the indicator for a field that hasn't been set.
|
||||
The timestamp for when the file was created is included under
|
||||
``#open``. The header then goes on to detail the fields being listed
|
||||
in the file and the data types of those fields in ``#fields`` and
|
||||
in the file and the data types of those fields, in ``#fields`` and
|
||||
``#types``, respectively. These two entries are often the two most
|
||||
significant points of interest as they detail not only the field names
|
||||
but the data types used. When navigating through the different log
|
||||
|
@ -66,12 +66,12 @@ definitions readily available saves the user some mental leg work. The
|
|||
field names are also a key resource for using the :ref:`bro-cut
|
||||
<bro-cut>` utility included with Bro, see below.
|
||||
|
||||
Next to the header follows the main content; in this example we see 7
|
||||
Next to the header follows the main content. In this example we see 7
|
||||
connections with their key properties, such as originator and
|
||||
responder IP addresses (note how Bro transparely handles both IPv4 and
|
||||
IPv6), transport-layer ports, application-layer services - the
|
||||
``service`` field is filled ias Bro determines a specific protocol to
|
||||
be in use, independent of the connection's ports - payload size, and
|
||||
responder IP addresses (note how Bro transparently handles both IPv4 and
|
||||
IPv6), transport-layer ports, application-layer services (the
|
||||
``service`` field is filled in as Bro determines a specific protocol to
|
||||
be in use, independent of the connection's ports), payload size, and
|
||||
more. See :bro:type:`Conn::Info` for a description of all fields.
|
||||
|
||||
In addition to ``conn.log``, Bro generates many further logs by
|
||||
|
@ -87,8 +87,8 @@ default, including:
|
|||
A log of FTP session-level activity.
|
||||
|
||||
``files.log``
|
||||
Summaries of files transfered over the network. This information
|
||||
is aggregrated from different protocols, including HTTP, FTP, and
|
||||
Summaries of files transferred over the network. This information
|
||||
is aggregated from different protocols, including HTTP, FTP, and
|
||||
SMTP.
|
||||
|
||||
``http.log``
|
||||
|
@ -106,7 +106,7 @@ default, including:
|
|||
``weird.log``
|
||||
A log of unexpected protocol-level activity. Whenever Bro's
|
||||
protocol analysis encounters a situation it would not expect
|
||||
(e.g., an RFC violation) is logs it in this file. Note that in
|
||||
(e.g., an RFC violation) it logs it in this file. Note that in
|
||||
practice, real-world networks tend to exhibit a large number of
|
||||
such "crud" that is usually not worth following up on.
|
||||
|
||||
|
@ -120,7 +120,7 @@ Using ``bro-cut``
|
|||
|
||||
The ``bro-cut`` utility can be used in place of other tools to build
|
||||
terminal commands that remain flexible and accurate independent of
|
||||
possible changes to log file itself. It accomplishes this by parsing
|
||||
possible changes to the log file itself. It accomplishes this by parsing
|
||||
the header in each file and allowing the user to refer to the specific
|
||||
columnar data available (in contrast to tools like ``awk`` that
|
||||
require the user to refer to fields referenced by their position).
|
||||
|
@ -131,7 +131,7 @@ from a ``conn.log``:
|
|||
|
||||
@TEST-EXEC: btest-rst-cmd -n 10 "cat conn.log | bro-cut id.orig_h id.orig_p id.resp_h duration"
|
||||
|
||||
The correspding ``awk`` command would look like this:
|
||||
The corresponding ``awk`` command will look like this:
|
||||
|
||||
.. btest:: using_bro
|
||||
|
||||
|
@ -185,8 +185,8 @@ Working with Timestamps
|
|||
|
||||
``bro-cut`` accepts the flag ``-d`` to convert the epoch time values
|
||||
in the log files to human-readable format. The following command
|
||||
includes the human readable time stamp, the unique identifier and the
|
||||
HTTP ``Host`` and HTTP ``URI`` as extracted from the ``http.log``
|
||||
includes the human readable time stamp, the unique identifier, the
|
||||
HTTP ``Host``, and HTTP ``URI`` as extracted from the ``http.log``
|
||||
file:
|
||||
|
||||
.. btest:: using_bro
|
||||
|
@ -218,7 +218,7 @@ See ``man strfime`` for more options for the format string.
|
|||
Using UIDs
|
||||
----------
|
||||
|
||||
While Bro can do signature based analysis, its primary focus is on
|
||||
While Bro can do signature-based analysis, its primary focus is on
|
||||
behavioral detection which alters the practice of log review from
|
||||
"reactionary review" to a process a little more akin to a hunting
|
||||
trip. A common progression of review includes correlating a session
|
||||
|
@ -251,3 +251,43 @@ stream and Bro is able to extract and track that information for you,
|
|||
giving you an in-depth and structured view into HTTP traffic on your
|
||||
network.
|
||||
|
||||
-----------------------
|
||||
Common Log Files
|
||||
-----------------------
|
||||
As a monitoring tool, Bro records a detailed view of the traffic inspected
|
||||
and the events generated in a series of relevant log files. These files can
|
||||
later be reviewed for monitoring, auditing and troubleshooting purposes.
|
||||
|
||||
In this section we present a brief explanation of the most commonly used log
|
||||
files generated by Bro including links to descriptions of some of the fields
|
||||
for each log type.
|
||||
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| Log File | Description | Field Descriptions |
|
||||
+=================+=======================================+==============================+
|
||||
| http.log | Shows all HTTP requests and replies | :bro:type:`HTTP::Info` |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| ftp.log | Records FTP activity | :bro:type:`FTP::Info` |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| ssl.log | Records SSL sessions including | :bro:type:`SSL::Info` |
|
||||
| | certificates used | |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| known_certs.log | Includes SSL certificates used | :bro:type:`Known::CertsInfo` |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| smtp.log | Summarizes SMTP traffic on a network | :bro:type:`SMTP::Info` |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| dns.log | Shows all DNS activity on a network | :bro:type:`DNS::Info` |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| conn.log | Records all connections seen by Bro | :bro:type:`Conn::Info` |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| dpd.log | Shows network activity on | :bro:type:`DPD::Info` |
|
||||
| | non-standard ports | |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| files.log | Records information about all files | :bro:type:`Files::Info` |
|
||||
| | transmitted over the network | |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
| weird.log | Records unexpected protocol-level | :bro:type:`Weird::Info` |
|
||||
| | activity | |
|
||||
+-----------------+---------------------------------------+------------------------------+
|
||||
|
||||
|
71
doc/mimestats/index.rst
Normal file
|
@ -0,0 +1,71 @@
|
|||
|
||||
.. _mime-stats:
|
||||
|
||||
====================
|
||||
MIME Type Statistics
|
||||
====================
|
||||
|
||||
Files are constantly transmitted over HTTP on regular networks. These
|
||||
files belong to a specific category (e.g., executable, text, image)
|
||||
identified by a `Multipurpose Internet Mail Extension (MIME)
|
||||
<http://en.wikipedia.org/wiki/MIME>`_. Although MIME was originally
|
||||
developed to identify the type of non-text attachments in email, it is
|
||||
also used by a web browser to identify the type of files transmitted and
|
||||
present them accordingly.
|
||||
|
||||
In this tutorial, we will demonstrate how to use the Sumstats Framework
|
||||
to collect statistical information based on MIME types; specifically,
|
||||
the total number of occurrences, size in bytes, and number of unique
|
||||
hosts transmitting files over HTTP per each type. For instructions on
|
||||
extracting and creating a local copy of these files, visit :ref:`this
|
||||
tutorial <http-monitor>`.
|
||||
|
||||
------------------------------------------------
|
||||
MIME Statistics with Sumstats
|
||||
------------------------------------------------
|
||||
|
||||
When working with the :ref:`Summary Statistics Framework
|
||||
<sumstats-framework>`, you need to define three different pieces: (i)
|
||||
Observations, where the event is observed and fed into the framework.
|
||||
(ii) Reducers, where observations are collected and measured. (iii)
|
||||
Sumstats, where the main functionality is implemented.
|
||||
|
||||
We start by defining our observation along with a record to store
|
||||
all statistical values and an observation interval. We are conducting our
|
||||
observation on the :bro:see:`HTTP::log_http` event and are interested
|
||||
in the MIME type, size of the file ("response_body_len"), and the
|
||||
originator host ("orig_h"). We use the MIME type as our key and create
|
||||
observers for the other two values.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
|
||||
:lines: 6-29, 54-64
|
||||
|
||||
Next, we create the reducers. The first will accumulate file sizes
|
||||
and the second will make sure we only store a host ID once. Below is
|
||||
the partial code from a :bro:see:`bro_init` handler.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
|
||||
:lines: 34-37
|
||||
|
||||
In our final step, we create the SumStats where we check for the
|
||||
observation interval. Once it expires, we populate the record
|
||||
(defined above) with all the relevant data and write it to a log.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
|
||||
:lines: 38-51
|
||||
|
||||
After putting the three pieces together we end up with the following final code for
|
||||
our script.
|
||||
|
||||
.. btest-include:: ${DOC_ROOT}/mimestats/mimestats.bro
|
||||
|
||||
.. btest:: mimestats
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/http/bro.org.pcap ${DOC_ROOT}/mimestats/mimestats.bro
|
||||
@TEST-EXEC: btest-rst-include mime_metrics.log
|
||||
|
||||
.. note::
|
||||
|
||||
The redefinition of :bro:see:`Site::local_nets` is only done inside
|
||||
this script to make it a self-contained example. It's typically
|
||||
redefined somewhere else.
|
64
doc/mimestats/mimestats.bro
Normal file
|
@ -0,0 +1,64 @@
@load base/utils/site
@load base/frameworks/sumstats

redef Site::local_nets += { 10.0.0.0/8 };

module MimeMetrics;

export {

    redef enum Log::ID += { LOG };

    type Info: record {
        ## Timestamp when the log line was finished and written.
        ts:         time     &log;
        ## Time interval that the log line covers.
        ts_delta:   interval &log;
        ## The mime type
        mtype:      string   &log;
        ## The number of unique local hosts that fetched this mime type
        uniq_hosts: count    &log;
        ## The number of hits to the mime type
        hits:       count    &log;
        ## The total number of bytes received by this mime type
        bytes:      count    &log;
    };

    ## The frequency of logging the stats collected by this script.
    const break_interval = 5mins &redef;
}

event bro_init() &priority=3
    {
    Log::create_stream(MimeMetrics::LOG, [$columns=Info]);
    local r1: SumStats::Reducer = [$stream="mime.bytes",
                                   $apply=set(SumStats::SUM)];
    local r2: SumStats::Reducer = [$stream="mime.hits",
                                   $apply=set(SumStats::UNIQUE)];
    SumStats::create([$name="mime-metrics",
                      $epoch=break_interval,
                      $reducers=set(r1, r2),
                      $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                          {
                          local l: Info;
                          l$ts         = network_time();
                          l$ts_delta   = break_interval;
                          l$mtype      = key$str;
                          l$bytes      = double_to_count(floor(result["mime.bytes"]$sum));
                          l$hits       = result["mime.hits"]$num;
                          l$uniq_hosts = result["mime.hits"]$unique;
                          Log::write(MimeMetrics::LOG, l);
                          }]);
    }

event HTTP::log_http(rec: HTTP::Info)
    {
    if ( Site::is_local_addr(rec$id$orig_h) && rec?$resp_mime_types )
        {
        local mime_type = rec$resp_mime_types[0];
        SumStats::observe("mime.bytes", [$str=mime_type],
                          [$num=rec$response_body_len]);
        SumStats::observe("mime.hits",  [$str=mime_type],
                          [$str=cat(rec$id$orig_h)]);
        }
    }
|
|
@ -12,8 +12,10 @@ Quick Start Guide
|
|||
Bro works on most modern, Unix-based systems and requires no custom
|
||||
hardware. It can be downloaded in either pre-built binary package or
|
||||
source code forms. See :ref:`installing-bro` for instructions on how to
|
||||
install Bro. Below, ``$PREFIX`` is used to reference the Bro
|
||||
installation root directory, which by default is ``/usr/local/`` if
|
||||
install Bro.
|
||||
|
||||
In the examples below, ``$PREFIX`` is used to reference the Bro
|
||||
installation root directory, which by default is ``/usr/local/bro`` if
|
||||
you install from source.
|
||||
|
||||
Managing Bro with BroControl
|
||||
|
@ -21,13 +23,16 @@ Managing Bro with BroControl
|
|||
|
||||
BroControl is an interactive shell for easily operating/managing Bro
|
||||
installations on a single system or even across multiple systems in a
|
||||
traffic-monitoring cluster.
|
||||
traffic-monitoring cluster. This section explains how to use BroControl
|
||||
to manage a stand-alone Bro installation. For instructions on how to
|
||||
configure a Bro cluster, see the :doc:`Cluster Configuration
|
||||
<../configuration/index>` documentation.
|
||||
|
||||
A Minimal Starting Configuration
|
||||
--------------------------------
|
||||
|
||||
These are the basic configuration changes to make for a minimal BroControl installation
|
||||
that will manage a single Bro instance on the ``localhost``:
|
||||
These are the basic configuration changes to make for a minimal BroControl
|
||||
installation that will manage a single Bro instance on the ``localhost``:
|
||||
|
||||
1) In ``$PREFIX/etc/node.cfg``, set the right interface to monitor.
|
||||
2) In ``$PREFIX/etc/networks.cfg``, comment out the default settings and add
|
||||
|
@ -72,7 +77,8 @@ You can leave it running for now, but to stop this Bro instance you would do:
|
|||
|
||||
[BroControl] > stop
|
||||
|
||||
We also recommend to insert the following entry into `crontab`::
|
||||
We also recommend inserting the following entry into the crontab of the user
|
||||
running BroControl::
|
||||
|
||||
0-59/5 * * * * $PREFIX/bin/broctl cron
|
||||
|
||||
|
@ -154,7 +160,7 @@ changes we want to make:
|
|||
attempt looks like it may have been successful, and we want email when
|
||||
that happens, but only for certain servers.
|
||||
|
||||
So we've defined *what* we want to do, but need to know *where* to do it.
|
||||
We've defined *what* we want to do, but need to know *where* to do it.
|
||||
The answer is to use a script written in the Bro programming language, so
|
||||
let's do a quick intro to Bro scripting.
|
||||
|
||||
|
@ -180,7 +186,7 @@ must explicitly choose if they want to load them.
|
|||
|
||||
The main entry point for the default analysis configuration of a standalone
|
||||
Bro instance managed by BroControl is the ``$PREFIX/share/bro/site/local.bro``
|
||||
script. So we'll be adding to that in the following sections, but first
|
||||
script. We'll be adding to that in the following sections, but first
|
||||
we have to figure out what to add.
|
||||
|
||||
Redefining Script Option Variables
|
||||
|
@ -196,7 +202,7 @@ A redefineable constant might seem strange, but what that really means is that
|
|||
the variable's value may not change at run-time, but whose initial value can be
|
||||
modified via the ``redef`` operator at parse-time.
|
||||
|
||||
So let's continue on our path to modify the behavior for the two SSL
|
||||
Let's continue on our path to modify the behavior for the two SSL
|
||||
and SSH notices. Looking at :doc:`/scripts/base/frameworks/notice/main.bro`,
|
||||
we see that it advertises:
|
||||
|
||||
|
@ -210,7 +216,7 @@ we see that it advertises:
|
|||
const ignored_types: set[Notice::Type] = {} &redef;
|
||||
}
|
||||
|
||||
That's exactly what we want to do for the SSL notice. So add to ``local.bro``:
|
||||
That's exactly what we want to do for the SSL notice. Add to ``local.bro``:
|
||||
|
||||
.. code:: bro
|
||||
|
||||
|
@ -275,9 +281,9 @@ an email on the condition that the predicate function evaluates to true, which
|
|||
is whenever the notice type is an SSH login and the responding host stored
|
||||
inside the ``Info`` record's connection field is in the set of watched servers.
|
||||
|
||||
.. note:: record field member access is done with the '$' character
|
||||
.. note:: Record field member access is done with the '$' character
|
||||
instead of a '.' as might be expected from other languages, in
|
||||
order to avoid ambiguity with the builtin address type's use of '.'
|
||||
order to avoid ambiguity with the built-in address type's use of '.'
|
||||
in IPv4 dotted decimal representations.
|
||||
|
||||
Remember, to finalize that configuration change perform the ``check``,
|
||||
|
@ -291,9 +297,10 @@ tweak the most basic options. Here's some suggestions on what to explore next:
|
|||
|
||||
* We only looked at how to change options declared in the notice framework,
|
||||
there's many more options to look at in other script packages.
|
||||
* Continue reading with :ref:`using-bro` chapter which goes into more
|
||||
depth on working with Bro; then look at :ref:`writing-scripts` for
|
||||
learning how to start writing your own scripts.
|
||||
* Continue reading with :ref:`Using Bro <using-bro>` chapter which goes
|
||||
into more depth on working with Bro; then look at
|
||||
:ref:`writing-scripts` for learning how to start writing your own
|
||||
scripts.
|
||||
* Look at the scripts in ``$PREFIX/share/bro/policy`` for further ones
|
||||
you may want to load; you can browse their documentation at the
|
||||
:ref:`overview of script packages <script-packages>`.
|
||||
|
@ -406,7 +413,7 @@ logging) and adds SSL certificate validation.
|
|||
You might notice that a script you load from the command line uses the
|
||||
``@load`` directive in the Bro language to declare dependence on other scripts.
|
||||
This directive is similar to the ``#include`` of C/C++, except the semantics
|
||||
are "load this script if it hasn't already been loaded".
|
||||
are, "load this script if it hasn't already been loaded."
|
||||
|
||||
.. note:: If one wants Bro to be able to load scripts that live outside the
|
||||
default directories in Bro's installation root, the ``BROPATH`` environment
|
||||
|
|
|
@ -23,7 +23,8 @@ The Bro scripting language supports the following built-in types.
|
|||
|
||||
.. bro:type:: void
|
||||
|
||||
An internal Bro type representing the absence of a return type for a
|
||||
An internal Bro type (i.e., "void" is not a reserved keyword in the Bro
|
||||
scripting language) representing the absence of a return type for a
|
||||
function.
|
||||
|
||||
.. bro:type:: bool
|
||||
|
@ -132,10 +133,23 @@ The Bro scripting language supports the following built-in types.
|
|||
|
||||
Strings support concatenation (``+``), and assignment (``=``, ``+=``).
|
||||
Strings also support the comparison operators (``==``, ``!=``, ``<``,
|
||||
``<=``, ``>``, ``>=``). Substring searching can be performed using
|
||||
the "in" or "!in" operators (e.g., "bar" in "foobar" yields true).
|
||||
The number of characters in a string can be found by enclosing the
|
||||
string within pipe characters (e.g., ``|"abc"|`` is 3).
|
||||
``<=``, ``>``, ``>=``). The number of characters in a string can be
|
||||
found by enclosing the string within pipe characters (e.g., ``|"abc"|``
|
||||
is 3).
|
||||
|
||||
The subscript operator can extract an individual character or a substring
|
||||
of a string (string indexing is zero-based, but an index of
|
||||
-1 refers to the last character in the string, and -2 refers to the
|
||||
second-to-last character, etc.). When extracting a substring, the
|
||||
starting and ending index values are separated by a colon. For example::
|
||||
|
||||
local orig = "0123456789";
|
||||
local third_char = orig[2];
|
||||
local last_char = orig[-1];
|
||||
local first_three_chars = orig[0:2];
|
||||
|
||||
Substring searching can be performed using the "in" or "!in"
|
||||
operators (e.g., "bar" in "foobar" yields true).
|
||||
|
||||
Note that Bro represents strings internally as a count and vector of
|
||||
bytes rather than a NUL-terminated byte string (although string
|
||||
|
@ -767,7 +781,7 @@ The Bro scripting language supports the following built-in types.
|
|||
.. bro:type:: hook
|
||||
|
||||
A hook is another flavor of function that shares characteristics of
|
||||
both a :bro:type:`function` and a :bro:type:`event`. They are like
|
||||
both a :bro:type:`function` and an :bro:type:`event`. They are like
|
||||
events in that many handler bodies can be defined for the same hook
|
||||
identifier and the order of execution can be enforced with
|
||||
:bro:attr:`&priority`. They are more like functions in the way they
|
||||
|
@ -856,14 +870,14 @@ scripting language supports the following built-in attributes.
|
|||
.. bro:attr:: &optional
|
||||
|
||||
Allows a record field to be missing. For example the type ``record {
|
||||
a: int, b: port &optional }`` could be instantiated both as
|
||||
a: addr; b: port &optional; }`` could be instantiated both as
|
||||
singleton ``[$a=127.0.0.1]`` or pair ``[$a=127.0.0.1, $b=80/tcp]``.
|
||||
|
||||
.. bro:attr:: &default
|
||||
|
||||
Uses a default value for a record field, a function/hook/event
|
||||
parameter, or container elements. For example, ``table[int] of
|
||||
string &default="foo" }`` would create a table that returns the
|
||||
string &default="foo"`` would create a table that returns the
|
||||
:bro:type:`string` ``"foo"`` for any non-existing index.
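
As a brief illustration combining both attributes (the record and table
here are made up for this example):

.. code:: bro

    type Endpoint: record {
        host: addr;
        p:    port &optional;
    };

    global tags: table[addr] of string &default="unknown";

    event bro_init()
        {
        local e1: Endpoint = [$host=127.0.0.1];             # $p stays unset
        local e2: Endpoint = [$host=127.0.0.1, $p=80/tcp];
        print e1?$p, e2?$p;        # F, T
        print tags[192.168.0.1];   # "unknown", since nothing is stored there
        }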
|
||||
|
||||
.. bro:attr:: &redef
|
||||
|
@ -901,7 +915,7 @@ scripting language supports the following built-in attributes.
|
|||
Called right before a container element expires. The function's
|
||||
first parameter is of the same type of the container and the second
|
||||
parameter the same type of the container's index. The return
|
||||
value is a :bro:type:`interval` indicating the amount of additional
|
||||
value is an :bro:type:`interval` indicating the amount of additional
|
||||
time to wait before expiring the container element at the given
|
||||
index (which will trigger another execution of this function).
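
For example, a minimal sketch (the table and its contents are made up
here) of a container with an expiration callback:

.. code:: bro

    global last_seen: table[addr] of time
        &create_expire=5mins
        &expire_func=function(t: table[addr] of time, idx: addr): interval
            {
            print fmt("expiring state for %s", idx);
            return 0secs;   # a positive interval would delay expiration
            };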
|
||||
|
||||
|
@ -925,7 +939,7 @@ scripting language supports the following built-in attributes.
|
|||
|
||||
.. bro:attr:: &persistent
|
||||
|
||||
Makes a variable persistent, i.e., its value is writen to disk (per
|
||||
Makes a variable persistent, i.e., its value is written to disk (per
|
||||
default at shutdown time).
|
||||
|
||||
.. bro:attr:: &synchronized
|
||||
|
@ -957,8 +971,9 @@ scripting language supports the following built-in attributes.
|
|||
|
||||
.. bro:attr:: &priority
|
||||
|
||||
Specifies the execution priority of an event handler. Higher values
|
||||
are executed before lower ones. The default value is 0.
|
||||
Specifies the execution priority (as a signed integer) of a hook or
|
||||
event handler. Higher values are executed before lower ones. The
|
||||
default value is 0.
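
For example, with two handlers for the same event:

.. code:: bro

    event bro_init() &priority=10
        {
        print "runs first (priority 10)";
        }

    event bro_init() &priority=-10
        {
        print "runs last (priority -10)";
        }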
|
||||
|
||||
.. bro:attr:: &group
|
||||
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
@load base/protocols/conn
|
||||
@load base/protocols/dns
|
||||
@load base/protocols/http
|
||||
|
||||
event connection_state_remove(c: connection)
|
||||
{
|
||||
|
|
|
@ -87,7 +87,7 @@ Up until this point, the script has merely done some basic setup. With the next
|
|||
the script starts to define instructions to take in a given event.
|
||||
|
||||
.. btest-include:: ${BRO_SRC_ROOT}/scripts/policy/frameworks/files/detect-MHR.bro
|
||||
:lines: 38-62
|
||||
:lines: 38-71
|
||||
|
||||
The workhorse of the script is contained in the event handler for
|
||||
``file_hash``. The :bro:see:`file_hash` event allows scripts to access
|
||||
|
@ -232,7 +232,7 @@ overly populated.
|
|||
|
||||
.. btest:: connection-record-01
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_01.bro
|
||||
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_01.bro
|
||||
|
||||
As you can see from the output, the connection record is something of
|
||||
a jumble when printed on its own. Regularly taking a peek at a
|
||||
|
@ -248,9 +248,9 @@ originating host is referenced by ``c$id$orig_h`` which if given a
|
|||
narrative relates to ``orig_h`` which is a member of ``id`` which is
|
||||
a member of the data structure referred to as ``c`` that was passed
|
||||
into the event handler." Given that the responder port
|
||||
(``c$id$resp_p``) is ``53/tcp``, it's likely that Bro's base DNS scripts
|
||||
(``c$id$resp_p``) is ``53/tcp``, it's likely that Bro's base HTTP scripts
|
||||
can further populate the connection record. Let's load the
|
||||
``base/protocols/dns`` scripts and check the output of our script.
|
||||
``base/protocols/http`` scripts and check the output of our script.
|
||||
|
||||
Bro uses the dollar sign as its field delimiter and a direct
|
||||
correlation exists between the output of the connection record and the
|
||||
|
@ -262,16 +262,16 @@ brackets, which would correspond to the ``$``-delimiter in a Bro script.
|
|||
|
||||
.. btest:: connection-record-02
|
||||
|
||||
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/dns-session.trace ${DOC_ROOT}/scripting/connection_record_02.bro
|
||||
@TEST-EXEC: btest-rst-cmd bro -b -r ${TRACES}/http/get.trace ${DOC_ROOT}/scripting/connection_record_02.bro
|
||||
|
||||
The addition of the ``base/protocols/dns`` scripts populates the
|
||||
``dns=[]`` member of the connection record. While Bro is doing a
|
||||
The addition of the ``base/protocols/http`` scripts populates the
|
||||
``http=[]`` member of the connection record. While Bro is doing a
|
||||
massive amount of work in the background, it is in what is commonly
|
||||
called "scriptland" that details are being refined and decisions
|
||||
being made. Were we to continue running in "bare mode" we could slowly
|
||||
keep adding infrastructure through ``@load`` statements. For example,
|
||||
were we to ``@load base/frameworks/logging``, Bro would generate a
|
||||
``conn.log`` and ``dns.log`` for us in the current working directory.
|
||||
``conn.log`` and ``http.log`` for us in the current working directory.
|
||||
As mentioned above, including the appropriate ``@load`` statements is
|
||||
not only good practice, but can also help to indicate which
|
||||
functionalities are being used in a script. Take a second to run the
|
||||
|
@ -345,13 +345,13 @@ keyword. Unlike globals, constants can only be set or altered at
|
|||
parse time if the ``&redef`` attribute has been used. Afterwards (in
|
||||
runtime) the constants are unalterable. In most cases, re-definable
|
||||
constants are used in Bro scripts as containers for configuration
|
||||
options. For example, the configuration option to log password
|
||||
options. For example, the configuration option to log passwords
|
||||
decrypted from HTTP streams is stored in
|
||||
``HTTP::default_capture_password`` as shown in the stripped down
|
||||
:bro:see:`HTTP::default_capture_password` as shown in the stripped down
|
||||
excerpt from :doc:`/scripts/base/protocols/http/main.bro` below.
|
||||
|
||||
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro
|
||||
:lines: 8-10,19-21,120
|
||||
:lines: 9-11,20-22,121
|
||||
|
||||
Because the constant was declared with the ``&redef`` attribute, if we
|
||||
needed to turn this option on globally, we could do so by adding the
|
||||
|
|
magic (submodule, deleted)
|
@ -1 +0,0 @@
|
|||
Subproject commit e87fe13a7b776182ffc8c75076d42702f5c28fed
|
|
@ -7,10 +7,10 @@ module Unified2;
|
|||
export {
|
||||
redef enum Log::ID += { LOG };
|
||||
|
||||
## Directory to watch for Unified2 files.
|
||||
## File to watch for Unified2 files.
|
||||
const watch_file = "" &redef;
|
||||
|
||||
## File to watch for Unified2 records.
|
||||
## Directory to watch for Unified2 records.
|
||||
const watch_dir = "" &redef;
|
||||
|
||||
## The sid-msg.map file you would like to use for your alerts.
|
||||
|
|
scripts/base/files/x509/README (new file, 1 line)
|
@ -0,0 +1 @@
|
|||
Support for X509 certificates with the file analysis framework.
|
scripts/base/files/x509/__load__.bro (new file, 1 line)
|
@ -0,0 +1 @@
|
|||
@load ./main
|
scripts/base/files/x509/main.bro (new file, 77 lines)
|
@ -0,0 +1,77 @@
|
|||
@load base/frameworks/files
|
||||
@load base/files/hash
|
||||
|
||||
module X509;
|
||||
|
||||
export {
|
||||
redef enum Log::ID += { LOG };
|
||||
|
||||
type Info: record {
|
||||
## Current timestamp.
|
||||
ts: time &log;
|
||||
|
||||
## File id of this certificate.
|
||||
id: string &log;
|
||||
|
||||
## Basic information about the certificate.
|
||||
certificate: X509::Certificate &log;
|
||||
|
||||
## The opaque wrapping the certificate. Mainly used
|
||||
## for the verify operations.
|
||||
handle: opaque of x509;
|
||||
|
||||
## All extensions that were encountered in the certificate.
|
||||
extensions: vector of X509::Extension &default=vector();
|
||||
|
||||
## Subject alternative name extension of the certificate.
|
||||
san: X509::SubjectAlternativeName &optional &log;
|
||||
|
||||
## Basic constraints extension of the certificate.
|
||||
basic_constraints: X509::BasicConstraints &optional &log;
|
||||
};
|
||||
|
||||
## Event for accessing logged records.
|
||||
global log_x509: event(rec: Info);
|
||||
}
|
||||
|
||||
event bro_init() &priority=5
|
||||
{
|
||||
Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509]);
|
||||
}
|
||||
|
||||
redef record Files::Info += {
|
||||
## Information about X509 certificates. This is used to keep
|
||||
## certificate information until all events have been received.
|
||||
x509: X509::Info &optional;
|
||||
};
|
||||
|
||||
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=5
|
||||
{
|
||||
f$info$x509 = [$ts=f$info$ts, $id=f$id, $certificate=cert, $handle=cert_ref];
|
||||
}
|
||||
|
||||
event x509_extension(f: fa_file, ext: X509::Extension) &priority=5
|
||||
{
|
||||
if ( f$info?$x509 )
|
||||
f$info$x509$extensions[|f$info$x509$extensions|] = ext;
|
||||
}
|
||||
|
||||
event x509_ext_basic_constraints(f: fa_file, ext: X509::BasicConstraints) &priority=5
|
||||
{
|
||||
if ( f$info?$x509 )
|
||||
f$info$x509$basic_constraints = ext;
|
||||
}
|
||||
|
||||
event x509_ext_subject_alternative_name(f: fa_file, ext: X509::SubjectAlternativeName) &priority=5
|
||||
{
|
||||
if ( f$info?$x509 )
|
||||
f$info$x509$san = ext;
|
||||
}
|
||||
|
||||
event file_state_remove(f: fa_file) &priority=5
|
||||
{
|
||||
if ( ! f$info?$x509 )
|
||||
return;
|
||||
|
||||
Log::write(LOG, f$info$x509);
|
||||
}
|
|
@ -1 +1,2 @@
|
|||
@load ./main.bro
|
||||
@load ./magic
|
||||
|
|
scripts/base/frameworks/files/magic/__load__.bro (new file, 2 lines)
|
@ -0,0 +1,2 @@
|
|||
@load-sigs ./general
|
||||
@load-sigs ./libmagic
|
scripts/base/frameworks/files/magic/general.sig (new file, 11 lines)
|
@ -0,0 +1,11 @@
|
|||
# General purpose file magic signatures.
|
||||
|
||||
signature file-plaintext {
|
||||
file-magic /([[:print:][:space:]]{10})/
|
||||
file-mime "text/plain", -20
|
||||
}
|
||||
|
||||
signature file-tar {
|
||||
file-magic /([[:print:]\x00]){100}(([[:digit:]\x00\x20]){8}){3}/
|
||||
file-mime "application/x-tar", 150
|
||||
}
|
scripts/base/frameworks/files/magic/libmagic.sig (new file, 4213 lines; diff suppressed because it is too large)
|
@ -41,15 +41,15 @@ export {
|
|||
## If this file was transferred over a network
|
||||
## connection this should show the host or hosts that
|
||||
## the data sourced from.
|
||||
tx_hosts: set[addr] &log;
|
||||
tx_hosts: set[addr] &default=addr_set() &log;
|
||||
|
||||
## If this file was transferred over a network
|
||||
## connection this should show the host or hosts that
|
||||
## the data traveled to.
|
||||
rx_hosts: set[addr] &log;
|
||||
rx_hosts: set[addr] &default=addr_set() &log;
|
||||
|
||||
## Connection UIDs over which the file was transferred.
|
||||
conn_uids: set[string] &log;
|
||||
conn_uids: set[string] &default=string_set() &log;
|
||||
|
||||
## An identification of the source of the file data. E.g. it
|
||||
## may be a network protocol over which it was transferred, or a
|
||||
|
@ -63,12 +63,13 @@ export {
|
|||
depth: count &default=0 &log;
|
||||
|
||||
## A set of analysis types done during the file analysis.
|
||||
analyzers: set[string] &log;
|
||||
analyzers: set[string] &default=string_set() &log;
|
||||
|
||||
## A mime type provided by libmagic against the *bof_buffer*
|
||||
## field of :bro:see:`fa_file`, or in the cases where no
|
||||
## buffering of the beginning of file occurs, an initial
|
||||
## guess of the mime type based on the first data seen.
|
||||
## A mime type provided by the strongest file magic signature
|
||||
## match against the *bof_buffer* field of :bro:see:`fa_file`,
|
||||
## or in the cases where no buffering of the beginning of file
|
||||
## occurs, an initial guess of the mime type based on the first
|
||||
## data seen.
|
||||
mime_type: string &log &optional;
|
||||
|
||||
## A filename for the file if one is available from the source
|
||||
|
|
|
@ -17,27 +17,51 @@ module LogAscii;
|
|||
export {
|
||||
## If true, output everything to stdout rather than
|
||||
## into files. This is primarily for debugging purposes.
|
||||
##
|
||||
## This option is also available as a per-filter ``$config`` option.
|
||||
const output_to_stdout = F &redef;
|
||||
|
||||
## If true, the default will be to write logs in a JSON format.
|
||||
##
|
||||
## This option is also available as a per-filter ``$config`` option.
|
||||
const use_json = F &redef;
|
||||
|
||||
## Format of timestamps when writing out JSON. By default, the JSON formatter will
|
||||
## use double values for timestamps which represent the number of seconds from the
|
||||
## UNIX epoch.
|
||||
const json_timestamps: JSON::TimestampFormat = JSON::TS_EPOCH &redef;
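# For example (illustrative only; such redefs normally go into local.bro
# rather than into this file), JSON output with ISO 8601 timestamps can
# be enabled with:
#
#     redef LogAscii::use_json = T;
#     redef LogAscii::json_timestamps = JSON::TS_ISO8601;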
|
||||
|
||||
## If true, include lines with log meta information such as column names
|
||||
## with types, the values of ASCII logging options that are in use, and
|
||||
## the time when the file was opened and closed (the latter at the end).
|
||||
##
|
||||
## If writing in JSON format, this is implicitly disabled.
|
||||
const include_meta = T &redef;
|
||||
|
||||
## Prefix for lines with meta information.
|
||||
##
|
||||
## This option is also available as a per-filter ``$config`` option.
|
||||
const meta_prefix = "#" &redef;
|
||||
|
||||
## Separator between fields.
|
||||
##
|
||||
## This option is also available as a per-filter ``$config`` option.
|
||||
const separator = Log::separator &redef;
|
||||
|
||||
## Separator between set elements.
|
||||
##
|
||||
## This option is also available as a per-filter ``$config`` option.
|
||||
const set_separator = Log::set_separator &redef;
|
||||
|
||||
## String to use for empty fields. This should be different from
|
||||
## *unset_field* to make the output unambiguous.
|
||||
##
|
||||
## This option is also available as a per-filter ``$config`` option.
|
||||
const empty_field = Log::empty_field &redef;
|
||||
|
||||
## String to use for an unset &optional field.
|
||||
##
|
||||
## This option is also available as a per-filter ``$config`` option.
|
||||
const unset_field = Log::unset_field &redef;
|
||||
}
|
||||
|
||||
|
|
|
@ -23,7 +23,8 @@ redef Cluster::worker2manager_events += /Notice::cluster_notice/;
|
|||
@if ( Cluster::local_node_type() != Cluster::MANAGER )
|
||||
event Notice::begin_suppression(n: Notice::Info)
|
||||
{
|
||||
suppressing[n$note, n$identifier] = n;
|
||||
local suppress_until = n$ts + n$suppress_for;
|
||||
suppressing[n$note, n$identifier] = suppress_until;
|
||||
}
|
||||
@endif
|
||||
|
||||
|
|
|
@ -206,6 +206,38 @@ export {
|
|||
## The maximum amount of time a plugin can delay email from being sent.
|
||||
const max_email_delay = 15secs &redef;
|
||||
|
||||
## Contains a portion of :bro:see:`fa_file` that's also contained in
|
||||
## :bro:see:`Notice::Info`.
|
||||
type FileInfo: record {
|
||||
fuid: string; ##< File UID.
|
||||
desc: string; ##< File description from e.g.
|
||||
##< :bro:see:`Files::describe`.
|
||||
mime: string &optional; ##< Strongest mime type match for file.
|
||||
cid: conn_id &optional; ##< Connection tuple over which file is sent.
|
||||
cuid: string &optional; ##< Connection UID over which file is sent.
|
||||
};
|
||||
|
||||
## Creates a record containing a subset of a full :bro:see:`fa_file` record.
|
||||
##
|
||||
## f: record containing metadata about a file.
|
||||
##
|
||||
## Returns: record containing a subset of fields copied from *f*.
|
||||
global create_file_info: function(f: fa_file): Notice::FileInfo;
|
||||
|
||||
## Populates file-related fields in a notice info record.
|
||||
##
|
||||
## f: record containing metadata about a file.
|
||||
##
|
||||
## n: a notice record that needs file-related fields populated.
|
||||
global populate_file_info: function(f: fa_file, n: Notice::Info);
|
||||
|
||||
## Populates file-related fields in a notice info record.
|
||||
##
|
||||
## fi: record containing metadata about a file.
|
||||
##
|
||||
## n: a notice record that needs file-related fields populated.
|
||||
global populate_file_info2: function(fi: Notice::FileInfo, n: Notice::Info);
|
||||
|
||||
## A log postprocessing function that implements emailing the contents
|
||||
## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`.
|
||||
## The rotated log is removed upon being sent.
|
||||
|
@ -242,12 +274,6 @@ export {
|
|||
## being suppressed.
|
||||
global suppressed: event(n: Notice::Info);
|
||||
|
||||
## This event is generated when a notice stops being suppressed.
|
||||
##
|
||||
## n: The record containing notice data regarding the notice type
|
||||
## that was being suppressed.
|
||||
global end_suppression: event(n: Notice::Info);
|
||||
|
||||
## Call this function to send a notice in an email. It is already used
|
||||
## by default with the built in :bro:enum:`Notice::ACTION_EMAIL` and
|
||||
## :bro:enum:`Notice::ACTION_PAGE` actions.
|
||||
|
@ -285,27 +311,22 @@ export {
|
|||
}
|
||||
|
||||
# This is used as a hack to implement per-item expiration intervals.
|
||||
function per_notice_suppression_interval(t: table[Notice::Type, string] of Notice::Info, idx: any): interval
|
||||
function per_notice_suppression_interval(t: table[Notice::Type, string] of time, idx: any): interval
|
||||
{
|
||||
local n: Notice::Type;
|
||||
local s: string;
|
||||
[n,s] = idx;
|
||||
|
||||
local suppress_time = t[n,s]$suppress_for - (network_time() - t[n,s]$ts);
|
||||
local suppress_time = t[n,s] - network_time();
|
||||
if ( suppress_time < 0secs )
|
||||
suppress_time = 0secs;
|
||||
|
||||
# If there is no more suppression time left, the notice needs to be sent
|
||||
# to the end_suppression event.
|
||||
if ( suppress_time == 0secs )
|
||||
event Notice::end_suppression(t[n,s]);
|
||||
|
||||
return suppress_time;
|
||||
}
|
||||
|
||||
# This is the internally maintained notice suppression table. It's
|
||||
# indexed on the Notice::Type and the $identifier field from the notice.
|
||||
global suppressing: table[Type, string] of Notice::Info = {}
|
||||
global suppressing: table[Type, string] of time = {}
|
||||
&create_expire=0secs
|
||||
&expire_func=per_notice_suppression_interval;
|
||||
|
||||
|
@ -400,11 +421,22 @@ function email_notice_to(n: Notice::Info, dest: string, extend: bool)
|
|||
|
||||
# First off, finish the headers and include the human readable messages
|
||||
# then leave a blank line after the message.
|
||||
email_text = string_cat(email_text, "\nMessage: ", n$msg);
|
||||
if ( n?$sub )
|
||||
email_text = string_cat(email_text, "\nSub-message: ", n$sub);
|
||||
email_text = string_cat(email_text, "\nMessage: ", n$msg, "\n");
|
||||
|
||||
email_text = string_cat(email_text, "\n\n");
|
||||
if ( n?$sub )
|
||||
email_text = string_cat(email_text, "Sub-message: ", n$sub, "\n");
|
||||
|
||||
email_text = string_cat(email_text, "\n");
|
||||
|
||||
# Add information about the file if it exists.
|
||||
if ( n?$file_desc )
|
||||
email_text = string_cat(email_text, "File Description: ", n$file_desc, "\n");
|
||||
|
||||
if ( n?$file_mime_type )
|
||||
email_text = string_cat(email_text, "File MIME Type: ", n$file_mime_type, "\n");
|
||||
|
||||
if ( n?$file_desc || n?$file_mime_type )
|
||||
email_text = string_cat(email_text, "\n");
|
||||
|
||||
# Next, add information about the connection if it exists.
|
||||
if ( n?$id )
|
||||
|
@ -467,7 +499,8 @@ hook Notice::notice(n: Notice::Info) &priority=-5
|
|||
[n$note, n$identifier] !in suppressing &&
|
||||
n$suppress_for != 0secs )
|
||||
{
|
||||
suppressing[n$note, n$identifier] = n;
|
||||
local suppress_until = n$ts + n$suppress_for;
|
||||
suppressing[n$note, n$identifier] = suppress_until;
|
||||
event Notice::begin_suppression(n);
|
||||
}
|
||||
}
|
||||
|
@ -492,6 +525,42 @@ function execute_with_notice(cmd: string, n: Notice::Info)
|
|||
#system_env(cmd, tags);
|
||||
}
|
||||
|
||||
function create_file_info(f: fa_file): Notice::FileInfo
|
||||
{
|
||||
local fi: Notice::FileInfo = Notice::FileInfo($fuid = f$id,
|
||||
$desc = Files::describe(f));
|
||||
|
||||
if ( f?$mime_type )
|
||||
fi$mime = f$mime_type;
|
||||
|
||||
if ( f?$conns && |f$conns| == 1 )
|
||||
for ( id in f$conns )
|
||||
{
|
||||
fi$cid = id;
|
||||
fi$cuid = f$conns[id]$uid;
|
||||
}
|
||||
|
||||
return fi;
|
||||
}
|
||||
|
||||
function populate_file_info(f: fa_file, n: Notice::Info)
|
||||
{
|
||||
populate_file_info2(create_file_info(f), n);
|
||||
}
|
||||
|
||||
function populate_file_info2(fi: Notice::FileInfo, n: Notice::Info)
|
||||
{
|
||||
if ( ! n?$fuid )
|
||||
n$fuid = fi$fuid;
|
||||
|
||||
if ( ! n?$file_mime_type && fi?$mime )
|
||||
n$file_mime_type = fi$mime;
|
||||
|
||||
n$file_desc = fi$desc;
|
||||
n$id = fi$cid;
|
||||
n$uid = fi$cuid;
|
||||
}
|
||||
|
||||
# This is run synchronously as a function before all of the other
|
||||
# notice related functions and events. It also modifies the
|
||||
# :bro:type:`Notice::Info` record in place.
|
||||
|
@ -502,21 +571,7 @@ function apply_policy(n: Notice::Info)
|
|||
n$ts = network_time();
|
||||
|
||||
if ( n?$f )
|
||||
{
|
||||
if ( ! n?$fuid )
|
||||
n$fuid = n$f$id;
|
||||
|
||||
if ( ! n?$file_mime_type && n$f?$mime_type )
|
||||
n$file_mime_type = n$f$mime_type;
|
||||
|
||||
n$file_desc = Files::describe(n$f);
|
||||
|
||||
if ( n$f?$conns && |n$f$conns| == 1 )
|
||||
{
|
||||
for ( id in n$f$conns )
|
||||
n$conn = n$f$conns[id];
|
||||
}
|
||||
}
|
||||
populate_file_info(n$f, n);
|
||||
|
||||
if ( n?$conn )
|
||||
{
|
||||
|
|
|
@ -185,6 +185,7 @@ export {
|
|||
["RPC_underflow"] = ACTION_LOG,
|
||||
["RST_storm"] = ACTION_LOG,
|
||||
["RST_with_data"] = ACTION_LOG,
|
||||
["SSL_many_server_names"] = ACTION_LOG,
|
||||
["simultaneous_open"] = ACTION_LOG_PER_CONN,
|
||||
["spontaneous_FIN"] = ACTION_IGNORE,
|
||||
["spontaneous_RST"] = ACTION_IGNORE,
|
||||
|
|
|
@ -70,6 +70,9 @@ export {
|
|||
## The network time at which a signature matching type of event
|
||||
## to be logged has occurred.
|
||||
ts: time &log;
|
||||
## A unique identifier of the connection which triggered the
|
||||
## signature match event
|
||||
uid: string &log &optional;
|
||||
## The host which triggered the signature match event.
|
||||
src_addr: addr &log &optional;
|
||||
## The host port on which the signature-matching activity
|
||||
|
@ -192,6 +195,7 @@ event signature_match(state: signature_state, msg: string, data: string)
|
|||
{
|
||||
local info: Info = [$ts=network_time(),
|
||||
$note=Sensitive_Signature,
|
||||
$uid=state$conn$uid,
|
||||
$src_addr=src_addr,
|
||||
$src_port=src_port,
|
||||
$dst_addr=dst_addr,
|
||||
|
|
|
@ -287,6 +287,13 @@ function parse_mozilla(unparsed_version: string): Description
|
|||
if ( 2 in parts )
|
||||
v = parse(parts[2])$version;
|
||||
}
|
||||
else if ( / Java\/[0-9]\./ in unparsed_version )
|
||||
{
|
||||
software_name = "Java";
|
||||
parts = split_all(unparsed_version, /Java\/[0-9\._]*/);
|
||||
if ( 2 in parts )
|
||||
v = parse(parts[2])$version;
|
||||
}
|
||||
|
||||
return [$version=v, $unparsed_version=unparsed_version, $name=software_name];
|
||||
}
|
||||
|
|
|
@ -28,10 +28,6 @@ export {
|
|||
## values for a sumstat.
|
||||
global cluster_ss_request: event(uid: string, ss_name: string, cleanup: bool);
|
||||
|
||||
# Event sent by nodes that are collecting sumstats after receiving a
|
||||
# request for the sumstat from the manager.
|
||||
#global cluster_ss_response: event(uid: string, ss_name: string, data: ResultTable, done: bool, cleanup: bool);
|
||||
|
||||
## This event is sent by the manager in a cluster to initiate the
|
||||
## collection of a single key value from a sumstat. It's typically used
|
||||
## to get intermediate updates before the break interval triggers to
|
||||
|
@ -62,7 +58,7 @@ export {
|
|||
# Add events to the cluster framework to make this work.
|
||||
redef Cluster::manager2worker_events += /SumStats::cluster_(ss_request|get_result|threshold_crossed)/;
|
||||
redef Cluster::manager2worker_events += /SumStats::(get_a_key)/;
|
||||
redef Cluster::worker2manager_events += /SumStats::cluster_(ss_response|send_result|key_intermediate_response)/;
|
||||
redef Cluster::worker2manager_events += /SumStats::cluster_(send_result|key_intermediate_response)/;
|
||||
redef Cluster::worker2manager_events += /SumStats::(send_a_key|send_no_key)/;
|
||||
|
||||
@if ( Cluster::local_node_type() != Cluster::MANAGER )
|
||||
|
@ -74,7 +70,7 @@ global recent_global_view_keys: table[string, Key] of count &create_expire=1min
|
|||
|
||||
# Result tables indexed on a uid that are currently being sent to the
|
||||
# manager.
|
||||
global sending_results: table[string] of ResultTable = table() &create_expire=1min;
|
||||
global sending_results: table[string] of ResultTable = table() &read_expire=1min;
|
||||
|
||||
# This is done on all non-manager node types in the event that a sumstat is
|
||||
# being collected somewhere other than a worker.
|
||||
|
@ -195,6 +191,19 @@ event SumStats::cluster_threshold_crossed(ss_name: string, key: SumStats::Key, t
|
|||
threshold_tracker[ss_name][key] = thold_index;
|
||||
}
|
||||
|
||||
# request_key() is a no-op on the workers. It should only be called by the
# manager, but since the same scripts usually run on both the workers and
# the manager, workers may end up calling it too, so we simply ignore it here.
#
# There is a small chance that people will try calling it from events that
# are raised only on the workers. That does not work at the moment, and we
# cannot report an error because we cannot distinguish it from the "script
# runs everywhere" case; callers will just notice that they get no results.
# Not entirely pretty, sorry :(
|
||||
function request_key(ss_name: string, key: Key): Result
|
||||
{
|
||||
return Result();
|
||||
}
|
||||
|
||||
@endif
|
||||
|
||||
|
||||
|
@ -203,7 +212,7 @@ event SumStats::cluster_threshold_crossed(ss_name: string, key: SumStats::Key, t
|
|||
# This variable is maintained by manager nodes as they collect and aggregate
|
||||
# results.
|
||||
# Index on a uid.
|
||||
global stats_keys: table[string] of set[Key] &create_expire=1min
|
||||
global stats_keys: table[string] of set[Key] &read_expire=1min
|
||||
&expire_func=function(s: table[string] of set[Key], idx: string): interval
|
||||
{
|
||||
Reporter::warning(fmt("SumStat key request for the %s SumStat uid took longer than 1 minute and was automatically cancelled.", idx));
|
||||
|
@ -215,17 +224,16 @@ global stats_keys: table[string] of set[Key] &create_expire=1min
|
|||
# matches the number of peer nodes that results should be coming from, the
|
||||
# result is written out and deleted from here.
|
||||
# Indexed on a uid.
|
||||
# TODO: add an &expire_func in case not all results are received.
|
||||
global done_with: table[string] of count &create_expire=1min &default=0;
|
||||
global done_with: table[string] of count &read_expire=1min &default=0;
|
||||
|
||||
# This variable is maintained by managers to track intermediate responses as
|
||||
# they are getting a global view for a certain key.
|
||||
# Indexed on a uid.
|
||||
global key_requests: table[string] of Result &create_expire=1min;
|
||||
global key_requests: table[string] of Result &read_expire=1min;
|
||||
|
||||
# Store uids for dynamic requests here to avoid cleanup on the uid.
|
||||
# (This needs to be done differently!)
|
||||
global dynamic_requests: set[string] &create_expire=1min;
|
||||
global dynamic_requests: set[string] &read_expire=1min;
|
||||
|
||||
# This variable is maintained by managers to prevent overwhelming communication due
|
||||
# to too many intermediate updates. Each sumstat is tracked separately so that
|
||||
|
|
|
@ -2,23 +2,59 @@
|
|||
|
||||
module SumStats;
|
||||
|
||||
event SumStats::process_epoch_result(ss: SumStat, now: time, data: ResultTable)
|
||||
{
|
||||
# TODO: is this the right processing group size?
|
||||
local i = 50;
|
||||
for ( key in data )
|
||||
{
|
||||
ss$epoch_result(now, key, data[key]);
|
||||
delete data[key];
|
||||
|
||||
if ( |data| == 0 )
|
||||
{
|
||||
if ( ss?$epoch_finished )
|
||||
ss$epoch_finished(now);
|
||||
|
||||
# Now that no data is left we can finish.
|
||||
return;
|
||||
}
|
||||
|
||||
i = i-1;
|
||||
if ( i == 0 )
|
||||
{
|
||||
# TODO: is this the right interval?
|
||||
schedule 0.01 secs { process_epoch_result(ss, now, data) };
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
event SumStats::finish_epoch(ss: SumStat)
|
||||
{
|
||||
if ( ss$name in result_store )
|
||||
{
|
||||
local now = network_time();
|
||||
|
||||
if ( ss?$epoch_result )
|
||||
{
|
||||
local data = result_store[ss$name];
|
||||
# TODO: don't block here.
|
||||
local now = network_time();
|
||||
if ( bro_is_terminating() )
|
||||
{
|
||||
for ( key in data )
|
||||
ss$epoch_result(now, key, data[key]);
|
||||
}
|
||||
|
||||
if ( ss?$epoch_finished )
|
||||
ss$epoch_finished(now);
|
||||
}
|
||||
else
|
||||
{
|
||||
event SumStats::process_epoch_result(ss, now, data);
|
||||
}
|
||||
}
|
||||
|
||||
# We can reset here because we know that the reference
|
||||
# to the data will be maintained by the process_epoch_result
|
||||
# event.
|
||||
reset(ss);
|
||||
}
|
||||
|
||||
|
|
|
@ -39,6 +39,14 @@ type count_set: set[count];
|
|||
## directly and then remove this alias.
|
||||
type index_vec: vector of count;
|
||||
|
||||
## A vector of any, used by some builtin functions to store a list of varying
|
||||
## types.
|
||||
##
|
||||
## .. todo:: We need this type definition only for declaring builtin functions
|
||||
## via ``bifcl``. We should extend ``bifcl`` to understand composite types
|
||||
## directly and then remove this alias.
|
||||
type any_vec: vector of any;
|
||||
|
||||
## A vector of strings.
|
||||
##
|
||||
## .. todo:: We need this type definition only for declaring builtin functions
|
||||
|
@ -46,6 +54,13 @@ type index_vec: vector of count;
|
|||
## directly and then remove this alias.
|
||||
type string_vec: vector of string;
|
||||
|
||||
## A vector of x509 opaques.
|
||||
##
|
||||
## .. todo:: We need this type definition only for declaring builtin functions
|
||||
## via ``bifcl``. We should extend ``bifcl`` to understand composite types
|
||||
## directly and then remove this alias.
|
||||
type x509_opaque_vector: vector of opaque of x509;
|
||||
|
||||
## A vector of addresses.
|
||||
##
|
||||
## .. todo:: We need this type definition only for declaring builtin functions
|
||||
|
@ -60,6 +75,23 @@ type addr_vec: vector of addr;
|
|||
## directly and then remove this alias.
|
||||
type table_string_of_string: table[string] of string;
|
||||
|
||||
## A structure indicating a MIME type and strength of a match against
|
||||
## file magic signatures.
|
||||
##
|
||||
## :bro:see:`file_magic`
|
||||
type mime_match: record {
|
||||
strength: int; ##< How strongly the signature matched. Used for
|
||||
##< prioritization when multiple file magic signatures
|
||||
##< match.
|
||||
mime: string; ##< The MIME type of the file magic signature match.
|
||||
};
|
||||
|
||||
## A vector of file magic signature matches, ordered by strength of
|
||||
## the signature, strongest first.
|
||||
##
|
||||
## :bro:see:`file_magic`
|
||||
type mime_matches: vector of mime_match;
|
||||
|
||||
## A connection's transport-layer protocol. Note that Bro uses the term
|
||||
## "connection" broadly, using flow semantics for ICMP and UDP.
|
||||
type transport_proto: enum {
|
||||
|
@ -372,10 +404,15 @@ type fa_file: record {
|
|||
## This is also the buffer that's used for file/mime type detection.
|
||||
bof_buffer: string &optional;
|
||||
|
||||
## A mime type provided by libmagic against the *bof_buffer*, or
|
||||
## in the cases where no buffering of the beginning of file occurs,
|
||||
## an initial guess of the mime type based on the first data seen.
|
||||
## The mime type of the strongest file magic signature matches against
|
||||
## the data chunk in *bof_buffer*, or in the cases where no buffering
|
||||
## of the beginning of file occurs, an initial guess of the mime type
|
||||
## based on the first data seen.
|
||||
mime_type: string &optional;
|
||||
|
||||
## All mime types that matched file magic signatures against the data
|
||||
## chunk in *bof_buffer*, in order of their strength value.
|
||||
mime_types: mime_matches &optional;
|
||||
} &redef;
|
||||
|
||||
## Fields of a SYN packet.
|
||||
|
@ -1029,13 +1066,6 @@ const rpc_timeout = 24 sec &redef;
|
|||
## means "forever", which resists evasion, but can lead to state accrual.
|
||||
const frag_timeout = 0.0 sec &redef;
|
||||
|
||||
## Time window for reordering packets. This is used for dealing with timestamp
|
||||
## discrepancy between multiple packet sources.
|
||||
##
|
||||
## .. note:: Setting this can have a major performance impact as now packets
|
||||
## need to be potentially copied and buffered.
|
||||
const packet_sort_window = 0 usecs &redef;
|
||||
|
||||
## If positive, indicates the encapsulation header size that should
|
||||
## be skipped. This applies to all packets.
|
||||
const encap_hdr_size = 0 &redef;
|
||||
|
@ -2421,18 +2451,6 @@ global dns_skip_all_addl = T &redef;
|
|||
## traffic and do not process it. Set to 0 to turn off this functionality.
|
||||
global dns_max_queries = 5;
|
||||
|
||||
## An X509 certificate.
|
||||
##
|
||||
## .. bro:see:: x509_certificate
|
||||
type X509: record {
|
||||
version: count; ##< Version number.
|
||||
serial: string; ##< Serial number.
|
||||
subject: string; ##< Subject.
|
||||
issuer: string; ##< Issuer.
|
||||
not_valid_before: time; ##< Timestamp before when certificate is not valid.
|
||||
not_valid_after: time; ##< Timestamp after when certificate is not valid.
|
||||
};
|
||||
|
||||
## HTTP session statistics.
|
||||
##
|
||||
## .. bro:see:: http_stats
|
||||
|
@ -2754,6 +2772,55 @@ export {
|
|||
};
|
||||
}
|
||||
|
||||
module X509;
|
||||
export {
|
||||
type Certificate: record {
|
||||
version: count; ##< Version number.
|
||||
serial: string; ##< Serial number.
|
||||
subject: string; ##< Subject.
|
||||
issuer: string; ##< Issuer.
|
||||
not_valid_before: time; ##< Timestamp before when certificate is not valid.
|
||||
not_valid_after: time; ##< Timestamp after when certificate is not valid.
|
||||
key_alg: string; ##< Name of the key algorithm
|
||||
sig_alg: string; ##< Name of the signature algorithm
|
||||
key_type: string &optional; ##< Key type, if key parseable by openssl (either rsa, dsa or ec)
|
||||
key_length: count &optional; ##< Key length in bits
|
||||
exponent: string &optional; ##< Exponent, if RSA-certificate
|
||||
curve: string &optional; ##< Curve, if EC-certificate
|
||||
} &log;
|
||||
|
||||
type Extension: record {
|
||||
name: string; ##< Long name of extension. oid if name not known
|
||||
short_name: string &optional; ##< Short name of extension if known
|
||||
oid: string; ##< Oid of extension
|
||||
critical: bool; ##< True if extension is critical
|
||||
value: string; ##< Extension content parsed to string for known extensions. Raw data otherwise.
|
||||
};
|
||||
|
||||
type BasicConstraints: record {
|
||||
ca: bool; ##< CA flag set?
|
||||
path_len: count &optional; ##< Maximum path length
|
||||
} &log;
|
||||
|
||||
type SubjectAlternativeName: record {
|
||||
dns: string_vec &optional &log; ##< List of DNS entries in SAN
|
||||
uri: string_vec &optional &log; ##< List of URI entries in SAN
|
||||
email: string_vec &optional &log; ##< List of email entries in SAN
|
||||
ip: addr_vec &optional &log; ##< List of IP entries in SAN
|
||||
other_fields: bool; ##< True if the certificate contained other name fields that were not recognized or parsed
|
||||
};
|
||||
|
||||
## Result of an X509 certificate chain verification
|
||||
type Result: record {
|
||||
## OpenSSL result code
|
||||
result: int;
|
||||
## Result as string
|
||||
result_string: string;
|
||||
## References to the final certificate chain, if verification successful. End-host certificate is first.
|
||||
chain_certs: vector of opaque of x509 &optional;
|
||||
};
|
||||
}
|
||||
|
||||
module SOCKS;
|
||||
export {
|
||||
## This record is for a SOCKS client or server to provide either a
|
||||
|
@ -2763,6 +2830,148 @@ export {
|
|||
name: string &optional;
|
||||
} &log;
|
||||
}
|
||||
|
||||
module RADIUS;
|
||||
|
||||
export {
|
||||
type RADIUS::AttributeList: vector of string;
|
||||
type RADIUS::Attributes: table[count] of RADIUS::AttributeList;
|
||||
|
||||
type RADIUS::Message: record {
|
||||
## The type of message (Access-Request, Access-Accept, etc.).
|
||||
code : count;
|
||||
## The transaction ID.
|
||||
trans_id : count;
|
||||
## The "authenticator" string.
|
||||
authenticator : string;
|
||||
## Any attributes.
|
||||
attributes : RADIUS::Attributes &optional;
|
||||
};
|
||||
}
|
||||
module GLOBAL;
|
||||
|
||||
@load base/bif/plugins/Bro_SNMP.types.bif
|
||||
|
||||
module SNMP;
|
||||
export {
|
||||
## The top-level message data structure of an SNMPv1 datagram, not
|
||||
## including the PDU data. See :rfc:`1157`.
|
||||
type SNMP::HeaderV1: record {
|
||||
community: string;
|
||||
};
|
||||
|
||||
## The top-level message data structure of an SNMPv2 datagram, not
|
||||
## including the PDU data. See :rfc:`1901`.
|
||||
type SNMP::HeaderV2: record {
|
||||
community: string;
|
||||
};
|
||||
|
||||
## The ``ScopedPduData`` data structure of an SNMPv3 datagram, not
|
||||
## including the PDU data (i.e. just the "context" fields).
|
||||
## See :rfc:`3412`.
|
||||
type SNMP::ScopedPDU_Context: record {
|
||||
engine_id: string;
|
||||
name: string;
|
||||
};
|
||||
|
||||
## The top-level message data structure of an SNMPv3 datagram, not
|
||||
## including the PDU data. See :rfc:`3412`.
|
||||
type SNMP::HeaderV3: record {
|
||||
id: count;
|
||||
max_size: count;
|
||||
flags: count;
|
||||
auth_flag: bool;
|
||||
priv_flag: bool;
|
||||
reportable_flag: bool;
|
||||
security_model: count;
|
||||
security_params: string;
|
||||
pdu_context: SNMP::ScopedPDU_Context &optional;
|
||||
};
|
||||
|
||||
## A generic SNMP header data structure that may include data from
|
||||
## any version of SNMP. The value of the ``version`` field
|
||||
## determines what header field is initialized.
|
||||
type SNMP::Header: record {
|
||||
version: count;
|
||||
v1: SNMP::HeaderV1 &optional; ##< Set when ``version`` is 0.
|
||||
v2: SNMP::HeaderV2 &optional; ##< Set when ``version`` is 1.
|
||||
v3: SNMP::HeaderV3 &optional; ##< Set when ``version`` is 3.
|
||||
};
|
||||
|
||||
## A generic SNMP object value, that may include any of the
|
||||
## valid ``ObjectSyntax`` values from :rfc:`1155` or :rfc:`3416`.
|
||||
## The value is decoded whenever possible and assigned to
|
||||
## the appropriate field, which can be determined from the value
|
||||
## of the ``tag`` field. For tags that can't be mapped to an
|
||||
## appropriate type, the ``octets`` field holds the BER encoded
|
||||
## ASN.1 content if there is any (though, ``octets`` is may also
|
||||
## be used for other tags such as OCTET STRINGS or Opaque). Null
|
||||
## values will only have their corresponding tag value set.
|
||||
type SNMP::ObjectValue: record {
|
||||
tag: count;
|
||||
oid: string &optional;
|
||||
signed: int &optional;
|
||||
unsigned: count &optional;
|
||||
address: addr &optional;
|
||||
octets: string &optional;
|
||||
};
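# Illustrative only (not part of this file): a script can render an
# ObjectValue by checking which optional field was decoded, e.g.:
#
#     function render_value(v: SNMP::ObjectValue): string
#         {
#         if ( v?$oid )      return v$oid;
#         if ( v?$signed )   return fmt("%d", v$signed);
#         if ( v?$unsigned ) return fmt("%d", v$unsigned);
#         if ( v?$address )  return fmt("%s", v$address);
#         if ( v?$octets )   return v$octets;
#         return "<null>";
#         }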
|
||||
|
||||
# These aren't an enum because it's easier to type fields as count.
|
||||
# That way we don't have to deal with type conversion, and it doesn't
# imply that these are the only valid tag values (it's just the set
# of known tags).
|
||||
const SNMP::OBJ_INTEGER_TAG : count = 0x02; ##< Signed 64-bit integer.
|
||||
const SNMP::OBJ_OCTETSTRING_TAG : count = 0x04; ##< An octet string.
|
||||
const SNMP::OBJ_UNSPECIFIED_TAG : count = 0x05; ##< A NULL value.
|
||||
const SNMP::OBJ_OID_TAG : count = 0x06; ##< An Object Identifier.
|
||||
const SNMP::OBJ_IPADDRESS_TAG : count = 0x40; ##< An IP address.
|
||||
const SNMP::OBJ_COUNTER32_TAG : count = 0x41; ##< Unsigned 32-bit integer.
|
||||
const SNMP::OBJ_UNSIGNED32_TAG : count = 0x42; ##< Unsigned 32-bit integer.
|
||||
const SNMP::OBJ_TIMETICKS_TAG : count = 0x43; ##< Unsigned 32-bit integer.
|
||||
const SNMP::OBJ_OPAQUE_TAG : count = 0x44; ##< An octet string.
|
||||
const SNMP::OBJ_COUNTER64_TAG : count = 0x46; ##< Unsigned 64-bit integer.
|
||||
const SNMP::OBJ_NOSUCHOBJECT_TAG : count = 0x80; ##< A NULL value.
|
||||
const SNMP::OBJ_NOSUCHINSTANCE_TAG: count = 0x81; ##< A NULL value.
|
||||
const SNMP::OBJ_ENDOFMIBVIEW_TAG : count = 0x82; ##< A NULL value.
|
||||
|
||||
## The ``VarBind`` data structure from either :rfc:`1157` or
|
||||
## :rfc:`3416`, which maps an Object Identifier to a value.
|
||||
type SNMP::Binding: record {
|
||||
oid: string;
|
||||
value: SNMP::ObjectValue;
|
||||
};
|
||||
|
||||
## A ``VarBindList`` data structure from either :rfc:`1157` or :rfc:`3416`.
|
||||
## A sequences of :bro:see:`SNMP::Binding`, which maps an OIDs to values.
|
||||
type SNMP::Bindings: vector of SNMP::Binding;
|
||||
|
||||
## A ``PDU`` data structure from either :rfc:`1157` or :rfc:`3416`.
|
||||
type SNMP::PDU: record {
|
||||
request_id: int;
|
||||
error_status: int;
|
||||
error_index: int;
|
||||
bindings: SNMP::Bindings;
|
||||
};
|
||||
|
||||
## A ``Trap-PDU`` data structure from :rfc:`1157`.
|
||||
type SNMP::TrapPDU: record {
|
||||
enterprise: string;
|
||||
agent: addr;
|
||||
generic_trap: int;
|
||||
specific_trap: int;
|
||||
time_stamp: count;
|
||||
bindings: SNMP::Bindings;
|
||||
};
|
||||
|
||||
## A ``BulkPDU`` data structure from :rfc:`3416`.
|
||||
type SNMP::BulkPDU: record {
|
||||
request_id: int;
|
||||
non_repeaters: count;
|
||||
max_repititions: count;
|
||||
bindings: SNMP::Bindings;
|
||||
};
|
||||
}
|
||||
|
||||
module GLOBAL;
|
||||
|
||||
@load base/bif/event.bif
|
||||
|
@ -2850,6 +3059,12 @@ global load_sample_freq = 20 &redef;
|
|||
## .. bro:see:: gap_report
|
||||
const gap_report_freq = 1.0 sec &redef;
|
||||
|
||||
## Whether to attempt to automatically detect SYN/FIN/RST-filtered trace
|
||||
## and not report missing segments for such connections.
|
||||
## If this is enabled, then missing data at the end of connections may not
|
||||
## be reported via :bro:see:`content_gap`.
|
||||
const detect_filtered_trace = F &redef;
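# Illustrative only (not part of this file): to suppress gap reporting for
# such pre-filtered traces, one can put the following in local.bro:
#
#     redef detect_filtered_trace = T;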
|
||||
|
||||
## Whether we want :bro:see:`content_gap` and :bro:see:`gap_report` for partial
|
||||
## connections. A connection is partial if it is missing a full handshake. Note
|
||||
## that gap reports for partial connections might not be reliable.
|
||||
|
@ -3040,6 +3255,24 @@ const record_all_packets = F &redef;
|
|||
## .. bro:see:: conn_stats
|
||||
const ignore_keep_alive_rexmit = F &redef;
|
||||
|
||||
module JSON;
|
||||
export {
|
||||
type TimestampFormat: enum {
|
||||
## Timestamps will be formatted as UNIX epoch doubles. This is
|
||||
## the format that Bro typically writes out timestamps.
|
||||
TS_EPOCH,
|
||||
## Timestamps will be formatted as unsigned integers that
|
||||
## represent the number of milliseconds since the UNIX
|
||||
## epoch.
|
||||
TS_MILLIS,
|
||||
## Timestamps will be formatted in the ISO8601 DateTime format.
|
||||
## Subseconds are included as well; most consumers that parse
## ISO8601 cope with that.
|
||||
TS_ISO8601,
|
||||
};
|
||||
}
|
||||
|
||||
module Tunnel;
|
||||
export {
|
||||
## The maximum depth of a tunnel to decapsulate until giving up.
|
||||
|
@ -3058,6 +3291,9 @@ export {
|
|||
## Toggle whether to do GTPv1 decapsulation.
|
||||
const enable_gtpv1 = T &redef;
|
||||
|
||||
## Toggle whether to do GRE decapsulation.
|
||||
const enable_gre = T &redef;
|
||||
|
||||
## With this option set, the Teredo analysis will first check to see if
|
||||
## other protocol analyzers have confirmed that they think they're
|
||||
## parsing the right protocol and only continue with Teredo tunnel
|
||||
|
@ -3083,7 +3319,8 @@ export {
|
|||
## may work better.
|
||||
const delay_gtp_confirmation = F &redef;
|
||||
|
||||
## How often to cleanup internal state for inactive IP tunnels.
|
||||
## How often to cleanup internal state for inactive IP tunnels
|
||||
## (includes GRE tunnels).
|
||||
const ip_tunnel_timeout = 24hrs &redef;
|
||||
} # end export
|
||||
module GLOBAL;
|
||||
|
|
|
@ -47,6 +47,8 @@
|
|||
@load base/protocols/irc
|
||||
@load base/protocols/modbus
|
||||
@load base/protocols/pop3
|
||||
@load base/protocols/radius
|
||||
@load base/protocols/snmp
|
||||
@load base/protocols/smtp
|
||||
@load base/protocols/socks
|
||||
@load base/protocols/ssh
|
||||
|
@ -57,6 +59,7 @@
|
|||
@load base/files/hash
|
||||
@load base/files/extract
|
||||
@load base/files/unified2
|
||||
|
||||
@load base/files/x509
|
||||
|
||||
@load base/misc/find-checksum-offloading
|
||||
@load base/misc/find-filtered-trace
|
||||
|
|
scripts/base/misc/find-filtered-trace.bro (new file, 49 lines)
|
@ -0,0 +1,49 @@
|
|||
##! Discovers trace files that contain TCP traffic consisting only of
|
||||
##! control packets (e.g. it's been filtered to contain only SYN/FIN/RST
|
||||
##! packets and no content). On finding such a trace, a warning is
|
||||
##! emitted that suggests toggling the :bro:see:`detect_filtered_trace`
|
||||
##! option may be desired if the user does not want Bro to report
|
||||
##! missing TCP segments.
|
||||
|
||||
module FilteredTraceDetection;
|
||||
|
||||
export {
|
||||
|
||||
## Flag to enable filtered trace file detection and warning message.
|
||||
global enable: bool = T &redef;
|
||||
}
|
||||
|
||||
global saw_tcp_conn_with_data: bool = F;
|
||||
global saw_a_tcp_conn: bool = F;
|
||||
|
||||
event connection_state_remove(c: connection)
|
||||
{
|
||||
if ( ! reading_traces() )
|
||||
return;
|
||||
|
||||
if ( ! enable )
|
||||
return;
|
||||
|
||||
if ( saw_tcp_conn_with_data )
|
||||
return;
|
||||
|
||||
if ( ! is_tcp_port(c$id$orig_p) )
|
||||
return;
|
||||
|
||||
saw_a_tcp_conn = T;
|
||||
|
||||
if ( /[Dd]/ in c$history )
|
||||
saw_tcp_conn_with_data = T;
|
||||
}
|
||||
|
||||
event bro_done()
|
||||
{
|
||||
if ( ! enable )
|
||||
return;
|
||||
|
||||
if ( ! saw_a_tcp_conn )
|
||||
return;
|
||||
|
||||
if ( ! saw_tcp_conn_with_data )
|
||||
Reporter::warning("The analyzed trace file was determined to contain only TCP control packets, which may indicate it's been pre-filtered. By default, Bro reports the missing segments for this type of trace, but the 'detect_filtered_trace' option may be toggled if that's not desired.");
|
||||
}
|
|
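A minimal usage sketch for the new script (assumed local site policy, not part of this commit). Both options appear above or in the script's own doc comment; only the choice of values is illustrative.

# Silence the new warning entirely.
redef FilteredTraceDetection::enable = F;

# Or acknowledge the filtering so Bro stops reporting the missing segments.
redef detect_filtered_trace = T;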
@ -63,15 +63,17 @@ export {
|
|||
## The DNS query was rejected by the server.
|
||||
rejected: bool &log &default=F;
|
||||
|
||||
## This value indicates if this request/response pair is ready
|
||||
## to be logged.
|
||||
ready: bool &default=F;
|
||||
## The total number of resource records in a reply message's
|
||||
## answer section.
|
||||
total_answers: count &optional;
|
||||
## The total number of resource records in a reply message's
|
||||
## answer, authority, and additional sections.
|
||||
total_replies: count &optional;
|
||||
|
||||
## Whether the full DNS query has been seen.
|
||||
saw_query: bool &default=F;
|
||||
## Whether the full DNS reply has been seen.
|
||||
saw_reply: bool &default=F;
|
||||
};
|
||||
|
||||
## An event that can be handled to access the :bro:type:`DNS::Info`
|
||||
|
@ -90,7 +92,7 @@ export {
|
|||
## ans: The general information of a RR response.
|
||||
##
|
||||
## reply: The specific response information according to RR type/class.
|
||||
global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
|
||||
global do_reply: hook(c: connection, msg: dns_msg, ans: dns_answer, reply: string);
|
||||
|
||||
## A hook that is called whenever a session is being set.
|
||||
## This can be used if additional initialization logic needs to happen
|
||||
|
@ -103,17 +105,37 @@ export {
|
|||
## is_query: Indicator for if this is being called for a query or a response.
|
||||
global set_session: hook(c: connection, msg: dns_msg, is_query: bool);
|
||||
|
||||
## Yields a queue of :bro:see:`DNS::Info` objects for a given
|
||||
## DNS message query/transaction ID.
|
||||
type PendingMessages: table[count] of Queue::Queue;
|
||||
|
||||
## The amount of time that DNS queries or replies for a given
|
||||
## query/transaction ID are allowed to be queued while waiting for
|
||||
## a matching reply or query.
|
||||
const pending_msg_expiry_interval = 2min &redef;
|
||||
|
||||
## Give up trying to match pending DNS queries or replies for a given
|
||||
## query/transaction ID once this number of unmatched queries or replies
|
||||
## is reached (this shouldn't happen unless either the DNS server/resolver
|
||||
## is broken, Bro is not seeing all the DNS traffic, or an AXFR query
|
||||
## response is ongoing).
|
||||
const max_pending_msgs = 50 &redef;
|
||||
|
||||
## Give up trying to match pending DNS queries or replies across all
|
||||
## query/transaction IDs once there is at least one unmatched query or
|
||||
## reply across this number of different query IDs.
|
||||
const max_pending_query_ids = 50 &redef;
|
||||
|
||||
## A record type which tracks the status of DNS queries for a given
|
||||
## :bro:type:`connection`.
|
||||
type State: record {
|
||||
## Indexed by query id, returns Info record corresponding to
|
||||
## query/response which haven't completed yet.
|
||||
pending: table[count] of Queue::Queue;
|
||||
## queries that haven't been matched with a response yet.
|
||||
pending_queries: PendingMessages;
|
||||
|
||||
## This is the list of DNS responses that have completed based
|
||||
## on the number of responses declared and the number received.
|
||||
## The contents of the set are transaction IDs.
|
||||
finished_answers: set[count];
|
||||
## Indexed by query id, returns Info record corresponding to
|
||||
## replies that haven't been matched with a query yet.
|
||||
pending_replies: PendingMessages;
|
||||
};
|
||||
}
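The three new thresholds above are declared &redef, so a site that sees long AXFR transfers or heavy query loss could raise them. A hedged sketch with purely illustrative values:

# Hypothetical local tuning of the unmatched-message limits.
redef DNS::pending_msg_expiry_interval = 5min;
redef DNS::max_pending_msgs = 100;
redef DNS::max_pending_query_ids = 100;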
|
||||
|
||||
|
@ -143,6 +165,66 @@ function new_session(c: connection, trans_id: count): Info
|
|||
return info;
|
||||
}
|
||||
|
||||
function log_unmatched_msgs_queue(q: Queue::Queue)
|
||||
{
|
||||
local infos: vector of Info;
|
||||
Queue::get_vector(q, infos);
|
||||
|
||||
for ( i in infos )
|
||||
{
|
||||
event flow_weird("dns_unmatched_msg",
|
||||
infos[i]$id$orig_h, infos[i]$id$resp_h);
|
||||
Log::write(DNS::LOG, infos[i]);
|
||||
}
|
||||
}
|
||||
|
||||
function log_unmatched_msgs(msgs: PendingMessages)
|
||||
{
|
||||
for ( trans_id in msgs )
|
||||
log_unmatched_msgs_queue(msgs[trans_id]);
|
||||
|
||||
clear_table(msgs);
|
||||
}
|
||||
|
||||
function enqueue_new_msg(msgs: PendingMessages, id: count, msg: Info)
|
||||
{
|
||||
if ( id !in msgs )
|
||||
{
|
||||
if ( |msgs| > max_pending_query_ids )
|
||||
{
|
||||
event flow_weird("dns_unmatched_query_id_quantity",
|
||||
msg$id$orig_h, msg$id$resp_h);
|
||||
# Throw away all unmatched on assumption they'll never be matched.
|
||||
log_unmatched_msgs(msgs);
|
||||
}
|
||||
|
||||
msgs[id] = Queue::init();
|
||||
}
|
||||
else
|
||||
{
|
||||
if ( Queue::len(msgs[id]) > max_pending_msgs )
|
||||
{
|
||||
event flow_weird("dns_unmatched_msg_quantity",
|
||||
msg$id$orig_h, msg$id$resp_h);
|
||||
log_unmatched_msgs_queue(msgs[id]);
|
||||
# Throw away all unmatched on assumption they'll never be matched.
|
||||
msgs[id] = Queue::init();
|
||||
}
|
||||
}
|
||||
|
||||
Queue::put(msgs[id], msg);
|
||||
}
|
||||
|
||||
function pop_msg(msgs: PendingMessages, id: count): Info
|
||||
{
|
||||
local rval: Info = Queue::get(msgs[id]);
|
||||
|
||||
if ( Queue::len(msgs[id]) == 0 )
|
||||
delete msgs[id];
|
||||
|
||||
return rval;
|
||||
}
|
||||
|
||||
hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
|
||||
{
|
||||
if ( ! c?$dns_state )
|
||||
|
@ -151,29 +233,39 @@ hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
|
|||
c$dns_state = state;
|
||||
}
|
||||
|
||||
if ( msg$id !in c$dns_state$pending )
|
||||
c$dns_state$pending[msg$id] = Queue::init();
|
||||
|
||||
local info: Info;
|
||||
# If this is either a query or this is the reply but
|
||||
# no Info records are in the queue (we missed the query?)
|
||||
# we need to create an Info record and put it in the queue.
|
||||
if ( is_query ||
|
||||
Queue::len(c$dns_state$pending[msg$id]) == 0 )
|
||||
{
|
||||
info = new_session(c, msg$id);
|
||||
Queue::put(c$dns_state$pending[msg$id], info);
|
||||
}
|
||||
|
||||
if ( is_query )
|
||||
# If this is a query, assign the newly created info variable
|
||||
# so that the world looks correct to anything else handling
|
||||
# this query.
|
||||
c$dns = info;
|
||||
{
|
||||
if ( msg$id in c$dns_state$pending_replies &&
|
||||
Queue::len(c$dns_state$pending_replies[msg$id]) > 0 )
|
||||
{
|
||||
# Match this DNS query w/ what's at head of pending reply queue.
|
||||
c$dns = pop_msg(c$dns_state$pending_replies, msg$id);
|
||||
}
|
||||
else
|
||||
# Peek at the next item in the queue for this trans_id and
|
||||
# assign it to c$dns since this is a response.
|
||||
c$dns = Queue::peek(c$dns_state$pending[msg$id]);
|
||||
{
|
||||
# Create a new DNS session and put it in the query queue so
|
||||
# we can wait for a matching reply.
|
||||
c$dns = new_session(c, msg$id);
|
||||
enqueue_new_msg(c$dns_state$pending_queries, msg$id, c$dns);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
if ( msg$id in c$dns_state$pending_queries &&
|
||||
Queue::len(c$dns_state$pending_queries[msg$id]) > 0 )
|
||||
{
|
||||
# Match this DNS reply w/ what's at head of pending query queue.
|
||||
c$dns = pop_msg(c$dns_state$pending_queries, msg$id);
|
||||
}
|
||||
else
|
||||
{
|
||||
# Create a new DNS session and put it in the reply queue so
|
||||
# we can wait for a matching query.
|
||||
c$dns = new_session(c, msg$id);
|
||||
event conn_weird("dns_unmatched_reply", c, "");
|
||||
enqueue_new_msg(c$dns_state$pending_replies, msg$id, c$dns);
|
||||
}
|
||||
}
|
||||
|
||||
if ( ! is_query )
|
||||
{
|
||||
|
@ -183,36 +275,36 @@ hook set_session(c: connection, msg: dns_msg, is_query: bool) &priority=5
|
|||
if ( ! c$dns?$total_answers )
|
||||
c$dns$total_answers = msg$num_answers;
|
||||
|
||||
if ( c$dns?$total_replies &&
|
||||
c$dns$total_replies != msg$num_answers + msg$num_addl + msg$num_auth )
|
||||
{
|
||||
event conn_weird("dns_changed_number_of_responses", c,
|
||||
fmt("The declared number of responses changed from %d to %d",
|
||||
c$dns$total_replies,
|
||||
msg$num_answers + msg$num_addl + msg$num_auth));
|
||||
}
|
||||
else
|
||||
{
|
||||
# Store the total number of responses expected from the first reply.
|
||||
if ( ! c$dns?$total_replies )
|
||||
c$dns$total_replies = msg$num_answers + msg$num_addl + msg$num_auth;
|
||||
}
|
||||
|
||||
if ( msg$rcode != 0 && msg$num_queries == 0 )
|
||||
c$dns$rejected = T;
|
||||
}
|
||||
}
|
||||
|
||||
event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) &priority=5
|
||||
{
|
||||
hook set_session(c, msg, is_orig);
|
||||
if ( msg$opcode != 0 )
|
||||
# Currently only standard queries are tracked.
|
||||
return;
|
||||
|
||||
hook set_session(c, msg, ! msg$QR);
|
||||
}
|
||||
|
||||
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
|
||||
hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
|
||||
{
|
||||
if ( msg$opcode != 0 )
|
||||
# Currently only standard queries are tracked.
|
||||
return;
|
||||
|
||||
if ( ! msg$QR )
|
||||
# This is weird: the inquirer must also be providing answers in
|
||||
# the request, which is not what we want to track.
|
||||
return;
|
||||
|
||||
if ( ans$answer_type == DNS_ANS )
|
||||
{
|
||||
if ( ! c?$dns )
|
||||
{
|
||||
event conn_weird("dns_unmatched_reply", c, "");
|
||||
hook set_session(c, msg, F);
|
||||
}
|
||||
c$dns$AA = msg$AA;
|
||||
c$dns$RA = msg$RA;
|
||||
|
||||
|
@ -226,29 +318,35 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
|
|||
c$dns$TTLs = vector();
|
||||
c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
|
||||
}
|
||||
|
||||
if ( c$dns?$answers && c$dns?$total_answers &&
|
||||
|c$dns$answers| == c$dns$total_answers )
|
||||
{
|
||||
# Indicate this request/reply pair is ready to be logged.
|
||||
c$dns$ready = T;
|
||||
}
|
||||
}
|
||||
}
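Since do_reply changed from an event to a hook, user scripts that handled the old event need a hook handler instead. A sketch of the adapted form; the print body is only an example.

hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
	{
	# Report answer-section records as they are matched to the session.
	if ( ans$answer_type == DNS_ANS )
		print fmt("%s answered %s with %s", c$id$resp_h, ans$query, reply);
	}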
|
||||
|
||||
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=-5
|
||||
event dns_end(c: connection, msg: dns_msg) &priority=5
|
||||
{
|
||||
if ( c$dns$ready )
|
||||
if ( ! c?$dns )
|
||||
return;
|
||||
|
||||
if ( msg$QR )
|
||||
c$dns$saw_reply = T;
|
||||
else
|
||||
c$dns$saw_query = T;
|
||||
}
|
||||
|
||||
event dns_end(c: connection, msg: dns_msg) &priority=-5
|
||||
{
|
||||
if ( c?$dns && c$dns$saw_reply && c$dns$saw_query )
|
||||
{
|
||||
Log::write(DNS::LOG, c$dns);
|
||||
# This record is logged and no longer pending.
|
||||
Queue::get(c$dns_state$pending[c$dns$trans_id]);
|
||||
delete c$dns;
|
||||
}
|
||||
}
|
||||
|
||||
event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
|
||||
{
|
||||
if ( msg$opcode != 0 )
|
||||
# Currently only standard queries are tracked.
|
||||
return;
|
||||
|
||||
c$dns$RD = msg$RD;
|
||||
c$dns$TC = msg$TC;
|
||||
c$dns$qclass = qclass;
|
||||
|
@ -261,64 +359,88 @@ event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qcla
|
|||
# Note: I'm ignoring the name type for now. Not sure if this should be
|
||||
# worked into the query/response in some fashion.
|
||||
if ( c$id$resp_p == 137/udp )
|
||||
{
|
||||
query = decode_netbios_name(query);
|
||||
if ( c$dns$qtype_name == "SRV" )
|
||||
{
|
||||
# The SRV RFC used the ID used for NetBios Status RRs.
|
||||
# So if this is NetBios Name Service we name it correctly.
|
||||
c$dns$qtype_name = "NBSTAT";
|
||||
}
|
||||
}
|
||||
c$dns$query = query;
|
||||
}
|
||||
|
||||
|
||||
event dns_unknown_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
|
||||
{
|
||||
hook DNS::do_reply(c, msg, ans, fmt("<unknown type=%s>", ans$qtype));
|
||||
}
|
||||
|
||||
event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, fmt("%s", a));
|
||||
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
|
||||
}
|
||||
|
||||
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, str: string) &priority=5
|
||||
event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, strs: string_vec) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, str);
|
||||
local txt_strings: string = "";
|
||||
|
||||
for ( i in strs )
|
||||
{
|
||||
if ( i > 0 )
|
||||
txt_strings += " ";
|
||||
|
||||
txt_strings += fmt("TXT %d %s", |strs[i]|, strs[i]);
|
||||
}
|
||||
|
||||
hook DNS::do_reply(c, msg, ans, txt_strings);
|
||||
}
|
||||
|
||||
event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, fmt("%s", a));
|
||||
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
|
||||
}
|
||||
|
||||
event dns_A6_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, fmt("%s", a));
|
||||
hook DNS::do_reply(c, msg, ans, fmt("%s", a));
|
||||
}
|
||||
|
||||
event dns_NS_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, name);
|
||||
hook DNS::do_reply(c, msg, ans, name);
|
||||
}
|
||||
|
||||
event dns_CNAME_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, name);
|
||||
hook DNS::do_reply(c, msg, ans, name);
|
||||
}
|
||||
|
||||
event dns_MX_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string,
|
||||
preference: count) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, name);
|
||||
hook DNS::do_reply(c, msg, ans, name);
|
||||
}
|
||||
|
||||
event dns_PTR_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, name);
|
||||
hook DNS::do_reply(c, msg, ans, name);
|
||||
}
|
||||
|
||||
event dns_SOA_reply(c: connection, msg: dns_msg, ans: dns_answer, soa: dns_soa) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, soa$mname);
|
||||
hook DNS::do_reply(c, msg, ans, soa$mname);
|
||||
}
|
||||
|
||||
event dns_WKS_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, "");
|
||||
hook DNS::do_reply(c, msg, ans, "");
|
||||
}
|
||||
|
||||
event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
|
||||
event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer, target: string, priority: count, weight: count, p: count) &priority=5
|
||||
{
|
||||
event DNS::do_reply(c, msg, ans, "");
|
||||
hook DNS::do_reply(c, msg, ans, target);
|
||||
}
|
||||
|
||||
# TODO: figure out how to handle these
|
||||
|
@ -339,6 +461,7 @@ event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5
|
|||
|
||||
event dns_rejected(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5
|
||||
{
|
||||
if ( c?$dns )
|
||||
c$dns$rejected = T;
|
||||
}
|
||||
|
||||
|
@ -347,16 +470,8 @@ event connection_state_remove(c: connection) &priority=-5
|
|||
if ( ! c?$dns_state )
|
||||
return;
|
||||
|
||||
# If Bro is expiring state, we should go ahead and log all unlogged
|
||||
# request/response pairs now.
|
||||
for ( trans_id in c$dns_state$pending )
|
||||
{
|
||||
local infos: vector of Info;
|
||||
Queue::get_vector(c$dns_state$pending[trans_id], infos);
|
||||
for ( i in infos )
|
||||
{
|
||||
Log::write(DNS::LOG, infos[i]);
|
||||
# If Bro is expiring state, we should go ahead and log all unmatched
|
||||
# queries and replies now.
|
||||
log_unmatched_msgs(c$dns_state$pending_queries);
|
||||
log_unmatched_msgs(c$dns_state$pending_replies);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
# List of HTTP headers pulled from:
|
||||
# http://annevankesteren.nl/2007/10/http-methods
|
||||
signature dpd_http_client {
|
||||
ip-proto == tcp
|
||||
payload /^[[:space:]]*(GET|HEAD|POST)[[:space:]]*/
|
||||
payload /^[[:space:]]*(OPTIONS|GET|HEAD|POST|PUT|DELETE|TRACE|CONNECT|PROPFIND|PROPPATCH|MKCOL|COPY|MOVE|LOCK|UNLOCK|VERSION-CONTROL|REPORT|CHECKOUT|CHECKIN|UNCHECKOUT|MKWORKSPACE|UPDATE|LABEL|MERGE|BASELINE-CONTROL|MKACTIVITY|ORDERPATCH|ACL|PATCH|SEARCH|BCOPY|BDELETE|BMOVE|BPROPFIND|BPROPPATCH|NOTIFY|POLL|SUBSCRIBE|UNSUBSCRIBE|X-MS-ENUMATTS|RPC_OUT_DATA|RPC_IN_DATA)[[:space:]]*/
|
||||
tcp-state originator
|
||||
}
|
||||
|
||||
|
|
|
@ -72,7 +72,7 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
|
|||
|
||||
if ( f$is_orig )
|
||||
{
|
||||
if ( ! c$http?$orig_mime_types )
|
||||
if ( ! c$http?$orig_fuids )
|
||||
c$http$orig_fuids = string_vec(f$id);
|
||||
else
|
||||
c$http$orig_fuids[|c$http$orig_fuids|] = f$id;
|
||||
|
@ -87,7 +87,7 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
|
|||
}
|
||||
else
|
||||
{
|
||||
if ( ! c$http?$resp_mime_types )
|
||||
if ( ! c$http?$resp_fuids )
|
||||
c$http$resp_fuids = string_vec(f$id);
|
||||
else
|
||||
c$http$resp_fuids[|c$http$resp_fuids|] = f$id;
|
||||
|
|
|
@ -4,6 +4,7 @@
|
|||
|
||||
@load base/utils/numbers
|
||||
@load base/utils/files
|
||||
@load base/frameworks/tunnels
|
||||
|
||||
module HTTP;
|
||||
|
||||
|
@ -217,6 +218,17 @@ event http_reply(c: connection, version: string, code: count, reason: string) &p
|
|||
c$http$info_code = code;
|
||||
c$http$info_msg = reason;
|
||||
}
|
||||
|
||||
if ( c$http?$method && c$http$method == "CONNECT" && code == 200 )
|
||||
{
|
||||
# Copy this conn_id and set the orig_p to zero because in the case of CONNECT
|
||||
# proxies there will be potentially many source ports since a new proxy connection
|
||||
# is established for each proxied connection. We treat this as a singular
|
||||
# "tunnel".
|
||||
local tid = copy(c$id);
|
||||
tid$orig_p = 0/tcp;
|
||||
Tunnel::register([$cid=tid, $tunnel_type=Tunnel::HTTP]);
|
||||
}
|
||||
}
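A hedged companion sketch, mirroring the CONNECT condition above, that a local policy script could use to report each proxy tunnel the base script registers (hypothetical, not part of this commit):

event http_reply(c: connection, version: string, code: count, reason: string) &priority=-3
	{
	# Same test the base script uses before calling Tunnel::register().
	if ( c?$http && c$http?$method && c$http$method == "CONNECT" && code == 200 )
		print fmt("HTTP CONNECT tunnel established via %s", c$id$resp_h);
	}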
|
||||
|
||||
event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=5
|
||||
|
|
|
@ -76,7 +76,7 @@ event irc_dcc_message(c: connection, is_orig: bool,
|
|||
dcc_expected_transfers[address, p] = c$irc;
|
||||
}
|
||||
|
||||
event expected_connection_seen(c: connection, a: Analyzer::Tag) &priority=10
|
||||
event scheduled_analyzer_applied(c: connection, a: Analyzer::Tag) &priority=10
|
||||
{
|
||||
local id = c$id;
|
||||
if ( [id$resp_h, id$resp_p] in dcc_expected_transfers )
|
||||
|
|
1 scripts/base/protocols/radius/__load__.bro (new file)
@@ -0,0 +1 @@
@load ./main

231 scripts/base/protocols/radius/consts.bro (new file)
@@ -0,0 +1,231 @@
|
|||
|
||||
module RADIUS;
|
||||
|
||||
const msg_types: table[count] of string = {
|
||||
[1] = "Access-Request",
|
||||
[2] = "Access-Accept",
|
||||
[3] = "Access-Reject",
|
||||
[4] = "Accounting-Request",
|
||||
[5] = "Accounting-Response",
|
||||
[11] = "Access-Challenge",
|
||||
[12] = "Status-Server",
|
||||
[13] = "Status-Client",
|
||||
} &default=function(i: count): string { return fmt("unknown-%d", i); };
|
||||
|
||||
const attr_types: table[count] of string = {
|
||||
[1] = "User-Name",
|
||||
[2] = "User-Password",
|
||||
[3] = "CHAP-Password",
|
||||
[4] = "NAS-IP-Address",
|
||||
[5] = "NAS-Port",
|
||||
[6] = "Service-Type",
|
||||
[7] = "Framed-Protocol",
|
||||
[8] = "Framed-IP-Address",
|
||||
[9] = "Framed-IP-Netmask",
|
||||
[10] = "Framed-Routing",
|
||||
[11] = "Filter-Id",
|
||||
[12] = "Framed-MTU",
|
||||
[13] = "Framed-Compression",
|
||||
[14] = "Login-IP-Host",
|
||||
[15] = "Login-Service",
|
||||
[16] = "Login-TCP-Port",
|
||||
[18] = "Reply-Message",
|
||||
[19] = "Callback-Number",
|
||||
[20] = "Callback-Id",
|
||||
[22] = "Framed-Route",
|
||||
[23] = "Framed-IPX-Network",
|
||||
[24] = "State",
|
||||
[25] = "Class",
|
||||
[26] = "Vendor-Specific",
|
||||
[27] = "Session-Timeout",
|
||||
[28] = "Idle-Timeout",
|
||||
[29] = "Termination-Action",
|
||||
[30] = "Called-Station-Id",
|
||||
[31] = "Calling-Station-Id",
|
||||
[32] = "NAS-Identifier",
|
||||
[33] = "Proxy-State",
|
||||
[34] = "Login-LAT-Service",
|
||||
[35] = "Login-LAT-Node",
|
||||
[36] = "Login-LAT-Group",
|
||||
[37] = "Framed-AppleTalk-Link",
|
||||
[38] = "Framed-AppleTalk-Network",
|
||||
[39] = "Framed-AppleTalk-Zone",
|
||||
[40] = "Acct-Status-Type",
|
||||
[41] = "Acct-Delay-Time",
|
||||
[42] = "Acct-Input-Octets",
|
||||
[43] = "Acct-Output-Octets",
|
||||
[44] = "Acct-Session-Id",
|
||||
[45] = "Acct-Authentic",
|
||||
[46] = "Acct-Session-Time",
|
||||
[47] = "Acct-Input-Packets",
|
||||
[48] = "Acct-Output-Packets",
|
||||
[49] = "Acct-Terminate-Cause",
|
||||
[50] = "Acct-Multi-Session-Id",
|
||||
[51] = "Acct-Link-Count",
|
||||
[52] = "Acct-Input-Gigawords",
|
||||
[53] = "Acct-Output-Gigawords",
|
||||
[55] = "Event-Timestamp",
|
||||
[56] = "Egress-VLANID",
|
||||
[57] = "Ingress-Filters",
|
||||
[58] = "Egress-VLAN-Name",
|
||||
[59] = "User-Priority-Table",
|
||||
[60] = "CHAP-Challenge",
|
||||
[61] = "NAS-Port-Type",
|
||||
[62] = "Port-Limit",
|
||||
[63] = "Login-LAT-Port",
|
||||
[64] = "Tunnel-Type",
|
||||
[65] = "Tunnel-Medium-Type",
|
||||
[66] = "Tunnel-Client-EndPoint",
|
||||
[67] = "Tunnel-Server-EndPoint",
|
||||
[68] = "Acct-Tunnel-Connection",
|
||||
[69] = "Tunnel-Password",
|
||||
[70] = "ARAP-Password",
|
||||
[71] = "ARAP-Features",
|
||||
[72] = "ARAP-Zone-Access",
|
||||
[73] = "ARAP-Security",
|
||||
[74] = "ARAP-Security-Data",
|
||||
[75] = "Password-Retry",
|
||||
[76] = "Prompt",
|
||||
[77] = "Connect-Info",
|
||||
[78] = "Configuration-Token",
|
||||
[79] = "EAP-Message",
|
||||
[80] = "Message Authenticator",
|
||||
[81] = "Tunnel-Private-Group-ID",
|
||||
[82] = "Tunnel-Assignment-ID",
|
||||
[83] = "Tunnel-Preference",
|
||||
[84] = "ARAP-Challenge-Response",
|
||||
[85] = "Acct-Interim-Interval",
|
||||
[86] = "Acct-Tunnel-Packets-Lost",
|
||||
[87] = "NAS-Port-Id",
|
||||
[88] = "Framed-Pool",
|
||||
[89] = "CUI",
|
||||
[90] = "Tunnel-Client-Auth-ID",
|
||||
[91] = "Tunnel-Server-Auth-ID",
|
||||
[92] = "NAS-Filter-Rule",
|
||||
[94] = "Originating-Line-Info",
|
||||
[95] = "NAS-IPv6-Address",
|
||||
[96] = "Framed-Interface-Id",
|
||||
[97] = "Framed-IPv6-Prefix",
|
||||
[98] = "Login-IPv6-Host",
|
||||
[99] = "Framed-IPv6-Route",
|
||||
[100] = "Framed-IPv6-Pool",
|
||||
[101] = "Error-Cause",
|
||||
[102] = "EAP-Key-Name",
|
||||
[103] = "Digest-Response",
|
||||
[104] = "Digest-Realm",
|
||||
[105] = "Digest-Nonce",
|
||||
[106] = "Digest-Response-Auth",
|
||||
[107] = "Digest-Nextnonce",
|
||||
[108] = "Digest-Method",
|
||||
[109] = "Digest-URI",
|
||||
[110] = "Digest-Qop",
|
||||
[111] = "Digest-Algorithm",
|
||||
[112] = "Digest-Entity-Body-Hash",
|
||||
[113] = "Digest-CNonce",
|
||||
[114] = "Digest-Nonce-Count",
|
||||
[115] = "Digest-Username",
|
||||
[116] = "Digest-Opaque",
|
||||
[117] = "Digest-Auth-Param",
|
||||
[118] = "Digest-AKA-Auts",
|
||||
[119] = "Digest-Domain",
|
||||
[120] = "Digest-Stale",
|
||||
[121] = "Digest-HA1",
|
||||
[122] = "SIP-AOR",
|
||||
[123] = "Delegated-IPv6-Prefix",
|
||||
[124] = "MIP6-Feature-Vector",
|
||||
[125] = "MIP6-Home-Link-Prefix",
|
||||
[126] = "Operator-Name",
|
||||
[127] = "Location-Information",
|
||||
[128] = "Location-Data",
|
||||
[129] = "Basic-Location-Policy-Rules",
|
||||
[130] = "Extended-Location-Policy-Rules",
|
||||
[131] = "Location-Capable",
|
||||
[132] = "Requested-Location-Info",
|
||||
[133] = "Framed-Management-Protocol",
|
||||
[134] = "Management-Transport-Protection",
|
||||
[135] = "Management-Policy-Id",
|
||||
[136] = "Management-Privilege-Level",
|
||||
[137] = "PKM-SS-Cert",
|
||||
[138] = "PKM-CA-Cert",
|
||||
[139] = "PKM-Config-Settings",
|
||||
[140] = "PKM-Cryptosuite-List",
|
||||
[141] = "PKM-SAID",
|
||||
[142] = "PKM-SA-Descriptor",
|
||||
[143] = "PKM-Auth-Key",
|
||||
[144] = "DS-Lite-Tunnel-Name",
|
||||
[145] = "Mobile-Node-Identifier",
|
||||
[146] = "Service-Selection",
|
||||
[147] = "PMIP6-Home-LMA-IPv6-Address",
|
||||
[148] = "PMIP6-Visited-LMA-IPv6-Address",
|
||||
[149] = "PMIP6-Home-LMA-IPv4-Address",
|
||||
[150] = "PMIP6-Visited-LMA-IPv4-Address",
|
||||
[151] = "PMIP6-Home-HN-Prefix",
|
||||
[152] = "PMIP6-Visited-HN-Prefix",
|
||||
[153] = "PMIP6-Home-Interface-ID",
|
||||
[154] = "PMIP6-Visited-Interface-ID",
|
||||
[155] = "PMIP6-Home-IPv4-HoA",
|
||||
[156] = "PMIP6-Visited-IPv4-HoA",
|
||||
[157] = "PMIP6-Home-DHCP4-Server-Address",
|
||||
[158] = "PMIP6-Visited-DHCP4-Server-Address",
|
||||
[159] = "PMIP6-Home-DHCP6-Server-Address",
|
||||
[160] = "PMIP6-Visited-DHCP6-Server-Address",
|
||||
[161] = "PMIP6-Home-IPv4-Gateway",
|
||||
[162] = "PMIP6-Visited-IPv4-Gateway",
|
||||
[163] = "EAP-Lower-Layer",
|
||||
[164] = "GSS-Acceptor-Service-Name",
|
||||
[165] = "GSS-Acceptor-Host-Name",
|
||||
[166] = "GSS-Acceptor-Service-Specifics",
|
||||
[167] = "GSS-Acceptor-Realm-Name",
|
||||
[168] = "Framed-IPv6-Address",
|
||||
[169] = "DNS-Server-IPv6-Address",
|
||||
[170] = "Route-IPv6-Information",
|
||||
[171] = "Delegated-IPv6-Prefix-Pool",
|
||||
[172] = "Stateful-IPv6-Address-Pool",
|
||||
[173] = "IPv6-6rd-Configuration"
|
||||
} &default=function(i: count): string { return fmt("unknown-%d", i); };
|
||||
|
||||
const nas_port_types: table[count] of string = {
|
||||
[0] = "Async",
|
||||
[1] = "Sync",
|
||||
[2] = "ISDN Sync",
|
||||
[3] = "ISDN Async V.120",
|
||||
[4] = "ISDN Async V.110",
|
||||
[5] = "Virtual",
|
||||
[6] = "PIAFS",
|
||||
[7] = "HDLC Clear Channel",
|
||||
[8] = "X.25",
|
||||
[9] = "X.75",
|
||||
[10] = "G.3 Fax",
|
||||
[11] = "SDSL - Symmetric DSL",
|
||||
[12] = "ADSL-CAP - Asymmetric DSL, Carrierless Amplitude Phase Modulation",
|
||||
[13] = "ADSL-DMT - Asymmetric DSL, Discrete Multi-Tone",
|
||||
[14] = "IDSL - ISDN Digital Subscriber Line",
|
||||
[15] = "Ethernet",
|
||||
[16] = "xDSL - Digital Subscriber Line of unknown type",
|
||||
[17] = "Cable",
|
||||
[18] = "Wireless - Other",
|
||||
[19] = "Wireless - IEEE 802.11"
|
||||
} &default=function(i: count): string { return fmt("unknown-%d", i); };
|
||||
|
||||
const service_types: table[count] of string = {
|
||||
[1] = "Login",
|
||||
[2] = "Framed",
|
||||
[3] = "Callback Login",
|
||||
[4] = "Callback Framed",
|
||||
[5] = "Outbound",
|
||||
[6] = "Administrative",
|
||||
[7] = "NAS Prompt",
|
||||
[8] = "Authenticate Only",
|
||||
[9] = "Callback NAS Prompt",
|
||||
[10] = "Call Check",
|
||||
[11] = "Callback Administrative",
|
||||
} &default=function(i: count): string { return fmt("unknown-%d", i); };
|
||||
|
||||
const framed_protocol_types: table[count] of string = {
|
||||
[1] = "PPP",
|
||||
[2] = "SLIP",
|
||||
[3] = "AppleTalk Remote Access Protocol (ARAP)",
|
||||
[4] = "Gandalf proprietary SingleLink/MultiLink protocol",
|
||||
[5] = "Xylogics proprietary IPX/SLIP",
|
||||
[6] = "X.75 Synchronous"
|
||||
} &default=function(i: count): string { return fmt("unknown-%d", i); };
|
126 scripts/base/protocols/radius/main.bro (new file)
@@ -0,0 +1,126 @@
|
|||
##! Implements base functionality for RADIUS analysis. Generates the radius.log file.
|
||||
|
||||
module RADIUS;
|
||||
|
||||
@load ./consts.bro
|
||||
@load base/utils/addrs
|
||||
|
||||
export {
|
||||
redef enum Log::ID += { LOG };
|
||||
|
||||
type Info: record {
|
||||
## Timestamp for when the event happened.
|
||||
ts : time &log;
|
||||
## Unique ID for the connection.
|
||||
uid : string &log;
|
||||
## The connection's 4-tuple of endpoint addresses/ports.
|
||||
id : conn_id &log;
|
||||
## The username, if present.
|
||||
username : string &log &optional;
|
||||
## MAC address, if present.
|
||||
mac : string &log &optional;
|
||||
## Remote IP address, if present.
|
||||
remote_ip : addr &log &optional;
|
||||
## Connect info, if present.
|
||||
connect_info : string &log &optional;
|
||||
## Successful or failed authentication.
|
||||
result : string &log &optional;
|
||||
## Whether this has already been logged and can be ignored.
|
||||
logged : bool &optional;
|
||||
|
||||
};
|
||||
|
||||
## The amount of time we wait for an authentication response before
|
||||
## expiring it.
|
||||
const expiration_interval = 10secs &redef;
|
||||
|
||||
## Logs an authentication attempt if we didn't see a response in time.
|
||||
##
|
||||
## t: A table of Info records.
|
||||
##
|
||||
## idx: The index of the connection$radius table corresponding to the
|
||||
## radius authentication about to expire.
|
||||
##
|
||||
## Returns: 0secs, which when this function is used as an
|
||||
## :bro:attr:`&expire_func`, indicates to remove the element at
|
||||
## *idx* immediately.
|
||||
global expire: function(t: table[count] of Info, idx: count): interval;
|
||||
|
||||
## Event that can be handled to access the RADIUS record as it is sent on
|
||||
## to the logging framework.
|
||||
global log_radius: event(rec: Info);
|
||||
}
|
||||
|
||||
redef record connection += {
|
||||
radius: table[count] of Info &optional &write_expire=expiration_interval &expire_func=expire;
|
||||
};
|
||||
|
||||
const ports = { 1812/udp };
|
||||
|
||||
event bro_init() &priority=5
|
||||
{
|
||||
Log::create_stream(RADIUS::LOG, [$columns=Info, $ev=log_radius]);
|
||||
Analyzer::register_for_ports(Analyzer::ANALYZER_RADIUS, ports);
|
||||
}
|
||||
|
||||
event radius_message(c: connection, result: RADIUS::Message)
|
||||
{
|
||||
local info: Info;
|
||||
|
||||
if ( c?$radius && result$trans_id in c$radius )
|
||||
info = c$radius[result$trans_id];
|
||||
else
|
||||
{
|
||||
c$radius = table();
|
||||
info$ts = network_time();
|
||||
info$uid = c$uid;
|
||||
info$id = c$id;
|
||||
}
|
||||
|
||||
switch ( RADIUS::msg_types[result$code] ) {
|
||||
case "Access-Request":
|
||||
if ( result?$attributes ) {
|
||||
# User-Name
|
||||
if ( ! info?$username && 1 in result$attributes )
|
||||
info$username = result$attributes[1][0];
|
||||
|
||||
# Calling-Station-Id (we expect this to be a MAC)
|
||||
if ( ! info?$mac && 31 in result$attributes )
|
||||
info$mac = normalize_mac(result$attributes[31][0]);
|
||||
|
||||
# Tunnel-Client-EndPoint (useful for VPNs)
|
||||
if ( ! info?$remote_ip && 66 in result$attributes )
|
||||
info$remote_ip = to_addr(result$attributes[66][0]);
|
||||
|
||||
# Connect-Info
|
||||
if ( ! info?$connect_info && 77 in result$attributes )
|
||||
info$connect_info = result$attributes[77][0];
|
||||
}
|
||||
|
||||
break;
|
||||
|
||||
case "Access-Accept":
|
||||
info$result = "success";
|
||||
break;
|
||||
|
||||
case "Access-Reject":
|
||||
info$result = "failed";
|
||||
break;
|
||||
}
|
||||
|
||||
if ( info?$result && ! info?$logged )
|
||||
{
|
||||
info$logged = T;
|
||||
Log::write(RADIUS::LOG, info);
|
||||
}
|
||||
|
||||
c$radius[result$trans_id] = info;
|
||||
}
|
||||
|
||||
|
||||
function expire(t: table[count] of Info, idx: count): interval
|
||||
{
|
||||
t[idx]$result = "unknown";
|
||||
Log::write(RADIUS::LOG, t[idx]);
|
||||
return 0secs;
|
||||
}
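A sketch of a hypothetical companion script consuming the new log stream through the exported log_radius event; the field names come from the Info record above, the print is illustrative only.

event RADIUS::log_radius(rec: RADIUS::Info)
	{
	# Flag rejected authentication attempts as they are logged.
	if ( rec?$result && rec$result == "failed" )
		print fmt("failed RADIUS authentication for %s from %s",
		          rec?$username ? rec$username : "<unknown>", rec$id$orig_h);
	}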
|
|
@ -48,6 +48,6 @@ event bro_init() &priority=5
|
|||
|
||||
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
|
||||
{
|
||||
if ( c?$smtp )
|
||||
if ( c?$smtp && !c$smtp$tls )
|
||||
c$smtp$fuids[|c$smtp$fuids|] = f$id;
|
||||
}
|
||||
|
|
|
@ -50,6 +50,8 @@ export {
|
|||
## Value of the User-Agent header from the client.
|
||||
user_agent: string &log &optional;
|
||||
|
||||
## Indicates that the connection has switched to using TLS.
|
||||
tls: bool &log &default=F;
|
||||
## Indicates if the "Received: from" headers should still be
|
||||
## processed.
|
||||
process_received_from: bool &default=T;
|
||||
|
@ -276,6 +278,12 @@ event connection_state_remove(c: connection) &priority=-5
|
|||
smtp_message(c);
|
||||
}
|
||||
|
||||
event smtp_starttls(c: connection) &priority=5
|
||||
{
|
||||
if ( c?$smtp )
|
||||
c$smtp$tls = T;
|
||||
}
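With the new tls flag in place, a local script could, for example, single out SMTP sessions that never upgraded to TLS. A hedged sketch, not part of this commit:

event connection_state_remove(c: connection)
	{
	# c$smtp$tls is set by the smtp_starttls handler above.
	if ( c?$smtp && ! c$smtp$tls )
		print fmt("SMTP session %s stayed in plaintext", c$uid);
	}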
|
||||
|
||||
function describe(rec: Info): string
|
||||
{
|
||||
if ( rec?$mailfrom && rec?$rcptto )
|
||||
|
|
1 scripts/base/protocols/snmp/README (new file)
@@ -0,0 +1 @@
Support for Simple Network Management Protocol (SNMP) analysis.

1 scripts/base/protocols/snmp/__load__.bro (new file)
@@ -0,0 +1 @@
@load ./main

182 scripts/base/protocols/snmp/main.bro (new file)
@@ -0,0 +1,182 @@
|
|||
##! Enables analysis and logging of SNMP datagrams.
|
||||
|
||||
module SNMP;
|
||||
|
||||
export {
|
||||
redef enum Log::ID += { LOG };
|
||||
|
||||
## Information tracked per SNMP session.
|
||||
type Info: record {
|
||||
## Timestamp of first packet belonging to the SNMP session.
|
||||
ts: time &log;
|
||||
## The unique ID for the connection.
|
||||
uid: string &log;
|
||||
## The connection's 5-tuple of addresses/ports (ports inherently
|
||||
## include transport protocol information)
|
||||
id: conn_id &log;
|
||||
## The amount of time between the first packet belonging to
|
||||
## the SNMP session and the latest one seen.
|
||||
duration: interval &log &default=0secs;
|
||||
## The version of SNMP being used.
|
||||
version: string &log;
|
||||
## The community string of the first SNMP packet associated with
|
||||
## the session. This is used as part of SNMP's (v1 and v2c)
|
||||
## administrative/security framework. See :rfc:`1157` or :rfc:`1901`.
|
||||
community: string &log &optional;
|
||||
|
||||
## The number of variable bindings in GetRequest/GetNextRequest PDUs
|
||||
## seen for the session.
|
||||
get_requests: count &log &default=0;
|
||||
## The number of variable bindings in GetBulkRequest PDUs seen for
|
||||
## the session.
|
||||
get_bulk_requests: count &log &default=0;
|
||||
## The number of variable bindings in GetResponse/Response PDUs seen
|
||||
## for the session.
|
||||
get_responses: count &log &default=0;
|
||||
## The number of variable bindings in SetRequest PDUs seen for
|
||||
## the session.
|
||||
set_requests: count &log &default=0;
|
||||
|
||||
## A system description of the SNMP responder endpoint.
|
||||
display_string: string &log &optional;
|
||||
## The time at which the SNMP responder endpoint claims it's been
|
||||
## up since.
|
||||
up_since: time &log &optional;
|
||||
};
|
||||
|
||||
## Maps an SNMP version integer to a human readable string.
|
||||
const version_map: table[count] of string = {
|
||||
[0] = "1",
|
||||
[1] = "2c",
|
||||
[3] = "3",
|
||||
} &redef &default="unknown";
|
||||
|
||||
## Event that can be handled to access the SNMP record as it is sent on
|
||||
## to the logging framework.
|
||||
global log_snmp: event(rec: Info);
|
||||
}
|
||||
|
||||
redef record connection += {
|
||||
snmp: SNMP::Info &optional;
|
||||
};
|
||||
|
||||
const ports = { 161/udp, 162/udp };
|
||||
redef likely_server_ports += { ports };
|
||||
|
||||
event bro_init() &priority=5
|
||||
{
|
||||
Analyzer::register_for_ports(Analyzer::ANALYZER_SNMP, ports);
|
||||
Log::create_stream(SNMP::LOG, [$columns=SNMP::Info, $ev=log_snmp]);
|
||||
}
|
||||
|
||||
function init_state(c: connection, h: SNMP::Header): Info
|
||||
{
|
||||
if ( ! c?$snmp )
|
||||
{
|
||||
c$snmp = Info($ts=network_time(),
|
||||
$uid=c$uid, $id=c$id,
|
||||
$version=version_map[h$version]);
|
||||
}
|
||||
|
||||
local s = c$snmp;
|
||||
|
||||
if ( ! s?$community )
|
||||
{
|
||||
if ( h?$v1 )
|
||||
s$community = h$v1$community;
|
||||
else if ( h?$v2 )
|
||||
s$community = h$v2$community;
|
||||
}
|
||||
|
||||
s$duration = network_time() - s$ts;
|
||||
return s;
|
||||
}
|
||||
|
||||
|
||||
event connection_state_remove(c: connection) &priority=-5
|
||||
{
|
||||
if ( c?$snmp )
|
||||
Log::write(LOG, c$snmp);
|
||||
}
|
||||
|
||||
event snmp_get_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
|
||||
{
|
||||
local s = init_state(c, header);
|
||||
s$get_requests += |pdu$bindings|;
|
||||
}
|
||||
|
||||
event snmp_get_bulk_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::BulkPDU) &priority=5
|
||||
{
|
||||
local s = init_state(c, header);
|
||||
s$get_bulk_requests += |pdu$bindings|;
|
||||
}
|
||||
|
||||
event snmp_get_next_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
|
||||
{
|
||||
local s = init_state(c, header);
|
||||
s$get_requests += |pdu$bindings|;
|
||||
}
|
||||
|
||||
event snmp_response(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
|
||||
{
|
||||
local s = init_state(c, header);
|
||||
s$get_responses += |pdu$bindings|;
|
||||
|
||||
for ( i in pdu$bindings )
|
||||
{
|
||||
local binding = pdu$bindings[i];
|
||||
|
||||
if ( binding$oid == "1.3.6.1.2.1.1.1.0" && binding$value?$octets )
|
||||
c$snmp$display_string = binding$value$octets;
|
||||
else if ( binding$oid == "1.3.6.1.2.1.1.3.0" && binding$value?$unsigned )
|
||||
{
|
||||
local up_seconds = binding$value$unsigned / 100.0;
|
||||
s$up_since = network_time() - double_to_interval(up_seconds);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
event snmp_set_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
|
||||
{
|
||||
local s = init_state(c, header);
|
||||
s$set_requests += |pdu$bindings|;
|
||||
}
|
||||
|
||||
event snmp_trap(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::TrapPDU) &priority=5
|
||||
{
|
||||
init_state(c, header);
|
||||
}
|
||||
|
||||
event snmp_inform_request(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
|
||||
{
|
||||
init_state(c, header);
|
||||
}
|
||||
|
||||
event snmp_trapV2(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
|
||||
{
|
||||
init_state(c, header);
|
||||
}
|
||||
|
||||
event snmp_report(c: connection, is_orig: bool, header: SNMP::Header, pdu: SNMP::PDU) &priority=5
|
||||
{
|
||||
init_state(c, header);
|
||||
}
|
||||
|
||||
event snmp_unknown_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
|
||||
{
|
||||
init_state(c, header);
|
||||
}
|
||||
|
||||
event snmp_unknown_scoped_pdu(c: connection, is_orig: bool, header: SNMP::Header, tag: count) &priority=5
|
||||
{
|
||||
init_state(c, header);
|
||||
}
|
||||
|
||||
event snmp_encrypted_pdu(c: connection, is_orig: bool, header: SNMP::Header) &priority=5
|
||||
{
|
||||
init_state(c, header);
|
||||
}
|
||||
|
||||
#event snmp_unknown_header_version(c: connection, is_orig: bool, version: count) &priority=5
|
||||
# {
|
||||
# }
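A small usage sketch built on the exported log_snmp event and the community field above (hypothetical local policy, not part of this commit):

event SNMP::log_snmp(rec: SNMP::Info)
	{
	# Highlight sessions still using the default v1/v2c community string.
	if ( rec?$community && rec$community == "public" )
		print fmt("SNMP session using default community string: %s -> %s",
		          rec$id$orig_h, rec$id$resp_h);
	}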
|
|
@ -1,5 +1,6 @@
|
|||
@load ./consts
|
||||
@load ./main
|
||||
@load ./mozilla-ca-list
|
||||
@load ./files
|
||||
|
||||
@load-sigs ./dpd.sig
|
||||
|
|
|
@ -15,6 +15,17 @@ export {
|
|||
[TLSv12] = "TLSv12",
|
||||
} &default=function(i: count):string { return fmt("unknown-%d", i); };
|
||||
|
||||
## TLS content types:
|
||||
const CHANGE_CIPHER_SPEC = 20;
|
||||
const ALERT = 21;
|
||||
const HANDSHAKE = 22;
|
||||
const APPLICATION_DATA = 23;
|
||||
const HEARTBEAT = 24;
|
||||
const V2_ERROR = 300;
|
||||
const V2_CLIENT_HELLO = 301;
|
||||
const V2_CLIENT_MASTER_KEY = 302;
|
||||
const V2_SERVER_HELLO = 304;
|
||||
|
||||
## Mapping between numeric codes and human readable strings for alert
|
||||
## levels.
|
||||
const alert_levels: table[count] of string = {
|
||||
|
@ -47,6 +58,7 @@ export {
|
|||
[70] = "protocol_version",
|
||||
[71] = "insufficient_security",
|
||||
[80] = "internal_error",
|
||||
[86] = "inappropriate_fallback",
|
||||
[90] = "user_canceled",
|
||||
[100] = "no_renegotiation",
|
||||
[110] = "unsupported_extension",
|
||||
|
@ -55,6 +67,7 @@ export {
|
|||
[113] = "bad_certificate_status_response",
|
||||
[114] = "bad_certificate_hash_value",
|
||||
[115] = "unknown_psk_identity",
|
||||
[120] = "no_application_protocol",
|
||||
} &default=function(i: count):string { return fmt("unknown-%d", i); };
|
||||
|
||||
## Mapping between numeric codes and human readable strings for SSL/TLS
|
||||
|
@ -86,9 +99,55 @@ export {
|
|||
[13172] = "next_protocol_negotiation",
|
||||
[13175] = "origin_bound_certificates",
|
||||
[13180] = "encrypted_client_certificates",
|
||||
[30031] = "channel_id",
|
||||
[30032] = "channel_id_new",
|
||||
[35655] = "padding",
|
||||
[65281] = "renegotiation_info"
|
||||
} &default=function(i: count):string { return fmt("unknown-%d", i); };
|
||||
|
||||
## Mapping between numeric codes and human readable strings for SSL/TLS elliptic curves.
|
||||
# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-8
|
||||
const ec_curves: table[count] of string = {
|
||||
[1] = "sect163k1",
|
||||
[2] = "sect163r1",
|
||||
[3] = "sect163r2",
|
||||
[4] = "sect193r1",
|
||||
[5] = "sect193r2",
|
||||
[6] = "sect233k1",
|
||||
[7] = "sect233r1",
|
||||
[8] = "sect239k1",
|
||||
[9] = "sect283k1",
|
||||
[10] = "sect283r1",
|
||||
[11] = "sect409k1",
|
||||
[12] = "sect409r1",
|
||||
[13] = "sect571k1",
|
||||
[14] = "sect571r1",
|
||||
[15] = "secp160k1",
|
||||
[16] = "secp160r1",
|
||||
[17] = "secp160r2",
|
||||
[18] = "secp192k1",
|
||||
[19] = "secp192r1",
|
||||
[20] = "secp224k1",
|
||||
[21] = "secp224r1",
|
||||
[22] = "secp256k1",
|
||||
[23] = "secp256r1",
|
||||
[24] = "secp384r1",
|
||||
[25] = "secp521r1",
|
||||
[26] = "brainpoolP256r1",
|
||||
[27] = "brainpoolP384r1",
|
||||
[28] = "brainpoolP512r1",
|
||||
[0xFF01] = "arbitrary_explicit_prime_curves",
|
||||
[0xFF02] = "arbitrary_explicit_char2_curves"
|
||||
} &default=function(i: count):string { return fmt("unknown-%d", i); };
|
||||
|
||||
## Mapping between numeric codes and human readable strings for SSL/TLS EC point formats.
|
||||
# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-9
|
||||
const ec_point_formats: table[count] of string = {
|
||||
[0] = "uncompressed",
|
||||
[1] = "ansiX962_compressed_prime",
|
||||
[2] = "ansiX962_compressed_char2"
|
||||
} &default=function(i: count):string { return fmt("unknown-%d", i); };
|
||||
|
||||
# SSLv2
|
||||
const SSLv20_CK_RC4_128_WITH_MD5 = 0x010080;
|
||||
const SSLv20_CK_RC4_128_EXPORT40_WITH_MD5 = 0x020080;
|
||||
|
@ -262,6 +321,8 @@ export {
|
|||
const TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C3;
|
||||
const TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C4;
|
||||
const TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256 = 0x00C5;
|
||||
# draft-bmoeller-tls-downgrade-scsv-01
|
||||
const TLS_FALLBACK_SCSV = 0x5600;
|
||||
# RFC 4492
|
||||
const TLS_ECDH_ECDSA_WITH_NULL_SHA = 0xC001;
|
||||
const TLS_ECDH_ECDSA_WITH_RC4_128_SHA = 0xC002;
|
||||
|
@ -437,6 +498,10 @@ export {
|
|||
const TLS_PSK_WITH_AES_256_CCM_8 = 0xC0A9;
|
||||
const TLS_PSK_DHE_WITH_AES_128_CCM_8 = 0xC0AA;
|
||||
const TLS_PSK_DHE_WITH_AES_256_CCM_8 = 0xC0AB;
|
||||
const TLS_ECDHE_ECDSA_WITH_AES_128_CCM = 0xC0AC;
|
||||
const TLS_ECDHE_ECDSA_WITH_AES_256_CCM = 0xC0AD;
|
||||
const TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8 = 0xC0AE;
|
||||
const TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8 = 0xC0AF;
|
||||
# draft-agl-tls-chacha20poly1305-02
|
||||
const TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC13;
|
||||
const TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 = 0xCC14;
|
||||
|
@ -628,6 +693,7 @@ export {
|
|||
[TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DHE_DSS_WITH_CAMELLIA_256_CBC_SHA256",
|
||||
[TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256",
|
||||
[TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256] = "TLS_DH_ANON_WITH_CAMELLIA_256_CBC_SHA256",
|
||||
[TLS_FALLBACK_SCSV] = "TLS_FALLBACK_SCSV",
|
||||
[TLS_ECDH_ECDSA_WITH_NULL_SHA] = "TLS_ECDH_ECDSA_WITH_NULL_SHA",
|
||||
[TLS_ECDH_ECDSA_WITH_RC4_128_SHA] = "TLS_ECDH_ECDSA_WITH_RC4_128_SHA",
|
||||
[TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA] = "TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA",
|
||||
|
@ -799,6 +865,10 @@ export {
|
|||
[TLS_PSK_WITH_AES_256_CCM_8] = "TLS_PSK_WITH_AES_256_CCM_8",
|
||||
[TLS_PSK_DHE_WITH_AES_128_CCM_8] = "TLS_PSK_DHE_WITH_AES_128_CCM_8",
|
||||
[TLS_PSK_DHE_WITH_AES_256_CCM_8] = "TLS_PSK_DHE_WITH_AES_256_CCM_8",
|
||||
[TLS_ECDHE_ECDSA_WITH_AES_128_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM",
|
||||
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM",
|
||||
[TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_128_CCM_8",
|
||||
[TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8] = "TLS_ECDHE_ECDSA_WITH_AES_256_CCM_8",
|
||||
[TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
|
||||
[TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
|
||||
[TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256] = "TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
|
||||
|
@ -813,42 +883,4 @@ export {
|
|||
[TLS_EMPTY_RENEGOTIATION_INFO_SCSV] = "TLS_EMPTY_RENEGOTIATION_INFO_SCSV",
|
||||
} &default=function(i: count):string { return fmt("unknown-%d", i); };
|
||||
|
||||
## Mapping between the constants and string values for SSL/TLS errors.
|
||||
const x509_errors: table[count] of string = {
|
||||
[0] = "ok",
|
||||
[1] = "unable to get issuer cert",
|
||||
[2] = "unable to get crl",
|
||||
[3] = "unable to decrypt cert signature",
|
||||
[4] = "unable to decrypt crl signature",
|
||||
[5] = "unable to decode issuer public key",
|
||||
[6] = "cert signature failure",
|
||||
[7] = "crl signature failure",
|
||||
[8] = "cert not yet valid",
|
||||
[9] = "cert has expired",
|
||||
[10] = "crl not yet valid",
|
||||
[11] = "crl has expired",
|
||||
[12] = "error in cert not before field",
|
||||
[13] = "error in cert not after field",
|
||||
[14] = "error in crl last update field",
|
||||
[15] = "error in crl next update field",
|
||||
[16] = "out of mem",
|
||||
[17] = "depth zero self signed cert",
|
||||
[18] = "self signed cert in chain",
|
||||
[19] = "unable to get issuer cert locally",
|
||||
[20] = "unable to verify leaf signature",
|
||||
[21] = "cert chain too long",
|
||||
[22] = "cert revoked",
|
||||
[23] = "invalid ca",
|
||||
[24] = "path length exceeded",
|
||||
[25] = "invalid purpose",
|
||||
[26] = "cert untrusted",
|
||||
[27] = "cert rejected",
|
||||
[28] = "subject issuer mismatch",
|
||||
[29] = "akid skid mismatch",
|
||||
[30] = "akid issuer serial mismatch",
|
||||
[31] = "keyusage no certsign",
|
||||
[32] = "unable to get crl issuer",
|
||||
[33] = "unhandled critical extension",
|
||||
} &default=function(i: count):string { return fmt("unknown-%d", i); };
|
||||
|
||||
}
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
signature dpd_ssl_server {
|
||||
ip-proto == tcp
|
||||
# Server hello.
|
||||
payload /^(\x16\x03[\x00\x01\x02]..\x02...\x03[\x00\x01\x02]|...?\x04..\x00\x02).*/
|
||||
payload /^(\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
|
||||
requires-reverse-signature dpd_ssl_client
|
||||
enable "ssl"
|
||||
tcp-state responder
|
||||
|
@ -10,6 +10,6 @@ signature dpd_ssl_server {
|
|||
signature dpd_ssl_client {
|
||||
ip-proto == tcp
|
||||
# Client hello.
|
||||
payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/
|
||||
payload /^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]|...?\x01[\x00\x03][\x00\x01\x02\x03]).*/
|
||||
tcp-state originator
|
||||
}
|
||||
|
|
135 scripts/base/protocols/ssl/files.bro (new file)
@@ -0,0 +1,135 @@
|
|||
@load ./main
|
||||
@load base/utils/conn-ids
|
||||
@load base/frameworks/files
|
||||
@load base/files/x509
|
||||
|
||||
module SSL;
|
||||
|
||||
export {
|
||||
redef record Info += {
|
||||
## Chain of certificates offered by the server to validate its
|
||||
## complete signing chain.
|
||||
cert_chain: vector of Files::Info &optional;
|
||||
|
||||
## An ordered vector of all certificate file unique IDs for the
|
||||
## certificates offered by the server.
|
||||
cert_chain_fuids: vector of string &optional &log;
|
||||
|
||||
## Chain of certificates offered by the client to validate its
|
||||
## complete signing chain.
|
||||
client_cert_chain: vector of Files::Info &optional;
|
||||
|
||||
## An ordered vector of all certicate file unique IDs for the
|
||||
## certificates offered by the client.
|
||||
client_cert_chain_fuids: vector of string &optional &log;
|
||||
|
||||
## Subject of the X.509 certificate offered by the server.
|
||||
subject: string &log &optional;
|
||||
|
||||
## Subject of the signer of the X.509 certificate offered by the
|
||||
## server.
|
||||
issuer: string &log &optional;
|
||||
|
||||
## Subject of the X.509 certificate offered by the client.
|
||||
client_subject: string &log &optional;
|
||||
|
||||
## Subject of the signer of the X.509 certificate offered by the
|
||||
## client.
|
||||
client_issuer: string &log &optional;
|
||||
|
||||
## Current number of certificates seen from either side. Used
|
||||
## to create file handles.
|
||||
server_depth: count &default=0;
|
||||
client_depth: count &default=0;
|
||||
};
|
||||
|
||||
## Default file handle provider for SSL.
|
||||
global get_file_handle: function(c: connection, is_orig: bool): string;
|
||||
|
||||
## Default file describer for SSL.
|
||||
global describe_file: function(f: fa_file): string;
|
||||
}
|
||||
|
||||
function get_file_handle(c: connection, is_orig: bool): string
|
||||
{
|
||||
# Unused. File handles are generated in the analyzer.
|
||||
return "";
|
||||
}
|
||||
|
||||
function describe_file(f: fa_file): string
|
||||
{
|
||||
if ( f$source != "SSL" || ! f?$info || ! f$info?$x509 || ! f$info$x509?$certificate )
|
||||
return "";
|
||||
|
||||
# It is difficult to reliably describe a certificate - especially since
|
||||
# we do not know when this function is called (hence, if the data structures
|
||||
# are already populated).
|
||||
#
|
||||
# Just return a bit of our connection information and hope that that is good enough.
|
||||
for ( cid in f$conns )
|
||||
{
|
||||
if ( f$conns[cid]?$ssl )
|
||||
{
|
||||
local c = f$conns[cid];
|
||||
return cat(c$id$resp_h, ":", c$id$resp_p);
|
||||
}
|
||||
}
|
||||
|
||||
return cat("Serial: ", f$info$x509$certificate$serial, " Subject: ",
|
||||
f$info$x509$certificate$subject, " Issuer: ",
|
||||
f$info$x509$certificate$issuer);
|
||||
}
|
||||
|
||||
event bro_init() &priority=5
|
||||
{
|
||||
Files::register_protocol(Analyzer::ANALYZER_SSL,
|
||||
[$get_file_handle = SSL::get_file_handle,
|
||||
$describe = SSL::describe_file]);
|
||||
}
|
||||
|
||||
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
|
||||
{
|
||||
if ( ! c?$ssl )
|
||||
return;
|
||||
|
||||
if ( ! c$ssl?$cert_chain )
|
||||
{
|
||||
c$ssl$cert_chain = vector();
|
||||
c$ssl$client_cert_chain = vector();
|
||||
c$ssl$cert_chain_fuids = string_vec();
|
||||
c$ssl$client_cert_chain_fuids = string_vec();
|
||||
}
|
||||
|
||||
if ( is_orig )
|
||||
{
|
||||
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = f$info;
|
||||
c$ssl$client_cert_chain_fuids[|c$ssl$client_cert_chain_fuids|] = f$id;
|
||||
}
|
||||
else
|
||||
{
|
||||
c$ssl$cert_chain[|c$ssl$cert_chain|] = f$info;
|
||||
c$ssl$cert_chain_fuids[|c$ssl$cert_chain_fuids|] = f$id;
|
||||
}
|
||||
|
||||
Files::add_analyzer(f, Files::ANALYZER_X509);
|
||||
# always calculate hashes. They are not necessary for base scripts
|
||||
# but very useful for identification, and required for policy scripts
|
||||
Files::add_analyzer(f, Files::ANALYZER_MD5);
|
||||
Files::add_analyzer(f, Files::ANALYZER_SHA1);
|
||||
}
|
||||
|
||||
event ssl_established(c: connection) &priority=6
|
||||
{
|
||||
# update subject and issuer information
|
||||
if ( c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 )
|
||||
{
|
||||
c$ssl$subject = c$ssl$cert_chain[0]$x509$certificate$subject;
|
||||
c$ssl$issuer = c$ssl$cert_chain[0]$x509$certificate$issuer;
|
||||
}
|
||||
|
||||
if ( c$ssl?$client_cert_chain && |c$ssl$client_cert_chain| > 0 )
|
||||
{
|
||||
c$ssl$client_subject = c$ssl$client_cert_chain[0]$x509$certificate$subject;
|
||||
c$ssl$client_issuer = c$ssl$client_cert_chain[0]$x509$certificate$issuer;
|
||||
}
|
||||
}
|
|
@ -19,45 +19,28 @@ export {
|
|||
version: string &log &optional;
|
||||
## SSL/TLS cipher suite that the server chose.
|
||||
cipher: string &log &optional;
|
||||
## Elliptic curve the server chose when using ECDH/ECDHE.
|
||||
curve: string &log &optional;
|
||||
## Value of the Server Name Indicator SSL/TLS extension. It
|
||||
## indicates the server name that the client was requesting.
|
||||
server_name: string &log &optional;
|
||||
## Session ID offered by the client for session resumption.
|
||||
session_id: string &log &optional;
|
||||
## Subject of the X.509 certificate offered by the server.
|
||||
subject: string &log &optional;
|
||||
## Subject of the signer of the X.509 certificate offered by the
|
||||
## server.
|
||||
issuer_subject: string &log &optional;
|
||||
## NotValidBefore field value from the server certificate.
|
||||
not_valid_before: time &log &optional;
|
||||
## NotValidAfter field value from the server certificate.
|
||||
not_valid_after: time &log &optional;
|
||||
## Last alert that was seen during the connection.
|
||||
last_alert: string &log &optional;
|
||||
|
||||
## Subject of the X.509 certificate offered by the client.
|
||||
client_subject: string &log &optional;
|
||||
## Subject of the signer of the X.509 certificate offered by the
|
||||
## client.
|
||||
client_issuer_subject: string &log &optional;
|
||||
|
||||
## Full binary server certificate stored in DER format.
|
||||
cert: string &optional;
|
||||
## Chain of certificates offered by the server to validate its
|
||||
## complete signing chain.
|
||||
cert_chain: vector of string &optional;
|
||||
|
||||
## Full binary client certificate stored in DER format.
|
||||
client_cert: string &optional;
|
||||
## Chain of certificates offered by the client to validate its
|
||||
## complete signing chain.
|
||||
client_cert_chain: vector of string &optional;
|
||||
|
||||
## The analyzer ID used for the analyzer instance attached
|
||||
## to each connection. It is not used for logging since it's a
|
||||
## meaningless arbitrary number.
|
||||
analyzer_id: count &optional;
|
||||
|
||||
## Flag to indicate if this SSL session has been established
## successfully, or if it was aborted during the handshake.
|
||||
established: bool &log &default=F;
|
||||
|
||||
## Flag to indicate if this record already has been logged, to
|
||||
## prevent duplicates.
|
||||
logged: bool &default=F;
|
||||
};
|
||||
|
||||
## The default root CA bundle. By default, the mozilla-ca-list.bro
|
||||
|
@ -108,8 +91,7 @@ event bro_init() &priority=5
|
|||
function set_session(c: connection)
|
||||
{
|
||||
if ( ! c?$ssl )
|
||||
c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector(),
|
||||
$client_cert_chain=vector()];
|
||||
c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id];
|
||||
}
|
||||
|
||||
function delay_log(info: Info, token: string)
|
||||
|
@ -127,9 +109,13 @@ function undelay_log(info: Info, token: string)
|
|||
|
||||
function log_record(info: Info)
|
||||
{
|
||||
if ( info$logged )
|
||||
return;
|
||||
|
||||
if ( ! info?$delay_tokens || |info$delay_tokens| == 0 )
|
||||
{
|
||||
Log::write(SSL::LOG, info);
|
||||
info$logged = T;
|
||||
}
|
||||
else
|
||||
{
|
||||
|
@ -146,11 +132,16 @@ function log_record(info: Info)
|
|||
}
|
||||
}
|
||||
|
||||
function finish(c: connection)
|
||||
# remove_analyzer flag is used to prevent disabling analyzer for finished
|
||||
# connections.
|
||||
function finish(c: connection, remove_analyzer: bool)
|
||||
{
|
||||
log_record(c$ssl);
|
||||
if ( disable_analyzer_after_detection && c?$ssl && c$ssl?$analyzer_id )
|
||||
if ( remove_analyzer && disable_analyzer_after_detection && c?$ssl && c$ssl?$analyzer_id )
|
||||
{
|
||||
disable_analyzer(c$id, c$ssl$analyzer_id);
|
||||
delete c$ssl$analyzer_id;
|
||||
}
|
||||
}
|
||||
|
||||
event ssl_client_hello(c: connection, version: count, possible_ts: time, client_random: string, session_id: string, ciphers: index_vec) &priority=5
|
||||
|
@ -170,55 +161,23 @@ event ssl_server_hello(c: connection, version: count, possible_ts: time, server_
|
|||
c$ssl$cipher = cipher_desc[cipher];
|
||||
}
|
||||
|
||||
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=5
|
||||
event ssl_server_curve(c: connection, curve: count) &priority=5
|
||||
{
|
||||
set_session(c);
|
||||
|
||||
# We aren't doing anything with client certificates yet.
|
||||
if ( is_orig )
|
||||
{
|
||||
if ( chain_idx == 0 )
|
||||
{
|
||||
# Save the primary cert.
|
||||
c$ssl$client_cert = der_cert;
|
||||
|
||||
# Also save other certificate information about the primary cert.
|
||||
c$ssl$client_subject = cert$subject;
|
||||
c$ssl$client_issuer_subject = cert$issuer;
|
||||
}
|
||||
else
|
||||
{
|
||||
# Otherwise, add it to the cert validation chain.
|
||||
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = der_cert;
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
if ( chain_idx == 0 )
|
||||
{
|
||||
# Save the primary cert.
|
||||
c$ssl$cert = der_cert;
|
||||
|
||||
# Also save other certificate information about the primary cert.
|
||||
c$ssl$subject = cert$subject;
|
||||
c$ssl$issuer_subject = cert$issuer;
|
||||
c$ssl$not_valid_before = cert$not_valid_before;
|
||||
c$ssl$not_valid_after = cert$not_valid_after;
|
||||
}
|
||||
else
|
||||
{
|
||||
# Otherwise, add it to the cert validation chain.
|
||||
c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert;
|
||||
}
|
||||
}
|
||||
c$ssl$curve = ec_curves[curve];
|
||||
}
|
||||
|
||||
event ssl_extension(c: connection, is_orig: bool, code: count, val: string) &priority=5
|
||||
event ssl_extension_server_name(c: connection, is_orig: bool, names: string_vec) &priority=5
|
||||
{
|
||||
set_session(c);
|
||||
|
||||
if ( is_orig && extensions[code] == "server_name" )
|
||||
c$ssl$server_name = sub_bytes(val, 6, |val|);
|
||||
if ( is_orig && |names| > 0 )
|
||||
{
|
||||
c$ssl$server_name = names[0];
|
||||
if ( |names| > 1 )
|
||||
event conn_weird("SSL_many_server_names", c, cat(names));
|
||||
}
|
||||
}
|
||||
|
||||
event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priority=5
|
||||
|
@ -228,26 +187,36 @@ event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priori
|
|||
c$ssl$last_alert = alert_descriptions[desc];
|
||||
}
|
||||
|
||||
event ssl_established(c: connection) &priority=5
|
||||
event ssl_established(c: connection) &priority=7
|
||||
{
|
||||
set_session(c);
|
||||
c$ssl$established = T;
|
||||
}
|
||||
|
||||
event ssl_established(c: connection) &priority=-5
|
||||
{
|
||||
finish(c);
|
||||
finish(c, T);
|
||||
}
|
||||
|
||||
event connection_state_remove(c: connection) &priority=-5
|
||||
{
|
||||
if ( c?$ssl )
|
||||
# called in case an SSL connection that has not been established terminates
|
||||
finish(c, F);
|
||||
}
|
||||
|
||||
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=5
|
||||
{
|
||||
# Check by checking for existence of c$ssl record.
|
||||
if ( c?$ssl && atype == Analyzer::ANALYZER_SSL )
|
||||
if ( atype == Analyzer::ANALYZER_SSL )
|
||||
{
|
||||
set_session(c);
|
||||
c$ssl$analyzer_id = aid;
|
||||
}
|
||||
}
|
||||
|
||||
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
|
||||
reason: string) &priority=5
|
||||
{
|
||||
if ( c?$ssl )
|
||||
finish(c);
|
||||
finish(c, T);
|
||||
}
|
||||
|
|
File diff suppressed because one or more lines are too long
|
@ -1,4 +1,4 @@
|
|||
##! Functions for parsing and manipulating IP addresses.
|
||||
##! Functions for parsing and manipulating IP and MAC addresses.
|
||||
|
||||
# Regular expressions for matching IP addresses in strings.
|
||||
const ipv4_addr_regex = /[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}/;
|
||||
|
@ -119,3 +119,30 @@ function addr_to_uri(a: addr): string
|
|||
else
|
||||
return fmt("[%s]", a);
|
||||
}
|
||||
|
||||
## Given a string, extracts the hex digits and returns a MAC address in
|
||||
## the format: 00:a0:32:d7:81:8f. If the string doesn't contain 12 or 16 hex
|
||||
## digits, an empty string is returned.
|
||||
##
|
||||
## a: the string to normalize.
|
||||
##
|
||||
## Returns: a normalized MAC address, or an empty string in the case of an error.
|
||||
function normalize_mac(a: string): string
|
||||
{
|
||||
local result = to_lower(gsub(a, /[^A-Fa-f0-9]/, ""));
|
||||
local octets: string_vec;
|
||||
|
||||
if ( |result| == 12 )
|
||||
{
|
||||
octets = str_split(result, vector(2, 4, 6, 8, 10));
|
||||
return fmt("%s:%s:%s:%s:%s:%s", octets[1], octets[2], octets[3], octets[4], octets[5], octets[6]);
|
||||
}
|
||||
|
||||
if ( |result| == 16 )
|
||||
{
|
||||
octets = str_split(result, vector(2, 4, 6, 8, 10, 12, 14));
|
||||
return fmt("%s:%s:%s:%s:%s:%s:%s:%s", octets[1], octets[2], octets[3], octets[4], octets[5], octets[6], octets[7], octets[8]);
|
||||
}
|
||||
|
||||
return "";
|
||||
}
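As a quick editorial illustration (not part of the diff), here is how the helper above behaves on a few hypothetical inputs; the separators are stripped and only the count of hex digits matters:

event bro_init()
	{
	print normalize_mac("00-A0-32-D7-81-8F");  # 00:a0:32:d7:81:8f
	print normalize_mac("00a0.32d7.818f");     # 00:a0:32:d7:81:8f
	print normalize_mac("00:a0:32");           # "" (neither 12 nor 16 hex digits)
	}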
|
||||
|
|
|
@ -35,28 +35,37 @@ export {
|
|||
const notice_threshold = 10 &redef;
|
||||
}
|
||||
|
||||
event file_hash(f: fa_file, kind: string, hash: string)
|
||||
{
|
||||
if ( kind=="sha1" && match_file_types in f$mime_type )
|
||||
function do_mhr_lookup(hash: string, fi: Notice::FileInfo)
|
||||
{
|
||||
local hash_domain = fmt("%s.malware.hash.cymru.com", hash);
|
||||
|
||||
when ( local MHR_result = lookup_hostname_txt(hash_domain) )
|
||||
{
|
||||
# Data is returned as "<dateFirstDetected> <detectionRate>"
|
||||
local MHR_answer = split1(MHR_result, / /);
|
||||
|
||||
if ( |MHR_answer| == 2 )
|
||||
{
|
||||
local mhr_first_detected = double_to_time(to_double(MHR_answer[1]));
|
||||
local mhr_detect_rate = to_count(MHR_answer[2]);
|
||||
|
||||
local readable_first_detected = strftime("%Y-%m-%d %H:%M:%S", mhr_first_detected);
|
||||
if ( mhr_detect_rate >= notice_threshold )
|
||||
{
|
||||
local mhr_first_detected = double_to_time(to_double(MHR_answer[1]));
|
||||
local readable_first_detected = strftime("%Y-%m-%d %H:%M:%S", mhr_first_detected);
|
||||
local message = fmt("Malware Hash Registry Detection rate: %d%% Last seen: %s", mhr_detect_rate, readable_first_detected);
|
||||
local virustotal_url = fmt(match_sub_url, hash);
|
||||
NOTICE([$note=Match, $msg=message, $sub=virustotal_url, $f=f]);
|
||||
# We don't have the full fa_file record here in order to
|
||||
# avoid the "when" statement cloning it (expensive!).
|
||||
local n: Notice::Info = Notice::Info($note=Match, $msg=message, $sub=virustotal_url);
|
||||
Notice::populate_file_info2(fi, n);
|
||||
NOTICE(n);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
event file_hash(f: fa_file, kind: string, hash: string)
|
||||
{
|
||||
if ( kind == "sha1" && f?$mime_type && match_file_types in f$mime_type )
|
||||
do_mhr_lookup(hash, Notice::create_file_info(f));
|
||||
}
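A small, hedged sketch of the parsing done above, using a made-up TXT answer in the documented "<dateFirstDetected> <detectionRate>" format (real answers come from lookup_hostname_txt()):

event bro_init()
	{
	local MHR_answer = split1("1333133133 79", / /);         # hypothetical answer string
	print double_to_time(to_double(MHR_answer[1]));          # first-detected timestamp
	print to_count(MHR_answer[2]);                           # detection rate 79, >= default notice_threshold of 10
	}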
|
||||
|
|
|
@ -7,3 +7,4 @@
|
|||
@load ./ssl
|
||||
@load ./smtp
|
||||
@load ./smtp-url-extraction
|
||||
@load ./x509
|
||||
|
|
|
@ -9,6 +9,12 @@ event http_header(c: connection, is_orig: bool, name: string, value: string)
|
|||
switch ( name )
|
||||
{
|
||||
case "HOST":
|
||||
if ( is_valid_ip(value) )
|
||||
Intel::seen([$host=to_addr(value),
|
||||
$indicator_type=Intel::ADDR,
|
||||
$conn=c,
|
||||
$where=HTTP::IN_HOST_HEADER]);
|
||||
else
|
||||
Intel::seen([$indicator=value,
|
||||
$indicator_type=Intel::DOMAIN,
|
||||
$conn=c,
|
||||
|
|
|
@ -2,27 +2,6 @@
|
|||
@load base/protocols/ssl
|
||||
@load ./where-locations
|
||||
|
||||
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string)
|
||||
{
|
||||
if ( chain_idx == 0 )
|
||||
{
|
||||
if ( /emailAddress=/ in cert$subject )
|
||||
{
|
||||
local email = sub(cert$subject, /^.*emailAddress=/, "");
|
||||
email = sub(email, /,.*$/, "");
|
||||
Intel::seen([$indicator=email,
|
||||
$indicator_type=Intel::EMAIL,
|
||||
$conn=c,
|
||||
$where=(is_orig ? SSL::IN_CLIENT_CERT : SSL::IN_SERVER_CERT)]);
|
||||
}
|
||||
|
||||
Intel::seen([$indicator=sha1_hash(der_cert),
|
||||
$indicator_type=Intel::CERT_HASH,
|
||||
$conn=c,
|
||||
$where=(is_orig ? SSL::IN_CLIENT_CERT : SSL::IN_SERVER_CERT)]);
|
||||
}
|
||||
}
|
||||
|
||||
event ssl_extension(c: connection, is_orig: bool, code: count, val: string)
|
||||
{
|
||||
if ( is_orig && SSL::extensions[code] == "server_name" &&
|
||||
|
|
|
@ -21,9 +21,8 @@ export {
|
|||
SMTP::IN_REPLY_TO,
|
||||
SMTP::IN_X_ORIGINATING_IP_HEADER,
|
||||
SMTP::IN_MESSAGE,
|
||||
SSL::IN_SERVER_CERT,
|
||||
SSL::IN_CLIENT_CERT,
|
||||
SSL::IN_SERVER_NAME,
|
||||
SMTP::IN_HEADER,
|
||||
X509::IN_CERT,
|
||||
};
|
||||
}
|
||||
|
|
16
scripts/policy/frameworks/intel/seen/x509.bro
Normal file
|
@ -0,0 +1,16 @@
|
|||
@load base/frameworks/intel
|
||||
@load base/files/x509
|
||||
@load ./where-locations
|
||||
|
||||
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate)
|
||||
{
|
||||
if ( /emailAddress=/ in cert$subject )
|
||||
{
|
||||
local email = sub(cert$subject, /^.*emailAddress=/, "");
|
||||
email = sub(email, /,.*$/, "");
|
||||
Intel::seen([$indicator=email,
|
||||
$indicator_type=Intel::EMAIL,
|
||||
$f=f,
|
||||
$where=X509::IN_CERT]);
|
||||
}
|
||||
}
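For clarity, a worked example of the two sub() calls above on a hypothetical certificate subject:

event bro_init()
	{
	local subject = "CN=www.example.com,emailAddress=admin@example.com,O=Example Org";
	local email = sub(subject, /^.*emailAddress=/, "");  # "admin@example.com,O=Example Org"
	email = sub(email, /,.*$/, "");                      # "admin@example.com"
	print email;
	}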
|
|
@ -1,6 +1,6 @@
|
|||
@load ./facebook
|
||||
@load ./gmail
|
||||
@load ./google
|
||||
@load ./netflix
|
||||
@load ./pandora
|
||||
@load ./youtube
|
||||
#@load ./gmail
|
||||
#@load ./google
|
||||
#@load ./netflix
|
||||
#@load ./pandora
|
||||
#@load ./youtube
|
||||
|
|
|
@ -19,12 +19,16 @@ export {
|
|||
};
|
||||
}
|
||||
|
||||
event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=4
|
||||
hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5
|
||||
{
|
||||
# The "ready" flag will be set here. This causes the setting from the
|
||||
# base script to be overridden since the base script will log immediately
|
||||
# after all of the ANS replies have been seen.
|
||||
c$dns$ready=F;
|
||||
if ( msg$opcode != 0 )
|
||||
# Currently only standard queries are tracked.
|
||||
return;
|
||||
|
||||
if ( ! msg$QR )
|
||||
# This is weird: the inquirer must also be providing answers in
|
||||
# the request, which is not what we want to track.
|
||||
return;
|
||||
|
||||
if ( ans$answer_type == DNS_AUTH )
|
||||
{
|
||||
|
@ -38,11 +42,4 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
|
|||
c$dns$addl = set();
|
||||
add c$dns$addl[reply];
|
||||
}
|
||||
|
||||
if ( c$dns?$answers && c$dns?$auth && c$dns?$addl &&
|
||||
c$dns$total_replies == |c$dns$answers| + |c$dns$auth| + |c$dns$addl| )
|
||||
{
|
||||
# *Now* all replies desired have been seen.
|
||||
c$dns$ready = T;
|
||||
}
|
||||
}
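To make the bookkeeping above concrete, a hypothetical example of when the record becomes ready:

# Hypothetical reply: total_replies == 5, carrying 2 ANS, 2 AUTH and 1 ADDL
# record. c$dns$ready stays F until this hook has processed the fifth record
# (2 + 2 + 1 == 5); only then does the base script write the dns.log entry.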
|
||||
|
|
|
@ -1,22 +0,0 @@
|
|||
##! Calculate MD5 sums for server DER formatted certificates.
|
||||
|
||||
@load base/protocols/ssl
|
||||
|
||||
module SSL;
|
||||
|
||||
export {
|
||||
redef record Info += {
|
||||
## MD5 sum of the raw server certificate.
|
||||
cert_hash: string &log &optional;
|
||||
};
|
||||
}
|
||||
|
||||
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=4
|
||||
{
|
||||
# We aren't tracking client certificates yet and we are also only tracking
|
||||
# the primary cert. Watch that this came from an SSL analyzed session too.
|
||||
if ( is_orig || chain_idx != 0 || ! c?$ssl )
|
||||
return;
|
||||
|
||||
c$ssl$cert_hash = md5_hash(der_cert);
|
||||
}
|
|
@ -3,11 +3,10 @@
|
|||
##! certificate.
|
||||
|
||||
@load base/protocols/ssl
|
||||
@load base/files/x509
|
||||
@load base/frameworks/notice
|
||||
@load base/utils/directions-and-hosts
|
||||
|
||||
@load protocols/ssl/cert-hash
|
||||
|
||||
module SSL;
|
||||
|
||||
export {
|
||||
|
@ -35,30 +34,31 @@ export {
|
|||
const notify_when_cert_expiring_in = 30days &redef;
|
||||
}
|
||||
|
||||
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3
|
||||
event ssl_established(c: connection) &priority=3
|
||||
{
|
||||
# If this isn't the host cert or we aren't interested in the server, just return.
|
||||
if ( is_orig ||
|
||||
chain_idx != 0 ||
|
||||
! c$ssl?$cert_hash ||
|
||||
# If there are no certificates or we are not interested in the server, just return.
|
||||
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
|
||||
! addr_matches_host(c$id$resp_h, notify_certs_expiration) )
|
||||
return;
|
||||
|
||||
local fuid = c$ssl$cert_chain_fuids[0];
|
||||
local cert = c$ssl$cert_chain[0]$x509$certificate;
|
||||
|
||||
if ( cert$not_valid_before > network_time() )
|
||||
NOTICE([$note=Certificate_Not_Valid_Yet,
|
||||
$conn=c, $suppress_for=1day,
|
||||
$msg=fmt("Certificate %s isn't valid until %T", cert$subject, cert$not_valid_before),
|
||||
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]);
|
||||
$fuid=fuid]);
|
||||
|
||||
else if ( cert$not_valid_after < network_time() )
|
||||
NOTICE([$note=Certificate_Expired,
|
||||
$conn=c, $suppress_for=1day,
|
||||
$msg=fmt("Certificate %s expired at %T", cert$subject, cert$not_valid_after),
|
||||
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]);
|
||||
$fuid=fuid]);
|
||||
|
||||
else if ( cert$not_valid_after - notify_when_cert_expiring_in < network_time() )
|
||||
NOTICE([$note=Certificate_Expires_Soon,
|
||||
$msg=fmt("Certificate %s is going to expire at %T", cert$subject, cert$not_valid_after),
|
||||
$conn=c, $suppress_for=1day,
|
||||
$identifier=cat(c$id$resp_h, c$id$resp_p, c$ssl$cert_hash)]);
|
||||
$fuid=fuid]);
|
||||
}
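A short worked reading of the last condition, assuming the default notify_when_cert_expiring_in of 30days:

# A certificate whose not_valid_after lies 20 days in the future satisfies
# not_valid_after - 30days < network_time(), so Certificate_Expires_Soon is
# raised; one expiring 40 days out does not trigger the notice.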
|
||||
|
|
|
@ -10,8 +10,8 @@
|
|||
##!
|
||||
|
||||
@load base/protocols/ssl
|
||||
@load base/files/x509
|
||||
@load base/utils/directions-and-hosts
|
||||
@load protocols/ssl/cert-hash
|
||||
|
||||
module SSL;
|
||||
|
||||
|
@ -23,41 +23,31 @@ export {
|
|||
}
|
||||
|
||||
# This is an internally maintained variable to prevent relogging of
|
||||
# certificates that have already been seen. It is indexed on an md5 sum of
|
||||
# certificates that have already been seen. It is indexed on a SHA1 sum of
|
||||
# the certificate.
|
||||
global extracted_certs: set[string] = set() &read_expire=1hr &redef;
|
||||
|
||||
event ssl_established(c: connection) &priority=5
|
||||
{
|
||||
if ( ! c$ssl?$cert )
|
||||
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
|
||||
return;
|
||||
|
||||
if ( ! addr_matches_host(c$id$resp_h, extract_certs_pem) )
|
||||
return;
|
||||
|
||||
if ( c$ssl$cert_hash in extracted_certs )
|
||||
local hash = c$ssl$cert_chain[0]$sha1;
|
||||
local cert = c$ssl$cert_chain[0]$x509$handle;
|
||||
|
||||
if ( hash in extracted_certs )
|
||||
# If we already extracted this cert, don't do it again.
|
||||
return;
|
||||
|
||||
add extracted_certs[c$ssl$cert_hash];
|
||||
add extracted_certs[hash];
|
||||
local filename = Site::is_local_addr(c$id$resp_h) ? "certs-local.pem" : "certs-remote.pem";
|
||||
local outfile = open_for_append(filename);
|
||||
enable_raw_output(outfile);
|
||||
|
||||
print outfile, "-----BEGIN CERTIFICATE-----";
|
||||
print outfile, x509_get_certificate_string(cert, T);
|
||||
|
||||
# Encode to base64 and format to fit 50 lines. Otherwise openssl won't like it later.
|
||||
local lines = split_all(encode_base64(c$ssl$cert), /.{50}/);
|
||||
local i = 1;
|
||||
for ( line in lines )
|
||||
{
|
||||
if ( |lines[i]| > 0 )
|
||||
{
|
||||
print outfile, lines[i];
|
||||
}
|
||||
i+=1;
|
||||
}
|
||||
|
||||
print outfile, "-----END CERTIFICATE-----";
|
||||
print outfile, "";
|
||||
close(outfile);
|
||||
}
|
||||
|
|
235
scripts/policy/protocols/ssl/heartbleed.bro
Normal file
|
@ -0,0 +1,235 @@
|
|||
##! Detect the TLS heartbleed attack. See http://heartbleed.com for more.
|
||||
|
||||
@load base/protocols/ssl
|
||||
@load base/frameworks/notice
|
||||
|
||||
module Heartbleed;
|
||||
|
||||
export {
|
||||
redef enum Notice::Type += {
|
||||
## Indicates that a host performed a heartbleed attack or scan.
|
||||
SSL_Heartbeat_Attack,
|
||||
## Indicates that a host performing a heartbleed attack was probably successful.
|
||||
SSL_Heartbeat_Attack_Success,
|
||||
## Indicates we saw heartbeat requests with odd length. Probably an attack or scan.
|
||||
SSL_Heartbeat_Odd_Length,
|
||||
## Indicates we saw many heartbeat requests without a reply. Might be an attack.
|
||||
SSL_Heartbeat_Many_Requests
|
||||
};
|
||||
}
|
||||
|
||||
# Do not disable analyzers after detection - otherwise we will not notice
|
||||
# encrypted attacks.
|
||||
redef SSL::disable_analyzer_after_detection=F;
|
||||
|
||||
redef record SSL::Info += {
|
||||
last_originator_heartbeat_request_size: count &optional;
|
||||
last_responder_heartbeat_request_size: count &optional;
|
||||
|
||||
originator_heartbeats: count &default=0;
|
||||
responder_heartbeats: count &default=0;
|
||||
|
||||
# Unencrypted connections - whether an exploit attempt has been detected yet.
|
||||
heartbleed_detected: bool &default=F;
|
||||
|
||||
# Count number of appdata packages and bytes exchanged so far.
|
||||
enc_appdata_packages: count &default=0;
|
||||
enc_appdata_bytes: count &default=0;
|
||||
};
|
||||
|
||||
type min_length: record {
|
||||
cipher: pattern;
|
||||
min_length: count;
|
||||
};
|
||||
|
||||
global min_lengths: vector of min_length = vector();
|
||||
global min_lengths_tls11: vector of min_length = vector();
|
||||
|
||||
event bro_init()
|
||||
{
|
||||
# Minimum length a heartbeat packet must have for different cipher suites.
|
||||
# Note - tls 1.1+ and 1.0 have different lengths :(
|
||||
# This should be all cipher suites usually supported by vulnerable servers.
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_256_GCM_SHA384$/, $min_length=43];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_128_GCM_SHA256$/, $min_length=43];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA384$/, $min_length=96];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA256$/, $min_length=80];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA$/, $min_length=64];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA256$/, $min_length=80];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA$/, $min_length=64];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=48];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=64];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=48];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES_CBC_SHA$/, $min_length=48];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=48];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
|
||||
min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=48];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_256_CBC_SHA$/, $min_length=48];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_128_CBC_SHA$/, $min_length=48];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=40];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=48];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=40];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_DES_CBC_SHA$/, $min_length=40];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=40];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
|
||||
min_lengths[|min_lengths|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=40];
|
||||
}
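A hedged sketch of where these minima appear to come from (my reading; the diff itself does not spell it out):

# The smallest legitimate plaintext heartbeat is 19 bytes: 1-byte type +
# 2-byte payload length + at least 16 bytes of padding (cf. the "length < 19"
# check below). The cipher's record overhead is added on top, e.g.:
#   TLS 1.0, RSA_WITH_RC4_128_SHA: 19 + 20 (SHA1 MAC)                 = 39
#   TLS 1.0, AES_128_CBC_SHA:      19 + 20, padded to 16-byte block   = 48
#   TLS 1.1, AES_128_CBC_SHA:      48 + 16 (explicit IV)              = 64
# which matches the table entries above.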
|
||||
|
||||
event ssl_heartbeat(c: connection, is_orig: bool, length: count, heartbeat_type: count, payload_length: count, payload: string)
|
||||
{
|
||||
if ( ! c?$ssl )
|
||||
return;
|
||||
|
||||
if ( heartbeat_type == 1 )
|
||||
{
|
||||
local checklength: count = (length<(3+16)) ? length : (length - 3 - 16);
|
||||
|
||||
if ( payload_length > checklength )
|
||||
{
|
||||
c$ssl$heartbleed_detected = T;
|
||||
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack,
|
||||
$msg=fmt("An TLS heartbleed attack was detected! Record length %d. Payload length %d", length, payload_length),
|
||||
$conn=c,
|
||||
$identifier=cat(c$uid, length, payload_length)
|
||||
]);
|
||||
}
|
||||
else if ( is_orig )
|
||||
{
|
||||
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack,
|
||||
$msg=fmt("Heartbeat request before encryption. Probable Scan without exploit attempt. Message length: %d. Payload length: %d", length, payload_length),
|
||||
$conn=c,
|
||||
$n=length,
|
||||
$identifier=cat(c$uid, length)
|
||||
]);
|
||||
}
|
||||
}
|
||||
|
||||
if ( heartbeat_type == 2 && c$ssl$heartbleed_detected )
|
||||
{
|
||||
NOTICE([$note=Heartbleed::SSL_Heartbeat_Attack_Success,
|
||||
$msg=fmt("An TLS heartbleed attack detected before was probably exploited. Message length: %d. Payload length: %d", length, payload_length),
|
||||
$conn=c,
|
||||
$identifier=c$uid
|
||||
]);
|
||||
}
|
||||
}
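For illustration, a worked example of the length check above with hypothetical numbers:

# A cleartext heartbeat request with record length 35 gives
# checklength = 35 - 3 - 16 = 16, i.e. at most 16 bytes of real payload after
# the 3-byte header and the mandatory 16 bytes of padding. A request claiming
# payload_length = 16384 exceeds that and is flagged as a heartbleed attempt.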
|
||||
|
||||
event ssl_encrypted_heartbeat(c: connection, is_orig: bool, length: count)
|
||||
{
|
||||
if ( is_orig )
|
||||
++c$ssl$originator_heartbeats;
|
||||
else
|
||||
++c$ssl$responder_heartbeats;
|
||||
|
||||
local duration = network_time() - c$start_time;
|
||||
|
||||
if ( c$ssl$enc_appdata_packages == 0 )
|
||||
NOTICE([$note=SSL_Heartbeat_Attack,
|
||||
$msg=fmt("Heartbeat before ciphertext. Probable attack or scan. Length: %d, is_orig: %d", length, is_orig),
|
||||
$conn=c,
|
||||
$n=length,
|
||||
$identifier=fmt("%s%s", c$uid, "early")
|
||||
]);
|
||||
else if ( duration < 1min )
|
||||
NOTICE([$note=SSL_Heartbeat_Attack,
|
||||
$msg=fmt("Heartbeat within first minute. Possible attack or scan. Length: %d, is_orig: %d, time: %d", length, is_orig, duration),
|
||||
$conn=c,
|
||||
$n=length,
|
||||
$identifier=fmt("%s%s", c$uid, "early")
|
||||
]);
|
||||
|
||||
if ( c$ssl$originator_heartbeats > c$ssl$responder_heartbeats + 3 )
|
||||
NOTICE([$note=SSL_Heartbeat_Many_Requests,
|
||||
$msg=fmt("More than 3 heartbeat requests without replies from server. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
|
||||
$conn=c,
|
||||
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
|
||||
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
|
||||
]);
|
||||
|
||||
if ( c$ssl$responder_heartbeats > c$ssl$originator_heartbeats + 3 )
|
||||
NOTICE([$note=SSL_Heartbeat_Many_Requests,
|
||||
$msg=fmt("Server sending more heartbeat responses than requests seen. Possible attack. Client count: %d, server count: %d", c$ssl$originator_heartbeats, c$ssl$responder_heartbeats),
|
||||
$conn=c,
|
||||
$n=(c$ssl$originator_heartbeats-c$ssl$responder_heartbeats),
|
||||
$identifier=fmt("%s%d", c$uid, c$ssl$responder_heartbeats/1000) # re-throw every 1000 heartbeats
|
||||
]);
|
||||
|
||||
if ( is_orig && length < 19 )
|
||||
NOTICE([$note=SSL_Heartbeat_Odd_Length,
|
||||
$msg=fmt("Heartbeat message smaller than minimum required length. Probable attack or scan. Message length: %d. Cipher: %s. Time: %f", length, c$ssl$cipher, duration),
|
||||
$conn=c,
|
||||
$n=length,
|
||||
$identifier=fmt("%s-weak-%d", c$uid, length)
|
||||
]);
|
||||
|
||||
# Examine request lengths based on used cipher...
|
||||
local min_length_choice: vector of min_length;
|
||||
if ( (c$ssl$version == "TLSv11") || (c$ssl$version == "TLSv12") ) # tls 1.1+ have different lengths for CBC
|
||||
min_length_choice = min_lengths_tls11;
|
||||
else
|
||||
min_length_choice = min_lengths;
|
||||
|
||||
for ( i in min_length_choice )
|
||||
{
|
||||
if ( min_length_choice[i]$cipher in c$ssl$cipher )
|
||||
{
|
||||
if ( length < min_length_choice[i]$min_length )
|
||||
{
|
||||
NOTICE([$note=SSL_Heartbeat_Odd_Length,
|
||||
$msg=fmt("Heartbeat message smaller than minimum required length. Probable attack. Message length: %d. Required length: %d. Cipher: %s. Cipher match: %s", length, min_length_choice[i]$min_length, c$ssl$cipher, min_length_choice[i]$cipher),
|
||||
$conn=c,
|
||||
$n=length,
|
||||
$identifier=fmt("%s-weak-%d", c$uid, length)
|
||||
]);
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
if ( is_orig )
|
||||
{
|
||||
if ( c$ssl?$last_responder_heartbeat_request_size )
|
||||
{
|
||||
# server originated heartbeat. Ignore & continue
|
||||
delete c$ssl$last_responder_heartbeat_request_size;
|
||||
}
|
||||
|
||||
else
|
||||
c$ssl$last_originator_heartbeat_request_size = length;
|
||||
}
|
||||
else
|
||||
{
|
||||
if ( c$ssl?$last_originator_heartbeat_request_size && c$ssl$last_originator_heartbeat_request_size < length )
|
||||
{
|
||||
NOTICE([$note=SSL_Heartbeat_Attack_Success,
|
||||
$msg=fmt("An encrypted TLS heartbleed attack was probably detected! First packet client record length %d, first packet server record length %d. Time: %f",
|
||||
c$ssl$last_originator_heartbeat_request_size, length, duration),
|
||||
$conn=c,
|
||||
$identifier=c$uid # only throw once per connection
|
||||
]);
|
||||
}
|
||||
|
||||
else if ( ! c$ssl?$last_originator_heartbeat_request_size )
|
||||
c$ssl$last_responder_heartbeat_request_size = length;
|
||||
|
||||
if ( c$ssl?$last_originator_heartbeat_request_size )
|
||||
delete c$ssl$last_originator_heartbeat_request_size;
|
||||
}
|
||||
}
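A brief hypothetical example of the size comparison driving the notice above:

# The client sends an encrypted heartbeat request in a 66-byte record and the
# server answers with a 16408-byte heartbeat record. Since 66 < 16408, the
# response is larger than the stored last_originator_heartbeat_request_size
# and SSL_Heartbeat_Attack_Success is raised (the numbers are made up).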
|
||||
|
||||
event ssl_encrypted_data(c: connection, is_orig: bool, content_type: count, length: count)
|
||||
{
|
||||
if ( content_type == SSL::HEARTBEAT )
|
||||
event ssl_encrypted_heartbeat(c, is_orig, length);
|
||||
else if ( (content_type == SSL::APPLICATION_DATA) && (length > 0) )
|
||||
{
|
||||
++c$ssl$enc_appdata_packages;
|
||||
c$ssl$enc_appdata_bytes += length;
|
||||
}
|
||||
}
|
|
@ -3,7 +3,7 @@
|
|||
|
||||
@load base/utils/directions-and-hosts
|
||||
@load base/protocols/ssl
|
||||
@load protocols/ssl/cert-hash
|
||||
@load base/files/x509
|
||||
|
||||
module Known;
|
||||
|
||||
|
@ -33,7 +33,7 @@ export {
|
|||
## The set of all known certificates to store for preventing duplicate
|
||||
## logging. It can also be used from other scripts to
|
||||
## inspect if a certificate has been seen in use. The string value
|
||||
## in the set is for storing the DER formatted certificate's MD5 hash.
|
||||
## in the set is for storing the DER formatted certificate's SHA1 hash.
|
||||
global certs: set[addr, string] &create_expire=1day &synchronized &redef;
|
||||
|
||||
## Event that can be handled to access the loggable record as it is sent
|
||||
|
@ -46,16 +46,27 @@ event bro_init() &priority=5
|
|||
Log::create_stream(Known::CERTS_LOG, [$columns=CertsInfo, $ev=log_known_certs]);
|
||||
}
|
||||
|
||||
event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3
|
||||
event ssl_established(c: connection) &priority=3
|
||||
{
|
||||
# Make sure this is the server cert and we have a hash for it.
|
||||
if ( is_orig || chain_idx != 0 || ! c$ssl?$cert_hash )
|
||||
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| < 1 )
|
||||
return;
|
||||
|
||||
local host = c$id$resp_h;
|
||||
if ( [host, c$ssl$cert_hash] !in certs && addr_matches_host(host, cert_tracking) )
|
||||
local fuid = c$ssl$cert_chain_fuids[0];
|
||||
|
||||
if ( ! c$ssl$cert_chain[0]?$sha1 )
|
||||
{
|
||||
add certs[host, c$ssl$cert_hash];
|
||||
Reporter::error(fmt("Certificate with fuid %s did not contain sha1 hash when checking for known certs. Aborting",
|
||||
fuid));
|
||||
return;
|
||||
}
|
||||
|
||||
local hash = c$ssl$cert_chain[0]$sha1;
|
||||
local cert = c$ssl$cert_chain[0]$x509$certificate;
|
||||
|
||||
local host = c$id$resp_h;
|
||||
if ( [host, hash] !in certs && addr_matches_host(host, cert_tracking) )
|
||||
{
|
||||
add certs[host, hash];
|
||||
Log::write(Known::CERTS_LOG, [$ts=network_time(), $host=host,
|
||||
$port_num=c$id$resp_p, $subject=cert$subject,
|
||||
$issuer_subject=cert$issuer,
|
||||
|
|
68
scripts/policy/protocols/ssl/log-hostcerts-only.bro
Normal file
|
@ -0,0 +1,68 @@
|
|||
##! When this script is loaded, only the host certificates (client and server)
|
||||
##! will be logged to x509.log. Logging of all other certificates will be suppressed.
|
||||
|
||||
@load base/protocols/ssl
|
||||
@load base/files/x509
|
||||
|
||||
module X509;
|
||||
|
||||
export {
|
||||
redef record Info += {
|
||||
# Logging is suppressed if field is set to F
|
||||
logcert: bool &default=T;
|
||||
};
|
||||
}
|
||||
|
||||
# We need both the Info and the fa_file record modified.
|
||||
# The only moment when we have both the connection and the
|
||||
# file available without having to loop is in the file_over_new_connection
|
||||
# event.
|
||||
# When that event is raised, the x509 record in f$info (which is the only
|
||||
# record the logging framework gets) is not yet available. So - we
|
||||
# have to do this two times, sorry.
|
||||
# Alternatively, we could place it into Files::Info first - but we would
|
||||
# still have to copy it.
|
||||
redef record fa_file += {
|
||||
logcert: bool &default=T;
|
||||
};
|
||||
|
||||
function host_certs_only(rec: X509::Info): bool
|
||||
{
|
||||
return rec$logcert;
|
||||
}
|
||||
|
||||
event bro_init() &priority=2
|
||||
{
|
||||
local f = Log::get_filter(X509::LOG, "default");
|
||||
Log::remove_filter(X509::LOG, "default"); # disable default logging
|
||||
f$pred=host_certs_only; # and add our predicate
|
||||
Log::add_filter(X509::LOG, f);
|
||||
}
|
||||
|
||||
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=2
|
||||
{
|
||||
if ( ! c?$ssl )
|
||||
return;
|
||||
|
||||
local chain: vector of string;
|
||||
|
||||
if ( is_orig )
|
||||
chain = c$ssl$client_cert_chain_fuids;
|
||||
else
|
||||
chain = c$ssl$cert_chain_fuids;
|
||||
|
||||
if ( |chain| == 0 )
|
||||
{
|
||||
Reporter::warning(fmt("Certificate not in chain? (fuid %s)", f$id));
|
||||
return;
|
||||
}
|
||||
|
||||
# Check if this is the host certificate
|
||||
if ( f$id != chain[0] )
|
||||
f$logcert=F;
|
||||
}
|
||||
|
||||
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=2
|
||||
{
|
||||
f$info$x509$logcert = f$logcert; # info record available, copy information.
|
||||
}
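For reference, enabling this behaviour only requires loading the script, e.g. from local.bro (as the local.bro hunk further below also does):

@load protocols/ssl/log-hostcerts-only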
|
|
@ -16,7 +16,6 @@ export {
|
|||
}
|
||||
|
||||
redef record SSL::Info += {
|
||||
sha1: string &log &optional;
|
||||
notary: Response &log &optional;
|
||||
};
|
||||
|
||||
|
@ -38,14 +37,12 @@ function clear_waitlist(digest: string)
|
|||
}
|
||||
}
|
||||
|
||||
event x509_certificate(c: connection, is_orig: bool, cert: X509,
|
||||
chain_idx: count, chain_len: count, der_cert: string)
|
||||
event ssl_established(c: connection) &priority=3
|
||||
{
|
||||
if ( is_orig || chain_idx != 0 || ! c?$ssl )
|
||||
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
|
||||
return;
|
||||
|
||||
local digest = sha1_hash(der_cert);
|
||||
c$ssl$sha1 = digest;
|
||||
local digest = c$ssl$cert_chain[0]$sha1;
|
||||
|
||||
if ( digest in notary_cache )
|
||||
{
|
||||
|
|
|
@ -2,7 +2,6 @@
|
|||
|
||||
@load base/frameworks/notice
|
||||
@load base/protocols/ssl
|
||||
@load protocols/ssl/cert-hash
|
||||
|
||||
module SSL;
|
||||
|
||||
|
@ -19,9 +18,9 @@ export {
|
|||
validation_status: string &log &optional;
|
||||
};
|
||||
|
||||
## MD5 hash values for recently validated certs along with the
|
||||
## MD5 hash values for recently validated chains along with the
|
||||
## validation status message are kept in this table to avoid constant
|
||||
## validation every time the same certificate is seen.
|
||||
## validation every time the same certificate chain is seen.
|
||||
global recently_validated_certs: table[string] of string = table()
|
||||
&read_expire=5mins &synchronized &redef;
|
||||
}
|
||||
|
@ -29,17 +28,26 @@ export {
|
|||
event ssl_established(c: connection) &priority=3
|
||||
{
|
||||
# If there aren't any certs we can't very well do certificate validation.
|
||||
if ( ! c$ssl?$cert || ! c$ssl?$cert_chain )
|
||||
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 )
|
||||
return;
|
||||
|
||||
if ( c$ssl?$cert_hash && c$ssl$cert_hash in recently_validated_certs )
|
||||
local chain_id = join_string_vec(c$ssl$cert_chain_fuids, ".");
|
||||
|
||||
local chain: vector of opaque of x509 = vector();
|
||||
for ( i in c$ssl$cert_chain )
|
||||
{
|
||||
c$ssl$validation_status = recently_validated_certs[c$ssl$cert_hash];
|
||||
chain[i] = c$ssl$cert_chain[i]$x509$handle;
|
||||
}
|
||||
|
||||
if ( chain_id in recently_validated_certs )
|
||||
{
|
||||
c$ssl$validation_status = recently_validated_certs[chain_id];
|
||||
}
|
||||
else
|
||||
{
|
||||
local result = x509_verify(c$ssl$cert, c$ssl$cert_chain, root_certs);
|
||||
c$ssl$validation_status = x509_err2str(result);
|
||||
local result = x509_verify(chain, root_certs);
|
||||
c$ssl$validation_status = result$result_string;
|
||||
recently_validated_certs[chain_id] = result$result_string;
|
||||
}
|
||||
|
||||
if ( c$ssl$validation_status != "ok" )
|
||||
|
@ -47,7 +55,7 @@ event ssl_established(c: connection) &priority=3
|
|||
local message = fmt("SSL certificate validation failed with (%s)", c$ssl$validation_status);
|
||||
NOTICE([$note=Invalid_Server_Cert, $msg=message,
|
||||
$sub=c$ssl$subject, $conn=c,
|
||||
$identifier=cat(c$id$resp_h,c$id$resp_p,c$ssl$validation_status,c$ssl$cert_hash)]);
|
||||
$identifier=cat(c$id$resp_h,c$id$resp_p,c$ssl$validation_status)]);
|
||||
}
|
||||
}
|
||||
|
||||
|
|
63
scripts/policy/protocols/ssl/validate-ocsp.bro
Normal file
|
@ -0,0 +1,63 @@
|
|||
##! Perform OCSP response validation.
|
||||
|
||||
@load base/frameworks/notice
|
||||
@load base/protocols/ssl
|
||||
|
||||
module SSL;
|
||||
|
||||
export {
|
||||
redef enum Notice::Type += {
|
||||
## This indicates that the OCSP response was not deemed
|
||||
## to be valid.
|
||||
Invalid_Ocsp_Response
|
||||
};
|
||||
|
||||
redef record Info += {
|
||||
## Result of OCSP validation for this connection.
|
||||
ocsp_status: string &log &optional;
|
||||
|
||||
## ocsp response as string.
|
||||
ocsp_response: string &optional;
|
||||
};
|
||||
|
||||
}
|
||||
|
||||
# MD5 hash values for recently validated chains along with the OCSP validation
|
||||
# status are kept in this table to avoid constant validation every time the same
|
||||
# certificate chain is seen.
|
||||
global recently_ocsp_validated: table[string] of string = table() &read_expire=5mins;
|
||||
|
||||
event ssl_stapled_ocsp(c: connection, is_orig: bool, response: string) &priority=3
|
||||
{
|
||||
c$ssl$ocsp_response = response;
|
||||
}
|
||||
|
||||
event ssl_established(c: connection) &priority=3
|
||||
{
|
||||
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 || !c$ssl?$ocsp_response )
|
||||
return;
|
||||
|
||||
local chain: vector of opaque of x509 = vector();
|
||||
for ( i in c$ssl$cert_chain )
|
||||
chain[i] = c$ssl$cert_chain[i]$x509$handle;
|
||||
|
||||
local reply_id = cat(md5_hash(c$ssl$ocsp_response), join_string_vec(c$ssl$cert_chain_fuids, "."));
|
||||
|
||||
if ( reply_id in recently_ocsp_validated )
|
||||
{
|
||||
c$ssl$ocsp_status = recently_ocsp_validated[reply_id];
|
||||
return;
|
||||
}
|
||||
|
||||
local result = x509_ocsp_verify(chain, c$ssl$ocsp_response, root_certs);
|
||||
c$ssl$ocsp_status = result$result_string;
|
||||
recently_ocsp_validated[reply_id] = result$result_string;
|
||||
|
||||
if( result$result_string != "good" )
|
||||
{
|
||||
local message = fmt("OCSP response validation failed with (%s)", result$result_string);
|
||||
NOTICE([$note=Invalid_Ocsp_Response, $msg=message,
|
||||
$sub=c$ssl$subject, $conn=c,
|
||||
$identifier=cat(c$id$resp_h,c$id$resp_p,c$ssl$ocsp_status)]);
|
||||
}
|
||||
}
|
91
scripts/policy/protocols/ssl/weak-keys.bro
Normal file
|
@ -0,0 +1,91 @@
|
|||
##! Generate notices when SSL/TLS connections use certificates or DH parameters
|
||||
##! that have potentially unsafe key lengths.
|
||||
|
||||
@load base/protocols/ssl
|
||||
@load base/frameworks/notice
|
||||
@load base/utils/directions-and-hosts
|
||||
|
||||
module SSL;
|
||||
|
||||
export {
|
||||
redef enum Notice::Type += {
|
||||
## Indicates that a server is using a potentially unsafe key.
|
||||
Weak_Key,
|
||||
};
|
||||
|
||||
## The category of hosts you would like to be notified about which are
|
||||
## using potentially unsafe keys. By default, these
|
||||
## notices will be suppressed by the notice framework for 1 day after a particular
|
||||
## host has had a notice generated. Choices are: LOCAL_HOSTS, REMOTE_HOSTS,
|
||||
## ALL_HOSTS, NO_HOSTS
|
||||
const notify_weak_keys = LOCAL_HOSTS &redef;
|
||||
|
||||
## The minimal key length in bits that is considered to be safe. Any shorter
|
||||
## (non-EC) key lengths will trigger the notice.
|
||||
const notify_minimal_key_length = 1024 &redef;
|
||||
|
||||
## Warn if the DH key length is smaller than the certificate key length. This is
|
||||
## potentially unsafe because it gives a wrong impression of safety due to the
|
||||
## certificate key length. However, it is very common and cannot be avoided in some
|
||||
## settings (e.g. with old Java clients).
|
||||
const notify_dh_length_shorter_cert_length = T &redef;
|
||||
}
|
||||
|
||||
# We check key lengths only for DSA or RSA certificates. For others, we do
|
||||
# not know what is safe (e.g. EC is safe even with very short key lengths).
|
||||
event ssl_established(c: connection) &priority=3
|
||||
{
|
||||
# If there are no certificates or we are not interested in the server, just return.
|
||||
if ( ! c$ssl?$cert_chain || |c$ssl$cert_chain| == 0 ||
|
||||
! addr_matches_host(c$id$resp_h, notify_weak_keys) )
|
||||
return;
|
||||
|
||||
local fuid = c$ssl$cert_chain_fuids[0];
|
||||
local cert = c$ssl$cert_chain[0]$x509$certificate;
|
||||
|
||||
if ( !cert?$key_type || !cert?$key_length )
|
||||
return;
|
||||
|
||||
if ( cert$key_type != "dsa" && cert$key_type != "rsa" )
|
||||
return;
|
||||
|
||||
local key_length = cert$key_length;
|
||||
|
||||
if ( key_length < notify_minimal_key_length )
|
||||
NOTICE([$note=Weak_Key,
|
||||
$msg=fmt("Host uses weak certificate with %d bit key", key_length),
|
||||
$conn=c, $suppress_for=1day,
|
||||
$identifier=cat(c$id$orig_h, c$id$orig_p, key_length)
|
||||
]);
|
||||
}
|
||||
|
||||
event ssl_dh_server_params(c: connection, p: string, q: string, Ys: string) &priority=3
|
||||
{
|
||||
if ( ! addr_matches_host(c$id$resp_h, notify_weak_keys) )
|
||||
return;
|
||||
|
||||
local key_length = |Ys| * 8; # key length in bits
|
||||
|
||||
if ( key_length < notify_minimal_key_length )
|
||||
NOTICE([$note=Weak_Key,
|
||||
$msg=fmt("Host uses weak DH parameters with %d key bits", key_length),
|
||||
$conn=c, $suppress_for=1day,
|
||||
$identifier=cat(c$id$orig_h, c$id$orig_p, key_length)
|
||||
]);
|
||||
|
||||
if ( notify_dh_length_shorter_cert_length &&
|
||||
c?$ssl && c$ssl?$cert_chain && |c$ssl$cert_chain| > 0 && c$ssl$cert_chain[0]?$x509 &&
|
||||
c$ssl$cert_chain[0]$x509?$certificate && c$ssl$cert_chain[0]$x509$certificate?$key_type &&
|
||||
(c$ssl$cert_chain[0]$x509$certificate$key_type == "rsa" ||
|
||||
c$ssl$cert_chain[0]$x509$certificate$key_type == "dsa" ))
|
||||
{
|
||||
if ( c$ssl$cert_chain[0]$x509$certificate?$key_length &&
|
||||
c$ssl$cert_chain[0]$x509$certificate$key_length > key_length )
|
||||
NOTICE([$note=Weak_Key,
|
||||
$msg=fmt("DH key length of %d bits is smaller certificate key length of %d bits",
|
||||
key_length, c$ssl$cert_chain[0]$x509$certificate$key_length),
|
||||
$conn=c, $suppress_for=1day,
|
||||
$identifier=cat(c$id$orig_h, c$id$orig_p)
|
||||
]);
|
||||
}
|
||||
}
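A short worked example of the key-length arithmetic above (hypothetical server):

# A server whose DH public value Ys is 96 bytes long uses a 96 * 8 = 768-bit
# DH key. With the default notify_minimal_key_length of 1024, 768 < 1024 and
# a Weak_Key notice is raised; if the server's certificate carries a 2048-bit
# RSA key, the second notice about the DH key being shorter than the
# certificate key fires as well.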
|
4
scripts/policy/tuning/json-logs.bro
Normal file
|
@ -0,0 +1,4 @@
|
|||
##! Loading this script will cause all logs to be written
|
||||
##! out as JSON by default.
|
||||
|
||||
redef LogAscii::use_json=T;
|
|
@ -55,6 +55,9 @@
|
|||
# This script enables SSL/TLS certificate validation.
|
||||
@load protocols/ssl/validate-certs
|
||||
|
||||
# This script prevents the logging of SSL CA certificates in x509.log
|
||||
@load protocols/ssl/log-hostcerts-only
|
||||
|
||||
# Uncomment the following line to check each SSL certificate hash against the ICSI
|
||||
# certificate notary service; see http://notary.icsi.berkeley.edu .
|
||||
# @load protocols/ssl/notary
|
||||
|
@ -78,3 +81,6 @@
|
|||
# Detect SHA1 sums in Team Cymru's Malware Hash Registry.
|
||||
@load frameworks/files/detect-MHR
|
||||
|
||||
# Uncomment the following line to enable detection of the heartbleed attack. Enabling
|
||||
# this might impact performance a bit.
|
||||
# @load policy/protocols/ssl/heartbleed
|
||||
|
|
|
@ -26,6 +26,7 @@
|
|||
@load frameworks/intel/seen/smtp.bro
|
||||
@load frameworks/intel/seen/ssl.bro
|
||||
@load frameworks/intel/seen/where-locations.bro
|
||||
@load frameworks/intel/seen/x509.bro
|
||||
@load frameworks/files/detect-MHR.bro
|
||||
@load frameworks/files/hash-all-files.bro
|
||||
@load frameworks/packet-filter/shunt.bro
|
||||
|
@ -82,17 +83,21 @@
|
|||
@load protocols/ssh/geo-data.bro
|
||||
@load protocols/ssh/interesting-hostnames.bro
|
||||
@load protocols/ssh/software.bro
|
||||
@load protocols/ssl/cert-hash.bro
|
||||
@load protocols/ssl/expiring-certs.bro
|
||||
@load protocols/ssl/extract-certs-pem.bro
|
||||
@load protocols/ssl/heartbleed.bro
|
||||
@load protocols/ssl/known-certs.bro
|
||||
@load protocols/ssl/log-hostcerts-only.bro
|
||||
#@load protocols/ssl/notary.bro
|
||||
@load protocols/ssl/validate-certs.bro
|
||||
@load protocols/ssl/validate-ocsp.bro
|
||||
@load protocols/ssl/weak-keys.bro
|
||||
@load tuning/__load__.bro
|
||||
@load tuning/defaults/__load__.bro
|
||||
@load tuning/defaults/extracted_file_limits.bro
|
||||
@load tuning/defaults/packet-fragments.bro
|
||||
@load tuning/defaults/warnings.bro
|
||||
@load tuning/json-logs.bro
|
||||
@load tuning/logs-to-elasticsearch.bro
|
||||
@load tuning/track-all-assets.bro
|
||||
|
||||
|
|
Some files were not shown because too many files have changed in this diff.