Merge branch 'bro-master'

This commit is contained in:
Aaron Eppert 2015-10-26 17:48:21 -04:00
commit 81d141959f
519 changed files with 10101 additions and 4537 deletions

CHANGES (552 changed lines)

@@ -1,4 +1,556 @@
2.4-188 | 2015-10-26 14:11:21 -0700
* Extending rexmit_inconsistency() event to receive an additional
parameter with the packet's TCP flags, if available. (Robin
Sommer)
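For illustration, a handler using the extended event might look like
this (the parameter names, in particular the trailing flags string, are
an assumption; see the event's documentation for the exact signature):

    event rexmit_inconsistency(c: connection, t1: string, t2: string, tcp_flags: string)
        {
        print fmt("inconsistent retransmission on %s, TCP flags: %s", c$id, tcp_flags);
        }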
2.4-187 | 2015-10-26 13:43:32 -0700
* Updating NEWS for new plugins. (Robin Sommer)
2.4-186 | 2015-10-23 15:07:06 -0700
* Removing pcap options for AF_PACKET support. Addresses BIT-1363.
(Robin Sommer)
* Correct a typo in controller.bro documentation. (Daniel Thayer)
* Extend SSL DPD signature to allow alert before server_hello.
(Johanna Amann)
* Make join_string_vec work with vectors containing empty elements.
(Johanna Amann)
* Fix support for HTTP CONNECT when server adds headers to response.
(Eric Karasuda).
* Load static CA list for validation tests too. (Johanna Amann)
* Remove cluster certificate validation script. (Johanna Amann)
* Fix a bug in diff-remove-x509-names canonifier. (Daniel Thayer)
* Fix test canonifiers in scripts/policy/protocols/ssl. (Daniel
Thayer)
2.4-169 | 2015-10-01 17:21:21 -0700
* Fixed parsing of V_ASN1_GENERALIZEDTIME timestamps in x509
certificates. (Yun Zheng Hu)
* Improve X509 end-of-string-check code. (Johanna Amann)
* Refactor X509 generalizedtime support and test. (Johanna Amann)
* Fix case of offset=-1 (EOF) for RAW reader. Addresses BIT-1479.
(Johanna Amann)
* Improve a number of test canonifiers. (Daniel Thayer)
* Remove unnecessary use of TEST_DIFF_CANONIFIER. (Daniel Thayer)
* Fixed some test canonifiers to read only from stdin
* Remove unused test canonifier scripts. (Daniel Thayer)
* A potpourri of updates and improvements across the documentation.
(Daniel Thayer)
* Add configure option to disable Broker Python bindings. Also
improve the configure summary output to more clearly show whether
or not Broker Python bindings will be built. (Daniel Thayer)
2.4-131 | 2015-09-11 12:16:39 -0700
* Add README.rst symlink. Addresses BIT-1413 (Vlad Grigorescu)
2.4-129 | 2015-09-11 11:56:04 -0700
* hash-all-files.bro depends on base/files/hash (Richard van den Berg)
* Make dns_max_queries redef-able, and bump default to 25. Addresses
BIT-1460 (Vlad Grigorescu)
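For example, to allow more DNS queries per connection than the new
default of 25 (the value below is illustrative only):

    redef dns_max_queries = 50;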
2.4-125 | 2015-09-03 20:10:36 -0700
* Move SIP analyzer to flowunit instead of datagram. Addresses
BIT-1458 (Vlad Grigorescu)
2.4-122 | 2015-08-31 14:39:41 -0700
* Add a number of out-of-bound checks to layer 2 code. Addresses
BIT-1463 (Johanna Amann)
* Fix error in 2.4 release notes regarding SSH events. (Robin
Sommer)
2.4-118 | 2015-08-31 10:55:29 -0700
* Fix FreeBSD build errors (Johanna Amann)
2.4-117 | 2015-08-30 22:16:24 -0700
* Fix initialization of a pointer in RDP analyzer. (Daniel
Thayer/Robin Sommer)
2.4-115 | 2015-08-30 21:57:35 -0700
* Enable Bro to leverage packet fanout mode on Linux. (Kris
Nielander).
## Toggle whether to do packet fanout (Linux-only).
const Pcap::packet_fanout_enable = F &redef;
## If packet fanout is enabled, the id to use for it. This should be shared amongst
## worker processes processing the same socket.
const Pcap::packet_fanout_id = 0 &redef;
## If packet fanout is enabled, whether packets are to be defragmented before
## fanout is applied.
const Pcap::packet_fanout_defrag = T &redef;
* Allow libpcap buffer size to be set via configuration. (Kris Nielander)
## Number of Mbytes to provide as buffer space when capturing from live
## interfaces.
const Pcap::bufsize = 128 &redef;
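Put together, a capture configuration might look like the following
sketch (values illustrative; the fanout options are Linux-only):

    redef Pcap::bufsize = 256;               # MBytes of libpcap buffer space
    redef Pcap::packet_fanout_enable = T;
    redef Pcap::packet_fanout_id = 42;       # shared by workers on one socket
    redef Pcap::packet_fanout_defrag = T;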
* Move the pcap-related script-level identifiers into the new Pcap
namespace. (Robin Sommer)
snaplen -> Pcap::snaplen
precompile_pcap_filter() -> Pcap::precompile_pcap_filter()
install_pcap_filter() -> Pcap::install_pcap_filter()
pcap_error() -> Pcap::pcap_error()
2.4-108 | 2015-08-30 20:14:31 -0700
* Update Base64 decoding. (Jan Grashoefer)
- A new built-in function, decode_base64_conn() for Base64
decoding. It works like decode_base64() but receives an
additional connection argument that will be used for
reporting decoding errors into weird.log (instead of
reporter.log).
- FTP, POP3, and HTTP analyzers now likewise log Base64
decoding errors to weird.log.
- The built-in functions decode_base64_custom() and
encode_base64_custom() are now deprecated. Their
functionality is provided directly by decode_base64() and
encode_base64(), which take an optional parameter to change
the Base64 alphabet.
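For illustration, a minimal sketch of the new and changed functions
(the conn_id first argument to decode_base64_conn() is an assumption;
check the BIF documentation for the exact signatures):

    event connection_established(c: connection)
        {
        local plain = decode_base64("YnJv");              # standard alphabet
        local plain2 = decode_base64_conn(c$id, "YnJv");  # errors go to weird.log
        print plain, plain2;
        }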
* Fix potential crash if TCP header was captured incompletely.
(Robin Sommer)
2.4-103 | 2015-08-29 10:51:55 -0700
* Make ASN.1 date/time parsing more robust. (Johanna Amann)
* Be more permissive on what characters we accept as an unquoted
multipart boundary. Addresses BIT-1459. (Johanna Amann)
2.4-99 | 2015-08-25 07:56:57 -0700
* Add ``Q`` and update ``I`` documentation for connection history
field. Addresses BIT-1466. (Vlad Grigorescu)
2.4-96 | 2015-08-21 17:37:56 -0700
* Update SIP analyzer. (balintm)
- Allow space on both sides of ':'.
- Require CR/LF after request/reply line.
2.4-94 | 2015-08-21 17:31:32 -0700
* Add file type detection support for video/MP2T. (Mike Freemon)
2.4-93 | 2015-08-21 17:23:39 -0700
* Make plugin install honor DESTDIR= convention. (Jeff Barber)
2.4-89 | 2015-08-18 07:53:36 -0700
* Fix diff-canonifier-external to use basename of input file.
(Daniel Thayer)
2.4-87 | 2015-08-14 08:34:41 -0700
* Removing the yielding_teredo_decapsulation option. (Robin Sommer)
2.4-86 | 2015-08-12 17:02:24 -0700
* Make Teredo DPD signature more precise. (Martina Balint)
2.4-84 | 2015-08-10 14:44:39 -0700
* Add hook 'HookSetupAnalyzerTree' to allow plugins access to a
connection's initial analyzer tree for customization. (James
Swaro)
* Plugins now look for a file "__preload__.bro" in the top-level
script directory. If found, they load it first, before any scripts
defining BiF elements. This can be used to define types that the
BiFs already depend on (like a custom type for an event argument).
(Robin Sommer)
2.4-81 | 2015-08-08 07:38:42 -0700
* Fix a test that is failing very frequently. (Daniel Thayer)
2.4-78 | 2015-08-06 22:25:19 -0400
* Remove build dependency on Perl (now requiring Python instead).
(Daniel Thayer)
* CID 1314754: Fixing unreachable code in RSH analyzer. (Robin
Sommer)
* CID 1312752: Add comment to mark 'case' fallthrough as ok. (Robin
Sommer)
* CID 1312751: Removing redundant assignment. (Robin Sommer)
2.4-73 | 2015-07-31 08:53:49 -0700
* BIT-1429: SMTP logs now include CC: addresses. (Albert Zaharovits)
2.4-70 | 2015-07-30 07:23:44 -0700
* Updated detection of Flash and AdobeAIR. (Jan Grashoefer)
* Adding tests for Flash version parsing and browser plugin
detection. (Robin Sommer)
2.4-63 | 2015-07-28 12:26:37 -0700
* Updating submodule(s).
2.4-61 | 2015-07-28 12:13:39 -0700
* Renaming config.h to bro-config.h. (Robin Sommer)
2.4-58 | 2015-07-24 15:06:07 -0700
* Add script protocols/conn/vlan-logging.bro to record VLAN data in
conn.log. (Aaron Brown)
* Add field "vlan" and "inner_vlan" to connection record. (Aaron
Brown)
* Save the inner vlan in the Packet object for Q-in-Q setups. (Aaron
Brown)
* Increasing plugin API version for recent packet source changes.
(Robin Sommer)
* Slightly earlier protocol confirmation for POP3. (Johanna Amann)
2.4-46 | 2015-07-22 10:56:40 -0500
* Fix broker python bindings install location to track --prefix.
(Jon Siwek)
2.4-45 | 2015-07-21 15:19:43 -0700
* Enabling Broker by default. This means CAF is now a required
dependency, although for now at least, there's still a switch
--disable-broker to turn it off.
* Requiring a C++11 compiler, and turning on C++11 support. (Robin
Sommer)
* Tweaking the listing of hooks in "bro -NN" for consistency. (Robin
Sommer)
2.4-41 | 2015-07-21 08:35:17 -0700
* Fixing compiler warning. (Robin Sommer)
* Updates to IANA TLS registry. (Johanna Amann)
2.4-38 | 2015-07-20 15:30:35 -0700
* Refactor code to use a common Packet type throughout. (Jeff
Barber/Robin Sommer)
* Extend parsing of layer 2 and keep track of the layer 3 protocol. (Jeff Barber)
* Add a raw_packet() event that is generated for all packets and
includes layer 2 information. (Jeff Barber)
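For illustration, a minimal handler (the raw_pkt_hdr parameter type
is an assumption based on the event's documentation):

    event raw_packet(p: raw_pkt_hdr)
        {
        # Layer 2 data comes with every packet; IP fields are optional.
        if ( p?$ip )
            print p$ip$src;
        }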
2.4-27 | 2015-07-15 13:31:49 -0700
* Fix race condition in intel test. (Johanna Amann)
2.4-24 | 2015-07-14 08:04:11 -0700
* Correct Perl package name on FreeBSD in documentation. (Justin Azoff)
* Adding an environment variable to BTest configuration for external
scripts. (Robin Sommer)
2.4-20 | 2015-07-03 10:40:21 -0700
* Adding a weird for when truncated packets lead TCP reassembly to
ignore content. (Robin Sommer)
2.4-19 | 2015-07-03 09:04:54 -0700
* A set of tests exercising IP defragmentation and TCP reassembly.
(Robin Sommer)
2.4-17 | 2015-06-28 13:02:41 -0700
* BIT-1314: Add detection for Quantum Insert attacks. The TCP
reassembler can now keep a history of old TCP segments using the
tcp_max_old_segments option. An overlapping segment with different
data will then generate an rexmit_inconsistency event. The default
for tcp_max_old_segments is zero, which disables any additional
buffering. (Yun Zheng Hu/Robin Sommer)
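Enabling the history buffer is a one-line redef (value illustrative):

    redef tcp_max_old_segments = 1024;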
2.4-14 | 2015-06-28 12:30:12 -0700
* BIT-1400: Allow '<' and '>' in MIME multipart boundaries. The spec
doesn't actually seem to permit these, but they seem to occur in
the wild. (Jon Siwek)
2.4-12 | 2015-06-28 12:21:11 -0700
* BIT-1399: Trying to decompress deflated HTTP content even when
zlib headers are missing. (Seth Hall)
2.4-10 | 2015-06-25 07:11:17 -0700
* Correct a name used in a header identifier (Justin Azoff)
2.4-8 | 2015-06-24 07:50:50 -0700
* Restore the --load-seeds cmd-line option and enable the short
options -G/-H for --load-seeds/--save-seeds. (Daniel Thayer)
2.4-6 | 2015-06-19 16:26:40 -0700
* Generate protocol confirmations for Modbus, making it appear as a
confirmed service in conn.log. (Seth Hall)
* Put command line options in alphabetical order. (Daniel Thayer)
* Removing dead code for the no-longer-supported -G switch. (Robin
Sommer)
2.4 | 2015-06-09 07:30:53 -0700
* Release 2.4.
* Fixing tiny thing in NEWS. (Robin Sommer)
2.4-beta-42 | 2015-06-08 09:41:39 -0700
* Fix reporter errors with GridFTP traffic. (Robin Sommer)
2.4-beta-40 | 2015-06-06 08:20:52 -0700
* PE Analyzer: Change how we calculate the rva_table size. (Vlad Grigorescu)
2.4-beta-39 | 2015-06-05 09:09:44 -0500
* Fix a unit test to check for Broker requirement. (Jon Siwek)
2.4-beta-38 | 2015-06-04 14:48:37 -0700
* Test for Broker termination. (Robin Sommer)
2.4-beta-37 | 2015-06-04 07:53:52 -0700
* BIT-1408: Improve I/O loop and Broker IOSource. (Jon Siwek)
2.4-beta-34 | 2015-06-02 10:37:22 -0700
* Add signature support for F4M files. (Seth Hall)
2.4-beta-32 | 2015-06-02 09:43:31 -0700
* A larger set of documentation updates, fixes, and extensions.
(Daniel Thayer)
2.4-beta-14 | 2015-06-02 09:16:44 -0700
* Add memleak btest for attachments over SMTP. (Vlad Grigorescu)
* BIT-1410: Fix flipped tx_hosts and rx_hosts in files.log. Reported
by Ali Hadi. (Vlad Grigorescu)
* Updating the Mozilla root certs. (Seth Hall)
* Updates for the urls.bro script. Fixes BIT-1404. (Seth Hall)
2.4-beta-6 | 2015-05-28 13:20:44 -0700
* Updating submodule(s).
2.4-beta-2 | 2015-05-26 08:58:37 -0700
* Fix segfault when DNS is not available. Addresses BIT-1387. (Frank
Meier and Robin Sommer)
2.4-beta | 2015-05-07 21:55:31 -0700
* Release 2.4-beta.
* Update local-compat.test (Johanna Amann)
2.3-913 | 2015-05-06 09:58:00 -0700
* Add /sbin to PATH in btest.cfg and remove duplicate default_path.
(Daniel Thayer)
2.3-911 | 2015-05-04 09:58:09 -0700
* Update usage output and list of command line options. (Daniel
Thayer)
* Fix to ssh/geo-data.bro for unset directions. (Vlad Grigorescu)
* Improve SIP logging and remove reporter messages. (Seth Hall)
2.3-905 | 2015-04-29 17:01:30 -0700
* Improve SIP logging and remove reporter messages. (Seth Hall)
2.3-903 | 2015-04-27 17:27:59 -0700
* BIT-1350: Improve record coercion type checking. (Jon Siwek)
2.3-901 | 2015-04-27 17:25:27 -0700
* BIT-1384: Remove -O (optimize scripts) command-line option, which
hadn't been working for a while already. (Jon Siwek)
2.3-899 | 2015-04-27 17:22:42 -0700
* Fix the -J/--set-seed cmd-line option. (Daniel Thayer)
* Remove unused -l, -L, and -Z cmd-line options. (Daniel Thayer)
2.3-892 | 2015-04-27 08:22:22 -0700
* Fix typos in the Broker BIF documentation. (Daniel Thayer)
* Update installation instructions and remove outdated references.
(Johanna Amann)
* Easier support for systems with tcmalloc_minimal installed. (Seth
Hall)
2.3-884 | 2015-04-23 12:30:15 -0500
* Fix some outdated documentation unit tests. (Jon Siwek)
2.3-883 | 2015-04-23 07:10:36 -0700
* Fix -N option to work with builtin plugins as well. (Robin Sommer)
2.3-882 | 2015-04-23 06:59:40 -0700
* Add missing .pac dependencies for some binpac analyzer targets.
(Jon Siwek)
2.3-879 | 2015-04-22 10:38:07 -0500
* Fix compile errors. (Jon Siwek)
2.3-878 | 2015-04-22 08:21:23 -0700
* Fix another compiler warning in DTLS. (Johanna Amann)
2.3-877 | 2015-04-21 20:14:16 -0700
* Adding missing include. (Robin Sommer)
2.3-876 | 2015-04-21 16:40:10 -0700
* Attempt at fixing a potential std::length_error exception in RDP
analyzer. Addresses BIT-1337. (Robin Sommer)
* Fixing compile problem caused by overeager factorization. (Robin
Sommer)
2.3-874 | 2015-04-21 16:09:20 -0700
* Change details of escaping when logging/printing. (Seth Hall/Robin
Sommer)
- Log files now escape non-printable characters consistently
as "\xXX". Furthermore, backslashes are escaped as "\\",
making the representation fully reversible.
- When escaping via script-level functions (escape_string,
clean), we likewise now escape consistently with "\xXX" and
"\\".
- There's no "alternative" output style anymore, i.e., fmt()
'%A' qualifier is gone.
Addresses BIT-1333.
* Remove several BroString escaping methods that are no longer
useful. (Seth Hall)
2.3-864 | 2015-04-21 15:24:02 -0700
* A SIP protocol analyzer. (Vlad Grigorescu)
Activity gets logged into sip.log. It generates the following
events:
event sip_request(c: connection, method: string, original_URI: string, version: string);
event sip_reply(c: connection, version: string, code: count, reason: string);
event sip_header(c: connection, is_orig: bool, name: string, value: string);
event sip_all_headers(c: connection, is_orig: bool, hlist: mime_header_list);
event sip_begin_entity(c: connection, is_orig: bool);
event sip_end_entity(c: connection, is_orig: bool);
The analyzer currently supports SIP over UDP.
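For illustration, a minimal handler built on the signatures above:

    event sip_request(c: connection, method: string, original_URI: string, version: string)
        {
        if ( method == "INVITE" )
            print fmt("SIP INVITE from %s for %s", c$id$orig_h, original_URI);
        }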
* BIT-1343: Factor common ASN.1 code from RDP, SNMP, and Kerberos
analyzers. (Jon Siwek/Robin Sommer)
2.3-838 | 2015-04-21 13:40:12 -0700
* BIT-1373: Fix vector index assignment reference count bug. (Jon Siwek)
2.3-836 | 2015-04-21 13:37:31 -0700
* Fix SSH direction field being unset. Addresses BIT-1365. (Vlad
Grigorescu)
2.3-835 | 2015-04-21 16:36:00 -0500
* Clarify Broker examples. (Jon Siwek)
2.3-833 | 2015-04-21 12:38:32 -0700
* A Kerberos protocol analyzer. (Vlad Grigorescu)
Activity gets logged into kerberos.log. It generates the following
events:
event krb_as_request(c: connection, msg: KRB::KDC_Request);
event krb_as_response(c: connection, msg: KRB::KDC_Response);
event krb_tgs_request(c: connection, msg: KRB::KDC_Request);
event krb_tgs_response(c: connection, msg: KRB::KDC_Response);
event krb_ap_request(c: connection, ticket: KRB::Ticket, opts: KRB::AP_Options);
event krb_priv(c: connection, is_orig: bool);
event krb_safe(c: connection, is_orig: bool, msg: KRB::SAFE_Msg);
event krb_cred(c: connection, is_orig: bool, tickets: KRB::Ticket_Vector);
event krb_error(c: connection, msg: KRB::Error_Msg);
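Similarly, a minimal handler sketch (it accesses no fields of
KRB::Error_Msg, since that record's layout isn't shown here):

    event krb_error(c: connection, msg: KRB::Error_Msg)
        {
        print fmt("Kerberos error on %s", c$id);
        }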
2.3-793 | 2015-04-20 20:51:00 -0700
* Add decoding of PROXY-AUTHORIZATION header to HTTP analyzer,

CMakeLists.txt

@@ -61,7 +61,7 @@ if (NOT SED_EXE)
     endif ()
 endif ()
 
-FindRequiredPackage(Perl)
+FindRequiredPackage(PythonInterp)
 FindRequiredPackage(FLEX)
 FindRequiredPackage(BISON)
 FindRequiredPackage(PCAP)
@@ -113,7 +113,7 @@ if (NOT DISABLE_PERFTOOLS)
     find_package(GooglePerftools)
 endif ()
 
-if (GOOGLEPERFTOOLS_FOUND)
+if (GOOGLEPERFTOOLS_FOUND OR TCMALLOC_FOUND)
     set(HAVE_PERFTOOLS true)
     # Non-Linux systems may not be well-supported by gperftools, so
     # require explicit request from user to enable it in that case.
@@ -165,22 +165,19 @@ include(PCAPTests)
 include(OpenSSLTests)
 include(CheckNameserCompat)
 include(GetArchitecture)
+include(RequireCXX11)
 
 # Tell the plugin code that we're building as part of the main tree.
 set(BRO_PLUGIN_INTERNAL_BUILD true CACHE INTERNAL "" FORCE)
 
-configure_file(${CMAKE_CURRENT_SOURCE_DIR}/config.h.in
-               ${CMAKE_CURRENT_BINARY_DIR}/config.h)
+configure_file(${CMAKE_CURRENT_SOURCE_DIR}/bro-config.h.in
+               ${CMAKE_CURRENT_BINARY_DIR}/bro-config.h)
 
 include_directories(${CMAKE_CURRENT_BINARY_DIR})
 
 ########################################################################
 ## Recurse on sub-directories
 
-if ( ENABLE_CXX11 )
-  include(RequireCXX11)
-endif ()
-
 if ( ENABLE_BROKER )
     add_subdirectory(aux/broker)
     set(brodeps ${brodeps} broker)
@@ -236,6 +233,7 @@ message(
     "\nCPP: ${CMAKE_CXX_COMPILER}"
     "\n"
     "\nBroker: ${ENABLE_BROKER}"
+    "\nBroker Python: ${BROKER_PYTHON_BINDINGS}"
     "\nBroccoli: ${INSTALL_BROCCOLI}"
     "\nBroctl: ${INSTALL_BROCTL}"
     "\nAux. Tools: ${INSTALL_AUX_TOOLS}"

COPYING

@@ -1,4 +1,4 @@
-Copyright (c) 1995-2013, The Regents of the University of California
+Copyright (c) 1995-2015, The Regents of the University of California
 through the Lawrence Berkeley National Laboratory and the
 International Computer Science Institute. All rights reserved.

NEWS (138 changed lines)

@@ -4,19 +4,80 @@ release. For an exhaustive list of changes, see the ``CHANGES`` file
 (note that submodules, such as BroControl and Broccoli, come with
 their own ``CHANGES``.)
 
-Bro 2.4 (in progress)
+Bro 2.5 (in progress)
 =====================
 
+New Dependencies
+----------------
+
+- Bro now requires a compiler with C++11 support for building the
+  source code.
+
+- Bro now requires the C++ Actor Framework, CAF, which must be
+  installed first. See http://actor-framework.org.
+
+- Bro now requires Python instead of Perl to compile the source code.
+
+- The pcap buffer size can be set through the new option Pcap::bufsize.
+
+New Functionality
+-----------------
+
+- Bro now tracks VLAN IDs. To record them inside the connection log,
+  load protocols/conn/vlan-logging.bro.
+
+- A new per-packet event raw_packet() provides access to layer 2
+  information. Use with care; generating events per packet is
+  expensive.
+
+- A new built-in function, decode_base64_conn(), for Base64 decoding.
+  It works like decode_base64() but receives an additional connection
+  argument that will be used for reporting decoding errors into weird.log
+  (instead of reporter.log).
+
+- New Bro plugins in aux/plugins:
+
+  - af_packet: Native AF_PACKET support.
+  - myricom: Native Myricom SNF v3 support.
+  - pf_ring: Native PF_RING support.
+  - redis: An experimental log writer for Redis.
+
+Changed Functionality
+---------------------
+
+- Some script-level identifiers have changed their names:
+
+  snaplen -> Pcap::snaplen
+  precompile_pcap_filter() -> Pcap::precompile_pcap_filter()
+  install_pcap_filter() -> Pcap::install_pcap_filter()
+  pcap_error() -> Pcap::pcap_error()
+
+Deprecated Functionality
+------------------------
+
+- The built-in functions decode_base64_custom() and
+  encode_base64_custom() are no longer needed and will be removed
+  in the future. Their functionality is now provided directly by
+  decode_base64() and encode_base64(), which take an optional
+  parameter to change the Base64 alphabet.
+
+Bro 2.4
+=======
+
 New Functionality
 -----------------
 
 - Bro now has support for external plugins that can extend its core
   functionality, like protocol/file analysis, via shared libraries.
   Plugins can be developed and distributed externally, and will be
-  pulled in dynamically at startup. Currently, a plugin can provide
-  custom protocol analyzers, file analyzers, log writers, input
-  readers, packet sources and dumpers, and new built-in functions. A
-  plugin can furthermore hook into Bro's processing at a number of
+  pulled in dynamically at startup (the environment variables
+  BRO_PLUGIN_PATH and BRO_PLUGIN_ACTIVATE can be used to specify the
+  locations and names of plugins to activate). Currently, a plugin
+  can provide custom protocol analyzers, file analyzers, log writers,
+  input readers, packet sources and dumpers, and new built-in functions.
+  A plugin can furthermore hook into Bro's processing at a number of
   places to add custom logic.
 
 See https://www.bro.org/sphinx-git/devel/plugins.html for more
@@ -27,21 +88,35 @@ New Functionality
 - Bro now parses DTLS traffic. Activity gets logged into ssl.log.
 
+- Bro now has support for the Kerberos KRB5 protocol over TCP and
+  UDP. Activity gets logged into kerberos.log.
+
 - Bro now has an RDP analyzer. Activity gets logged into rdp.log.
 
 - Bro now has a file analyzer for Portable Executables. Activity gets
   logged into pe.log.
 
-- Bro now features a completely rewritten, enhanced SSH analyzer, with
-  a set of addedd events being generated. A lot more information about
-  SSH sessions is logged. The analyzer is able to determine if logins
-  failed or succeeded in most circumstances.
+- Bro now has support for the SIP protocol over UDP. Activity gets
+  logged into sip.log.
+
+- Bro now features a completely rewritten, enhanced SSH analyzer. The
+  new analyzer is able to determine if logins failed or succeeded in
+  most circumstances, logs a lot more information about SSH
+  sessions, supports v1, and introduces the intelligence type
+  ``Intel::PUBKEY_HASH`` and location ``SSH::IN_SERVER_HOST_KEY``. The
+  analyzer also generates a set of additional events
+  (``ssh_auth_successful``, ``ssh_auth_failed``, ``ssh_capabilities``,
+  ``ssh2_server_host_key``, ``ssh1_server_host_key``,
+  ``ssh_encrypted_packet``, ``ssh2_dh_server_params``,
+  ``ssh2_gss_error``, ``ssh2_ecc_key``). See next section for
+  incompatible SSH changes.
 
 - Bro's file analysis now supports reassembly of files that are not
   transferred/seen sequentially. The default file reassembly buffer
   size is set with the ``Files::reassembly_buffer_size`` variable.
 
-- Bro's file type identification has been greatly improved.
+- Bro's file type identification has been greatly improved (new file
+  types, bug fixes, and performance improvements).
 
 - Bro's scripting language now has a ``while`` statement::
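    # Minimal illustration of the new statement (a sketch; the original
    # example that follows this line is not included in this hunk):
    local n = 3;
    while ( n > 0 )
        {
        print n;
        --n;
        }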
@@ -67,7 +142,7 @@ New Functionality
   C++11 compiler (e.g. GCC 4.8+ or Clang 3.3+).
 
   Broker will become a mandatory dependency in future Bro versions and
-  replace the current communcation and serialization system.
+  replace the current communication and serialization system.
 
 - Add --enable-c++11 configure flag to compile Bro's source code in
   C++11 mode with a corresponding compiler. Note that 2.4 will be the
@@ -75,10 +150,10 @@ New Functionality
 - The SSL analysis now alerts when encountering SSL connections with
   old protocol versions or unsafe cipher suites. It also gained
-  extended reporting of weak keys, caching of already valdidated
-  certificates, full support TLS record defragmentation. SSL generally
+  extended reporting of weak keys, caching of already validated
+  certificates, and full support for TLS record defragmentation. SSL generally
   became much more robust and added several fields to ssl.log (while
-  removing some other).
+  removing some others).
 
 - A new icmp_sent_payload event provides access to ICMP payload.
@@ -91,6 +166,9 @@ New Functionality
   threshold in terms of packets or bytes. The primary API for that
   functionality is in base/protocols/conn/thresholds.bro.
 
+- There is a new command-line option -Q/--time that prints Bro's execution
+  time and memory usage to stderr.
+
 - BroControl now has a new command "deploy" which is equivalent to running
   the "check", "install", "stop", and "start" commands (in that order).
@@ -139,10 +217,22 @@ Changed Functionality
   reassembly for non-sequential files, "offset" can be obtained
   with other information already available -- adding together
   ``seen_bytes`` and ``missed_bytes`` fields of the ``fa_file``
-  record gives the how many bytes have been written so far (i.e.
+  record gives how many bytes have been written so far (i.e.
   the "offset").
 
-- has_valid_octets: now uses a string_vec parameter instead of
+- The SSH changes come with a few incompatibilities. The following
+  events have been renamed:
+
+  * ``SSH::heuristic_failed_login`` to ``ssh_auth_failed``
+  * ``SSH::heuristic_successful_login`` to ``ssh_auth_successful``
+
+  The ``SSH::Info`` status field has been removed and replaced with
+  the ``auth_success`` field. This field has been changed from a
+  string that was previously ``success``, ``failure``, or
+  ``undetermined`` to a boolean that is ``T``, ``F``, or unset.
+
+- The has_valid_octets function now uses a string_vec parameter instead of
   string_array.
 
 - conn.log gained a new field local_resp that works like local_orig,
@@ -186,12 +276,24 @@ Changed Functionality
   to stdout. Error messages are still sent to stderr, however.
 
 - The capability of processing NetFlow input has been removed for the
-  time being.
+  time being. Therefore, the -y/--flowfile and -Y/--netflow command-line
+  options have been removed, and the netflow_v5_header and netflow_v5_record
+  events have been removed.
+
+- The -D/--dfa-size command-line option has been removed.
+
+- The -L/--rule-benchmark command-line option has been removed.
+
+- The -O/--optimize command-line option has been removed.
 
 - The deprecated fields "hot" and "addl" have been removed from the
   connection record. Likewise, the functions append_addl() and
   append_addl_marker() have been removed.
 
+- Log files now escape non-printable characters consistently as "\xXX".
+  Furthermore, backslashes are escaped as "\\", making the
+  representation fully reversible.
+
 Deprecated Functionality
 ------------------------
@@ -201,7 +303,7 @@ Deprecated Functionality
   concatenation/extraction functions. Note that the new functions use
   0-based indexing, rather than 1-based.
 
-The full list of now deprecation functions is:
+The full list of now deprecated functions is:
 
 * split: use split_string instead.

README.rst (new symbolic link)

@@ -0,0 +1 @@
+README

VERSION

@@ -1 +1 @@
-2.3-793
+2.4-188

@@ -1 +1 @@
-Subproject commit 544330932e7cd4615d6d19f63907e8aa2acebb9e
+Subproject commit 214294c502d377bb7bf511eac8c43608e54c875a

@@ -1 +1 @@
-Subproject commit 462e300bf9c37dcc39b70a4c2d89d19f7351c804
+Subproject commit 4e0d2bff4b2c287f66186c3654ef784bb0748d11

@@ -1 +1 @@
-Subproject commit 45276b39a946d70095c983753cd321ad07dcf285
+Subproject commit 80468000859bcb7c3784c69280888fcfe89d8922

@@ -1 +1 @@
-Subproject commit d52d184bc9aa976ee465914e95ff5c0274a18216
+Subproject commit 921b0abcb967666d8349c0c6c2bb8e41e1300579

@@ -1 +1 @@
-Subproject commit a9d74d91333b403be8d8c01f5aadb03a84968e9c
+Subproject commit e7da54a3f40e71ca9020f9846256f60c0b885963

@@ -1 +1 @@
-Subproject commit d69df586c91531db0c3abe838b10a429dda4fa87
+Subproject commit ce1d474859cc8a0f39d5eaf69fb1bb56eb1a5161

@@ -1 +1 @@
-Subproject commit 7a14085394e54a950e477eb4fafb3827ff8dbdc3
+Subproject commit 4354b330d914a50f99da05cc78f830b5e86bd64e

cmake (2 changed lines)

@@ -1 +1 @@
-Subproject commit 2fd35ab6a6245a005828c32f0aa87eb21698c054
+Subproject commit 843cdf6a91f06e5407bffbc79a343bff3cf4c81f

configure (48 changed lines)

@@ -41,14 +41,13 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
   --enable-perftools-debug use Google's perftools for debugging
   --enable-jemalloc link against jemalloc
   --enable-ruby build ruby bindings for broccoli (deprecated)
-  --enable-c++11 build using the C++11 standard
-  --enable-broker enable use of the Broker communication library
-                  (requires C++ Actor Framework and C++11)
+  --disable-broker disable use of the Broker communication library
   --disable-broccoli don't build or install the Broccoli library
   --disable-broctl don't install Broctl
   --disable-auxtools don't build or install auxiliary tools
   --disable-perftools don't try to build with Google Perftools
   --disable-python don't try to build python bindings for broccoli
+  --disable-pybroker don't try to build python bindings for broker
@@ -57,7 +56,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
   --with-binpac=PATH path to BinPAC install root
   --with-flex=PATH path to flex executable
   --with-bison=PATH path to bison executable
-  --with-perl=PATH path to perl executable
+  --with-python=PATH path to Python executable
 
   --with-libcaf=PATH path to C++ Actor Framework installation
                      (a required Broker dependency)
@@ -65,7 +64,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
   --with-geoip=PATH path to the libGeoIP install root
   --with-perftools=PATH path to Google Perftools install root
   --with-jemalloc=PATH path to jemalloc install root
-  --with-python=PATH path to Python interpreter
   --with-python-lib=PATH path to libpython
   --with-python-inc=PATH path to Python headers
   --with-ruby=PATH path to ruby interpreter
@@ -95,7 +93,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
 sourcedir="$( cd "$( dirname "$0" )" && pwd )"
 
 # Function to append a CMake cache entry definition to the
-# CMakeCacheEntries variable
+# CMakeCacheEntries variable.
 # $1 is the cache entry variable name
 # $2 is the cache entry variable type
 # $3 is the cache entry variable value
@@ -103,6 +101,17 @@ append_cache_entry () {
     CMakeCacheEntries="$CMakeCacheEntries -D $1:$2=$3"
 }
 
+# Function to remove a CMake cache entry definition from the
+# CMakeCacheEntries variable
+# $1 is the cache entry variable name
+remove_cache_entry () {
+    CMakeCacheEntries="$CMakeCacheEntries -U $1"
+
+    # Even with -U, cmake still warns by default if
+    # added previously with -D.
+    CMakeCacheEntries="$CMakeCacheEntries --no-warn-unused-cli"
+}
+
 # set defaults
 builddir=build
 prefix=/usr/local/bro
@@ -112,10 +121,13 @@ append_cache_entry BRO_ROOT_DIR PATH $prefix
 append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl
 append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro
 append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc
+append_cache_entry BROKER_PYTHON_HOME PATH $prefix
+append_cache_entry BROKER_PYTHON_BINDINGS BOOL false
 append_cache_entry ENABLE_DEBUG BOOL false
 append_cache_entry ENABLE_PERFTOOLS BOOL false
 append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false
 append_cache_entry ENABLE_JEMALLOC BOOL false
+append_cache_entry ENABLE_BROKER BOOL true
 append_cache_entry BinPAC_SKIP_INSTALL BOOL true
 append_cache_entry BUILD_SHARED_LIBS BOOL true
 append_cache_entry INSTALL_AUX_TOOLS BOOL true
@@ -150,8 +162,8 @@ while [ $# -ne 0 ]; do
             append_cache_entry BRO_ROOT_DIR PATH $optarg
             append_cache_entry PY_MOD_INSTALL_DIR PATH $optarg/lib/broctl
-            if [ -n "$user_enabled_broker" ]; then
-                append_cache_entry BROKER_PYTHON_HOME PATH $prefix
+            if [ -z "$user_disabled_broker" ]; then
+                append_cache_entry BROKER_PYTHON_HOME PATH $optarg
             fi
             ;;
         --scriptdir=*)
@@ -187,14 +199,10 @@ while [ $# -ne 0 ]; do
         --enable-jemalloc)
             append_cache_entry ENABLE_JEMALLOC BOOL true
             ;;
-        --enable-c++11)
-            append_cache_entry ENABLE_CXX11 BOOL true
-            ;;
-        --enable-broker)
-            append_cache_entry ENABLE_CXX11 BOOL true
-            append_cache_entry ENABLE_BROKER BOOL true
-            append_cache_entry BROKER_PYTHON_HOME PATH $prefix
-            user_enabled_broker="true"
+        --disable-broker)
+            append_cache_entry ENABLE_BROKER BOOL false
+            remove_cache_entry BROKER_PYTHON_HOME
+            user_disabled_broker="true"
             ;;
         --disable-broccoli)
             append_cache_entry INSTALL_BROCCOLI BOOL false
@@ -211,6 +219,9 @@ while [ $# -ne 0 ]; do
         --disable-python)
             append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true
             ;;
+        --disable-pybroker)
+            append_cache_entry DISABLE_PYBROKER BOOL true
+            ;;
         --enable-ruby)
             append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
             ;;
@@ -232,9 +243,6 @@ while [ $# -ne 0 ]; do
         --with-bison=*)
             append_cache_entry BISON_EXECUTABLE PATH $optarg
             ;;
-        --with-perl=*)
-            append_cache_entry PERL_EXECUTABLE PATH $optarg
-            ;;
         --with-geoip=*)
             append_cache_entry LibGeoIP_ROOT_DIR PATH $optarg
             ;;

@@ -0,0 +1 @@
+../../../aux/plugins/README

@@ -0,0 +1 @@
+../../../../aux/plugins/dataseries/README

@@ -0,0 +1 @@
+../../../../aux/plugins/elasticsearch/README

@@ -0,0 +1 @@
+../../../../aux/plugins/netmap/README

@@ -0,0 +1 @@
+../../../../aux/plugins/pf_ring/README

@@ -0,0 +1 @@
+../../../../aux/plugins/redis/README

@@ -21,6 +21,7 @@ current, independent component releases.
 
    Broker - User Manual <broker/broker-manual.rst>
    BroControl - Interactive Bro management shell <broctl/README>
    Bro-Aux - Small auxiliary tools for Bro <bro-aux/README>
+   Bro-Plugins - A collection of plugins for Bro <bro-plugins/README>
    BTest - A unit testing framework <btest/README>
    Capstats - Command-line packet statistic tool <capstats/README>
    PySubnetTree - Python module for CIDR lookups <pysubnettree/README>

@@ -3,7 +3,7 @@
 Writing Bro Plugins
 ===================
 
-Bro internally provides plugin API that enables extending
+Bro internally provides a plugin API that enables extending
 the system dynamically, without modifying the core code base. That way
 custom code remains self-contained and can be maintained, compiled,
 and installed independently. Currently, plugins can add the following
@@ -32,7 +32,7 @@ Quick Start
 ===========
 
 Writing a basic plugin is quite straight-forward as long as one
-follows a few conventions. In the following we walk a simple example
+follows a few conventions. In the following we create a simple example
 plugin that adds a new built-in function (bif) to Bro: we'll add
 ``rot13(s: string) : string``, a function that rotates every character
 in a string by 13 places.
@@ -81,7 +81,7 @@ The syntax of this file is just like any other ``*.bif`` file; we
 won't go into it here.
 
 Now we can already compile our plugin, we just need to tell the
-configure script that ``init-plugin`` put in place where the Bro
+configure script (that ``init-plugin`` created) where the Bro
 source tree is located (Bro needs to have been built there first)::
 
     # cd rot13-plugin
@@ -99,7 +99,7 @@ option::
     # export BRO_PLUGIN_PATH=/path/to/rot13-plugin/build
     # bro -N
     [...]
-    Plugin: Demo::Rot13 - <Insert brief description of plugin> (dynamic, version 1)
+    Demo::Rot13 - <Insert description> (dynamic, version 0.1)
     [...]
 
 That looks quite good, except for the dummy description that we should
@@ -108,28 +108,30 @@ is about. We do this by editing the ``config.description`` line in
 ``src/Plugin.cc``, like this::
 
     [...]
-    plugin::Configuration Configure()
+    plugin::Configuration Plugin::Configure()
         {
         plugin::Configuration config;
         config.name = "Demo::Rot13";
         config.description = "Caesar cipher rotating a string's characters by 13 places.";
-        config.version.major = 1;
-        config.version.minor = 0;
+        config.version.major = 0;
+        config.version.minor = 1;
         return config;
         }
     [...]
 
+Now rebuild and verify that the description is visible::
+
     # make
     [...]
     # bro -N | grep Rot13
-    Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
+    Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 0.1)
 
-Better. Bro can also show us what exactly the plugin provides with the
+Bro can also show us what exactly the plugin provides with the
 more verbose option ``-NN``::
 
     # bro -NN
     [...]
-    Plugin: Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1)
+    Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 0.1)
     [Function] Demo::rot13
     [...]
@@ -157,10 +159,12 @@ The installed version went into
 ``<bro-install-prefix>/lib/bro/plugins/Demo_Rot13``.
 
 One can distribute the plugin independently of Bro for others to use.
-To distribute in source form, just remove the ``build/`` (``make
-distclean`` does that) and then tar up the whole ``rot13-plugin/``
-directory. Others then follow the same process as above after
-unpacking. To distribute the plugin in binary form, the build process
+To distribute in source form, just remove the ``build/`` directory
+(``make distclean`` does that) and then tar up the whole ``rot13-plugin/``
+directory. Others then follow the same process as above after
+unpacking.
+
+To distribute the plugin in binary form, the build process
 conveniently creates a corresponding tarball in ``build/dist/``. In
 this case, it's called ``Demo_Rot13-0.1.tar.gz``, with the version
 number coming out of the ``VERSION`` file that ``init-plugin`` put
@@ -169,14 +173,14 @@ plugin, but no further source files. Optionally, one can include
 further files by specifying them in the plugin's ``CMakeLists.txt``
 through the ``bro_plugin_dist_files`` macro; the skeleton does that
 for ``README``, ``VERSION``, ``CHANGES``, and ``COPYING``. To use the
-plugin through the binary tarball, just unpack it and point
-``BRO_PLUGIN_PATH`` there; or copy it into
-``<bro-install-prefix>/lib/bro/plugins/`` directly.
+plugin through the binary tarball, just unpack it into
+``<bro-install-prefix>/lib/bro/plugins/``. Alternatively, if you unpack
+it in another location, then you need to point ``BRO_PLUGIN_PATH`` there.
 
 Before distributing your plugin, you should edit some of the meta
 files that ``init-plugin`` puts in place. Edit ``README`` and
 ``VERSION``, and update ``CHANGES`` when you make changes. Also put a
-license file in place as ``COPYING``; if BSD is fine, you find a
+license file in place as ``COPYING``; if BSD is fine, you will find a
 template in ``COPYING.edit-me``.
 
 Plugin Directory Layout
@@ -193,7 +197,7 @@ directory. With the skeleton, ``<base>`` corresponds to ``build/``.
     must exist, and its content must consist of a single line with the
     qualified name of the plugin (e.g., "Demo::Rot13").
 
-``<base>/lib/<plugin-name>-<os>-<arch>.so``
+``<base>/lib/<plugin-name>.<os>-<arch>.so``
     The shared library containing the plugin's compiled code. Bro will
     load this in dynamically at run-time if OS and architecture match
     the current platform.
@@ -205,8 +209,15 @@ directory. With the skeleton, ``<base>`` corresponds to ``build/``.
     "@load"ed.
 
 ``scripts``/__load__.bro
-    A Bro script that will be loaded immediately when the plugin gets
-    activated. See below for more information on activating plugins.
+    A Bro script that will be loaded when the plugin gets activated.
+    When this script executes, any BiF elements that the plugin
+    defines will already be available. See below for more information
+    on activating plugins.
+
+``scripts``/__preload__.bro
+    A Bro script that will be loaded when the plugin gets activated,
+    but before any BiF elements become available. See below for more
+    information on activating plugins.
 
 ``lib/bif/``
     Directory with auto-generated Bro scripts that declare the plugin's
@@ -215,8 +226,8 @@ directory. With the skeleton, ``<base>`` corresponds to ``build/``.
 Any other files in ``<base>`` are ignored by Bro.
 
 By convention, a plugin should put its custom scripts into sub folders
-of ``scripts/``, i.e., ``scripts/<script-namespace>/<script>.bro`` to
-avoid conflicts. As usual, you can then put a ``__load__.bro`` in
+of ``scripts/``, i.e., ``scripts/<plugin-namespace>/<plugin-name>/<script>.bro``
+to avoid conflicts. As usual, you can then put a ``__load__.bro`` in
 there as well so that, e.g., ``@load Demo/Rot13`` could load a whole
 module in the form of multiple individual scripts.
@@ -242,7 +253,8 @@ as well as the ``__bro_plugin__`` magic file and any further
 distribution files specified in ``CMakeLists.txt`` (e.g., README,
 VERSION). You can find a full list of files installed in
 ``build/MANIFEST``. Behind the scenes, ``make install`` really just
-copies over the binary tarball in ``build/dist``.
+unpacks the binary tarball from ``build/dist`` into the destination
+directory.
 
 ``init-plugin`` will never overwrite existing files. If its target
 directory already exists, it will by default decline to do anything.
@@ -274,7 +286,9 @@ Activating a plugin will:
 1. Load the dynamic module
 2. Make any bif items available
 3. Add the ``scripts/`` directory to ``BROPATH``
-4. Load ``scripts/__load__.bro``
+4. Load ``scripts/__preload__.bro``
+5. Make BiF elements available to scripts.
+6. Load ``scripts/__load__.bro``
 
 By default, Bro will automatically activate all dynamic plugins found
 in its search path ``BRO_PLUGIN_PATH``. However, in bare mode (``bro
@@ -369,18 +383,19 @@ Testing Plugins
 ===============
 
 A plugin should come with a test suite to exercise its functionality.
-The ``init-plugin`` script puts in place a basic </btest/README> setup
+The ``init-plugin`` script puts in place a basic
+:doc:`BTest <../../components/btest/README>` setup
 to start with. Initially, it comes with a single test that just checks
 that Bro loads the plugin correctly. It won't have a baseline yet, so
 let's get that in place::
 
     # cd tests
     # btest -d
-    [  0%] plugin.loading ... failed
+    [  0%] rot13.show-plugin ... failed
       % 'btest-diff output' failed unexpectedly (exit code 100)
       % cat .diag
       == File ===============================
-      Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 1.0)
+      Demo::Rot13 - Caesar cipher rotating a string's characters by 13 places. (dynamic, version 0.1)
       [Function] Demo::rot13
       == Error ===============================
@@ -413,8 +428,8 @@ correctly::
 
 Check the output::
 
-    # btest -d plugin/rot13.bro
-    [  0%] plugin.rot13 ... failed
+    # btest -d rot13/bif-rot13.bro
+    [  0%] rot13.bif-rot13 ... failed
       % 'btest-diff output' failed unexpectedly (exit code 100)
       % cat .diag
       == File ===============================
@@ -429,7 +444,7 @@ Check the output::
 
 Install the baseline::
 
-    # btest -U plugin/rot13.bro
+    # btest -U rot13/bif-rot13.bro
     all 1 tests successful
 
 Run the test-suite::
@@ -457,7 +472,7 @@ your plugin's debugging output with ``-B plugin-<name>``, where
 ``<name>`` is the name of the plugin as returned by its
 ``Configure()`` method, yet with the namespace-separator ``::``
 replaced with a simple dash. Example: If the plugin is called
-``Bro::Demo``, use ``-B plugin-Bro-Demo``. As usual, the debugging
+``Demo::Rot13``, use ``-B plugin-Demo-Rot13``. As usual, the debugging
 output will be recorded to ``debug.log`` if Bro's compiled in debug
 mode.

@@ -9,10 +9,7 @@ Broker-Enabled Communication Framework
 
 Bro can now use the `Broker Library
 <../components/broker/README.html>`_ to exchange information with
-other Bro processes. To enable it run Bro's ``configure`` script
-with the ``--enable-broker`` option. Note that a C++11 compatible
-compiler (e.g. GCC 4.8+ or Clang 3.3+) is required as well as the
-`C++ Actor Framework <http://actor-framework.org/>`_.
+other Bro processes.
 
 .. contents::
@@ -23,26 +20,26 @@ Communication via Broker must first be turned on via
 :bro:see:`BrokerComm::enable`.
 
 Bro can accept incoming connections by calling :bro:see:`BrokerComm::listen`
-and then monitor connection status updates via
+and then monitor connection status updates via the
 :bro:see:`BrokerComm::incoming_connection_established` and
-:bro:see:`BrokerComm::incoming_connection_broken`.
+:bro:see:`BrokerComm::incoming_connection_broken` events.
 
 .. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-listener.bro
 
 Bro can initiate outgoing connections by calling :bro:see:`BrokerComm::connect`
-and then monitor connection status updates via
+and then monitor connection status updates via the
 :bro:see:`BrokerComm::outgoing_connection_established`,
 :bro:see:`BrokerComm::outgoing_connection_broken`, and
-:bro:see:`BrokerComm::outgoing_connection_incompatible`.
+:bro:see:`BrokerComm::outgoing_connection_incompatible` events.
 
 .. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-connector.bro
 
 Remote Printing
 ===============
 
-To receive remote print messages, first use
-:bro:see:`BrokerComm::subscribe_to_prints` to advertise to peers a topic
-prefix of interest and then create an event handler for
+To receive remote print messages, first use the
+:bro:see:`BrokerComm::subscribe_to_prints` function to advertise to peers a
+topic prefix of interest and then create an event handler for
 :bro:see:`BrokerComm::print_handler` to handle any print messages that are
 received.
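Condensed into a single sketch (the port, address, and topic prefix are
illustrative, and the exact signatures of these BrokerComm functions
should be checked against the framework reference)::

    event bro_init()
        {
        BrokerComm::enable();
        BrokerComm::subscribe_to_prints("bro/print/");
        BrokerComm::listen(9999/tcp, "127.0.0.1");
        }

    event BrokerComm::print_handler(msg: string)
        {
        print "got remote print", msg;
        }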
@@ -71,17 +68,17 @@ the Broker message format is simply:
 
 Remote Events
 =============
 
-Receiving remote events is similar to remote prints. Just use
-:bro:see:`BrokerComm::subscribe_to_events` and possibly define any new events
-along with handlers that peers may want to send.
+Receiving remote events is similar to remote prints. Just use the
+:bro:see:`BrokerComm::subscribe_to_events` function and possibly define any
+new events along with handlers that peers may want to send.
 
 .. btest-include:: ${DOC_ROOT}/frameworks/broker/events-listener.bro
 
-To send events, there are two choices. The first is to use call
-:bro:see:`BrokerComm::event` directly. The second option is to use
-:bro:see:`BrokerComm::auto_event` to make it so a particular event is
-automatically sent to peers whenever it is called locally via the normal
-event invocation syntax.
+There are two different ways to send events. The first is to call the
+:bro:see:`BrokerComm::event` function directly. The second option is to call
+the :bro:see:`BrokerComm::auto_event` function where you specify a
+particular event that will be automatically sent to peers whenever the
+event is called locally via the normal event invocation syntax.
 
 .. btest-include:: ${DOC_ROOT}/frameworks/broker/events-connector.bro
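A sketch of the ``auto_event`` variant (the topic name and event are
illustrative only)::

    global my_event: event(s: string, n: count);

    event bro_init()
        {
        BrokerComm::enable();
        BrokerComm::auto_event("bro/event/my_event", my_event);
        }

    event my_event(s: string, n: count)
        {
        # Runs locally and, via auto_event, is also sent to peers.
        print "my_event", s, n;
        }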
@ -98,7 +95,7 @@ the Broker message format is:
broker::message{std::string{}, ...}; broker::message{std::string{}, ...};
The first parameter is the name of the event and the remaining ``...`` The first parameter is the name of the event and the remaining ``...``
are its arguments, which are any of the support Broker data types as are its arguments, which are any of the supported Broker data types as
they correspond to the Bro types for the event named in the first they correspond to the Bro types for the event named in the first
parameter of the message. parameter of the message.
@ -107,23 +104,23 @@ Remote Logging
.. btest-include:: ${DOC_ROOT}/frameworks/broker/testlog.bro .. btest-include:: ${DOC_ROOT}/frameworks/broker/testlog.bro
Use the :bro:see:`BrokerComm::subscribe_to_logs` function to advertise interest
in logs written by peers. The topic names that Bro uses are implicitly of the
form "bro/log/<stream-name>".
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-listener.bro
To send remote logs either redef :bro:see:`Log::enable_remote_logging` or
use the :bro:see:`BrokerComm::enable_remote_logs` function. The former
allows any log stream to be sent to peers while the latter enables remote
logging for particular streams.
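
For example (a sketch; ``Test::LOG`` is again the stream from
``testlog.bro``):

.. code:: bro

    # Send all log streams to peers ...
    redef Log::enable_remote_logging = T;

    # ... or enable remote logging only for one particular stream.
    event bro_init()
        {
        BrokerComm::enable();
        BrokerComm::enable_remote_logs(Test::LOG);
        }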
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-connector.bro
Message Format
--------------

For other applications that want to exchange log messages with Bro,
the Broker message format is:

.. code:: c++
@ -132,7 +129,7 @@ the Broker message format is:
The enum value corresponds to the stream's :bro:see:`Log::ID` value, and
the record corresponds to a single entry of that log's columns record,
in this case a ``Test::Info`` value.

Tuning Access Control
=====================
@ -152,11 +149,12 @@ that take a :bro:see:`BrokerComm::SendFlags` such as :bro:see:`BrokerComm::print
:bro:see:`BrokerComm::enable_remote_logs`.
If not using the ``auto_advertise`` flag, one can use the
:bro:see:`BrokerComm::advertise_topic` and
:bro:see:`BrokerComm::unadvertise_topic` functions
to manipulate the set of topic prefixes that are allowed to be
advertised to peers. If an endpoint does not advertise a topic prefix, then
the only way peers can send messages to it is via the ``unsolicited``
flag of :bro:see:`BrokerComm::SendFlags` and choosing a topic with a matching
prefix (i.e. the full topic may be longer than the receiver's prefix; just the
prefix needs to match).
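
A sketch of restricting what gets advertised (using the flag and functions
described above):

.. code:: bro

    event bro_init()
        {
        # Do not advertise subscribed topic prefixes automatically ...
        BrokerComm::enable([$auto_advertise=F]);

        # ... and instead advertise only selected prefixes explicitly.
        BrokerComm::advertise_topic("bro/event/my_event");
        }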
@ -172,7 +170,7 @@ specific type of frontend, but a standalone frontend can also exist to
e.g. query and modify the contents of a remote master store without
actually "owning" any of the contents itself.
A master data store can be cloned from remote peers which may then
perform lightweight, local queries against the clone, which
automatically stays synchronized with the master store. Clones cannot
modify their content directly; instead they send modifications to the
@ -181,7 +179,7 @@ all clones.
Master and clone stores get to choose what type of storage backend to
use, e.g., in-memory versus SQLite for persistence. Note that if clones
are used, then data store sizes must be able to fit within memory
regardless of the storage backend, as a single snapshot of the master
store is sent in a single chunk to initialize the clone.
@ -198,5 +196,5 @@ needed, just replace the :bro:see:`BrokerStore::create_clone` call with
:bro:see:`BrokerStore::create_frontend`. Queries will then be made against
the remote master store instead of the local clone.
Note that all data store queries must be made within Bro's asynchronous
``when`` statements and must specify a timeout block.
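
For example, a lookup against a clone could look like this (a sketch; it
assumes a master store named "mystore" exists on a peer):

.. code:: bro

    global clone: opaque of BrokerStore::Handle;

    event bro_init()
        {
        BrokerComm::enable();
        clone = BrokerStore::create_clone("mystore");
        }

    function check_key(key: string)
        {
        when ( local res = BrokerStore::lookup(clone, BrokerComm::data(key)) )
            {
            print "lookup result", res;
            }
        timeout 10sec
            {
            print "query timed out";
            }
        }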
@ -1,5 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
@ -1,5 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
@ -1,4 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
global my_event: event(msg: string, c: count);
@ -1,5 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
global msg_count = 0;
@ -1,6 +1,6 @@
@load ./testlog
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
redef Log::enable_local_logging = F;
@ -1,6 +1,6 @@
@load ./testlog
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
@ -1,4 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
@ -1,5 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
global msg_count = 0;
@ -1,4 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
global h: opaque of BrokerStore::Handle;
@ -1,4 +1,4 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
global h: opaque of BrokerStore::Handle;
@ -1,4 +1,3 @@
module Test;
export {
@ -20,11 +20,13 @@ GeoLocation
Install libGeoIP
----------------

Before building Bro, you need to install libGeoIP.

* FreeBSD:

  .. console::

      sudo pkg install GeoIP

* RPM/RedHat-based Linux:
@ -40,80 +42,99 @@ Install libGeoIP
* Mac OS X:

  You need to install from your preferred package management system
  (e.g. MacPorts, Fink, or Homebrew). The name of the package that you need
  may be libgeoip, geoip, or geoip-dev, depending on which package management
  system you are using.
GeoIPLite Database Installation
-------------------------------

A country database for GeoIPLite is included when you do the C API
install, but for Bro, we are using the city database which includes
cities and regions in addition to countries.

`Download <http://www.maxmind.com/app/geolitecity>`__ the GeoLite city
binary database:

.. console::

    wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
    gunzip GeoLiteCity.dat.gz

Next, the file needs to be renamed and put in the GeoIP database directory.
This directory should already exist and will vary depending on which platform
and package you are using. For FreeBSD, use ``/usr/local/share/GeoIP``. For
Linux, use ``/usr/share/GeoIP`` or ``/var/lib/GeoIP`` (choose whichever one
already exists).

.. console::

    mv GeoLiteCity.dat <path_to_database_dir>/GeoIPCity.dat
Note that there is a separate database for IPv6 addresses, which can also
be installed if you want GeoIP functionality for IPv6.
Testing
-------
Before using the GeoIP functionality, it is a good idea to verify that
everything is set up correctly. After installing libGeoIP and the GeoIP city
database, and building Bro, you can quickly check if the GeoIP functionality
works by running a command like this:
.. console::

    bro -e "print lookup_location(8.8.8.8);"
If you see an error message similar to "Failed to open GeoIP City database",
then you may need to either rename or move your GeoIP city database file (the
error message should give you the full pathname of the database file that
Bro is looking for).
If you see an error message similar to "Bro was not configured for GeoIP
support", then you need to rebuild Bro and make sure it is linked against
libGeoIP. Normally, if libGeoIP is installed correctly then it should
automatically be found when building Bro. If this doesn't happen, then
you may need to specify the path to the libGeoIP installation
(e.g. ``./configure --with-geoip=<path>``).
Usage
-----

There is a built-in function that provides the GeoIP functionality:

.. code:: bro

    function lookup_location(a:addr): geo_location
The return value of the :bro:see:`lookup_location` function is a record
type called :bro:see:`geo_location`, and it consists of several fields
containing the country, region, city, latitude, and longitude of the specified
IP address. Since one or more fields in this record will be uninitialized
for some IP addresses (for example, the country and region of an IP address
might be known, but the city could be unknown), you should check whether a
field has a value before trying to access it.
Example
-------

To show every ftp connection from hosts in Ohio, this is now very easy:

.. code:: bro

    event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool)
        {
        local client = c$id$orig_h;
        local loc = lookup_location(client);

        if ( loc?$region && loc$region == "OH" && loc$country_code == "US" )
            {
            local city = loc?$city ? loc$city : "<unknown>";

            print fmt("FTP Connection from:%s (%s,%s,%s)", client, city,
                      loc$region, loc$country_code);
            }
        }
@ -32,7 +32,8 @@ For this example we assume that we want to import data from a blacklist
that contains server IP addresses as well as the timestamp and the reason
for the block.

An example input file could look like this (note that all fields must be
tab-separated):

::
@ -63,19 +64,23 @@ The two records are defined as:
        reason: string;
    };

Note that the names of the fields in the record definitions must correspond
to the column names listed in the '#fields' line of the log file, in this
case 'ip', 'timestamp', and 'reason'. Also note that the ordering of the
columns does not matter, because each column is identified by name.

The log file is read into the table with a simple call of the
:bro:id:`Input::add_table` function:

.. code:: bro

    global blacklist: table[addr] of Val = table();

    event bro_init() {
        Input::add_table([$source="blacklist.file", $name="blacklist",
                          $idx=Idx, $val=Val, $destination=blacklist]);
        Input::remove("blacklist");
    }
With these three lines we first create an empty table that should contain the
blacklist data and then instruct the input framework to open an input stream
@ -92,7 +97,7 @@ Because of this, the data is not immediately accessible. Depending on the
size of the data source it might take from a few milliseconds up to a few
seconds until all data is present in the table. Please note that this means
that when Bro is running without an input source or on very short captured
files, it might terminate before the data is present in the table (because
Bro already handled all packets before the import thread finished).

Subsequent calls to an input source are queued until the previous action has
@ -101,8 +106,8 @@ been completed. Because of this, it is, for example, possible to call
will remain queued until the first read has been completed.

Once the input framework finishes reading from a data source, it fires
the :bro:id:`Input::end_of_data` event. Once this event has been received all
data from the input file is available in the table.

.. code:: bro
@ -111,9 +116,9 @@ from the input file is available in the table.
        print blacklist;
    }

The table can be used while the data is still being read - it
just might not contain all lines from the input file before the event has
fired. After the table has been populated it can be used like any other Bro
table and blacklist entries can easily be tested:

.. code:: bro
@ -130,10 +135,11 @@ changing. For these cases, the Bro input framework supports several ways to
deal with changing data files.

The first, very basic method is an explicit refresh of an input stream. When
an input stream is open (this means it has not yet been removed by a call to
:bro:id:`Input::remove`), the function :bro:id:`Input::force_update` can be
called. This will trigger a complete refresh of the table; any changed
elements from the file will be updated. After the update is finished the
:bro:id:`Input::end_of_data` event will be raised.

In our example the call would look like:
@ -141,30 +147,35 @@ In our example the call would look like:
    Input::force_update("blacklist");

Alternatively, the input framework can automatically refresh the table
contents when it detects a change to the input file. To use this feature,
you need to specify a non-default read mode by setting the ``mode`` option
of the :bro:id:`Input::add_table` call. Valid values are ``Input::MANUAL``
(the default), ``Input::REREAD`` and ``Input::STREAM``. For example,
setting the value of the ``mode`` option in the previous example
would look like this:

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist,
                      $mode=Input::REREAD]);

When using the reread mode (i.e., ``$mode=Input::REREAD``), Bro continually
checks if the input file has been changed. If the file has been changed, it
is re-read and the data in the Bro table is updated to reflect the current
state. Each time a change has been detected and all the new data has been
read into the table, the ``end_of_data`` event is raised.

When using the streaming mode (i.e., ``$mode=Input::STREAM``), Bro assumes
that the source data file is an append-only file to which new data is
continually appended. Bro continually checks for new data at the end of
the file and will add the new data to the table. If newer lines in the
file have the same index as previous lines, they will overwrite the
values in the output table. Because of the nature of streaming reads
(data is continually added to the table), the ``end_of_data`` event
is never raised when using streaming reads.
Receiving change events
-----------------------
@ -173,34 +184,40 @@ When re-reading files, it might be interesting to know exactly which lines in
the source files have changed.

For this reason, the input framework can raise an event each time when a data
item is added to, removed from, or changed in a table.

The event definition looks like this (note that you can change the name of
this event in your own Bro script):

.. code:: bro

    event entry(description: Input::TableDescription, tpe: Input::Event,
                left: Idx, right: Val) {
        # do something here...
        print fmt("%s = %s", left, right);
    }

The event must be specified in ``$ev`` in the ``add_table`` call:

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist,
                      $mode=Input::REREAD, $ev=entry]);
The ``description`` argument of the event contains the arguments that were
originally supplied to the add_table call. Hence, the name of the stream can,
for example, be accessed with ``description$name``. The ``tpe`` argument of the
event is an enum containing the type of the change that occurred.

If a line that was not previously present in the table has been added,
then the value of ``tpe`` will be ``Input::EVENT_NEW``. In this case ``left``
contains the index of the added table entry and ``right`` contains the
values of the added entry.

If a table entry that already was present is altered during the re-reading or
streaming read of a file, then the value of ``tpe`` will be
``Input::EVENT_CHANGED``. In
this case ``left`` contains the index of the changed table entry and ``right``
contains the values of the entry before the change. The reason for this is
that the table already has been updated when the event is raised. The current
@ -208,8 +225,9 @@ value in the table can be ascertained by looking up the current table value.
Hence it is possible to compare the new and the old values of the table.
If a table element is removed because it was no longer present during a
re-read, then the value of ``tpe`` will be ``Input::EVENT_REMOVED``. In this
case ``left`` contains the index and ``right`` the values of the removed
element.
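
Taken together, a handler distinguishing all three cases could look like
this (a sketch using the ``Idx`` and ``Val`` types from the example above):

.. code:: bro

    event entry(description: Input::TableDescription, tpe: Input::Event,
                left: Idx, right: Val)
        {
        if ( tpe == Input::EVENT_NEW )
            print fmt("new entry: %s", left);
        else if ( tpe == Input::EVENT_CHANGED )
            print fmt("changed entry: %s", left);
        else if ( tpe == Input::EVENT_REMOVED )
            print fmt("removed entry: %s", left);
        }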
Filtering data during import
@ -222,24 +240,26 @@ can either accept or veto the change by returning true for an accepted
change and false for a rejected change. Furthermore, it can alter the data
before it is written to the table.
The following example filter will reject adding entries to the table when
they were generated over a month ago. It will accept all changes and all
removals of values that are already present in the table.

.. code:: bro

    Input::add_table([$source="blacklist.file", $name="blacklist",
                      $idx=Idx, $val=Val, $destination=blacklist,
                      $mode=Input::REREAD,
                      $pred(typ: Input::Event, left: Idx, right: Val) = {
                          if ( typ != Input::EVENT_NEW ) {
                              return T;
                          }
                          return (current_time() - right$timestamp) < 30day;
                      }]);
To change elements while they are being imported, the predicate function can
manipulate ``left`` and ``right``. Note that predicate functions are called
before the change is committed to the table. Hence, when a table element is
changed (``typ`` is ``Input::EVENT_CHANGED``), ``left`` and ``right``
contain the new values, but the destination (``blacklist`` in our example)
still contains the old values. This allows predicate functions to examine
the changes between the old and the new version before deciding if they
@ -250,14 +270,19 @@ Different readers
The input framework supports different kinds of readers for different kinds
of source data files. At the moment, the default reader reads ASCII files
formatted in the Bro log file format (tab-separated values with a "#fields"
header line). Several other readers are included in Bro.

The raw reader reads a file that is
split by a specified record separator (newline by default). The contents are
returned line-by-line as strings; it can, for example, be used to read
configuration files and the like and is probably
only useful in the event mode and not for reading data to tables.
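
A sketch of using the raw reader in event mode (the file name, record type,
and event name are arbitrary choices for illustration):

.. code:: bro

    type OneLine: record {
        s: string;
    };

    event got_line(description: Input::EventDescription, tpe: Input::Event,
                   s: string)
        {
        print fmt("line: %s", s);
        }

    event bro_init()
        {
        Input::add_event([$source="config.txt", $name="config-lines",
                          $reader=Input::READER_RAW, $want_record=F,
                          $fields=OneLine, $ev=got_line]);
        }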
The binary reader is intended to be used with file analysis input streams (and
is the default type of reader for those streams).

The benchmark reader is used
to optimize the speed of the input framework. It can generate arbitrary
amounts of semi-random data in all Bro data types supported by the input
framework.
@ -270,75 +295,17 @@ aforementioned ones:
logging-input-sqlite
Reading Data to Events
======================

The second supported mode of the input framework is reading data to Bro
events instead of reading them to a table.

Event streams work very similarly to table streams that were already
discussed in much detail. To read the blacklist of the previous example
into an event stream, the :bro:id:`Input::add_event` function is used.
For example:

.. code:: bro
@ -348,12 +315,15 @@ into an event stream, the following Bro code could be used:
        reason: string;
    };

    event blacklistentry(description: Input::EventDescription,
                         t: Input::Event, data: Val) {
        # do something here...
        print "data:", data;
    }

    event bro_init() {
        Input::add_event([$source="blacklist.file", $name="blacklist",
                          $fields=Val, $ev=blacklistentry]);
    }
@ -364,52 +334,3 @@ data types are provided in a single record definition.
Apart from this, event streams work exactly the same as table streams and
support most of the options that are also supported for table streams.
@ -23,17 +23,18 @@ In contrast to the ASCII reader and writer, the SQLite plugins have not yet
seen extensive use in production environments. While we are not aware
of any issues with them, we urge caution when using them
in production environments. There could be lingering issues which only occur
when the plugins are used with high amounts of data or in high-load
environments.
Logging Data into SQLite Databases
==================================

Logging support for SQLite is available in all Bro installations starting with
version 2.2. There is no need to load any additional scripts or for any
compile-time configurations.

Sending data from existing logging streams to SQLite is rather straightforward.
You have to define a filter which specifies SQLite as the writer.

The following example code adds SQLite as a filter for the connection log:
@ -44,15 +45,15 @@ The following example code adds SQLite as a filter for the connection log:
    # Make sure this parses correctly at least.
    @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-conn-filter.bro

Bro will create the database file ``/var/db/conn.sqlite``, if it does not
already exist. It will also create a table with the name ``conn`` (if it
does not exist) and start appending connection information to the table.

At the moment, SQLite databases are not rotated the same way ASCII log-files
are. You have to take care to create them in an adequate location.

If you examine the resulting SQLite database, the schema will contain the
same fields that are present in the ASCII log files::

    # sqlite3 /var/db/conn.sqlite
@ -67,35 +68,39 @@ that are present in the ASCII log files::
    'id.orig_p' integer,
    ...

Note that the ASCII ``conn.log`` will still be created. To prevent this file
from being created, you can remove the default filter:

.. code:: bro

    Log::remove_filter(Conn::LOG, "default");

To create a custom SQLite log file, you have to create a new log stream
that contains just the information you want to commit to the database.
Please refer to the :ref:`framework-logging` documentation on how to
create custom log streams.
Reading Data from SQLite Databases
==================================

Like logging support, support for reading data from SQLite databases is
built into Bro starting with version 2.2.

Just as with the text-based input readers (please refer to the
:ref:`framework-input` documentation for them and for basic information
on how to use the input framework), the SQLite reader can be used to
read data - in this case the result of SQL queries - into tables or into
events.
Reading Data into Tables
------------------------

To read data from a SQLite database, we first have to provide Bro with
information about how the resulting data will be structured. For this
example, we expect that we have a SQLite database which contains
host IP addresses and the user accounts that are allowed to log into
a specific machine.

The SQLite commands to create the schema are as follows::
@ -107,8 +112,8 @@ The SQLite commands to create the schema are as follows::
    insert into machines_to_users values ('192.168.17.2', 'bernhard');
    insert into machines_to_users values ('192.168.17.3', 'seth,matthias');

After creating a file called ``hosts.sqlite`` with this content, we can
read the resulting table into Bro:

.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-table.bro
@ -117,22 +122,25 @@ into Bro:
    # Make sure this parses correctly at least.
    @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-table.bro

Afterwards, that table can be used to check logins into hosts against
the available userlist.
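
For instance, a lookup helper could look like this (a sketch; it assumes
the included example reads the database into a table named ``hostslist``
indexed by ``addr``, whose value record has a comma-separated ``users``
string field):

.. code:: bro

    function login_allowed(host: addr, user: string): bool
        {
        if ( host !in hostslist )
            return F;

        local users = split_string(hostslist[host]$users, /,/);

        for ( i in users )
            {
            if ( users[i] == user )
                return T;
            }

        return F;
        }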
Turning Data into Events
------------------------

The second mode is to use the SQLite reader to output the input data as events.
Typically there are two reasons to do this. First, when the structure of
the input data is too complicated for a direct table import. In this case,
the data can be read into an event which can then create the necessary
data structures in Bro in scriptland.

The second reason is that the dataset is too big to hold in memory. In
this case, the checks can be performed on-demand, when Bro encounters a
situation where it needs additional information.

An example for this would be a huge internal database with malware
hashes. Live database queries could be used to check sporadically
occurring downloads against the database.
The SQLite commands to create the schema are as follows::
@ -151,9 +159,10 @@ The SQLite commands to create the schema are as follows::
    insert into malware_hashes values ('73f45106968ff8dc51fba105fa91306af1ff6666', 'ftp-trace');

The following code uses the file-analysis framework to get the sha1 hashes
of files that are transmitted over the network. For each hash, a SQL-query
is run against SQLite. If the query returns a result, we have a hit
against our malware database and output the matching hash.

.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-events.bro
@ -162,5 +171,5 @@ returns with a result, we had a hit against our malware-database and output the
    # Make sure this parses correctly at least.
    @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-events.bro

If you run this script against the trace in
``testing/btest/Traces/ftp/ipv4.trace``, you will get one hit.
@ -19,195 +19,144 @@ Terminology
Bro's logging interface is built around three main abstractions:

Streams
    A log stream corresponds to a single log. It defines the set of
    fields that a log consists of with their names and types.
    Examples are the ``conn`` stream for recording connection summaries,
    and the ``http`` stream for recording HTTP activity.
Filters
    Each stream has a set of filters attached to it that determine
    what information gets written out. By default, each stream has
    one default filter that just logs everything directly to disk.
    However, additional filters can be added to record only a subset
    of the log records, write to different outputs, or set a custom
    rotation interval. If all filters are removed from a stream,
    then output is disabled for that stream.
Writers
    Each filter has a writer. A writer defines the actual output
    format for the information being logged. The default writer is
    the ASCII writer, which produces tab-separated ASCII files. Other
    writers are available, like for binary output or direct logging
    into a database.
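
For example, attaching a filter that uses a different writer is short (a
sketch; it assumes Bro was built with SQLite support, and the path is an
arbitrary choice):

.. code:: bro

    event bro_init()
        {
        local f: Log::Filter = [$name="sqlite", $path="/var/db/conn",
                                $writer=Log::WRITER_SQLITE];
        Log::add_filter(Conn::LOG, f);
        }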
There are several different ways to customize Bro's logging: you can create
a new log stream, you can extend an existing log with new fields, you
can apply filters to an existing log stream, or you can customize the output
format by setting log writer options. All of these approaches are
described in this document.

Streams
=======

In order to log data to a new log stream, all of the following needs to be
done:

- A :bro:type:`record` type must be defined which consists of all the
  fields that will be logged (by convention, the name of this record type is
  usually "Info").
- A log stream ID (an :bro:type:`enum` with type name "Log::ID") must be
  defined that uniquely identifies the new log stream.
- A log stream must be created using the :bro:id:`Log::create_stream` function.
- When the data to be logged becomes available, the :bro:id:`Log::write`
  function must be called.
In the following example, we create a new module "Foo" which creates
a new log stream.

.. code:: bro

    module Foo;

    export {
        # Create an ID for our new stream. By convention, this is
        # called "LOG".
        redef enum Log::ID += { LOG };

        # Define the record type that will contain the data to log.
        type Info: record {
            ts: time &log;
            id: conn_id &log;
            service: string &log &optional;
            missed_bytes: count &log &default=0;
        };
    }

    # Optionally, we can add a new field to the connection record so that
    # the data we are logging (our "Info" record) will be easily
    # accessible in a variety of event handlers.
    redef record connection += {
        # By convention, the name of this new field is the lowercase name
        # of the module.
        foo: Info &optional;
    };

    # This event is handled at a priority higher than zero so that if
    # users modify this stream in another script, they can do so at the
    # default priority of zero.
    event bro_init() &priority=5
        {
        # Create the stream. This adds a default filter automatically.
        Log::create_stream(Foo::LOG, [$columns=Info, $path="foo"]);
        }
In the definition of the "Info" record above, notice that each field has the
:bro:attr:`&log` attribute. Without this attribute, a field will not appear in
the log output. Also notice one field has the :bro:attr:`&optional` attribute.
This indicates that the field might not be assigned any value before the
log record is written. Finally, a field with the :bro:attr:`&default`
attribute has a default value assigned to it automatically.
At this point, the only thing missing is a call to the :bro:id:`Log::write`
function to send data to the logging framework. The actual event handler
where this should take place will depend on where your data becomes available.
In this example, the :bro:id:`connection_established` event provides our data,
and we also store a copy of the data being logged into the
:bro:type:`connection` record:

.. code:: bro
    event connection_established(c: connection)
        {
        local rec: Foo::Info = [$ts=network_time(), $id=c$id];

        # Store a copy of the data in the connection record so other
        # event handlers can access it.
        c$foo = rec;

        Log::write(Foo::LOG, rec);
        }
If you run Bro with this script, a new log file ``foo.log`` will be created.
Although we only specified four fields in the "Info" record above, the
log output will actually contain seven fields because one of the fields
(the one named "id") is itself a record type. Since a :bro:type:`conn_id`
record has four fields, then each of these fields is a separate column in
the log output. Note that the way that such fields are named in the log
output differs slightly from the way we would refer to the same field
in a Bro script (each dollar sign is replaced with a period). For example,
to access the first field of a ``conn_id`` in a Bro script we would use
the notation ``id$orig_h``, but that field is named ``id.orig_h``
in the log output.
When you are developing scripts that add data to the :bro:type:`connection`
record, care must be given to when and how long data is stored.
Normally data saved to the connection record will remain there for the
duration of the connection and from a practical perspective it's not
uncommon to need to delete that data before the end of the connection.
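
For example, the optional ``foo`` field added above can be removed again as
soon as it is no longer needed (a sketch; the right place to do this depends
on the script):

.. code:: bro

    event connection_established(c: connection) &priority=-5
        {
        # After the record has been written, drop the copy from the
        # connection record if it is not needed any longer.
        if ( c?$foo )
            delete c$foo;
        }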
Add Fields to a Log
-------------------

You can add additional fields to a log by extending the record
type that defines its content, and setting a value for the new fields
before each log record is written.
.. code:: bro Let's say we want to add a boolean field ``is_private`` to
:bro:type:`Conn::Info` that indicates whether the originator IP address
function split_log(id: Log::ID, path: string, rec: Conn::Info) : string is part of the :rfc:`1918` space:
{
# Return "conn-local" if originator is a local IP, otherwise "conn-remote".
local lr = Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
return fmt("%s-%s", path, lr);
}
event bro_init()
{
local filter: Log::Filter = [$name="conn-split", $path_func=split_log, $include=set("ts", "id.orig_h")];
Log::add_filter(Conn::LOG, filter);
}
Running this will now produce two files, ``local.log`` and
``remote.log``, with the corresponding entries. One could extend this
further for example to log information by subnets or even by IP
address. Be careful, however, as it is easy to create many files very
quickly ...
.. sidebar:: A More Generic Path Function
The ``split_log`` method has one draw-back: it can be used
only with the :bro:enum:`Conn::LOG` stream as the record type is hardcoded
into its argument list. However, Bro allows to do a more generic
variant:
.. code:: bro
function split_log(id: Log::ID, path: string, rec: record { id: conn_id; } ) : string
{
return Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
}
This function can be used with all log streams that have records
containing an ``id: conn_id`` field.
While so far we have seen how to customize the columns being logged,
you can also control which records are written out by providing a
predicate that will be called for each log record:
.. code:: bro
function http_only(rec: Conn::Info) : bool
{
# Record only connections with successfully analyzed HTTP traffic
return rec$service == "http";
}
event bro_init()
{
local filter: Log::Filter = [$name="http-only", $path="conn-http", $pred=http_only];
Log::add_filter(Conn::LOG, filter);
}
This will result in a log file ``conn-http.log`` that contains only
traffic detected and analyzed as HTTP traffic.
Extending
---------
You can add further fields to a log stream by extending the record
type that defines its content. Let's say we want to add a boolean
field ``is_private`` to :bro:type:`Conn::Info` that indicates whether the
originator IP address is part of the :rfc:`1918` space:
.. code:: bro

    redef record Conn::Info += {
        ## Indicate if the originator of the connection is part of the
        ## "private" address space defined in RFC1918.
        is_private: bool &default=F &log;
    };
As this example shows, when extending a log stream's "Info" record, each
new field must always be declared either with a ``&default`` value or
as ``&optional``. Furthermore, you need to add the ``&log`` attribute
or otherwise the field won't appear in the log file.
Now we need to set the field.  Although the details vary depending on which
log is being extended, in general it is important to choose a suitable event
in which to set the additional fields because we need to make sure that
the fields are set before the log record is written.  Sometimes the right
choice is the same event which writes the log record, but at a higher
priority (in order to ensure that the event handler that sets the additional
fields is executed before the event handler that writes the log record).

In this example, since a connection's summary is generated at
the time its state is removed from memory, we can add another handler
at that time that sets our field correctly:
.. code:: bro

    event connection_state_remove(c: connection)
        {
        if ( c$id$orig_h in Site::private_address_space )
            c$conn$is_private = T;
        }
Now ``conn.log`` will show a new field ``is_private`` of type
``bool``.  If you look at the Bro script which defines the connection
log stream :doc:`/scripts/base/protocols/conn/main.bro`, you will see
that ``Log::write`` gets called in an event handler for the
same event as used in this example to set the additional fields, but at a
lower priority than the one used in this example (i.e., the log record gets
written after we assign the ``is_private`` field).
For extending logs this way, one needs a bit of knowledge about how
the script that creates the log stream is organizing its state
keeping.  Most of the standard Bro scripts attach their log state to
the :bro:type:`connection` record where it can then be accessed, just
like ``c$conn`` above.  For example, the HTTP analysis adds a field
``http`` of type :bro:type:`HTTP::Info` to the :bro:type:`connection`
record.

Define a Logging Event
----------------------
Sometimes it is helpful to do additional analysis of the information
being logged.  For these cases, a stream can specify an event that will
be generated every time a log record is written to it.  To do this, we
need to modify the example module shown above to look something like this:
.. code:: bro
    module Foo;

    export {
        redef enum Log::ID += { LOG };

        type Info: record {
            ts: time            &log;
            id: conn_id         &log;
            service: string     &log &optional;
            missed_bytes: count &log &default=0;
        };

        # Define a logging event. By convention, this is called
        # "log_<stream>".
        global log_foo: event(rec: Info);
    }

    event bro_init() &priority=5
        {
        # Specify the "log_foo" event here in order for Bro to raise it.
        Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo,
                                      $path="foo"]);
        }
All of Bro's default log streams define such an event.  For example, the
connection log stream raises the event :bro:id:`Conn::log_conn`.  You
could use that for example for flagging when a connection to a
specific destination exceeds a certain duration:
.. code:: bro

    redef enum Notice::Type += {
        Long_Conn_Found
    };

    event Conn::log_conn(rec: Conn::Info)
        {
        if ( rec?$duration && rec$duration > 5mins )
            NOTICE([$note=Long_Conn_Found,
                    $msg=fmt("unusually long conn to %s", rec$id$resp_h),
                    $id=rec$id]);
        }

Often, these events can be an alternative to post-processing Bro logs
externally with Perl scripts.  Much of what such an external script
would do later offline, one may instead do directly inside of Bro in
real-time.
Disable a Stream
----------------

One way to "turn off" a log is to completely disable the stream.  For
example, the following code will prevent the conn.log from being written:

.. code:: bro

    event bro_init()
        {
        Log::disable_stream(Conn::LOG);
        }
Note that this must run after the stream is created, so the priority
of this event handler must be lower than the priority of the event handler
where the stream was created.
Filters
=======
A stream has one or more filters attached to it (a stream without any filters
will not produce any log output). When a stream is created, it automatically
gets a default filter attached to it. This default filter can be removed
or replaced, or other filters can be added to the stream. This is accomplished
by using either the :bro:id:`Log::add_filter` or :bro:id:`Log::remove_filter`
function. This section shows how to use filters to do such tasks as
rename a log file, split the output into multiple files, control which
records are written, and set a custom rotation interval.
Rename Log File
---------------
Normally, the log filename for a given log stream is determined when the
stream is created, unless you explicitly specify a different one by adding
a filter.
The easiest way to change a log filename is to simply replace the
default log filter with a new filter that specifies a value for the "path"
field. In this example, "conn.log" will be changed to "myconn.log":
.. code:: bro
    event bro_init()
        {
        # Replace default filter for the Conn::LOG stream in order to
        # change the log filename.
        local f = Log::get_filter(Conn::LOG, "default");
        f$path = "myconn";
        Log::add_filter(Conn::LOG, f);
        }
Keep in mind that the "path" field of a log filter never contains the
filename extension. The extension will be determined later by the log writer.
Add a New Log File
------------------
Normally, a log stream writes to only one log file. However, you can
add filters so that the stream writes to multiple files. This is useful
if you want to restrict the set of fields being logged to the new file.
In this example, a new filter is added to the Conn::LOG stream that writes
two fields to a new log file:
.. code:: bro
    event bro_init()
        {
        # Add a new filter to the Conn::LOG stream that logs only
        # timestamp and originator address.
        local filter: Log::Filter = [$name="orig-only", $path="origs",
                                     $include=set("ts", "id.orig_h")];
        Log::add_filter(Conn::LOG, filter);
        }
Notice how the "include" filter attribute specifies a set that limits the
fields to the ones given. The names correspond to those in the
:bro:type:`Conn::Info` record (however, because the "id" field is itself a
record, we can specify an individual field of "id" by the dot notation
shown in the example).
Using the code above, in addition to the regular ``conn.log``, you will
now also get a new log file ``origs.log`` that looks like the regular
``conn.log``, but will have only the fields specified in the "include"
filter attribute.
If you want to skip only some fields but keep the rest, there is a
corresponding ``exclude`` filter attribute that you can use instead of
``include`` to list only the ones you are not interested in.
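For instance, the following sketch keeps every ``conn.log`` column except
two (the excluded field names are merely examples taken from
:bro:type:`Conn::Info`):

.. code:: bro

    event bro_init()
        {
        # Log all conn.log columns except "uid" and "history" to a
        # separate file.
        local filter: Log::Filter = [$name="conn-brief", $path="conn-brief",
                                     $exclude=set("uid", "history")];
        Log::add_filter(Conn::LOG, filter);
        }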
If you want to make this the only log file for the stream, you can
remove the default filter:
.. code:: bro
    event bro_init()
        {
        # Remove the filter called "default".
        Log::remove_filter(Conn::LOG, "default");
        }
Determine Log Path Dynamically
------------------------------
Instead of using the "path" filter attribute, a filter can determine
output paths *dynamically* based on the record being logged. That
allows, e.g., to record local and remote connections into separate
files. To do this, you define a function that returns the desired path,
and use the "path_func" filter attribute:
.. code:: bro
    # Note: if using BroControl then you don't need to redef local_nets.
    redef Site::local_nets = { 192.168.0.0/16 };

    function myfunc(id: Log::ID, path: string, rec: Conn::Info) : string
        {
        # Return "conn-local" if originator is a local IP, otherwise
        # return "conn-remote".
        local r = Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
        return fmt("%s-%s", path, r);
        }

    event bro_init()
        {
        local filter: Log::Filter = [$name="conn-split",
                                     $path_func=myfunc,
                                     $include=set("ts", "id.orig_h")];
        Log::add_filter(Conn::LOG, filter);
        }
Running this will now produce two new files, ``conn-local.log`` and
``conn-remote.log``, with the corresponding entries (for this example to work,
the ``Site::local_nets`` must specify your local network). One could extend
this further for example to log information by subnets or even by IP
address. Be careful, however, as it is easy to create many files very
quickly.
The ``myfunc`` function has one drawback: it can be used
only with the :bro:enum:`Conn::LOG` stream as the record type is hardcoded
into its argument list. However, Bro allows to do a more generic
variant:
.. code:: bro
    function myfunc(id: Log::ID, path: string,
                    rec: record { id: conn_id; } ) : string
        {
        local r = Site::is_local_addr(rec$id$orig_h) ? "local" : "remote";
        return fmt("%s-%s", path, r);
        }
This function can be used with all log streams that have records
containing an ``id: conn_id`` field.
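For instance, assuming the generic ``myfunc`` above is loaded, a sketch of
attaching the same logic to the HTTP stream (whose :bro:type:`HTTP::Info`
record also contains an ``id: conn_id`` field) could look like this:

.. code:: bro

    event bro_init()
        {
        # Split http.log into http-local.log and http-remote.log.
        local filter: Log::Filter = [$name="http-split", $path_func=myfunc];
        Log::add_filter(HTTP::LOG, filter);
        }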
Filter Log Records
------------------
We have seen how to customize the columns being logged, but
you can also control which records are written out by providing a
predicate that will be called for each log record:
.. code:: bro
    function http_only(rec: Conn::Info) : bool
        {
        # Record only connections with successfully analyzed HTTP traffic.
        return rec?$service && rec$service == "http";
        }

    event bro_init()
        {
        local filter: Log::Filter = [$name="http-only", $path="conn-http",
                                     $pred=http_only];
        Log::add_filter(Conn::LOG, filter);
        }
This will result in a new log file ``conn-http.log`` that contains only
the log records from ``conn.log`` that are analyzed as HTTP traffic.
Rotation
--------
The log rotation interval is globally controllable for all
filters by redefining the :bro:id:`Log::default_rotation_interval` option
(note that when using BroControl, this option is set automatically via
the BroControl configuration).
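For example, to rotate all logs hourly:

.. code:: bro

    redef Log::default_rotation_interval = 1 hr;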
Or specifically for certain :bro:type:`Log::Filter` instances by setting
their ``interv`` field.  Here's an example of changing just the
``Conn::LOG`` stream's default filter rotation interval:

.. code:: bro

    event bro_init()
        {
        local f = Log::get_filter(Conn::LOG, "default");
        f$interv = 1 min;
        Log::add_filter(Conn::LOG, f);
        }
Writers
=======

Each filter has a writer.  If you do not specify a writer when adding a
filter to a stream, then the ASCII writer is the default.

There are two ways to specify a non-default writer.  To change the default
writer for all log filters, just redefine the :bro:id:`Log::default_writer`
option.  Alternatively, you can specify the writer to use on a per-filter
basis by setting a value for the filter's "writer" field.  Consult the
documentation of the writer to use to see if there are other options that are
needed.
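As a sketch of the first approach, switching every log filter to another
built-in writer (the SQLite writer is used here only as an illustration;
see its documentation for the options it expects) would look like this:

.. code:: bro

    redef Log::default_writer = Log::WRITER_SQLITE;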
ASCII Writer
------------

By default, the ASCII writer outputs log files that begin with several
lines of metadata, followed by the actual log output.  The metadata
describes the format of the log file, the "path" of the log (i.e., the log
filename without file extension), and also specifies the time that the log
was created and the time when Bro finished writing to it.

The ASCII writer has a number of options for customizing the format of its
output, see :doc:`/scripts/base/frameworks/logging/writers/ascii.bro`.
If you change the output format options, then be careful to check whether
your postprocessing scripts can still recognize your log files.

Some writer options are global (i.e., they affect all log filters using
that log writer).  For example, to change the output format of all ASCII
logs to JSON format:

.. code:: bro

    redef LogAscii::use_json = T;

Some writer options are filter-specific (i.e., they affect only the filters
that explicitly specify the option).  For example, to change the output
format of the ``conn.log`` only:

.. code:: bro

    event bro_init()
        {
        local f = Log::get_filter(Conn::LOG, "default");
        # Use tab-separated-value mode
        f$config = table(["tsv"] = "T");
        Log::add_filter(Conn::LOG, f);
        }
Other Writers
-------------

Bro supports the following additional built-in output formats:

.. toctree::
   :maxdepth: 1

   logging-input-sqlite

Additional writers are available as external plugins:

.. toctree::
   :maxdepth: 1

   ../components/bro-plugins/dataseries/README
   ../components/bro-plugins/elasticsearch/README
How to Upgrade
==============

If you're doing an upgrade install (rather than a fresh install),
there are two suggested approaches: either install Bro using the same
installation prefix directory as before, or pick a new prefix and copy
local customizations over.

Regardless of which approach you choose, if you are using BroControl, then
before doing the upgrade you should stop all running Bro processes with the
"broctl stop" command.  After the upgrade is complete then you will need
to run "broctl deploy".
In the following we summarize general guidelines for upgrading, see
the :ref:`release-notes` for version-specific information.

Review the files for differences
before copying and make adjustments as necessary (use the new version for
differences that aren't a result of a local change).  Of particular note,
the copied version of ``$prefix/etc/broctl.cfg`` is likely to need changes
to any settings that specify a pathname.
.. _MacPorts: http://www.macports.org
.. _Fink: http://www.finkproject.org
.. _Homebrew: http://brew.sh
.. _bro downloads page: https://www.bro.org/download/index.html

.. _installing-bro:
Bro requires the following libraries and tools to be installed
before you begin:

* Libz

* Bash (for BroControl)

* Python (for BroControl)

* C++ Actor Framework (CAF) version 0.14 (http://actor-framework.org)

To build Bro from source, the following additional dependencies are required:

* CMake 2.8 or greater (http://www.cmake.org)

* Make

* C/C++ compiler with C++11 support (GCC 4.8+ or Clang 3.3+)

* SWIG (http://www.swig.org)

* Bison (GNU Parser Generator)

* Flex (Fast Lexical Analyzer)

* Libpcap headers (http://www.tcpdump.org)

* OpenSSL headers (http://www.openssl.org)

* zlib headers

* Python

To install CAF, first download the source code of the required version from:
https://github.com/actor-framework/actor-framework/releases

To install the required dependencies, you can use:
* FreeBSD:

  .. console::

     sudo pkg install bash cmake swig bison python py27-sqlite3

  Note that in older versions of FreeBSD, you might have to use the
  "pkg_add -r" command instead of "pkg install".

* Mac OS X:

  Compiling source code on Macs requires first installing Xcode_ (in older
  versions of Xcode, you would then need to go through its
  "Preferences..." -> "Downloads" menus to install the "Command Line Tools"
  component).

  OS X comes with all required dependencies except for CMake_, SWIG_, and CAF.
  Distributions of these dependencies can likely be obtained from your
  preferred Mac OS X package management system (e.g. Homebrew_, MacPorts_,
  or Fink_).  Specifically for Homebrew, the ``cmake``, ``swig``,
  and ``caf`` packages provide the required dependencies.
Optional Dependencies
---------------------

Bro can make use of some optional libraries and tools if they are found at
build time:

* LibGeoIP (for geolocating IP addresses)
* sendmail (enables Bro and BroControl to send mail)
* curl (used by a Bro script that implements active HTTP)
* gperftools (tcmalloc is used to improve memory and CPU usage)
* jemalloc (http://www.canonware.com/jemalloc/)
* PF_RING (Linux only, see :doc:`Cluster Configuration <../configuration/index>`)
* ipsumdump (for trace-summary; http://www.cs.ucla.edu/~kohler/ipsumdump)

LibGeoIP is probably the most interesting and can be installed
on most platforms by following the instructions for :ref:`installing
LibGeoIP <installing-libgeoip>`.
Bro can be downloaded in either pre-built binary package or source
code forms.

Using Pre-Built Binary Release Packages
---------------------------------------

See the `bro downloads page`_ for currently supported/targeted
platforms for binary releases and for installation instructions.

* Linux Packages

  Linux based binary installations are usually performed by adding
  information about the Bro packages to the respective system packaging
  tool.  Then the usual system utilities such as ``apt``, ``yum``
  or ``zypper`` are used to perform the installation.  By default,
  installations of binary packages will go into ``/opt/bro``.

* MacOS Disk Image with Installer
  Everything installed by the package will go into ``/opt/bro``.

The primary install prefix for binary packages is ``/opt/bro``.
Installing from Source
----------------------

Bro releases are bundled into source packages for convenience and are
available on the `bro downloads page`_.

Alternatively, the latest Bro development version
can be obtained through git repositories
hosted at ``git.bro.org``.  See our `git development documentation
<https://www.bro.org/development/howtos/process.html>`_ for comprehensive
information on Bro's use of git revision control, but the short story
for downloading the full source code experience for Bro via git is:
.. console::

   git clone --recursive git://git.bro.org/bro

The typical way to build and install from source is as follows (for more
options, run ``./configure --help``):

.. console::

   ./configure
   make
   make install
If the ``configure`` script fails, then it is most likely because it either
couldn't find a required dependency or it couldn't find a sufficiently new
version of a dependency. Assuming that you already installed all required
dependencies, then you may need to use one of the ``--with-*`` options
that can be given to the ``configure`` script to help it locate a dependency.
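For instance, pointing the build at a specific OpenSSL installation might
look like this (the path here is hypothetical):

.. console::

   ./configure --with-openssl=/opt/local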
The default installation path is ``/usr/local/bro``, which would typically
require root privileges when doing the ``make install``.  A different
installation path can be chosen by specifying the ``configure`` script
``--prefix`` option.  Note that ``/usr`` and ``/opt/bro`` are the
standard prefixes for binary Bro packages to be installed, so those are
typically not good choices unless you are creating such a package.
OpenBSD users, please see our `FAQ
<https://www.bro.org/documentation/faq.html>`_ if you are having
problems installing Bro.
Depending on the Bro package you downloaded, there may be auxiliary
tools and libraries available in the ``aux/`` directory.  Some of them
will be automatically built and installed along with Bro.  There are
``--disable-*`` options that can be given to the configure script to
turn off unwanted auxiliary projects that would otherwise be installed
automatically.  Finally, use ``make install-aux`` to install some of
the other programs that are in the ``aux/bro-aux`` directory.

Finally, if you want to build the Bro documentation (not required, because
all of the documentation for the latest Bro release is available on the
Bro web site), there are instructions in ``doc/README`` in the source
distribution.
Configure the Run-Time Environment
==================================

You may want to adjust your ``PATH`` environment variable
according to the platform/shell/package you're using.  For example:

Bourne-Shell Syntax:
Managing Bro with BroControl
============================

BroControl is an interactive shell for easily operating/managing Bro
installations on a single system or even across multiple systems in a
traffic-monitoring cluster.  This section explains how to use BroControl
to manage a stand-alone Bro installation.  For a complete reference on
BroControl, see the :doc:`BroControl <../components/broctl/README>`
documentation.  For instructions on how to configure a Bro cluster,
see the :doc:`Cluster Configuration <../configuration/index>` documentation.

A Minimal Starting Configuration
--------------------------------
Here is a more detailed explanation of each attribute:

.. bro:attr:: &redef

    Allows use of a :bro:keyword:`redef` to redefine initial values of
    global variables (i.e., variables declared either :bro:keyword:`global`
    or :bro:keyword:`const`).  Example::

        const clever = T &redef;
        global cache_size = 256 &redef;

    Note that a variable declared "global" can also have its value changed
    with assignment statements (doesn't matter if it has the "&redef"
    attribute or not).
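    A later script could then adjust those initial values, for example
    (the new values here are purely illustrative)::

        redef clever = F;
        redef cache_size = 512;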
.. bro:attr:: &priority

    Specifies the execution priority (as a signed integer) of a hook or
    event handler.  Higher values are executed before lower ones.  The
    default value is 0.

.. bro:attr:: &rotate_interval

    Rotates a file after a specified interval.

    Note: This attribute is deprecated and will be removed in a future release.

.. bro:attr:: &rotate_size

    Rotates a file after it has reached a given size in bytes.

    Note: This attribute is deprecated and will be removed in a future release.

.. bro:attr:: &encrypt

    Encrypts files right before writing them to disk.

    Note: This attribute is deprecated and will be removed in a future release.

.. bro:attr:: &raw_output

    Opens a file in raw mode, i.e., non-ASCII characters are not
    escaped.

.. bro:attr:: &deprecated

    The associated identifier is marked as deprecated and will be
    removed in a future version of Bro.  Look in the NEWS file for more
    instructions to migrate code that uses deprecated functionality.
Directives are evaluated before script execution begins.

.. bro:keyword:: @load-plugin

    Activate a dynamic plugin with the specified plugin name.  The specified
    plugin must be located in Bro's plugin search path.  Example::

        @load-plugin Demo::Rot13

    By default, Bro will automatically activate all dynamic plugins found
    in the plugin search path (the search path can be changed by setting
    the environment variable BRO_PLUGIN_PATH to a colon-separated list of
    directories).  However, in bare mode ("bro -b"), dynamic plugins can be
    activated only by using "@load-plugin", or by specifying the full
    plugin name on the Bro command-line (e.g., "bro Demo::Rot13"), or by
    setting the environment variable BRO_PLUGIN_ACTIVATE to a
    comma-separated list of plugin names.

.. bro:keyword:: @load-sigs

    This works similarly to "@load", except that in this case the filename
    represents a signature file.
Network Protocols
-----------------

+----------------------------+---------------------------------------+---------------------------------+
| irc.log                    | IRC commands and responses            | :bro:type:`IRC::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| kerberos.log               | Kerberos                              | :bro:type:`KRB::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| modbus.log                 | Modbus commands and responses         | :bro:type:`Modbus::Info`        |
+----------------------------+---------------------------------------+---------------------------------+
| modbus_register_change.log | Tracks changes to Modbus holding      | :bro:type:`Modbus::MemmapInfo`  |
|                            | registers                             |                                 |
+----------------------------+---------------------------------------+---------------------------------+
| mysql.log                  | MySQL                                 | :bro:type:`MySQL::Info`         |
+----------------------------+---------------------------------------+---------------------------------+
| radius.log                 | RADIUS authentication attempts        | :bro:type:`RADIUS::Info`        |
+----------------------------+---------------------------------------+---------------------------------+
| rdp.log                    | RDP                                   | :bro:type:`RDP::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| sip.log                    | SIP                                   | :bro:type:`SIP::Info`           |
+----------------------------+---------------------------------------+---------------------------------+
| smtp.log                   | SMTP transactions                     | :bro:type:`SMTP::Info`          |
+----------------------------+---------------------------------------+---------------------------------+
| snmp.log                   | SNMP messages                         | :bro:type:`SNMP::Info`          |
+----------------------------+---------------------------------------+---------------------------------+
Files
-----

+----------------------------+---------------------------------------+---------------------------------+
| Log File                   | Description                           | Field Descriptions              |
+============================+=======================================+=================================+
| files.log                  | File analysis results                 | :bro:type:`Files::Info`         |
+----------------------------+---------------------------------------+---------------------------------+
| pe.log                     | Portable Executable (PE)              | :bro:type:`PE::Info`            |
+----------------------------+---------------------------------------+---------------------------------+
| x509.log                   | X.509 certificate info                | :bro:type:`X509::Info`          |
+----------------------------+---------------------------------------+---------------------------------+
Declarations
------------

Declarations cannot occur within a function, hook, or event handler.

Declarations must appear before any statements (except those statements
that are in a function, hook, or event handler) in the concatenation of
all loaded Bro scripts.
.. bro:keyword:: module

.. bro:keyword:: global

    Variables declared with the "global" keyword will be global.
    If a type is not specified, then an initializer is required so that
    the type can be inferred.  Likewise, if an initializer is not supplied,
    then the type must be specified.  In some cases, when the type cannot
    be correctly inferred, the type must be specified even when an
    initializer is present.  Example::

        global pi = 3.14;
        global hosts: set[addr];
    Variable declarations outside of any function, hook, or event handler are
    required to use this keyword (unless they are declared with the
    :bro:keyword:`const` keyword instead).

    Definitions of functions, hooks, and event handlers are not allowed
    to use the "global" keyword.  However, function declarations (i.e., no
    function body is provided) can use the "global" keyword.

    The scope of a global variable begins where the declaration is located,
    and extends through all remaining Bro scripts that are loaded.
.. bro:keyword:: const

    A variable declared with the "const" keyword will be constant.
    Variables declared as constant are required to be initialized at the
    time of declaration.  Normally, the type is inferred from the initializer,
    but the type can be explicitly specified.  Example::

        const pi = 3.14;
        const ssh_port: port = 22/tcp;

    The value of a constant cannot be changed.  The only exception is if the
    variable is a global constant and has the :bro:attr:`&redef`
    attribute, but even then its value can be changed only with a
    :bro:keyword:`redef`.

    The scope of a constant is local if the declaration is in a
    function, hook, or event handler, and global otherwise.

    Note that the "const" keyword cannot be used with either the "local"
    or "global" keywords (i.e., "const" replaces "local" and "global").
.. bro:keyword:: redef

    There are three ways that "redef" can be used: to change the value of
    a global variable (but only if it has the :bro:attr:`&redef` attribute),
    to extend a record type or enum type, or to specify
    a new event handler body that replaces all those that were previously
    defined.
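    As a quick illustration of the first two uses (the identifiers below
    are purely illustrative)::

        # Change the value of a global that has the &redef attribute.
        redef Log::default_rotation_interval = 1 hr;

        # Extend a record type with a new field.
        redef record Conn::Info += {
            note: string &optional &log;
        };

        # Extend an enum type.
        redef enum Notice::Type += { My_Notice };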
Statements
----------

Statements (except those contained within a function, hook, or event
handler) can appear only after all global declarations in the concatenation
of all loaded Bro scripts.

Each statement in a Bro script must be terminated with a semicolon (with a
few exceptions noted below).  An individual statement can span multiple
lines.

Here are the statements that the Bro scripting language supports.
.. bro:keyword:: add

.. bro:keyword:: break

    The "break" statement is used to break out of a :bro:keyword:`switch`,
    :bro:keyword:`for`, or :bro:keyword:`while` statement.

.. bro:keyword:: delete
.. bro:keyword:: next

    The "next" statement can only appear within a :bro:keyword:`for` or
    :bro:keyword:`while` loop.  It causes execution to skip to the next
    iteration.

.. bro:keyword:: print
.. bro:keyword:: while

    A "while" loop iterates over a body statement as long as a given
    condition remains true.

    A :bro:keyword:`break` statement can be used at any time to immediately
    terminate the "while" loop.
A compound statement is required in order to execute more than one
statement in the body of a :bro:keyword:`for`, :bro:keyword:`while`,
:bro:keyword:`if`, or :bro:keyword:`when` statement.

Example::
Here is a more detailed description of each type:

    table [ type^+ ] of type

where *type^+* is one or more types, separated by commas.  The
index type cannot be any of the following types: pattern, table, set,
vector, file, opaque, any.

Here is an example of declaring a table indexed by "count" values
and yielding "string" values:

.. code:: bro

    global a: table[count] of string;

The yield type can also be more complex:
.. code:: bro

    global a: table[count] of table[addr, port] of string;
    set [ type^+ ]

where *type^+* is one or more types separated by commas.  The
index type cannot be any of the following types: pattern, table, set,
vector, file, opaque, any.

Sets can be initialized by listing elements enclosed by curly braces:
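For example (an illustrative declaration):

.. code:: bro

    global service_ports: set[port] = { 21/tcp, 23/tcp, 80/tcp };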
type Service: record {
    rfc: count;
};

function print_service(serv: Service)
    {
    print fmt("Service: %s(RFC%d)",serv$name, serv$rfc);
services: set[Service]; services: set[Service];
}; };
function print_service(serv: Service): string function print_service(serv: Service)
{ {
print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc); print fmt(" Service: %s(RFC%d)",serv$name, serv$rfc);
@ -17,7 +17,7 @@ function print_service(serv: Service): string
print fmt(" port: %s", p); print fmt(" port: %s", p);
} }
function print_system(sys: System): string function print_system(sys: System)
{ {
print fmt("System: %s", sys$name); print fmt("System: %s", sys$name);
\fB\-r\fR,\ \-\-readfile <readfile>
read from given tcpdump file
.TP
\fB\-s\fR,\ \-\-rulefile <rulefile>
read rules from given file
.TP
\fB\-C\fR,\ \-\-no\-checksums
ignore checksums
.TP
\fB\-F\fR,\ \-\-force\-dns
force DNS
.TP
\fB\-I\fR,\ \-\-print\-id <ID name>
print out given ID
.TP
\fB\-K\fR,\ \-\-md5\-hashkey <hashkey>
set key for MD5\-keyed hashing
.TP
\fB\-N\fR,\ \-\-print\-plugins
print available plugins and exit (\fB\-NN\fR for verbose)
.TP
\fB\-P\fR,\ \-\-prime\-dns
prime DNS
.TP
\fB\-W\fR,\ \-\-watchdog
activate watchdog timer
.TP
\fB\-X\fR,\ \-\-broxygen <cfgfile>
generate documentation based on config file
.TP
\fB\-\-pseudo\-realtime[=\fR<speedup>]
load seeds from given file
.TP
\fB\-\-save\-seeds\fR <file>
save seeds to given file
.TP
The following option is available only when Bro is built with the \-\-enable\-debug configure option:
.TP
\fB\-B\fR,\ \-\-debug <dbgstreams>
Enable debugging output for selected streams ('-B help' for help)
.TP
The following options are available only when Bro is built with gperftools support (use the \-\-enable\-perftools and \-\-enable\-perftools\-debug configure options):
.TP
\fB\-m\fR,\ \-\-mem-leaks
show leaks
.TP
\fB\-M\fR,\ \-\-mem-profile
record heap
.SH ENVIRONMENT
.TP
.B BROPATH
Support for Portable Executable (PE) file analysis.
The Broker communication framework facilitates connecting to remote Bro
instances to share state and transfer events.
signature file-coldfusion {
    file-magic /^([\x0d\x0a[:blank:]]*(<!--.*-->)?)*<(CFPARAM|CFSET|CFIF)/
}

# Adobe Flash Media Manifest
signature file-f4m {
    file-mime "application/f4m", 49
    file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[mM][aA][nN][iI][fF][eE][sS][tT][\x0d\x0a[:blank:]]{1,}xmlns=\"http:\/\/ns\.adobe\.com\/f4m\//
}

# Microsoft LNK files
signature file-lnk {
    file-mime "application/x-ms-shortcut", 49
signature file-mp2p {
    file-magic /\x00\x00\x01\xba([\x40-\x7f\xc0-\xff])/
}

# MPEG transport stream data. These files typically have the extension "ts".
# Note: The 0x47 repeats every 188 bytes. Using four as the number of
# occurrences for the test here is arbitrary.
signature file-mp2t {
    file-mime "video/mp2t", 40
    file-magic /^(\x47.{187}){4}/
}

# Silicon Graphics video
signature file-sgi-movie {
    file-mime "video/x-sgi-movie", 70
    file-magic /^MOVI/
}

signature file-3gpp {
    file-mime "video/3gpp", 60
    file-magic /^....ftyp(3g[egps2]|avc1|mmp4)/
}
##! The input framework provides a way to read previously stored data either
##! as an event stream or into a Bro table.

module Input;

export {
	type Event: enum {
		## New data has been imported.
		EVENT_NEW = 0,
		## Existing data has been changed.
		EVENT_CHANGED = 1,
		## Previously existing data has been removed.
		EVENT_REMOVED = 2,
	};

	## Type that defines the input stream read mode.
	type Mode: enum {
		## Do not automatically reread the file after it has been read.
		MANUAL = 0,
		## Reread the entire file each time a change is found.
		REREAD = 1,
		## Read data from end of file each time new data is appended.
		STREAM = 2
	};
	## Separator between fields.
	## Please note that the separator has to be exactly one character long.
	## Individual readers can use a different value.
	const separator = "\t" &redef;

	## Separator between set elements.
	## Please note that the separator has to be exactly one character long.
	## Individual readers can use a different value.
	const set_separator = "," &redef;

	## String to use for empty fields.
	## Individual readers can use a different value.
	const empty_field = "(empty)" &redef;

	## String to use for an unset &optional field.
	## Individual readers can use a different value.
	const unset_field = "-" &redef;

	## Flag that controls if the input framework accepts records
	## that contain types that are not supported (at the moment
	## file and function). If true, the input framework will
	## warn in these cases, but continue. If false, it will
	## abort. Defaults to false (abort).
	const accept_unsupported_types = F &redef;

	## A table input stream type used to send data to a Bro table.
	type TableDescription: record {
		# Common definitions for tables and events

		## String that allows the reader to find the source of the data.
		## For `READER_ASCII`, this is the filename.
		source: string;

		## Reader to use for this stream.
		reader: Reader &default=default_reader;

		## Read mode to use for this stream.
		mode: Mode &default=default_mode;

		## Name of the input stream. This is used by some functions to
		## manipulate the stream.
		name: string;

		# Special definitions for tables

		## Table which will receive the data read by the input framework.
		destination: any;

		## Record that defines the index of the table.
		idx: any;

		## Record that defines the values used as the elements of the table.
		## If this is undefined, then *destination* must be a set.
		val: any &optional;

		## Defines if the value of the table is a record (default), or a single
		## value. When this is set to false, then *val* can only contain one
		## element.
		want_record: bool &default=T;

		## The event that is raised each time a value is added to, changed in,
		## or removed from the table. The event will receive an
		## Input::TableDescription as the first argument, an Input::Event
		## enum as the second argument, the *idx* record as the third argument
		## and the value (record) as the fourth argument.
		ev: any &optional;

		## Predicate function that can decide if an insertion, update or removal
		## should really be executed. Parameters have same meaning as for the
		## event.
		## If true is returned, the update is performed. If false is returned,
		## it is skipped.
		pred: function(typ: Input::Event, left: any, right: any): bool &optional;

		## A key/value table that will be passed to the reader.
		## Interpretation of the values is left to the reader, but
		## usually they will be used for configuration purposes.
		config: table[string] of string &default=table();
	};
## EventFilter description type used for the `event` method. ## An event input stream type used to send input data to a Bro event.
type EventDescription: record { type EventDescription: record {
# Common definitions for tables and events # Common definitions for tables and events
@ -116,19 +128,26 @@ export {
# Special definitions for events
## Record type describing the fields to be retrieved from the input
## source.
fields: any;
## If this is false, the event receives each value in *fields* as a
## separate argument.
## If this is set to true (default), the event receives all fields in
## a single record value.
want_record: bool &default=T;
## The event that is raised each time a new line is received from the
## reader. The event will receive an Input::EventDescription record
## as the first argument, an Input::Event enum as the second
## argument, and the fields (as specified in *fields*) as the following
## arguments (this will either be a single record value containing
## all fields, or each field value as a separate argument).
ev: any;
## A key/value table that will be passed to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
@ -155,28 +174,29 @@ export {
## field will be the same value as the *source* field.
name: string;
## A key/value table that will be passed to the reader.
## Interpretation of the values is left to the reader, but
## usually they will be used for configuration purposes.
config: table[string] of string &default=table();
};
## Create a new table input stream from a given source.
##
## description: `TableDescription` record describing the source.
##
## Returns: true on success.
global add_table: function(description: Input::TableDescription) : bool;
## Create a new event input stream from a given source.
##
## description: `EventDescription` record describing the source.
##
## Returns: true on success.
global add_event: function(description: Input::EventDescription) : bool;
## Create a new file analysis input stream from a given source. Data read
## from the source is automatically forwarded to the file analysis
## framework.
##
## description: A record describing the source.
##
@ -199,7 +219,11 @@ export {
## Event that is called when the end of a data source has been reached,
## including after an update.
##
## name: Name of the input stream.
##
## source: String that identifies the data source (such as the filename).
global end_of_data: event(name: string, source: string);
}
@load base/bif/input.bif
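As a point of reference, here is a minimal sketch of how these records fit
together; it reads a hypothetical tab-separated ``blacklist.file`` into a
table (the names ``Idx``, ``Val``, and ``blacklist`` are illustrative and not
part of this change)::

    type Idx: record {
        ip: addr;
    };
    type Val: record {
        timestamp: time;
        reason: string;
    };
    global blacklist: table[addr] of Val = table();

    event bro_init()
        {
        Input::add_table([$source="blacklist.file", $name="blacklist",
                          $idx=Idx, $val=Val, $destination=blacklist,
                          $mode=Input::REREAD]);
        }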
@ -11,7 +11,9 @@ export {
##
## name: name of the input stream.
## source: source of the input stream.
## exit_code: exit code of the program, or number of the signal that forced
##            the program to exit.
## signal_exit: false when program exited normally, true when program was
##              forced to exit by a signal.
global process_finished: event(name: string, source:string, exit_code:count, signal_exit:bool);
}
@ -6,9 +6,10 @@
module Log;
export {
## Type that defines an ID unique to each log stream. Scripts creating new
## log streams need to redef this enum to add their own specific log ID.
## The log ID implicitly determines the default name of the generated log
## file.
type Log::ID: enum {
## Dummy place-holder.
UNKNOWN
@ -20,25 +21,24 @@ export {
## If true, remote logging is by default enabled for all filters.
const enable_remote_logging = T &redef;
## Default writer to use if a filter does not specify anything else.
const default_writer = WRITER_ASCII &redef;
## Default separator to use between fields.
## Individual writers can use a different value.
const separator = "\t" &redef;
## Default separator to use between elements of a set.
## Individual writers can use a different value.
const set_separator = "," &redef;
## Default string to use for empty fields. This should be different
## from *unset_field* to make the output unambiguous.
## Individual writers can use a different value.
const empty_field = "(empty)" &redef;
## Default string to use for an unset &optional field.
## Individual writers can use a different value.
const unset_field = "-" &redef;
## Type defining the content of a logging stream.
@ -69,7 +69,7 @@ export {
## If no ``path`` is defined for the filter, then the first call
## to the function will contain an empty string.
##
## rec: An instance of the stream's ``columns`` type with its
## fields set to the values to be logged.
##
## Returns: The path to be used for the filter.
@ -87,7 +87,8 @@ export {
terminating: bool; ##< True if rotation occurred due to Bro shutting down.
};
## Default rotation interval to use for filters that do not specify
## an interval. Zero disables rotation.
##
## Note that this is overridden by the BroControl LogRotationInterval
## option.
@ -122,8 +123,8 @@ export {
## Indicates whether a log entry should be recorded.
## If not given, all entries are recorded.
##
## rec: An instance of the stream's ``columns`` type with its
## fields set to the values to be logged.
##
## Returns: True if the entry is to be recorded.
pred: function(rec: any): bool &optional;
@ -131,10 +132,10 @@ export {
## Output path for recording entries matching this
## filter.
##
## The specific interpretation of the string is up to the
## logging writer, and may for example be the destination
## file name. Generally, filenames are expected to be given
## without any extensions; writers will add appropriate
## extensions automatically.
##
## If this path is found to conflict with another filter's
@ -151,7 +152,7 @@ export {
## easy to flood the disk by returning a new string for each
## connection. Upon adding a filter to a stream, if neither
## ``path`` nor ``path_func`` is explicitly set by them, then
## :bro:see:`Log::default_path_func` is used.
##
## id: The ID associated with the log stream.
##
@ -161,7 +162,7 @@ export {
## then the first call to the function will contain an
## empty string.
##
## rec: An instance of the stream's ``columns`` type with its
## fields set to the values to be logged.
##
## Returns: The path to be used for the filter, which will be
@ -185,7 +186,7 @@ export {
## If true, entries are passed on to remote peers.
log_remote: bool &default=enable_remote_logging;
## Rotation interval. Zero disables rotation.
interv: interval &default=default_rotation_interval;
## Callback function to trigger for rotated files. If not set, the
@ -215,9 +216,9 @@ export {
## Removes a logging stream completely, stopping all the threads.
##
## id: The ID associated with the logging stream.
##
## Returns: True if the stream was successfully removed.
##
## .. bro:see:: Log::create_stream
global remove_stream: function(id: ID) : bool;
@ -1,15 +1,15 @@
##! Interface for the ASCII log writer. Redefinable options are available
##! to tweak the output format of ASCII logs.
##!
##! The ASCII writer currently supports one writer-specific per-filter config
##! option: setting ``tsv`` to the string ``T`` turns the output into
##! "tab-separated-value" mode where only a single header row with the column
##! names is printed out as meta information, with no "# fields" prepended; no
##! other meta data gets included in that mode. Example filter using this::
##!
##!    local f: Log::Filter = [$name = "my-filter",
##!                            $writer = Log::WRITER_ASCII,
##!                            $config = table(["tsv"] = "T")];
##!
module LogAscii;
@ -29,6 +29,8 @@ export {
## Format of timestamps when writing out JSON. By default, the JSON
## formatter will use double values for timestamps which represent the
## number of seconds from the UNIX epoch.
##
## This option is also available as a per-filter ``$config`` option.
const json_timestamps: JSON::TimestampFormat = JSON::TS_EPOCH &redef;
## If true, include lines with log meta information such as column names
@ -19,7 +19,7 @@ export {
const unset_field = Log::unset_field &redef;
## String to use for empty fields. This should be different from
## *unset_field* to make the output unambiguous.
const empty_field = Log::empty_field &redef;
}
@ -138,7 +138,7 @@ redef enum PcapFilterID += {
function test_filter(filter: string): bool
{
if ( ! Pcap::precompile_pcap_filter(FilterTester, filter) )
{
# The given filter was invalid
# TODO: generate a notice.
@ -273,7 +273,7 @@ function install(): bool
return F;
local ts = current_time();
if ( ! Pcap::precompile_pcap_filter(DefaultPcapFilter, tmp_filter) )
{
NOTICE([$note=Compile_Failure,
$msg=fmt("Compiling packet filter failed"),
@ -303,7 +303,7 @@ function install(): bool
}
info$filter = current_filter;
if ( ! Pcap::install_pcap_filter(DefaultPcapFilter) )
{
# Installing the filter failed for some reason.
info$success = F;
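For reference, the newly namespaced functions are used as below; the filter ID
``MyFilter`` and the BPF expression are illustrative only::

    redef enum PcapFilterID += { MyFilter };

    event bro_init()
        {
        if ( Pcap::precompile_pcap_filter(MyFilter, "tcp port 80") )
            Pcap::install_pcap_filter(MyFilter);
        }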
@ -280,6 +280,13 @@ function parse_mozilla(unparsed_version: string): Description
v = parse(parts[1])$version;
}
}
else if ( /AdobeAIR\/[0-9\.]*/ in unparsed_version )
{
software_name = "AdobeAIR";
parts = split_string_all(unparsed_version, /AdobeAIR\/[0-9\.]*/);
if ( 1 in parts )
v = parse(parts[1])$version;
}
else if ( /AppleWebKit\/[0-9\.]*/ in unparsed_version )
{
software_name = "Unspecified WebKit";
@ -345,6 +345,12 @@ type connection: record {
## for the connection unless the :bro:id:`tunnel_changed` event is
## handled and reassigns this field to the new encapsulation.
tunnel: EncapsulatingConnVector &optional;
## The outer VLAN, if applicable, for this connection.
vlan: int &optional;
## The inner VLAN, if applicable, for this connection.
inner_vlan: int &optional;
};
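A sketch (not part of this change) of how a script might consume the new
VLAN fields::

    event connection_state_remove(c: connection)
        {
        if ( c?$vlan )
            print fmt("%s: outer VLAN %d", c$uid, c$vlan);
        }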
## Default amount of time a file can be inactive before the file analysis
@ -740,6 +746,7 @@ type pcap_packet: record {
caplen: count; ##< The number of bytes captured (<= *len*).
len: count; ##< The length of the packet in bytes, including link-level header.
data: string; ##< The payload of the packet, including link-level header.
link_type: link_encap; ##< Layer 2 link encapsulation type.
};
## GeoIP location information.
@ -954,6 +961,11 @@ const tcp_max_above_hole_without_any_acks = 16384 &redef;
## .. bro:see:: tcp_max_initial_window tcp_max_above_hole_without_any_acks
const tcp_excessive_data_without_further_acks = 10 * 1024 * 1024 &redef;
## Number of TCP segments to buffer beyond what's been acknowledged already
## to detect retransmission inconsistencies. Zero disables any additional
## buffering.
const tcp_max_old_segments = 0 &redef;
## For services without a handler, these sets define originator-side ports
## that still trigger reassembly.
##
@ -1495,6 +1507,34 @@ type pkt_hdr: record {
icmp: icmp_hdr &optional; ##< The ICMP header if an ICMP packet.
};
## Values extracted from the layer 2 header.
##
## .. bro:see:: pkt_hdr
type l2_hdr: record {
encap: link_encap; ##< L2 link encapsulation.
len: count; ##< Total frame length on wire.
cap_len: count; ##< Captured length.
src: string &optional; ##< L2 source (if Ethernet).
dst: string &optional; ##< L2 destination (if Ethernet).
vlan: count &optional; ##< Outermost VLAN tag if any (and Ethernet).
inner_vlan: count &optional; ##< Innermost VLAN tag if any (and Ethernet).
eth_type: count &optional; ##< Innermost Ethertype (if Ethernet).
proto: layer3_proto; ##< L3 protocol.
};
## A raw packet header, consisting of the L2 header and everything in
## :bro:id:`pkt_hdr`.
##
## .. bro:see:: raw_packet pkt_hdr
type raw_pkt_hdr: record {
l2: l2_hdr; ##< The layer 2 header.
ip: ip4_hdr &optional; ##< The IPv4 header if an IPv4 packet.
ip6: ip6_hdr &optional; ##< The IPv6 header if an IPv6 packet.
tcp: tcp_hdr &optional; ##< The TCP header if a TCP packet.
udp: udp_hdr &optional; ##< The UDP header if a UDP packet.
icmp: icmp_hdr &optional; ##< The ICMP header if an ICMP packet.
};
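As a usage sketch (not part of this change), a handler for the ``raw_packet``
event can read the L2 fields; note that handling this event imposes
per-packet overhead::

    event raw_packet(p: raw_pkt_hdr)
        {
        if ( p$l2?$src && p$l2?$dst )
            print fmt("%s -> %s (%s)", p$l2$src, p$l2$dst, p$l2$proto);
        }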
## A Teredo origin indication header. See :rfc:`4380` for more information
## about the Teredo protocol.
##
@ -2469,7 +2509,7 @@ global dns_skip_all_addl = T &redef;
## If a DNS request includes more than this many queries, assume it's non-DNS
## traffic and do not process it. Set to 0 to turn off this functionality.
global dns_max_queries = 25 &redef;
## HTTP session statistics.
##
@ -3115,6 +3155,186 @@ export {
};
}
@load base/bif/plugins/Bro_KRB.types.bif
module KRB;
export {
## KDC Options. See :rfc:`4120`
type KRB::KDC_Options: record {
## The ticket to be issued should have its forwardable flag set.
forwardable : bool;
## A (TGT) request for forwarding.
forwarded : bool;
## The ticket to be issued should have its proxiable flag set.
proxiable : bool;
## A request for a proxy.
proxy : bool;
## The ticket to be issued should have its may-postdate flag set.
allow_postdate : bool;
## A request for a postdated ticket.
postdated : bool;
## The ticket to be issued should have its renewable flag set.
renewable : bool;
## Reserved for opt_hardware_auth
opt_hardware_auth : bool;
## Request that the KDC not check the transited field of a TGT against
## the policy of the local realm before it will issue derivative tickets
## based on the TGT.
disable_transited_check : bool;
## If a ticket with the requested lifetime cannot be issued, a renewable
## ticket is acceptable
renewable_ok : bool;
## The ticket for the end server is to be encrypted in the session key
## from the additional TGT provided.
enc_tkt_in_skey : bool;
## The request is for a renewal.
renew : bool;
## The request is to validate a postdated ticket.
validate : bool;
};
## AP Options. See :rfc:`4120`
type KRB::AP_Options: record {
## Indicates that user-to-user-authentication is in use
use_session_key : bool;
## Mutual authentication is required
mutual_required : bool;
};
## Used in a few places in the Kerberos analyzer for elements
## that have a type and a string value.
type KRB::Type_Value: record {
## The data type
data_type : count;
## The data value
val : string;
};
type KRB::Type_Value_Vector: vector of KRB::Type_Value;
## A Kerberos host address. See :rfc:`4120`.
type KRB::Host_Address: record {
## IPv4 or IPv6 address
ip : addr &log &optional;
## NetBIOS address
netbios : string &log &optional;
## Some other type that we don't support yet
unknown : KRB::Type_Value &optional;
};
type KRB::Host_Address_Vector: vector of KRB::Host_Address;
## The data from the SAFE message. See :rfc:`4120`.
type KRB::SAFE_Msg: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (20 for SAFE_MSG)
msg_type : count;
## The application-specific data that is being passed
## from the sender to the receiver
data : string;
## Current time from the sender of the message
timestamp : time &optional;
## Sequence number used to detect replays
seq : count &optional;
## Sender address
sender : Host_Address &optional;
## Recipient address
recipient : Host_Address &optional;
};
## The data from the ERROR_MSG message. See :rfc:`4120`.
type KRB::Error_Msg: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (30 for ERROR_MSG)
msg_type : count;
## Current time on the client
client_time : time &optional;
## Current time on the server
server_time : time;
## The specific error code
error_code : count;
## Realm of the ticket
client_realm : string &optional;
## Name on the ticket
client_name : string &optional;
## Realm of the service
service_realm : string;
## Name of the service
service_name : string;
## Additional text to explain the error
error_text : string &optional;
## Optional pre-authentication data
pa_data : vector of KRB::Type_Value &optional;
};
## A Kerberos ticket. See :rfc:`4120`.
type KRB::Ticket: record {
## Protocol version number (5 for KRB5)
pvno : count;
## Realm
realm : string;
## Name of the service
service_name : string;
## Cipher the ticket was encrypted with
cipher : count;
};
type KRB::Ticket_Vector: vector of KRB::Ticket;
## The data from the AS_REQ and TGS_REQ messages. See :rfc:`4120`.
type KRB::KDC_Request: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (10 for AS_REQ, 12 for TGS_REQ)
msg_type : count;
## Optional pre-authentication data
pa_data : vector of KRB::Type_Value &optional;
## Options specified in the request
kdc_options : KRB::KDC_Options;
## Name on the ticket
client_name : string &optional;
## Realm of the service
service_realm : string;
## Name of the service
service_name : string &optional;
## Time the ticket is good from
from : time &optional;
## Time the ticket is good till
till : time;
## The requested renew-till time
rtime : time &optional;
## A random nonce generated by the client
nonce : count;
## The desired encryption algorithms, in order of preference
encryption_types : vector of count;
## Any additional addresses the ticket should be valid for
host_addrs : vector of KRB::Host_Address &optional;
## Additional tickets may be included for certain transactions
additional_tickets : vector of KRB::Ticket &optional;
};
## The data from the AS_REP and TGS_REP messages. See :rfc:`4120`.
type KRB::KDC_Response: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (11 for AS_REP, 13 for TGS_REP)
msg_type : count;
## Optional pre-authentication data
pa_data : vector of KRB::Type_Value &optional;
## Realm on the ticket
client_realm : string &optional;
## Name on the ticket
client_name : string;
## The ticket that was issued
ticket : KRB::Ticket;
};
}
module GLOBAL; module GLOBAL;
@load base/bif/event.bif @load base/bif/event.bif
@ -3442,20 +3662,11 @@ export {
## Toggle whether to do GRE decapsulation.
const enable_gre = T &redef;
## With this option set, the Teredo analysis will first check to see if
## other protocol analyzers have confirmed that they think they're
## parsing the right protocol and only continue with Teredo tunnel
## decapsulation if nothing else has yet confirmed. This can help
## reduce false positives of UDP traffic (e.g. DNS) that also happens
## to have a valid Teredo encapsulation.
const yielding_teredo_decapsulation = T &redef;
## With this set, the Teredo analyzer waits until it sees both sides
## of a connection using a valid Teredo encapsulation before issuing
## a :bro:see:`protocol_confirmation`. If it's false, the first
## occurrence of a packet with valid Teredo encapsulation causes a
## confirmation.
const delay_teredo_confirmation = T &redef;
## With this set, the GTP analyzer waits until the most-recent upflow
@ -3471,7 +3682,6 @@ export {
## (includes GRE tunnels).
const ip_tunnel_timeout = 24hrs &redef;
} # end export
module GLOBAL;
module Reporter;
export {
@ -3490,10 +3700,18 @@ export {
## external harness and shouldn't output anything to the console.
const errors_to_stderr = T &redef;
}
module GLOBAL;
module Pcap;
export {
## Number of bytes per packet to capture from live interfaces.
const snaplen = 8192 &redef;
## Number of Mbytes to provide as buffer space when capturing from live
## interfaces.
const bufsize = 128 &redef;
} # end export
module GLOBAL;
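A site capturing from a busy interface might, for example, raise both of the
Pcap options above from local.bro (values illustrative only)::

    redef Pcap::snaplen = 65535;
    redef Pcap::bufsize = 512;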
## Seed for hashes computed internally for probabilistic data structures. Using
## the same value here will make the hashes compatible between independent Bro
@ -45,11 +45,13 @@
@load base/protocols/ftp
@load base/protocols/http
@load base/protocols/irc
@load base/protocols/krb
@load base/protocols/modbus
@load base/protocols/mysql
@load base/protocols/pop3
@load base/protocols/radius
@load base/protocols/rdp
@load base/protocols/sip
@load base/protocols/snmp
@load base/protocols/smtp
@load base/protocols/socks
@ -87,7 +87,8 @@ export {
## f packet with FIN bit set
## r packet with RST bit set
## c packet with a bad checksum
## i inconsistent packet (e.g. FIN+RST bits set)
## q multi-flag packet (SYN+FIN or SYN+RST bits set)
## ====== ====================================================
##
## If the event comes from the originator, the letter is in
@ -86,7 +86,7 @@ event gridftp_possibility_timeout(c: connection)
{
# only remove if we did not already detect it and the connection
# is not yet at its end.
if ( "gridftp-data" !in c$service && ! (c?$conn && c$conn?$service) )
{
ConnThreshold::delete_bytes_threshold(c, size_threshold, T);
ConnThreshold::delete_bytes_threshold(c, size_threshold, F);
@ -270,7 +270,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
{
if ( /^[bB][aA][sS][iI][cC] / in value )
{
local userpass = decode_base64_conn(c$id, sub(value, /[bB][aA][sS][iI][cC][[:blank:]]/, ""));
local up = split_string(userpass, /:/);
if ( |up| >= 2 )
{
@ -0,0 +1 @@
Support for Kerberos protocol analysis.
@ -0,0 +1,3 @@
@load ./main
@load ./files
@load-sigs ./dpd.sig
@ -0,0 +1,99 @@
module KRB;
export {
const error_msg: table[count] of string = {
[0] = "KDC_ERR_NONE",
[1] = "KDC_ERR_NAME_EXP",
[2] = "KDC_ERR_SERVICE_EXP",
[3] = "KDC_ERR_BAD_PVNO",
[4] = "KDC_ERR_C_OLD_MAST_KVNO",
[5] = "KDC_ERR_S_OLD_MAST_KVNO",
[6] = "KDC_ERR_C_PRINCIPAL_UNKNOWN",
[7] = "KDC_ERR_S_PRINCIPAL_UNKNOWN",
[8] = "KDC_ERR_PRINCIPAL_NOT_UNIQUE",
[9] = "KDC_ERR_NULL_KEY",
[10] = "KDC_ERR_CANNOT_POSTDATE",
[11] = "KDC_ERR_NEVER_VALID",
[12] = "KDC_ERR_POLICY",
[13] = "KDC_ERR_BADOPTION",
[14] = "KDC_ERR_ETYPE_NOSUPP",
[15] = "KDC_ERR_SUMTYPE_NOSUPP",
[16] = "KDC_ERR_PADATA_TYPE_NOSUPP",
[17] = "KDC_ERR_TRTYPE_NOSUPP",
[18] = "KDC_ERR_CLIENT_REVOKED",
[19] = "KDC_ERR_SERVICE_REVOKED",
[20] = "KDC_ERR_TGT_REVOKED",
[21] = "KDC_ERR_CLIENT_NOTYET",
[22] = "KDC_ERR_SERVICE_NOTYET",
[23] = "KDC_ERR_KEY_EXPIRED",
[24] = "KDC_ERR_PREAUTH_FAILED",
[25] = "KDC_ERR_PREAUTH_REQUIRED",
[26] = "KDC_ERR_SERVER_NOMATCH",
[27] = "KDC_ERR_MUST_USE_USER2USER",
[28] = "KDC_ERR_PATH_NOT_ACCEPTED",
[29] = "KDC_ERR_SVC_UNAVAILABLE",
[31] = "KRB_AP_ERR_BAD_INTEGRITY",
[32] = "KRB_AP_ERR_TKT_EXPIRED",
[33] = "KRB_AP_ERR_TKT_NYV",
[34] = "KRB_AP_ERR_REPEAT",
[35] = "KRB_AP_ERR_NOT_US",
[36] = "KRB_AP_ERR_BADMATCH",
[37] = "KRB_AP_ERR_SKEW",
[38] = "KRB_AP_ERR_BADADDR",
[39] = "KRB_AP_ERR_BADVERSION",
[40] = "KRB_AP_ERR_MSG_TYPE",
[41] = "KRB_AP_ERR_MODIFIED",
[42] = "KRB_AP_ERR_BADORDER",
[44] = "KRB_AP_ERR_BADKEYVER",
[45] = "KRB_AP_ERR_NOKEY",
[46] = "KRB_AP_ERR_MUT_FAIL",
[47] = "KRB_AP_ERR_BADDIRECTION",
[48] = "KRB_AP_ERR_METHOD",
[49] = "KRB_AP_ERR_BADSEQ",
[50] = "KRB_AP_ERR_INAPP_CKSUM",
[51] = "KRB_AP_PATH_NOT_ACCEPTED",
[52] = "KRB_ERR_RESPONSE_TOO_BIG",
[60] = "KRB_ERR_GENERIC",
[61] = "KRB_ERR_FIELD_TOOLONG",
[62] = "KDC_ERROR_CLIENT_NOT_TRUSTED",
[63] = "KDC_ERROR_KDC_NOT_TRUSTED",
[64] = "KDC_ERROR_INVALID_SIG",
[65] = "KDC_ERR_KEY_TOO_WEAK",
[66] = "KDC_ERR_CERTIFICATE_MISMATCH",
[67] = "KRB_AP_ERR_NO_TGT",
[68] = "KDC_ERR_WRONG_REALM",
[69] = "KRB_AP_ERR_USER_TO_USER_REQUIRED",
[70] = "KDC_ERR_CANT_VERIFY_CERTIFICATE",
[71] = "KDC_ERR_INVALID_CERTIFICATE",
[72] = "KDC_ERR_REVOKED_CERTIFICATE",
[73] = "KDC_ERR_REVOCATION_STATUS_UNKNOWN",
[74] = "KDC_ERR_REVOCATION_STATUS_UNAVAILABLE",
[75] = "KDC_ERR_CLIENT_NAME_MISMATCH",
[76] = "KDC_ERR_KDC_NAME_MISMATCH",
};
const cipher_name: table[count] of string = {
[1] = "des-cbc-crc",
[2] = "des-cbc-md4",
[3] = "des-cbc-md5",
[5] = "des3-cbc-md5",
[7] = "des3-cbc-sha1",
[9] = "dsaWithSHA1-CmsOID",
[10] = "md5WithRSAEncryption-CmsOID",
[11] = "sha1WithRSAEncryption-CmsOID",
[12] = "rc2CBC-EnvOID",
[13] = "rsaEncryption-EnvOID",
[14] = "rsaES-OAEP-ENV-OID",
[15] = "des-ede3-cbc-Env-OID",
[16] = "des3-cbc-sha1-kd",
[17] = "aes128-cts-hmac-sha1-96",
[18] = "aes256-cts-hmac-sha1-96",
[23] = "rc4-hmac",
[24] = "rc4-hmac-exp",
[25] = "camellia128-cts-cmac",
[26] = "camellia256-cts-cmac",
[65] = "subkey-keymaterial",
};
}
@ -0,0 +1,26 @@
# This is the ASN.1 encoded version and message type headers
signature dpd_krb_udp_requests {
ip-proto == udp
payload /(\x6a|\x6c).{1,4}\x30.{1,4}\xa1\x03\x02\x01\x05\xa2\x03\x02\x01/
enable "krb"
}
signature dpd_krb_udp_replies {
ip-proto == udp
payload /(\x6b|\x6d|\x7e).{1,4}\x30.{1,4}\xa0\x03\x02\x01\x05\xa1\x03\x02\x01/
enable "krb"
}
signature dpd_krb_tcp_requests {
ip-proto == tcp
payload /.{4}(\x6a|\x6c).{1,4}\x30.{1,4}\xa1\x03\x02\x01\x05\xa2\x03\x02\x01/
enable "krb_tcp"
}
signature dpd_krb_tcp_replies {
ip-proto == tcp
payload /.{4}(\x6b|\x6d|\x7e).{1,4}\x30.{1,4}\xa0\x03\x02\x01\x05\xa1\x03\x02\x01/
enable "krb_tcp"
}
@ -0,0 +1,142 @@
@load ./main
@load base/utils/conn-ids
@load base/frameworks/files
@load base/files/x509
module KRB;
export {
redef record Info += {
# Client certificate
client_cert: Files::Info &optional;
# Subject of client certificate, if any
client_cert_subject: string &log &optional;
# File unique ID of client cert, if any
client_cert_fuid: string &log &optional;
# Server certificate
server_cert: Files::Info &optional;
# Subject of server certificate, if any
server_cert_subject: string &log &optional;
# File unique ID of server cert, if any
server_cert_fuid: string &log &optional;
};
## Default file handle provider for KRB.
global get_file_handle: function(c: connection, is_orig: bool): string;
## Default file describer for KRB.
global describe_file: function(f: fa_file): string;
}
function get_file_handle(c: connection, is_orig: bool): string
{
# Unused. File handles are generated in the analyzer.
return "";
}
function describe_file(f: fa_file): string
{
if ( f$source != "KRB_TCP" && f$source != "KRB" )
return "";
if ( ! f?$info || ! f$info?$x509 || ! f$info$x509?$certificate )
return "";
# It is difficult to reliably describe a certificate - especially since
# we do not know when this function is called (hence, if the data structures
# are already populated).
#
# Just return a bit of our connection information and hope that that is good enough.
for ( cid in f$conns )
{
if ( f$conns[cid]?$krb )
{
local c = f$conns[cid];
return cat(c$id$resp_h, ":", c$id$resp_p);
}
}
return cat("Serial: ", f$info$x509$certificate$serial, " Subject: ",
f$info$x509$certificate$subject, " Issuer: ",
f$info$x509$certificate$issuer);
}
event bro_init() &priority=5
{
Files::register_protocol(Analyzer::ANALYZER_KRB_TCP,
[$get_file_handle = KRB::get_file_handle,
$describe = KRB::describe_file]);
Files::register_protocol(Analyzer::ANALYZER_KRB,
[$get_file_handle = KRB::get_file_handle,
$describe = KRB::describe_file]);
}
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
{
if ( f$source != "KRB_TCP" && f$source != "KRB" )
return;
local info: Info;
if ( ! c?$krb )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
else
info = c$krb;
if ( is_orig )
{
info$client_cert = f$info;
info$client_cert_fuid = f$id;
}
else
{
info$server_cert = f$info;
info$server_cert_fuid = f$id;
}
c$krb = info;
Files::add_analyzer(f, Files::ANALYZER_X509);
# Always calculate hashes. They are not necessary for base scripts
# but very useful for identification, and required for policy scripts
Files::add_analyzer(f, Files::ANALYZER_MD5);
Files::add_analyzer(f, Files::ANALYZER_SHA1);
}
function fill_in_subjects(c: connection)
{
if ( !c?$krb )
return;
if ( c$krb?$client_cert && c$krb$client_cert?$x509 && c$krb$client_cert$x509?$certificate )
c$krb$client_cert_subject = c$krb$client_cert$x509$certificate$subject;
if ( c$krb?$server_cert && c$krb$server_cert?$x509 && c$krb$server_cert$x509?$certificate )
c$krb$server_cert_subject = c$krb$server_cert$x509$certificate$subject;
}
event krb_error(c: connection, msg: Error_Msg)
{
fill_in_subjects(c);
}
event krb_as_response(c: connection, msg: KDC_Response)
{
fill_in_subjects(c);
}
event krb_tgs_response(c: connection, msg: KDC_Response)
{
fill_in_subjects(c);
}
event connection_state_remove(c: connection)
{
fill_in_subjects(c);
}
@ -0,0 +1,251 @@
##! Implements base functionality for KRB analysis. Generates the kerberos.log
##! file.
module KRB;
@load ./consts
export {
redef enum Log::ID += { LOG };
type Info: record {
## Timestamp for when the event happened.
ts: time &log;
## Unique ID for the connection.
uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log;
## Request type - Authentication Service ("AS") or
## Ticket Granting Service ("TGS")
request_type: string &log &optional;
## Client
client: string &log &optional;
## Service
service: string &log;
## Request result
success: bool &log &optional;
## Error code
error_code: count &optional;
## Error message
error_msg: string &log &optional;
## Ticket valid from
from: time &log &optional;
## Ticket valid till
till: time &log &optional;
## Ticket encryption type
cipher: string &log &optional;
## Forwardable ticket requested
forwardable: bool &log &optional;
## Renewable ticket requested
renewable: bool &log &optional;
## We've already logged this
logged: bool &default=F;
};
## The server response error texts which are *not* logged.
const ignored_errors: set[string] = {
# This will significantly increase the noisiness of the log.
# However, one attack is to iterate over principals, looking
# for ones that don't require preauth, and then perform
# an offline attack on that ticket. To detect that attack,
# log NEEDED_PREAUTH.
"NEEDED_PREAUTH",
# This is a more specific version of NEEDED_PREAUTH that's used
# by Windows AD Kerberos.
"Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ",
} &redef;
## Event that can be handled to access the KRB record as it is sent on
## to the logging framework.
global log_krb: event(rec: Info);
}
redef record connection += {
krb: Info &optional;
};
const tcp_ports = { 88/tcp };
const udp_ports = { 88/udp };
redef likely_server_ports += { tcp_ports, udp_ports };
event bro_init() &priority=5
{
Analyzer::register_for_ports(Analyzer::ANALYZER_KRB, udp_ports);
Analyzer::register_for_ports(Analyzer::ANALYZER_KRB_TCP, tcp_ports);
Log::create_stream(KRB::LOG, [$columns=Info, $ev=log_krb, $path="kerberos"]);
}
event krb_error(c: connection, msg: Error_Msg) &priority=5
{
local info: Info;
if ( msg?$error_text && msg$error_text in ignored_errors )
{
if ( c?$krb ) delete c$krb;
return;
}
if ( c?$krb && c$krb$logged )
return;
if ( c?$krb )
info = c$krb;
if ( ! info?$ts )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
if ( ! info?$client && ( msg?$client_name || msg?$client_realm ) )
info$client = fmt("%s%s", msg?$client_name ? msg$client_name + "/" : "",
msg?$client_realm ? msg$client_realm : "");
info$service = msg$service_name;
info$success = F;
info$error_code = msg$error_code;
if ( msg?$error_text ) info$error_msg = msg$error_text;
else if ( msg$error_code in error_msg ) info$error_msg = error_msg[msg$error_code];
c$krb = info;
}
event krb_error(c: connection, msg: Error_Msg) &priority=-5
{
if ( c?$krb )
{
Log::write(KRB::LOG, c$krb);
c$krb$logged = T;
}
}
event krb_as_request(c: connection, msg: KDC_Request) &priority=5
{
if ( c?$krb && c$krb$logged )
return;
local info: Info;
if ( !c?$krb )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
else
info = c$krb;
info$request_type = "AS";
info$client = fmt("%s/%s", msg$client_name, msg$service_realm);
info$service = msg$service_name;
if ( msg?$from )
info$from = msg$from;
info$till = msg$till;
info$forwardable = msg$kdc_options$forwardable;
info$renewable = msg$kdc_options$renewable;
c$krb = info;
}
event krb_tgs_request(c: connection, msg: KDC_Request) &priority=5
{
if ( c?$krb && c$krb$logged )
return;
local info: Info;
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
info$request_type = "TGS";
info$service = msg$service_name;
if ( msg?$from ) info$from = msg$from;
info$till = msg$till;
info$forwardable = msg$kdc_options$forwardable;
info$renewable = msg$kdc_options$renewable;
c$krb = info;
}
event krb_as_response(c: connection, msg: KDC_Response) &priority=5
{
local info: Info;
if ( c?$krb && c$krb$logged )
return;
if ( c?$krb )
info = c$krb;
if ( ! info?$ts )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
if ( ! info?$client )
info$client = fmt("%s/%s", msg$client_name, msg$client_realm);
info$service = msg$ticket$service_name;
info$cipher = cipher_name[msg$ticket$cipher];
info$success = T;
c$krb = info;
}
event krb_as_response(c: connection, msg: KDC_Response) &priority=-5
{
Log::write(KRB::LOG, c$krb);
c$krb$logged = T;
}
event krb_tgs_response(c: connection, msg: KDC_Response) &priority=5
{
local info: Info;
if ( c?$krb && c$krb$logged )
return;
if ( c?$krb )
info = c$krb;
if ( ! info?$ts )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
if ( ! info?$client )
info$client = fmt("%s/%s", msg$client_name, msg$client_realm);
info$service = msg$ticket$service_name;
info$cipher = cipher_name[msg$ticket$cipher];
info$success = T;
c$krb = info;
}
event krb_tgs_response(c: connection, msg: KDC_Response) &priority=-5
{
Log::write(KRB::LOG, c$krb);
c$krb$logged = T;
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$krb && ! c$krb$logged )
Log::write(KRB::LOG, c$krb);
}
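As a usage sketch (not part of this change), site policy can hook the new log
stream's event, e.g. to surface failed requests::

    event KRB::log_krb(rec: KRB::Info)
        {
        if ( rec?$success && ! rec$success && rec?$error_msg )
            print fmt("Kerberos failure for %s: %s",
                      rec?$client ? rec$client : "<unknown>", rec$error_msg);
        }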
@ -0,0 +1 @@
Support for MySQL protocol analysis.
@ -0,0 +1 @@
Support for RADIUS protocol analysis.
@ -0,0 +1 @@
Support for Remote Desktop Protocol (RDP) analysis.
@ -0,0 +1 @@
Support for Session Initiation Protocol (SIP) analysis.
@ -0,0 +1,3 @@
@load ./main
@load-sigs ./dpd.sig
@ -0,0 +1,19 @@
signature dpd_sip_udp_req {
ip-proto == udp
payload /.* SIP\/[0-9]\.[0-9]\x0d\x0a/
enable "sip"
}
signature dpd_sip_udp_resp {
ip-proto == udp
payload /^ ?SIP\/[0-9]\.[0-9](\x0d\x0a| [0-9][0-9][0-9] )/
enable "sip"
}
# We don't support SIP-over-TCP yet.
#
# signature dpd_sip_tcp {
# ip-proto == tcp
# payload /^( SIP\/[0-9]\.[0-9]\x0d\x0a|SIP\/[0-9]\.[0-9] [0-9][0-9][0-9] )/
# enable "sip_tcp"
# }
@ -0,0 +1,272 @@
##! Implements base functionality for SIP analysis. The logging model is
##! to log request/response pairs and all relevant metadata together in
##! a single record.
@load base/utils/numbers
@load base/utils/files
module SIP;
export {
redef enum Log::ID += { LOG };
type Info: record {
## Timestamp for when the request happened.
ts: time &log;
## Unique ID for the connection.
uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log;
## Represents the pipelined depth into the connection of this
## request/response transaction.
trans_depth: count &log;
## Verb used in the SIP request (INVITE, REGISTER etc.).
method: string &log &optional;
## URI used in the request.
uri: string &log &optional;
## Contents of the Date: header from the client
date: string &log &optional;
## Contents of the request From: header
## Note: The tag= value that's usually appended to the sender
## is stripped off and not logged.
request_from: string &log &optional;
## Contents of the To: header
request_to: string &log &optional;
## Contents of the response From: header
## Note: The ``tag=`` value that's usually appended to the sender
## is stripped off and not logged.
response_from: string &log &optional;
## Contents of the response To: header
response_to: string &log &optional;
## Contents of the Reply-To: header
reply_to: string &log &optional;
## Contents of the Call-ID: header from the client
call_id: string &log &optional;
## Contents of the CSeq: header from the client
seq: string &log &optional;
## Contents of the Subject: header from the client
subject: string &log &optional;
## The client message transmission path, as extracted from the headers.
request_path: vector of string &log &optional;
## The server message transmission path, as extracted from the headers.
response_path: vector of string &log &optional;
## Contents of the User-Agent: header from the client
user_agent: string &log &optional;
## Status code returned by the server.
status_code: count &log &optional;
## Status message returned by the server.
status_msg: string &log &optional;
## Contents of the Warning: header
warning: string &log &optional;
## Contents of the Content-Length: header from the client
request_body_len: string &log &optional;
## Contents of the Content-Length: header from the server
response_body_len: string &log &optional;
## Contents of the Content-Type: header from the server
content_type: string &log &optional;
};
type State: record {
## Pending requests.
pending: table[count] of Info;
## Current request in the pending queue.
current_request: count &default=0;
## Current response in the pending queue.
current_response: count &default=0;
};
## A list of SIP methods. Other methods will generate a weird. Note
## that the SIP analyzer will only accept methods consisting solely
## of letters ``[A-Za-z]``.
const sip_methods: set[string] = {
"REGISTER", "INVITE", "ACK", "CANCEL", "BYE", "OPTIONS"
} &redef;
## Event that can be handled to access the SIP record as it is sent on
## to the logging framework.
global log_sip: event(rec: Info);
}
# Add the sip state tracking fields to the connection record.
redef record connection += {
sip: Info &optional;
sip_state: State &optional;
};
const ports = { 5060/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(SIP::LOG, [$columns=Info, $ev=log_sip, $path="sip"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_SIP, ports);
}
function new_sip_session(c: connection): Info
{
local tmp: Info;
tmp$ts=network_time();
tmp$uid=c$uid;
tmp$id=c$id;
# $current_request is set prior to the Info record creation so we
# can use the value directly here.
tmp$trans_depth = c$sip_state$current_request;
tmp$request_path = vector();
tmp$response_path = vector();
return tmp;
}
function set_state(c: connection, is_request: bool)
{
if ( ! c?$sip_state )
{
local s: State;
c$sip_state = s;
}
# These deal with new requests and responses.
if ( is_request && c$sip_state$current_request !in c$sip_state$pending )
c$sip_state$pending[c$sip_state$current_request] = new_sip_session(c);
if ( ! is_request && c$sip_state$current_response !in c$sip_state$pending )
c$sip_state$pending[c$sip_state$current_response] = new_sip_session(c);
if ( is_request )
c$sip = c$sip_state$pending[c$sip_state$current_request];
else
c$sip = c$sip_state$pending[c$sip_state$current_response];
}
function flush_pending(c: connection)
{
# Flush all pending but incomplete request/response pairs.
if ( c?$sip_state )
{
for ( r in c$sip_state$pending )
{
# We don't use pending elements at index 0.
if ( r == 0 ) next;
Log::write(SIP::LOG, c$sip_state$pending[r]);
}
}
}
event sip_request(c: connection, method: string, original_URI: string, version: string) &priority=5
{
set_state(c, T);
c$sip$method = method;
c$sip$uri = original_URI;
if ( method !in sip_methods )
event conn_weird("unknown_SIP_method", c, method);
}
event sip_reply(c: connection, version: string, code: count, reason: string) &priority=5
{
set_state(c, F);
if ( c$sip_state$current_response !in c$sip_state$pending &&
(code < 100 || 200 <= code) )
++c$sip_state$current_response;
c$sip$status_code = code;
c$sip$status_msg = reason;
}
event sip_header(c: connection, is_request: bool, name: string, value: string) &priority=5
{
if ( ! c?$sip_state )
{
local s: State;
c$sip_state = s;
}
if ( is_request ) # from client
{
if ( c$sip_state$current_request !in c$sip_state$pending )
++c$sip_state$current_request;
set_state(c, is_request);
if ( name == "CALL-ID" ) c$sip$call_id = value;
else if ( name == "CONTENT-LENGTH" || name == "L" ) c$sip$request_body_len = value;
else if ( name == "CSEQ" ) c$sip$seq = value;
else if ( name == "DATE" ) c$sip$date = value;
else if ( name == "FROM" || name == "F" ) c$sip$request_from = split_string1(value, /;[ ]?tag=/)[0];
else if ( name == "REPLY-TO" ) c$sip$reply_to = value;
else if ( name == "SUBJECT" || name == "S" ) c$sip$subject = value;
else if ( name == "TO" || name == "T" ) c$sip$request_to = value;
else if ( name == "USER-AGENT" ) c$sip$user_agent = value;
else if ( name == "VIA" || name == "V" ) c$sip$request_path[|c$sip$request_path|] = split_string1(value, /;[ ]?branch/)[0];
c$sip_state$pending[c$sip_state$current_request] = c$sip;
}
else # from server
{
if ( c$sip_state$current_response !in c$sip_state$pending )
++c$sip_state$current_response;
set_state(c, is_request);
if ( name == "CONTENT-LENGTH" || name == "L" ) c$sip$response_body_len = value;
else if ( name == "CONTENT-TYPE" || name == "C" ) c$sip$content_type = value;
else if ( name == "WARNING" ) c$sip$warning = value;
else if ( name == "FROM" || name == "F" ) c$sip$response_from = split_string1(value, /;[ ]?tag=/)[0];
else if ( name == "TO" || name == "T" ) c$sip$response_to = value;
else if ( name == "VIA" || name == "V" ) c$sip$response_path[|c$sip$response_path|] = split_string1(value, /;[ ]?branch/)[0];
c$sip_state$pending[c$sip_state$current_response] = c$sip;
}
}
event sip_end_entity(c: connection, is_request: bool) &priority = 5
{
set_state(c, is_request);
}
event sip_end_entity(c: connection, is_request: bool) &priority = -5
{
# The reply body is done so we're ready to log.
if ( ! is_request )
{
Log::write(SIP::LOG, c$sip);
if ( c$sip$status_code < 100 || 200 <= c$sip$status_code )
delete c$sip_state$pending[c$sip_state$current_response];
if ( ! c$sip?$method || ( c$sip$method == "BYE" &&
c$sip$status_code >= 200 && c$sip$status_code < 300 ) )
{
flush_pending(c);
delete c$sip;
delete c$sip_state;
}
}
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$sip_state )
{
for ( r in c$sip_state$pending )
{
Log::write(SIP::LOG, c$sip_state$pending[r]);
}
}
}
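As with the other protocol logs, the ``log_sip`` event can be hooked from
site policy; a sketch (not part of this change)::

    event SIP::log_sip(rec: SIP::Info)
        {
        if ( rec?$status_code && rec$status_code >= 400 )
            print fmt("SIP error %d for %s", rec$status_code,
                      rec?$uri ? rec$uri : "-");
        }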
@ -29,6 +29,8 @@ export {
from: string &log &optional;
## Contents of the To header.
to: set[string] &log &optional;
## Contents of the CC header.
cc: set[string] &log &optional;
## Contents of the ReplyTo header.
reply_to: string &log &optional;
## Contents of the MsgID header.
@ -239,6 +241,16 @@ event mime_one_header(c: connection, h: mime_header_rec) &priority=5
add c$smtp$to[to_parts[i]];
}
else if ( h$name == "CC" )
{
if ( ! c$smtp?$cc )
c$smtp$cc = set();
local cc_parts = split_string(h$value, /[[:blank:]]*,[[:blank:]]*/);
for ( i in cc_parts )
add c$smtp$cc[cc_parts[i]];
}
else if ( h$name == "X-ORIGINATING-IP" )
{
local addresses = extract_ip_addresses(h$value);
@ -0,0 +1 @@
Support for SSH protocol analysis.
@ -93,6 +93,10 @@ function set_session(c: connection)
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
# If both hosts are local or non-local, we can't reliably set a direction.
if ( Site::is_local_addr(c$id$orig_h) != Site::is_local_addr(c$id$resp_h) )
info$direction = Site::is_local_addr(c$id$orig_h) ? OUTBOUND: INBOUND;
c$ssh = info;
}
}
@ -114,7 +118,7 @@ event ssh_client_version(c: connection, version: string)
c$ssh$version = 2;
}
event ssh_auth_successful(c: connection, auth_method_none: bool) &priority=5
{
# TODO - what to do here?
if ( !c?$ssh || ( c$ssh?$auth_success && c$ssh$auth_success ) )
@ -142,7 +146,7 @@ event ssh_auth_successful(c: connection, auth_method_none: bool) &priority=-5
}
}
event ssh_auth_failed(c: connection) &priority=5
{
if ( !c?$ssh || ( c$ssh?$auth_success && !c$ssh$auth_success ) )
return;
@ -120,9 +120,9 @@ export {
[18] = "signed_certificate_timestamp", [18] = "signed_certificate_timestamp",
[19] = "client_certificate_type", [19] = "client_certificate_type",
[20] = "server_certificate_type", [20] = "server_certificate_type",
[21] = "padding", # temporary till 2015-03-12 [21] = "padding", # temporary till 2016-03-12
[22] = "encrypt_then_mac", [22] = "encrypt_then_mac",
[23] = "extended_master_secret", # temporary till 2015-09-26 [23] = "extended_master_secret",
[35] = "SessionTicket TLS", [35] = "SessionTicket TLS",
[40] = "extended_random", [40] = "extended_random",
[13172] = "next_protocol_negotiation", [13172] = "next_protocol_negotiation",
@ -169,7 +169,8 @@ export {
[256] = "ffdhe2048", [256] = "ffdhe2048",
[257] = "ffdhe3072", [257] = "ffdhe3072",
[258] = "ffdhe4096", [258] = "ffdhe4096",
[259] = "ffdhe8192", [259] = "ffdhe6144",
[260] = "ffdhe8192",
[0xFF01] = "arbitrary_explicit_prime_curves", [0xFF01] = "arbitrary_explicit_prime_curves",
[0xFF02] = "arbitrary_explicit_char2_curves" [0xFF02] = "arbitrary_explicit_char2_curves"
} &default=function(i: count):string { return fmt("unknown-%d", i); }; } &default=function(i: count):string { return fmt("unknown-%d", i); };
@ -1,7 +1,7 @@
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
payload /^((\x15\x03[\x00\x01\x02\x03]....)?\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
File diff suppressed because one or more lines are too long
@ -9,6 +9,6 @@ signature dpd_ayiya {
signature dpd_teredo {
ip-proto = udp
payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f].{7}((\x20\x01\x00\x00)).{28})|([\x60-\x6f].{23}((\x20\x01\x00\x00))).{12}/
enable "teredo"
}
@ -6,23 +6,23 @@ const url_regex = /^([a-zA-Z\-]{3,5})(:\/\/[^\/?#"'\r\n><]*)([^?#"'\r\n><]*)([^[
## A URI, as parsed by :bro:id:`decompose_uri`.
type URI: record {
## The URL's scheme.
scheme: string &optional;
## The location, which could be a domain name or an IP address. Left empty if not
## specified.
netlocation: string;
## Port number, if included in URI.
portnum: count &optional;
## Full path, including the file name. Will be '/' if there's no path given.
path: string;
## Full file name, including extension, if there is a file name.
file_name: string &optional;
## The base filename, without extension, if there is a file name.
file_base: string &optional;
## The filename's extension, if there is a file name.
file_ext: string &optional;
## A table of all query parameters, mapping their keys to values, if there's a
## query.
params: table[string] of string &optional;
};
## Extracts URLs discovered in arbitrary text.
@ -46,19 +46,19 @@ function find_all_urls_without_scheme(s: string): string_set
return return_urls; return return_urls;
} }
function decompose_uri(s: string): URI function decompose_uri(uri: string): URI
{ {
local parts: string_vec; local parts: string_vec;
local u: URI = [$netlocation="", $path="/"]; local u = URI($netlocation="", $path="/");
local s = uri;
if ( /\?/ in s) if ( /\?/ in s )
{ {
# Parse query.
u$params = table(); u$params = table();
parts = split_string1(s, /\?/); parts = split_string1(s, /\?/);
s = parts[0]; s = parts[0];
local query: string = parts[1]; local query = parts[1];
if ( /&/ in query ) if ( /&/ in query )
{ {
@ -73,7 +73,7 @@ function decompose_uri(s: string): URI
} }
} }
} }
else else if ( /=/ in query )
{ {
parts = split_string1(query, /=/); parts = split_string1(query, /=/);
u$params[parts[0]] = parts[1]; u$params[parts[0]] = parts[1];
@ -97,14 +97,14 @@ function decompose_uri(s: string): URI
if ( |u$path| > 1 && u$path[|u$path| - 1] != "/" ) if ( |u$path| > 1 && u$path[|u$path| - 1] != "/" )
{ {
local last_token: string = find_last(u$path, /\/.+/); local last_token = find_last(u$path, /\/.+/);
local full_filename = split_string1(last_token, /\//)[1]; local full_filename = split_string1(last_token, /\//)[1];
if ( /\./ in full_filename ) if ( /\./ in full_filename )
{ {
u$file_name = full_filename; u$file_name = full_filename;
u$file_base = split_string1(full_filename, /\./)[0]; u$file_base = split_string1(full_filename, /\./)[0];
u$file_ext = split_string1(full_filename, /\./)[1]; u$file_ext = split_string1(full_filename, /\./)[1];
} }
else else
{ {
@ -122,7 +122,9 @@ function decompose_uri(s: string): URI
u$portnum = to_count(parts[1]); u$portnum = to_count(parts[1]);
} }
else else
{
u$netlocation = s; u$netlocation = s;
}
return u; return u;
} }
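A quick sketch of how the refactored function might be exercised; the expected field values assume the scheme and host branches elided from this hunk behave as in the stock script:

	event bro_init()
		{
		local u = decompose_uri("http://example.com:8080/dir/file.txt?a=1&b=2");
		print u$netlocation;  # example.com
		print u$portnum;      # 8080
		print u$file_name;    # file.txt
		print u$file_ext;     # txt
		print u$params["a"];  # 1
		}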

View file

@@ -4,7 +4,7 @@
##!
##! It's intended to be used from the command line like this::
##!
-##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::port=<host_port> Control::cmd=<command> [Control::arg=<arg>]
+##! bro <scripts> frameworks/control/controller Control::host=<host_addr> Control::host_port=<host_port> Control::cmd=<command> [Control::arg=<arg>]

@load base/frameworks/control
@load base/frameworks/communication
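Filled in with concrete values, the corrected invocation would look roughly like the following; the host, port, and command here are hypothetical placeholders, not defaults:

	bro frameworks/control/controller Control::host=127.0.0.1 Control::host_port=47760/tcp Control::cmd=shutdown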

View file

@@ -1,5 +1,7 @@
##! Perform MD5 and SHA1 hashing on all files.

+@load base/files/hash
+
event file_new(f: fa_file)
	{
	Files::add_analyzer(f, Files::ANALYZER_MD5);
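The digests these analyzers compute are delivered via the file_hash event; a minimal consumer sketch:

	@load base/files/hash

	event file_hash(f: fa_file, kind: string, hash: string)
		{
		print fmt("%s of file %s: %s", kind, f$id, hash);
		}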

View file

@@ -0,0 +1,26 @@
+##! This script adds VLAN information to the connection logs.
+
+@load base/protocols/conn
+
+module Conn;
+
+redef record Info += {
+	## The outer VLAN for this connection, if applicable.
+	vlan: int &log &optional;
+
+	## The inner VLAN for this connection, if applicable.
+	inner_vlan: int &log &optional;
+};
+
+# Add the VLAN information to the Conn::Info structure after the connection
+# has been removed. This ensures it's only done once, and is done before the
+# connection information is written to the log.
+event connection_state_remove(c: connection) &priority=5
+	{
+	if ( c?$vlan )
+		c$conn$vlan = c$vlan;
+
+	if ( c?$inner_vlan )
+		c$conn$inner_vlan = c$inner_vlan;
+	}
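Enabling the new script is then a one-liner in local.bro (the local.bro diff further below adds exactly this line, commented out):

	@load policy/protocols/conn/vlan-logging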

View file

@@ -19,12 +19,12 @@ export {
	};
}

-event rexmit_inconsistency(c: connection, t1: string, t2: string)
+event rexmit_inconsistency(c: connection, t1: string, t2: string, tcp_flags: string)
	{
	NOTICE([$note=Retransmission_Inconsistency,
	        $conn=c,
-	        $msg=fmt("%s rexmit inconsistency (%s) (%s)",
-	                 id_string(c$id), t1, t2),
+	        $msg=fmt("%s rexmit inconsistency (%s) (%s) [%s]",
+	                 id_string(c$id), t1, t2, tcp_flags),
	        $identifier=fmt("%s", c$id)]);
	}

View file

@@ -1,4 +1,4 @@
##! Detect browser plugins as they leak through requests to Omniture
##! advertising servers.

@load base/protocols/http
@@ -10,8 +10,10 @@ export {
	redef record Info += {
		## Indicates if the server is an omniture advertising server.
		omniture: bool &default=F;
+		## The unparsed Flash version, if detected.
+		flash_version: string &optional;
	};

	redef enum Software::Type += {
		## Identifier for browser plugins in the software framework.
		BROWSER_PLUGIN
@@ -22,12 +24,20 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
	{
	if ( is_orig )
		{
-		if ( name == "X-FLASH-VERSION" )
+		switch ( name )
			{
-			# Flash doesn't include its name so we'll add it here since it
-			# simplifies the version parsing.
-			value = cat("Flash/", value);
-			Software::found(c$id, [$unparsed_version=value, $host=c$id$orig_h, $software_type=BROWSER_PLUGIN]);
+			case "X-FLASH-VERSION":
+				# Flash doesn't include its name so we'll add it here since it
+				# simplifies the version parsing.
+				c$http$flash_version = cat("Flash/", value);
+				break;
+			case "X-REQUESTED-WITH":
+				# This header is usually used to indicate AJAX requests (XMLHttpRequest),
+				# but Chrome uses this header also to indicate the use of Flash.
+				if ( /Flash/ in value )
+					c$http$flash_version = value;
+				break;
			}
		}
	else
@@ -38,9 +48,26 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
		}
	}

+event http_message_done(c: connection, is_orig: bool, stat: http_message_stat)
+	{
+	# If Flash was detected, it has to be logged considering the user agent.
+	if ( is_orig && c$http?$flash_version )
+		{
+		# AdobeAIR contains a separate Flash, which should be emphasized.
+		# Note: We assume that the user agent header was not reset by the app.
+		if ( c$http?$user_agent )
+			{
+			if ( /AdobeAIR/ in c$http$user_agent )
+				c$http$flash_version = cat("AdobeAIR-", c$http$flash_version);
+			}
+
+		Software::found(c$id, [$unparsed_version=c$http$flash_version, $host=c$id$orig_h, $software_type=BROWSER_PLUGIN]);
+		}
+	}

event log_http(rec: Info)
	{
	# We only want to inspect requests that were sent to omniture advertising
	# servers.
	if ( rec$omniture && rec?$uri )
		{
@@ -48,11 +75,11 @@ event log_http(rec: Info)
		local parts = split_string_n(rec$uri, /&p=([^&]{5,});&/, T, 1);
		if ( 1 in parts )
			{
			# We do sub_bytes here just to remove the extra extracted
			# characters from the regex split above.
			local sw = sub_bytes(parts[1], 4, |parts[1]|-5);
			local plugins = split_string(sw, /[[:blank:]]*;[[:blank:]]*/);
			for ( i in plugins )
				Software::found(rec$id, [$unparsed_version=plugins[i], $host=rec$id$orig_h, $software_type=BROWSER_PLUGIN]);
			}

View file

@@ -12,14 +12,14 @@ export {
		## notice will be generated.
		Watched_Country_Login,
	};

	redef record Info += {
		## Add geographic data related to the "remote" host of the
		## connection.
		remote_location: geo_location &log &optional;
	};

	## The set of countries for which you'd like to generate notices upon
	## successful login.
	const watched_countries: set[string] = {"RO"} &redef;
}
@@ -32,21 +32,27 @@ function get_location(c: connection): geo_location

event ssh_auth_successful(c: connection, auth_method_none: bool) &priority=3
	{
+	if ( ! c$ssh?$direction )
+		return;
+
	# Add the location data to the SSH record.
	c$ssh$remote_location = get_location(c);

	if ( c$ssh$remote_location?$country_code && c$ssh$remote_location$country_code in watched_countries )
		{
		NOTICE([$note=Watched_Country_Login,
		        $conn=c,
		        $msg=fmt("SSH login %s watched country: %s",
		                 (c$ssh$direction == OUTBOUND) ? "to" : "from",
		                 c$ssh$remote_location$country_code)]);
		}
	}

event ssh_auth_failed(c: connection) &priority=3
	{
+	if ( ! c$ssh?$direction )
+		return;
+
	# Add the location data to the SSH record.
	c$ssh$remote_location = get_location(c);
	}
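Sites watching other countries would tune the redef-able set; a small sketch, assuming the export lives in the SSH module as in the stock script (the country codes are arbitrary examples):

	redef SSH::watched_countries += { "CN", "KP" };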

View file

@@ -1,4 +1,4 @@
##! Local site policy. Customize as appropriate.
##!
##! This file will not be overwritten when upgrading or reinstalling!
@@ -11,16 +11,16 @@
# Load the scan detection script.
@load misc/scan

# Log some information about web applications being used by users
# on your network.
@load misc/app-stats

# Detect traceroute being run on the network.
@load misc/detect-traceroute

# Generate notices when vulnerable versions of software are discovered.
# The default is to only monitor software found in the address space defined
# as "local". Refer to the software framework's documentation for more
# information.
@load frameworks/software/vulnerable
@@ -35,12 +35,12 @@
@load protocols/smtp/software
@load protocols/ssh/software
@load protocols/http/software

# The detect-webapps script could possibly cause performance trouble when
# running on live traffic. Enable it cautiously.
#@load protocols/http/detect-webapps

# This script detects DNS results pointing toward your Site::local_nets
# where the name is not part of your local DNS zone and is being hosted
# externally. Requires that the Site::local_zones variable is defined.
@load protocols/dns/detect-external-names
@@ -62,7 +62,7 @@
# certificate notary service; see http://notary.icsi.berkeley.edu .
# @load protocols/ssl/notary

# If you have libGeoIP support built in, do some geographic detections and
# logging for SSH traffic.
@load protocols/ssh/geo-data

# Detect hosts doing SSH bruteforce attacks.
@@ -84,3 +84,7 @@
# Uncomment the following line to enable detection of the heartbleed attack. Enabling
# this might impact performance a bit.
# @load policy/protocols/ssl/heartbleed
+
+# Uncomment the following line to enable logging of connection VLANs. Enabling
+# this adds two VLAN fields to the conn.log file.
+# @load policy/protocols/conn/vlan-logging

View file

@@ -62,6 +62,7 @@
@load misc/trim-trace-file.bro
@load protocols/conn/known-hosts.bro
@load protocols/conn/known-services.bro
+@load protocols/conn/vlan-logging.bro
@load protocols/conn/weirds.bro
@load protocols/dhcp/known-devices-and-hostnames.bro
@load protocols/dns/auth-addl.bro

View file

@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.

-#include "config.h"
+#include "bro-config.h"

#include "Attr.h"
#include "Expr.h"

View file

@@ -1,4 +1,4 @@
-#include "config.h"
+#include "bro-config.h"

#include "Base64.h"
#include <math.h>
@@ -82,7 +82,7 @@ int* Base64Converter::InitBase64Table(const string& alphabet)
	return base64_table;
	}

-Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string& arg_alphabet)
+Base64Converter::Base64Converter(Connection* arg_conn, const string& arg_alphabet)
	{
	if ( arg_alphabet.size() > 0 )
		{
@@ -98,7 +98,7 @@ Base64Converter::Base64Converter(analyzer::Analyzer* arg_analyzer, const string&
	base64_group_next = 0;
	base64_padding = base64_after_padding = 0;
	errored = 0;
-	analyzer = arg_analyzer;
+	conn = arg_conn;
	}

Base64Converter::~Base64Converter()
@@ -216,9 +216,9 @@ int Base64Converter::Done(int* pblen, char** pbuf)
	}

-BroString* decode_base64(const BroString* s, const BroString* a)
+BroString* decode_base64(const BroString* s, const BroString* a, Connection* conn)
	{
-	if ( a && a->Len() != 64 )
+	if ( a && a->Len() != 0 && a->Len() != 64 )
		{
		reporter->Error("base64 decoding alphabet is not 64 characters: %s",
		                a->CheckString());
@@ -229,7 +229,7 @@ BroString* decode_base64(const BroString* s, const BroString* a)
	int rlen2, rlen = buf_len;
	char* rbuf2, *rbuf = new char[rlen];

-	Base64Converter dec(0, a ? a->CheckString() : "");
+	Base64Converter dec(conn, a ? a->CheckString() : "");
	if ( dec.Decode(s->Len(), (const char*) s->Bytes(), &rlen, &rbuf) == -1 )
		goto err;
@@ -248,9 +248,9 @@ err:
	return 0;
	}

-BroString* encode_base64(const BroString* s, const BroString* a)
+BroString* encode_base64(const BroString* s, const BroString* a, Connection* conn)
	{
-	if ( a && a->Len() != 64 )
+	if ( a && a->Len() != 0 && a->Len() != 64 )
		{
		reporter->Error("base64 alphabet is not 64 characters: %s",
		                a->CheckString());
@@ -259,7 +259,7 @@ BroString* encode_base64(const BroString* s, const BroString* a)
	char* outbuf = 0;
	int outlen = 0;

-	Base64Converter enc(0, a ? a->CheckString() : "");
+	Base64Converter enc(conn, a ? a->CheckString() : "");
	enc.Encode(s->Len(), (const unsigned char*) s->Bytes(), &outlen, &outbuf);

	return new BroString(1, (u_char*)outbuf, outlen);

View file

@@ -8,15 +8,17 @@
#include "util.h"
#include "BroString.h"
#include "Reporter.h"
-#include "analyzer/Analyzer.h"
+#include "Conn.h"

// Maybe we should have a base class for generic decoders?
class Base64Converter {
public:
-	// <analyzer> is used for error reporting, and it should be zero when
-	// the decoder is called by the built-in function decode_base64() or encode_base64().
-	// Empty alphabet indicates the default base64 alphabet.
-	Base64Converter(analyzer::Analyzer* analyzer, const string& alphabet = "");
+	// <conn> is used for error reporting. If it is set to zero (as,
+	// e.g., done by the built-in functions decode_base64() and
+	// encode_base64()), encoding-errors will go to Reporter instead of
+	// Weird. Usage errors go to Reporter in any case. Empty alphabet
+	// indicates the default base64 alphabet.
+	Base64Converter(Connection* conn, const string& alphabet = "");
	~Base64Converter();

	// A note on Decode():
@@ -42,8 +44,8 @@ public:
	void IllegalEncoding(const char* msg)
		{
		// strncpy(error_msg, msg, sizeof(error_msg));
-		if ( analyzer )
-			analyzer->Weird("base64_illegal_encoding", msg);
+		if ( conn )
+			conn->Weird("base64_illegal_encoding", msg);
		else
			reporter->Error("%s", msg);
		}
@@ -63,11 +65,11 @@ protected:
	int base64_after_padding;
	int* base64_table;
	int errored; // if true, we encountered an error - skip further processing
-	analyzer::Analyzer* analyzer;
+	Connection* conn;
};

-BroString* decode_base64(const BroString* s, const BroString* a = 0);
-BroString* encode_base64(const BroString* s, const BroString* a = 0);
+BroString* decode_base64(const BroString* s, const BroString* a = 0, Connection* conn = 0);
+BroString* encode_base64(const BroString* s, const BroString* a = 0, Connection* conn = 0);

#endif /* base64_h */
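At the script level, the corresponding BIFs remain directly callable, with the relaxed length check now treating an empty alphabet string as "use the default alphabet"; a minimal sketch:

	event bro_init()
		{
		print decode_base64("aGVsbG8=");  # prints: hello
		print encode_base64("hello");     # prints: aGVsbG8=
		}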

View file

@@ -1,6 +1,6 @@
// See the file "COPYING" in the main distribution directory for copyright.

-#include "config.h"
+#include "bro-config.h"

#include <algorithm>
#include <ctype.h>
@@ -194,22 +194,7 @@ char* BroString::Render(int format, int* len) const
	for ( int i = 0; i < n; ++i )
		{
-		if ( b[i] == '\0' && (format & ESC_NULL) )
-			{
-			*sp++ = '\\'; *sp++ = '0';
-			}
-
-		else if ( b[i] == '\x7f' && (format & ESC_DEL) )
-			{
-			*sp++ = '^'; *sp++ = '?';
-			}
-
-		else if ( b[i] <= 26 && (format & ESC_LOW) )
-			{
-			*sp++ = '^'; *sp++ = b[i] + 'A' - 1;
-			}
-
-		else if ( b[i] == '\\' && (format & ESC_ESC) )
+		if ( b[i] == '\\' && (format & ESC_ESC) )
			{
			*sp++ = '\\'; *sp++ = '\\';
			}

Some files were not shown because too many files have changed in this diff.