Merge remote-tracking branch 'origin/master' into topic/dnthayer/doc-improvements-2.4

This commit is contained in:
Daniel Thayer 2015-05-25 11:59:34 -05:00
commit 9cde2be727
653 changed files with 25848 additions and 7940 deletions

3
.gitmodules vendored

@ -22,3 +22,6 @@
[submodule "aux/plugins"] [submodule "aux/plugins"]
path = aux/plugins path = aux/plugins
url = git://git.bro.org/bro-plugins url = git://git.bro.org/bro-plugins
[submodule "aux/broker"]
path = aux/broker
url = git://git.bro.org/broker

601
CHANGES

@ -1,4 +1,601 @@
2.4-beta | 2015-05-07 21:55:31 -0700
* Release 2.4-beta.
* Update local-compat.test (Johanna Amann)
2.3-913 | 2015-05-06 09:58:00 -0700
* Add /sbin to PATH in btest.cfg and remove duplicate default_path.
(Daniel Thayer)
2.3-911 | 2015-05-04 09:58:09 -0700
* Update usage output and list of command line options. (Daniel
Thayer)
* Fix to ssh/geo-data.bro for unset directions. (Vlad Grigorescu)
* Improve SIP logging and remove reporter messages. (Seth Hall)
2.3-905 | 2015-04-29 17:01:30 -0700
* Improve SIP logging and remove reporter messages. (Seth Hall)
2.3-903 | 2015-04-27 17:27:59 -0700
* BIT-1350: Improve record coercion type checking. (Jon Siwek)
2.3-901 | 2015-04-27 17:25:27 -0700
* BIT-1384: Remove -O (optimize scripts) command-line option, which
hadn't been working for a while already. (Jon Siwek)
2.3-899 | 2015-04-27 17:22:42 -0700
* Fix the -J/--set-seed cmd-line option. (Daniel Thayer)
* Remove unused -l, -L, and -Z cmd-line options. (Daniel Thayer)
2.3-892 | 2015-04-27 08:22:22 -0700
* Fix typos in the Broker BIF documentation. (Daniel Thayer)
* Update installation instructions and remove outdated references.
(Johanna Amann)
* Easier support for systems with tcmalloc_minimal installed. (Seth
Hall)
2.3-884 | 2015-04-23 12:30:15 -0500
* Fix some outdated documentation unit tests. (Jon Siwek)
2.3-883 | 2015-04-23 07:10:36 -0700
* Fix -N option to work with builtin plugins as well. (Robin Sommer)
2.3-882 | 2015-04-23 06:59:40 -0700
* Add missing .pac dependencies for some binpac analyzer targets.
(Jon Siwek)
2.3-879 | 2015-04-22 10:38:07 -0500
* Fix compile errors. (Jon Siwek)
2.3-878 | 2015-04-22 08:21:23 -0700
* Fix another compiler warning in DTLS. (Johanna Amann)
2.3-877 | 2015-04-21 20:14:16 -0700
* Adding missing include. (Robin Sommer)
2.3-876 | 2015-04-21 16:40:10 -0700
* Attempt at fixing a potential std::length_error exception in RDP
analyzer. Addresses BIT-1337. (Robin Sommer)
* Fixing compile problem caused by overeager factorization. (Robin
Sommer)
2.3-874 | 2015-04-21 16:09:20 -0700
* Change details of escaping when logging/printing. (Seth Hall/Robin
Sommer)
- Log files now escape non-printable characters consistently
as "\xXX'. Furthermore, backslashes are escaped as "\\",
making the representation fully reversible.
- When escaping via script-level functions (escape_string,
clean), we likewise now escape consistently with "\xXX" and
"\\".
- There's no "alternative" output style anymore, i.e., fmt()
'%A' qualifier is gone.
Addresses BIT-1333.
* Remove several BroString escaping methods that are no longer
useful. (Seth Hall)
2.3-864 | 2015-04-21 15:24:02 -0700
* A SIP protocol analyzer. (Vlad Grigorescu)
Activity gets logged into sip.log. It generates the following
events:
event sip_request(c: connection, method: string, original_URI: string, version: string);
event sip_reply(c: connection, version: string, code: count, reason: string);
event sip_header(c: connection, is_orig: bool, name: string, value: string);
event sip_all_headers(c: connection, is_orig: bool, hlist: mime_header_list);
event sip_begin_entity(c: connection, is_orig: bool);
event sip_end_entity(c: connection, is_orig: bool);
The analyzer currently supports SIP over UDP.
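A minimal handler sketch for one of these events (illustrative only; the signature is taken from the list above):

    event sip_request(c: connection, method: string, original_URI: string, version: string)
        {
        print "SIP request", c$id$orig_h, method, original_URI;
        }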
* BIT-1343: Factor common ASN.1 code from RDP, SNMP, and Kerberos
analyzers. (Jon Siwek/Robin Sommer)
2.3-838 | 2015-04-21 13:40:12 -0700
* BIT-1373: Fix vector index assignment reference count bug. (Jon Siwek)
2.3-836 | 2015-04-21 13:37:31 -0700
* Fix SSH direction field being unset. Addresses BIT-1365. (Vlad
Grigorescu)
2.3-835 | 2015-04-21 16:36:00 -0500
* Clarify Broker examples. (Jon Siwek)
2.3-833 | 2015-04-21 12:38:32 -0700
* A Kerberos protocol analyzer. (Vlad Grigorescu)
Activity gets logged into kerberos.log. It generates the following
events:
event krb_as_request(c: connection, msg: KRB::KDC_Request);
event krb_as_response(c: connection, msg: KRB::KDC_Response);
event krb_tgs_request(c: connection, msg: KRB::KDC_Request);
event krb_tgs_response(c: connection, msg: KRB::KDC_Response);
event krb_ap_request(c: connection, ticket: KRB::Ticket, opts: KRB::AP_Options);
event krb_priv(c: connection, is_orig: bool);
event krb_safe(c: connection, is_orig: bool, msg: KRB::SAFE_Msg);
event krb_cred(c: connection, is_orig: bool, tickets: KRB::Ticket_Vector);
event krb_error(c: connection, msg: KRB::Error_Msg);
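A minimal handler sketch (illustrative only; the signature is taken from the list above):

    event krb_error(c: connection, msg: KRB::Error_Msg)
        {
        print "Kerberos error", c$id$orig_h, c$id$resp_h;
        }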
2.3-793 | 2015-04-20 20:51:00 -0700
* Add decoding of PROXY-AUTHORIZATION header to the HTTP analyzer,
treating it the same as AUTHORIZATION. (Josh Liburdi)
* Remove deprecated fields "hot" and "addl" from the connection
record. Remove the functions append_addl() and
append_addl_marker(). (Robin Sommer)
* Removing the NetFlow analyzer, which hasn't been used anymore
since the corresponding command-line option went away. (Robin
Sommer)
2.3-787 | 2015-04-20 19:15:23 -0700
* A file analyzer for Portable Executables. (Vlad Grigorescu/Seth
Hall).
Activity gets logged into pe.log. It generates the following
events:
event pe_dos_header(f: fa_file, h: PE::DOSHeader);
event pe_dos_code(f: fa_file, code: string);
event pe_file_header(f: fa_file, h: PE::FileHeader);
event pe_optional_header(f: fa_file, h: PE::OptionalHeader);
event pe_section_header(f: fa_file, h: PE::SectionHeader);
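A minimal handler sketch (illustrative only; the signature is taken from the list above):

    event pe_section_header(f: fa_file, h: PE::SectionHeader)
        {
        print "PE section header in file", f$id;
        }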
2.3-741 | 2015-04-20 13:12:39 -0700
* API changes to file analysis mime type detection. Removed
"file_mime_type" and "file_mime_types" event, replacing them with
a new event called "file_metadata_inferred". Addresses BIT-1368.
(Jon Siwek)
* A large series of improvements for file type identification. This
includes many signature updates (new types, cleanup, performance
improvements) and splitting out signatures into subfiles. (Seth
Hall)
* Fix an issue with files having gaps before the bof_buffer is
filled, which could lead to file type identification not working
correctly. (Seth Hall)
* Fix an issue where, with packet loss, HTTP file type identification
wasn't working correctly for zero-length bodies. (Seth Hall)
* X.509 certificates are now populating files.log with the mime type
application/pkix-cert. (Seth Hall)
* Normalized some FILE_ANALYSIS debug messages. (Seth Hall)
2.3-725 | 2015-04-20 12:54:54 -0700
* Updating submodule(s).
2.3-724 | 2015-04-20 14:11:02 -0500
* Fix uninitialized field in raw input reader. (Jon Siwek)
2.3-722 | 2015-04-20 12:59:03 -0500
* Remove unneeded documentation cross-referencing. (Jon Siwek)
2.3-721 | 2015-04-20 12:47:05 -0500
* BIT-1380: Improve Broxygen output of &default expressions.
(Jon Siwek)
2.3-720 | 2015-04-17 14:18:26 -0700
* Updating NEWS.
2.3-716 | 2015-04-17 13:06:37 -0700
* Add seeking functionality to raw reader. One can now add an option
"offset" to the config map. Positive offsets are interpreted to be
from the beginning of the file, negative from the end of the file
(-1 is end of file). Only works for raw reader in streaming or
manual mode. Does not work with executables. Addresses BIT-985.
(Johanna Amann)
* Allow setting packet and byte thresholds for connections. (Johanna Amann)
This extends the ConnSize analyzer to be able to raise events when
each direction of a connection crosses a certain amount of bytes
or packets.
Thresholds are set using:
- set_conn_bytes_threshold(c$id, [num-bytes], [direction]);
- set_conn_packets_threshold(c$id, [num-packets], [direction]);
They raise the events, respectively:
- event conn_bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool)
- event conn_packets_threshold_crossed(c: connection, threshold: count, is_orig: bool)
Current thresholds can be examined using get_conn_bytes_threshold()
and get_conn_packets_threshold().
Only one threshold can be set per connection.
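A minimal sketch using the functions and events above (the 1 MB value and the choice of connection_established as the trigger are arbitrary):

    event connection_established(c: connection)
        {
        # Watch for 1 MB sent by the originator of this connection.
        set_conn_bytes_threshold(c$id, 1048576, T);
        }

    event conn_bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool)
        {
        print "byte threshold crossed", c$id, threshold, is_orig;
        }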
* Add high-level API for packet/bytes thresholding in
base/protocols/conn/thresholds.bro that holds lists of thresholds
and raises an event for each threshold exactly once. (Johanna
Amann)
* Fix a bug where child packet analyzers of the TCP analyzer
were not found using FindChild.
* Update GridFTP analyzer to use connection thresholding instead of
polling. (Johanna Amann)
2.3-709 | 2015-04-17 12:37:32 -0700
* Fix addressing the dreaded "internal error: unknown msg type 115
in Poll()". (Jon Siwek)
This patch removes the error handling code for overload conditions
in the main process that could cause trouble down the road. The
"chunked_io_buffer_soft_cap" script variable can now tune when the
client process begins shutting down peer connections, and the
default setting is now double what it used to be. Addresses
BIT-1376.
2.3-707 | 2015-04-17 10:57:59 -0500
* Add more info about Broker to NEWS. (Jon Siwek)
2.3-705 | 2015-04-16 08:16:45 -0700
* Update Mozilla CA list. (Johanna Amann)
* Update tests to have them keep using older certificates where
appropriate. (Johanna Amann)
2.3-699 | 2015-04-16 09:51:58 -0500
* Fix the to_count function to use strtoull versus strtoll.
(Jon Siwek)
2.3-697 | 2015-04-15 09:51:15 -0700
* Removing error check verifying that an ASCII writer has been
properly finished. Instead of aborting, we now just clean up in
that case and proceed. Addresses BIT-1331. (Robin Sommer)
2.3-696 | 2015-04-14 15:56:36 -0700
* Update sqlite to 3.8.9
2.3-695 | 2015-04-13 10:34:42 -0500
* Fix iterator invalidation in broker::Manager dtor. (Jon Siwek)
* Add paragraph to plugin documentation. (Robin Sommer)
2.3-693 | 2015-04-11 10:56:31 -0700
* BIT-1367: improve coercion of anonymous records in set constructor.
(Jon Siwek)
* Allow to specify ports for sftp log rotator. (Johanna Amann)
2.3-690 | 2015-04-10 21:51:10 -0700
* Make sure to always delete the remote serializer. Addresses
BIT-1306 and probably also BIT-1356. (Robin Sommer)
* Cleaning up --help. -D and -Y/y were still listed, even though
they had no effect anymore. Removing some dead code along with -D.
Addresses BIT-1372. (Robin Sommer)
2.3-688 | 2015-04-10 08:10:44 -0700
* Update SQLite to 3.8.8.3.
2.3-687 | 2015-04-10 07:32:52 -0700
* Remove stale signature benchmarking code (-L command-line option).
(Jon Siwek)
* BIT-844: fix UDP payload signatures to match packet-wise. (Jon
Siwek)
2.3-682 | 2015-04-09 12:07:00 -0700
* Fixing input readers' component type. (Robin Sommer)
* Tiny spelling correction. (Seth Hall)
2.3-680 | 2015-04-06 16:02:43 -0500
* BIT-1371: remove CMake version check from binary package scripts.
(Jon Siwek)
2.3-679 | 2015-04-06 10:16:36 -0500
* Increase some unit test timeouts. (Jon Siwek)
* Fix Coverity warning in RDP analyzer. (Jon Siwek)
2.3-676 | 2015-04-02 10:10:39 -0500
* BIT-1366: improve checksum offloading warning.
(Frank Meier, Jon Siwek)
2.3-675 | 2015-03-30 17:05:05 -0500
* Add an RDP analyzer. (Josh Liburdi, Seth Hall, Johanna Amann)
2.3-640 | 2015-03-30 13:51:51 -0500
* BIT-1359: Limit maximum number of DTLS fragments to 30. (Johanna Amann)
2.3-637 | 2015-03-30 12:02:07 -0500
* Increase timeout duration in some broker tests. (Jon Siwek)
2.3-636 | 2015-03-30 11:26:32 -0500
* Updates related to SSH analysis. (Jon Siwek)
- Some scripts used wrong SSH module/namespace scoping on events.
- Fix outdated notice documentation related to SSH password guessing.
* Add a unit test for SSH password guessing notice.
2.3-635 | 2015-03-30 11:02:45 -0500
* Fix outdated documentation unit tests. (Jon Siwek)
2.3-634 | 2015-03-30 10:22:45 -0500
* Add a canonifier to a unit test's output. (Jon Siwek)
2.3-633 | 2015-03-25 18:32:59 -0700
* Log::write in signature framework was missing timestamp.
(Andrew Benson/Michel Laterman)
2.3-631 | 2015-03-25 11:03:12 -0700
* New SSH analyzer. (Vlad Grigorescu)
2.3-600 | 2015-03-25 10:23:46 -0700
* Add defensive checks in code to calculate log rotation intervals.
(Pete Nelson).
2.3-597 | 2015-03-23 12:50:04 -0700
* DTLS analyzer. (Johanna Amann)
* Implement correct parsing of TLS record fragmentation. (Johanna
Amann)
2.3-582 | 2015-03-23 11:34:25 -0700
* BIT-1313: In debug builds, "bro -B <x>" now supports "all" and
"help" for "<x>". "all" enables all debug streams. "help" prints a
list of available debug streams. (John Donnelly/Robin Sommer).
* BIT-1324: Allow logging filters to inherit default path from
stream. This allows the path for the default filter to be
specified explicitly through $path="..." when creating a stream.
Adapted the existing Log::create_stream calls to explicitly
specify a path value. (Jon Siwek)
* BIT-1199: Change the way the input framework deals with values it
cannot convert into BroVals, raising error messages instead of
aborting execution. (Johanna Amann)
* BIT-788: Use DNS QR field to better identify flow direction. (Jon
Siwek)
2.3-572 | 2015-03-23 13:04:53 -0500
* BIT-1226: Fix an example in quickstart docs. (Jon Siwek)
2.3-570 | 2015-03-23 09:51:20 -0500
* Correct a spelling error (Daniel Thayer)
* Improvement to SSL analyzer failure mode. (Johanna Amann)
2.3-565 | 2015-03-20 16:27:41 -0500
* BIT-978: Improve documentation of 'for' loop iterator invalidation.
(Jon Siwek)
2.3-564 | 2015-03-20 11:12:02 -0500
* BIT-725: Remove "unmatched_HTTP_reply" weird. (Jon Siwek)
2.3-562 | 2015-03-20 10:31:02 -0500
* BIT-1207: Add unit test to catch breaking changes to local.bro
(Jon Siwek)
* Fix failing sqlite leak test (Johanna Amann)
2.3-560 | 2015-03-19 13:17:39 -0500
* BIT-1255: Increase default values of
"tcp_max_above_hole_without_any_acks" and "tcp_max_initial_window"
from 4096 to 16384 bytes. (Jon Siwek)
2.3-559 | 2015-03-19 12:14:33 -0500
* BIT-849: turn SMTP reporter warnings into weirds,
"smtp_nested_mail_transaction" and "smtp_unmatched_end_of_data".
(Jon Siwek)
2.3-558 | 2015-03-18 22:50:55 -0400
* DNS: Log the type number for the DNS_RR_unknown_type weird. (Vlad Grigorescu)
2.3-555 | 2015-03-17 15:57:13 -0700
* Splitting test-all Makefile target into Bro tests and test-aux.
(Robin Sommer)
2.3-554 | 2015-03-17 15:40:39 -0700
* Deprecate &rotate_interval, &rotate_size, &encrypt. Addresses
BIT-1305. (Jon Siwek)
2.3-549 | 2015-03-17 09:12:18 -0700
* BIT-1077: Fix HTTP::log_server_header_names. Before, it just
re-logged fields from the client side. (Jon Siwek)
2.3-547 | 2015-03-17 09:07:51 -0700
* Update certificate validation script to cache valid intermediate
chains that it encounters on the wire and use those to try to
validate chains that might be missing intermediate certificates.
(Johanna Amann)
2.3-541 | 2015-03-13 15:44:08 -0500
* Make INSTALL a symlink to doc/install/install.rst (Jon Siwek)
* Fix Broxygen coverage. (Jon Siwek)
2.3-539 | 2015-03-13 14:19:27 -0500
* BIT-1335: Include timestamp in default extracted file names.
And add a policy script to extract all files. (Jon Siwek)
* BIT-1311: Identify GRE tunnels as Tunnel::GRE, not Tunnel::IP.
(Jon Siwek)
* BIT-1309: Add Connection class getter methods for flow labels.
(Jon Siwek)
2.3-536 | 2015-03-12 16:16:24 -0500
* Fix Broker leak tests. (Jon Siwek)
2.3-534 | 2015-03-12 10:59:49 -0500
* Update NEWS file. (Jon Siwek)
2.3-533 | 2015-03-12 10:18:53 -0500
* Give broker python bindings default install path within --prefix.
(Jon Siwek)
2.3-530 | 2015-03-10 13:22:39 -0500
* Fix broker data stores in absence of --enable-debug. (Jon Siwek)
2.3-529 | 2015-03-09 13:14:27 -0500
* Fix format specifier in SSL protocol violation. (Jon Siwek)
2.3-526 | 2015-03-06 12:48:49 -0600
* Fix build warnings, clarify broker requirements, update submodule.
(Jon Siwek)
* Rename comm/ directories to broker/ (Jon Siwek)
* Rename broker-related namespaces. (Jon Siwek)
* Improve remote logging via broker by only sending fields w/ &log.
(Jon Siwek)
* Disable a stream's remote logging via broker if it fails. (Jon Siwek)
* Improve some broker communication unit tests. (Jon Siwek)
2.3-518 | 2015-03-04 13:13:50 -0800
* Add bytes_recvd to stats.log recording the number of bytes
received, according to packet headers. (Mike Smiley)
2.3-516 | 2015-03-04 12:30:06 -0800
* Extract most specific Common Name from SSL certificates (Johanna
Amann)
* Send CN and SAN fields of SSL certificates to the Intel framework.
(Johanna Amann)
2.3-511 | 2015-03-02 18:07:17 -0800
* Changes to plugin meta hooks for function calls. (Gilbert Clark)
- Add frame argument.
- Change the return value to a tuple that indicates unambiguously
whether the hook returned a result.
2.3-493 | 2015-03-02 17:17:32 -0800
* Extend the SSL weak-keys policy file to also alert when
encountering SSL connections with old versions as well as unsafe
cipher suites. (Johanna Amann)
* Make the notice suppression handling of other SSL policy files a
tad more robust. (Johanna Amann)
2.3-491 | 2015-03-02 17:12:56 -0800
* Updating docs for recent addition of local_resp. (Robin Sommer)
2.3-489 | 2015-03-02 15:29:30 -0800
* Integrate Broker, Bro's new communication library. (Jon Siwek)
See aux/broker/README for more information on Broker, and
doc/frameworks/comm.rst for the corresponding Bro script API.
Broker support is by default off for now; it can be enabled at
configure time with --enable-broker. It requires CAF
(https://github.com/actor-framework/actor-framework); for now it
needs CAF's "develop" branch. Broker also requires a C++11
compiler.
Broker will become a mandatory dependency in future Bro versions.
* Add --enable-c++11 configure flag to compile Bro's source code in
C++11 mode with a corresponding compiler. (Jon Siwek)
2.3-451 | 2015-02-24 16:37:08 -0800 2.3-451 | 2015-02-24 16:37:08 -0800
* Updating submodule(s). * Updating submodule(s).
@ -245,7 +842,7 @@
2.3-328 | 2014-12-02 08:13:10 -0500 2.3-328 | 2014-12-02 08:13:10 -0500
* Update windows-version-detection.bro to add support for * Update windows-version-detection.bro to add support for
Windows 10. (Michal Purzynski) Windows 10. (Michal Purzynski)
2.3-326 | 2014-12-01 12:10:27 -0600 2.3-326 | 2014-12-01 12:10:27 -0600
@ -315,7 +912,7 @@
2.3-280 | 2014-11-05 09:46:33 -0500 2.3-280 | 2014-11-05 09:46:33 -0500
* Add Windows detection based on CryptoAPI HTTP traffic as a * Add Windows detection based on CryptoAPI HTTP traffic as a
software framework policy script. (Vlad Grigorescu) software framework policy script. (Vlad Grigorescu)
2.3-278 | 2014-11-03 18:55:18 -0800 2.3-278 | 2014-11-03 18:55:18 -0800


@ -113,7 +113,7 @@ if (NOT DISABLE_PERFTOOLS)
find_package(GooglePerftools) find_package(GooglePerftools)
endif () endif ()
if (GOOGLEPERFTOOLS_FOUND) if (GOOGLEPERFTOOLS_FOUND OR TCMALLOC_FOUND)
set(HAVE_PERFTOOLS true) set(HAVE_PERFTOOLS true)
# Non-Linux systems may not be well-supported by gperftools, so # Non-Linux systems may not be well-supported by gperftools, so
# require explicit request from user to enable it in that case. # require explicit request from user to enable it in that case.
@ -177,6 +177,17 @@ include_directories(${CMAKE_CURRENT_BINARY_DIR})
######################################################################## ########################################################################
## Recurse on sub-directories ## Recurse on sub-directories
if ( ENABLE_CXX11 )
include(RequireCXX11)
endif ()
if ( ENABLE_BROKER )
add_subdirectory(aux/broker)
set(brodeps ${brodeps} broker)
add_definitions(-DENABLE_BROKER)
include_directories(BEFORE ${CMAKE_CURRENT_SOURCE_DIR}/aux/broker)
endif ()
add_subdirectory(src) add_subdirectory(src)
add_subdirectory(scripts) add_subdirectory(scripts)
add_subdirectory(doc) add_subdirectory(doc)
@ -224,6 +235,7 @@ message(
"\nCXXFLAGS: ${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_${BuildType}}" "\nCXXFLAGS: ${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_${BuildType}}"
"\nCPP: ${CMAKE_CXX_COMPILER}" "\nCPP: ${CMAKE_CXX_COMPILER}"
"\n" "\n"
"\nBroker: ${ENABLE_BROKER}"
"\nBroccoli: ${INSTALL_BROCCOLI}" "\nBroccoli: ${INSTALL_BROCCOLI}"
"\nBroctl: ${INSTALL_BROCTL}" "\nBroctl: ${INSTALL_BROCTL}"
"\nAux. Tools: ${INSTALL_AUX_TOOLS}" "\nAux. Tools: ${INSTALL_AUX_TOOLS}"


@ -1,3 +0,0 @@
See doc/install/install.rst for installation instructions.

1
INSTALL Symbolic link

@ -0,0 +1 @@
doc/install/install.rst


@ -51,13 +51,15 @@ distclean:
$(MAKE) -C testing $@ $(MAKE) -C testing $@
test: test:
@( cd testing && make ) -@( cd testing && make )
test-all: test test-aux:
test -d aux/broctl && ( cd aux/broctl && make test-all ) -test -d aux/broctl && ( cd aux/broctl && make test-all )
test -d aux/btest && ( cd aux/btest && make test ) -test -d aux/btest && ( cd aux/btest && make test )
test -d aux/bro-aux && ( cd aux/bro-aux && make test ) -test -d aux/bro-aux && ( cd aux/bro-aux && make test )
test -d aux/plugins && ( cd aux/plugins && make test-all ) -test -d aux/plugins && ( cd aux/plugins && make test-all )
test-all: test test-aux
configured: configured:
@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 ) @test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )

200
NEWS

@ -7,29 +7,57 @@ their own ``CHANGES``.)
Bro 2.4 (in progress) Bro 2.4 (in progress)
===================== =====================
Dependencies
------------
New Functionality New Functionality
----------------- -----------------
- Bro now has support for external plugins that can extend its core - Bro now has support for external plugins that can extend its core
functionality, like protocol/file analysis, via shared libraries. functionality, like protocol/file analysis, via shared libraries.
Plugins can be developed and distributed externally, and will be Plugins can be developed and distributed externally, and will be
pulled in dynamically at startup. Currently, a plugin can provide pulled in dynamically at startup (the environment variables
custom protocol analyzers, file analyzers, log writers[TODO], input BRO_PLUGIN_PATH and BRO_PLUGIN_ACTIVATE can be used to specify the
readers[TODO], packet sources[TODO], and new built-in functions. A locations and names of plugins to activate). Currently, a plugin
plugin can furthermore hook into Bro's processing a number of places can provide custom protocol analyzers, file analyzers, log writers,
to add custom logic. input readers, packet sources and dumpers, and new built-in functions.
A plugin can furthermore hook into Bro's processing at a number of
places to add custom logic.
See https://www.bro.org/sphinx-git/devel/plugins.html for more See https://www.bro.org/sphinx-git/devel/plugins.html for more
information on writing plugins. information on writing plugins.
- Bro now has supoprt for the MySQL wire protocol. Activity gets - Bro now has support for the MySQL wire protocol. Activity gets
logged into mysql.log. logged into mysql.log.
- Bro now parses DTLS traffic. Activity gets logged into ssl.log.
- Bro now has support for the Kerberos KRB5 protocol over TCP and
UDP. Activity gets logged into kerberos.log.
- Bro now has an RDP analyzer. Activity gets logged into rdp.log.
- Bro now has a file analyzer for Portable Executables. Activity gets
logged into pe.log.
- Bro now has support for the SIP protocol over UDP. Activity gets
logged into sip.log.
- Bro now features a completely rewritten, enhanced SSH analyzer. The
new analyzer is able to determine if logins failed or succeeded in
most circumstances, logs a lot more information about SSH
sessions, supports v1, and introduces the intelligence type
``Intel::PUBKEY_HASH`` and location ``SSH::IN_SERVER_HOST_KEY``. The
analyzer also generates a set of additional events
(``ssh_auth_successful``, ``ssh_auth_failed``, ``ssh_capabilities``,
``ssh2_server_host_key``, ``ssh1_server_host_key``,
``ssh_encrypted_packet``, ``ssh2_dh_server_params``,
``ssh2_gss_error``, ``ssh2_ecc_key``). See next section for
incompatible SSH changes.
- Bro's file analysis now supports reassembly of files that are not - Bro's file analysis now supports reassembly of files that are not
transferred/seen sequentially. transferred/seen sequentially. The default file reassembly buffer
size is set with the ``Files::reassembly_buffer_size`` variable.
- Bro's file type identification has been greatly improved (new file types,
bug fixes, and performance improvements).
- Bro's scripting language now has a ``while`` statement:: - Bro's scripting language now has a ``while`` statement::
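For example (an illustrative sketch)::

      local i = 0;

      while ( i < 5 )
          print ++i;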
@ -39,6 +67,70 @@ New Functionality
``next`` and ``break`` can be used inside the loop's body just like ``next`` and ``break`` can be used inside the loop's body just like
with ``for`` loops. with ``for`` loops.
- Bro now integrates Broker, a new communication library. See
aux/broker/README for more information on Broker, and
doc/frameworks/broker.rst for the corresponding Bro script API.
With Broker, Bro has similar capabilities for exchanging events and
logs with remote peers (either another Bro process or some other
application that uses Broker). It also includes a key-value store
API that can be used to share state between peers and optionally
allow data to persist on disk for longer-term storage.
Broker support is by default off for now; it can be enabled at
configure time with --enable-broker. It requires CAF version 0.13+
(https://github.com/actor-framework/actor-framework) as well as a
C++11 compiler (e.g. GCC 4.8+ or Clang 3.3+).
Broker will become a mandatory dependency in future Bro versions and
replace the current communication and serialization system.
- Add --enable-c++11 configure flag to compile Bro's source code in
C++11 mode with a corresponding compiler. Note that 2.4 will be the
last version of Bro that compiles without C++11 support.
- The SSL analysis now alerts when encountering SSL connections with
old protocol versions or unsafe cipher suites. It also gained
extended reporting of weak keys, caching of already validated
certificates, and full support for TLS record defragmentation. SSL generally
became much more robust and added several fields to ssl.log (while
removing some others).
- A new icmp_sent_payload event provides access to ICMP payload.
- The input framework's raw reader now supports seeking by adding an
option "offset" to the config map. Positive offsets are interpreted
to be from the beginning of the file, negative from the end of the
file (-1 is end of file).
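A minimal sketch (the path, stream name, and record type are illustrative; the ``offset`` entry in the ``config`` map is the new option)::

      type Line: record {
          s: string;
      };

      event raw_line(desc: Input::EventDescription, tpe: Input::Event, s: string)
          {
          print "got line", s;
          }

      event bro_init()
          {
          Input::add_event([$source="/var/log/example.log",
                            $name="tail-example",
                            $reader=Input::READER_RAW,
                            $mode=Input::STREAM,
                            $fields=Line,
                            $want_record=F,
                            $ev=raw_line,
                            # A negative offset seeks relative to the end of the file.
                            $config=table(["offset"] = "-1")]);
          }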
- One can now raise events when a connection crosses a given size
threshold in terms of packets or bytes. The primary API for that
functionality is in base/protocols/conn/thresholds.bro.
- There is a new command-line option -Q/--time that prints Bro's execution
time and memory usage to stderr.
- BroControl now has a new command "deploy" which is equivalent to running
the "check", "install", "stop", and "start" commands (in that order).
- BroControl now has a new option "StatusCmdShowAll" that controls whether
or not the broctl "status" command gathers all of the status information.
This option can be used to make the "status" command run significantly
faster (in this case, the "Peers" column will not be shown in the output).
- BroControl now has a new option "StatsLogEnable" that controls whether
or not broctl will record information to the "stats.log" file. This option
can be used to make the "broctl cron" command run slightly faster (in this
case, "broctl cron" will also no longer send email about not seeing any
packets on the monitoring interfaces).
- BroControl now has a new option "MailHostUpDown" which controls whether or
not the "broctl cron" command will send email when it notices that a host
in the cluster is up or down.
- BroControl now has a new option "CommandTimeout" which specifies the number
of seconds to wait for a command that broctl ran to return results.
Changed Functionality Changed Functionality
--------------------- ---------------------
@ -47,9 +139,17 @@ Changed Functionality
- File analysis - File analysis
* Removed ``fa_file`` record's ``mime_type`` and ``mime_types`` * Removed ``fa_file`` record's ``mime_type`` and ``mime_types``
fields. The events ``file_mime_type`` and ``file_mime_types`` fields. The event ``file_sniff`` has been added which provides
have been added which contain the same information. The the same information. The ``mime_type`` field of ``Files::Info``
``mime_type`` field of ``Files::Info`` also still has this info. also still has this info.
* The earliest point that new mime type information is available is
in the ``file_sniff`` event which comes after the ``file_new`` and
``file_over_new_connection`` events. Scripts which inspected mime
type info within those events will need to be adapted. (Note: for
users that worked w/ versions of Bro from git, for a while there was
also an event called ``file_mime_type`` which is now replaced with
the ``file_sniff`` event).
* Removed ``Files::add_analyzers_for_mime_type`` function. * Removed ``Files::add_analyzers_for_mime_type`` function.
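A sketch of a ``file_sniff`` handler under the new API (mirroring the example script updated elsewhere in this commit)::

      event file_sniff(f: fa_file, meta: fa_metadata)
          {
          if ( ! meta?$mime_type )
              return;

          print "sniffed file", f$id, meta$mime_type;
          }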
@ -58,15 +158,83 @@ Changed Functionality
reassembly for non-sequential files, "offset" can be obtained reassembly for non-sequential files, "offset" can be obtained
with other information already available -- adding together with other information already available -- adding together
``seen_bytes`` and ``missed_bytes`` fields of the ``fa_file`` ``seen_bytes`` and ``missed_bytes`` fields of the ``fa_file``
record gives the how many bytes have been written so far (i.e. record gives how many bytes have been written so far (i.e.
the "offset"). the "offset").
- has_valid_octets: now uses a string_vec parameter instead of - The SSH changes come with a few incompatibilities. The following
events have been renamed:
* ``SSH::heuristic_failed_login`` to ``SSH::ssh_auth_failed``
* ``SSH::heuristic_successful_login`` to ``SSH::ssh_auth_successful``
The ``SSH::Info`` status field has been removed and replaced with
the ``auth_success`` field. This field has been changed from a
string (previously ``success``, ``failure``, or ``undetermined``)
to a boolean that is ``T``, ``F``, or unset.
- The has_valid_octets function now uses a string_vec parameter instead of
string_array. string_array.
- conn.log gained a new field local_resp that works like local_orig, - conn.log gained a new field local_resp that works like local_orig,
just for the responder address of the connection. just for the responder address of the connection.
- GRE tunnels are now identified as ``Tunnel::GRE`` instead of
``Tunnel::IP``.
- The default name for extracted files changed from extract-protocol-id
to extract-timestamp-protocol-id.
- The weird named "unmatched_HTTP_reply" has been removed since it can
be detected at the script-layer and is handled correctly by the
default HTTP scripts.
- When adding a logging filter to a stream, the filter can now inherit
a default ``path`` field from the associated ``Log::Stream`` record.
- When adding a logging filter to a stream, the
``Log::default_path_func`` is now only automatically added to the
filter if it has neither a ``path`` nor a ``path_func`` already
explicitly set. Before, the default path function would always be set
for all filters which didn't specify their own ``path_func``.
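For instance (``Foo::LOG`` stands in for any existing stream; the filter and path names are arbitrary), a filter created with an explicit path no longer gets ``Log::default_path_func`` attached::

      event bro_init()
          {
          Log::add_filter(Foo::LOG, [$name="foo-alternate", $path="foo-alt"]);
          }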
- BroControl now establishes only one ssh connection from the manager to
each remote host in a cluster configuration (previously, there would be
one ssh connection per remote Bro process).
- BroControl now uses SQLite to record state information instead of a
plain text file (the file "spool/broctl.dat" is no longer used).
On FreeBSD, this means that there is a new dependency on the package
"py27-sqlite3".
- BroControl now records the expected running state of each Bro node right
before each start or stop. The "broctl cron" command uses this info to
either start or stop Bro nodes as needed so that the actual state matches
the expected state (previously, "broctl cron" could only start nodes in
the "crashed" state, and could never stop a node).
- BroControl now sends all normal command output (i.e., not error messages)
to stdout. Error messages are still sent to stderr, however.
- The capability of processing NetFlow input has been removed for the
time being. Therefore, the -y/--flowfile and -Y/--netflow command-line
options have been removed, and the netflow_v5_header and netflow_v5_record
events have been removed.
- The -D/--dfa-size command-line option has been removed.
- The -L/--rule-benchmark command-line option has been removed.
- The -O/--optimize command-line option has been removed.
- The deprecated fields "hot" and "addl" have been removed from the
connection record. Likewise, the functions append_addl() and
append_addl_marker() have been removed.
- Log files now escape non-printable characters consistently as "\xXX".
Furthermore, backslashes are escaped as "\\", making the
representation fully reversible.
Deprecated Functionality Deprecated Functionality
------------------------ ------------------------
@ -76,7 +244,7 @@ Deprecated Functionality
concatenation/extraction functions. Note that the new functions use concatenation/extraction functions. Note that the new functions use
0-based indexing, rather than 1-based. 0-based indexing, rather than 1-based.
The full list of now deprecation functions is: The full list of now deprecated functions is:
* split: use split_string instead. * split: use split_string instead.


@ -1 +1 @@
2.3-451 2.4-beta

@ -1 +1 @@
Subproject commit 33cb1f8e6bf2e33c2773e86b157e1f343ee85dc6 Subproject commit 4f33233aef5539ae4f12c6d0e4338247833c3900

@ -1 +1 @@
Subproject commit c9d340847c668590a450f1881e6e3d763abe1138 Subproject commit a2d290a832c35ad11f3fabb19812bcae2ff089cd

@ -1 +1 @@
Subproject commit 1d55a0a84c5b1d0aa1727829300b388c92f92daa Subproject commit 74bb4bbd949e61e099178f8a97499d3f1355de8b

@ -1 +1 @@
Subproject commit 76f99ea52c3e021cade3d03eda7865d4f4d1793e Subproject commit 97c17d21725e42b36f4b49579077ecdc28ddb86a

1
aux/broker Submodule

@ -0,0 +1 @@
Subproject commit 8fc6938017dc15acfb26fa29e6ad0933019781c5

@ -1 +1 @@
Subproject commit 93d4989ed1537e4d143cf09d44077159f869a4b2 Subproject commit 80b42ee3e4503783b6720855b28e83ff1658c22b

@ -1 +1 @@
Subproject commit 71d820e9d8ca753fea8fb34ea3987993b28d79e4 Subproject commit e1ea9f67cfe3d6a81e0c1479ced0b9aa73e77c87

2
cmake

@ -1 +1 @@
Subproject commit ff08be5aa1b8eaadbe2775cbc11b499c5f93349e Subproject commit 6406fb79d30df8d7956110ce65a97d18e4bc8c3b

26
configure vendored

@ -41,6 +41,9 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--enable-perftools-debug use Google's perftools for debugging --enable-perftools-debug use Google's perftools for debugging
--enable-jemalloc link against jemalloc --enable-jemalloc link against jemalloc
--enable-ruby build ruby bindings for broccoli (deprecated) --enable-ruby build ruby bindings for broccoli (deprecated)
--enable-c++11 build using the C++11 standard
--enable-broker enable use of the Broker communication library
(requires C++ Actor Framework and C++11)
--disable-broccoli don't build or install the Broccoli library --disable-broccoli don't build or install the Broccoli library
--disable-broctl don't install Broctl --disable-broctl don't install Broctl
--disable-auxtools don't build or install auxiliary tools --disable-auxtools don't build or install auxiliary tools
@ -55,6 +58,8 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-flex=PATH path to flex executable --with-flex=PATH path to flex executable
--with-bison=PATH path to bison executable --with-bison=PATH path to bison executable
--with-perl=PATH path to perl executable --with-perl=PATH path to perl executable
--with-libcaf=PATH path to C++ Actor Framework installation
(a required Broker dependency)
Optional Packages in Non-Standard Locations: Optional Packages in Non-Standard Locations:
--with-geoip=PATH path to the libGeoIP install root --with-geoip=PATH path to the libGeoIP install root
@ -67,6 +72,8 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
--with-ruby-lib=PATH path to ruby library --with-ruby-lib=PATH path to ruby library
--with-ruby-inc=PATH path to ruby headers --with-ruby-inc=PATH path to ruby headers
--with-swig=PATH path to SWIG executable --with-swig=PATH path to SWIG executable
--with-rocksdb=PATH path to RocksDB installation
(an optional Broker dependency)
Packaging Options (for developers): Packaging Options (for developers):
--binary-package toggle special logic for binary packaging --binary-package toggle special logic for binary packaging
@ -142,6 +149,10 @@ while [ $# -ne 0 ]; do
append_cache_entry CMAKE_INSTALL_PREFIX PATH $optarg append_cache_entry CMAKE_INSTALL_PREFIX PATH $optarg
append_cache_entry BRO_ROOT_DIR PATH $optarg append_cache_entry BRO_ROOT_DIR PATH $optarg
append_cache_entry PY_MOD_INSTALL_DIR PATH $optarg/lib/broctl append_cache_entry PY_MOD_INSTALL_DIR PATH $optarg/lib/broctl
if [ -n "$user_enabled_broker" ]; then
append_cache_entry BROKER_PYTHON_HOME PATH $prefix
fi
;; ;;
--scriptdir=*) --scriptdir=*)
append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $optarg append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $optarg
@ -176,6 +187,15 @@ while [ $# -ne 0 ]; do
--enable-jemalloc) --enable-jemalloc)
append_cache_entry ENABLE_JEMALLOC BOOL true append_cache_entry ENABLE_JEMALLOC BOOL true
;; ;;
--enable-c++11)
append_cache_entry ENABLE_CXX11 BOOL true
;;
--enable-broker)
append_cache_entry ENABLE_CXX11 BOOL true
append_cache_entry ENABLE_BROKER BOOL true
append_cache_entry BROKER_PYTHON_HOME PATH $prefix
user_enabled_broker="true"
;;
--disable-broccoli) --disable-broccoli)
append_cache_entry INSTALL_BROCCOLI BOOL false append_cache_entry INSTALL_BROCCOLI BOOL false
;; ;;
@ -248,6 +268,12 @@ while [ $# -ne 0 ]; do
--with-swig=*) --with-swig=*)
append_cache_entry SWIG_EXECUTABLE PATH $optarg append_cache_entry SWIG_EXECUTABLE PATH $optarg
;; ;;
--with-libcaf=*)
append_cache_entry LIBCAF_ROOT_DIR PATH $optarg
;;
--with-rocksdb=*)
append_cache_entry ROCKSDB_ROOT_DIR PATH $optarg
;;
--binary-package) --binary-package)
append_cache_entry BINARY_PACKAGING_MODE BOOL true append_cache_entry BINARY_PACKAGING_MODE BOOL true
;; ;;


@ -0,0 +1 @@
../../../aux/broker/README


@ -0,0 +1 @@
../../../aux/broker/broker-manual.rst


@ -17,6 +17,8 @@ current, independent component releases.
Broccoli - User Manual <broccoli/broccoli-manual> Broccoli - User Manual <broccoli/broccoli-manual>
Broccoli Python Bindings <broccoli-python/README> Broccoli Python Bindings <broccoli-python/README>
Broccoli Ruby Bindings <broccoli-ruby/README> Broccoli Ruby Bindings <broccoli-ruby/README>
Broker - Bro's (New) Messaging Library (README) <broker/README>
Broker - User Manual <broker/broker-manual.rst>
BroControl - Interactive Bro management shell <broctl/README> BroControl - Interactive Bro management shell <broctl/README>
Bro-Aux - Small auxiliary tools for Bro <bro-aux/README> Bro-Aux - Small auxiliary tools for Bro <bro-aux/README>
BTest - A unit testing framework <btest/README> BTest - A unit testing framework <btest/README>


@ -245,13 +245,26 @@ VERSION). You can find a full list of files installed in
copies over the binary tarball in ``build/dist``. copies over the binary tarball in ``build/dist``.
``init-plugin`` will never overwrite existing files. If its target ``init-plugin`` will never overwrite existing files. If its target
directory already exists, it will be default decline to do anything. directory already exists, it will by default decline to do anything.
You can run it with ``-u`` instead to update an existing plugin, You can run it with ``-u`` instead to update an existing plugin,
however it will never overwrite any existing files; it will only put however it will never overwrite any existing files; it will only put
in place files it doesn't find yet. To revert a file back to what in place files it doesn't find yet. To revert a file back to what
``init-plugin`` created originally, delete it first and then rerun ``init-plugin`` created originally, delete it first and then rerun
with ``-u``. with ``-u``.
``init-plugin`` puts a ``configure`` script in place that wraps
``cmake`` with a more familiar configure-style configuration. By
default, the script provides two options for specifying paths to the
Bro source (``--bro-dist``) and to the plugin's installation directory
(``--install-root``). To extend ``configure`` with plugin-specific
options (such as search paths for its dependencies) don't edit the
script directly but instead extend ``configure.plugin``, which
``configure`` includes. That way you will be able to more easily
update ``configure`` in the future when the distribution version
changes. In ``configure.plugin`` you can use the predefined shell
function ``append_cache_entry`` to seed values into the CMake cache;
see the installed skeleton version and existing plugins for examples.
Activating a Plugin Activating a Plugin
=================== ===================

202
doc/frameworks/broker.rst Normal file

@ -0,0 +1,202 @@
.. _brokercomm-framework:
======================================
Broker-Enabled Communication Framework
======================================
.. rst-class:: opening
Bro can now use the `Broker Library
<../components/broker/README.html>`_ to exchange information with
other Bro processes. To enable it run Bro's ``configure`` script
with the ``--enable-broker`` option. Note that a C++11 compatible
compiler (e.g. GCC 4.8+ or Clang 3.3+) is required as well as the
`C++ Actor Framework <http://actor-framework.org/>`_.
.. contents::
Connecting to Peers
===================
Communication via Broker must first be turned on via
:bro:see:`BrokerComm::enable`.
Bro can accept incoming connections by calling :bro:see:`BrokerComm::listen`
and then monitor connection status updates via
:bro:see:`BrokerComm::incoming_connection_established` and
:bro:see:`BrokerComm::incoming_connection_broken`.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-listener.bro
Bro can initiate outgoing connections by calling :bro:see:`BrokerComm::connect`
and then monitor connection status updates via
:bro:see:`BrokerComm::outgoing_connection_established`,
:bro:see:`BrokerComm::outgoing_connection_broken`, and
:bro:see:`BrokerComm::outgoing_connection_incompatible`.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/connecting-connector.bro
Remote Printing
===============
To receive remote print messages, first use
:bro:see:`BrokerComm::subscribe_to_prints` to advertise to peers a topic
prefix of interest and then create an event handler for
:bro:see:`BrokerComm::print_handler` to handle any print messages that are
received.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/printing-listener.bro
To send remote print messages, just call :bro:see:`BrokerComm::print`.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/printing-connector.bro
Notice that the subscriber only used the prefix "bro/print/", but is
able to receive messages with full topics of "bro/print/hi",
"bro/print/stuff", and "bro/print/bye". The model here is that the
publisher of a message checks for all subscribers who advertised
interest in a prefix of that message's topic and sends it to them.
Message Format
--------------
For other applications that want to exchange print messages with Bro,
the Broker message format is simply:
.. code:: c++
broker::message{std::string{}};
Remote Events
=============
Receiving remote events is similar to remote prints. Just use
:bro:see:`BrokerComm::subscribe_to_events` and possibly define any new events
along with handlers that peers may want to send.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/events-listener.bro
To send events, there are two choices. The first is to call
:bro:see:`BrokerComm::event` directly. The second option is to use
:bro:see:`BrokerComm::auto_event` to make it so a particular event is
automatically sent to peers whenever it is called locally via the normal
event invocation syntax.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/events-connector.bro
Again, the subscription model is prefix-based.
Message Format
--------------
For other applications that want to exchange event messages with Bro,
the Broker message format is:
.. code:: c++
broker::message{std::string{}, ...};
The first parameter is the name of the event and the remaining ``...``
are its arguments, which are any of the supported Broker data types as
they correspond to the Bro types for the event named in the first
parameter of the message.
Remote Logging
==============
.. btest-include:: ${DOC_ROOT}/frameworks/broker/testlog.bro
Use :bro:see:`BrokerComm::subscribe_to_logs` to advertise interest in logs
written by peers. The topic names that Bro uses are implicitly of the
form "bro/log/<stream-name>".
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-listener.bro
To send remote logs either use :bro:see:`Log::enable_remote_logging` or
:bro:see:`BrokerComm::enable_remote_logs`. The former allows any log stream
to be sent to peers while the latter toggles remote logging for
particular streams.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/logs-connector.bro
Message Format
--------------
For other applications that want to exchange log messages with Bro,
the Broker message format is:
.. code:: c++
broker::message{broker::enum_value{}, broker::record{}};
The enum value corresponds to the stream's :bro:see:`Log::ID` value, and
the record corresponds to a single entry of that log's columns record,
in this case a ``Test::INFO`` value.
Tuning Access Control
=====================
By default, endpoints do not restrict the message topics that they send
to peers and do not restrict what message topics and data store
identifiers get advertised to peers. These are the default
:bro:see:`BrokerComm::EndpointFlags` supplied to :bro:see:`BrokerComm::enable`.
If not using the ``auto_publish`` flag, one can use the
:bro:see:`BrokerComm::publish_topic` and :bro:see:`BrokerComm::unpublish_topic`
functions to manipulate the set of message topics (must match exactly)
that are allowed to be sent to peer endpoints. These settings take
precedence over the per-message ``peers`` flag supplied to functions
that take a :bro:see:`BrokerComm::SendFlags` such as :bro:see:`BrokerComm::print`,
:bro:see:`BrokerComm::event`, :bro:see:`BrokerComm::auto_event` or
:bro:see:`BrokerComm::enable_remote_logs`.
If not using the ``auto_advertise`` flag, one can use the
:bro:see:`BrokerComm::advertise_topic` and :bro:see:`BrokerComm::unadvertise_topic`
to manipulate the set of topic prefixes that are allowed to be
advertised to peers. If an endpoint does not advertise a topic prefix,
the only way a peer can send messages to it is via the ``unsolicited``
flag of :bro:see:`BrokerComm::SendFlags` and choosing a topic with a matching
prefix (i.e. the full topic may be longer than the receiver's prefix; just the
prefix needs to match).
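As a minimal sketch (topic names are arbitrary, and the exact record coercion for the flags is illustrative; the flag and function names are those referenced above), an endpoint could disable the automatic behavior and then allow topics selectively:

.. code:: bro

    const broker_port: port = 9999/tcp &redef;

    event bro_init()
        {
        # Turn off automatic publishing and advertising of topics.
        BrokerComm::enable([$auto_publish=F, $auto_advertise=F]);

        # Allow sending on one exact topic and advertise one prefix.
        BrokerComm::publish_topic("bro/print/hi");
        BrokerComm::advertise_topic("bro/event/");

        BrokerComm::listen(broker_port, "127.0.0.1");
        }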
Distributed Data Stores
=======================
There are three flavors of key-value data store interfaces: master,
clone, and frontend.
A frontend is the common interface to query and modify data stores.
That is, a clone is a specific type of frontend and a master is also a
specific type of frontend, but a standalone frontend can also exist to
e.g. query and modify the contents of a remote master store without
actually "owning" any of the contents itself.
A master data store can be cloned from remote peers which may then
perform lightweight, local queries against the clone, which
automatically stays synchronized with the master store. Clones cannot
modify their content directly, instead they send modifications to the
centralized master store which applies them and then broadcasts them to
all clones.
Master and clone stores get to choose what type of storage backend to
use, e.g., in-memory versus SQLite for persistence. Note that if clones
are used, data store sizes should still be able to fit within memory
regardless of the storage backend as a single snapshot of the master
store is sent in a single chunk to initialize the clone.
Data stores also support expiration on a per-key basis either using an
absolute point in time or a relative amount of time since the entry's
last modification time.
.. btest-include:: ${DOC_ROOT}/frameworks/broker/stores-listener.bro
.. btest-include:: ${DOC_ROOT}/frameworks/broker/stores-connector.bro
In the above example, if a local copy of the store contents isn't
needed, just replace the :bro:see:`BrokerStore::create_clone` call with
:bro:see:`BrokerStore::create_frontend`. Queries will then be made against
the remote master store instead of the local clone.
Note that all queries are made within Bro's asynchronous ``when``
statements and must specify a timeout block.


@ -0,0 +1,19 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
event bro_init()
{
BrokerComm::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1sec);
}
event BrokerComm::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
print "BrokerComm::outgoing_connection_established",
peer_address, peer_port, peer_name;
terminate();
}


@ -0,0 +1,21 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
event bro_init()
{
BrokerComm::enable();
BrokerComm::listen(broker_port, "127.0.0.1");
}
event BrokerComm::incoming_connection_established(peer_name: string)
{
print "BrokerComm::incoming_connection_established", peer_name;
}
event BrokerComm::incoming_connection_broken(peer_name: string)
{
print "BrokerComm::incoming_connection_broken", peer_name;
terminate();
}


@ -0,0 +1,31 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
global my_event: event(msg: string, c: count);
global my_auto_event: event(msg: string, c: count);
event bro_init()
{
BrokerComm::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1sec);
BrokerComm::auto_event("bro/event/my_auto_event", my_auto_event);
}
event BrokerComm::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
print "BrokerComm::outgoing_connection_established",
peer_address, peer_port, peer_name;
BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "hi", 0));
event my_auto_event("stuff", 88);
BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "...", 1));
event my_auto_event("more stuff", 51);
BrokerComm::event("bro/event/my_event", BrokerComm::event_args(my_event, "bye", 2));
}
event BrokerComm::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
}


@ -0,0 +1,37 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
global msg_count = 0;
global my_event: event(msg: string, c: count);
global my_auto_event: event(msg: string, c: count);
event bro_init()
{
BrokerComm::enable();
BrokerComm::subscribe_to_events("bro/event/");
BrokerComm::listen(broker_port, "127.0.0.1");
}
event BrokerComm::incoming_connection_established(peer_name: string)
{
print "BrokerComm::incoming_connection_established", peer_name;
}
event my_event(msg: string, c: count)
{
++msg_count;
print "got my_event", msg, c;
if ( msg_count == 5 )
terminate();
}
event my_auto_event(msg: string, c: count)
{
++msg_count;
print "got my_auto_event", msg, c;
if ( msg_count == 5 )
terminate();
}


@ -0,0 +1,40 @@
@load ./testlog
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
redef Log::enable_local_logging = F;
redef Log::enable_remote_logging = F;
global n = 0;
event bro_init()
{
BrokerComm::enable();
BrokerComm::enable_remote_logs(Test::LOG);
BrokerComm::connect("127.0.0.1", broker_port, 1sec);
}
event do_write()
{
if ( n == 6 )
return;
Log::write(Test::LOG, [$msg = "ping", $num = n]);
++n;
event do_write();
}
event BrokerComm::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
print "BrokerComm::outgoing_connection_established",
peer_address, peer_port, peer_name;
event do_write();
}
event BrokerComm::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
}


@ -0,0 +1,25 @@
@load ./testlog
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
event bro_init()
{
BrokerComm::enable();
BrokerComm::subscribe_to_logs("bro/log/Test::LOG");
BrokerComm::listen(broker_port, "127.0.0.1");
}
event BrokerComm::incoming_connection_established(peer_name: string)
{
print "BrokerComm::incoming_connection_established", peer_name;
}
event Test::log_test(rec: Test::Info)
{
print "wrote log", rec;
if ( rec$num == 5 )
terminate();
}


@ -0,0 +1,26 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "connector";
event bro_init()
{
BrokerComm::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1sec);
}
event BrokerComm::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
print "BrokerComm::outgoing_connection_established",
peer_address, peer_port, peer_name;
BrokerComm::print("bro/print/hi", "hello");
BrokerComm::print("bro/print/stuff", "...");
BrokerComm::print("bro/print/bye", "goodbye");
}
event BrokerComm::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
}


@ -0,0 +1,26 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
redef BrokerComm::endpoint_name = "listener";
global msg_count = 0;
event bro_init()
{
BrokerComm::enable();
BrokerComm::subscribe_to_prints("bro/print/");
BrokerComm::listen(broker_port, "127.0.0.1");
}
event BrokerComm::incoming_connection_established(peer_name: string)
{
print "BrokerComm::incoming_connection_established", peer_name;
}
event BrokerComm::print_handler(msg: string)
{
++msg_count;
print "got print message", msg;
if ( msg_count == 3 )
terminate();
}


@ -0,0 +1,53 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
global h: opaque of BrokerStore::Handle;
function dv(d: BrokerComm::Data): BrokerComm::DataVector
{
local rval: BrokerComm::DataVector;
rval[0] = d;
return rval;
}
global ready: event();
event BrokerComm::outgoing_connection_broken(peer_address: string,
peer_port: port)
{
terminate();
}
event BrokerComm::outgoing_connection_established(peer_address: string,
peer_port: port,
peer_name: string)
{
local myset: set[string] = {"a", "b", "c"};
local myvec: vector of string = {"alpha", "beta", "gamma"};
h = BrokerStore::create_master("mystore");
BrokerStore::insert(h, BrokerComm::data("one"), BrokerComm::data(110));
BrokerStore::insert(h, BrokerComm::data("two"), BrokerComm::data(223));
BrokerStore::insert(h, BrokerComm::data("myset"), BrokerComm::data(myset));
BrokerStore::insert(h, BrokerComm::data("myvec"), BrokerComm::data(myvec));
BrokerStore::increment(h, BrokerComm::data("one"));
BrokerStore::decrement(h, BrokerComm::data("two"));
BrokerStore::add_to_set(h, BrokerComm::data("myset"), BrokerComm::data("d"));
BrokerStore::remove_from_set(h, BrokerComm::data("myset"), BrokerComm::data("b"));
BrokerStore::push_left(h, BrokerComm::data("myvec"), dv(BrokerComm::data("delta")));
BrokerStore::push_right(h, BrokerComm::data("myvec"), dv(BrokerComm::data("omega")));
when ( local res = BrokerStore::size(h) )
{
print "master size", res;
event ready();
}
timeout 10sec
{ print "timeout"; }
}
event bro_init()
{
BrokerComm::enable();
BrokerComm::connect("127.0.0.1", broker_port, 1secs);
BrokerComm::auto_event("bro/event/ready", ready);
}


@ -0,0 +1,43 @@
const broker_port: port = 9999/tcp &redef;
redef exit_only_after_terminate = T;
global h: opaque of BrokerStore::Handle;
global expected_key_count = 4;
global key_count = 0;
function do_lookup(key: string)
{
when ( local res = BrokerStore::lookup(h, BrokerComm::data(key)) )
{
++key_count;
print "lookup", key, res;
if ( key_count == expected_key_count )
terminate();
}
timeout 10sec
{ print "timeout", key; }
}
event ready()
{
h = BrokerStore::create_clone("mystore");
when ( local res = BrokerStore::keys(h) )
{
print "clone keys", res;
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 0)));
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 1)));
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 2)));
do_lookup(BrokerComm::refine_to_string(BrokerComm::vector_lookup(res$result, 3)));
}
timeout 10sec
{ print "timeout"; }
}
event bro_init()
{
BrokerComm::enable();
BrokerComm::subscribe_to_events("bro/event/ready");
BrokerComm::listen(broker_port, "127.0.0.1");
}

View file

@ -0,0 +1,19 @@
module Test;
export {
redef enum Log::ID += { LOG };
type Info: record {
msg: string &log;
num: count &log;
};
global log_test: event(rec: Test::Info);
}
event bro_init() &priority=5
{
BrokerComm::enable();
Log::create_stream(Test::LOG, [$columns=Test::Info, $ev=log_test, $path="test"]);
}

View file

@ -1,7 +1,8 @@
event file_mime_type(f: fa_file, mime_type: string) event file_sniff(f: fa_file, meta: fa_metadata)
{ {
if ( ! meta?$mime_type ) return;
print "new file", f$id; print "new file", f$id;
if ( mime_type == "text/plain" ) if ( meta$mime_type == "text/plain" )
Files::add_analyzer(f, Files::ANALYZER_MD5); Files::add_analyzer(f, Files::ANALYZER_MD5);
} }

View file

@ -14,4 +14,4 @@ Frameworks
notice notice
signatures signatures
sumstats sumstats
broker

View file

@ -344,7 +344,7 @@ example for the ``Foo`` module:
event bro_init() &priority=5 event bro_init() &priority=5
{ {
# Create the stream. This also adds a default filter automatically. # Create the stream. This also adds a default filter automatically.
Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo]); Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo, $path="foo"]);
} }
You can also add the state to the :bro:type:`connection` record to make You can also add the state to the :bro:type:`connection` record to make

View file

@ -88,15 +88,15 @@ directly make modifications to the :bro:see:`Notice::Info` record
given as the argument to the hook. given as the argument to the hook.
Here's a simple example which tells Bro to send an email for all notices of Here's a simple example which tells Bro to send an email for all notices of
type :bro:see:`SSH::Password_Guessing` if the server is 10.0.0.1: type :bro:see:`SSH::Password_Guessing` if the guesser attempted to log in to
the server at 192.168.56.103:
.. code:: bro .. btest-include:: ${DOC_ROOT}/frameworks/notice_ssh_guesser.bro
hook Notice::policy(n: Notice::Info) .. btest:: notice_ssh_guesser.bro
{
if ( n$note == SSH::Password_Guessing && n$id$resp_h == 10.0.0.1 ) @TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/ssh/sshguess.pcap ${DOC_ROOT}/frameworks/notice_ssh_guesser.bro
add n$actions[Notice::ACTION_EMAIL]; @TEST-EXEC: btest-rst-cmd cat notice.log
}
.. note:: .. note::
@ -111,10 +111,9 @@ a hook body to run before default hook bodies might look like this:
.. code:: bro .. code:: bro
hook Notice::policy(n: Notice::Info) &priority=5 hook Notice::policy(n: Notice::Info) &priority=5
{ {
if ( n$note == SSH::Password_Guessing && n$id$resp_h == 10.0.0.1 ) # Insert your code here.
add n$actions[Notice::ACTION_EMAIL]; }
}
Hooks can also abort later hook bodies with the ``break`` keyword. This Hooks can also abort later hook bodies with the ``break`` keyword. This
is primarily useful if one wants to completely preempt processing by is primarily useful if one wants to completely preempt processing by

View file

@ -0,0 +1,10 @@
@load protocols/ssh/detect-bruteforcing
redef SSH::password_guesses_limit=10;
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Password_Guessing && /192\.168\.56\.103/ in n$sub )
add n$actions[Notice::ACTION_EMAIL];
}

View file

@ -7,15 +7,18 @@ global mime_to_ext: table[string] of string = {
["text/html"] = "html", ["text/html"] = "html",
}; };
event file_mime_type(f: fa_file, mime_type: string) event file_sniff(f: fa_file, meta: fa_metadata)
{ {
if ( f$source != "HTTP" ) if ( f$source != "HTTP" )
return; return;
if ( mime_type !in mime_to_ext ) if ( ! meta?$mime_type )
return; return;
local fname = fmt("%s-%s.%s", f$source, f$id, mime_to_ext[mime_type]); if ( meta$mime_type !in mime_to_ext )
return;
local fname = fmt("%s-%s.%s", f$source, f$id, mime_to_ext[meta$mime_type]);
print fmt("Extracting file %s", fname); print fmt("Extracting file %s", fname);
Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]); Files::add_analyzer(f, Files::ANALYZER_EXTRACT, [$extract_filename=fname]);
} }

View file

@ -68,7 +68,7 @@ that ``bash`` and ``python`` are in your ``PATH``):
.. console:: .. console::
sudo pkg_add -r bash cmake swig bison python perl sudo pkg_add -r bash cmake swig bison python perl py27-sqlite3
* Mac OS X: * Mac OS X:
@ -113,19 +113,15 @@ Using Pre-Built Binary Release Packages
======================================= =======================================
See the `bro downloads page`_ for currently supported/targeted See the `bro downloads page`_ for currently supported/targeted
platforms for binary releases. platforms for binary releases and for installation instructions.
* RPM * Linux Packages
.. console:: Linux-based binary installations are usually performed by adding
information about the Bro packages to the respective system packaging
sudo yum localinstall Bro-*.rpm tool. Then the usual system utilities such as ``apt``, ``yum``
or ``zypper`` are used to perform the installation. By default,
* DEB installations of binary packages will go into ``/opt/bro``.
.. console::
sudo gdebi Bro-*.deb
* MacOS Disk Image with Installer * MacOS Disk Image with Installer
@ -133,8 +129,6 @@ platforms for binary releases.
Everything installed by the package will go into ``/opt/bro``. Everything installed by the package will go into ``/opt/bro``.
The primary install prefix for binary packages is ``/opt/bro``. The primary install prefix for binary packages is ``/opt/bro``.
Non-MacOS packages that include BroControl also put variable/runtime
data (e.g. Bro logs) in ``/var/opt/bro``.
Installing from Source Installing from Source
========================== ==========================

View file

@ -30,7 +30,7 @@ export {
event bro_init() &priority=3 event bro_init() &priority=3
{ {
Log::create_stream(MimeMetrics::LOG, [$columns=Info]); Log::create_stream(MimeMetrics::LOG, [$columns=Info, $path="mime_metrics"]);
local r1: SumStats::Reducer = [$stream="mime.bytes", local r1: SumStats::Reducer = [$stream="mime.bytes",
$apply=set(SumStats::SUM)]; $apply=set(SumStats::SUM)];
local r2: SumStats::Reducer = [$stream="mime.hits", local r2: SumStats::Reducer = [$stream="mime.hits",

View file

@ -0,0 +1,24 @@
@load protocols/ssl/expiring-certs
const watched_servers: set[addr] = {
87.98.220.10,
} &redef;
# Site::local_nets usually isn't something you need to modify if
# BroControl automatically sets it up from networks.cfg. It's
# shown here for completeness.
redef Site::local_nets += {
87.98.0.0/16,
};
hook Notice::policy(n: Notice::Info)
{
if ( n$note != SSL::Certificate_Expired )
return;
if ( n$id$resp_h !in watched_servers )
return;
add n$actions[Notice::ACTION_EMAIL];
}

View file

@ -156,9 +156,11 @@ changes we want to make:
notice that means an SSL connection was established and the server's notice that means an SSL connection was established and the server's
certificate couldn't be validated using Bro's default trust roots, but certificate couldn't be validated using Bro's default trust roots, but
we want to ignore it. we want to ignore it.
2) ``SSH::Login`` is a notice type that is triggered when an SSH connection 2) ``SSL::Certificate_Expired`` is a notice type that is triggered when
attempt looks like it may have been successful, and we want email when an SSL connection was established using an expired certificate. We
that happens, but only for certain servers. want email when that happens, but only for certain servers on the
local network (Bro can also proactively monitor for certs that will
soon expire, but this is just for demonstration purposes).
We've defined *what* we want to do, but need to know *where* to do it. We've defined *what* we want to do, but need to know *where* to do it.
The answer is to use a script written in the Bro programming language, so The answer is to use a script written in the Bro programming language, so
@ -203,7 +205,7 @@ the variable's value may not change at run-time, but whose initial value can be
modified via the ``redef`` operator at parse-time. modified via the ``redef`` operator at parse-time.
Let's continue on our path to modify the behavior for the two SSL Let's continue on our path to modify the behavior for the two SSL
and SSH notices. Looking at :doc:`/scripts/base/frameworks/notice/main.bro`, notices. Looking at :doc:`/scripts/base/frameworks/notice/main.bro`,
we see that it advertises: we see that it advertises:
.. code:: bro .. code:: bro
@ -216,7 +218,7 @@ we see that it advertises:
const ignored_types: set[Notice::Type] = {} &redef; const ignored_types: set[Notice::Type] = {} &redef;
} }
That's exactly what we want to do for the SSL notice. Add to ``local.bro``: That's exactly what we want to do for the first notice. Add to ``local.bro``:
.. code:: bro .. code:: bro
@ -248,38 +250,30 @@ is valid before installing it and then restarting the Bro instance:
stopping bro ... stopping bro ...
starting bro ... starting bro ...
Now that the SSL notice is ignored, let's look at how to send an email on Now that the SSL notice is ignored, let's look at how to send an email
the SSH notice. The notice framework has a similar option called on the other notice. The notice framework has a similar option called
``emailed_types``, but using that would generate email for all SSH servers and ``emailed_types``, but using that would generate email for all SSL
we only want email for logins to certain ones. There is a ``policy`` hook servers with expired certificates and we only want email for connections
that is actually what is used to implement the simple functionality of to certain ones. There is a ``policy`` hook that is actually what is
``ignored_types`` and used to implement the simple functionality of ``ignored_types`` and
``emailed_types``, but it's extensible such that the condition and action taken ``emailed_types``, but it's extensible such that the condition and
on notices can be user-defined. action taken on notices can be user-defined.
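For reference, the blanket alternative mentioned above is a one-line redef; this sketch assumes the stock ``Notice::emailed_types`` option and would email every notice of that type, not just those for the watched servers:

    redef Notice::emailed_types += { SSL::Certificate_Expired };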
In ``local.bro``, let's define a new ``policy`` hook handler body In ``local.bro``, let's define a new ``policy`` hook handler body:
that takes the email action for SSH logins only for a defined set of servers:
.. code:: bro .. btest-include:: ${DOC_ROOT}/quickstart/conditional-notice.bro
const watched_servers: set[addr] = { .. btest:: conditional-notice
192.168.1.100,
192.168.1.101,
192.168.1.102,
} &redef;
hook Notice::policy(n: Notice::Info) @TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/tls/tls-expired-cert.trace ${DOC_ROOT}/quickstart/conditional-notice.bro
{ @TEST-EXEC: btest-rst-cmd cat notice.log
if ( n$note == SSH::SUCCESSFUL_LOGIN && n$id$resp_h in watched_servers )
add n$actions[Notice::ACTION_EMAIL];
}
You'll just have to trust the syntax for now, but what we've done is You'll just have to trust the syntax for now, but what we've done is
first declare our own variable to hold a set of watched addresses, first declare our own variable to hold a set of watched addresses,
``watched_servers``; then added a hook handler body to the policy that will ``watched_servers``; then added a hook handler body to the policy that
generate an email whenever the notice type is an SSH login and the responding will generate an email whenever the notice type is an SSL expired
host stored certificate and the responding host stored inside the ``Info`` record's
inside the ``Info`` record's connection field is in the set of watched servers. connection field is in the set of watched servers.
.. note:: Record field member access is done with the '$' character .. note:: Record field member access is done with the '$' character
instead of a '.' as might be expected from other languages, in instead of a '.' as might be expected from other languages, in

View file

@ -43,8 +43,6 @@ The Bro scripting language supports the following attributes.
+-----------------------------+-----------------------------------------------+ +-----------------------------+-----------------------------------------------+
| :bro:attr:`&mergeable` |Prefer set union for synchronized state. | | :bro:attr:`&mergeable` |Prefer set union for synchronized state. |
+-----------------------------+-----------------------------------------------+ +-----------------------------+-----------------------------------------------+
| :bro:attr:`&group` |Group event handlers to activate/deactivate. |
+-----------------------------+-----------------------------------------------+
| :bro:attr:`&error_handler` |Used internally for reporter framework events. | | :bro:attr:`&error_handler` |Used internally for reporter framework events. |
+-----------------------------+-----------------------------------------------+ +-----------------------------+-----------------------------------------------+
| :bro:attr:`&type_column` |Used by input framework for "port" type. | | :bro:attr:`&type_column` |Used by input framework for "port" type. |
@ -198,11 +196,6 @@ Here is a more detailed explanation of each attribute:
inconsistencies and can be avoided by unifying the two sets, rather inconsistencies and can be avoided by unifying the two sets, rather
than merely overwriting the old value. than merely overwriting the old value.
.. bro:attr:: &group
Groups event handlers such that those in the same group can be
jointly activated or deactivated.
.. bro:attr:: &error_handler .. bro:attr:: &error_handler
Internally set on the events that are associated with the reporter Internally set on the events that are associated with the reporter

View file

@ -294,7 +294,10 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: for .. bro:keyword:: for
A "for" loop iterates over each element in a string, set, vector, or A "for" loop iterates over each element in a string, set, vector, or
table and executes a statement for each iteration. table and executes a statement for each iteration. Currently,
modifying a container's membership while iterating over it may
result in undefined behavior, so avoid adding or removing elements
inside the loop.
For each iteration of the loop, a loop variable will be assigned to an For each iteration of the loop, a loop variable will be assigned to an
element if the expression evaluates to a string or set, or an index if element if the expression evaluates to a string or set, or an index if
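A minimal sketch of the iteration rules above (hypothetical values; only reads happen inside the loops, since adding or removing elements there would fall under the undefined behavior just described):

    event bro_init()
        {
        local tlds: set[string] = { "com", "org", "net" };
        for ( tld in tlds )
            print tld;

        local hits: table[string] of count = { ["com"] = 3, ["org"] = 1 };
        # For tables, the loop variable is the index; use it to fetch the value.
        for ( dom in hits )
            print dom, hits[dom];
        }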

View file

@ -23,7 +23,7 @@ function factorial(n: count): count
event bro_init() event bro_init()
{ {
# Create the logging stream. # Create the logging stream.
Log::create_stream(LOG, [$columns=Info]); Log::create_stream(LOG, [$columns=Info, $path="factor"]);
} }
event bro_done() event bro_done()

View file

@ -37,7 +37,7 @@ function mod5(id: Log::ID, path: string, rec: Factor::Info) : string
event bro_init() event bro_init()
{ {
Log::create_stream(LOG, [$columns=Info]); Log::create_stream(LOG, [$columns=Info, $path="factor"]);
local filter: Log::Filter = [$name="split-mod5s", $path_func=mod5]; local filter: Log::Filter = [$name="split-mod5s", $path_func=mod5];
Log::add_filter(Factor::LOG, filter); Log::add_filter(Factor::LOG, filter);

View file

@ -22,7 +22,7 @@ function factorial(n: count): count
event bro_init() event bro_init()
{ {
Log::create_stream(LOG, [$columns=Info, $ev=log_factor]); Log::create_stream(LOG, [$columns=Info, $ev=log_factor, $path="factor"]);
} }
event bro_done() event bro_done()

View file

@ -363,7 +363,7 @@ decrypted from HTTP streams is stored in
excerpt from :doc:`/scripts/base/protocols/http/main.bro` below. excerpt from :doc:`/scripts/base/protocols/http/main.bro` below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro .. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/http/main.bro
:lines: 9-11,20-22,121 :lines: 9-11,20-22,125
Because the constant was declared with the ``&redef`` attribute, if we Because the constant was declared with the ``&redef`` attribute, if we
needed to turn this option on globally, we could do so by adding the needed to turn this option on globally, we could do so by adding the
@ -826,7 +826,7 @@ example of the ``record`` data type in the earlier sections, the
``conn.log``, is shown by the excerpt below. ``conn.log``, is shown by the excerpt below.
.. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/conn/main.bro .. btest-include:: ${BRO_SRC_ROOT}/scripts/base/protocols/conn/main.bro
:lines: 10-12,16-17,19,21,23,25,28,31,35,38,57,63,69,92,95,99,102,106,110-111,116 :lines: 10-12,16-17,19,21,23,25,28,31,35,38,57,63,69,75,98,101,105,108,112,116-117,122
Looking at the structure of the definition, a new collection of data Looking at the structure of the definition, a new collection of data
types is being defined as a type called ``Info``. Since this type types is being defined as a type called ``Info``. Since this type

View file

@ -1,14 +0,0 @@
#!/bin/sh
# CMake/CPack versions before 2.8.3 have bugs that can create bad packages
# Since packages will be built on several different systems, a single
# version of CMake is required to obtain consistency, but can be increased
# as new versions of CMake come out that also produce working packages.
CMAKE_PACK_REQ="cmake version 2.8.6"
CMAKE_VER=`cmake -version`
if [ "${CMAKE_VER}" != "${CMAKE_PACK_REQ}" ]; then
echo "Package creation requires ${CMAKE_PACK_REQ}" >&2
exit 1
fi

View file

@ -3,8 +3,6 @@
# This script generates binary DEB packages. # This script generates binary DEB packages.
# They can be found in ../build/ after running. # They can be found in ../build/ after running.
./check-cmake || { exit 1; }
# The DEB CPack generator depends on `dpkg-shlibdeps` to automatically # The DEB CPack generator depends on `dpkg-shlibdeps` to automatically
# determine what dependencies to set for the packages # determine what dependencies to set for the packages
type dpkg-shlibdeps > /dev/null 2>&1 || { type dpkg-shlibdeps > /dev/null 2>&1 || {

View file

@ -3,14 +3,6 @@
# This script creates binary packages for Mac OS X. # This script creates binary packages for Mac OS X.
# They can be found in ../build/ after running. # They can be found in ../build/ after running.
cmake -P /dev/stdin << "EOF"
if ( ${CMAKE_VERSION} VERSION_LESS 2.8.9 )
message(FATAL_ERROR "CMake >= 2.8.9 required to build package")
endif ()
EOF
[ $? -ne 0 ] && exit 1;
type sw_vers > /dev/null 2>&1 || { type sw_vers > /dev/null 2>&1 || {
echo "Unable to get Mac OS X version" >&2; echo "Unable to get Mac OS X version" >&2;
exit 1; exit 1;

View file

@ -3,8 +3,6 @@
# This script generates binary RPM packages. # This script generates binary RPM packages.
# They can be found in ../build/ after running. # They can be found in ../build/ after running.
./check-cmake || { exit 1; }
# The RPM CPack generator depends on `rpmbuild` to create packages # The RPM CPack generator depends on `rpmbuild` to create packages
type rpmbuild > /dev/null 2>&1 || { type rpmbuild > /dev/null 2>&1 || {
echo "\ echo "\

View file

@ -53,7 +53,8 @@ function set_limit(f: fa_file, args: Files::AnalyzerArgs, n: count): bool
function on_add(f: fa_file, args: Files::AnalyzerArgs) function on_add(f: fa_file, args: Files::AnalyzerArgs)
{ {
if ( ! args?$extract_filename ) if ( ! args?$extract_filename )
args$extract_filename = cat("extract-", f$source, "-", f$id); args$extract_filename = cat("extract-", f$last_active, "-", f$source,
"-", f$id);
f$info$extracted = args$extract_filename; f$info$extracted = args$extract_filename;
args$extract_filename = build_path_compressed(prefix, args$extract_filename); args$extract_filename = build_path_compressed(prefix, args$extract_filename);

View file

@ -0,0 +1,2 @@
@load ./consts
@load ./main

View file

@ -0,0 +1,184 @@
module PE;
export {
const machine_types: table[count] of string = {
[0x00] = "UNKNOWN",
[0x1d3] = "AM33",
[0x8664] = "AMD64",
[0x1c0] = "ARM",
[0x1c4] = "ARMNT",
[0xaa64] = "ARM64",
[0xebc] = "EBC",
[0x14c] = "I386",
[0x200] = "IA64",
[0x9041] = "M32R",
[0x266] = "MIPS16",
[0x366] = "MIPSFPU",
[0x466] = "MIPSFPU16",
[0x1f0] = "POWERPC",
[0x1f1] = "POWERPCFP",
[0x166] = "R4000",
[0x1a2] = "SH3",
[0x1a3] = "SH3DSP",
[0x1a6] = "SH4",
[0x1a8] = "SH5",
[0x1c2] = "THUMB",
[0x169] = "WCEMIPSV2"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const file_characteristics: table[count] of string = {
[0x1] = "RELOCS_STRIPPED",
[0x2] = "EXECUTABLE_IMAGE",
[0x4] = "LINE_NUMS_STRIPPED",
[0x8] = "LOCAL_SYMS_STRIPPED",
[0x10] = "AGGRESSIVE_WS_TRIM",
[0x20] = "LARGE_ADDRESS_AWARE",
[0x80] = "BYTES_REVERSED_LO",
[0x100] = "32BIT_MACHINE",
[0x200] = "DEBUG_STRIPPED",
[0x400] = "REMOVABLE_RUN_FROM_SWAP",
[0x800] = "NET_RUN_FROM_SWAP",
[0x1000] = "SYSTEM",
[0x2000] = "DLL",
[0x4000] = "UP_SYSTEM_ONLY",
[0x8000] = "BYTES_REVERSED_HI"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const dll_characteristics: table[count] of string = {
[0x40] = "DYNAMIC_BASE",
[0x80] = "FORCE_INTEGRITY",
[0x100] = "NX_COMPAT",
[0x200] = "NO_ISOLATION",
[0x400] = "NO_SEH",
[0x800] = "NO_BIND",
[0x2000] = "WDM_DRIVER",
[0x8000] = "TERMINAL_SERVER_AWARE"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const windows_subsystems: table[count] of string = {
[0] = "UNKNOWN",
[1] = "NATIVE",
[2] = "WINDOWS_GUI",
[3] = "WINDOWS_CUI",
[7] = "POSIX_CUI",
[9] = "WINDOWS_CE_GUI",
[10] = "EFI_APPLICATION",
[11] = "EFI_BOOT_SERVICE_DRIVER",
[12] = "EFI_RUNTIME_DRIVER",
[13] = "EFI_ROM",
[14] = "XBOX"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const directories: table[count] of string = {
[0] = "Export Table",
[1] = "Import Table",
[2] = "Resource Table",
[3] = "Exception Table",
[4] = "Certificate Table",
[5] = "Base Relocation Table",
[6] = "Debug",
[7] = "Architecture",
[8] = "Global Ptr",
[9] = "TLS Table",
[10] = "Load Config Table",
[11] = "Bound Import",
[12] = "IAT",
[13] = "Delay Import Descriptor",
[14] = "CLR Runtime Header",
[15] = "Reserved"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const section_characteristics: table[count] of string = {
[0x8] = "TYPE_NO_PAD",
[0x20] = "CNT_CODE",
[0x40] = "CNT_INITIALIZED_DATA",
[0x80] = "CNT_UNINITIALIZED_DATA",
[0x100] = "LNK_OTHER",
[0x200] = "LNK_INFO",
[0x800] = "LNK_REMOVE",
[0x1000] = "LNK_COMDAT",
[0x8000] = "GPREL",
[0x20000] = "MEM_16BIT",
[0x40000] = "MEM_LOCKED",
[0x80000] = "MEM_PRELOAD",
[0x100000] = "ALIGN_1BYTES",
[0x200000] = "ALIGN_2BYTES",
[0x300000] = "ALIGN_4BYTES",
[0x400000] = "ALIGN_8BYTES",
[0x500000] = "ALIGN_16BYTES",
[0x600000] = "ALIGN_32BYTES",
[0x700000] = "ALIGN_64BYTES",
[0x800000] = "ALIGN_128BYTES",
[0x900000] = "ALIGN_256BYTES",
[0xa00000] = "ALIGN_512BYTES",
[0xb00000] = "ALIGN_1024BYTES",
[0xc00000] = "ALIGN_2048BYTES",
[0xd00000] = "ALIGN_4096BYTES",
[0xe00000] = "ALIGN_8192BYTES",
[0x1000000] = "LNK_NRELOC_OVFL",
[0x2000000] = "MEM_DISCARDABLE",
[0x4000000] = "MEM_NOT_CACHED",
[0x8000000] = "MEM_NOT_PAGED",
[0x10000000] = "MEM_SHARED",
[0x20000000] = "MEM_EXECUTE",
[0x40000000] = "MEM_READ",
[0x80000000] = "MEM_WRITE"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
const os_versions: table[count, count] of string = {
[10,0] = "Windows 10",
[6,4] = "Windows 10 Technical Preview",
[6,3] = "Windows 8.1 or Server 2012 R2",
[6,2] = "Windows 8 or Server 2012",
[6,1] = "Windows 7 or Server 2008 R2",
[6,0] = "Windows Vista or Server 2008",
[5,2] = "Windows XP x64 or Server 2003",
[5,1] = "Windows XP",
[5,0] = "Windows 2000",
[4,90] = "Windows Me",
[4,10] = "Windows 98",
[4,0] = "Windows 95 or NT 4.0",
[3,51] = "Windows NT 3.51",
[3,50] = "Windows NT 3.5",
[3,2] = "Windows 3.2",
[3,11] = "Windows for Workgroups 3.11",
[3,10] = "Windows 3.1 or NT 3.1",
[3,0] = "Windows 3.0",
[2,11] = "Windows 2.11",
[2,10] = "Windows 2.10",
[2,0] = "Windows 2.0",
[1,4] = "Windows 1.04",
[1,3] = "Windows 1.03",
[1,1] = "Windows 1.01",
[1,0] = "Windows 1.0",
} &default=function(i: count, j: count):string { return fmt("unknown-%d.%d", i, j); };
const section_descs: table[string] of string = {
[".bss"] = "Uninitialized data",
[".cormeta"] = "CLR metadata that indicates that the object file contains managed code",
[".data"] = "Initialized data",
[".debug$F"] = "Generated FPO debug information",
[".debug$P"] = "Precompiled debug types",
[".debug$S"] = "Debug symbols",
[".debug$T"] = "Debug types",
[".drective"] = "Linker options",
[".edata"] = "Export tables",
[".idata"] = "Import tables",
[".idlsym"] = "Includes registered SEH to support IDL attributes",
[".pdata"] = "Exception information",
[".rdata"] = "Read-only initialized data",
[".reloc"] = "Image relocations",
[".rsrc"] = "Resource directory",
[".sbss"] = "GP-relative uninitialized data",
[".sdata"] = "GP-relative initialized data",
[".srdata"] = "GP-relative read-only data",
[".sxdata"] = "Registered exception handler data",
[".text"] = "Executable code",
[".tls"] = "Thread-local storage",
[".tls$"] = "Thread-local storage",
[".vsdata"] = "GP-relative initialized data",
[".xdata"] = "Exception information",
} &default=function(i: string):string { return fmt("unknown-%s", i); };
}

View file

@ -0,0 +1,137 @@
module PE;
@load ./consts.bro
export {
redef enum Log::ID += { LOG };
type Info: record {
## Current timestamp.
ts: time &log;
## File id of this portable executable file.
id: string &log;
## The target machine that the file was compiled for.
machine: string &log &optional;
## The time that the file was created at.
compile_ts: time &log &optional;
## The required operating system.
os: string &log &optional;
## The subsystem that is required to run this file.
subsystem: string &log &optional;
## Is the file an executable, or just an object file?
is_exe: bool &log &default=T;
## Is the file a 64-bit executable?
is_64bit: bool &log &default=T;
## Does the file support Address Space Layout Randomization?
uses_aslr: bool &log &default=F;
## Does the file support Data Execution Prevention?
uses_dep: bool &log &default=F;
## Does the file enforce code integrity checks?
uses_code_integrity: bool &log &default=F;
## Does the file use structured exception handling?
uses_seh: bool &log &default=T;
## Does the file have an import table?
has_import_table: bool &log &optional;
## Does the file have an export table?
has_export_table: bool &log &optional;
## Does the file have an attribute certificate table?
has_cert_table: bool &log &optional;
## Does the file have a debug table?
has_debug_data: bool &log &optional;
## The names of the sections, in order.
section_names: vector of string &log &optional;
};
## Event for accessing logged records.
global log_pe: event(rec: Info);
## A hook that gets called when we first see a PE file.
global set_file: hook(f: fa_file);
}
redef record fa_file += {
pe: Info &optional;
};
const pe_mime_types = { "application/x-dosexec" };
event bro_init() &priority=5
{
Files::register_for_mime_types(Files::ANALYZER_PE, pe_mime_types);
Log::create_stream(LOG, [$columns=Info, $ev=log_pe, $path="pe"]);
}
hook set_file(f: fa_file) &priority=5
{
if ( ! f?$pe )
f$pe = [$ts=network_time(), $id=f$id];
}
event pe_dos_header(f: fa_file, h: PE::DOSHeader) &priority=5
{
hook set_file(f);
}
event pe_file_header(f: fa_file, h: PE::FileHeader) &priority=5
{
hook set_file(f);
f$pe$machine = machine_types[h$machine];
f$pe$compile_ts = h$ts;
f$pe$is_exe = ( h$optional_header_size > 0 );
for ( c in h$characteristics )
{
if ( file_characteristics[c] == "32BIT_MACHINE" )
f$pe$is_64bit = F;
}
}
event pe_optional_header(f: fa_file, h: PE::OptionalHeader) &priority=5
{
hook set_file(f);
# Only EXEs have optional headers
if ( ! f$pe$is_exe )
return;
f$pe$os = os_versions[h$os_version_major, h$os_version_minor];
f$pe$subsystem = windows_subsystems[h$subsystem];
for ( c in h$dll_characteristics )
{
if ( dll_characteristics[c] == "DYNAMIC_BASE" )
f$pe$uses_aslr = T;
if ( dll_characteristics[c] == "FORCE_INTEGRITY" )
f$pe$uses_code_integrity = T;
if ( dll_characteristics[c] == "NX_COMPAT" )
f$pe$uses_dep = T;
if ( dll_characteristics[c] == "NO_SEH" )
f$pe$uses_seh = F;
}
f$pe$has_export_table = (|h$table_sizes| > 0 && h$table_sizes[0] > 0);
f$pe$has_import_table = (|h$table_sizes| > 1 && h$table_sizes[1] > 0);
f$pe$has_cert_table = (|h$table_sizes| > 4 && h$table_sizes[4] > 0);
f$pe$has_debug_data = (|h$table_sizes| > 6 && h$table_sizes[6] > 0);
}
event pe_section_header(f: fa_file, h: PE::SectionHeader) &priority=5
{
hook set_file(f);
# Only EXEs have section headers
if ( ! f$pe$is_exe )
return;
if ( ! f$pe?$section_names )
f$pe$section_names = vector();
f$pe$section_names[|f$pe$section_names|] = h$name;
}
event file_state_remove(f: fa_file) &priority=-5
{
if ( f?$pe && f$pe?$machine )
Log::write(LOG, f$pe);
}
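A minimal sketch of consuming the stream defined above from a site script; it relies only on the PE::Info fields shown here (``machine`` is optional, so it is checked before use):

    event PE::log_pe(rec: PE::Info)
        {
        # Report 32-bit executables along with the parsed machine type.
        if ( rec?$machine && rec$is_exe && ! rec$is_64bit )
            print fmt("32-bit PE %s built for %s", rec$id, rec$machine);
        }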

View file

@ -195,7 +195,7 @@ event Input::end_of_data(name: string, source: string)
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Unified2::LOG, [$columns=Info, $ev=log_unified2]); Log::create_stream(Unified2::LOG, [$columns=Info, $ev=log_unified2, $path="unified2"]);
if ( sid_msg == "" ) if ( sid_msg == "" )
{ {

View file

@ -36,7 +36,7 @@ export {
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509]); Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509, $path="x509"]);
} }
redef record Files::Info += { redef record Files::Info += {
@ -47,6 +47,9 @@ redef record Files::Info += {
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=5 event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=5
{ {
if ( ! f$info?$mime_type )
f$info$mime_type = "application/pkix-cert";
f$info$x509 = [$ts=f$info$ts, $id=f$id, $certificate=cert, $handle=cert_ref]; f$info$x509 = [$ts=f$info$ts, $id=f$id, $certificate=cert, $handle=cert_ref];
} }

View file

@ -0,0 +1 @@
@load ./main

View file

@ -0,0 +1,103 @@
##! Various data structure definitions for use with Bro's communication system.
module BrokerComm;
export {
## A name used to identify this endpoint to peers.
## .. bro:see:: BrokerComm::connect BrokerComm::listen
const endpoint_name = "" &redef;
## Change communication behavior.
type EndpointFlags: record {
## Whether to restrict message topics that can be published to peers.
auto_publish: bool &default = T;
## Whether to restrict what message topics or data store identifiers
## the local endpoint advertises to peers (e.g. subscribing to
## events or making a master data store available).
auto_advertise: bool &default = T;
};
## Fine-grained tuning of communication behavior for a particular message.
type SendFlags: record {
## Send the message to the local endpoint.
self: bool &default = F;
## Send the message to peer endpoints that advertise interest in
## the topic associated with the message.
peers: bool &default = T;
## Send the message to peer endpoints even if they don't advertise
## interest in the topic associated with the message.
unsolicited: bool &default = F;
};
## Opaque communication data.
type Data: record {
d: opaque of BrokerComm::Data &optional;
};
## Opaque communication data.
type DataVector: vector of BrokerComm::Data;
## Opaque event communication data.
type EventArgs: record {
## The name of the event. Not set if invalid event or arguments.
name: string &optional;
## The arguments to the event.
args: DataVector;
};
## Opaque communication data used as a convenient way to wrap key-value
## pairs that comprise table entries.
type TableItem : record {
key: BrokerComm::Data;
val: BrokerComm::Data;
};
}
module BrokerStore;
export {
## Whether a data store query could be completed or not.
type QueryStatus: enum {
SUCCESS,
FAILURE,
};
## An expiry time for a key-value pair inserted into a data store.
type ExpiryTime: record {
## Absolute point in time at which to expire the entry.
absolute: time &optional;
## A point in time relative to the last modification time at which
## to expire the entry. New modifications will delay the expiration.
since_last_modification: interval &optional;
};
## The result of a data store query.
type QueryResult: record {
## Whether the query completed or not.
status: BrokerStore::QueryStatus;
## The result of the query. Certain queries may use a particular
## data type (e.g. querying store size always returns a count, but
## a lookup may return various data types).
result: BrokerComm::Data;
};
## Options to tune the SQLite storage backend.
type SQLiteOptions: record {
## File system path of the database.
path: string &default = "store.sqlite";
};
## Options to tune the RocksDB storage backend.
type RocksDBOptions: record {
## File system path of the database.
path: string &default = "store.rocksdb";
};
## Options to tune the particular storage backends.
type BackendOptions: record {
sqlite: SQLiteOptions &default = SQLiteOptions();
rocksdb: RocksDBOptions &default = RocksDBOptions();
};
}
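A hypothetical sketch of how these records could be passed to the store and messaging calls used elsewhere in this change; the optional ``ExpiryTime`` and ``SendFlags`` arguments are assumptions based on the record definitions above, not confirmed signatures:

    event bro_init()
        {
        BrokerComm::enable();
        local h = BrokerStore::create_master("sessions");

        # Assumed optional argument: expire the entry five minutes after
        # its last modification.
        BrokerStore::insert(h, BrokerComm::data("token"), BrokerComm::data(42),
                            [$since_last_modification=5min]);

        # Assumed optional argument: also deliver the message to the
        # local endpoint.
        BrokerComm::print("bro/print/debug", "token stored", [$self=T]);
        }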

View file

@ -159,5 +159,5 @@ event bro_init() &priority=5
terminate(); terminate();
} }
Log::create_stream(Cluster::LOG, [$columns=Info]); Log::create_stream(Cluster::LOG, [$columns=Info, $path="cluster"]);
} }

View file

@ -164,7 +164,7 @@ const src_names = {
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Communication::LOG, [$columns=Info]); Log::create_stream(Communication::LOG, [$columns=Info, $path="communication"]);
} }
function do_script_log_common(level: count, src: count, msg: string) function do_script_log_common(level: count, src: count, msg: string)

View file

@ -38,7 +38,7 @@ redef record connection += {
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(DPD::LOG, [$columns=Info]); Log::create_stream(DPD::LOG, [$columns=Info, $path="dpd"]);
} }
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=10 event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=10

View file

@ -1,3 +1,9 @@
@load-sigs ./archive
@load-sigs ./audio
@load-sigs ./font
@load-sigs ./general @load-sigs ./general
@load-sigs ./image
@load-sigs ./msoffice @load-sigs ./msoffice
@load-sigs ./libmagic @load-sigs ./video
@load-sigs ./libmagic

View file

@ -0,0 +1,176 @@
signature file-tar {
file-magic /^[[:print:]\x00]{100}([[:digit:]\x20]{7}\x00){3}([[:digit:]\x20]{11}\x00){2}([[:digit:]\x00\x20]{7}[\x20\x00])[0-7\x00]/
file-mime "application/x-tar", 100
}
# This is low priority so that files using zip as a
# container will be identified correctly.
signature file-zip {
file-mime "application/zip", 10
file-magic /^PK\x03\x04.{2}/
}
# Multivolume Zip archive
signature file-multi-zip {
file-mime "application/zip", 10
file-magic /^PK\x07\x08PK\x03\x04/
}
# RAR
signature file-rar {
file-mime "application/x-rar", 70
file-magic /^Rar!/
}
# GZIP
signature file-gzip {
file-mime "application/x-gzip", 100
file-magic /\x1f\x8b/
}
# Microsoft Cabinet
signature file-ms-cab {
file-mime "application/vnd.ms-cab-compressed", 110
file-magic /^MSCF\x00\x00\x00\x00/
}
# Mac OS X DMG files
signature file-dmg {
file-magic /^(\x78\x01\x73\x0D\x62\x62\x60|\x78\xDA\x63\x60\x18\x05|\x78\x01\x63\x60\x18\x05|\x78\xDA\x73\x0D|\x78[\x01\xDA]\xED[\xD0-\xD9])/
file-mime "application/x-dmg", 100
}
# XAR (eXtensible ARchive) format.
# Mac OS X uses this for the .pkg format.
signature file-xar {
file-magic /^xar\!/
file-mime "application/x-xar", 100
}
# RPM
signature file-magic-auto352 {
file-mime "application/x-rpm", 70
file-magic /^(drpm|\xed\xab\xee\xdb)/
}
# StuffIt
signature file-stuffit {
file-mime "application/x-stuffit", 70
file-magic /^(SIT\x21|StuffIt)/
}
# Archived data
signature file-x-archive {
file-mime "application/x-archive", 70
file-magic /^!?<ar(ch)?>/
}
# ARC archive data
signature file-arc {
file-mime "application/x-arc", 70
file-magic /^[\x00-\x7f]{2}[\x02-\x0a\x14\x48]\x1a/
}
# EET archive
signature file-eet {
file-mime "application/x-eet", 70
file-magic /^\x1e\xe7\xff\x00/
}
# Zoo archive
signature file-zoo {
file-mime "application/x-zoo", 70
file-magic /^.{20}\xdc\xa7\xc4\xfd/
}
# LZ4 compressed data (legacy format)
signature file-lz4-legacy {
file-mime "application/x-lz4", 70
file-magic /(\x02\x21\x4c\x18)/
}
# LZ4 compressed data
signature file-lz4 {
file-mime "application/x-lz4", 70
file-magic /^\x04\x22\x4d\x18/
}
# LRZIP compressed data
signature file-lrzip {
file-mime "application/x-lrzip", 1
file-magic /^LRZI/
}
# LZIP compressed data
signature file-lzip {
file-mime "application/x-lzip", 70
file-magic /^LZIP/
}
# Self-extracting PKZIP archive
signature file-magic-auto434 {
file-mime "application/zip", 340
file-magic /^MZ.{28}(Copyright 1989\x2d1990 PKWARE Inc|PKLITE Copr)\x2e/
}
# LHA archive (LZH)
signature file-lzh {
file-mime "application/x-lzh", 80
file-magic /^.{2}-(lh[ abcdex0-9]|lz[s2-8]|lz[s2-8]|pm[s012]|pc1)-/
}
# WARC Archive
signature file-warc {
file-mime "application/warc", 50
file-magic /^WARC\x2f/
}
# 7-zip archive data
signature file-7zip {
file-mime "application/x-7z-compressed", 50
file-magic /^7z\xbc\xaf\x27\x1c/
}
# XZ compressed data
signature file-xz {
file-mime "application/x-xz", 90
file-magic /^\xfd7zXZ\x00/
}
# LHa self-extracting archive
signature file-magic-auto436 {
file-mime "application/x-lha", 120
file-magic /^MZ.{34}LH[aA]\x27s SFX/
}
# ARJ archive data
signature file-arj {
file-mime "application/x-arj", 50
file-magic /^\x60\xea/
}
# Byte-swapped cpio archive
signature file-bs-cpio {
file-mime "application/x-cpio", 50
file-magic /(\x71\xc7|\xc7\x71)/
}
# CPIO archive
signature file-cpio {
file-mime "application/x-cpio", 50
file-magic /^(\xc7\x71|\x71\xc7)/
}
# Compress'd data
signature file-compress {
file-mime "application/x-compress", 50
file-magic /^\x1f\x9d/
}
# LZMA compressed data
signature file-lzma {
file-mime "application/x-lzma", 71
file-magic /^\x5d\x00\x00/
}

View file

@ -0,0 +1,13 @@
# MPEG v3 audio
signature file-mpeg-audio {
file-mime "audio/mpeg", 20
file-magic /^\xff[\xe2\xe3\xf2\xf3\xf6\xf7\xfa\xfb\xfc\xfd]/
}
# MPEG v4 audio
signature file-m4a {
file-mime "audio/m4a", 70
file-magic /^....ftyp(m4a)/
}

View file

@ -0,0 +1,41 @@
# Web Open Font Format
signature file-woff {
file-magic /^wOFF/
file-mime "application/font-woff", 70
}
# TrueType font
signature file-ttf {
file-mime "application/x-font-ttf", 80
file-magic /^\x00\x01\x00\x00\x00/
}
signature file-embedded-opentype {
file-mime "application/vnd.ms-fontobject", 50
file-magic /^.{34}LP/
}
# X11 SNF font
signature file-snf {
file-mime "application/x-font-sfn", 70
file-magic /^(\x04\x00\x00\x00|\x00\x00\x00\x04).{100}(\x04\x00\x00\x00|\x00\x00\x00\x04)/
}
# OpenType font
signature file-opentype {
file-mime "application/vnd.ms-opentype", 70
file-magic /^OTTO/
}
# FrameMaker Font file
signature file-maker-screen-font {
file-mime "application/x-mif", 190
file-magic /^\x3cMakerScreenFont/
}
# >0 string,=SplineFontDB: (len=13), ["Spline Font Database "], swap_endian=0
signature file-spline-font-db {
file-mime "application/vnd.font-fontforge-sfd", 160
file-magic /^SplineFontDB\x3a/
}

View file

@ -1,18 +1,87 @@
# General purpose file magic signatures. # General purpose file magic signatures.
# Plaintext
# (Including BOMs for UTF-8, 16, and 32)
signature file-plaintext { signature file-plaintext {
file-magic /^([[:print:][:space:]]{10})/ file-mime "text/plain", -20
file-mime "text/plain", -20 file-magic /^(\xef\xbb\xbf|(\x00\x00)?\xfe\xff|\xff\xfe(\x00\x00)?)?[[:space:]\x20-\x7E]{10}/
} }
signature file-tar { signature file-json {
file-magic /^[[:print:]\x00]{100}([[:digit:]\x20]{7}\x00){3}([[:digit:]\x20]{11}\x00){2}([[:digit:]\x00\x20]{7}[\x20\x00])[0-7\x00]/ file-mime "text/json", 1
file-mime "application/x-tar", 100 file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*\{[\x0d\x0a[:blank:]]*(["][^"]{1,}["]|[a-zA-Z][a-zA-Z0-9\\_]*)[\x0d\x0a[:blank:]]*:[\x0d\x0a[:blank:]]*(["]|\[|\{|[0-9]|true|false)/
} }
signature file-zip { signature file-json2 {
file-mime "application/zip", 10 file-mime "text/json", 1
file-magic /^PK\x03\x04.{2}/ file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*\[[\x0d\x0a[:blank:]]*(((["][^"]{1,}["]|[0-9]{1,}(\.[0-9]{1,})?|true|false)[\x0d\x0a[:blank:]]*,)|\{|\[)[\x0d\x0a[:blank:]]*/
}
# Match empty JSON documents.
signature file-json3 {
file-mime "text/json", 0
file-magic /^(\xef\xbb\xbf)?[\x0d\x0a[:blank:]]*(\[\]|\{\})[\x0d\x0a[:blank:]]*$/
}
signature file-xml {
file-mime "application/xml", 10
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<\?xml /
}
signature file-xhtml {
file-mime "text/html", 100
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<(![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]|[hH][tT][mM][lL]|[mM][eE][tT][aA] {1,}[hH][tT][tT][pP]-[eE][qQ][uU][iI][vV])/
}
signature file-html {
file-mime "text/html", 49
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<![dD][oO][cC][tT][yY][pP][eE] {1,}[hH][tT][mM][lL]/
}
signature file-html2 {
file-mime "text/html", 20
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<([hH][eE][aA][dD]|[hH][tT][mM][lL]|[tT][iI][tT][lL][eE]|[bB][oO][dD][yY])/
}
signature file-rss {
file-mime "text/rss", 90
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[rR][sS][sS]/
}
signature file-atom {
file-mime "text/atom", 100
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<([rR][sS][sS][^>]*xmlns:atom|[fF][eE][eE][dD][^>]*xmlns=["']?http:\/\/www.w3.org\/2005\/Atom["']?)/
}
signature file-soap {
file-mime "application/soap+xml", 49
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[sS][oO][aA][pP](-[eE][nN][vV])?:[eE][nN][vV][eE][lL][oO][pP][eE]/
}
signature file-cross-domain-policy {
file-mime "text/x-cross-domain-policy", 49
file-magic /^([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<![dD][oO][cC][tT][yY][pP][eE] {1,}[cC][rR][oO][sS][sS]-[dD][oO][mM][aA][iI][nN]-[pP][oO][lL][iI][cC][yY]/
}
signature file-cross-domain-policy2 {
file-mime "text/x-cross-domain-policy", 49
file-magic /^([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[cC][rR][oO][sS][sS]-[dD][oO][mM][aA][iI][nN]-[pP][oO][lL][iI][cC][yY]/
}
signature file-xmlrpc {
file-mime "application/xml-rpc", 49
file-magic /^(\xef\xbb\xbf)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*(<\?xml .*\?>)?([\x0d\x0a[:blank:]]*(<!--.*-->)?[\x0d\x0a[:blank:]]*)*<[mM][eE][tT][hH][oO][dD][rR][eE][sS][pP][oO][nN][sS][eE]>/
}
signature file-coldfusion {
file-mime "magnus-internal/cold-fusion", 20
file-magic /^([\x0d\x0a[:blank:]]*(<!--.*-->)?)*<(CFPARAM|CFSET|CFIF)/
}
# Microsoft LNK files
signature file-lnk {
file-mime "application/x-ms-shortcut", 49
file-magic /^\x4C\x00\x00\x00\x01\x14\x02\x00\x00\x00\x00\x00\xC0\x00\x00\x00\x00\x10\x00\x00\x00\x46/
} }
signature file-jar { signature file-jar {
@ -21,8 +90,20 @@ signature file-jar {
} }
signature file-java-applet { signature file-java-applet {
file-magic /^\xca\xfe\xba\xbe...[\x2e-\x34]/
file-mime "application/x-java-applet", 71 file-mime "application/x-java-applet", 71
file-magic /^\xca\xfe\xba\xbe...[\x2d-\x34]/
}
# OCSP requests over HTTP.
signature file-ocsp-request {
file-magic /^.{11,19}\x06\x05\x2b\x0e\x03\x02\x1a/
file-mime "application/ocsp-request", 71
}
# OCSP responses over HTTP.
signature file-ocsp-response {
file-magic /^.{11,19}\x06\x09\x2B\x06\x01\x05\x05\x07\x30\x01\x01/
file-mime "application/ocsp-response", 71
} }
# Shockwave flash # Shockwave flash
@ -37,12 +118,6 @@ signature file-tnef {
file-mime "application/vnd.ms-tnef", 100 file-mime "application/vnd.ms-tnef", 100
} }
# Mac OS X DMG files
signature file-dmg {
file-magic /^(\x78\x01\x73\x0D\x62\x62\x60|\x78\xDA\x63\x60\x18\x05|\x78\x01\x63\x60\x18\x05|\x78\xDA\x73\x0D|\x78[\x01\xDA]\xED[\xD0-\xD9])/
file-mime "application/x-dmg", 100
}
# Mac OS X Mach-O executable # Mac OS X Mach-O executable
signature file-mach-o { signature file-mach-o {
file-magic /^[\xce\xcf]\xfa\xed\xfe/ file-magic /^[\xce\xcf]\xfa\xed\xfe/
@ -55,13 +130,6 @@ signature file-mach-o-universal {
file-mime "application/x-mach-o-executable", 100 file-mime "application/x-mach-o-executable", 100
} }
# XAR (eXtensible ARchive) format.
# Mac OS X uses this for the .pkg format.
signature file-xar {
file-magic /^xar\!/
file-mime "application/x-xar", 100
}
signature file-pkcs7 { signature file-pkcs7 {
file-magic /^MIME-Version:.*protocol=\"application\/pkcs7-signature\"/ file-magic /^MIME-Version:.*protocol=\"application\/pkcs7-signature\"/
file-mime "application/pkcs7-signature", 100 file-mime "application/pkcs7-signature", 100
@ -79,16 +147,6 @@ signature file-jnlp {
file-mime "application/x-java-jnlp-file", 100 file-mime "application/x-java-jnlp-file", 100
} }
signature file-ico {
file-magic /^\x00\x00\x01\x00/
file-mime "image/x-icon", 70
}
signature file-cur {
file-magic /^\x00\x00\x02\x00/
file-mime "image/x-cursor", 70
}
signature file-pcap { signature file-pcap {
file-magic /^(\xa1\xb2\xc3\xd4|\xd4\xc3\xb2\xa1)/ file-magic /^(\xa1\xb2\xc3\xd4|\xd4\xc3\xb2\xa1)/
file-mime "application/vnd.tcpdump.pcap", 70 file-mime "application/vnd.tcpdump.pcap", 70
@ -119,7 +177,58 @@ signature file-python {
file-mime "text/x-python", 60 file-mime "text/x-python", 60
} }
signature file-awk {
file-mime "text/x-awk", 60
file-magic /^\x23\x21[^\n]{1,15}bin\/(env[[:space:]]+)?(g|n)?awk/
}
signature file-tcl {
file-mime "text/x-tcl", 60
file-magic /^\x23\x21[^\n]{1,15}bin\/(env[[:space:]]+)?(wish|tcl)/
}
signature file-lua {
file-mime "text/x-lua", 49
file-magic /^\x23\x21[^\n]{1,15}bin\/(env[[:space:]]+)?lua/
}
signature file-javascript {
file-mime "application/javascript", 60
file-magic /^\x23\x21[^\n]{1,15}bin\/(env[[:space:]]+)?node(js)?/
}
signature file-javascript2 {
file-mime "application/javascript", 60
file-magic /^[\x0d\x0a[:blank:]]*<[sS][cC][rR][iI][pP][tT][[:blank:]]+([tT][yY][pP][eE]|[lL][aA][nN][gG][uU][aA][gG][eE])=['"]?([tT][eE][xX][tT]\/)?[jJ][aA][vV][aA][sS][cC][rR][iI][pP][tT]/
}
signature file-javascript3 {
file-mime "application/javascript", 60
# This seems to be a somewhat common idiom in javascript.
file-magic /^[\x0d\x0a[:blank:]]*for \(;;\);/
}
signature file-javascript4 {
file-mime "application/javascript", 60
file-magic /^[\x0d\x0a[:blank:]]*document\.write(ln)?[:blank:]?\(/
}
signature file-javascript5 {
file-mime "application/javascript", 60
file-magic /^\(function\(\)[[:blank:]\n]*\{/
}
signature file-javascript6 {
file-mime "application/javascript", 60
file-magic /^[\x0d\x0a[:blank:]]*<script>[\x0d\x0a[:blank:]]*(var|function) /
}
signature file-php { signature file-php {
file-mime "text/x-php", 60
file-magic /^\x23\x21[^\n]{1,15}bin\/(env[[:space:]]+)?php/
}
signature file-php2 {
file-magic /^.*<\?php/ file-magic /^.*<\?php/
file-mime "text/x-php", 40 file-mime "text/x-php", 40
} }
@ -135,3 +244,23 @@ signature file-skp {
file-magic /^\xFF\xFE\xFF\x0E\x53\x00\x6B\x00\x65\x00\x74\x00\x63\x00\x68\x00\x55\x00\x70\x00\x20\x00\x4D\x00\x6F\x00\x64\x00\x65\x00\x6C\x00/ file-magic /^\xFF\xFE\xFF\x0E\x53\x00\x6B\x00\x65\x00\x74\x00\x63\x00\x68\x00\x55\x00\x70\x00\x20\x00\x4D\x00\x6F\x00\x64\x00\x65\x00\x6C\x00/
file-mime "application/skp", 100 file-mime "application/skp", 100
} }
signature file-elf-object {
file-mime "application/x-object", 50
file-magic /\x7fELF[\x01\x02](\x01.{10}\x01\x00|\x02.{10}\x00\x01)/
}
signature file-elf {
file-mime "application/x-executable", 50
file-magic /\x7fELF[\x01\x02](\x01.{10}\x02\x00|\x02.{10}\x00\x02)/
}
signature file-elf-sharedlib {
file-mime "application/x-sharedlib", 50
file-magic /\x7fELF[\x01\x02](\x01.{10}\x03\x00|\x02.{10}\x00\x03)/
}
signature file-elf-coredump {
file-mime "application/x-coredump", 50
file-magic /\x7fELF[\x01\x02](\x01.{10}\x04\x00|\x02.{10}\x00\x04)/
}

View file

@ -0,0 +1,166 @@
signature file-tiff {
file-mime "image/tiff", 70
file-magic /^(MM\x00[\x2a\x2b]|II[\x2a\x2b]\x00)/
}
signature file-gif {
file-mime "image/gif", 70
file-magic /^GIF8/
}
# JPEG image
signature file-jpeg {
file-mime "image/jpeg", 52
file-magic /^\xff\xd8/
}
signature file-bmp {
file-mime "image/x-ms-bmp", 50
file-magic /BM.{12}[\x0c\x28\x40\x6c\x7c\x80]\x00/
}
signature file-ico {
file-magic /^\x00\x00\x01\x00/
file-mime "image/x-icon", 70
}
signature file-cur {
file-magic /^\x00\x00\x02\x00/
file-mime "image/x-cursor", 70
}
signature file-magic-auto289 {
file-mime "image/vnd.adobe.photoshop", 70
file-magic /^8BPS/
}
signature file-png {
file-mime "image/png", 110
file-magic /^\x89PNG/
}
# JPEG 2000
signature file-jp2 {
file-mime "image/jp2", 60
file-magic /.{4}ftypjp2/
}
# JPEG 2000
signature file-jp22 {
file-mime "image/jp2", 70
file-magic /\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a.{8}jp2 /
}
# JPEG 2000
signature file-jpx {
file-mime "image/jpx", 70
file-magic /\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a.{8}jpx /
}
# JPEG 2000
signature file-jpm {
file-mime "image/jpm", 70
file-magic /\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a.{8}jpm /
}
# Xcursor image
signature file-x-cursor {
file-mime "image/x-xcursor", 70
file-magic /^Xcur/
}
# NIFF image
signature file-niff {
file-mime "image/x-niff", 70
file-magic /^IIN1/
}
# OpenEXR image
signature file-openexr {
file-mime "image/x-exr", 70
file-magic /^\x76\x2f\x31\x01/
}
# DPX image
signature file-dpx {
file-mime "image/x-dpx", 70
file-magic /^SDPX/
}
# Cartesian Perceptual Compression image
signature file-cpi {
file-mime "image/x-cpi", 70
file-magic /(CPC\xb2)/
}
signature file-orf {
file-mime "image/x-olympus-orf", 70
file-magic /IIR[OS]|MMOR/
}
# Foveon X3F raw image
signature file-x3r {
file-mime "image/x-x3f", 70
file-magic /^FOVb/
}
# Paint.NET image
signature file-paint-net {
file-mime "image/x-paintnet", 70
file-magic /^PDN3/
}
# Corel Draw Picture
signature file-coreldraw {
file-mime "image/x-coreldraw", 70
file-magic /^RIFF....CDR[A6]/
}
# Netpbm PAM image
signature file-netbpm {
file-mime "image/x-portable-pixmap", 50
file-magic /^P7/
}
# JPEG 2000 image
signature file-jpeg-2000 {
file-mime "image/jp2", 50
file-magic /^....jP/
}
# DjVU Images
signature file-djvu {
file-mime "image/vnd.djvu", 70
file-magic /AT\x26TFORM.{4}(DJV[MUI]|THUM)/
}
# DWG AutoDesk AutoCAD
signature file-dwg {
file-mime "image/vnd.dwg", 90
file-magic /^(AC[12]\.|AC10)/
}
# GIMP XCF image
signature file-gimp-xcf {
file-mime "image/x-xcf", 110
file-magic /^gimp xcf/
}
# Polar Monitor Bitmap text
signature file-polar-monitor-bitmap {
file-mime "image/x-polar-monitor-bitmap", 160
file-magic /^\x5bBitmapInfo2\x5d/
}
# Award BIOS bitmap
signature file-award-bitmap {
file-mime "image/x-award-bmp", 20
file-magic /^AWBM/
}
# Award BIOS Logo, 136 x 84
signature file-award-bios-logo {
file-mime "image/x-award-bioslogo", 50
file-magic /^\x11[\x06\x09]/
}

File diff suppressed because it is too large

View file

@ -26,3 +26,9 @@ signature file-pptx {
file-magic /^PK\x03\x04.{26}(\[Content_Types\]\.xml|_rels\x2f\.rels|ppt\x2f).*PK\x03\x04.{26}ppt\x2f/ file-magic /^PK\x03\x04.{26}(\[Content_Types\]\.xml|_rels\x2f\.rels|ppt\x2f).*PK\x03\x04.{26}ppt\x2f/
file-mime "application/vnd.openxmlformats-officedocument.presentationml.presentation", 80 file-mime "application/vnd.openxmlformats-officedocument.presentationml.presentation", 80
} }
signature file-msaccess {
file-mime "application/x-msaccess", 180
file-magic /.{4}Standard (Jet|ACE) DB\x00/
}

View file

@ -0,0 +1,96 @@
# Macromedia Flash Video
signature file-flv {
file-mime "video/x-flv", 60
file-magic /^FLV/
}
# FLI animation
signature file-fli {
file-mime "video/x-fli", 50
file-magic /^.{4}\x11\xaf/
}
# FLC animation
signature file-flc {
file-mime "video/x-flc", 50
file-magic /^.{4}\x12\xaf/
}
# Motion JPEG 2000
signature file-mj2 {
file-mime "video/mj2", 70
file-magic /\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a.{8}mjp2/
}
# MNG video
signature file-mng {
file-mime "video/x-mng", 70
file-magic /^\x8aMNG/
}
# JNG video
signature file-jng {
file-mime "video/x-jng", 70
file-magic /^\x8bJNG/
}
# Generic MPEG container
signature file-mpeg {
file-mime "video/mpeg", 50
file-magic /(\x00\x00\x01[\xb0-\xbb])/
}
# MPV
signature file-mpv {
file-mime "video/mpv", 71
file-magic /(\x00\x00\x01\xb3)/
}
# H.264
signature file-h264 {
file-mime "video/h264", 41
file-magic /(\x00\x00\x00\x01)([\x07\x27\x47\x67\x87\xa7\xc7\xe7])/
}
# WebM video
signature file-webm {
file-mime "video/webm", 70
file-magic /(\x1a\x45\xdf\xa3)(.*)(B\x82)(.{1})(webm)/
}
# Matroska video
signature file-matroska {
file-mime "video/x-matroska", 110
file-magic /(\x1a\x45\xdf\xa3)(.*)(B\x82)(.{1})(matroska)/
}
# MP2P
signature file-mp2p {
file-mime "video/mp2p", 21
file-magic /\x00\x00\x01\xba([\x40-\x7f\xc0-\xff])/
}
# Silicon Graphics video
signature file-sgi-movie {
file-mime "video/x-sgi-movie", 70
file-magic /^MOVI/
}
# Apple QuickTime movie
signature file-quicktime {
file-mime "video/quicktime", 70
file-magic /^....(mdat|moov)/
}
# MPEG v4 video
signature file-mp4 {
file-mime "video/mp4", 70
file-magic /^....ftyp(isom|mp4[12])/
}
# 3GPP Video
signature file-3gpp {
file-mime "video/3gpp", 60
file-magic /^....ftyp(3g[egps2]|avc1|mmp4)/
}

View file

@ -129,12 +129,11 @@ export {
## files based on the detected mime type of the file. ## files based on the detected mime type of the file.
const analyze_by_mime_type_automatically = T &redef; const analyze_by_mime_type_automatically = T &redef;
## The default setting for if the file reassembler is enabled for ## The default setting for file reassembly.
## each file.
const enable_reassembler = T &redef; const enable_reassembler = T &redef;
## The default per-file reassembly buffer size. ## The default per-file reassembly buffer size.
const reassembly_buffer_size = 1048576 &redef; const reassembly_buffer_size = 524288 &redef;
## Allows the file reassembler to be used if it's necessary because the ## Allows the file reassembler to be used if it's necessary because the
## file is transferred out of order. ## file is transferred out of order.
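Both constants are ``&redef``, so a site that prefers the previous buffer size, or no reassembly at all, can override them; a small sketch:

    # Restore the former 1 MB per-file reassembly buffer.
    redef Files::reassembly_buffer_size = 1048576;

    # Or turn the reassembler off entirely.
    # redef Files::enable_reassembler = F;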
@ -313,7 +312,7 @@ global analyzer_add_callbacks: table[Files::Tag] of function(f: fa_file, args: A
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Files::LOG, [$columns=Info, $ev=log_files]); Log::create_stream(Files::LOG, [$columns=Info, $ev=log_files, $path="files"]);
} }
function set_info(f: fa_file) function set_info(f: fa_file)
@ -484,16 +483,19 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
add f$info$rx_hosts[f$is_orig ? cid$resp_h : cid$orig_h]; add f$info$rx_hosts[f$is_orig ? cid$resp_h : cid$orig_h];
} }
event file_mime_type(f: fa_file, mime_type: string) &priority=10 event file_sniff(f: fa_file, meta: fa_metadata) &priority=10
{ {
set_info(f); set_info(f);
f$info$mime_type = mime_type; if ( ! meta?$mime_type )
return;
f$info$mime_type = meta$mime_type;
if ( analyze_by_mime_type_automatically && if ( analyze_by_mime_type_automatically &&
mime_type in mime_type_to_analyzers ) meta$mime_type in mime_type_to_analyzers )
{ {
local analyzers = mime_type_to_analyzers[mime_type]; local analyzers = mime_type_to_analyzers[meta$mime_type];
for ( a in analyzers ) for ( a in analyzers )
{ {
add f$info$analyzers[Files::analyzer_name(a)]; add f$info$analyzers[Files::analyzer_name(a)];

View file

@ -32,6 +32,8 @@ export {
FILE_NAME, FILE_NAME,
## Certificate SHA-1 hash. ## Certificate SHA-1 hash.
CERT_HASH, CERT_HASH,
## Public key MD5 hash. (SSH server host keys are a good example.)
PUBKEY_HASH,
}; };
## Data about an :bro:type:`Intel::Item`. ## Data about an :bro:type:`Intel::Item`.
@ -174,7 +176,7 @@ global min_data_store: MinDataStore &redef;
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(LOG, [$columns=Info, $ev=log_intel]); Log::create_stream(LOG, [$columns=Info, $ev=log_intel, $path="intel"]);
} }
function find(s: Seen): bool function find(s: Seen): bool
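A hypothetical sketch of feeding the new indicator type into the framework; it assumes the usual ``Intel::insert``/``Intel::Item`` interface and uses a made-up MD5 value purely for illustration:

    event bro_init()
        {
        Intel::insert([$indicator="0123456789abcdef0123456789abcdef",
                       $indicator_type=Intel::PUBKEY_HASH,
                       $meta=[$source="sketch-feed"]]);
        }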

View file

@ -50,11 +50,17 @@ export {
## The event receives a single same parameter, an instance of ## The event receives a single same parameter, an instance of
## type ``columns``. ## type ``columns``.
ev: any &optional; ev: any &optional;
## A path that will be inherited by any filters added to the
## stream which do not already specify their own path.
path: string &optional;
}; };
## Builds the default path values for log filters if not otherwise ## Builds the default path values for log filters if not otherwise
## specified by a filter. The default implementation uses *id* ## specified by a filter. The default implementation uses *id*
## to derive a name. ## to derive a name. Upon adding a filter to a stream, if neither
## ``path`` nor ``path_func`` is explicitly set by them, then
## this function is used as the ``path_func``.
## ##
## id: The ID associated with the log stream. ## id: The ID associated with the log stream.
## ##
@ -144,7 +150,9 @@ export {
## to compute the string dynamically. It is ok to return ## to compute the string dynamically. It is ok to return
## different strings for separate calls, but be careful: it's ## different strings for separate calls, but be careful: it's
## easy to flood the disk by returning a new string for each ## easy to flood the disk by returning a new string for each
## connection. ## connection. Upon adding a filter to a stream, if neither
## ``path`` nor ``path_func`` is explicitly set by them, then
## :bro:see:`default_path_func` is used.
## ##
## id: The ID associated with the log stream. ## id: The ID associated with the log stream.
## ##
@ -380,6 +388,8 @@ export {
global active_streams: table[ID] of Stream = table(); global active_streams: table[ID] of Stream = table();
} }
global all_streams: table[ID] of Stream = table();
# We keep a script-level copy of all filters so that we can manipulate them. # We keep a script-level copy of all filters so that we can manipulate them.
global filters: table[ID, string] of Filter; global filters: table[ID, string] of Filter;
@ -464,6 +474,7 @@ function create_stream(id: ID, stream: Stream) : bool
return F; return F;
active_streams[id] = stream; active_streams[id] = stream;
all_streams[id] = stream;
return add_default_filter(id); return add_default_filter(id);
} }
@ -471,6 +482,7 @@ function create_stream(id: ID, stream: Stream) : bool
function remove_stream(id: ID) : bool function remove_stream(id: ID) : bool
{ {
delete active_streams[id]; delete active_streams[id];
delete all_streams[id];
return __remove_stream(id); return __remove_stream(id);
} }
@ -483,10 +495,12 @@ function disable_stream(id: ID) : bool
function add_filter(id: ID, filter: Filter) : bool function add_filter(id: ID, filter: Filter) : bool
{ {
# This is a work-around for the fact that we can't forward-declare local stream = all_streams[id];
# the default_path_func and then use it as &default in the record
# definition. if ( stream?$path && ! filter?$path )
if ( ! filter?$path_func ) filter$path = stream$path;
if ( ! filter?$path && ! filter?$path_func )
filter$path_func = default_path_func; filter$path_func = default_path_func;
filters[id, filter$name] = filter; filters[id, filter$name] = filter;
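The change above introduces a stream-level ``path`` that filters inherit when they set neither ``path`` nor ``path_func``. A short sketch of how a script-defined stream uses it (module and field names are illustrative):

    module MyApp;

    export {
        redef enum Log::ID += { LOG };

        type Info: record {
            ts:  time   &log;
            msg: string &log;
        };
    }

    event bro_init()
        {
        # Filters added to this stream without an explicit $path or $path_func
        # inherit "myapp", so the default filter writes myapp.log.
        Log::create_stream(MyApp::LOG, [$columns=Info, $path="myapp"]);
        }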

View file

@ -37,6 +37,8 @@ export {
user: string; user: string;
## The remote host to which to transfer logs. ## The remote host to which to transfer logs.
host: string; host: string;
## The port to connect to. Defaults to 22
host_port: count &default=22;
## The path/directory on the remote host to send logs. ## The path/directory on the remote host to send logs.
path: string; path: string;
}; };
@ -63,8 +65,8 @@ function sftp_postprocessor(info: Log::RotationInfo): bool
{ {
local dst = fmt("%s/%s.%s.log", d$path, info$path, local dst = fmt("%s/%s.%s.log", d$path, info$path,
strftime(Log::sftp_rotation_date_format, info$open)); strftime(Log::sftp_rotation_date_format, info$open));
command += fmt("echo put %s %s | sftp -b - %s@%s;", info$fname, dst, command += fmt("echo put %s %s | sftp -P %d -b - %s@%s;", info$fname, dst,
d$user, d$host); d$host_port, d$user, d$host);
} }
command += fmt("/bin/rm %s", info$fname); command += fmt("/bin/rm %s", info$fname);

View file

@ -19,9 +19,9 @@ export {
## the :bro:id:`NOTICE` function. The convention is to give a general ## the :bro:id:`NOTICE` function. The convention is to give a general
## category along with the specific notice separating words with ## category along with the specific notice separating words with
## underscores and using leading capitals on each word except for ## underscores and using leading capitals on each word except for
## abbreviations which are kept in all capitals. For example, ## abbreviations which are kept in all capitals. For example,
## SSH::Password_Guessing is for hosts that have crossed a threshold of ## SSH::Password_Guessing is for hosts that have crossed a threshold of
## heuristically determined failed SSH logins. ## failed SSH logins.
type Type: enum { type Type: enum {
## Notice reporting a count of how often a notice occurred. ## Notice reporting a count of how often a notice occurred.
Tally, Tally,
@ -349,9 +349,9 @@ function log_mailing_postprocessor(info: Log::RotationInfo): bool
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Notice::LOG, [$columns=Info, $ev=log_notice]); Log::create_stream(Notice::LOG, [$columns=Info, $ev=log_notice, $path="notice"]);
Log::create_stream(Notice::ALARM_LOG, [$columns=Notice::Info]); Log::create_stream(Notice::ALARM_LOG, [$columns=Notice::Info, $path="notice_alarm"]);
# If Bro is configured for mailing notices, set up mailing for alarms. # If Bro is configured for mailing notices, set up mailing for alarms.
# Make sure that this alarm log is also output as text so that it can # Make sure that this alarm log is also output as text so that it can
# be packaged up and emailed later. # be packaged up and emailed later.
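As a concrete illustration of the naming convention described above, a local policy script can extend ``Notice::Type`` like this (the module and notice names are made up for the example):

    module Heuristics;

    export {
        redef enum Notice::Type += {
            ## A host crossed a locally defined threshold of failed logins.
            Excessive_Login_Failures,
        };
    }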

View file

@ -294,7 +294,7 @@ global current_conn: connection;
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Weird::LOG, [$columns=Info, $ev=log_weird]); Log::create_stream(Weird::LOG, [$columns=Info, $ev=log_weird, $path="weird"]);
} }
function flow_id_string(src: addr, dst: addr): string function flow_id_string(src: addr, dst: addr): string

View file

@ -159,7 +159,7 @@ event filter_change_tracking()
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(PacketFilter::LOG, [$columns=Info]); Log::create_stream(PacketFilter::LOG, [$columns=Info, $path="packet_filter"]);
# Preverify the capture and restrict filters to give more granular failure messages. # Preverify the capture and restrict filters to give more granular failure messages.
for ( id in capture_filters ) for ( id in capture_filters )

View file

@ -45,7 +45,7 @@ export {
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Reporter::LOG, [$columns=Info]); Log::create_stream(Reporter::LOG, [$columns=Info, $path="reporter"]);
} }
event reporter_info(t: time, msg: string, location: string) &priority=-5 event reporter_info(t: time, msg: string, location: string) &priority=-5

View file

@ -142,7 +142,7 @@ global did_sig_log: set[string] &read_expire = 1 hr;
event bro_init() event bro_init()
{ {
Log::create_stream(Signatures::LOG, [$columns=Info, $ev=log_signature]); Log::create_stream(Signatures::LOG, [$columns=Info, $ev=log_signature, $path="signatures"]);
} }
# Returns true if the given signature has already been triggered for the given # Returns true if the given signature has already been triggered for the given
@ -277,7 +277,7 @@ event signature_match(state: signature_state, msg: string, data: string)
orig, sig_id, hcount); orig, sig_id, hcount);
Log::write(Signatures::LOG, Log::write(Signatures::LOG,
[$note=Multiple_Sig_Responders, [$ts=network_time(), $note=Multiple_Sig_Responders,
$src_addr=orig, $sig_id=sig_id, $event_msg=msg, $src_addr=orig, $sig_id=sig_id, $event_msg=msg,
$host_count=hcount, $sub_msg=horz_scan_msg]); $host_count=hcount, $sub_msg=horz_scan_msg]);

View file

@ -105,7 +105,7 @@ export {
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Software::LOG, [$columns=Info, $ev=log_software]); Log::create_stream(Software::LOG, [$columns=Info, $ev=log_software, $path="software"]);
} }
type Description: record { type Description: record {

View file

@ -89,7 +89,7 @@ redef likely_server_ports += { ayiya_ports, teredo_ports, gtpv1_ports };
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Tunnel::LOG, [$columns=Info]); Log::create_stream(Tunnel::LOG, [$columns=Info, $path="tunnel"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_AYIYA, ayiya_ports); Analyzer::register_for_ports(Analyzer::ANALYZER_AYIYA, ayiya_ports);
Analyzer::register_for_ports(Analyzer::ANALYZER_TEREDO, teredo_ports); Analyzer::register_for_ports(Analyzer::ANALYZER_TEREDO, teredo_ports);

View file

@ -333,8 +333,6 @@ type connection: record {
## to parse the same data. If so, all will be recorded. Also note that ## to parse the same data. If so, all will be recorded. Also note that
## the recorded services are independent of any transport-level protocols. ## the recorded services are independent of any transport-level protocols.
service: set[string]; service: set[string];
addl: string; ##< Deprecated.
hot: count; ##< Deprecated.
history: string; ##< State history of connections. See *history* in :bro:see:`Conn::Info`. history: string; ##< State history of connections. See *history* in :bro:see:`Conn::Info`.
## A globally unique connection identifier. For each connection, Bro ## A globally unique connection identifier. For each connection, Bro
## creates an ID that is very likely unique across independent Bro runs. ## creates an ID that is very likely unique across independent Bro runs.
@ -414,6 +412,14 @@ type fa_file: record {
bof_buffer: string &optional; bof_buffer: string &optional;
} &redef; } &redef;
## Metadata that's been inferred about a particular file.
type fa_metadata: record {
## The strongest matching mime type if one was discovered.
mime_type: string &optional;
## All matching mime types if any were discovered.
mime_types: mime_matches &optional;
};
## Fields of a SYN packet. ## Fields of a SYN packet.
## ##
## .. bro:see:: connection_SYN_packet ## .. bro:see:: connection_SYN_packet
@ -440,6 +446,7 @@ type NetStats: record {
## packet capture system, this value may not be available and will then ## packet capture system, this value may not be available and will then
## be always set to zero. ## be always set to zero.
pkts_link: count &default=0; pkts_link: count &default=0;
bytes_recvd: count &default=0; ##< Bytes received by Bro.
}; };
## Statistics about Bro's resource consumption. ## Statistics about Bro's resource consumption.
@ -928,7 +935,7 @@ const tcp_storm_interarrival_thresh = 1 sec &redef;
## seeing our peer's ACKs. Set to zero to turn off this determination. ## seeing our peer's ACKs. Set to zero to turn off this determination.
## ##
## .. bro:see:: tcp_max_above_hole_without_any_acks tcp_excessive_data_without_further_acks ## .. bro:see:: tcp_max_above_hole_without_any_acks tcp_excessive_data_without_further_acks
const tcp_max_initial_window = 4096 &redef; const tcp_max_initial_window = 16384 &redef;
## If we're not seeing our peer's ACKs, the maximum volume of data above a ## If we're not seeing our peer's ACKs, the maximum volume of data above a
## sequence hole that we'll tolerate before assuming that there's been a packet ## sequence hole that we'll tolerate before assuming that there's been a packet
@ -936,7 +943,7 @@ const tcp_max_initial_window = 4096 &redef;
## don't ever give up. ## don't ever give up.
## ##
## .. bro:see:: tcp_max_initial_window tcp_excessive_data_without_further_acks ## .. bro:see:: tcp_max_initial_window tcp_excessive_data_without_further_acks
const tcp_max_above_hole_without_any_acks = 4096 &redef; const tcp_max_above_hole_without_any_acks = 16384 &redef;
## If we've seen this much data without any of it being acked, we give up ## If we've seen this much data without any of it being acked, we give up
## on that connection to avoid memory exhaustion due to buffering all that ## on that connection to avoid memory exhaustion due to buffering all that
@ -1080,27 +1087,6 @@ const ENDIAN_LITTLE = 1; ##< Little endian.
const ENDIAN_BIG = 2; ##< Big endian. const ENDIAN_BIG = 2; ##< Big endian.
const ENDIAN_CONFUSED = 3; ##< Tried to determine endian, but failed. const ENDIAN_CONFUSED = 3; ##< Tried to determine endian, but failed.
## Deprecated.
function append_addl(c: connection, addl: string)
{
if ( c$addl == "" )
c$addl= addl;
else if ( addl !in c$addl )
c$addl = fmt("%s %s", c$addl, addl);
}
## Deprecated.
function append_addl_marker(c: connection, addl: string, marker: string)
{
if ( c$addl == "" )
c$addl= addl;
else if ( addl !in c$addl )
c$addl = fmt("%s%s%s", c$addl, marker, addl);
}
# Values for :bro:see:`set_contents_file` *direction* argument. # Values for :bro:see:`set_contents_file` *direction* argument.
# todo:: these should go into an enum to make them autodoc'able # todo:: these should go into an enum to make them autodoc'able
const CONTENTS_NONE = 0; ##< Turn off recording of contents. const CONTENTS_NONE = 0; ##< Turn off recording of contents.
@ -2215,6 +2201,41 @@ export {
const heartbeat_interval = 1.0 secs &redef; const heartbeat_interval = 1.0 secs &redef;
} }
module SSH;
export {
## The client and server each have some preferences for the algorithms used
## in each direction.
type Algorithm_Prefs: record {
## The algorithm preferences for client to server communication
client_to_server: vector of string &optional;
## The algorithm preferences for server to client communication
server_to_client: vector of string &optional;
};
## This record lists the preferences of an SSH endpoint for
## algorithm selection. During the initial :abbr:`SSH (Secure Shell)`
## key exchange, each endpoint lists the algorithms
## that it supports, in order of preference. See
## :rfc:`4253#section-7.1` for details.
type Capabilities: record {
## Key exchange algorithms
kex_algorithms: string_vec;
## The algorithms supported for the server host key
server_host_key_algorithms: string_vec;
## Symmetric encryption algorithm preferences
encryption_algorithms: Algorithm_Prefs;
## Symmetric MAC algorithm preferences
mac_algorithms: Algorithm_Prefs;
## Compression algorithm preferences
compression_algorithms: Algorithm_Prefs;
## Language preferences
languages: Algorithm_Prefs &optional;
## Are these the capabilities of the server?
is_server: bool;
};
}
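These records are delivered by the rewritten SSH analyzer; assuming the ``ssh_capabilities`` event keeps its ``(c, cookie, capabilities)`` signature, a sketch that flags a weak key-exchange offer looks like:

    event ssh_capabilities(c: connection, cookie: string, capabilities: SSH::Capabilities)
        {
        # Only look at the algorithms advertised by the server.
        if ( ! capabilities$is_server )
            return;

        for ( i in capabilities$kex_algorithms )
            {
            if ( capabilities$kex_algorithms[i] == "diffie-hellman-group1-sha1" )
                print fmt("%s offers a weak key exchange algorithm", c$id$resp_h);
            }
        }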
module GLOBAL; module GLOBAL;
## An NTP message. ## An NTP message.
@ -2511,6 +2532,145 @@ type irc_join_info: record {
## .. bro:see:: irc_join_message ## .. bro:see:: irc_join_message
type irc_join_list: set[irc_join_info]; type irc_join_list: set[irc_join_info];
module PE;
export {
type PE::DOSHeader: record {
## The magic number of a portable executable file ("MZ").
signature : string;
## The number of bytes in the last page that are used.
used_bytes_in_last_page : count;
## The number of pages in the file that are part of the PE file itself.
file_in_pages : count;
## Number of relocation entries stored after the header.
num_reloc_items : count;
## Number of paragraphs in the header.
header_in_paragraphs : count;
## Number of paragraphs of additional memory that the program will need.
min_extra_paragraphs : count;
## Maximum number of paragraphs of additional memory.
max_extra_paragraphs : count;
## Relative value of the stack segment.
init_relative_ss : count;
## Initial value of the SP register.
init_sp : count;
## Checksum. The 16-bit sum of all words in the file should be 0. Normally not set.
checksum : count;
## Initial value of the IP register.
init_ip : count;
## Initial value of the CS register (relative to the initial segment).
init_relative_cs : count;
## Offset of the first relocation table.
addr_of_reloc_table : count;
## Overlays allow you to append data to the end of the file. If this is the main program,
## this will be 0.
overlay_num : count;
## OEM identifier.
oem_id : count;
## Additional OEM info, specific to oem_id.
oem_info : count;
## Address of the new EXE header.
addr_of_new_exe_header : count;
};
type PE::FileHeader: record {
## The target machine that the file was compiled for.
machine : count;
## The time that the file was created at.
ts : time;
## Pointer to the symbol table.
sym_table_ptr : count;
## Number of symbols.
num_syms : count;
## The size of the optional header.
optional_header_size : count;
## Bit flags that determine if this file is executable, non-relocatable, and/or a DLL.
characteristics : set[count];
};
type PE::OptionalHeader: record {
## PE32 or PE32+ indicator.
magic : count;
## The major version of the linker used to create the PE.
major_linker_version : count;
## The minor version of the linker used to create the PE.
minor_linker_version : count;
## Size of the .text section.
size_of_code : count;
## Size of the .data section.
size_of_init_data : count;
## Size of the .bss section.
size_of_uninit_data : count;
## The relative virtual address (RVA) of the entry point.
addr_of_entry_point : count;
## The relative virtual address (RVA) of the .text section.
base_of_code : count;
## The relative virtual address (RVA) of the .data section.
base_of_data : count &optional;
## Preferred memory location for the image to be based at.
image_base : count;
## The alignment (in bytes) of sections when they're loaded in memory.
section_alignment : count;
## The alignment (in bytes) of the raw data of sections.
file_alignment : count;
## The major version of the required OS.
os_version_major : count;
## The minor version of the required OS.
os_version_minor : count;
## The major version of this image.
major_image_version : count;
## The minor version of this image.
minor_image_version : count;
## The major version of the subsystem required to run this file.
major_subsys_version : count;
## The minor version of the subsystem required to run this file.
minor_subsys_version : count;
## The size (in bytes) of the image as the image is loaded in memory.
size_of_image : count;
## The size (in bytes) of the headers, rounded up to file_alignment.
size_of_headers : count;
## The image file checksum.
checksum : count;
## The subsystem that's required to run this image.
subsystem : count;
## Bit flags that determine how to execute or load this file.
dll_characteristics : set[count];
## A vector with the sizes of various tables and strings that are
## defined in the optional header data directories. Examples include
## the import table, the resource table, and debug information.
table_sizes : vector of count;
};
## Record for Portable Executable (PE) section headers.
type PE::SectionHeader: record {
## The name of the section
name : string;
## The total size of the section when loaded into memory.
virtual_size : count;
## The relative virtual address (RVA) of the section.
virtual_addr : count;
## The size of the initialized data for the section, as it is
## in the file on disk.
size_of_raw_data : count;
## The virtual address of the initialized data for the section,
## as it is in the file on disk.
ptr_to_raw_data : count;
## The file pointer to the beginning of relocation entries for
## the section.
ptr_to_relocs : count;
## The file pointer to the beginning of line-number entries for
## the section.
ptr_to_line_nums : count;
## The number of relocation entries for the section.
num_of_relocs : count;
## The number of line-number entries for the section.
num_of_line_nums : count;
## Bit-flags that describe the characteristics of the section.
characteristics : set[count];
};
}
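The PE records above are handed to script-land by the new PE file analyzer; assuming the ``pe_optional_header`` event carries the ``PE::OptionalHeader`` record, a small sketch:

    event pe_optional_header(f: fa_file, h: PE::OptionalHeader)
        {
        # 0x10b marks a PE32 image, 0x20b a PE32+ (64-bit) image.
        if ( h$magic == 0x20b )
            print fmt("64-bit executable in file %s, entry point RVA %d",
                      f$id, h$addr_of_entry_point);
        }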
module GLOBAL;
## Deprecated. ## Deprecated.
## ##
## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere ## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere
@ -2635,60 +2795,6 @@ global generate_OS_version_event: set[subnet] &redef;
# number>``), which were seen during the sample. # number>``), which were seen during the sample.
type load_sample_info: set[string]; type load_sample_info: set[string];
## ID for NetFlow header. This is primarily a means to sort together NetFlow
## headers and flow records at the script level.
type nfheader_id: record {
## Name of the NetFlow file (e.g., ``netflow.dat``) or the receiving
## socket address (e.g., ``127.0.0.1:5555``), or an explicit name if
## specified to ``-y`` or ``-Y``.
rcvr_id: string;
## A serial number, ignoring any overflows.
pdu_id: count;
};
## A NetFlow v5 header.
##
## .. bro:see:: netflow_v5_header
type nf_v5_header: record {
h_id: nfheader_id; ##< ID for sorting.
cnt: count; ##< TODO.
sysuptime: interval; ##< Router's uptime.
exporttime: time; ##< When the data was exported.
flow_seq: count; ##< Sequence number.
eng_type: count; ##< Engine type.
eng_id: count; ##< Engine ID.
sample_int: count; ##< Sampling interval.
exporter: addr; ##< Exporter address.
};
## A NetFlow v5 record.
##
## .. bro:see:: netflow_v5_record
type nf_v5_record: record {
h_id: nfheader_id; ##< ID for sorting.
id: conn_id; ##< Connection ID.
nexthop: addr; ##< Address of next hop.
input: count; ##< Input interface.
output: count; ##< Output interface.
pkts: count; ##< Number of packets.
octets: count; ##< Number of bytes.
first: time; ##< Timestamp of first packet.
last: time; ##< Timestamp of last packet.
tcpflag_fin: bool; ##< FIN flag for TCP flows.
tcpflag_syn: bool; ##< SYN flag for TCP flows.
tcpflag_rst: bool; ##< RST flag for TCP flows.
tcpflag_psh: bool; ##< PSH flag for TCP flows.
tcpflag_ack: bool; ##< ACK flag for TCP flows.
tcpflag_urg: bool; ##< URG flag for TCP flows.
proto: count; ##< IP protocol.
tos: count; ##< Type of service.
src_as: count; ##< Source AS.
dst_as: count; ##< Destination AS.
src_mask: count; ##< Source mask.
dst_mask: count; ##< Destination mask.
};
## A BitTorrent peer. ## A BitTorrent peer.
## ##
## .. bro:see:: bittorrent_peer_set ## .. bro:see:: bittorrent_peer_set
@ -2774,19 +2880,20 @@ export {
module X509; module X509;
export { export {
type Certificate: record { type Certificate: record {
version: count; ##< Version number. version: count &log; ##< Version number.
serial: string; ##< Serial number. serial: string &log; ##< Serial number.
subject: string; ##< Subject. subject: string &log; ##< Subject.
issuer: string; ##< Issuer. issuer: string &log; ##< Issuer.
not_valid_before: time; ##< Timestamp before when certificate is not valid. cn: string &optional; ##< Last (most specific) common name.
not_valid_after: time; ##< Timestamp after when certificate is not valid. not_valid_before: time &log; ##< Timestamp before when certificate is not valid.
key_alg: string; ##< Name of the key algorithm not_valid_after: time &log; ##< Timestamp after when certificate is not valid.
sig_alg: string; ##< Name of the signature algorithm key_alg: string &log; ##< Name of the key algorithm
key_type: string &optional; ##< Key type, if key parseable by openssl (either rsa, dsa or ec) sig_alg: string &log; ##< Name of the signature algorithm
key_length: count &optional; ##< Key length in bits key_type: string &optional &log; ##< Key type, if key parseable by openssl (either rsa, dsa or ec)
exponent: string &optional; ##< Exponent, if RSA-certificate key_length: count &optional &log; ##< Key length in bits
curve: string &optional; ##< Curve, if EC-certificate exponent: string &optional &log; ##< Exponent, if RSA-certificate
} &log; curve: string &optional &log; ##< Curve, if EC-certificate
};
type Extension: record { type Extension: record {
name: string; ##< Long name of extension. oid if name not known name: string; ##< Long name of extension. oid if name not known
@ -2847,7 +2954,44 @@ export {
attributes : RADIUS::Attributes &optional; attributes : RADIUS::Attributes &optional;
}; };
} }
module GLOBAL;
module RDP;
export {
type RDP::EarlyCapabilityFlags: record {
support_err_info_pdu: bool;
want_32bpp_session: bool;
support_statusinfo_pdu: bool;
strong_asymmetric_keys: bool;
support_monitor_layout_pdu: bool;
support_netchar_autodetect: bool;
support_dynvc_gfx_protocol: bool;
support_dynamic_time_zone: bool;
support_heartbeat_pdu: bool;
};
type RDP::ClientCoreData: record {
version_major: count;
version_minor: count;
desktop_width: count;
desktop_height: count;
color_depth: count;
sas_sequence: count;
keyboard_layout: count;
client_build: count;
client_name: string;
keyboard_type: count;
keyboard_sub: count;
keyboard_function_key: count;
ime_file_name: string;
post_beta2_color_depth: count &optional;
client_product_id: string &optional;
serial_number: count &optional;
high_color_depth: count &optional;
supported_color_depths: count &optional;
ec_flags: RDP::EarlyCapabilityFlags &optional;
dig_product_id: string &optional;
};
}
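For the RDP records, a brief sketch assuming the ``rdp_client_core_data`` event delivers ``RDP::ClientCoreData``:

    event rdp_client_core_data(c: connection, data: RDP::ClientCoreData)
        {
        print fmt("RDP client %s (build %d) requested a %dx%d session",
                  data$client_name, data$client_build,
                  data$desktop_width, data$desktop_height);
        }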
@load base/bif/plugins/Bro_SNMP.types.bif @load base/bif/plugins/Bro_SNMP.types.bif
@ -2971,6 +3115,186 @@ export {
}; };
} }
@load base/bif/plugins/Bro_KRB.types.bif
module KRB;
export {
## KDC Options. See :rfc:`4120`
type KRB::KDC_Options: record {
## The ticket to be issued should have its forwardable flag set.
forwardable : bool;
## A (TGT) request for forwarding.
forwarded : bool;
## The ticket to be issued should have its proxiable flag set.
proxiable : bool;
## A request for a proxy.
proxy : bool;
## The ticket to be issued should have its may-postdate flag set.
allow_postdate : bool;
## A request for a postdated ticket.
postdated : bool;
## The ticket to be issued should have its renewable flag set.
renewable : bool;
## Reserved for opt_hardware_auth
opt_hardware_auth : bool;
## Request that the KDC not check the transited field of a TGT against
## the policy of the local realm before it will issue derivative tickets
## based on the TGT.
disable_transited_check : bool;
## If a ticket with the requested lifetime cannot be issued, a renewable
## ticket is acceptable
renewable_ok : bool;
## The ticket for the end server is to be encrypted in the session key
## from the additional TGT provided
enc_tkt_in_skey : bool;
## The request is for a renewal
renew : bool;
## The request is to validate a postdated ticket.
validate : bool;
};
## AP Options. See :rfc:`4120`
type KRB::AP_Options: record {
## Indicates that user-to-user authentication is in use
use_session_key : bool;
## Mutual authentication is required
mutual_required : bool;
};
## Used in a few places in the Kerberos analyzer for elements
## that have a type and a string value.
type KRB::Type_Value: record {
## The data type
data_type : count;
## The data value
val : string;
};
type KRB::Type_Value_Vector: vector of KRB::Type_Value;
## A Kerberos host address. See :rfc:`4120`.
type KRB::Host_Address: record {
## IPv4 or IPv6 address
ip : addr &log &optional;
## NetBIOS address
netbios : string &log &optional;
## Some other type that we don't support yet
unknown : KRB::Type_Value &optional;
};
type KRB::Host_Address_Vector: vector of KRB::Host_Address;
## The data from the SAFE message. See :rfc:`4120`.
type KRB::SAFE_Msg: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (20 for SAFE_MSG)
msg_type : count;
## The application-specific data that is being passed
## from the sender to the receiver
data : string;
## Current time from the sender of the message
timestamp : time &optional;
## Sequence number used to detect replays
seq : count &optional;
## Sender address
sender : Host_Address &optional;
## Recipient address
recipient : Host_Address &optional;
};
## The data from the ERROR_MSG message. See :rfc:`4120`.
type KRB::Error_Msg: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (30 for ERROR_MSG)
msg_type : count;
## Current time on the client
client_time : time &optional;
## Current time on the server
server_time : time;
## The specific error code
error_code : count;
## Realm of the ticket
client_realm : string &optional;
## Name on the ticket
client_name : string &optional;
## Realm of the service
service_realm : string;
## Name of the service
service_name : string;
## Additional text to explain the error
error_text : string &optional;
## Optional pre-authentication data
pa_data : vector of KRB::Type_Value &optional;
};
## A Kerberos ticket. See :rfc:`4120`.
type KRB::Ticket: record {
## Protocol version number (5 for KRB5)
pvno : count;
## Realm
realm : string;
## Name of the service
service_name : string;
## Cipher the ticket was encrypted with
cipher : count;
};
type KRB::Ticket_Vector: vector of KRB::Ticket;
## The data from the AS_REQ and TGS_REQ messages. See :rfc:`4120`.
type KRB::KDC_Request: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (10 for AS_REQ, 12 for TGS_REQ)
msg_type : count;
## Optional pre-authentication data
pa_data : vector of KRB::Type_Value &optional;
## Options specified in the request
kdc_options : KRB::KDC_Options;
## Name on the ticket
client_name : string &optional;
## Realm of the service
service_realm : string;
## Name of the service
service_name : string &optional;
## Time the ticket is good from
from : time &optional;
## Time the ticket is good till
till : time;
## The requested renew-till time
rtime : time &optional;
## A random nonce generated by the client
nonce : count;
## The desired encryption algorithms, in order of preference
encryption_types : vector of count;
## Any additional addresses the ticket should be valid for
host_addrs : vector of KRB::Host_Address &optional;
## Additional tickets may be included for certain transactions
additional_tickets : vector of KRB::Ticket &optional;
};
## The data from the AS_REP and TGS_REP messages. See :rfc:`4120`.
type KRB::KDC_Response: record {
## Protocol version number (5 for KRB5)
pvno : count;
## The message type (11 for AS_REP, 13 for TGS_REP)
msg_type : count;
## Optional pre-authentication data
pa_data : vector of KRB::Type_Value &optional;
## Realm on the ticket
client_realm : string &optional;
## Name on the ticket
client_name : string;
## The ticket that was issued
ticket : KRB::Ticket;
};
}
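A short sketch of consuming these Kerberos records, assuming the ``krb_as_request`` event passes a ``KRB::KDC_Request``:

    event krb_as_request(c: connection, msg: KRB::KDC_Request)
        {
        # client_name and service_name are optional, so guard the access.
        if ( msg?$client_name && msg?$service_name )
            print fmt("AS-REQ from %s for %s/%s",
                      msg$client_name, msg$service_name, msg$service_realm);
        }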
module GLOBAL; module GLOBAL;
@load base/bif/event.bif @load base/bif/event.bif
@ -3133,6 +3457,11 @@ const forward_remote_events = F &redef;
## more sophisticated script-level communication framework. ## more sophisticated script-level communication framework.
const forward_remote_state_changes = F &redef; const forward_remote_state_changes = F &redef;
## The number of IO chunks allowed to be buffered between the child
## and parent process of remote communication before Bro starts dropping
## connections to remote peers in an attempt to catch up.
const chunked_io_buffer_soft_cap = 800000 &redef;
## Place-holder constant indicating "no peer". ## Place-holder constant indicating "no peer".
const PEER_ID_NONE = 0; const PEER_ID_NONE = 0;
@ -3358,6 +3687,7 @@ const bits_per_uid: count = 96 &redef;
# Load these frameworks here because they use fairly deep integration with # Load these frameworks here because they use fairly deep integration with
# BiFs and script-land defined types. # BiFs and script-land defined types.
@load base/frameworks/broker
@load base/frameworks/logging @load base/frameworks/logging
@load base/frameworks/input @load base/frameworks/input
@load base/frameworks/analyzer @load base/frameworks/analyzer

View file

@ -45,10 +45,13 @@
@load base/protocols/ftp @load base/protocols/ftp
@load base/protocols/http @load base/protocols/http
@load base/protocols/irc @load base/protocols/irc
@load base/protocols/krb
@load base/protocols/modbus @load base/protocols/modbus
@load base/protocols/mysql @load base/protocols/mysql
@load base/protocols/pop3 @load base/protocols/pop3
@load base/protocols/radius @load base/protocols/radius
@load base/protocols/rdp
@load base/protocols/sip
@load base/protocols/snmp @load base/protocols/snmp
@load base/protocols/smtp @load base/protocols/smtp
@load base/protocols/socks @load base/protocols/socks
@ -57,6 +60,7 @@
@load base/protocols/syslog @load base/protocols/syslog
@load base/protocols/tunnels @load base/protocols/tunnels
@load base/files/pe
@load base/files/hash @load base/files/hash
@load base/files/extract @load base/files/extract
@load base/files/unified2 @load base/files/unified2

View file

@ -50,7 +50,7 @@ event ChecksumOffloading::check()
bad_checksum_msg += "UDP"; bad_checksum_msg += "UDP";
} }
local message = fmt("Your %s invalid %s checksums, most likely from NIC checksum offloading.", packet_src, bad_checksum_msg); local message = fmt("Your %s invalid %s checksums, most likely from NIC checksum offloading. By default, packets with invalid checksums are discarded by Bro unless using the -C command-line option or toggling the 'ignore_checksums' variable. Alternatively, disable checksum offloading by the network adapter to ensure Bro analyzes the actual checksums that are transmitted.", packet_src, bad_checksum_msg);
Reporter::warning(message); Reporter::warning(message);
done = T; done = T;
} }
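The expanded warning points at the script-level alternative to the -C switch; accepting packets with bad checksums amounts to:

    # Equivalent to running bro with -C: analyze packets even when their
    # checksums do not verify (e.g. due to NIC checksum offloading).
    redef ignore_checksums = T;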

View file

@ -2,3 +2,4 @@
@load ./contents @load ./contents
@load ./inactivity @load ./inactivity
@load ./polling @load ./polling
@load ./thresholds

View file

@ -127,7 +127,7 @@ redef record connection += {
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(Conn::LOG, [$columns=Info, $ev=log_conn]); Log::create_stream(Conn::LOG, [$columns=Info, $ev=log_conn, $path="conn"]);
} }
function conn_state(c: connection, trans: transport_proto): string function conn_state(c: connection, trans: transport_proto): string

View file

@ -0,0 +1,256 @@
##! Implements a generic API to throw events when a connection crosses a
##! fixed threshold of bytes or packets.
module ConnThreshold;
export {
type Thresholds: record {
orig_byte: set[count] &default=count_set(); ##< current originator byte thresholds we watch for
resp_byte: set[count] &default=count_set(); ##< current responder byte thresholds we watch for
orig_packet: set[count] &default=count_set(); ##< current originator packet thresholds we watch for
resp_packet: set[count] &default=count_set(); ##< current responder packet thresholds we watch for
};
## Sets a byte threshold for connection sizes, adding it to potentially already existing thresholds.
## conn_bytes_threshold_crossed will be raised for each set threshold.
##
## c: The connection.
##
## threshold: Threshold in bytes.
##
## is_orig: If true, threshold is set for bytes from originator, otherwise for bytes from responder.
##
## Returns: T on success, F on failure.
global set_bytes_threshold: function(c: connection, threshold: count, is_orig: bool): bool;
## Sets a packet threshold for connection sizes, adding it to potentially already existing thresholds.
## conn_packets_threshold_crossed will be raised for each set threshold.
##
## c: The connection.
##
## threshold: Threshold in packets.
##
## is_orig: If true, threshold is set for packets from originator, otherwise for packets from responder.
##
## Returns: T on success, F on failure.
global set_packets_threshold: function(c: connection, threshold: count, is_orig: bool): bool;
## Deletes a byte threshold for connection sizes.
##
## c: The connection.
##
## threshold: Threshold in bytes to remove.
##
## is_orig: If true, threshold is removed for bytes from originator, otherwise for bytes from responder.
##
## Returns: T on success, F on failure.
global delete_bytes_threshold: function(c: connection, threshold: count, is_orig: bool): bool;
## Deletes a packet threshold for connection sizes.
##
## cid: The connection id.
##
## threshold: Threshold in packets.
##
## is_orig: If true, threshold is removed for packets from originator, otherwise for packets from responder.
##
## Returns: T on success, F on failure.
global delete_packets_threshold: function(c: connection, threshold: count, is_orig: bool): bool;
## Generated for a connection that crossed a set byte threshold
##
## c: the connection
##
## threshold: the threshold that was set
##
## is_orig: True if the threshold was crossed by the originator of the connection
global bytes_threshold_crossed: event(c: connection, threshold: count, is_orig: bool);
## Generated for a connection that crossed a set packet threshold
##
## c: the connection
##
## threshold: the threshold that was set
##
## is_orig: True if the threshold was crossed by the originator of the connection
global packets_threshold_crossed: event(c: connection, threshold: count, is_orig: bool);
}
redef record connection += {
thresholds: ConnThreshold::Thresholds &optional;
};
function set_conn(c: connection)
{
if ( c?$thresholds )
return;
c$thresholds = Thresholds();
}
function find_min_threshold(t: set[count]): count
{
if ( |t| == 0 )
return 0;
local first = T;
local min: count = 0;
for ( i in t )
{
if ( first )
{
min = i;
first = F;
}
else
{
if ( i < min )
min = i;
}
}
return min;
}
function set_current_threshold(c: connection, bytes: bool, is_orig: bool): bool
{
local t: count = 0;
local cur: count = 0;
if ( bytes && is_orig )
{
t = find_min_threshold(c$thresholds$orig_byte);
cur = get_current_conn_bytes_threshold(c$id, is_orig);
}
else if ( bytes && ! is_orig )
{
t = find_min_threshold(c$thresholds$resp_byte);
cur = get_current_conn_bytes_threshold(c$id, is_orig);
}
else if ( ! bytes && is_orig )
{
t = find_min_threshold(c$thresholds$orig_packet);
cur = get_current_conn_packets_threshold(c$id, is_orig);
}
else if ( ! bytes && ! is_orig )
{
t = find_min_threshold(c$thresholds$resp_packet);
cur = get_current_conn_packets_threshold(c$id, is_orig);
}
if ( t == cur )
return T;
if ( bytes && is_orig )
return set_current_conn_bytes_threshold(c$id, t, T);
else if ( bytes && ! is_orig )
return set_current_conn_bytes_threshold(c$id, t, F);
else if ( ! bytes && is_orig )
return set_current_conn_packets_threshold(c$id, t, T);
else if ( ! bytes && ! is_orig )
return set_current_conn_packets_threshold(c$id, t, F);
}
function set_bytes_threshold(c: connection, threshold: count, is_orig: bool): bool
{
set_conn(c);
if ( threshold == 0 )
return F;
if ( is_orig )
add c$thresholds$orig_byte[threshold];
else
add c$thresholds$resp_byte[threshold];
return set_current_threshold(c, T, is_orig);
}
function set_packets_threshold(c: connection, threshold: count, is_orig: bool): bool
{
set_conn(c);
if ( threshold == 0 )
return F;
if ( is_orig )
add c$thresholds$orig_packet[threshold];
else
add c$thresholds$resp_packet[threshold];
return set_current_threshold(c, F, is_orig);
}
function delete_bytes_threshold(c: connection, threshold: count, is_orig: bool): bool
{
set_conn(c);
if ( is_orig && threshold in c$thresholds$orig_byte )
{
delete c$thresholds$orig_byte[threshold];
set_current_threshold(c, T, is_orig);
return T;
}
else if ( ! is_orig && threshold in c$thresholds$resp_byte )
{
delete c$thresholds$resp_byte[threshold];
set_current_threshold(c, T, is_orig);
return T;
}
return F;
}
function delete_packets_threshold(c: connection, threshold: count, is_orig: bool): bool
{
set_conn(c);
if ( is_orig && threshold in c$thresholds$orig_packet )
{
delete c$thresholds$orig_packet[threshold];
set_current_threshold(c, F, is_orig);
return T;
}
else if ( ! is_orig && threshold in c$thresholds$resp_packet )
{
delete c$thresholds$resp_packet[threshold];
set_current_threshold(c, F, is_orig);
return T;
}
return F;
}
event conn_bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool) &priority=5
{
if ( is_orig && threshold in c$thresholds$orig_byte )
{
delete c$thresholds$orig_byte[threshold];
event ConnThreshold::bytes_threshold_crossed(c, threshold, is_orig);
}
else if ( ! is_orig && threshold in c$thresholds$resp_byte )
{
delete c$thresholds$resp_byte[threshold];
event ConnThreshold::bytes_threshold_crossed(c, threshold, is_orig);
}
set_current_threshold(c, T, is_orig);
}
event conn_packets_threshold_crossed(c: connection, threshold: count, is_orig: bool) &priority=5
{
if ( is_orig && threshold in c$thresholds$orig_packet )
{
delete c$thresholds$orig_packet[threshold];
event ConnThreshold::packets_threshold_crossed(c, threshold, is_orig);
}
else if ( ! is_orig && threshold in c$thresholds$resp_packet )
{
delete c$thresholds$resp_packet[threshold];
event ConnThreshold::packets_threshold_crossed(c, threshold, is_orig);
}
set_current_threshold(c, F, is_orig);
}
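A usage sketch for the ConnThreshold API introduced above: register a byte threshold when a connection is established and react when it is crossed (the 1 MB value is arbitrary):

    event connection_established(c: connection)
        {
        # Ask for an event once the originator has sent 1 MB.
        ConnThreshold::set_bytes_threshold(c, 1048576, T);
        }

    event ConnThreshold::bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool)
        {
        print fmt("%s crossed %d bytes (is_orig=%s)", c$uid, threshold, is_orig);
        }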

View file

@ -49,7 +49,7 @@ redef likely_server_ports += { 67/udp };
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(DHCP::LOG, [$columns=Info, $ev=log_dhcp]); Log::create_stream(DHCP::LOG, [$columns=Info, $ev=log_dhcp, $path="dhcp"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DHCP, ports); Analyzer::register_for_ports(Analyzer::ANALYZER_DHCP, ports);
} }

View file

@ -36,7 +36,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(DNP3::LOG, [$columns=Info, $ev=log_dnp3]); Log::create_stream(DNP3::LOG, [$columns=Info, $ev=log_dnp3, $path="dnp3"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DNP3_TCP, ports); Analyzer::register_for_ports(Analyzer::ANALYZER_DNP3_TCP, ports);
} }

View file

@ -150,7 +150,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(DNS::LOG, [$columns=Info, $ev=log_dns]); Log::create_stream(DNS::LOG, [$columns=Info, $ev=log_dns, $path="dns"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DNS, ports); Analyzer::register_for_ports(Analyzer::ANALYZER_DNS, ports);
} }
@ -305,6 +305,9 @@ hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
if ( ans$answer_type == DNS_ANS ) if ( ans$answer_type == DNS_ANS )
{ {
if ( ! c$dns?$query )
c$dns$query = ans$query;
c$dns$AA = msg$AA; c$dns$AA = msg$AA;
c$dns$RA = msg$RA; c$dns$RA = msg$RA;

View file

@ -63,10 +63,13 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
f$ftp = ftp; f$ftp = ftp;
} }
event file_mime_type(f: fa_file, mime_type: string) &priority=5 event file_sniff(f: fa_file, meta: fa_metadata) &priority=5
{ {
if ( ! f?$ftp ) if ( ! f?$ftp )
return; return;
f$ftp$mime_type = mime_type; if ( ! meta?$mime_type )
return;
f$ftp$mime_type = meta$mime_type;
} }

View file

@ -11,13 +11,13 @@
##! GridFTP data channels are identified by a heuristic that relies on ##! GridFTP data channels are identified by a heuristic that relies on
##! the fact that default settings for GridFTP clients typically ##! the fact that default settings for GridFTP clients typically
##! mutually authenticate the data channel with TLS/SSL and negotiate a ##! mutually authenticate the data channel with TLS/SSL and negotiate a
##! NULL bulk cipher (no encryption). Connections with those ##! NULL bulk cipher (no encryption). Connections with those attributes
##! attributes are then polled for two minutes with decreasing frequency ##! are marked as GridFTP if the data transfer within the first two minutes
##! to check if the transfer sizes are large enough to indicate a ##! is big enough to indicate a GridFTP data channel that would be
##! GridFTP data channel that would be undesirable to analyze further ##! undesirable to analyze further (e.g. stop TCP reassembly). A side
##! (e.g. stop TCP reassembly). A side effect is that true connection ##! effect is that true connection sizes are not logged, but at the benefit
##! sizes are not logged, but at the benefit of saving CPU cycles that ##! of saving CPU cycles that would otherwise go to analyzing the large
##! would otherwise go to analyzing the large (and likely benign) connections. ##! (and likely benign) connections.
@load ./info @load ./info
@load ./main @load ./main
@ -32,23 +32,14 @@ export {
## GridFTP data channel. ## GridFTP data channel.
const size_threshold = 1073741824 &redef; const size_threshold = 1073741824 &redef;
## Max number of times to check whether a connection's size exceeds the ## Time during which we check whether a connection's size exceeds the
## :bro:see:`GridFTP::size_threshold`. ## :bro:see:`GridFTP::size_threshold`.
const max_poll_count = 15 &redef; const max_time = 2 min &redef;
## Whether to skip further processing of the GridFTP data channel once ## Whether to skip further processing of the GridFTP data channel once
## detected, which may help performance. ## detected, which may help performance.
const skip_data = T &redef; const skip_data = T &redef;
## Base amount of time between checking whether a GridFTP data connection
## has transferred more than :bro:see:`GridFTP::size_threshold` bytes.
const poll_interval = 1sec &redef;
## The amount of time the base :bro:see:`GridFTP::poll_interval` is
## increased by each poll interval. Can be used to make more frequent
## checks at the start of a connection and gradually slow down.
const poll_interval_increase = 1sec &redef;
## Raised when a GridFTP data channel is detected. ## Raised when a GridFTP data channel is detected.
## ##
## c: The connection pertaining to the GridFTP data channel. ## c: The connection pertaining to the GridFTP data channel.
@ -79,23 +70,27 @@ event ftp_request(c: connection, command: string, arg: string) &priority=4
c$ftp$last_auth_requested = arg; c$ftp$last_auth_requested = arg;
} }
function size_callback(c: connection, cnt: count): interval event ConnThreshold::bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool)
{ {
if ( c$orig$size > size_threshold || c$resp$size > size_threshold ) if ( threshold < size_threshold || "gridftp-data" in c$service || c$duration > max_time )
return;
add c$service["gridftp-data"];
event GridFTP::data_channel_detected(c);
if ( skip_data )
skip_further_processing(c$id);
}
event gridftp_possibility_timeout(c: connection)
{
# only remove if we did not already detect it and the connection
# is not yet at its end.
if ( "gridftp-data" !in c$service && ! c$conn?$service )
{ {
add c$service["gridftp-data"]; ConnThreshold::delete_bytes_threshold(c, size_threshold, T);
event GridFTP::data_channel_detected(c); ConnThreshold::delete_bytes_threshold(c, size_threshold, F);
if ( skip_data )
skip_further_processing(c$id);
return -1sec;
} }
if ( cnt >= max_poll_count )
return -1sec;
return poll_interval + poll_interval_increase * cnt;
} }
event ssl_established(c: connection) &priority=5 event ssl_established(c: connection) &priority=5
@ -118,5 +113,9 @@ event ssl_established(c: connection) &priority=-3
# By default GridFTP data channels do mutual authentication and # By default GridFTP data channels do mutual authentication and
# negotiate a cipher suite with a NULL bulk cipher. # negotiate a cipher suite with a NULL bulk cipher.
if ( data_channel_initial_criteria(c) ) if ( data_channel_initial_criteria(c) )
ConnPolling::watch(c, size_callback, 0, 0secs); {
ConnThreshold::set_bytes_threshold(c, size_threshold, T);
ConnThreshold::set_bytes_threshold(c, size_threshold, F);
schedule max_time { gridftp_possibility_timeout(c) };
}
} }
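With detection now driven by ConnThreshold and a fixed two-minute window, tuning reduces to redefining the exported constants; a sketch (the 512 MB figure is arbitrary):

    # Treat 512 MB within the first two minutes as a GridFTP data channel,
    # but keep analyzing the data instead of skipping it.
    redef GridFTP::size_threshold = 536870912;
    redef GridFTP::skip_data = F;

    event GridFTP::data_channel_detected(c: connection)
        {
        print fmt("GridFTP data channel detected: %s", c$uid);
        }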

View file

@ -52,7 +52,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(FTP::LOG, [$columns=Info, $ev=log_ftp]); Log::create_stream(FTP::LOG, [$columns=Info, $ev=log_ftp, $path="ftp"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_FTP, ports); Analyzer::register_for_ports(Analyzer::ANALYZER_FTP, ports);
} }

View file

@ -43,7 +43,7 @@ export {
event http_begin_entity(c: connection, is_orig: bool) &priority=10 event http_begin_entity(c: connection, is_orig: bool) &priority=10
{ {
set_state(c, F, is_orig); set_state(c, is_orig);
if ( is_orig ) if ( is_orig )
++c$http$orig_mime_depth; ++c$http$orig_mime_depth;
@ -93,24 +93,27 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
} }
} }
event file_mime_type(f: fa_file, mime_type: string) &priority=5 event file_sniff(f: fa_file, meta: fa_metadata) &priority=5
{ {
if ( ! f?$http || ! f?$is_orig ) if ( ! f?$http || ! f?$is_orig )
return; return;
if ( ! meta?$mime_type )
return;
if ( f$is_orig ) if ( f$is_orig )
{ {
if ( ! f$http?$orig_mime_types ) if ( ! f$http?$orig_mime_types )
f$http$orig_mime_types = string_vec(mime_type); f$http$orig_mime_types = string_vec(meta$mime_type);
else else
f$http$orig_mime_types[|f$http$orig_mime_types|] = mime_type; f$http$orig_mime_types[|f$http$orig_mime_types|] = meta$mime_type;
} }
else else
{ {
if ( ! f$http?$resp_mime_types ) if ( ! f$http?$resp_mime_types )
f$http$resp_mime_types = string_vec(mime_type); f$http$resp_mime_types = string_vec(meta$mime_type);
else else
f$http$resp_mime_types[|f$http$resp_mime_types|] = mime_type; f$http$resp_mime_types[|f$http$resp_mime_types|] = meta$mime_type;
} }
} }

View file

@ -89,6 +89,10 @@ export {
current_request: count &default=0; current_request: count &default=0;
## Current response in the pending queue. ## Current response in the pending queue.
current_response: count &default=0; current_response: count &default=0;
## Track the current deepest transaction.
## This is meant to cope with missing requests
## and responses.
trans_depth: count &default=0;
}; };
## A list of HTTP headers typically used to indicate proxied requests. ## A list of HTTP headers typically used to indicate proxied requests.
@ -135,7 +139,7 @@ redef likely_server_ports += { ports };
# Initialize the HTTP logging stream and ports. # Initialize the HTTP logging stream and ports.
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(HTTP::LOG, [$columns=Info, $ev=log_http]); Log::create_stream(HTTP::LOG, [$columns=Info, $ev=log_http, $path="http"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_HTTP, ports); Analyzer::register_for_ports(Analyzer::ANALYZER_HTTP, ports);
} }
@ -150,13 +154,11 @@ function new_http_session(c: connection): Info
tmp$ts=network_time(); tmp$ts=network_time();
tmp$uid=c$uid; tmp$uid=c$uid;
tmp$id=c$id; tmp$id=c$id;
# $current_request is set prior to the Info record creation so we tmp$trans_depth = ++c$http_state$trans_depth;
# can use the value directly here.
tmp$trans_depth = c$http_state$current_request;
return tmp; return tmp;
} }
function set_state(c: connection, request: bool, is_orig: bool) function set_state(c: connection, is_orig: bool)
{ {
if ( ! c?$http_state ) if ( ! c?$http_state )
{ {
@ -165,15 +167,20 @@ function set_state(c: connection, request: bool, is_orig: bool)
} }
# These deal with new requests and responses. # These deal with new requests and responses.
if ( request || c$http_state$current_request !in c$http_state$pending )
c$http_state$pending[c$http_state$current_request] = new_http_session(c);
if ( ! is_orig && c$http_state$current_response !in c$http_state$pending )
c$http_state$pending[c$http_state$current_response] = new_http_session(c);
if ( is_orig ) if ( is_orig )
{
if ( c$http_state$current_request !in c$http_state$pending )
c$http_state$pending[c$http_state$current_request] = new_http_session(c);
c$http = c$http_state$pending[c$http_state$current_request]; c$http = c$http_state$pending[c$http_state$current_request];
}
else else
{
if ( c$http_state$current_response !in c$http_state$pending )
c$http_state$pending[c$http_state$current_response] = new_http_session(c);
c$http = c$http_state$pending[c$http_state$current_response]; c$http = c$http_state$pending[c$http_state$current_response];
}
} }
event http_request(c: connection, method: string, original_URI: string, event http_request(c: connection, method: string, original_URI: string,
@ -186,7 +193,7 @@ event http_request(c: connection, method: string, original_URI: string,
} }
++c$http_state$current_request; ++c$http_state$current_request;
set_state(c, T, T); set_state(c, T);
c$http$method = method; c$http$method = method;
c$http$uri = unescaped_URI; c$http$uri = unescaped_URI;
@ -208,8 +215,10 @@ event http_reply(c: connection, version: string, code: count, reason: string) &p
if ( c$http_state$current_response !in c$http_state$pending || if ( c$http_state$current_response !in c$http_state$pending ||
(c$http_state$pending[c$http_state$current_response]?$status_code && (c$http_state$pending[c$http_state$current_response]?$status_code &&
! code_in_range(c$http_state$pending[c$http_state$current_response]$status_code, 100, 199)) ) ! code_in_range(c$http_state$pending[c$http_state$current_response]$status_code, 100, 199)) )
{
++c$http_state$current_response; ++c$http_state$current_response;
set_state(c, F, F); }
set_state(c, F);
c$http$status_code = code; c$http$status_code = code;
c$http$status_msg = reason; c$http$status_msg = reason;
@ -233,7 +242,7 @@ event http_reply(c: connection, version: string, code: count, reason: string) &p
event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=5 event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=5
{ {
set_state(c, F, is_orig); set_state(c, is_orig);
if ( is_orig ) # client headers if ( is_orig ) # client headers
{ {
@ -257,7 +266,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
add c$http$proxied[fmt("%s -> %s", name, value)]; add c$http$proxied[fmt("%s -> %s", name, value)];
} }
else if ( name == "AUTHORIZATION" ) else if ( name == "AUTHORIZATION" || name == "PROXY-AUTHORIZATION" )
{ {
if ( /^[bB][aA][sS][iI][cC] / in value ) if ( /^[bB][aA][sS][iI][cC] / in value )
{ {
@ -278,12 +287,11 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
} }
} }
} }
} }
event http_message_done(c: connection, is_orig: bool, stat: http_message_stat) &priority = 5 event http_message_done(c: connection, is_orig: bool, stat: http_message_stat) &priority = 5
{ {
set_state(c, F, is_orig); set_state(c, is_orig);
if ( is_orig ) if ( is_orig )
c$http$request_body_len = stat$body_length; c$http$request_body_len = stat$body_length;

View file

@ -42,8 +42,8 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
f$irc = irc; f$irc = irc;
} }
event file_mime_type(f: fa_file, mime_type: string) &priority=5 event file_sniff(f: fa_file, meta: fa_metadata) &priority=5
{ {
if ( f?$irc ) if ( f?$irc && meta?$mime_type )
f$irc$dcc_mime_type = mime_type; f$irc$dcc_mime_type = meta$mime_type;
} }

View file

@ -43,7 +43,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5 event bro_init() &priority=5
{ {
Log::create_stream(IRC::LOG, [$columns=Info, $ev=irc_log]); Log::create_stream(IRC::LOG, [$columns=Info, $ev=irc_log, $path="irc"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_IRC, ports); Analyzer::register_for_ports(Analyzer::ANALYZER_IRC, ports);
} }

Some files were not shown because too many files have changed in this diff.