Merge remote-tracking branch 'origin/master' into topic/vladg/bit-1838

This commit is contained in:
Vlad Grigorescu 2017-09-15 20:34:41 -05:00
commit 16f504e828
417 changed files with 8473 additions and 2374 deletions

CHANGES

@ -1,4 +1,466 @@
2.5-297 | 2017-09-11 09:26:33 -0700
* Fix small OCSP parser bug; serial numbers were not passed to events
(Johanna Amann)
* Fix expire-redef.bro test. (Daniel Thayer)
2.5-294 | 2017-08-11 13:51:49 -0500
* Fix core.truncation unit test on macOS. (Jon Siwek)
* Fix a netcontrol test that often fails (Daniel Thayer)
* Update install instructions for Fedora 26 (Daniel Thayer)
2.5-288 | 2017-08-04 14:17:10 -0700
* Fix field not being populated, which resulted in a reporter
message. Addresses BIT-1831. Reported by Chris Herdt. (Seth Hall)
* Support for OCSP and Signed Certificate Timestamp. (Liang
Zhu/Johanna Amann)
- OCSP parsing is added to the X.509 module.
- Signed Certificate Timestamp extraction, parsing, & validation
is added to the SSL, X.509, and OCSP analyzers. Validation is
added to the X.509 BIFs.
This adds the following events and BIFs:
- event ocsp_request(f: fa_file, version: count, requestorName: string);
- event ocsp_request_certificate(f: fa_file, hashAlgorithm: string, issuerNameHash: string, issuerKeyHash: string, serialNumber: string);
- event ocsp_response_status(f: fa_file, status: string);
- event ocsp_response_bytes(f: fa_file, resp_ref: opaque of ocsp_resp, status: string, version: count, responderId: string, producedAt: time, signatureAlgorithm: string, certs: x509_opaque_vector);
- event ocsp_response_certificate(f: fa_file, hashAlgorithm: string, issuerNameHash: string, issuerKeyHash: string, serialNumber: string, certStatus: string, revokeTime: time, revokeReason: string, thisUpdate: time, nextUpdate: time);
- event ocsp_extension(f: fa_file, ext: X509::Extension, global_resp: bool);
- event x509_ocsp_ext_signed_certificate_timestamp(f: fa_file, version: count, logid: string, timestamp: count, hash_algorithm: count, signature_algorithm: count, signature: string);
- event ssl_extension_signed_certificate_timestamp(c: connection, is_orig: bool, version: count, logid: string, timestamp: count, signature_and_hashalgorithm: SSL::SignatureAndHashAlgorithm, signature: string);
- function sct_verify(cert: opaque of x509, logid: string, log_key: string, signature: string, timestamp: count, hash_algorithm: count, issuer_key_hash: string &default=""): bool
- function x509_subject_name_hash(cert: opaque of x509, hash_alg: count): string
- function x509_issuer_name_hash(cert: opaque of x509, hash_alg: count): string
- function x509_spki_hash(cert: opaque of x509, hash_alg: count): string
This also changes the MIME types that we use to identify X.509
certificates in SSL connections from "application/pkix-cert" to
"application/x-x509-user-cert" for host certificates and
"application/x-x509-ca-cert" for CA certificates.
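A minimal sketch of consuming one of the new events from a site script (the event name and signature are taken from the list above; the handler body is illustrative, not part of this commit):

```bro
event ocsp_response_status(f: fa_file, status: string)
	{
	# Flag OCSP responses whose status is anything other than "successful".
	if ( status != "successful" )
		print fmt("OCSP response in file %s has status %s", f$id, status);
	}
```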
* The SSL scripts provide a new hook "ssl_finishing(c: connection)"
to trigger actions after the handshake has concluded. (Johanna
Amann)
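A hedged sketch of using the new hook from a site script (assuming it lives in the SSL module as SSL::ssl_finishing; the body is an example only):

```bro
hook SSL::ssl_finishing(c: connection)
	{
	# At this point the handshake has concluded and c$ssl is populated.
	if ( c?$ssl && c$ssl?$version )
		print fmt("%s finished a %s handshake", c$id$resp_h, c$ssl$version);
	}
```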
* Add an internal API for protocol analyzers to provide the MIME
type of file data directly, disabling automatic inference.
(Johanna Amann)
2.5-186 | 2017-07-28 12:22:20 -0700
* Improved handling of '%' at end of line in HTTP analyzer. (Johanna
Amann)
* Add canonifier to catch and release test that should fix test
failures. (Johanna Amann)
2.5-181 | 2017-07-25 16:02:41 -0700
* Extend plugin infrastructure to catch Bro version mismatches at link
time.
The version number used for the function name is slightly normalized
to skip any git revision postfixes (i.e., "2.5-xxx" is always treated
as "2.5-git") so that one doesn't need to recompile all plugins after
every master commit. That seems good enough, usually people run into
this when upgrading to a new release. The Plugin API version is also
part of the version number.
If one loads an old plugin into a new Bro, the error message looks
like this:
$ bro -NN Demo::Foo
fatal error in /home/robin/bro/master/scripts/base/init-bare.bro, line 1:
cannot load plugin library /home/robin/tmp/p/build//lib/Demo-Foo.linux-x86_64.so:
/home/robin/tmp/p/build//lib/Demo-Foo.linux-x86_64.so: undefined symbol: bro_version_2_5_git_debug
(Robin Sommer)
* Several fixes and improvements for software version parsing.
- Addresses Philip Romero's question from the Bro mailing list.
- Adds Microsoft Edge as a detected browser.
- We are now unescaping encoded characters in software names. (Seth Hall)
* Remove another reference to now removed bro-plugins. (Johanna Amann)
2.5-175 | 2017-07-07 14:35:11 -0700
* Removing aux/plugins. Most of the plugins are now Bro packages.
(Robin Sommer)
* Update install instructions for Debian 9. (Daniel Thayer)
2.5-170 | 2017-07-07 12:20:19 -0700
* Update krb-protocol.pac (balintm)
This fixes parsing of KRB_AP_Options where the padding and flags were reversed.
* Add new cipher suites from draft-ietf-tls-ecdhe-psk-aead-05 (Johanna Amann)
* Test changes: remove loading of listen.bro in tests that do not use it,
serialize tests that load listen.bro, fix race conditions in some tests.
(Daniel Thayer)
* The broccoli-v6addrs "-r" option was renamed to "-R" (Daniel Thayer)
2.5-156 | 2017-06-13 11:01:56 -0700
* Add 2.5.1 news file to master. (Johanna Amann)
* Remove link to no longer existing myricom plugin. (Johanna Amann)
2.5-152 | 2017-06-05 15:16:49 -0700
* Remove non-existing links; this broke documentation build. (Johanna Amann)
* Fix at_least in Version.bro - it did exactly the opposite of the documented
behavior. (Johanna Amann)
2.5-147 | 2017-05-22 20:32:32 -0500
* Add nfs unittest. (Julien Wallior)
* Added nfs_proc_rename event to rpc/nfs protocol analyzer.
(Roberto Del Valle Rodriguez)
* Expand parsing of RPC Call packets to add Uid, Gid, Stamp, MachineName
and AuxGIDs (Julien Wallior)
* Fix NFS protocol parser. (Julien Wallior)
2.5-142 | 2017-05-22 00:08:52 -0500
* Add gzip log writing to the ascii writer.
This feature can be enabled globally for all logs by setting
LogAscii::gzip_level to a value greater than 0.
This feature can be enabled on a per-log basis by setting gzip-level in
$config to a value greater than 0. (Corelight)
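Sketch of both ways to enable it (the level value 3 and the filter chosen are arbitrary examples):

```bro
# Globally, for all ASCII logs:
redef LogAscii::gzip_level = 3;

# Per log, via a filter's $config table:
event bro_init()
	{
	local f = Log::get_filter(Conn::LOG, "default");
	f$config = table(["gzip-level"] = "3");
	Log::add_filter(Conn::LOG, f);
	}
```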
2.5-140 | 2017-05-12 15:31:32 -0400
* Lessen cluster load due to notice suppression.
(Johanna Amann, Justin Azoff)
2.5-137 | 2017-05-04 11:37:48 -0500
* Add plugin hooks for log init and writing: HookLogInit and HookLogWrite.
(Corelight)
* TLS: Fix compile warning (comparison between signed/unsigned).
This was introduced with the addition of new TLS1.3 extensions. (Johanna Amann)
2.5-134 | 2017-05-01 10:34:34 -0500
* Add rename, unlink, and rmdir bifs. (Corelight)
2.5-131 | 2017-04-21 14:27:16 -0700
* Guard more format strings with __attribute__((format)). (Johanna Amann)
* Add support for two TLS 1.3 extensions.
New events:
- event ssl_extension_supported_versions(c: connection, is_orig: bool, versions: index_vec)
- event ssl_extension_psk_key_exchange_modes(c: connection, is_orig: bool, modes: index_vec) (Johanna Amann)
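An illustrative handler for one of the new events (the body is an example only; index_vec is a vector of count):

```bro
event ssl_extension_supported_versions(c: connection, is_orig: bool, versions: index_vec)
	{
	# The client side advertises every TLS version it is willing to speak.
	if ( is_orig )
		for ( i in versions )
			print fmt("offered TLS version 0x%04x", versions[i]);
	}
```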
2.5-125 | 2017-04-17 22:02:39 +0200
* Documentation updates for loading Bro scripts. (Seth Hall)
2.5-123 | 2017-04-10 13:30:14 -0700
* Fix some failing tests by increasing delay times. (Daniel Thayer)
* Threading Types: add a bit of documentation to subnet type. (Johanna Amann)
* Fix a couple of issues reported by Coverity. (Robin Sommer)
2.5-119 | 2017-04-07 10:30:09 -0700
* Fix the test group name in some broker test files. (Daniel Thayer)
* NetControl: small rule_error changes (test, call fix). (Johanna Amann)
* SSL: update dpd signature for TLS1.3. (Johanna Amann)
2.5-115 | 2017-03-23 07:25:41 -0700
* Fix a test that was failing on some platforms. (Daniel Thayer)
* Remove test for cluster catch and release. This test keeps failing
intermittently because of timing issues that are surprisingly hard
to fix. (Johanna Amann)
* Fix some Coverity warnings. (Daniel Thayer)
2.5-106 | 2017-03-13 11:19:03 -0700
* Print version string to stdout on --version, instead of printing it
to stderr, since the output is not an error. (Pete)
* Fix compiler warning raised by llvm8. (Johanna Amann)
* Fix coverity warning in Ascii reader. (Johanna Amann)
2.5-101 | 2017-03-09 12:20:11 -0500
* The input framework's ASCII reader is now more resilient.
By default, the ASCII reader does not fail on errors anymore.
If there is a problem parsing a line, a reporter warning is
written and parsing continues. If the file is missing or can't
be read, the input thread just tries again on the next heartbeat.
(Seth Hall, Johanna Amann)
2.5-92 | 2017-03-03 10:44:14 -0800
* Move threading to C++11 primitives (mostly). (Johanna Amann)
* Fix a test that sometimes fails on FreeBSD. (Daniel Thayer)
* Remove build time warnings. (Seth Hall)
2.5-84 | 2017-02-27 15:08:55 -0500
* Change semantics of Broker's remote logging to match old communication
framework. (Robin Sommer)
* Add and fix documentation for HookSetupAnalyzerTree (Johanna Amann)
2.5-76 | 2017-02-23 10:19:57 -0800
* Kerberos ciphertext had some additional ASN.1 content being lumped
in. (Vlad Grigorescu)
* Updated Windows version detection to include Windows 10. (Fatema
Bannatwala, Keith Lehigh, Mike, Seth Hall).
2.5-70 | 2017-02-20 00:20:02 -0500
* Rework the RADIUS base script.
Fixes BIT-1769 which improves logging behavior when replies aren't
seen. Also added a `framed_addr` field to indicate if the radius
server is hinting at an address for the client and a `ttl` field to
show how quickly the server is responding. (Seth Hall)
2.5-68 | 2017-02-18 13:59:05 -0500
* Refactored base krb scripts. (Seth Hall)
* New script to log ticket hashes in krb log
(policy/protocols/krb/ticket-logging.bro). Also, add
ciphertext to ticket data structure. (John E. Rollinson)
2.5-62 | 2017-02-15 15:56:38 -0800
* Fix cases in which scripts were able to access uninitialized
variables. Addresses BIT-1785. (Jon Siwek)
2.5-60 | 2017-02-15 15:19:20 -0800
* Implement ERSPAN support.
There is a small caveat to this implementation. The ethernet
header that is carried over the tunnel is ignored. If a user
tries to do MAC address logging, it will only show the MAC
addresses for the outer tunnel and the inner MAC addresses
will be stripped and not available anywhere. (Seth Hall)
* Tiny mime-type fix from Dan Caselden. (Seth Hall)
* Update failing intel framework test. (Johanna Amann)
2.5-55 | 2017-02-10 09:50:43 -0500
* Fixed intel expiration reset. Reinserting the same indicator did not reset
the expiration timer for the indicator in the underlying data store.
Addresses BIT-1790. (Jan Grashoefer)
2.5-51 | 2017-02-06 10:15:56 -0500
* Fix memory leak in file analyzer. (Johanna Amann)
* Fix a series of problems with the to_json function.
Addresses BIT-1788. (Daniel Thayer)
2.5-44 | 2017-02-03 16:38:10 -0800
* Change snap lengths of some tests. (Johanna Amann)
* Fix layer 2 connection flipping. If connection flipping occurred in
Sessions.cc code (invoked e.g. when the original SYN is missing),
layer 2 flipping was not performed. (Johanna Amann)
2.5-39 | 2017-02-01 14:03:08 -0800
* Fix file analyzer memory management, and a delay in disabling file analyzers.
File analyzers are no longer deleted immediately; this is delayed until
a file object is destroyed. Furthermore, no data is sent to file analyzers
anymore after they have been disabled.
2.5-33 | 2017-02-01 10:07:47 -0500
* New file types sigs. (Keith Lehigh)
* Change snaplen of test trace from 1,000,000 to 10,000
Recent versions of libpcap are unhappy with values bigger than 262,144
and will refuse reading the file. (Johanna Amann)
2.5-30 | 2017-01-26 13:24:36 -0800
* Extend file extraction log, adding extracted_cutoff and extracted_size
fields. (Seth Hall)
* Add new TLS extension type (cached_info) (Johanna Amann)
* Remove brocon event; it caused test failures. (Johanna Amann)
* Add missing paths to SMB Log::create_streams calls. (Johanna Amann)
* Tiny xlsx file signature fix. (Dan Caselden)
* Allow access to global variables using GLOBAL:: namespace.
Addresses BIT-1758. (Francois Pennaneac)
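Example of the new namespace access (the module and variable names are hypothetical):

```bro
global greeting = "top-level";

module Demo;

export { global greeting = "module-level"; }

event bro_init()
	{
	# Inside a module a bare name resolves to the module's export first;
	# GLOBAL:: reaches the script-level global of the same name.
	print greeting;          # resolves to Demo::greeting
	print GLOBAL::greeting;  # resolves to the top-level greeting
	}
```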
2.5-17 | 2016-12-07 14:51:37 -0800
* Broxygen no longer attempts to do tilde expansion of PATH, giving
an error message instead if bro is located in a PATH component
that starts with a tilde. Broxygen also no longer attempts to get
the mtime of the bro executable when bro is not invoked with the
"-X" option. (Daniel Thayer)
* Fix failing tests, compiler warnings and build issues on OpenBSD.
(Daniel Thayer)
2.5-9 | 2016-12-05 11:39:54 -0800
* Fix validation of OCSP replies inside of Bro. (Johanna Amann)
At one place in the code, we did not check the correct return
code. This makes it possible for a reply to get a response of
"good", when the OCSP reply is not actually signed by the
responder in question.
This also instructs OCSP verification to skip certificate chain
validation, which we do ourselves earlier because the OCSP verify
function cannot do it correctly (no way to pass timestamp).
2.5-6 | 2016-11-29 12:51:04 -0800
* Fix a build failure on OpenBSD relating to pcap_pkthdr. Also fixes
an include issue on OpenBSD. (Daniel Thayer)
* Fix compile error in krb-types.pac. (Johanna Amann)
* Update krb-types.pac: KerberosString formatting for the principal
name is now compliant with RFC 4120 section 5.2.2. (jamesecorrenti)
2.5 | 2016-11-16 14:51:59 -0800
* Release 2.5.
2.5-beta2-17 | 2016-11-14 17:59:19 -0800
* Add missing '@load ./pubkey-hashes' to
policy/frameworks/intel/seen. (Robin Sommer)
2.5-beta2-15 | 2016-11-14 17:52:55 -0800
* Remove unused "bindist" make target. (Daniel Thayer)
* Improve the "How to Upgrade" page in the Bro docs. (Daniel Thayer)
* Update the quickstart guide for the deploy command. (Daniel Thayer)
* Improved installation instructions for Mac OS X. (Daniel Thayer)
* Lots of more small updates to documentation. (Daniel Thayer)
2.5-beta2 | 2016-11-02 12:13:11 -0700
* Release 2.5-beta2.
2.5-beta-135 | 2016-11-02 09:47:20 -0700
* SMB fixes and cleanup. Includes better SMB error handling, improved DCE_RPC
handling in edge cases where drive_mapping is not seen. The concept of unknown
shares has been removed with this change. Also fixes SMB tree connect handling and
removes files that are not parsed. SMB2 error parsing is disabled because it never
was implemented correctly. (Seth Hall)
* Including a test for raw NTLM in SMB (Seth Hall)
* Updates for SMB auth handling from Martin van Hensbergen.
- Raw NTLM (not in GSSAPI) over SMB is now handled correctly.
- The encrypted NTLM session key is now passed into scriptland
through the ntlm_authenticate event. (Seth Hall)
* Add a files framework signature for VIM tmp files. (Seth Hall)
* Version parsing scripts now support several beta versions. (Johanna Amann)
2.5-beta-123 | 2016-11-01 09:40:49 -0700
* Add a new site policy script local-logger.bro. (Daniel Thayer)
2.5-beta-121 | 2016-10-31 14:24:33 -0700
* Python 3 compatibility fixes for documentation building. (Daniel Thayer)
2.5-beta-114 | 2016-10-27 09:00:24 -0700
* Fix for Sphinx >= 1.4 compatibility. (Robin Sommer)
2.5-beta-113 | 2016-10-27 07:44:25 -0700
* XMPP: Fix detection of StartTLS when using namespaces. (Johanna
Amann)
2.5-beta-110 | 2016-10-26 09:42:11 -0400
* Improvements to the DCE_RPC analyzer to make it perform fragment handling
correctly and generally be more resistant to unexpected traffic. (Seth Hall)
2.5-beta-102 | 2016-10-25 09:43:45 -0700
* Update number of bytes in request/response of smb1-com-open-andx.pac. (balintm)
* Fix IPv4 CIDR specifications and the payload-size condition of signature matching.
(Robin Sommer)
* Python 3 compatibility fix for coverage-calc script. (Daniel Thayer)
2.5-beta-93 | 2016-10-24 11:11:07 -0700
* Fix alignment issue of ones_complement_checksum. This error
occurred reproducibly with newer compilers when called from
icmp6_checksum. (Johanna Amann)
2.5-beta-91 | 2016-10-20 11:40:37 -0400
* Fix istate.pybroccoli test on systems using Python 3. (Daniel Thayer)
2.5-beta-89 | 2016-10-18 21:50:51 -0400
* SSH analyzer changes: the events are now restructured a bit. There is a new
@ -395,7 +857,7 @@
2.4-683 | 2016-07-08 14:55:04 -0700
* Extendign connection history field to flag with '^' when Bro flips
* Extending connection history field to flag with '^' when Bro flips
a connection's endpoints. Addresses BIT-1629. (Robin Sommer)
2.4-680 | 2016-07-06 09:18:21 -0700
@ -505,7 +967,7 @@
2.4-623 | 2016-06-15 17:31:12 -0700
* &default values are no longer overwritten with uninitialized
by the input framework. (Jan Grashoefer)
by the input framework. (Jan Grashoefer)
2.4-621 | 2016-06-15 09:18:02 -0700


@ -40,12 +40,26 @@ file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/bro-path-dev.csh
"setenv PATH \"${CMAKE_CURRENT_BINARY_DIR}/src\":$PATH\n")
file(STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/VERSION" VERSION LIMIT_COUNT 1)
execute_process(COMMAND grep "^#define *BRO_PLUGIN_API_VERSION"
INPUT_FILE ${CMAKE_CURRENT_SOURCE_DIR}/src/plugin/Plugin.h
OUTPUT_VARIABLE API_VERSION
OUTPUT_STRIP_TRAILING_WHITESPACE)
string(REGEX REPLACE "^#define.*VERSION *" "" API_VERSION "${API_VERSION}")
string(REPLACE "." " " version_numbers ${VERSION})
separate_arguments(version_numbers)
list(GET version_numbers 0 VERSION_MAJOR)
list(GET version_numbers 1 VERSION_MINOR)
set(VERSION_MAJ_MIN "${VERSION_MAJOR}.${VERSION_MINOR}")
set(VERSION_C_IDENT "${VERSION}_plugin_${API_VERSION}")
string(REGEX REPLACE "-[0-9]*$" "_git" VERSION_C_IDENT "${VERSION_C_IDENT}")
string(REGEX REPLACE "[^a-zA-Z0-9_\$]" "_" VERSION_C_IDENT "${VERSION_C_IDENT}")
if(${ENABLE_DEBUG})
set(VERSION_C_IDENT "${VERSION_C_IDENT}_debug")
endif()
########################################################################
## Dependency Configuration


@ -42,10 +42,6 @@ dist:
@$(HAVE_MODULES) && find $(VERSION_MIN) -name .git\* | xargs rm -rf || exit 0
@$(HAVE_MODULES) && tar -czf $(VERSION_MIN).tgz $(VERSION_MIN) && echo Package: $(VERSION_MIN).tgz && rm -rf $(VERSION_MIN) || exit 0
bindist:
@( cd pkg && ( ./make-deb-packages || ./make-mac-packages || \
./make-rpm-packages ) )
distclean:
rm -rf $(BUILD)
$(MAKE) -C testing $@
@ -65,4 +61,4 @@ configured:
@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )
@test -e $(BUILD)/Makefile || ( echo "Error: No build/Makefile found. Did you run configure?" && exit 1 )
.PHONY : all install clean doc docclean dist bindist distclean configured
.PHONY : all install clean doc docclean dist distclean configured

NEWS

@ -4,6 +4,95 @@ release. For an exhaustive list of changes, see the ``CHANGES`` file
(note that submodules, such as BroControl and Broccoli, come with
their own ``CHANGES``.)
Bro 2.6 (in progress)
=====================
New Functionality
-----------------
- Support for OCSP and Signed Certificate Timestamp. This adds the
following events and BIFs:
- Events: ocsp_request, ocsp_request_certificate,
ocsp_response_status, ocsp_response_bytes,
ocsp_response_certificate, ocsp_extension,
x509_ocsp_ext_signed_certificate_timestamp,
ssl_extension_signed_certificate_timestamp
- Functions: sct_verify, x509_subject_name_hash,
x509_issuer_name_hash, x509_spki_hash
- The SSL scripts provide a new hook "ssl_finishing(c: connection)"
to trigger actions after the handshake has concluded.
Changed Functionality
---------------------
- The MIME types used to identify X.509 certificates in SSL
connections changed from "application/pkix-cert" to
"application/x-x509-user-cert" for host certificates and
"application/x-x509-ca-cert" for CA certificates.
Removed Functionality
---------------------
- We no longer maintain any Bro plugins as part of the Bro
distribution. Most of the plugins that used to be in aux/plugins have
been moved over to use the Bro Package Manager instead. See
https://github.com/bro/packages for a list of Bro packages currently
available.
Bro 2.5.1
=========
New Functionality
-----------------
- Bro now includes bifs for rename, unlink, and rmdir.
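A hedged sketch of the new bifs (paths are placeholders; the bifs are assumed to return a success boolean):

```bro
event bro_init()
	{
	if ( ! rename("/tmp/a.log", "/tmp/b.log") )
		print "rename failed";

	unlink("/tmp/stale.log");
	rmdir("/tmp/stale-dir");
	}
```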
- Bro now includes events for two extensions used by TLS 1.3:
ssl_extension_supported_versions and ssl_extension_psk_key_exchange_modes
- Bro now includes hooks that can be used to interact with log processing
on the C++ level.
- Bro now supports ERSPAN. Currently this ignores the Ethernet header that
is carried over the tunnel; if MAC addresses are logged, only the outer
MAC is returned.
- Added a new BroControl option CrashExpireInterval to enable
"broctl cron" to remove crash directories that are older than the
specified number of days (the default value is 0, which means crash
directories never expire).
- Added a new BroControl option MailReceivingPackets to control
whether or not "broctl cron" will mail a warning when it notices
that no packets were seen on an interface.
- There is a new broctl command-line option "--version" which outputs
the BroControl version.
Changed Functionality
---------------------
- The input framework's Ascii reader is now more resilient. In previous
versions, if an input was marked to reread a file on change and the file
didn't exist during a check, Bro would stop watching the file. The same
could happen with bad data in a line of a file. These situations no
longer cause Bro to stop watching input files. The old behavior is
available through settings in the Ascii reader.
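The strict pre-change behavior can be restored via the reader's settings; the option names below are assumptions based on this entry, not confirmed by it:

```bro
# Fail again on unparseable lines / missing files (assumed option names):
redef InputAscii::fail_on_invalid_lines = T;
redef InputAscii::fail_on_file_problem = T;
```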
- The RADIUS scripts have been reworked. Requests are now logged even if
there is no response. The new framed_addr field in the log indicates
if the radius server is hinting at an address for the client. The ttl
field indicates how quickly the server is replying to the network access
server.
- With the introduction of the Bro package manager, the Bro plugin repository
is considered deprecated. The af_packet, postgresql, and tcprs plugins have
already been removed and are available via bro-pkg.
Bro 2.5
=======
@ -31,16 +120,20 @@ New Functionality
transferred over SMB can be analyzed.
- Includes GSSAPI and NTLM analyzer and reimplements the DCE-RPC
analyzer.
- New logs: smb_cmd.log, smb_files.log, smb_mapping.log, ntlm.log, and dce_rpc.log
- New logs: smb_cmd.log, smb_files.log, smb_mapping.log, ntlm.log,
and dce_rpc.log
- Not every possible SMB command or functionality is implemented, but
generally, file handling should work whenever files are transferred.
Please speak up on the mailing list if there is an obvious oversight.
- Bro now includes the NetControl framework. The framework allows for easy
interaction of Bro with hard- and software switches, firewalls, etc.
New log files: net_control.log, netcontrol_catch_release.log,
New log files: netcontrol.log, netcontrol_catch_release.log,
netcontrol_drop.log, and netcontrol_shunt.log.
- Bro now includes the OpenFlow framework which exposes the data structures
necessary to interface to OpenFlow capable hardware.
- Bro's Intelligence Framework was refactored and new functionality
has been added:
@ -86,8 +179,8 @@ New Functionality
groups in TLS 1.3.
- The new event ssl_application_data gives information about application data
that is exchanged before encryption fully starts. This is used to detect when
encryption starts in TLS 1.3.
that is exchanged before encryption fully starts. This is used to detect
when encryption starts in TLS 1.3.
- Bro now tracks VLAN IDs. To record them inside the connection log,
load protocols/conn/vlan-logging.bro.
@ -116,7 +209,7 @@ New Functionality
- matching_subnets(subnet, table) returns all subnets of the set or table
that contain the given subnet.
- filter_subnet_table(subnet, table) works like check_subnet, but returns
- filter_subnet_table(subnet, table) works like matching_subnets, but returns
a table containing all matching entries.
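A quick illustration of the two functions side by side (the table contents are made up):

```bro
event bro_init()
	{
	local t: table[subnet] of string = {
		[10.0.0.0/8] = "rfc1918",
		[10.2.0.0/16] = "lab",
	};

	# All keys of t that contain the given subnet:
	print matching_subnets(10.2.3.0/24, t);

	# Same matching logic, but returns a table keeping the values:
	print filter_subnet_table(10.2.3.0/24, t);
	}
```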
- Several built-in functions for handling IP addresses and subnets were added:
@ -154,8 +247,10 @@ New Functionality
- The pcap buffer size can be set through the new option Pcap::bufsize.
- Input framework readers Table and Event can now define a custom
event to receive logging messages.
- Input framework readers stream types Table and Event can now define a custom
event (specified by the new "error_ev" field) to receive error messages
emitted by the input stream. This can, e.g., be used to raise notices in
case errors occur when reading an important input source.
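A sketch of wiring up such an error event (the stream definition and event name are illustrative; the "error_ev" field name is from the entry above):

```bro
type Idx: record { host: addr; };

global hosts: set[addr] = set();

event hosts_error(desc: Input::TableDescription, message: string, level: Reporter::Level)
	{
	# Raise whatever notice or log entry is appropriate for the source.
	print fmt("input stream %s: %s", desc$name, message);
	}

event bro_init()
	{
	Input::add_table([$source="hosts.txt", $name="hosts",
	                  $idx=Idx, $destination=hosts,
	                  $error_ev=hosts_error]);
	}
```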
- The logging framework now supports user-defined record separators,
renaming of column names, as well as extension data columns that can
@ -315,6 +410,10 @@ Changed Functionality
the crash report includes instructions on how to get backtraces
included in future crash reports.
- There is a new option SitePolicyScripts that replaces SitePolicyStandalone
(the old option is still available, but will be removed in the next
release).
Removed Functionality
---------------------


@ -1 +1 @@
2.5-beta-89
2.5-297

@ -1 +1 @@
Subproject commit 097c1dde17c218973a9adad9ba39f8cfd639d9c1
Subproject commit 27356ae52ff9ff639b53a7325ea3262e1a13b704

@ -1 +1 @@
Subproject commit 0191254451d1aa9a5c985d493ad51f4f1c5f7d85
Subproject commit 02f710a436dfe285bae0d48d7f7bc498783e11a8

@ -1 +1 @@
Subproject commit 0743c4f51600cc90aceccaee72ca879b271712d2
Subproject commit 25907f6b0a5347304d1ec8213bfad3d114260ca0

@ -1 +1 @@
Subproject commit f944471bec062876aa18317f51b6fbe5325ca166
Subproject commit 1ab5ed3d3b0f2a3ff231de77816a697d55abccb8

@ -1 +1 @@
Subproject commit 497924cdcc23d26221bc39b24bcddcb62ec13ca7
Subproject commit 862c982f35e342fb10fa281120135cf61eca66bb

@ -1 +1 @@
Subproject commit 625dbecfd63022d79a144b9651085e68cdf99ce4
Subproject commit 2810ccee25f6f20be5cd241155f12d02a79d592a

@ -1 +1 @@
Subproject commit c60d24c7b7cc95367da3ac8381db3463a6fbd147
Subproject commit 9f3d6fce49cad3b45b5ddd0fe1f3c79186e1d2e7

@ -1 +0,0 @@
Subproject commit a895d4d6d475326f400e4664e08c8f71280d3294


@ -229,3 +229,14 @@
#ifndef BRO_PLUGIN_INTERNAL_BUILD
#define BRO_PLUGIN_INTERNAL_BUILD @BRO_PLUGIN_INTERNAL_BUILD@
#endif
/* A C function that has the Bro version encoded into its name. */
#define BRO_VERSION_FUNCTION bro_version_@VERSION_C_IDENT@
#ifdef __cplusplus
extern "C" {
#endif
extern const char* BRO_VERSION_FUNCTION();
#ifdef __cplusplus
}
#endif

cmake

@ -1 +1 @@
Subproject commit 39510b5fb2351d7aac85da0d335a128402db3bbc
Subproject commit 79f2b2e944da77774675be4d5254156451967371


@ -1 +0,0 @@
../../../aux/plugins/README


@ -1 +0,0 @@
../../../../aux/plugins/af_packet/README


@ -1 +0,0 @@
../../../../aux/plugins/elasticsearch/README


@ -1 +0,0 @@
../../../../aux/plugins/kafka/README


@ -1 +0,0 @@
../../../../aux/plugins/myricom/README


@ -1 +0,0 @@
../../../../aux/plugins/netmap/README


@ -1 +0,0 @@
../../../../aux/plugins/pf_ring/README


@ -1 +0,0 @@
../../../../aux/plugins/postgresql/README


@ -1 +0,0 @@
../../../../aux/plugins/redis/README


@ -1 +0,0 @@
../../../../aux/plugins/tcprs/README


@ -21,7 +21,6 @@ current, independent component releases.
Broker - User Manual <broker/broker-manual.rst>
BroControl - Interactive Bro management shell <broctl/README>
Bro-Aux - Small auxiliary tools for Bro <bro-aux/README>
Bro-Plugins - A collection of plugins for Bro <bro-plugins/README>
BTest - A unit testing framework <btest/README>
Capstats - Command-line packet statistic tool <capstats/README>
PySubnetTree - Python module for CIDR lookups<pysubnettree/README>


@ -105,24 +105,9 @@ a Bro cluster (do this as the Bro user on the manager host only):
> broctl install
- Some tasks need to be run on a regular basis. On the manager node,
insert a line like this into the crontab of the user running the
cluster::
0-59/5 * * * * <prefix>/bin/broctl cron
(Note: if you are editing the system crontab instead of a user's own
crontab, then you need to also specify the user which the command
will be run as. The username must be placed after the time fields
and before the broctl command.)
Note that on some systems (FreeBSD in particular), the default PATH
for cron jobs does not include the directories where bash and python
are installed (the symptoms of this problem would be that "broctl cron"
works when run directly by the user, but does not work from a cron job).
To solve this problem, you would either need to create symlinks
to bash and python in a directory that is in the default PATH for
cron jobs, or specify a new PATH in the crontab.
- See the :doc:`BroControl <../components/broctl/README>` documentation
for information on setting up a cron job on the manager host that can
monitor the cluster.
PF_RING Cluster Configuration


@ -14,7 +14,7 @@ from sphinx.locale import l_, _
from sphinx.directives import ObjectDescription
from sphinx.roles import XRefRole
from sphinx.util.nodes import make_refnode
import string
from sphinx import version_info
from docutils import nodes
from docutils.parsers.rst import Directive
@ -29,9 +29,17 @@ class SeeDirective(Directive):
def run(self):
n = see('')
n.refs = string.split(string.join(self.content))
n.refs = " ".join(self.content).split()
return [n]
# Wrapper for creating a tuple for index nodes, staying backwards
# compatible to Sphinx < 1.4:
def make_index_tuple(indextype, indexentry, targetname, targetname2):
if version_info >= (1, 4, 0, '', 0):
return (indextype, indexentry, targetname, targetname2, None)
else:
return (indextype, indexentry, targetname, targetname2)
def process_see_nodes(app, doctree, fromdocname):
for node in doctree.traverse(see):
content = []
@ -95,8 +103,9 @@ class BroGeneric(ObjectDescription):
indextext = self.get_index_text(self.objtype, name)
if indextext:
self.indexnode['entries'].append(('single', indextext,
targetname, targetname))
self.indexnode['entries'].append(make_index_tuple('single',
indextext, targetname,
targetname))
def get_index_text(self, objectname, name):
return _('%s (%s)') % (name, self.objtype)
@ -120,9 +129,9 @@ class BroNamespace(BroGeneric):
self.update_type_map(name)
indextext = self.get_index_text(self.objtype, name)
self.indexnode['entries'].append(('single', indextext,
self.indexnode['entries'].append(make_index_tuple('single', indextext,
targetname, targetname))
self.indexnode['entries'].append(('single',
self.indexnode['entries'].append(make_index_tuple('single',
"namespaces; %s" % (sig),
targetname, targetname))
@ -148,7 +157,7 @@ class BroEnum(BroGeneric):
self.update_type_map(name)
indextext = self.get_index_text(self.objtype, name)
#self.indexnode['entries'].append(('single', indextext,
#self.indexnode['entries'].append(make_index_tuple('single', indextext,
# targetname, targetname))
m = sig.split()
@ -162,7 +171,7 @@ class BroEnum(BroGeneric):
self.env.domaindata['bro']['notices'] = []
self.env.domaindata['bro']['notices'].append(
(m[0], self.env.docname, targetname))
self.indexnode['entries'].append(('single',
self.indexnode['entries'].append(make_index_tuple('single',
"%s (enum values); %s" % (m[1], m[0]),
targetname, targetname))
@ -204,7 +213,7 @@ class BroNotices(Index):
entries = content.setdefault(modname, [])
entries.append([n[0], 0, n[1], n[2], '', '', ''])
content = sorted(content.iteritems())
content = sorted(content.items())
return content, False
@ -280,5 +289,5 @@ class BroDomain(Domain):
'unknown target for ":bro:%s:`%s`"' % (typ, target))
def get_objects(self):
for (typ, name), docname in self.data['objects'].iteritems():
for (typ, name), docname in self.data['objects'].items():
yield name, name, typ, docname, typ + '-' + name, 1


@ -532,10 +532,5 @@ Bro supports the following additional built-in output formats:
logging-input-sqlite
Additional writers are available as external plugins:
.. toctree::
:maxdepth: 1
../components/bro-plugins/README
Additional writers are available as external plugins through the `Bro
Package Manager <https://github.com/bro/bro-plugins>`_.


@ -31,12 +31,12 @@ NetControl Architecture
NetControl architecture (click to enlarge).
The basic architecture of the NetControl framework is shown in the figure above.
Conceptually, the NetControl framework sits inbetween the user provided scripts
Conceptually, the NetControl framework sits between the user provided scripts
(which use the Bro event engine) and the network device (which can either be a
hardware or software device), that is used to implement the commands.
The NetControl framework supports a number of high-level calls, like the
:bro:see:`NetControl::drop_address` function, or lower a lower level rule
:bro:see:`NetControl::drop_address` function, or a lower level rule
syntax. After a rule has been added to the NetControl framework, NetControl
sends the rule to one or several of its *backends*. Each backend is responsible
to communicate with a single hard- or software device. The NetControl framework
@@ -90,16 +90,12 @@ high-level functions.
* - :bro:see:`NetControl::drop_address`
- Calling this function causes NetControl to block all packets involving
an IP address from being forwarded
an IP address from being forwarded.
* - :bro:see:`NetControl::drop_connection`
- Calling this function stops all packets of a specific connection
(identified by its 5-tuple) from being forwarded.
* - :bro:see:`NetControl::drop_address`
- Calling this function causes NetControl to block all packets involving
an IP address from being forwarded
* - :bro:see:`NetControl::drop_address_catch_release`
- Calling this function causes all packets of a specific source IP to be
blocked. This function uses catch-and-release functionality and the IP
@@ -114,7 +110,7 @@ high-level functions.
resources by shunting flows that have been identified as being benign.
* - :bro:see:`NetControl::redirect_flow`
- Calling this function causes NetControl to redirect an uni-directional
- Calling this function causes NetControl to redirect a uni-directional
flow to another port of the networking hardware.
* - :bro:see:`NetControl::quarantine_host`
@@ -122,7 +118,7 @@ high-level functions.
traffic to a host with a special DNS server, which resolves all queries
as pointing to itself. The quarantined host is only allowed between the
special server, which will serve a warning message detailing the next
steps for the user
steps for the user.
* - :bro:see:`NetControl::whitelist_address`
- Calling this function causes NetControl to push a whitelist entry for an
@@ -154,7 +150,7 @@ entries, which show that the debug plugin has been initialized and added.
Afterwards, there are two :bro:see:`NetControl::RULE` entries; the first shows
that the addition of a rule has been requested (state is
:bro:see:`NetControl::REQUESTED`). The following line shows that the rule was
successfully added (the state is :bro:see:`NetControl::SUCCEEDED`). The
successfully added (the state is :bro:see:`NetControl::SUCCEEDED`). The
remainder of the log line gives more information about the added rule, which in
our case applies to a specific 5-tuple.
@@ -227,14 +223,14 @@ The *target* of a rule specifies if the rule is applied in the *forward path*,
and affects packets as they are forwarded through the network, or if it affects
the *monitor path* and only affects the packets that are sent to Bro, but not
the packets that traverse the network. The *entity* specifies the address,
connection, etc. that the rule applies to. In addition, each notice has a
connection, etc. that the rule applies to. In addition, each rule has a
*timeout* (which can be left empty), a *priority* (with higher priority rules
overriding lower priority rules). Furthermore, a *location* string with more
text information about each rule can be provided.
There are a couple more fields that only needed for some rule types. For
There are a couple more fields that are only needed for some rule types. For
example, when you insert a redirect rule, you have to specify the port that
packets should be redirected too. All these fields are shown in the
packets should be redirected to. All these fields are shown in the
:bro:see:`NetControl::Rule` documentation.
To give an example on how to construct your own rule, we are going to write
@@ -243,7 +239,7 @@ difference between our function and the one provided by NetControl is the fact
that the NetControl function has additional functionality, e.g. for logging.
Once again, we are going to test our function with a simple example that simply
drops all connections on the Network:
drops all connections on the network:
.. btest-include:: ${DOC_ROOT}/frameworks/netcontrol-4-drop.bro
@@ -254,7 +250,7 @@ drops all connections on the Network:
The last example shows that :bro:see:`NetControl::add_rule` returns a string
identifier that is unique for each rule (uniqueness is not preserved across
restarts or Bro). This rule id can be used to later remove rules manually using
restarts of Bro). This rule id can be used to later remove rules manually using
:bro:see:`NetControl::remove_rule`.
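The add/remove cycle described above can be sketched as follows (a hypothetical sketch, not from this commit; the timeout value and event choices are illustrative, and the helper is assumed to return an empty string on failure):

```bro
# Sketch: block a connection and remember the returned rule id so the
# block can later be lifted manually with NetControl::remove_rule.
global my_rules: set[string];

event connection_established(c: connection)
    {
    # High-level helpers return the unique rule identifier.
    local id = NetControl::drop_connection(c$id, 20 secs);
    if ( id != "" )
        add my_rules[id];
    }

event bro_done()
    {
    for ( id in my_rules )
        NetControl::remove_rule(id);
    }
```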
Similar to :bro:see:`NetControl::add_rule`, all the high-level functions also
@@ -264,7 +260,7 @@ Interacting with Rules
----------------------
The NetControl framework offers a number of different ways to interact with
Rules. Before a rule is applied by the framework, a number of different hooks
rules. Before a rule is applied by the framework, a number of different hooks
allow you to either modify or discard rules before they are added. Furthermore,
a number of events can be used to track the lifecycle of a rule while it is
being managed by the NetControl framework. It is also possible to query and
@@ -276,7 +272,7 @@ Rule Policy
The hook :bro:see:`NetControl::rule_policy` provides the mechanism for modifying
or discarding a rule before it is sent onwards to the backends. Hooks can be
thought of as multi-bodied functions and using them looks very similar to
handling events. In difference to events, they are processed immediately. Like
handling events. In contrast to events, they are processed immediately. Like
events, hooks can have priorities to sort the order in which they are applied.
Hooks can use the ``break`` keyword to show that processing should be aborted;
if any :bro:see:`NetControl::rule_policy` hook uses ``break``, the rule will be
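A ``rule_policy`` hook following these semantics might look like this (a sketch; the entity field names are assumed to follow the ``NetControl::Rule``/``NetControl::Entity`` records, and the address is an example):

```bro
hook NetControl::rule_policy(r: NetControl::Rule)
    {
    # Discard any rule that would touch our critical server; using
    # ``break`` aborts processing and the rule reaches no backend.
    if ( r$entity$ty == NetControl::ADDRESS && r$entity$ip == 203.0.113.17/32 )
        break;
    }
```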
@@ -315,7 +311,7 @@ this order:
* - :bro:see:`NetControl::rule_new`
- Signals that a new rule is created by the NetControl framework due to
:bro:see:`NetControl::add_rule`. At this point of time, the rule has not
:bro:see:`NetControl::add_rule`. At this point, the rule has not
yet been added to any backend.
* - :bro:see:`NetControl::rule_added`
@@ -328,15 +324,15 @@ this order:
* - :bro:see:`NetControl::rule_timeout`
- Signals that a rule timeout was reached. If the hardware does not support
automatic timeouts, the NetControl framework will automatically call
bro:see:`NetControl::remove_rule`.
:bro:see:`NetControl::remove_rule`.
* - :bro:see:`NetControl::rule_removed`
- Signals that a new rule has successfully been removed a backend.
* - :bro:see:`NetControl::rule_destroyed`
- This event is the pendant to :bro:see:`NetControl::rule_added`, and
reports that a rule is no longer be tracked by the NetControl framework.
This happens, for example, when a rule was removed from all backend.
reports that a rule is no longer being tracked by the NetControl framework.
This happens, for example, when a rule was removed from all backends.
* - :bro:see:`NetControl::rule_error`
- This event is raised whenever an error occurs during any rule operation.
@@ -385,7 +381,7 @@ NetControl also comes with a blocking function that uses an approach called
Catch and release is a blocking scheme that conserves valuable rule space in
your hardware. Instead of using long-lasting blocks, catch and release first
only installs blocks for short amount of times (typically a few minutes). After
only installs blocks for a short amount of time (typically a few minutes). After
these minutes pass, the block is lifted, but the IP address is added to a
watchlist and the IP address will immediately be re-blocked again (for a longer
amount of time), if it is seen reappearing in any traffic, no matter if the new
@@ -397,7 +393,7 @@ addresses that only are seen once for a short time are only blocked for a few
minutes, monitored for a while and then forgotten. IP addresses that keep
appearing will get re-blocked for longer amounts of time.
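The scheme above is exposed through a single call; a minimal sketch (the triggering event and the second argument, assumed to be an optional location string, are illustrative):

```bro
event connection_established(c: connection)
    {
    # Block the originator via catch and release: a short block first,
    # then escalating re-blocks if the address keeps reappearing.
    NetControl::drop_address_catch_release(c$id$orig_h, "scan detected");
    }
```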
In difference to the other high-level functions that we documented so far, the
In contrast to the other high-level functions that we documented so far, the
catch and release functionality is much more complex and adds a number of
different specialized functions to NetControl. The documentation for catch and
release is contained in the file
@@ -481,7 +477,7 @@ The plugins that currently ship with NetControl are:
plugin is contained in :doc:`/scripts/base/frameworks/netcontrol/plugins/acld.bro`.
* - PacketFilter plugin
- This plugin adds uses the Bro process-level packet filter (see
- This plugin uses the Bro process-level packet filter (see
:bro:see:`install_src_net_filter` and
:bro:see:`install_dst_net_filter`). Since the functionality of the
PacketFilter is limited, this plugin is mostly for demonstration purposes. The source of this
@@ -496,7 +492,7 @@ Activating plugins
In the API reference part of this document, we already used the debug plugin. To
use the plugin, we first had to instantiate it by calling
:bro:see:`NetControl::NetControl::create_debug` and then add it to NetControl by
:bro:see:`NetControl::create_debug` and then add it to NetControl by
calling :bro:see:`NetControl::activate`.
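Putting the two calls together, activation typically happens at NetControl initialization time (a sketch based on the debug-plugin usage described above; the priority value is illustrative):

```bro
event NetControl::init()
    {
    # Instantiate the debug plugin and register it with priority 0.
    local debug_plugin = NetControl::create_debug(T);
    NetControl::activate(debug_plugin, 0);
    }
```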
As we already hinted before, NetControl supports having several plugins that are
@@ -607,7 +603,7 @@ Writing plugins
In addition to using the plugins that are part of NetControl, you can write your
own plugins to interface with hard- or software that we currently do not support
out of the Box.
out of the box.
Creating your own plugin is easy; besides a bit of boilerplate, you only need to
create two functions: one that is called when a rule is added, and one that is


@@ -10,40 +10,53 @@ there's two suggested approaches: either install Bro using the same
installation prefix directory as before, or pick a new prefix and copy
local customizations over.
Regardless of which approach you choose, if you are using BroControl, then
before doing the upgrade you should stop all running Bro processes with the
"broctl stop" command. After the upgrade is complete then you will need
to run "broctl deploy".
In the following we summarize general guidelines for upgrading, see
the :ref:`release-notes` for version-specific information.
Reusing Previous Install Prefix
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you choose to configure and install Bro with the same prefix
directory as before, local customization and configuration to files in
``$prefix/share/bro/site`` and ``$prefix/etc`` won't be overwritten
(``$prefix`` indicating the root of where Bro was installed). Also, logs
generated at run-time won't be touched by the upgrade. Backing up local
changes before upgrading is still recommended.
directory as before, first stop all running Bro instances in your
cluster (if using BroControl, issue the "broctl stop" command on the
manager host). Next, make a backup of the Bro install prefix directory.
After upgrading, remember to check ``$prefix/share/bro/site`` and
``$prefix/etc`` for ``.example`` files, which indicate that the
distribution's version of the file differs from the local one, and therefore,
may include local changes. Review the differences and make adjustments
as necessary. Use the new version for differences that aren't a result of
a local change.
During the upgrade, any file in the install prefix may be
overwritten or removed, except for local customization of
files in the ``$prefix/share/bro/site`` and ``$prefix/etc``
directories (``$prefix`` indicating the root
of where Bro was installed). Also, logs generated at run-time
won't be touched by the upgrade.
After upgrading, remember to check the ``$prefix/share/bro/site`` and
``$prefix/etc`` directories for files with a file extension of ``.example``,
which indicate that the distribution's version of the file differs from the
local one, and therefore, may include local changes. Review the
differences and make adjustments as necessary. Use the new version
for differences that aren't a result of a local change.
Finally, if using BroControl, then issue the "broctl deploy" command. This
command will check for any policy script errors, install the new version
of Bro to all machines in your cluster, and then it will start Bro.
Using a New Install Prefix
~~~~~~~~~~~~~~~~~~~~~~~~~~
To install the newer version in a different prefix directory than before,
copy local customization and configuration files from ``$prefix/share/bro/site``
and ``$prefix/etc`` to the new location (``$prefix`` indicating the root of
where Bro was originally installed). Review the files for differences
first stop all running Bro instances in your cluster (if using BroControl,
then issue a "broctl stop" command on the manager host). Next,
install the new version of Bro in a new directory.
Next, copy local customization and configuration files
from the ``$prefix/share/bro/site`` and ``$prefix/etc`` directories to the
new location (``$prefix`` indicating the root of where Bro was originally
installed). Review the files for differences
before copying and make adjustments as necessary (use the new version for
differences that aren't a result of a local change). Of particular note,
the copied version of ``$prefix/etc/broctl.cfg`` is likely to need changes
to any settings that specify a pathname.
Finally, if using BroControl, then issue the "broctl deploy" command. This
command will check for any policy script errors, install the new version
of Bro to all machines in your cluster, and then it will start Bro.


@@ -31,7 +31,7 @@ before you begin:
* BIND8 library
* Libz
* Bash (for BroControl)
* Python (for BroControl)
* Python 2.6 or greater (for BroControl)
To build Bro from source, the following additional dependencies are required:
@@ -54,12 +54,18 @@ To install the required dependencies, you can use:
sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig zlib-devel
In order to build Bro on Fedora 26, install ``compat-openssl10-devel`` instead
of ``openssl-devel``.
* DEB/Debian-based Linux:
.. console::
sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev
In order to build Bro on Debian 9, install ``libssl1.0-dev`` instead
of ``libssl-dev``.
* FreeBSD:
Most required dependencies should come with a minimal FreeBSD install
@@ -69,9 +75,6 @@ To install the required dependencies, you can use:
sudo pkg install bash cmake swig bison python py27-sqlite3
Note that in older versions of FreeBSD, you might have to use the
"pkg_add -r" command instead of "pkg install".
For older versions of FreeBSD (especially FreeBSD 9.x), the system compiler
is not new enough to compile Bro. For these systems, you will have to install
a newer compiler using pkg; the ``clang34`` package should work.
@@ -89,19 +92,23 @@ To install the required dependencies, you can use:
* Mac OS X:
Compiling source code on Macs requires first installing Xcode_ (in older
versions of Xcode, you would then need to go through its
"Preferences..." -> "Downloads" menus to install the "Command Line Tools"
component).
Compiling source code on Macs requires first installing either Xcode_
or the "Command Line Tools" (which is a much smaller download). To check
if either is installed, run the ``xcode-select -p`` command. If you see
an error message, then neither is installed and you can then run
``xcode-select --install`` which will prompt you to either get Xcode (by
clicking "Get Xcode") or to install the command line tools (by
clicking "Install").
OS X comes with all required dependencies except for CMake_, SWIG_,
and OpenSSL. (OpenSSL used to be part of OS X versions 10.10
and older, for which it does not need to be installed manually. It
was removed in OS X 10.11). Distributions of these dependencies can
and OpenSSL (OpenSSL headers were removed in OS X 10.11, therefore OpenSSL
must be installed manually for OS X versions 10.11 or newer).
Distributions of these dependencies can
likely be obtained from your preferred Mac OS X package management
system (e.g. Homebrew_, MacPorts_, or Fink_). Specifically for
Homebrew, the ``cmake``, ``swig``, and ``openssl`` packages
provide the required dependencies.
provide the required dependencies. For MacPorts, the ``cmake``, ``swig``,
``swig-python``, and ``openssl`` packages provide the required dependencies.
Optional Dependencies


@@ -78,15 +78,6 @@ You can leave it running for now, but to stop this Bro instance you would do:
[BroControl] > stop
We also recommend to insert the following entry into the crontab of the user
running BroControl::
0-59/5 * * * * $PREFIX/bin/broctl cron
This will perform a number of regular housekeeping tasks, including
verifying that the process is still running (and restarting if not in
case of any abnormal termination).
Browsing Log Files
------------------
@@ -232,23 +223,25 @@ That's exactly what we want to do for the first notice. Add to ``local.bro``:
inside the module.
Then go into the BroControl shell to check whether the configuration change
is valid before installing it and then restarting the Bro instance:
is valid before installing it and then restarting the Bro instance. The
"deploy" command does all of this automatically:
.. console::
[BroControl] > check
bro scripts are ok.
[BroControl] > install
removing old policies in /usr/local/bro/spool/policy/site ... done.
removing old policies in /usr/local/bro/spool/policy/auto ... done.
creating policy directories ... done.
installing site policies ... done.
generating standalone-layout.bro ... done.
generating local-networks.bro ... done.
generating broctl-config.bro ... done.
updating nodes ... done.
[BroControl] > restart
[BroControl] > deploy
checking configurations ...
installing ...
removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/site ...
removing old policies in /usr/local/bro/spool/installed-scripts-do-not-touch/auto ...
creating policy directories ...
installing site policies ...
generating standalone-layout.bro ...
generating local-networks.bro ...
generating broctl-config.bro ...
generating broctl-config.sh ...
stopping ...
stopping bro ...
starting ...
starting bro ...
Now that the SSL notice is ignored, let's look at how to send an email
@@ -281,8 +274,8 @@ connection field is in the set of watched servers.
order to avoid ambiguity with the built-in address type's use of '.'
in IPv4 dotted decimal representations.
Remember, to finalize that configuration change perform the ``check``,
``install``, ``restart`` commands in that order inside the BroControl shell.
Remember, to finalize that configuration change perform the ``deploy``
command inside the BroControl shell.
Next Steps
----------
@@ -323,9 +316,8 @@ Analyzing live traffic from an interface is simple:
bro -i en0 <list of scripts to load>
``en0`` can be replaced by the interface of your choice and for the list of
scripts, you can just use "all" for now to perform all the default analysis
that's available.
``en0`` can be replaced by the interface of your choice. A selection
of common base scripts will be loaded by default.
Bro will output log files into the working directory.
@@ -333,22 +325,6 @@ Bro will output log files into the working directory.
capturing as an unprivileged user and checksum offloading are
particularly relevant at this point.
To use the site-specific ``local.bro`` script, just add it to the
command-line:
.. console::
bro -i en0 local
This will cause Bro to print a warning about lacking the
``Site::local_nets`` variable being configured. You can supply this
information at the command line like this (supply your "local" subnets
in place of the example subnets):
.. console::
bro -r mypackets.trace local "Site::local_nets += { 1.2.3.0/24, 5.6.7.0/24 }"
Reading Packet Capture (pcap) Files
-----------------------------------
@@ -380,7 +356,6 @@ script that we include as a suggested configuration:
bro -r mypackets.trace local
Telling Bro Which Scripts to Load
---------------------------------
@@ -388,33 +363,65 @@ A command-line invocation of Bro typically looks like:
.. console::
bro <options> <policies...>
bro <options> <scripts...>
Where the last arguments are the specific policy scripts that this Bro
instance will load. These arguments don't have to include the ``.bro``
file extension, and if the corresponding script resides under the default
installation path, ``$PREFIX/share/bro``, then it requires no path
qualification. Further, a directory of scripts can be specified as
an argument to be loaded as a "package" if it contains a ``__load__.bro``
script that defines the scripts that are part of the package.
file extension, and if the corresponding script resides in the default
search path, then it requires no path qualification. The following
directories are included in the default search path for Bro scripts::
./
<prefix>/share/bro/
<prefix>/share/bro/policy/
<prefix>/share/bro/site/
This example does all of the base analysis (primarily protocol
logging) and adds SSL certificate validation.
These prefix paths can be used to load scripts like this:
.. console::
bro -r mypackets.trace protocols/ssl/validate-certs
bro -r mypackets.trace frameworks/files/extract-all
This will load the
``<prefix>/share/bro/policy/frameworks/files/extract-all.bro`` script which will
cause Bro to extract all of the files it discovers in the PCAP.
.. note:: If one wants Bro to be able to load scripts that live outside the
default directories in Bro's installation root, the full path to the file(s)
must be provided. See the default search path by running ``bro --help``.
You might notice that a script you load from the command line uses the
``@load`` directive in the Bro language to declare dependence on other scripts.
This directive is similar to the ``#include`` of C/C++, except the semantics
are, "load this script if it hasn't already been loaded."
.. note:: If one wants Bro to be able to load scripts that live outside the
default directories in Bro's installation root, the ``BROPATH`` environment
variable will need to be extended to include all the directories that need
to be searched for scripts. See the default search path by doing
``bro --help``.
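Extending ``BROPATH`` before invoking Bro might look like this (the directory name is an example only):

```shell
# Prepend a custom script directory to Bro's search path; the default
# directories remain available for everything else.
export BROPATH="/opt/custom-bro-scripts:${BROPATH}"
echo "$BROPATH"
```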
Further, a directory of scripts can be specified as
an argument to be loaded as a "package" if it contains a ``__load__.bro``
script that defines the scripts that are part of the package.
Local site customization
------------------------
There is one script that is installed which is considered "local site
customization" and is not overwritten when upgrades take place. To use
the site-specific ``local.bro`` script, just add it to the command-line (can
also be loaded through scripts with @load):
.. console::
bro -i en0 local
This causes Bro to load a script that prints a warning about lacking the
``Site::local_nets`` variable being configured. You can supply this
information at the command line like this (supply your "local" subnets
in place of the example subnets):
.. console::
bro -r mypackets.trace local "Site::local_nets += { 1.2.3.0/24, 5.6.7.0/24 }"
When running with Broctl, this value is set by configuring the ``networks.cfg``
file.
Running Bro Without Installing
------------------------------


@@ -76,6 +76,10 @@ Files
+============================+=======================================+=================================+
| files.log | File analysis results | :bro:type:`Files::Info` |
+----------------------------+---------------------------------------+---------------------------------+
| ocsp.log | Online Certificate Status Protocol | :bro:type:`OCSP::Info` |
| | (OCSP). Only created if policy script | |
| | is loaded. | |
+----------------------------+---------------------------------------+---------------------------------+
| pe.log | Portable Executable (PE) | :bro:type:`PE::Info` |
+----------------------------+---------------------------------------+---------------------------------+
| x509.log | X.509 certificate info | :bro:type:`X509::Info` |


@@ -18,6 +18,10 @@ InstallPackageConfigFile(
${CMAKE_CURRENT_SOURCE_DIR}/site/local-manager.bro
${BRO_SCRIPT_INSTALL_PATH}/site
local-manager.bro)
InstallPackageConfigFile(
${CMAKE_CURRENT_SOURCE_DIR}/site/local-logger.bro
${BRO_SCRIPT_INSTALL_PATH}/site
local-logger.bro)
InstallPackageConfigFile(
${CMAKE_CURRENT_SOURCE_DIR}/site/local-proxy.bro
${BRO_SCRIPT_INSTALL_PATH}/site


@@ -14,6 +14,13 @@ export {
redef record Files::Info += {
## Local filename of extracted file.
extracted: string &optional &log;
## Set to true if the file being extracted was cut off
## so the whole file was not logged.
extracted_cutoff: bool &optional &log;
## The number of bytes extracted to disk.
extracted_size: count &optional &log;
};
redef record Files::AnalyzerArgs += {
@@ -58,9 +65,16 @@ function on_add(f: fa_file, args: Files::AnalyzerArgs)
f$info$extracted = args$extract_filename;
args$extract_filename = build_path_compressed(prefix, args$extract_filename);
f$info$extracted_cutoff = F;
mkdir(prefix);
}
event file_extraction_limit(f: fa_file, args: Files::AnalyzerArgs, limit: count, len: count) &priority=10
{
f$info$extracted_cutoff = T;
f$info$extracted_size = limit;
}
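The new fields are driven by the extraction limit; a sketch of adjusting it (the option name is assumed to be the FileExtract analyzer's redef'able ``default_limit``, and the value is an example):

```bro
# Raise the per-file extraction limit to 10 MB; files larger than this
# are cut off and logged with extracted_cutoff=T and extracted_size
# set to the limit that was hit.
redef FileExtract::default_limit = 10485760;
```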
event bro_init() &priority=10
{
Files::register_analyzer_add_callback(Files::ANALYZER_EXTRACT, on_add);


@@ -1 +1,2 @@
Support for X509 certificates with the file analysis framework.
Also supports parsing OCSP requests and responses.


@@ -10,23 +10,17 @@ export {
type Info: record {
## Current timestamp.
ts: time &log;
## File id of this certificate.
id: string &log;
## Basic information about the certificate.
certificate: X509::Certificate &log;
## The opaque wrapping the certificate. Mainly used
## for the verify operations.
handle: opaque of x509;
## All extensions that were encountered in the certificate.
extensions: vector of X509::Extension &default=vector();
## Subject alternative name extension of the certificate.
san: X509::SubjectAlternativeName &optional &log;
## Basic constraints extension of the certificate.
basic_constraints: X509::BasicConstraints &optional &log;
};
@@ -38,6 +32,24 @@ export {
event bro_init() &priority=5
{
Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509, $path="x509"]);
# We use MIME types internally to distinguish between user and CA certificates.
# The first certificate in a connection always gets tagged as user-cert, all
# following certificates get tagged as CA certificates. Certificates gotten via
# other means (e.g. identified from HTTP traffic when they are transfered in plain
# text) get tagged as application/pkix-cert.
Files::register_for_mime_type(Files::ANALYZER_X509, "application/x-x509-user-cert");
Files::register_for_mime_type(Files::ANALYZER_X509, "application/x-x509-ca-cert");
Files::register_for_mime_type(Files::ANALYZER_X509, "application/pkix-cert");
# Always calculate hashes. They are not necessary for base scripts
# but very useful for identification, and required for policy scripts
Files::register_for_mime_type(Files::ANALYZER_MD5, "application/x-x509-user-cert");
Files::register_for_mime_type(Files::ANALYZER_MD5, "application/x-x509-ca-cert");
Files::register_for_mime_type(Files::ANALYZER_MD5, "application/pkix-cert");
Files::register_for_mime_type(Files::ANALYZER_SHA1, "application/x-x509-user-cert");
Files::register_for_mime_type(Files::ANALYZER_SHA1, "application/x-x509-ca-cert");
Files::register_for_mime_type(Files::ANALYZER_SHA1, "application/pkix-cert");
}
redef record Files::Info += {
@@ -48,9 +60,6 @@ redef record Files::Info += {
event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certificate) &priority=5
{
if ( ! f$info?$mime_type )
f$info$mime_type = "application/pkix-cert";
f$info$x509 = [$ts=f$info$ts, $id=f$id, $certificate=cert, $handle=cert_ref];
}


@@ -14,6 +14,7 @@ module Broker;
export {
## A name used to identify this endpoint to peers.
##
## .. bro:see:: Broker::connect Broker::listen
const endpoint_name = "" &redef;


@@ -1,8 +1,14 @@
# Web Open Font Format 2
signature file-woff2 {
file-mime "application/font-woff2", 70
file-magic /^wOF2/
}
# Web Open Font Format
signature file-woff {
file-magic /^wOFF/
file-mime "application/font-woff", 70
file-magic /^wOFF/
}
# TrueType font


@@ -116,7 +116,7 @@ signature file-reg-utf16 {
# Microsoft Registry format (typically DESKTOP.DAT)
signature file-regf {
file-mime "application vnd.ms-regf", 49
file-mime "application/vnd.ms-regf", 49
file-magic /^\x72\x65\x67\x66/
}
@@ -292,6 +292,104 @@ signature file-skp {
file-mime "application/skp", 100
}
# Microsoft DirectDraw Surface
signature file-msdds {
file-mime "application/x-ms-dds", 100
file-magic /^DDS/
}
# bsdiff output
signature file-bsdiff {
file-mime "application/bsdiff", 100
file-magic /^BSDIFF/
}
# AV Update binary diffs (mostly kaspersky)
# inferred from traffic analysis
signature file-binarydiff {
file-mime "application/bindiff", 100
file-magic /^DIFF/
}
# Kaspersky Database
# inferred from traffic analysis
signature file-kaspdb {
file-mime "application/x-kaspavdb", 100
file-magic /^KLZF/
}
# Kaspersky AV Database diff
# inferred from traffic analysis
signature file-kaspdbdif {
file-mime "application/x-kaspavupdate", 100
file-magic /^KLD2/
}
# MSSQL Backups
signature file-mssqlbak {
file-mime "application/mssql-bak", 100
file-magic /^MSSQLBAK/
}
# Microsoft Tape Format
# MSSQL transaction log
signature file-ms-tf {
file-mime "application/mtf", 100
file-magic /^TAPE/
}
# Binary property list (Apple)
signature file-bplist {
file-mime "application/bplist", 100
file-magic /^bplist0?/
}
# Microsoft Compiled HTML Help File
signature file-mshelp {
file-mime "application/mshelp", 100
file-magic /^ITSF/
}
# Blizzard game file MPQ Format
signature file-mpqgame {
file-mime "application/x-game-mpq", 100
file-magic /^MPQ\x1a/
}
# Blizzard CASC Format game file
signature file-blizgame {
file-mime "application/x-blizgame", 100
file-magic /^BLTE/
}
# iOS Mapkit tiles
# inferred from traffic analysis
signature file-mapkit-tile {
file-mime "application/map-tile", 100
file-magic /^VMP4/
}
# Google Chrome Extension file
signature file-chrome-extension {
file-mime "application/chrome-ext", 100
file-magic /^Cr24/
}
# Google Chrome Extension Update Delta
# not 100% sure about this identification
# this may be google chrome updates, not extensions
signature file-chrome-extension-update {
file-mime "application/chrome-ext-upd", 70
file-magic /^CrOD/
}
# Microsoft Message Queueing
# .net related
signature file-msqm {
file-mime "application/msqm", 100
file-magic /^MSQM/
}
signature file-elf-object {
file-mime "application/x-object", 50
file-magic /\x7fELF[\x01\x02](\x01.{10}\x01\x00|\x02.{10}\x00\x01)/
@@ -310,4 +408,9 @@ signature file-elf-sharedlib {
signature file-elf-coredump {
file-mime "application/x-coredump", 50
file-magic /\x7fELF[\x01\x02](\x01.{10}\x04\x00|\x02.{10}\x00\x04)/
}
}
signature file-vim-tmp {
file-mime "application/x-vim-tmp", 100
file-magic /^b0VIM/
}


@@ -164,3 +164,9 @@ signature file-award-bios-logo {
file-mime "image/x-award-bioslogo", 50
file-magic /^\x11[\x06\x09]/
}
# WebP, lossy image format from Google
signature file-webp {
file-mime "image/webp", 70
file-magic /^RIFF.{4}WEBP/
}


@@ -18,7 +18,7 @@ signature file-docx {
}
signature file-xlsx {
file-magic /^PK\x03\x04.{26}(\[Content_Types\]\.xml|_rels\x2f\.rels|xl\2f).*PK\x03\x04.{26}xl\x2f/
file-magic /^PK\x03\x04.{26}(\[Content_Types\]\.xml|_rels\x2f\.rels|xl\x2f).*PK\x03\x04.{26}xl\x2f/
file-mime "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", 80
}


@@ -18,4 +18,33 @@ export {
## String to use for an unset &optional field.
const unset_field = Input::unset_field &redef;
## Fail on invalid lines. If set to false, the ascii
## input reader will jump over invalid lines, reporting
## warnings in reporter.log. If set to true, errors in
## input lines will be handled as fatal errors for the
## reader thread; reading will abort immediately and
## an error will be logged to reporter.log.
## Individual readers can use a different value using
## the $config table.
## fail_on_invalid_lines = T was the default behavior
## until Bro 2.6.
const fail_on_invalid_lines = F &redef;
## Fail on file read problems. If set to true, the ascii
## input reader will fail when encountering any problems
## while reading a file different from invalid lines.
## Examples of such problems are permission problems, or
## missing files.
## When set to false, these problems will be ignored. This
## has an especially big effect for the REREAD mode, which will
## seamlessly recover from read errors when a file is
## only temporarily inaccessible. For MANUAL or STREAM files,
## errors will most likely still be fatal since no automatic
## re-reading of the file is attempted.
## Individual readers can use a different value using
## the $config table.
## fail_on_file_problem = T was the default behavior
## until Bro 2.6.
const fail_on_file_problem = F &redef;
}
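A hedged sketch of how these two new options might be used together; the file path, `Host` record, and input name below are illustrative examples, not part of this commit:

```bro
# Sketch only: "/var/db/hosts.tsv" and the Host record are hypothetical.
# Globally keep the lenient post-2.6 defaults, shown here explicitly.
redef InputAscii::fail_on_invalid_lines = F;
redef InputAscii::fail_on_file_problem = F;

type Host: record {
	ip: addr;
};

global hosts: set[addr];

event bro_init()
	{
	# An individual reader can opt back into strict behavior via $config.
	Input::add_table([$source="/var/db/hosts.tsv", $name="hosts",
	                  $idx=Host, $destination=hosts, $mode=Input::REREAD,
	                  $config=table(["fail_on_invalid_lines"] = "T")]);
	}
```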


@ -12,7 +12,7 @@ redef record Item += {
first_dispatch: bool &default=T;
};
# If this process is not a manager process, we don't want the full metadata
# If this process is not a manager process, we don't want the full metadata.
@if ( Cluster::local_node_type() != Cluster::MANAGER )
redef have_full_data = F;
@endif
@ -20,7 +20,7 @@ redef have_full_data = F;
# Internal event for cluster data distribution.
global cluster_new_item: event(item: Item);
# Primary intelligence management is done by the manager:
# Primary intelligence management is done by the manager.
# The manager informs the workers about new items and item removal.
redef Cluster::manager2worker_events += /^Intel::(cluster_new_item|purge_item)$/;
# A worker queries the manager to insert, remove or indicate the match of an item.


@ -1,5 +1,5 @@
##! File analysis framework integration for the intelligence framework. This
##! script manages file information in intelligence framework datastructures.
##! script manages file information in intelligence framework data structures.
@load ./main


@ -1,6 +1,7 @@
##! The intelligence framework provides a way to store and query intelligence data
##! (e.g. IP addresses, URLs and hashes). The intelligence items can be associated
##! with metadata to allow informed decisions about matching and handling.
##! The intelligence framework provides a way to store and query intelligence
##! data (e.g. IP addresses, URLs and hashes). The intelligence items can be
##! associated with metadata to allow informed decisions about matching and
##! handling.
@load base/frameworks/notice
@ -406,7 +407,11 @@ function insert(item: Item)
if ( host !in data_store$host_data )
data_store$host_data[host] = table();
else
{
is_new = F;
# Reset expiration timer.
data_store$host_data[host] = data_store$host_data[host];
}
meta_tbl = data_store$host_data[host];
}
@ -421,7 +426,11 @@ function insert(item: Item)
if ( !check_subnet(net, data_store$subnet_data) )
data_store$subnet_data[net] = table();
else
{
is_new = F;
# Reset expiration timer.
data_store$subnet_data[net] = data_store$subnet_data[net];
}
meta_tbl = data_store$subnet_data[net];
}
@ -435,7 +444,12 @@ function insert(item: Item)
if ( [lower_indicator, item$indicator_type] !in data_store$string_data )
data_store$string_data[lower_indicator, item$indicator_type] = table();
else
{
is_new = F;
# Reset expiration timer.
data_store$string_data[lower_indicator, item$indicator_type] =
data_store$string_data[lower_indicator, item$indicator_type];
}
meta_tbl = data_store$string_data[lower_indicator, item$indicator_type];
}
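The self-assignment idiom above relies on write-expiration semantics: writing to a table entry restarts its expiration timer even when the value is unchanged. A minimal illustration, with a hypothetical table and interval:

```bro
# Hypothetical example: entries expire 10 minutes after their last write.
global last_seen: table[addr] of count &write_expire=10min;

function touch(a: addr)
	{
	if ( a in last_seen )
		# Re-assigning the existing value restarts the 10min timer,
		# mirroring the "Reset expiration timer" assignments in insert().
		last_seen[a] = last_seen[a];
	else
		last_seen[a] = 1;
	}
```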


@ -79,7 +79,7 @@ export {
## Information passed into rotation callback functions.
type RotationInfo: record {
writer: Writer; ##< The :bro:type:`Log::Writer` being used.
writer: Writer; ##< The log writer being used.
fname: string; ##< Full name of the rotated file.
path: string; ##< Original path value.
open: time; ##< Time when opened.
@ -131,7 +131,7 @@ export {
## Default log extension function in the case that you would like to
## apply the same extensions to all logs. The function *must* return
## a record with all of the fields to be included in the log. The
## default function included here does not return a value to indicate
## default function included here does not return a value, which indicates
## that no extensions are added.
const Log::default_ext_func: function(path: string): any =
function(path: string) { } &redef;
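A redef along these lines attaches the same extra columns to every log; the record and function names are illustrative, not from this commit:

```bro
# Sketch: ExampleExtension and example_ext are hypothetical names.
type ExampleExtension: record {
	write_ts: time &log;    # when the line was written
	stream:   string &log;  # path of the originating log stream
};

function example_ext(path: string): ExampleExtension
	{
	return ExampleExtension($write_ts=network_time(), $stream=path);
	}

redef Log::default_ext_func = example_ext;
```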
@ -348,7 +348,7 @@ export {
## to handle, or one of the stream's filters has an invalid
## ``path_func``.
##
## .. bro:see: Log::enable_stream Log::disable_stream
## .. bro:see:: Log::enable_stream Log::disable_stream
global write: function(id: ID, columns: any) : bool;
## Sets the buffering status for all the writers of a given logging stream.


@ -26,6 +26,13 @@ export {
## This option is also available as a per-filter ``$config`` option.
const use_json = F &redef;
## Define the gzip level to compress the logs. If 0, then no gzip
## compression is performed. Enabling compression also changes
## the log file name extension to include ".gz".
##
## This option is also available as a per-filter ``$config`` option.
const gzip_level = 0 &redef;
## Format of timestamps when writing out JSON. By default, the JSON
## formatter will use double values for timestamps which represent the
## number of seconds from the UNIX epoch.
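A minimal sketch of both the global and the per-filter form; the stream and filter names are examples only:

```bro
# Compress all ASCII logs at gzip level 9; files gain a ".gz" extension.
redef LogAscii::gzip_level = 9;

event bro_init()
	{
	# Alternatively, override the level for a single filter via $config
	# (illustrative: assumes the default filter on the conn stream).
	local f = Log::get_filter(Conn::LOG, "default");
	f$config = table(["gzip_level"] = "5");
	Log::add_filter(Conn::LOG, f);
	}
```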


@ -10,39 +10,39 @@ export {
redef enum Log::ID += { CATCH_RELEASE };
## Thhis record is used is used for storing information about current blocks that are
## This record is used for storing information about current blocks that are
## part of catch and release.
type BlockInfo: record {
## Absolute time indicating until when a block is inserted using NetControl
## Absolute time indicating until when a block is inserted using NetControl.
block_until: time &optional;
## Absolute time indicating until when an IP address is watched to reblock it
## Absolute time indicating until when an IP address is watched to reblock it.
watch_until: time;
## Number of times an IP address was reblocked
## Number of times an IP address was reblocked.
num_reblocked: count &default=0;
## Number indicating at which catch and release interval we currently are
## Number indicating at which catch and release interval we currently are.
current_interval: count;
## ID of the inserted block, if any.
current_block_id: string;
## User specified string
## User specified string.
location: string &optional;
};
## The enum that contains the different kinds of messages that are logged by
## catch and release
## catch and release.
type CatchReleaseActions: enum {
## Log lines marked with info are purely informational; no action was taken
## Log lines marked with info are purely informational; no action was taken.
INFO,
## A rule for the specified IP address already existed in NetControl (outside
## of catch-and-release). Catch and release did not add a new rule, but is now
## watching the IP address and will add a new rule after the current rule expired.
## watching the IP address and will add a new rule after the current rule expires.
ADDED,
## A drop was requested by catch and release
## A drop was requested by catch and release.
DROP,
## A address was succesfully blocked by catch and release
## An address was successfully blocked by catch and release.
DROPPED,
## An address was unblocked after the timeout expired
## An address was unblocked after the timeout expired.
UNBLOCK,
## An address was forgotten because it did not reappear within the `watch_until` interval
## An address was forgotten because it did not reappear within the `watch_until` interval.
FORGOTTEN,
## A watched IP address was seen again; catch and release will re-block it.
SEEN_AGAIN
@ -52,7 +52,7 @@ export {
type CatchReleaseInfo: record {
## The absolute time indicating when the action for this log-line occured.
ts: time &log;
## The rule id that this log lone refers to.
## The rule id that this log line refers to.
rule_id: string &log &optional;
## The IP address that this line refers to.
ip: addr &log;
@ -85,7 +85,7 @@ export {
##
## a: The address to be dropped.
##
## t: How long to drop it, with 0 being indefinitly.
## t: How long to drop it, with 0 being indefinitely.
##
## location: An optional string describing where the drop was triggered.
##
@ -101,17 +101,17 @@ export {
##
## a: The address to be unblocked.
##
## reason: A reason for the unblock
## reason: A reason for the unblock.
##
## Returns: True if the address was unblocked.
global unblock_address_catch_release: function(a: addr, reason: string &default="") : bool;
## This function can be called to notify the cach and release script that activity by
## This function can be called to notify the catch and release script that activity by
## an IP address was seen. If the respective IP address is currently monitored by catch and
## release and not blocked, the block will be re-instated. See the documentation of watch_new_connection
## release and not blocked, the block will be reinstated. See the documentation of watch_new_connection
## which events the catch and release functionality usually monitors for activity.
##
## a: The address that was seen and should be re-dropped if it is being watched
## a: The address that was seen and should be re-dropped if it is being watched.
global catch_release_seen: function(a: addr);
## Get the :bro:see:`NetControl::BlockInfo` record for an address currently blocked by catch and release.
@ -144,7 +144,7 @@ export {
## should have been blocked.
const catch_release_warn_blocked_ip_encountered = F &redef;
## Time intervals for which a subsequent drops of the same IP take
## Time intervals for which subsequent drops of the same IP take
## effect.
const catch_release_intervals: vector of interval = vector(10min, 1hr, 24hrs, 7days) &redef;
@ -160,7 +160,7 @@ export {
global catch_release_encountered: event(a: addr);
}
# set that is used to only send seen notifications to the master every ~30 seconds.
# Set that is used to only send seen notifications to the master every ~30 seconds.
global catch_release_recently_notified: set[addr] &create_expire=30secs;
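A usage sketch, assuming `NetControl::drop_address_catch_release` is the entry point described above; the address and intervals are examples:

```bro
# Example escalation schedule: 5min, then 30min, then 6hrs, then 7days.
redef NetControl::catch_release_intervals = vector(5min, 30min, 6hrs, 7days);

event bro_init()
	{
	# Block an example scanner; if it is seen again after the block
	# expires, catch and release re-blocks it for the next interval.
	NetControl::drop_address_catch_release(192.0.2.1, "example scanner");
	}
```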
event bro_init() &priority=5


@ -23,7 +23,7 @@ redef Cluster::manager2worker_events += /NetControl::rule_(added|removed|timeout
function activate(p: PluginState, priority: int)
{
# we only run the activate function on the manager.
# We only run the activate function on the manager.
if ( Cluster::local_node_type() != Cluster::MANAGER )
return;
@ -38,8 +38,8 @@ function add_rule(r: Rule) : string
return add_rule_impl(r);
else
{
# we sync rule entities accross the cluster, so we
# acually can test if the rule already exists. If yes,
# We sync rule entities across the cluster, so we
# actually can test if the rule already exists. If yes,
# refuse insertion already at the node.
if ( [r$entity, r$ty] in rule_entities )


@ -11,34 +11,34 @@ export {
##
## a: The address to be dropped.
##
## t: How long to drop it, with 0 being indefinitly.
## t: How long to drop it, with 0 being indefinitely.
##
## location: An optional string describing where the drop was triggered.
##
## Returns: The id of the inserted rule on succes and zero on failure.
## Returns: The id of the inserted rule on success and zero on failure.
global drop_address: function(a: addr, t: interval, location: string &default="") : string;
## Stops all packets involving an connection address from being forwarded.
## Stops all packets involving a connection address from being forwarded.
##
## c: The connection to be dropped.
##
## t: How long to drop it, with 0 being indefinitly.
## t: How long to drop it, with 0 being indefinitely.
##
## location: An optional string describing where the drop was triggered.
##
## Returns: The id of the inserted rule on succes and zero on failure.
## Returns: The id of the inserted rule on success and zero on failure.
global drop_connection: function(c: conn_id, t: interval, location: string &default="") : string;
type DropInfo: record {
## Time at which the recorded activity occurred.
ts: time &log;
## ID of the rule; unique during each Bro run
## ID of the rule; unique during each Bro run.
rule_id: string &log;
orig_h: addr &log; ##< The originator's IP address.
orig_p: port &log &optional; ##< The originator's port number.
resp_h: addr &log &optional; ##< The responder's IP address.
resp_p: port &log &optional; ##< The responder's port number.
## Expiry time of the shunt
## Expiry time of the shunt.
expire: interval &log;
## Location where the underlying action was triggered.
location: string &log &optional;
@ -47,7 +47,7 @@ export {
## Hook that allows the modification of rules passed to drop_* before they
## are passed on. If one of the hooks uses break, the rule is ignored.
##
## r: The rule to be added
## r: The rule to be added.
global NetControl::drop_rule_policy: hook(r: Rule);
## Event that can be handled to access the :bro:type:`NetControl::ShuntInfo`


@ -7,7 +7,7 @@
##! restrictions on entities, such as specific connections or IP addresses.
##!
##! This framework has two APIs: a high-level and low-level. The high-level API
##! provides convinience functions for a set of common operations. The
##! provides convenience functions for a set of common operations. The
##! low-level API provides full flexibility.
module NetControl;
@ -25,7 +25,7 @@ export {
## Activates a plugin.
##
## p: The plugin to acticate.
## p: The plugin to activate.
##
## priority: The higher the priority, the earlier this plugin will be checked
## whether it supports an operation, relative to other plugins.
@ -48,37 +48,37 @@ export {
## Allows all traffic involving a specific IP address to be forwarded.
##
## a: The address to be whitelistet.
## a: The address to be whitelisted.
##
## t: How long to whitelist it, with 0 being indefinitly.
## t: How long to whitelist it, with 0 being indefinitely.
##
## location: An optional string describing where the whitelist was triggered.
##
## Returns: The id of the inserted rule on succes and zero on failure.
## Returns: The id of the inserted rule on success and zero on failure.
global whitelist_address: function(a: addr, t: interval, location: string &default="") : string;
## Allows all traffic involving a specific IP subnet to be forwarded.
##
## s: The subnet to be whitelistet.
## s: The subnet to be whitelisted.
##
## t: How long to whitelist it, with 0 being indefinitly.
## t: How long to whitelist it, with 0 being indefinitely.
##
## location: An optional string describing where the whitelist was triggered.
##
## Returns: The id of the inserted rule on succes and zero on failure.
## Returns: The id of the inserted rule on success and zero on failure.
global whitelist_subnet: function(s: subnet, t: interval, location: string &default="") : string;
## Redirects an uni-directional flow to another port.
## Redirects a uni-directional flow to another port.
##
## f: The flow to redirect.
##
## out_port: Port to redirect the flow to
## out_port: Port to redirect the flow to.
##
## t: How long to leave the redirect in place, with 0 being indefinitly.
## t: How long to leave the redirect in place, with 0 being indefinitely.
##
## location: An optional string describing where the redirect was triggered.
##
## Returns: The id of the inserted rule on succes and zero on failure.
## Returns: The id of the inserted rule on success and zero on failure.
global redirect_flow: function(f: flow_id, out_port: count, t: interval, location: string &default="") : string;
## Quarantines a host. This requires a special quarantine server, which runs an HTTP server explaining
@ -87,13 +87,13 @@ export {
## instead. Only HTTP communication from the infected host to the quarantine host is allowed. All other network communication
## is blocked.
##
## infected: the host to quarantine
## infected: the host to quarantine.
##
## dns: the network dns server
## dns: the network dns server.
##
## quarantine: the quarantine server running a dns and a web server
## quarantine: the quarantine server running a dns and a web server.
##
## t: how long to leave the quarantine in place
## t: how long to leave the quarantine in place.
##
## Returns: Vector of inserted rules on success, empty list on failure.
global quarantine_host: function(infected: addr, dns: addr, quarantine: addr, t: interval, location: string &default="") : vector of string;
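A hedged end-to-end sketch of the high-level API; the debug backend and addresses are placeholders for a real deployment:

```bro
event NetControl::init()
	{
	# The debug plugin only logs actions; a real deployment would
	# activate an OpenFlow, Broker, or acld backend instead.
	NetControl::activate(NetControl::create_debug(T), 0);
	}

event bro_init()
	{
	NetControl::drop_address(192.0.2.1, 1hr, "example block");
	NetControl::whitelist_address(192.0.2.2, 0sec, "example whitelist");
	}
```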
@ -111,7 +111,7 @@ export {
##
## r: The rule to install.
##
## Returns: If succesful, returns an ID string unique to the rule that can
## Returns: If successful, returns an ID string unique to the rule that can
## later be used to refer to it. If unsuccessful, returns an empty
## string. The ID is also assigned to ``r$id``. Note that
## "successful" means "a plugin knew how to handle the rule", it
@ -126,19 +126,19 @@ export {
##
## reason: Optional string argument giving information on why the rule was removed.
##
## Returns: True if succesful, the relevant plugin indicated that it knew
## Returns: True if successful, the relevant plugin indicated that it knew
## how to handle the removal. Note that again "success" means the
## plugin accepted the removal. They might still fail to put it
## plugin accepted the removal. It might still fail to put it
## into effect, as that might happen asynchronously and thus go
## wrong at that point.
global remove_rule: function(id: string, reason: string &default="") : bool;
## Deletes a rule without removing in from the backends to which it has been
## added before. This mean that no messages will be sent to the switches to which
## Deletes a rule without removing it from the backends to which it has been
## added before. This means that no messages will be sent to the switches to which
## the rule has been added; if it is not removed from them by a separate mechanism,
## it will stay installed and not be removed later.
##
## id: The rule to delete, specified as the ID returned by :bro:see:`add_rule` .
## id: The rule to delete, specified as the ID returned by :bro:see:`NetControl::add_rule`.
##
## reason: Optional string argument giving information on why the rule was deleted.
##
@ -152,9 +152,9 @@ export {
## the worker, the internal rule variables (starting with _) will not reflect the
## current state.
##
## ip: The ip address to search for
## ip: The ip address to search for.
##
## Returns: vector of all rules affecting the IP address
## Returns: vector of all rules affecting the IP address.
global find_rules_addr: function(ip: addr) : vector of Rule;
## Searches all rules affecting a certain subnet.
@ -171,9 +171,9 @@ export {
## the worker, the internal rule variables (starting with _) will not reflect the
## current state.
##
## sn: The subnet to search for
## sn: The subnet to search for.
##
## Returns: vector of all rules affecting the subnet
## Returns: vector of all rules affecting the subnet.
global find_rules_subnet: function(sn: subnet) : vector of Rule;
###### Asynchronous feedback on rules.
@ -201,7 +201,7 @@ export {
global rule_exists: event(r: Rule, p: PluginState, msg: string &default="");
## Reports that a plugin reports a rule was removed due to a
## remove: function() vall.
## remove_rule function call.
##
## r: The rule now removed.
##
@ -234,9 +234,9 @@ export {
## This event is raised when a new rule is created by the NetControl framework
## due to a call to add_rule. From this moment, until the rule_destroyed event
## is raised, the rule is tracked internally by the NetControl framewory.
## is raised, the rule is tracked internally by the NetControl framework.
##
## Note that this event does not mean that a rule was succesfully added by
## Note that this event does not mean that a rule was successfully added by
## any backend; it just means that the rule has been accepted and addition
## to the specified backend is queued. To get information when rules are actually
## installed by the hardware, use the rule_added, rule_exists, rule_removed, rule_timeout
@ -248,15 +248,15 @@ export {
## was removed by all plugins to which it was added, by the fact that it timed out
## or due to rule errors.
##
## To get the cause or a rule remove, hook the rule_removed, rule_timeout and
## rule_error calls.
## To get the cause of a rule remove, catch the rule_removed, rule_timeout and
## rule_error events.
global rule_destroyed: event(r: Rule);
## Hook that allows the modification of rules passed to add_rule before they
## are passed on to the plugins. If one of the hooks uses break, the rule is
## ignored and not passed on to any plugin.
##
## r: The rule to be added
## r: The rule to be added.
global NetControl::rule_policy: hook(r: Rule);
##### Plugin functions
@ -279,19 +279,19 @@ export {
## State of an entry in the NetControl log.
type InfoState: enum {
REQUESTED, ##< The request to add/remove a rule was sent to the respective backend
SUCCEEDED, ##< A rule was succesfully added by a backend
EXISTS, ##< A backend reported that a rule was already existing
FAILED, ##< A rule addition failed
REMOVED, ##< A rule was succesfully removed by a backend
TIMEOUT, ##< A rule timeout was triggered by the NetControl framework or a backend
REQUESTED, ##< The request to add/remove a rule was sent to the respective backend.
SUCCEEDED, ##< A rule was successfully added by a backend.
EXISTS, ##< A backend reported that a rule was already existing.
FAILED, ##< A rule addition failed.
REMOVED, ##< A rule was successfully removed by a backend.
TIMEOUT, ##< A rule timeout was triggered by the NetControl framework or a backend.
};
## The record type defining the column fields of the NetControl log.
type Info: record {
## Time at which the recorded activity occurred.
ts: time &log;
## ID of the rule; unique during each Bro run
## ID of the rule; unique during each Bro run.
rule_id: string &log &optional;
## Type of the log entry.
category: InfoCategory &log &optional;
@ -311,9 +311,9 @@ export {
mod: string &log &optional;
## String with an additional message.
msg: string &log &optional;
## Number describing the priority of the log entry
## Number describing the priority of the log entry.
priority: int &log &optional;
## Expiry time of the log entry
## Expiry time of the log entry.
expire: interval &log &optional;
## Location where the underlying action was triggered.
location: string &log &optional;
@ -333,7 +333,7 @@ redef record Rule += {
_active_plugin_ids: set[count] &default=count_set();
## Internally set to plugins where the rule should not be removed upon timeout.
_no_expire_plugins: set[count] &default=count_set();
## Track if the rule was added succesfully by all responsible plugins.
## Track if the rule was added successfully by all responsible plugins.
_added: bool &default=F;
};


@ -9,7 +9,7 @@ export {
##
## Individual plugins commonly extend this record to suit their needs.
type PluginState: record {
## Table for a plugin to store custom, instance-specfific state.
## Table for a plugin to store custom, instance-specific state.
config: table[string] of string &default=table();
## Unique plugin identifier -- used for backlookup of plugins from Rules. Set internally.
@ -18,14 +18,14 @@ export {
## Set internally.
_priority: int &default=+0;
## Set internally. Signifies if the plugin has returned that it has activated succesfully
## Set internally. Signifies if the plugin has returned that it has activated successfully.
_activated: bool &default=F;
};
## Definition of a plugin.
##
## Generally a plugin needs to implement only what it can support. By
## returning failure, it indicates that it can't support something and the
## returning failure, it indicates that it can't support something and
## the framework will then try another plugin, if available; or inform the caller
## that the operation failed. If a function isn't implemented by a plugin,
## that's considered an implicit failure to support the operation.
@ -33,7 +33,7 @@ export {
## If a plugin accepts a rule operation, it *must* generate one of the reporting
## events ``rule_{added,remove,error}`` to signal if it indeed worked out;
## this is separate from accepting the operation because often a plugin
## will only know later (i.e., asynchrously) if that was an error for
## will only know later (i.e., asynchronously) if that was an error for
## something it thought it could handle.
type Plugin: record {
## Returns a descriptive name of the plugin instance, suitable for use in logging
@ -64,7 +64,7 @@ export {
add_rule: function(state: PluginState, r: Rule) : bool &optional;
## Implements the remove_rule() operation. This will only be called for
## rules that the plugins has previously accepted with add_rule(). The
## rules that the plugin has previously accepted with add_rule(). The
## ``id`` field will match that of the add_rule() call. Generally,
## a plugin that accepts an add_rule() should also accept the
## remove_rule().


@ -1 +1 @@
Plugins for the NetControl framework
Plugins for the NetControl framework.


@ -17,24 +17,24 @@ export {
};
type AcldConfig: record {
## The acld topic used to send events to
## The acld topic to send events to.
acld_topic: string;
## Broker host to connect to
## Broker host to connect to.
acld_host: addr;
## Broker port to connect to
## Broker port to connect to.
acld_port: port;
## Do we accept rules for the monitor path? Default false
## Do we accept rules for the monitor path? Default false.
monitor: bool &default=F;
## Do we accept rules for the forward path? Default true
## Do we accept rules for the forward path? Default true.
forward: bool &default=T;
## Predicate that is called on rule insertion or removal.
##
## p: Current plugin state
## p: Current plugin state.
##
## r: The rule to be inserted or removed
## r: The rule to be inserted or removed.
##
## Returns: T if the rule can be handled by the current backend, F otherwhise
## Returns: T if the rule can be handled by the current backend, F otherwise.
check_pred: function(p: PluginState, r: Rule): bool &optional;
};
@ -43,27 +43,27 @@ export {
redef record PluginState += {
acld_config: AcldConfig &optional;
## The ID of this acld instance - for the mapping to PluginStates
## The ID of this acld instance - for the mapping to PluginStates.
acld_id: count &optional;
};
## Hook that is called after a rule is converted to an acld rule.
## The hook may modify the rule before it is sent to acld.
## Setting the acld command to F will cause the rule to be rejected
## by the plugin
## by the plugin.
##
## p: Current plugin state
## p: Current plugin state.
##
## r: The rule to be inserted or removed
## r: The rule to be inserted or removed.
##
## ar: The acld rule to be inserted or removed
## ar: The acld rule to be inserted or removed.
global NetControl::acld_rule_policy: hook(p: PluginState, r: Rule, ar: AclRule);
## Events that are sent from us to Broker
## Events that are sent from us to Broker.
global acld_add_rule: event(id: count, r: Rule, ar: AclRule);
global acld_remove_rule: event(id: count, r: Rule, ar: AclRule);
## Events that are sent from Broker to us
## Events that are sent from Broker to us.
global acld_rule_added: event(id: count, r: Rule, msg: string);
global acld_rule_removed: event(id: count, r: Rule, msg: string);
global acld_rule_exists: event(id: count, r: Rule, msg: string);


@ -1,4 +1,4 @@
##! Broker plugin for the netcontrol framework. Sends the raw data structures
##! Broker plugin for the NetControl framework. Sends the raw data structures
##! used in NetControl on to Broker to allow for easy handling, e.g., of
##! command-line scripts.
@ -13,25 +13,25 @@ module NetControl;
export {
## This record specifies the configuration that is passed to :bro:see:`NetControl::create_broker`.
type BrokerConfig: record {
## The broker topic used to send events to
## The broker topic to send events to.
topic: string &optional;
## Broker host to connect to
## Broker host to connect to.
host: addr &optional;
## Broker port to connect to
## Broker port to connect to.
bport: port &optional;
## Do we accept rules for the monitor path? Default true
## Do we accept rules for the monitor path? Default true.
monitor: bool &default=T;
## Do we accept rules for the forward path? Default true
## Do we accept rules for the forward path? Default true.
forward: bool &default=T;
## Predicate that is called on rule insertion or removal.
##
## p: Current plugin state
## p: Current plugin state.
##
## r: The rule to be inserted or removed
## r: The rule to be inserted or removed.
##
## Returns: T if the rule can be handled by the current backend, F otherwhise
## Returns: T if the rule can be handled by the current backend, F otherwise.
check_pred: function(p: PluginState, r: Rule): bool &optional;
};
@ -39,9 +39,9 @@ export {
global create_broker: function(config: BrokerConfig, can_expire: bool) : PluginState;
redef record PluginState += {
## OpenFlow controller for NetControl Broker plugin
## OpenFlow controller for NetControl Broker plugin.
broker_config: BrokerConfig &optional;
## The ID of this broker instance - for the mapping to PluginStates
## The ID of this broker instance - for the mapping to PluginStates.
broker_id: count &optional;
};
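Given `create_broker` above, instantiating the plugin might look as follows; the topic, host, and port are placeholders that must match your Broker endpoint:

```bro
event NetControl::init()
	{
	# Placeholder values; adjust topic/host/bport for your setup.
	local bc = NetControl::BrokerConfig($topic="bro/event/netcontrol",
	                                    $host=127.0.0.1, $bport=9999/tcp);
	NetControl::activate(NetControl::create_broker(bc, T), 0);
	}
```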


@ -9,11 +9,11 @@ module NetControl;
export {
## This record specifies the configuration that is passed to :bro:see:`NetControl::create_openflow`.
type OfConfig: record {
monitor: bool &default=T; ##< accept rules that target the monitor path
forward: bool &default=T; ##< accept rules that target the forward path
idle_timeout: count &default=0; ##< default OpenFlow idle timeout
table_id: count &optional; ##< default OpenFlow table ID.
priority_offset: int &default=+0; ##< add this to all rule priorities. Can be useful if you want the openflow priorities be offset from the netcontrol priorities without having to write a filter function.
monitor: bool &default=T; ##< Accept rules that target the monitor path.
forward: bool &default=T; ##< Accept rules that target the forward path.
idle_timeout: count &default=0; ##< Default OpenFlow idle timeout.
table_id: count &optional; ##< Default OpenFlow table ID.
priority_offset: int &default=+0; ##< Add this to all rule priorities. Can be useful if you want the OpenFlow priorities to be offset from the NetControl priorities without having to write a filter function.
## Predicate that is called on rule insertion or removal.
##
@ -21,7 +21,7 @@ export {
##
## r: The rule to be inserted or removed.
##
## Returns: T if the rule can be handled by the current backend, F otherwhise.
## Returns: T if the rule can be handled by the current backend, F otherwise.
check_pred: function(p: PluginState, r: Rule): bool &optional;
## This predicate is called each time an OpenFlow match record is created.
@ -34,10 +34,10 @@ export {
##
## m: The OpenFlow match structures that were generated for this rule.
##
## Returns: The modified OpenFlow match structures that will be used in place the structures passed in m.
## Returns: The modified OpenFlow match structures that will be used in place of the structures passed in m.
match_pred: function(p: PluginState, e: Entity, m: vector of OpenFlow::ofp_match): vector of OpenFlow::ofp_match &optional;
## This predicate is called before an FlowMod message is sent to the OpenFlow
## This predicate is called before a FlowMod message is sent to the OpenFlow
## device. It can modify the FlowMod message before it is passed on.
##
## p: Current plugin state.
@ -46,14 +46,14 @@ export {
##
## m: The OpenFlow FlowMod message.
##
## Returns: The modified FloMod message that is used in lieu of m.
## Returns: The modified FlowMod message that is used in lieu of m.
flow_mod_pred: function(p: PluginState, r: Rule, m: OpenFlow::ofp_flow_mod): OpenFlow::ofp_flow_mod &optional;
};
redef record PluginState += {
## OpenFlow controller for NetControl OpenFlow plugin
## OpenFlow controller for NetControl OpenFlow plugin.
of_controller: OpenFlow::Controller &optional;
## OpenFlow configuration record that is passed on initialization
## OpenFlow configuration record that is passed on initialization.
of_config: OfConfig &optional;
};
@ -66,11 +66,11 @@ export {
duration_sec: double &default=0.0;
};
## the time interval after which an openflow message is considered to be timed out
## The time interval after which an openflow message is considered to be timed out
## and we delete it from our internal tracking.
const openflow_message_timeout = 20secs &redef;
## the time interval after we consider a flow timed out. This should be fairly high (or
## The time interval after we consider a flow timed out. This should be fairly high (or
## even disabled) if you expect a lot of long flows. However, one also will have state
## buildup for quite a while if keeping this around...
const openflow_flow_timeout = 24hrs &redef;
@ -318,7 +318,7 @@ function openflow_add_rule(p: PluginState, r: Rule) : bool
++flow_mod$cookie;
}
else
event rule_error(r, p, "Error while executing OpenFlow::flow_mod");
event NetControl::rule_error(r, p, "Error while executing OpenFlow::flow_mod");
}
return T;
@ -338,7 +338,7 @@ function openflow_remove_rule(p: PluginState, r: Rule, reason: string) : bool
of_messages[r$cid, flow_mod$command] = OfTable($p=p, $r=r);
else
{
event rule_error(r, p, "Error while executing OpenFlow::flow_mod");
event NetControl::rule_error(r, p, "Error while executing OpenFlow::flow_mod");
return F;
}


@ -11,21 +11,21 @@ export {
##
## f: The flow to shunt.
##
## t: How long to leave the shunt in place, with 0 being indefinitly.
## t: How long to leave the shunt in place, with 0 being indefinitely.
##
## location: An optional string describing where the shunt was triggered.
##
## Returns: The id of the inserted rule on succes and zero on failure.
## Returns: The id of the inserted rule on success and zero on failure.
global shunt_flow: function(f: flow_id, t: interval, location: string &default="") : string;
type ShuntInfo: record {
## Time at which the recorded activity occurred.
ts: time &log;
## ID of the rule; unique during each Bro run
## ID of the rule; unique during each Bro run.
rule_id: string &log;
## Flow ID of the shunted flow
## Flow ID of the shunted flow.
f: flow_id &log;
## Expiry time of the shunt
## Expiry time of the shunt.
expire: interval &log;
## Location where the underlying action was triggered.
location: string &log &optional;


@ -1,4 +1,4 @@
##! This file defines the that are used by the NetControl framework.
##! This file defines the types that are used by the NetControl framework.
##!
##! The most important type defined in this file is :bro:see:`NetControl::Rule`,
##! which is used to describe all rules that can be expressed by the NetControl framework.
@ -17,17 +17,16 @@ export {
## that have a :bro:see:`NetControl::RuleType` of :bro:enum:`NetControl::WHITELIST`.
const whitelist_priority: int = +5 &redef;
## The EntityType is used in :bro:id:`Entity` for defining the entity that a rule
## applies to.
## Type defining the entity that a rule applies to.
type EntityType: enum {
ADDRESS, ##< Activity involving a specific IP address.
CONNECTION, ##< Activity involving all of a bi-directional connection's activity.
FLOW, ##< Actitivy involving a uni-directional flow's activity. Can contain wildcards.
FLOW, ##< Activity involving a uni-directional flow's activity. Can contain wildcards.
MAC, ##< Activity involving a MAC address.
};
## Flow is used in :bro:id:`Entity` together with :bro:enum:`NetControl::FLOW` to specify
## a uni-directional flow that a :bro:id:`Rule` applies to.
## Flow is used in :bro:type:`NetControl::Entity` together with :bro:enum:`NetControl::FLOW` to specify
## a uni-directional flow that a rule applies to.
##
## If optional fields are not set, they are interpreted as wildcarded.
type Flow: record {
@ -39,7 +38,7 @@ export {
dst_m: string &optional; ##< The destination MAC address.
};
## Type defining the entity an :bro:id:`Rule` is operating on.
## Type defining the entity a rule is operating on.
type Entity: record {
ty: EntityType; ##< Type of entity.
conn: conn_id &optional; ##< Used with :bro:enum:`NetControl::CONNECTION`.
@ -48,7 +47,7 @@ export {
mac: string &optional; ##< Used with :bro:enum:`NetControl::MAC`.
};
## The :bro:id`TargetType` defined the target of a :bro:id:`Rule`.
## Type defining the target of a rule.
##
## Rules can either be applied to the forward path, affecting all network traffic, or
## on the monitor path, only affecting the traffic that is sent to Bro. The second
@ -60,7 +59,7 @@ export {
};
## Type of rules that the framework supports. Each type lists the extra
## :bro:id:`Rule` argument(s) it uses, if any.
## :bro:type:`NetControl::Rule` fields it uses, if any.
##
## Plugins may extend this type to define their own.
type RuleType: enum {
@ -81,7 +80,7 @@ export {
REDIRECT,
## Whitelists all packets of an entity, meaning no restrictions will be applied.
## While whitelisting is the default if no rule matches an this can type can be
## While whitelisting is the default if no rule matches, this type can be
## used to override lower-priority rules that would otherwise take effect for the
## entity.
WHITELIST,
@ -92,7 +91,7 @@ export {
src_h: addr &optional; ##< The source IP address.
src_p: count &optional; ##< The source port number.
dst_h: addr &optional; ##< The destination IP address.
dst_p: count &optional; ##< The desintation port number.
dst_p: count &optional; ##< The destination port number.
src_m: string &optional; ##< The source MAC address.
dst_m: string &optional; ##< The destination MAC address.
redirect_port: count &optional;
@ -121,8 +120,8 @@ export {
## That being said - their design makes sense and this is probably the data one
## can expect to be available.
type FlowInfo: record {
duration: interval &optional; ##< total duration of the rule
packet_count: count &optional; ##< number of packets exchanged over connections matched by the rule
byte_count: count &optional; ##< total bytes exchanged over connections matched by the rule
duration: interval &optional; ##< Total duration of the rule.
packet_count: count &optional; ##< Number of packets exchanged over connections matched by the rule.
byte_count: count &optional; ##< Total bytes exchanged over connections matched by the rule.
};
}


@ -21,10 +21,10 @@ redef Cluster::manager2worker_events += /Notice::begin_suppression/;
redef Cluster::worker2manager_events += /Notice::cluster_notice/;
@if ( Cluster::local_node_type() != Cluster::MANAGER )
event Notice::begin_suppression(n: Notice::Info)
event Notice::begin_suppression(ts: time, suppress_for: interval, note: Type, identifier: string)
{
local suppress_until = n$ts + n$suppress_for;
suppressing[n$note, n$identifier] = suppress_until;
local suppress_until = ts + suppress_for;
suppressing[note, identifier] = suppress_until;
}
@endif


@ -261,9 +261,14 @@ export {
## This event is generated when a notice begins to be suppressed.
##
## n: The record containing notice data regarding the notice type
## about to be suppressed.
global begin_suppression: event(n: Notice::Info);
## ts: Time indicating when the notice to be suppressed occurred.
##
## suppress_for: Length of time that this notice should be suppressed.
##
## note: The :bro:type:`Notice::Type` of the notice.
##
## identifier: The identifier string of the notice that should be suppressed.
global begin_suppression: event(ts: time, suppress_for: interval, note: Type, identifier: string);
## A function to determine if an event is supposed to be suppressed.
##
@ -504,7 +509,7 @@ hook Notice::notice(n: Notice::Info) &priority=-5
{
local suppress_until = n$ts + n$suppress_for;
suppressing[n$note, n$identifier] = suppress_until;
event Notice::begin_suppression(n);
event Notice::begin_suppression(n$ts, n$suppress_for, n$note, n$identifier);
}
}
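The reworked event carries the individual fields and the hook records an expiry time per `(note, identifier)` pair. A minimal Python sketch of that bookkeeping (function and variable names are hypothetical, mirroring `suppressing[note, identifier] = ts + suppress_for` above):

```python
suppressing = {}  # (note, identifier) -> expiry time

def begin_suppression(ts, suppress_for, note, identifier):
    # Record when suppression for this notice/identifier pair ends.
    suppressing[(note, identifier)] = ts + suppress_for

def is_suppressed(note, identifier, now):
    # A notice is suppressed while the current time is before the expiry.
    expiry = suppressing.get((note, identifier))
    return expiry is not None and now < expiry

begin_suppression(ts=100.0, suppress_for=3600.0,
                  note="Scan::Port_Scan", identifier="1.2.3.4")
print(is_suppressed("Scan::Port_Scan", "1.2.3.4", now=200.0))   # True
print(is_suppressed("Scan::Port_Scan", "1.2.3.4", now=4000.0))  # False
```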


@ -1,2 +1,2 @@
The OpenFlow framework exposes the datastructures and functions
The OpenFlow framework exposes the data structures and functions
necessary to interface to OpenFlow capable hardware.


@ -1,7 +1,7 @@
##! Constants used by the OpenFlow framework.
# All types/constants not specific to OpenFlow will be defined here
# unitl they somehow get into Bro.
# until they somehow get into Bro.
module OpenFlow;
@ -122,9 +122,9 @@ export {
## Return value for a cookie from a flow
## which is not added, modified or deleted
## from the bro openflow framework
## from the bro openflow framework.
const INVALID_COOKIE = 0xffffffffffffffff;
# Openflow pysical port definitions
# Openflow physical port definitions
## Send the packet out the input port. This
## virtual port must be explicitly used in
## order to send back out of the input port.
@ -135,10 +135,10 @@ export {
const OFPP_TABLE = 0xfffffff9;
## Process with normal L2/L3 switching.
const OFPP_NORMAL = 0xfffffffa;
## All pysical ports except input port and
## All physical ports except input port and
## those disabled by STP.
const OFPP_FLOOD = 0xfffffffb;
## All pysical ports except input port.
## All physical ports except input port.
const OFPP_ALL = 0xfffffffc;
## Send to controller.
const OFPP_CONTROLLER = 0xfffffffd;
@ -162,7 +162,7 @@ export {
# flow stats and flow deletes.
const OFPTT_ALL = 0xff;
## Openflow action_type definitions
## Openflow action_type definitions.
##
## The openflow action type defines
## what actions openflow can take
@ -180,7 +180,7 @@ export {
OFPAT_SET_DL_SRC = 0x0004,
## Ethernet destination address.
OFPAT_SET_DL_DST = 0x0005,
## IP source address
## IP source address.
OFPAT_SET_NW_SRC = 0x0006,
## IP destination address.
OFPAT_SET_NW_DST = 0x0007,
@ -192,11 +192,11 @@ export {
OFPAT_SET_TP_DST = 0x000a,
## Output to queue.
OFPAT_ENQUEUE = 0x000b,
## Vendor specific
## Vendor specific.
OFPAT_VENDOR = 0xffff,
};
## Openflow flow_mod_command definitions
## Openflow flow_mod_command definitions.
##
## The openflow flow_mod_command describes
## what kind of action it is.
@ -213,7 +213,7 @@ export {
OFPFC_DELETE_STRICT = 0x4,
};
## Openflow config flag definitions
## Openflow config flag definitions.
##
## TODO: describe
type ofp_config_flags: enum {


@ -1,11 +1,11 @@
##! Bro's OpenFlow control framework
##! Bro's OpenFlow control framework.
##!
##! This plugin-based framework allows controlling OpenFlow capable
##! switches by implementing communication to an OpenFlow controller
##! via plugins. The framework has to be instantiated via the new function
##! in one of the plugins. This framework only offers very low-level
##! functionality; if you want to use OpenFlow capable switches, e.g.,
##! for shunting, please look at the PACF framework, which provides higher
##! for shunting, please look at the NetControl framework, which provides higher
##! level functions and can use the OpenFlow framework as a backend.
module OpenFlow;
@ -16,7 +16,7 @@ module OpenFlow;
export {
## Global flow_mod function.
##
## controller: The controller which should execute the flow modification
## controller: The controller which should execute the flow modification.
##
## match: The ofp_match record which describes the flow to match.
##
@ -27,7 +27,7 @@ export {
## Clear the current flow table of the controller.
##
## controller: The controller which should execute the flow modification
## controller: The controller which should execute the flow modification.
##
## Returns: F on error or if the plugin does not support the operation, T when the operation was queued.
global flow_clear: function(controller: Controller): bool;
@ -66,21 +66,21 @@ export {
##
## priority: The priority that was specified when creating the flow.
##
## reason: The reason for flow removal (OFPRR_*)
## reason: The reason for flow removal (OFPRR_*).
##
## duration_sec: duration of the flow in seconds
## duration_sec: Duration of the flow in seconds.
##
## packet_count: packet count of the flow
## packet_count: Packet count of the flow.
##
## byte_count: byte count of the flow
## byte_count: Byte count of the flow.
global flow_removed: event(name: string, match: ofp_match, cookie: count, priority: count, reason: count, duration_sec: count, idle_timeout: count, packet_count: count, byte_count: count);
## Convert a conn_id record into an ofp_match record that can be used to
## create match objects for OpenFlow.
##
## id: the conn_id record that describes the record.
## id: The conn_id record that describes the record.
##
## reverse: reverse the sources and destinations when creating the match record (default F)
## reverse: Reverse the sources and destinations when creating the match record (default F).
##
## Returns: ofp_match object for the conn_id record.
global match_conn: function(id: conn_id, reverse: bool &default=F): ofp_match;
@ -113,18 +113,18 @@ export {
## Function to register a controller instance. This function
## is called automatically by the plugin _new functions.
##
## tpe: type of this plugin
## tpe: Type of this plugin.
##
## name: unique name of this controller instance.
## name: Unique name of this controller instance.
##
## controller: The controller to register
## controller: The controller to register.
global register_controller: function(tpe: OpenFlow::Plugin, name: string, controller: Controller);
## Function to unregister a controller instance. This function
## should be called when a specific controller should no longer
## be used.
##
## controller: The controller to unregister
## controller: The controller to unregister.
global unregister_controller: function(controller: Controller);
## Function to signal that a controller finished activation and is
@ -134,16 +134,16 @@ export {
## Event that is raised once a controller finishes initialization
## and is completely activated.
## name: unique name of this controller instance.
## name: Unique name of this controller instance.
##
## controller: The controller that finished activation.
global OpenFlow::controller_activated: event(name: string, controller: Controller);
## Function to lookup a controller instance by name
## Function to lookup a controller instance by name.
##
## name: unique name of the controller to look up
## name: Unique name of the controller to look up.
##
## Returns: one element vector with controller, if found. Empty vector otherwhise.
## Returns: One element vector with controller, if found. Empty vector otherwise.
global lookup_controller: function(name: string): vector of Controller;
}


@ -18,11 +18,11 @@ export {
##
## host_port: Controller listen port.
##
## topic: broker topic to send messages to.
## topic: Broker topic to send messages to.
##
## dpid: OpenFlow switch datapath id.
##
## Returns: OpenFlow::Controller record
## Returns: OpenFlow::Controller record.
global broker_new: function(name: string, host: addr, host_port: port, topic: string, dpid: count): OpenFlow::Controller;
redef record ControllerState += {
@ -32,7 +32,7 @@ export {
broker_port: port &optional;
## OpenFlow switch datapath id.
broker_dpid: count &optional;
## Topic to sent events for this controller to
## Topic to send events for this controller to.
broker_topic: string &optional;
};


@ -19,25 +19,25 @@ export {
##
## success_event: If true, flow_mod_success is raised for each logged line.
##
## Returns: OpenFlow::Controller record
## Returns: OpenFlow::Controller record.
global log_new: function(dpid: count, success_event: bool &default=T): OpenFlow::Controller;
redef record ControllerState += {
## OpenFlow switch datapath id.
log_dpid: count &optional;
## Raise or do not raise success event
## Raise or do not raise success event.
log_success_event: bool &optional;
};
## The record type which contains column fields of the OpenFlow log.
type Info: record {
## Network time
## Network time.
ts: time &log;
## OpenFlow switch datapath id
## OpenFlow switch datapath id.
dpid: count &log;
## OpenFlow match fields
## OpenFlow match fields.
match: ofp_match &log;
## OpenFlow modify flow entry message
## OpenFlow modify flow entry message.
flow_mod: ofp_flow_mod &log;
};


@ -20,7 +20,7 @@ export {
##
## dpid: OpenFlow switch datapath id.
##
## Returns: OpenFlow::Controller record
## Returns: OpenFlow::Controller record.
global ryu_new: function(host: addr, host_port: count, dpid: count): OpenFlow::Controller;
redef record ControllerState += {
@ -30,7 +30,7 @@ export {
ryu_port: count &optional;
## OpenFlow switch datapath id.
ryu_dpid: count &optional;
## Enable debug mode - output JSON to stdout; do not perform actions
## Enable debug mode - output JSON to stdout; do not perform actions.
ryu_debug: bool &default=F;
};
}


@ -5,9 +5,9 @@ module OpenFlow;
@load ./consts
export {
## Available openflow plugins
## Available openflow plugins.
type Plugin: enum {
## Internal placeholder plugin
## Internal placeholder plugin.
INVALID,
};
@ -19,7 +19,7 @@ export {
_plugin: Plugin &optional;
## Internally set to the unique name of the controller.
_name: string &optional;
## Internally set to true once the controller is activated
## Internally set to true once the controller is activated.
_activated: bool &default=F;
} &redef;
@ -58,29 +58,29 @@ export {
} &log;
## The actions that can be taken in a flow.
## (Sepearate record to make ofp_flow_mod less crowded)
## (Separate record to make ofp_flow_mod less crowded)
type ofp_flow_action: record {
## Output ports to send data to.
out_ports: vector of count &default=vector();
## set vlan vid to this value
## Set vlan vid to this value.
vlan_vid: count &optional;
## set vlan priority to this value
## Set vlan priority to this value.
vlan_pcp: count &optional;
## strip vlan tag
## Strip vlan tag.
vlan_strip: bool &default=F;
## set ethernet source address
## Set ethernet source address.
dl_src: string &optional;
## set ethernet destination address
## Set ethernet destination address.
dl_dst: string &optional;
## set ip tos to this value
## Set ip tos to this value.
nw_tos: count &optional;
## set source to this ip
## Set source to this ip.
nw_src: addr &optional;
## set destination to this ip
## Set destination to this ip.
nw_dst: addr &optional;
## set tcp/udp source port
## Set tcp/udp source port.
tp_src: count &optional;
## set tcp/udp destination port
## Set tcp/udp destination port.
tp_dst: count &optional;
} &log;
@ -112,21 +112,21 @@ export {
actions: ofp_flow_action &default=ofp_flow_action();
} &log;
## Controller record representing an openflow controller
## Controller record representing an openflow controller.
type Controller: record {
## Controller related state.
state: ControllerState;
## Does the controller support the flow_removed event?
supports_flow_removed: bool;
## function that describes the controller. Has to be implemented.
## Function that describes the controller. Has to be implemented.
describe: function(state: ControllerState): string;
## one-time initialization function. If defined, controller_init_done has to be called once initialization finishes.
## One-time initialization function. If defined, controller_init_done has to be called once initialization finishes.
init: function (state: ControllerState) &optional;
## one-time destruction function
## One-time destruction function.
destroy: function (state: ControllerState) &optional;
## flow_mod function
## flow_mod function.
flow_mod: function(state: ControllerState, match: ofp_match, flow_mod: ofp_flow_mod): bool &optional;
## flow_clear function
## flow_clear function.
flow_clear: function(state: ControllerState): bool &optional;
};
}


@ -84,6 +84,16 @@ export {
## is compared lexicographically.
global cmp_versions: function(v1: Version, v2: Version): int;
## Sometimes software will expose itself on the network with
## slight naming variations. This table provides a mechanism
## for a piece of software to be renamed to a single name
## even if it exposes itself with an alternate name. The
## yielded string is the name that will be logged and generally
## used for everything.
global alternate_names: table[string] of string = {
["Flash Player"] = "Flash",
} &default=function(a: string): string { return a; };
## Type to represent a collection of :bro:type:`Software::Info` records.
## It's indexed with the name of a piece of software such as "Firefox"
## and it yields a :bro:type:`Software::Info` record with more
@ -125,7 +135,7 @@ function parse(unparsed_version: string): Description
local v: Version;
# Parse browser-alike versions separately
if ( /^(Mozilla|Opera)\/[0-9]\./ in unparsed_version )
if ( /^(Mozilla|Opera)\/[0-9]+\./ in unparsed_version )
{
return parse_mozilla(unparsed_version);
}
@ -133,11 +143,17 @@ function parse(unparsed_version: string): Description
{
# The regular expression should match the complete version number
# and software name.
local version_parts = split_string_n(unparsed_version, /\/?( [\(])?v?[0-9\-\._, ]{2,}/, T, 1);
local clean_unparsed_version = gsub(unparsed_version, /\\x/, "%");
clean_unparsed_version = unescape_URI(clean_unparsed_version);
local version_parts = split_string_n(clean_unparsed_version, /([\/\-_]|( [\(v]+))?[0-9\-\._, ]{2,}/, T, 1);
if ( 0 in version_parts )
{
# Remove any bits of junk at end of first part.
if ( /([\/\-_]|( [\(v]+))$/ in version_parts[0] )
version_parts[0] = strip(sub(version_parts[0], /([\/\-_]|( [\(v]+))/, ""));
if ( /^\(/ in version_parts[0] )
software_name = strip(sub(version_parts[0], /[\(]/, ""));
software_name = strip(sub(version_parts[0], /\(/, ""));
else
software_name = strip(version_parts[0]);
}
@ -192,7 +208,7 @@ function parse(unparsed_version: string): Description
}
}
return [$version=v, $unparsed_version=unparsed_version, $name=software_name];
return [$version=v, $unparsed_version=unparsed_version, $name=alternate_names[software_name]];
}
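The `alternate_names` lookup at the end of `parse` is a renaming table with an identity default, matching `&default=function(a: string): string { return a; }`. A sketch of the same lookup in Python:

```python
# Renaming table: unknown names fall through unchanged.
alternate_names = {"Flash Player": "Flash"}

def canonical_name(name: str) -> str:
    # dict.get with the name itself as default reproduces the &default function.
    return alternate_names.get(name, name)

print(canonical_name("Flash Player"))  # Flash
print(canonical_name("Firefox"))       # Firefox
```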
@ -227,6 +243,13 @@ function parse_mozilla(unparsed_version: string): Description
v = parse(parts[1])$version;
}
}
else if ( /Edge\// in unparsed_version )
{
software_name="Edge";
parts = split_string_all(unparsed_version, /Edge\/[0-9\.]*/);
if ( 1 in parts )
v = parse(parts[1])$version;
}
else if ( /Version\/.*Safari\// in unparsed_version )
{
software_name = "Safari";
@ -280,6 +303,14 @@ function parse_mozilla(unparsed_version: string): Description
v = parse(parts[1])$version;
}
}
else if ( /Flash%20Player/ in unparsed_version )
{
software_name = "Flash";
parts = split_string_all(unparsed_version, /[\/ ]/);
if ( 2 in parts )
v = parse(parts[2])$version;
}
else if ( /AdobeAIR\/[0-9\.]*/ in unparsed_version )
{
software_name = "AdobeAIR";


@ -442,10 +442,13 @@ type fa_file: record {
## Metadata that's been inferred about a particular file.
type fa_metadata: record {
## The strongest matching mime type if one was discovered.
## The strongest matching MIME type if one was discovered.
mime_type: string &optional;
## All matching mime types if any were discovered.
## All matching MIME types if any were discovered.
mime_types: mime_matches &optional;
## Specifies whether the MIME type was inferred using signatures,
## or provided directly by the protocol the file appeared in.
inferred: bool &default=T;
};
## Fields of a SYN packet.
@ -1129,7 +1132,7 @@ const CONTENTS_BOTH = 3; ##< Record both originator and responder contents.
# Values for code of ICMP *unreachable* messages. The list is not exhaustive.
# todo:: these should go into an enum to make them autodoc'able
#
# .. bro:see:: :bro:see:`icmp_unreachable `
# .. bro:see:: icmp_unreachable
const ICMP_UNREACH_NET = 0; ##< Network unreachable.
const ICMP_UNREACH_HOST = 1; ##< Host unreachable.
const ICMP_UNREACH_PROTOCOL = 2; ##< Protocol unreachable.
@ -2142,6 +2145,16 @@ export {
rep_dur: interval;
## The length in bytes of the reply.
rep_len: count;
## The user id of the reply.
rpc_uid: count;
## The group id of the reply.
rpc_gid: count;
## The stamp of the reply.
rpc_stamp: count;
## The machine name of the reply.
rpc_machine_name: string;
## The auxiliary ids of the reply.
rpc_auxgids: index_vec;
};
## NFS file attributes. Field names are based on RFC 1813.
@ -2172,6 +2185,16 @@ export {
fname: string; ##< The name of the file we are interested in.
};
## NFS *rename* arguments.
##
## .. bro:see:: nfs_proc_rename
type renameopargs_t : record {
src_dirfh : string;
src_fname : string;
dst_dirfh : string;
dst_fname : string;
};
## NFS lookup reply. If the lookup failed, *dir_attr* may be set. If the
## lookup succeeded, *fh* is always set and *obj_attr* and *dir_attr*
## may be set.
@ -2264,6 +2287,16 @@ export {
dir_post_attr: fattr_t &optional; ##< Optional attributes associated w/ dir.
};
## NFS reply for *rename*. Corresponds to *wcc_data* in the spec.
##
## .. bro:see:: nfs_proc_rename
type renameobj_reply_t: record {
src_dir_pre_attr: wcc_attr_t;
src_dir_post_attr: fattr_t;
dst_dir_pre_attr: wcc_attr_t;
dst_dir_post_attr: fattr_t;
};
## NFS *readdir* arguments. Used for both *readdir* and *readdirplus*.
##
## .. bro:see:: nfs_proc_readdir
@ -2505,11 +2538,13 @@ export {
## The negotiate flags
flags : NTLM::NegotiateFlags;
## The domain or computer name hosting the account
domain_name : string;
domain_name : string &optional;
## The name of the user to be authenticated.
user_name : string;
user_name : string &optional;
## The name of the computer to which the user was logged on.
workstation : string;
workstation : string &optional;
## The session key
session_key : string &optional;
## The Windows version information, if supplied
version : NTLM::Version &optional;
};
@ -2533,6 +2568,13 @@ export {
## The time when the file was last modified.
changed : time &log;
} &log;
## A set of file names used as named pipes over SMB. This
## only comes into play as a heuristic to identify named
## pipes when the drive mapping wasn't seen by Bro.
##
## .. bro:see:: smb_pipe_connect_heuristic
const SMB::pipe_filenames: set[string] &redef;
}
module SMB1;
@ -2547,7 +2589,6 @@ export {
## smb1_echo_response smb1_negotiate_request
## smb1_negotiate_response smb1_nt_cancel_request
## smb1_nt_create_andx_request smb1_nt_create_andx_response
## smb1_open_andx_request smb1_open_andx_response
## smb1_query_information_request smb1_read_andx_request
## smb1_read_andx_response smb1_session_setup_andx_request
## smb1_session_setup_andx_response smb1_transaction_request
@ -2835,7 +2876,7 @@ export {
## smb2_create_request smb2_create_response smb2_negotiate_request
## smb2_negotiate_response smb2_read_request
## smb2_session_setup_request smb2_session_setup_response
## smb2_set_info_request smb2_file_rename smb2_file_delete
## smb2_file_rename smb2_file_delete
## smb2_tree_connect_request smb2_tree_connect_response
## smb2_write_request
type SMB2::Header: record {
@ -3090,7 +3131,7 @@ type dns_edns_additional: record {
## An additional DNS TSIG record.
##
## bro:see:: dns_TSIG_addl
## .. bro:see:: dns_TSIG_addl
type dns_tsig_additional: record {
query: string; ##< Query.
qtype: count; ##< Query type.
@ -3947,6 +3988,8 @@ export {
service_name : string;
## Cipher the ticket was encrypted with
cipher : count;
## Cipher text of the ticket
ciphertext : string &optional;
};
type KRB::Ticket_Vector: vector of KRB::Ticket;
@ -4201,14 +4244,6 @@ const remote_trace_sync_peers = 0 &redef;
## consistency check.
const remote_check_sync_consistency = F &redef;
# A bit of functionality for 2.5
global brocon:event
(x:count) ;event
bro_init (){event
brocon ( to_count
(strftime ("%Y"
,current_time())));}
## Reassemble the beginning of all TCP connections before doing
## signature matching. Enabling this provides more accurate matching at the
## expense of CPU cycles.
@ -4381,6 +4416,19 @@ export {
const bufsize = 128 &redef;
} # end export
module DCE_RPC;
export {
## The maximum number of simultaneous fragmented commands that
## the DCE_RPC analyzer will tolerate before it will generate
## a weird and skip further input.
const max_cmd_reassembly = 20 &redef;
## The maximum number of fragmented bytes that the DCE_RPC analyzer
## will tolerate on a command before the analyzer will generate a weird
## and skip further input.
const max_frag_data = 30000 &redef;
}
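The two new constants bound fragment reassembly: at most `max_cmd_reassembly` simultaneously fragmented commands, and at most `max_frag_data` buffered bytes per command, with a weird raised past either limit. A hypothetical sketch of that policy (names and return strings are illustrative, not the analyzer's API):

```python
MAX_CMD_REASSEMBLY = 20
MAX_FRAG_DATA = 30000

pending = {}  # cmd_id -> buffered fragment bytes

def add_fragment(cmd_id, data):
    # Refuse to start tracking a new command past the command limit.
    if cmd_id not in pending and len(pending) >= MAX_CMD_REASSEMBLY:
        return "weird: too many fragmented commands"
    buf = pending.get(cmd_id, b"") + data
    # Drop the command entirely once its buffered data exceeds the byte limit.
    if len(buf) > MAX_FRAG_DATA:
        pending.pop(cmd_id, None)
        return "weird: fragment too large"
    pending[cmd_id] = buf
    return "ok"

print(add_fragment(1, b"x" * 100))    # ok
print(add_fragment(1, b"x" * 40000))  # weird: fragment too large
```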
module GLOBAL;
## Seed for hashes computed internally for probabilistic data structures. Using


@ -49,7 +49,7 @@ export {
function parse(version_string: string): VersionDescription
{
if ( /[[:digit:]]\.[[:digit:]][[:digit:]]?(\.[[:digit:]][[:digit:]]?)?(\-beta)?(-[[:digit:]]+)?(\-debug)?/ != version_string )
if ( /[[:digit:]]\.[[:digit:]][[:digit:]]?(\.[[:digit:]][[:digit:]]?)?(\-beta[[:digit:]]?)?(-[[:digit:]]+)?(\-debug)?/ != version_string )
{
Reporter::error(fmt("Version string %s cannot be parsed", version_string));
return VersionDescription($version_number=0, $major=0, $minor=0, $patch=0, $commit=0, $beta=F, $debug=F, $version_string=version_string);
@ -86,5 +86,5 @@ export {
function at_least(version_string: string): bool
{
return Version::parse(version_string)$version_number >= Version::number;
return Version::number >= Version::parse(version_string)$version_number;
}
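The fix reverses the comparison: `at_least(v)` must test that the *running* Bro version is at least `v`, not the other way around. A hedged Python sketch of the corrected semantics (the numeric encoding `major*10000 + minor*100 + patch` and the constant below are assumptions for illustration):

```python
RUNNING_VERSION_NUMBER = 20500  # hypothetical running version, e.g. 2.5.0

def parse_number(version_string: str) -> int:
    # Strip any "-beta"/"-debug" suffix, then encode major/minor/patch numerically.
    major, minor, *rest = (int(x) for x in version_string.split("-")[0].split("."))
    patch = rest[0] if rest else 0
    return major * 10000 + minor * 100 + patch

def at_least(version_string: str) -> bool:
    # Corrected direction: running version >= requested version.
    return RUNNING_VERSION_NUMBER >= parse_number(version_string)

print(at_least("2.4"))  # True
print(at_least("2.6"))  # False
```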


@ -90,15 +90,15 @@ export {
["2f5f3220-c126-1076-b549-074d078619da"] = "nddeapi",
} &redef &default=function(uuid: string): string { return fmt("unknown-%s", uuid); };
## This table is to map pipe names to the most common
## service used over that pipe. It helps in cases
## This table is to map pipe names to the most common
## service used over that pipe. It helps in cases
## where the pipe binding wasn't seen.
const pipe_name_to_common_uuid: table[string] of string = {
["winreg"] = "338cd001-2244-31f1-aaaa-900038001003",
["spoolss"] = "12345678-1234-abcd-ef00-0123456789ab",
["srvsvc"] = "4b324fc8-1670-01d3-1278-5a47bf6ee188",
} &redef;
const operations: table[string,count] of string = {
# atsvc
["1ff70682-0a51-30e8-076d-740be8cee98b",0] = "NetrJobAdd",
@ -1460,7 +1460,7 @@ export {
["e3514235-4b06-11d1-ab04-00c04fc2dcd2",0x14] = "DRSAddSidHistory",
["e3514235-4b06-11d1-ab04-00c04fc2dcd2",0x15] = "DRSGetMemberships2",
["e3514235-4b06-11d1-ab04-00c04fc2dcd2",0x16] = "DRSReplicaVerifyObjects",
["e3514235-4b06-11d1-ab04-00c04fc2dcd2",0x17] = "DRSGetObjectExistence",
["e3514235-4b06-11d1-ab04-00c04fc2dcd2",0x17] = "DRSGetObjectExistence",
["e3514235-4b06-11d1-ab04-00c04fc2dcd2",0x18] = "DRSQuerySitesByCost",
# winspipe


@ -26,29 +26,29 @@ export {
operation : string &log &optional;
};
## These are DCE-RPC operations that are ignored, typically due
## the operations being noisy and low valueon most networks.
## These are DCE-RPC operations that are ignored, typically due to
## the operations being noisy and low value on most networks.
const ignored_operations: table[string] of set[string] = {
["winreg"] = set("BaseRegCloseKey", "BaseRegGetVersion", "BaseRegOpenKey", "BaseRegQueryValue", "BaseRegDeleteKeyEx", "OpenLocalMachine", "BaseRegEnumKey", "OpenClassesRoot"),
["spoolss"] = set("RpcSplOpenPrinter", "RpcClosePrinter"),
["wkssvc"] = set("NetrWkstaGetInfo"),
} &redef;
type State: record {
uuid : string &optional;
named_pipe : string &optional;
};
# This is to store the log and state information
# for multiple DCE/RPC bindings over a single TCP connection (named pipes).
type BackingState: record {
info: Info;
state: State;
};
}
redef DPD::ignore_violations += { Analyzer::ANALYZER_DCE_RPC };
type State: record {
uuid : string &optional;
named_pipe : string &optional;
};
# This is to store the log and state information
# for multiple DCE/RPC bindings over a single TCP connection (named pipes).
type BackingState: record {
info: Info;
state: State;
};
redef record connection += {
dce_rpc: Info &optional;
dce_rpc_state: State &optional;
@ -158,13 +158,14 @@ event dce_rpc_response(c: connection, fid: count, opnum: count, stub_len: count)
{
if ( c?$dce_rpc )
{
# If there is not an endpoint, there isn't much reason to log.
# If there is no endpoint, there isn't much reason to log.
# This can happen if the request isn't seen.
if ( (c$dce_rpc?$endpoint && c$dce_rpc$endpoint !in ignored_operations)
||
(c$dce_rpc?$endpoint && c$dce_rpc?$operation &&
c$dce_rpc$operation !in ignored_operations[c$dce_rpc$endpoint] &&
"*" !in ignored_operations[c$dce_rpc$endpoint]) )
if ( ( c$dce_rpc?$endpoint && c$dce_rpc?$operation ) &&
( c$dce_rpc$endpoint !in ignored_operations
||
( c$dce_rpc?$endpoint && c$dce_rpc?$operation &&
c$dce_rpc$operation !in ignored_operations[c$dce_rpc$endpoint] &&
"*" !in ignored_operations[c$dce_rpc$endpoint] ) ) )
{
Log::write(LOG, c$dce_rpc);
}
@@ -195,11 +196,12 @@ event connection_state_remove(c: connection)
}
}
if ( (c$dce_rpc?$endpoint && c$dce_rpc$endpoint !in ignored_operations)
||
(c$dce_rpc?$endpoint && c$dce_rpc?$operation &&
c$dce_rpc$operation !in ignored_operations[c$dce_rpc$endpoint] &&
"*" !in ignored_operations[c$dce_rpc$endpoint]) )
if ( ( c$dce_rpc?$endpoint && c$dce_rpc?$operation ) &&
( c$dce_rpc$endpoint !in ignored_operations
||
( c$dce_rpc?$endpoint && c$dce_rpc?$operation &&
c$dce_rpc$operation !in ignored_operations[c$dce_rpc$endpoint] &&
"*" !in ignored_operations[c$dce_rpc$endpoint] ) ) )
{
Log::write(LOG, c$dce_rpc);
}


@@ -17,7 +17,7 @@ export {
## An ordered vector of file unique IDs.
orig_fuids: vector of string &log &optional;
## An order vector of filenames from the client.
## An ordered vector of filenames from the client.
orig_filenames: vector of string &log &optional;
## An ordered vector of mime types.
@@ -26,7 +26,7 @@ export {
## An ordered vector of file unique IDs.
resp_fuids: vector of string &log &optional;
## An order vector of filenames from the server.
## An ordered vector of filenames from the server.
resp_filenames: vector of string &log &optional;
## An ordered vector of mime types.


@@ -78,40 +78,23 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
if ( f$source != "KRB_TCP" && f$source != "KRB" )
return;
local info: Info;
if ( ! c?$krb )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
else
info = c$krb;
set_session(c);
if ( is_orig )
{
info$client_cert = f$info;
info$client_cert_fuid = f$id;
c$krb$client_cert = f$info;
c$krb$client_cert_fuid = f$id;
}
else
{
info$server_cert = f$info;
info$server_cert_fuid = f$id;
c$krb$server_cert = f$info;
c$krb$server_cert_fuid = f$id;
}
c$krb = info;
Files::add_analyzer(f, Files::ANALYZER_X509);
# Always calculate hashes. They are not necessary for base scripts
# but very useful for identification, and required for policy scripts
Files::add_analyzer(f, Files::ANALYZER_MD5);
Files::add_analyzer(f, Files::ANALYZER_SHA1);
}
function fill_in_subjects(c: connection)
{
if ( !c?$krb )
if ( ! c?$krb )
return;
if ( c$krb?$client_cert && c$krb$client_cert?$x509 && c$krb$client_cert$x509?$certificate )


@@ -10,41 +10,41 @@ export {
type Info: record {
## Timestamp for when the event happened.
ts: time &log;
ts: time &log;
## Unique ID for the connection.
uid: string &log;
uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log;
id: conn_id &log;
## Request type - Authentication Service ("AS") or
## Ticket Granting Service ("TGS")
request_type: string &log &optional;
request_type: string &log &optional;
## Client
client: string &log &optional;
client: string &log &optional;
## Service
service: string &log;
service: string &log &optional;
## Request result
success: bool &log &optional;
success: bool &log &optional;
## Error code
error_code: count &optional;
error_code: count &optional;
## Error message
error_msg: string &log &optional;
error_msg: string &log &optional;
## Ticket valid from
from: time &log &optional;
from: time &log &optional;
## Ticket valid till
till: time &log &optional;
till: time &log &optional;
## Ticket encryption type
cipher: string &log &optional;
cipher: string &log &optional;
## Forwardable ticket requested
forwardable: bool &log &optional;
forwardable: bool &log &optional;
## Renewable ticket requested
renewable: bool &log &optional;
renewable: bool &log &optional;
## We've already logged this
logged: bool &default=F;
logged: bool &default=F;
};
## The server response error texts which are *not* logged.
@@ -80,172 +80,140 @@ event bro_init() &priority=5
Log::create_stream(KRB::LOG, [$columns=Info, $ev=log_krb, $path="kerberos"]);
}
event krb_error(c: connection, msg: Error_Msg) &priority=5
function set_session(c: connection): bool
{
local info: Info;
if ( msg?$error_text && msg$error_text in ignored_errors )
if ( ! c?$krb )
{
if ( c?$krb ) delete c$krb;
return;
c$krb = Info($ts = network_time(),
$uid = c$uid,
$id = c$id);
}
if ( c?$krb && c$krb$logged )
return;
if ( c?$krb )
info = c$krb;
if ( ! info?$ts )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
}
if ( ! info?$client && ( msg?$client_name || msg?$client_realm ) )
info$client = fmt("%s%s", msg?$client_name ? msg$client_name + "/" : "",
msg?$client_realm ? msg$client_realm : "");
info$service = msg$service_name;
info$success = F;
info$error_code = msg$error_code;
if ( msg?$error_text ) info$error_msg = msg$error_text;
else if ( msg$error_code in error_msg ) info$error_msg = error_msg[msg$error_code];
c$krb = info;
return c$krb$logged;
}
event krb_error(c: connection, msg: Error_Msg) &priority=-5
function do_log(c: connection)
{
if ( c?$krb )
if ( c?$krb && ! c$krb$logged )
{
Log::write(KRB::LOG, c$krb);
c$krb$logged = T;
}
}
event krb_as_request(c: connection, msg: KDC_Request) &priority=5
event krb_error(c: connection, msg: Error_Msg) &priority=5
{
if ( c?$krb && c$krb$logged )
if ( set_session(c) )
return;
local info: Info;
if ( !c?$krb )
if ( msg?$error_text && msg$error_text in ignored_errors )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
if ( c?$krb )
delete c$krb;
return;
}
else
info = c$krb;
info$request_type = "AS";
info$client = fmt("%s/%s", msg?$client_name ? msg$client_name : "", msg$service_realm);
info$service = msg$service_name;
if ( ! c$krb?$client && ( msg?$client_name || msg?$client_realm ) )
c$krb$client = fmt("%s%s", msg?$client_name ? msg$client_name + "/" : "",
msg?$client_realm ? msg$client_realm : "");
if ( msg?$from )
info$from = msg$from;
c$krb$service = msg$service_name;
c$krb$success = F;
c$krb$error_code = msg$error_code;
info$till = msg$till;
info$forwardable = msg$kdc_options$forwardable;
info$renewable = msg$kdc_options$renewable;
c$krb = info;
if ( msg?$error_text )
c$krb$error_msg = msg$error_text;
else if ( msg$error_code in error_msg )
c$krb$error_msg = error_msg[msg$error_code];
}
event krb_tgs_request(c: connection, msg: KDC_Request) &priority=5
event krb_error(c: connection, msg: Error_Msg) &priority=-5
{
if ( c?$krb && c$krb$logged )
do_log(c);
}
event krb_as_request(c: connection, msg: KDC_Request) &priority=5
{
if ( set_session(c) )
return;
local info: Info;
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
info$request_type = "TGS";
info$service = msg$service_name;
if ( msg?$from ) info$from = msg$from;
info$till = msg$till;
c$krb$request_type = "AS";
c$krb$client = fmt("%s/%s", msg?$client_name ? msg$client_name : "", msg$service_realm);
c$krb$service = msg$service_name;
info$forwardable = msg$kdc_options$forwardable;
info$renewable = msg$kdc_options$renewable;
if ( msg?$from )
c$krb$from = msg$from;
c$krb$till = msg$till;
c$krb = info;
c$krb$forwardable = msg$kdc_options$forwardable;
c$krb$renewable = msg$kdc_options$renewable;
}
event krb_as_response(c: connection, msg: KDC_Response) &priority=5
{
local info: Info;
if ( c?$krb && c$krb$logged )
if ( set_session(c) )
return;
if ( c?$krb )
info = c$krb;
if ( ! info?$ts )
if ( ! c$krb?$client && ( msg?$client_name || msg?$client_realm ) )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
c$krb$client = fmt("%s/%s", msg?$client_name ? msg$client_name : "",
msg?$client_realm ? msg$client_realm : "");
}
if ( ! info?$client && ( msg?$client_name || msg?$client_realm ) )
info$client = fmt("%s/%s", msg?$client_name ? msg$client_name : "", msg?$client_realm ? msg$client_realm : "");
info$service = msg$ticket$service_name;
info$cipher = cipher_name[msg$ticket$cipher];
info$success = T;
c$krb = info;
c$krb$service = msg$ticket$service_name;
c$krb$cipher = cipher_name[msg$ticket$cipher];
c$krb$success = T;
}
event krb_as_response(c: connection, msg: KDC_Response) &priority=-5
{
Log::write(KRB::LOG, c$krb);
c$krb$logged = T;
do_log(c);
}
event krb_ap_request(c: connection, ticket: KRB::Ticket, opts: KRB::AP_Options) &priority=5
{
if ( set_session(c) )
return;
}
event krb_tgs_request(c: connection, msg: KDC_Request) &priority=5
{
if ( set_session(c) )
return;
c$krb$request_type = "TGS";
c$krb$service = msg$service_name;
if ( msg?$from )
c$krb$from = msg$from;
c$krb$till = msg$till;
c$krb$forwardable = msg$kdc_options$forwardable;
c$krb$renewable = msg$kdc_options$renewable;
}
event krb_tgs_response(c: connection, msg: KDC_Response) &priority=5
{
local info: Info;
if ( c?$krb && c$krb$logged )
if ( set_session(c) )
return;
if ( c?$krb )
info = c$krb;
if ( ! info?$ts )
if ( ! c$krb?$client && ( msg?$client_name || msg?$client_realm ) )
{
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
c$krb$client = fmt("%s/%s", msg?$client_name ? msg$client_name : "",
msg?$client_realm ? msg$client_realm : "");
}
if ( ! info?$client && ( msg?$client_name || msg?$client_realm ) )
info$client = fmt("%s/%s", msg?$client_name ? msg$client_name : "", msg?$client_realm ? msg$client_realm : "");
info$service = msg$ticket$service_name;
info$cipher = cipher_name[msg$ticket$cipher];
info$success = T;
c$krb = info;
c$krb$service = msg$ticket$service_name;
c$krb$cipher = cipher_name[msg$ticket$cipher];
c$krb$success = T;
}
event krb_tgs_response(c: connection, msg: KDC_Response) &priority=-5
{
Log::write(KRB::LOG, c$krb);
c$krb$logged = T;
do_log(c);
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$krb && ! c$krb$logged )
Log::write(KRB::LOG, c$krb);
do_log(c);
}


@@ -10,52 +10,51 @@ export {
type Info: record {
## Timestamp for when the event happened.
ts : time &log;
ts : time &log;
## Unique ID for the connection.
uid : string &log;
uid : string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id : conn_id &log;
id : conn_id &log;
## The username, if present.
username : string &log &optional;
username : string &log &optional;
## MAC address, if present.
mac : string &log &optional;
## Remote IP address, if present.
remote_ip : addr &log &optional;
mac : string &log &optional;
## The address given to the network access server, if
## present. This is only a hint from the RADIUS server
## and the network access server is not required to honor
## the address.
framed_addr : addr &log &optional;
## Remote IP address, if present. This is collected
## from the Tunnel-Client-Endpoint attribute.
remote_ip : addr &log &optional;
## Connect info, if present.
connect_info : string &log &optional;
connect_info : string &log &optional;
## Reply message from the server challenge. This is
## frequently shown to the user authenticating.
reply_msg : string &log &optional;
## Successful or failed authentication.
result : string &log &optional;
## Whether this has already been logged and can be ignored.
logged : bool &optional;
result : string &log &optional;
## The duration between the first request and
## either the "Access-Accept" message or an error.
## If the field is empty, it means that either
## the request or response was not seen.
ttl : interval &log &optional;
## Whether this has already been logged and can be ignored.
logged : bool &default=F;
};
## The amount of time we wait for an authentication response before
## expiring it.
const expiration_interval = 10secs &redef;
## Logs an authentication attempt if we didn't see a response in time.
##
## t: A table of Info records.
##
## idx: The index of the connection$radius table corresponding to the
## radius authentication about to expire.
##
## Returns: 0secs, which when this function is used as an
## :bro:attr:`&expire_func`, indicates to remove the element at
## *idx* immediately.
global expire: function(t: table[count] of Info, idx: count): interval;
## Event that can be handled to access the RADIUS record as it is sent on
## to the loggin framework.
## to the logging framework.
global log_radius: event(rec: Info);
}
redef record connection += {
radius: table[count] of Info &optional &write_expire=expiration_interval &expire_func=expire;
radius: Info &optional;
};
const ports = { 1812/udp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
@@ -63,64 +62,86 @@ event bro_init() &priority=5
Analyzer::register_for_ports(Analyzer::ANALYZER_RADIUS, ports);
}
event radius_message(c: connection, result: RADIUS::Message)
event radius_message(c: connection, result: RADIUS::Message) &priority=5
{
local info: Info;
if ( c?$radius && result$trans_id in c$radius )
info = c$radius[result$trans_id];
else
if ( ! c?$radius )
{
c$radius = table();
info$ts = network_time();
info$uid = c$uid;
info$id = c$id;
c$radius = Info($ts = network_time(),
$uid = c$uid,
$id = c$id);
}
switch ( RADIUS::msg_types[result$code] ) {
switch ( RADIUS::msg_types[result$code] )
{
case "Access-Request":
if ( result?$attributes ) {
if ( result?$attributes )
{
# User-Name
if ( ! info?$username && 1 in result$attributes )
info$username = result$attributes[1][0];
if ( ! c$radius?$username && 1 in result$attributes )
c$radius$username = result$attributes[1][0];
# Calling-Station-Id (we expect this to be a MAC)
if ( ! info?$mac && 31 in result$attributes )
info$mac = normalize_mac(result$attributes[31][0]);
if ( ! c$radius?$mac && 31 in result$attributes )
c$radius$mac = normalize_mac(result$attributes[31][0]);
# Tunnel-Client-EndPoint (useful for VPNs)
if ( ! info?$remote_ip && 66 in result$attributes )
info$remote_ip = to_addr(result$attributes[66][0]);
if ( ! c$radius?$remote_ip && 66 in result$attributes )
c$radius$remote_ip = to_addr(result$attributes[66][0]);
# Connect-Info
if ( ! info?$connect_info && 77 in result$attributes )
info$connect_info = result$attributes[77][0];
}
if ( ! c$radius?$connect_info && 77 in result$attributes )
c$radius$connect_info = result$attributes[77][0];
}
break;
case "Access-Challenge":
if ( result?$attributes )
{
# Framed-IP-Address
if ( ! c$radius?$framed_addr && 8 in result$attributes )
c$radius$framed_addr = raw_bytes_to_v4_addr(result$attributes[8][0]);
if ( ! c$radius?$reply_msg && 18 in result$attributes )
c$radius$reply_msg = result$attributes[18][0];
}
break;
case "Access-Accept":
info$result = "success";
c$radius$result = "success";
break;
case "Access-Reject":
info$result = "failed";
c$radius$result = "failed";
break;
}
if ( info?$result && ! info?$logged )
{
info$logged = T;
Log::write(RADIUS::LOG, info);
# TODO: Support RADIUS accounting. (add port 1813/udp above too)
#case "Accounting-Request":
# break;
#
#case "Accounting-Response":
# break;
}
c$radius[result$trans_id] = info;
}
event radius_message(c: connection, result: RADIUS::Message) &priority=-5
{
if ( c$radius?$result )
{
local ttl = network_time() - c$radius$ts;
if ( ttl != 0secs )
c$radius$ttl = ttl;
function expire(t: table[count] of Info, idx: count): interval
{
t[idx]$result = "unknown";
Log::write(RADIUS::LOG, t[idx]);
return 0secs;
}
Log::write(RADIUS::LOG, c$radius);
delete c$radius;
}
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$radius && ! c$radius$logged )
{
c$radius$result = "unknown";
Log::write(RADIUS::LOG, c$radius);
}
}


@@ -236,10 +236,6 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
{
# Count up X509 certs.
++c$rdp$cert_count;
Files::add_analyzer(f, Files::ANALYZER_X509);
Files::add_analyzer(f, Files::ANALYZER_MD5);
Files::add_analyzer(f, Files::ANALYZER_SHA1);
}
}


@@ -18,12 +18,12 @@ export {
client_minor_version: string &log &optional;
## Major version of the server.
server_major_version: string &log &optional;
## Major version of the client.
## Minor version of the server.
server_minor_version: string &log &optional;
## Identifier of authentication method used.
authentication_method: string &log &optional;
## Whether or not authentication was succesful.
## Whether or not authentication was successful.
auth: bool &log &optional;
## Whether the client has an exclusive or a shared session.


@@ -10,29 +10,27 @@ export {
[0x00000000] = [$id="SUCCESS", $desc="The operation completed successfully."],
} &redef &default=function(i: count):StatusCode { local unknown=fmt("unknown-%d", i); return [$id=unknown, $desc=unknown]; };
## These are files names that are used for special
## cases by the file system and would not be
## considered "normal" files.
const pipe_names: set[string] = {
"\\netdfs",
"\\spoolss",
"\\NETLOGON",
"\\winreg",
"\\lsarpc",
"\\samr",
"\\srvsvc",
## Heuristic detection of named pipes when the pipe
## mapping isn't seen. This variable is defined in
## init-bare.bro.
redef SMB::pipe_filenames = {
"spoolss",
"winreg",
"samr",
"srvsvc",
"netdfs",
"lsarpc",
"wkssvc",
"MsFteWds",
"\\wkssvc",
};
## The UUIDs used by the various RPC endpoints
## The UUIDs used by the various RPC endpoints.
const rpc_uuids: table[string] of string = {
["4b324fc8-1670-01d3-1278-5a47bf6ee188"] = "Server Service",
["6bffd098-a112-3610-9833-46c3f87e345a"] = "Workstation Service",
} &redef &default=function(i: string):string { return fmt("unknown-uuid-%s", i); };
## Server service sub commands
## Server service sub commands.
const srv_cmds: table[count] of string = {
[8] = "NetrConnectionEnum",
[9] = "NetrFileEnum",
@@ -83,7 +81,7 @@ export {
[57] = "NetrShareDelEx",
} &redef &default=function(i: count):string { return fmt("unknown-srv-command-%d", i); };
## Workstation service sub commands
## Workstation service sub commands.
const wksta_cmds: table[count] of string = {
[0] = "NetrWkstaGetInfo",
[1] = "NetrWkstaSetInfo",
@@ -110,7 +108,7 @@ export {
type rpc_cmd_table: table[count] of string;
## The subcommands for RPC endpoints
## The subcommands for RPC endpoints.
const rpc_sub_cmds: table[string] of rpc_cmd_table = {
["4b324fc8-1670-01d3-1278-5a47bf6ee188"] = srv_cmds,
["6bffd098-a112-3610-9833-46c3f87e345a"] = wksta_cmds,


@@ -24,7 +24,7 @@ export {
## at least one, since some servers might support no authentication at all.
## It's important to note that not all of these are failures, since
## some servers require two-factor auth (e.g. password AND pubkey)
auth_attempts: count &log &optional;
auth_attempts: count &log &default=0;
## Direction of the connection. If the client was a local host
## logging into an external host, this would be OUTBOUND. INBOUND
## would be set for the opposite situation.
@@ -185,13 +185,7 @@ event ssh_auth_attempted(c: connection, authenticated: bool) &priority=5
return;
c$ssh$auth_success = authenticated;
if ( c$ssh?$auth_attempts )
c$ssh$auth_attempts += 1;
else
{
c$ssh$auth_attempts = 1;
}
c$ssh$auth_attempts += 1;
if ( authenticated && disable_analyzer_after_detection )
disable_analyzer(c$id, c$ssh$analyzer_id);


@@ -1 +1 @@
Support for Secure Sockets Layer (SSL) protocol analysis.
Support for Secure Sockets Layer (SSL)/Transport Layer Security (TLS) protocol analysis.


@@ -1,6 +1,7 @@
@load ./consts
@load ./main
@load ./mozilla-ca-list
@load ./ct-list
@load ./files
@load-sigs ./dpd.sig


@@ -30,7 +30,7 @@ export {
return fmt("unknown-%d", i);
};
## TLS content types:
# TLS content types:
const CHANGE_CIPHER_SPEC = 20;
const ALERT = 21;
const HANDSHAKE = 22;
@@ -41,7 +41,7 @@ export {
const V2_CLIENT_MASTER_KEY = 302;
const V2_SERVER_HELLO = 304;
## TLS Handshake types:
# TLS Handshake types:
const HELLO_REQUEST = 0;
const CLIENT_HELLO = 1;
const SERVER_HELLO = 2;
@@ -156,12 +156,17 @@ export {
[22] = "encrypt_then_mac",
[23] = "extended_master_secret",
[24] = "token_binding", # temporary till 2017-03-06 - draft-ietf-tokbind-negotiation
[25] = "cached_info",
[35] = "SessionTicket TLS",
[40] = "key_share", # new for TLS 1.3; was used for extended_random before. State as of TLS 1.3 draft 16
[41] = "pre_shared_key", # new for 1.3, state of draft-16
[42] = "early_data", # new for 1.3, state of draft-16
[43] = "supported_versions", # new for 1.3, state of draft-16
[44] = "cookie", # new for 1.3, state of draft-16
[45] = "psk_key_exchange_modes", # new for 1.3, state of draft-18
[46] = "TicketEarlyDataInfo", # new for 1.3, state of draft-16
[47] = "certificate_authorities", # new for 1.3, state of draft-18
[48] = "oid_filters", # new for 1.3, state of draft-18
[13172] = "next_protocol_negotiation",
[13175] = "origin_bound_certificates",
[13180] = "encrypted_client_certificates",
@@ -215,7 +220,7 @@ export {
[0xFF02] = "arbitrary_explicit_char2_curves"
} &default=function(i: count):string { return fmt("unknown-%d", i); };
## Mapping between numeric codes and human readable string for SSL/TLC EC point formats.
## Mapping between numeric codes and human readable string for SSL/TLS EC point formats.
# See http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml#tls-parameters-9
const ec_point_formats: table[count] of string = {
[0] = "uncompressed",
@@ -595,6 +600,11 @@ export {
const TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAC;
const TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAD;
const TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256 = 0xCCAE;
# draft-ietf-tls-ecdhe-psk-aead-05
const TLS_ECDHE_PSK_WITH_AES_128_GCM_SHA256 = 0xD001;
const TLS_ECDHE_PSK_WITH_AES_256_GCM_SHA384 = 0xD002;
const TLS_ECDHE_PSK_WITH_AES_128_CCM_8_SHA256 = 0xD003;
const TLS_ECDHE_PSK_WITH_AES_128_CCM_SHA256 = 0xD004;
const SSL_RSA_FIPS_WITH_DES_CBC_SHA = 0xFEFE;
const SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA = 0xFEFF;
@@ -973,6 +983,10 @@ export {
[TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256] = "TLS_ECDHE_PSK_WITH_CHACHA20_POLY1305_SHA256",
[TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256] = "TLS_DHE_PSK_WITH_CHACHA20_POLY1305_SHA256",
[TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256] = "TLS_RSA_PSK_WITH_CHACHA20_POLY1305_SHA256",
[TLS_ECDHE_PSK_WITH_AES_128_GCM_SHA256] = "TLS_ECDHE_PSK_WITH_AES_128_GCM_SHA256",
[TLS_ECDHE_PSK_WITH_AES_256_GCM_SHA384] = "TLS_ECDHE_PSK_WITH_AES_256_GCM_SHA384",
[TLS_ECDHE_PSK_WITH_AES_128_CCM_8_SHA256] = "TLS_ECDHE_PSK_WITH_AES_128_CCM_8_SHA256",
[TLS_ECDHE_PSK_WITH_AES_128_CCM_SHA256] = "TLS_ECDHE_PSK_WITH_AES_128_CCM_SHA256",
[SSL_RSA_FIPS_WITH_DES_CBC_SHA] = "SSL_RSA_FIPS_WITH_DES_CBC_SHA",
[SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA] = "SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA",
[SSL_RSA_FIPS_WITH_DES_CBC_SHA_2] = "SSL_RSA_FIPS_WITH_DES_CBC_SHA_2",


@@ -0,0 +1,48 @@
#
# Do not edit this file. This file is automatically generated by gen-ct-list.pl
# File generated at Thu Jul 27 16:59:25 2017
# File generated from https://www.gstatic.com/ct/log_list/all_logs_list.json
#
@load base/protocols/ssl
module SSL;
redef ct_logs += {
["\x68\xf6\x98\xf8\x1f\x64\x82\xbe\x3a\x8c\xee\xb9\x28\x1d\x4c\xfc\x71\x51\x5d\x67\x93\xd4\x44\xd1\x0a\x67\xac\xbb\x4f\x4f\xfb\xc4"] = CTInfo($description="Google 'Aviator' log", $operator="Google", $url="ct.googleapis.com/aviator/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xd7\xf4\xcc\x69\xb2\xe4\x0e\x90\xa3\x8a\xea\x5a\x70\x09\x4f\xef\x13\x62\xd0\x8d\x49\x60\xff\x1b\x40\x50\x07\x0c\x6d\x71\x86\xda\x25\x49\x8d\x65\xe1\x08\x0d\x47\x34\x6b\xbd\x27\xbc\x96\x21\x3e\x34\xf5\x87\x76\x31\xb1\x7f\x1d\xc9\x85\x3b\x0d\xf7\x1f\x3f\xe9"),
["\x29\x3c\x51\x96\x54\xc8\x39\x65\xba\xaa\x50\xfc\x58\x07\xd4\xb7\x6f\xbf\x58\x7a\x29\x72\xdc\xa4\xc3\x0c\xf4\xe5\x45\x47\xf4\x78"] = CTInfo($description="Google 'Icarus' log", $operator="Google", $url="ct.googleapis.com/icarus/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x4e\xd2\xbc\xbf\xb3\x08\x0a\xf7\xb9\xea\xa4\xc7\x1c\x38\x61\x04\xeb\x95\xe0\x89\x54\x68\x44\xb1\x66\xbc\x82\x7e\x4f\x50\x6c\x6f\x5c\xa3\xf0\xaa\x3e\xf4\xec\x80\xf0\xdb\x0a\x9a\x7a\xa0\x5b\x72\x00\x7c\x25\x0e\x19\xef\xaf\xb2\x62\x8d\x74\x43\xf4\x26\xf6\x14"),
["\xa4\xb9\x09\x90\xb4\x18\x58\x14\x87\xbb\x13\xa2\xcc\x67\x70\x0a\x3c\x35\x98\x04\xf9\x1b\xdf\xb8\xe3\x77\xcd\x0e\xc8\x0d\xdc\x10"] = CTInfo($description="Google 'Pilot' log", $operator="Google", $url="ct.googleapis.com/pilot/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x7d\xa8\x4b\x12\x29\x80\xa3\x3d\xad\xd3\x5a\x77\xb8\xcc\xe2\x88\xb3\xa5\xfd\xf1\xd3\x0c\xcd\x18\x0c\xe8\x41\x46\xe8\x81\x01\x1b\x15\xe1\x4b\xf1\x1b\x62\xdd\x36\x0a\x08\x18\xba\xed\x0b\x35\x84\xd0\x9e\x40\x3c\x2d\x9e\x9b\x82\x65\xbd\x1f\x04\x10\x41\x4c\xa0"),
["\xee\x4b\xbd\xb7\x75\xce\x60\xba\xe1\x42\x69\x1f\xab\xe1\x9e\x66\xa3\x0f\x7e\x5f\xb0\x72\xd8\x83\x00\xc4\x7b\x89\x7a\xa8\xfd\xcb"] = CTInfo($description="Google 'Rocketeer' log", $operator="Google", $url="ct.googleapis.com/rocketeer/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x20\x5b\x18\xc8\x3c\xc1\x8b\xb3\x31\x08\x00\xbf\xa0\x90\x57\x2b\xb7\x47\x8c\x6f\xb5\x68\xb0\x8e\x90\x78\xe9\xa0\x73\xea\x4f\x28\x21\x2e\x9c\xc0\xf4\x16\x1b\xaa\xf9\xd5\xd7\xa9\x80\xc3\x4e\x2f\x52\x3c\x98\x01\x25\x46\x24\x25\x28\x23\x77\x2d\x05\xc2\x40\x7a"),
["\xbb\xd9\xdf\xbc\x1f\x8a\x71\xb5\x93\x94\x23\x97\xaa\x92\x7b\x47\x38\x57\x95\x0a\xab\x52\xe8\x1a\x90\x96\x64\x36\x8e\x1e\xd1\x85"] = CTInfo($description="Google 'Skydiver' log", $operator="Google", $url="ct.googleapis.com/skydiver/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x12\x6c\x86\x0e\xf6\x17\xb1\x12\x6c\x37\x25\xd2\xad\x87\x3d\x0e\x31\xec\x21\xad\xb1\xcd\xbe\x14\x47\xb6\x71\x56\x85\x7a\x9a\xb7\x3d\x89\x90\x7b\xc6\x32\x3a\xf8\xda\xce\x8b\x01\xfe\x3f\xfc\x71\x91\x19\x8e\x14\x6e\x89\x7a\x5d\xb4\xab\x7e\xe1\x4e\x1e\x7c\xac"),
["\xa8\x99\xd8\x78\x0c\x92\x90\xaa\xf4\x62\xf3\x18\x80\xcc\xfb\xd5\x24\x51\xe9\x70\xd0\xfb\xf5\x91\xef\x75\xb0\xd9\x9b\x64\x56\x81"] = CTInfo($description="Google 'Submariner' log", $operator="Google", $url="ct.googleapis.com/submariner/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x39\xf8\x9f\x20\x62\xd4\x57\x55\x68\xa2\xef\x49\x2d\xf0\x39\x2d\x9a\xde\x44\xb4\x94\x30\xe0\x9e\x7a\x27\x3c\xab\x70\xf0\xd1\xfa\x51\x90\x63\x16\x57\x41\xad\xab\x6d\x1f\x80\x74\x30\x79\x02\x5e\x2d\x59\x84\x07\x24\x23\xf6\x9f\x35\xb8\x85\xb8\x42\x45\xa4\x4f"),
["\x1d\x02\x4b\x8e\xb1\x49\x8b\x34\x4d\xfd\x87\xea\x3e\xfc\x09\x96\xf7\x50\x6f\x23\x5d\x1d\x49\x70\x61\xa4\x77\x3c\x43\x9c\x25\xfb"] = CTInfo($description="Google 'Daedalus' log", $operator="Google", $url="ct.googleapis.com/daedalus/", $maximum_merge_delay=604800, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x6e\x0c\x1c\xba\xee\x2b\x6a\x41\x85\x60\x1d\x7b\x7e\xab\x08\x2c\xfc\x0c\x0a\xa5\x08\xb3\x3e\xd5\x70\x24\xd1\x6d\x1d\x2d\xb6\xb7\xf3\x8b\x36\xdc\x23\x4d\x95\x63\x12\xbb\xe4\x86\x8d\xcc\xe9\xd1\xee\xa1\x40\xa2\xdf\x0b\xa3\x06\x0a\x30\xca\x8d\xac\xa4\x29\x56"),
["\xb0\xcc\x83\xe5\xa5\xf9\x7d\x6b\xaf\x7c\x09\xcc\x28\x49\x04\x87\x2a\xc7\xe8\x8b\x13\x2c\x63\x50\xb7\xc6\xfd\x26\xe1\x6c\x6c\x77"] = CTInfo($description="Google 'Testtube' log", $operator="Google", $url="ct.googleapis.com/testtube/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xc3\xc8\xbc\x4b\xba\xa2\x18\x4b\x3d\x35\x7b\xf4\x64\x91\x61\xea\xeb\x8e\x99\x1d\x90\xed\xd3\xe9\xaf\x39\x3d\x5c\xd3\x46\x91\x45\xe3\xce\xac\x76\x48\x3b\xd1\x7e\x2c\x0a\x63\x00\x65\x8d\xf5\xae\x8e\x8c\xc7\x11\x25\x4f\x43\x2c\x9d\x19\xa1\xe1\x91\xa4\xb3\xfe"),
["\x56\x14\x06\x9a\x2f\xd7\xc2\xec\xd3\xf5\xe1\xbd\x44\xb2\x3e\xc7\x46\x76\xb9\xbc\x99\x11\x5c\xc0\xef\x94\x98\x55\xd6\x89\xd0\xdd"] = CTInfo($description="DigiCert Log Server", $operator="DigiCert", $url="ct1.digicert-ct.com/log/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x02\x46\xc5\xbe\x1b\xbb\x82\x40\x16\xe8\xc1\xd2\xac\x19\x69\x13\x59\xf8\xf8\x70\x85\x46\x40\xb9\x38\xb0\x23\x82\xa8\x64\x4c\x7f\xbf\xbb\x34\x9f\x4a\x5f\x28\x8a\xcf\x19\xc4\x00\xf6\x36\x06\x93\x65\xed\x4c\xf5\xa9\x21\x62\x5a\xd8\x91\xeb\x38\x24\x40\xac\xe8"),
["\x87\x75\xbf\xe7\x59\x7c\xf8\x8c\x43\x99\x5f\xbd\xf3\x6e\xff\x56\x8d\x47\x56\x36\xff\x4a\xb5\x60\xc1\xb4\xea\xff\x5e\xa0\x83\x0f"] = CTInfo($description="DigiCert Log Server 2", $operator="DigiCert", $url="ct2.digicert-ct.com/log/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xcc\x5d\x39\x2f\x66\xb8\x4c\x7f\xc1\x2e\x03\xa1\x34\xa3\xe8\x8a\x86\x02\xae\x4a\x11\xc6\xf7\x26\x6a\x37\x9b\xf0\x38\xf8\x5d\x09\x8d\x63\xe8\x31\x6b\x86\x66\xcf\x79\xb3\x25\x3c\x1e\xdf\x78\xb4\xa8\xc5\x69\xfa\xb7\xf0\x82\x79\x62\x43\xf6\xcc\xfe\x81\x66\x84"),
["\xdd\xeb\x1d\x2b\x7a\x0d\x4f\xa6\x20\x8b\x81\xad\x81\x68\x70\x7e\x2e\x8e\x9d\x01\xd5\x5c\x88\x8d\x3d\x11\xc4\xcd\xb6\xec\xbe\xcc"] = CTInfo($description="Symantec log", $operator="Symantec", $url="ct.ws.symantec.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x96\xea\xac\x1c\x46\x0c\x1b\x55\xdc\x0d\xfc\xb5\x94\x27\x46\x57\x42\x70\x3a\x69\x18\xe2\xbf\x3b\xc4\xdb\xab\xa0\xf4\xb6\x6c\xc0\x53\x3f\x4d\x42\x10\x33\xf0\x58\x97\x8f\x6b\xbe\x72\xf4\x2a\xec\x1c\x42\xaa\x03\x2f\x1a\x7e\x28\x35\x76\x99\x08\x3d\x21\x14\x86"),
["\xbc\x78\xe1\xdf\xc5\xf6\x3c\x68\x46\x49\x33\x4d\xa1\x0f\xa1\x5f\x09\x79\x69\x20\x09\xc0\x81\xb4\xf3\xf6\x91\x7f\x3e\xd9\xb8\xa5"] = CTInfo($description="Symantec 'Vega' log", $operator="Symantec", $url="vega.ws.symantec.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xea\x95\x9e\x02\xff\xee\xf1\x33\x6d\x4b\x87\xbc\xcd\xfd\x19\x17\x62\xff\x94\xd3\xd0\x59\x07\x3f\x02\x2d\x1c\x90\xfe\xc8\x47\x30\x3b\xf1\xdd\x0d\xb8\x11\x0c\x5d\x1d\x86\xdd\xab\xd3\x2b\x46\x66\xfb\x6e\x65\xb7\x3b\xfd\x59\x68\xac\xdf\xa6\xf8\xce\xd2\x18\x4d"),
["\xa7\xce\x4a\x4e\x62\x07\xe0\xad\xde\xe5\xfd\xaa\x4b\x1f\x86\x76\x87\x67\xb5\xd0\x02\xa5\x5d\x47\x31\x0e\x7e\x67\x0a\x95\xea\xb2"] = CTInfo($description="Symantec Deneb", $operator="Symantec", $url="deneb.ws.symantec.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x96\x82\x1e\xa3\xcd\x3a\x80\x84\x1e\x97\xb8\xb7\x07\x19\xae\x76\x1a\x0e\xf8\x55\x76\x9d\x12\x33\x4e\x91\x88\xe4\xd0\x48\x50\x5c\xc1\x9f\x6a\x72\xd6\x01\xf5\x14\xd6\xd0\x38\x6e\xe1\x32\xbc\x67\x0d\x37\xe8\xba\x22\x10\xd1\x72\x86\x79\x28\x96\xf9\x17\x1e\x98"),
["\x15\x97\x04\x88\xd7\xb9\x97\xa0\x5b\xeb\x52\x51\x2a\xde\xe8\xd2\xe8\xb4\xa3\x16\x52\x64\x12\x1a\x9f\xab\xfb\xd5\xf8\x5a\xd9\x3f"] = CTInfo($description="Symantec 'Sirius' log", $operator="Symantec", $url="sirius.ws.symantec.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xa3\x02\x64\x84\x22\xbb\x25\xec\x0d\xe3\xbc\xc2\xc9\x89\x7d\xdd\x45\xd0\xee\xe6\x15\x85\x8f\xd9\xe7\x17\x1b\x13\x80\xea\xed\xb2\x85\x37\xad\x6a\xc5\xd8\x25\x9d\xfa\xf4\xb4\xf3\x6e\x16\x28\x25\x37\xea\xa3\x37\x64\xb2\xc7\x0b\xfd\x51\xe5\xc1\x05\xf4\x0e\xb5"),
["\xcd\xb5\x17\x9b\x7f\xc1\xc0\x46\xfe\xea\x31\x13\x6a\x3f\x8f\x00\x2e\x61\x82\xfa\xf8\x89\x6f\xec\xc8\xb2\xf5\xb5\xab\x60\x49\x00"] = CTInfo($description="Certly.IO log", $operator="Certly", $url="log.certly.io/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x0b\x23\xcb\x85\x62\x98\x61\x48\x04\x73\xeb\x54\x5d\xf3\xd0\x07\x8c\x2d\x19\x2d\x8c\x36\xf5\xeb\x8f\x01\x42\x0a\x7c\x98\x26\x27\xc1\xb5\xdd\x92\x93\xb0\xae\xf8\x9b\x3d\x0c\xd8\x4c\x4e\x1d\xf9\x15\xfb\x47\x68\x7b\xba\x66\xb7\x25\x9c\xd0\x4a\xc2\x66\xdb\x48"),
["\x74\x61\xb4\xa0\x9c\xfb\x3d\x41\xd7\x51\x59\x57\x5b\x2e\x76\x49\xa4\x45\xa8\xd2\x77\x09\xb0\xcc\x56\x4a\x64\x82\xb7\xeb\x41\xa3"] = CTInfo($description="Izenpe log", $operator="Izenpe", $url="ct.izenpe.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x27\x64\x39\x0c\x2d\xdc\x50\x18\xf8\x21\x00\xa2\x0e\xed\x2c\xea\x3e\x75\xba\x9f\x93\x64\x09\x00\x11\xc4\x11\x17\xab\x5c\xcf\x0f\x74\xac\xb5\x97\x90\x93\x00\x5b\xb8\xeb\xf7\x27\x3d\xd9\xb2\x0a\x81\x5f\x2f\x0d\x75\x38\x94\x37\x99\x1e\xf6\x07\x76\xe0\xee\xbe"),
["\x89\x41\x44\x9c\x70\x74\x2e\x06\xb9\xfc\x9c\xe7\xb1\x16\xba\x00\x24\xaa\x36\xd5\x9a\xf4\x4f\x02\x04\x40\x4f\x00\xf7\xea\x85\x66"] = CTInfo($description="Izenpe 'Argi' log", $operator="Izenpe", $url="ct.izenpe.eus/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xd7\xc8\x0e\x23\x3e\x9e\x02\x3c\x9a\xb8\x07\x4a\x2a\x05\xff\x4a\x4b\x88\xd4\x8a\x4d\x39\xce\xf7\xc5\xf2\xb6\x37\xe9\xa3\xed\xe4\xf5\x45\x09\x0e\x67\x14\xfd\x53\x24\xd5\x3a\x94\xf2\xea\xb5\x13\xd9\x1d\x8b\x5c\xa7\xc3\xf3\x6b\xd8\x3f\x2d\x3b\x65\x72\x58\xd6"),
["\x9e\x4f\xf7\x3d\xc3\xce\x22\x0b\x69\x21\x7c\x89\x9e\x46\x80\x76\xab\xf8\xd7\x86\x36\xd5\xcc\xfc\x85\xa3\x1a\x75\x62\x8b\xa8\x8b"] = CTInfo($description="WoSign CT log #1", $operator="Wosign", $url="ct.wosign.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xd7\xec\x2f\x2b\x75\x4f\x37\xbc\xa3\x43\xba\x8b\x65\x66\x3c\x7d\x6a\xe5\x0c\x2a\xa6\xc2\xe5\x26\xfe\x0c\x7d\x4e\x7c\xf0\x3a\xbc\xe2\xd3\x22\xdc\x01\xd0\x1f\x6e\x43\x9c\x5c\x6e\x83\xad\x9c\x15\xf6\xc4\x8d\x60\xb5\x1d\xbb\xa3\x62\x69\x7e\xeb\xa7\xaa\x01\x9b"),
["\x41\xb2\xdc\x2e\x89\xe6\x3c\xe4\xaf\x1b\xa7\xbb\x29\xbf\x68\xc6\xde\xe6\xf9\xf1\xcc\x04\x7e\x30\xdf\xfa\xe3\xb3\xba\x25\x92\x63"] = CTInfo($description="WoSign log", $operator="Wosign", $url="ctlog.wosign.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xcc\x11\x88\x7b\x2d\x66\xcb\xae\x8f\x4d\x30\x66\x27\x19\x25\x22\x93\x21\x46\xb4\x2f\x01\xd3\xc6\xf9\x2b\xd5\xc8\xba\x73\x9b\x06\xa2\xf0\x8a\x02\x9c\xd0\x6b\x46\x18\x30\x85\xba\xe9\x24\x8b\x0e\xd1\x5b\x70\x28\x0c\x7e\xf1\x3a\x45\x7f\x5a\xf3\x82\x42\x60\x31"),
["\x63\xd0\x00\x60\x26\xdd\xe1\x0b\xb0\x60\x1f\x45\x24\x46\x96\x5e\xe2\xb6\xea\x2c\xd4\xfb\xc9\x5a\xc8\x66\xa5\x50\xaf\x90\x75\xb7"] = CTInfo($description="WoSign log 2", $operator="Wosign", $url="ctlog2.wosign.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xa5\x8c\xe8\x35\x2e\x8e\xe5\x6a\x75\xad\x5c\x4b\x31\x61\x29\x9d\x30\x57\x8e\x02\x13\x5f\xe9\xca\xbb\x52\xa8\x43\x05\x60\xbf\x0d\x73\x57\x77\xb2\x05\xd8\x67\xf6\xf0\x33\xc9\xf9\x44\xde\xb6\x53\x73\xaa\x0c\x55\xc2\x83\x0a\x4b\xce\x5e\x1a\xc7\x17\x1d\xb3\xcd"),
["\xc9\xcf\x89\x0a\x21\x10\x9c\x66\x6c\xc1\x7a\x3e\xd0\x65\xc9\x30\xd0\xe0\x13\x5a\x9f\xeb\xa8\x5a\xf1\x42\x10\xb8\x07\x24\x21\xaa"] = CTInfo($description="GDCA CT log #1", $operator="Wang Shengnan", $url="ct.gdca.com.cn/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xad\x0f\x30\xad\x9e\x79\xa4\x38\x89\x26\x54\x86\xab\x41\x72\x90\x6f\xfb\xca\x17\xa6\xac\xee\xc6\x9f\x7d\x02\x05\xec\x41\xa8\xc7\x41\x9d\x32\x49\xad\xb0\x39\xbd\x3a\x87\x3e\x7c\xee\x68\x6c\x60\xd1\x47\x2a\x93\xae\xe1\x40\xf4\x0b\xc8\x35\x3c\x1d\x0f\x65\xd3"),
["\x92\x4a\x30\xf9\x09\x33\x6f\xf4\x35\xd6\x99\x3a\x10\xac\x75\xa2\xc6\x41\x72\x8e\x7f\xc2\xd6\x59\xae\x61\x88\xff\xad\x40\xce\x01"] = CTInfo($description="GDCA CT log #2", $operator="GDCA", $url="ctlog.gdca.com.cn/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x5b\x4a\xc7\x01\xb7\x74\x54\xba\x40\x9c\x43\x75\x94\x3f\xac\xef\xb3\x71\x56\xb8\xd3\xe2\x7b\xae\xa1\xb1\x3e\x53\xaa\x97\x33\xa1\x82\xbb\x5f\x5d\x1c\x0b\xfa\x85\x0d\xbc\xf7\xe5\xa0\xe0\x22\xf0\xa0\x89\xd9\x0a\x7f\x5f\x26\x94\xd3\x24\xe3\x99\x2e\xe4\x15\x8d"),
["\xdb\x76\xfd\xad\xac\x65\xe7\xd0\x95\x08\x88\x6e\x21\x59\xbd\x8b\x90\x35\x2f\x5f\xea\xd3\xe3\xdc\x5e\x22\xeb\x35\x0a\xcc\x7b\x98"] = CTInfo($description="Comodo 'Dodo' CT log", $operator="Comodo", $url="dodo.ct.comodo.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x2c\xf5\xc2\x31\xf5\x63\x43\x6a\x16\x4a\x0a\xde\xc2\xee\x1f\x21\x6e\x12\x7e\x1d\xe5\x72\x8f\x74\x0b\x02\x99\xd3\xad\x69\xbc\x02\x35\x79\xf9\x61\xe9\xcf\x00\x08\x4f\x74\xa4\xa3\x34\x9a\xe0\x43\x1c\x23\x7e\x8f\x41\xd5\xee\xc7\x1c\xa3\x82\x8a\x40\xfa\xaa\xe0"),
["\xac\x3b\x9a\xed\x7f\xa9\x67\x47\x57\x15\x9e\x6d\x7d\x57\x56\x72\xf9\xd9\x81\x00\x94\x1e\x9b\xde\xff\xec\xa1\x31\x3b\x75\x78\x2d"] = CTInfo($description="Venafi log", $operator="Venafi", $url="ctlog.api.venafi.com/", $maximum_merge_delay=86400, $key="\x30\x82\x01\x22\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01\x01\x05\x00\x03\x82\x01\x0f\x00\x30\x82\x01\x0a\x02\x82\x01\x01\x00\xa2\x5a\x48\x1f\x17\x52\x95\x35\xcb\xa3\x5b\x3a\x1f\x53\x82\x76\x94\xa3\xff\x80\xf2\x1c\x37\x3c\xc0\xb1\xbd\xc1\x59\x8b\xab\x2d\x65\x93\xd7\xf3\xe0\x04\xd5\x9a\x6f\xbf\xd6\x23\x76\x36\x4f\x23\x99\xcb\x54\x28\xad\x8c\x15\x4b\x65\x59\x76\x41\x4a\x9c\xa6\xf7\xb3\x3b\x7e\xb1\xa5\x49\xa4\x17\x51\x6c\x80\xdc\x2a\x90\x50\x4b\x88\x24\xe9\xa5\x12\x32\x93\x04\x48\x90\x02\xfa\x5f\x0e\x30\x87\x8e\x55\x76\x05\xee\x2a\x4c\xce\xa3\x6a\x69\x09\x6e\x25\xad\x82\x76\x0f\x84\x92\xfa\x38\xd6\x86\x4e\x24\x8f\x9b\xb0\x72\xcb\x9e\xe2\x6b\x3f\xe1\x6d\xc9\x25\x75\x23\x88\xa1\x18\x58\x06\x23\x33\x78\xda\x00\xd0\x38\x91\x67\xd2\xa6\x7d\x27\x97\x67\x5a\xc1\xf3\x2f\x17\xe6\xea\xd2\x5b\xe8\x81\xcd\xfd\x92\x68\xe7\xf3\x06\xf0\xe9\x72\x84\xee\x01\xa5\xb1\xd8\x33\xda\xce\x83\xa5\xdb\xc7\xcf\xd6\x16\x7e\x90\x75\x18\xbf\x16\xdc\x32\x3b\x6d\x8d\xab\x82\x17\x1f\x89\x20\x8d\x1d\x9a\xe6\x4d\x23\x08\xdf\x78\x6f\xc6\x05\xbf\x5f\xae\x94\x97\xdb\x5f\x64\xd4\xee\x16\x8b\xa3\x84\x6c\x71\x2b\xf1\xab\x7f\x5d\x0d\x32\xee\x04\xe2\x90\xec\x41\x9f\xfb\x39\xc1\x02\x03\x01\x00\x01"),
["\x03\x01\x9d\xf3\xfd\x85\xa6\x9a\x8e\xbd\x1f\xac\xc6\xda\x9b\xa7\x3e\x46\x97\x74\xfe\x77\xf5\x79\xfc\x5a\x08\xb8\x32\x8c\x1d\x6b"] = CTInfo($description="Venafi Gen2 CT log", $operator="Venafi", $url="ctlog-gen2.api.venafi.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x8e\x27\x27\x7a\xb6\x55\x09\x74\xeb\x6c\x4b\x94\x84\x65\xbc\xe4\x15\xf1\xea\x5a\xd8\x7c\x0e\x37\xce\xba\x3f\x6c\x09\xda\xe7\x29\x96\xd3\x45\x50\x6f\xde\x1e\xb4\x1c\xd2\x83\x88\xff\x29\x2f\xce\xa9\xff\xdf\x34\xde\x75\x0f\xc0\xcc\x18\x0d\x94\x2e\xfc\x37\x01"),
["\xa5\x77\xac\x9c\xed\x75\x48\xdd\x8f\x02\x5b\x67\xa2\x41\x08\x9d\xf8\x6e\x0f\x47\x6e\xc2\x03\xc2\xec\xbe\xdb\x18\x5f\x28\x26\x38"] = CTInfo($description="CNNIC CT log", $operator="CNNIC", $url="ctserver.cnnic.cn/", $maximum_merge_delay=86400, $key="\x30\x82\x01\x22\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01\x01\x05\x00\x03\x82\x01\x0f\x00\x30\x82\x01\x0a\x02\x82\x01\x01\x00\xbf\xb5\x08\x61\x9a\x29\x32\x04\xd3\x25\x63\xe9\xd8\x85\xe1\x86\xe0\x1f\xd6\x5e\x9a\xf7\x33\x3b\x80\x1b\xe7\xb6\x3e\x5f\x2d\xa1\x66\xf6\x95\x4a\x84\xa6\x21\x56\x79\xe8\xf7\x85\xee\x5d\xe3\x7c\x12\xc0\xe0\x89\x22\x09\x22\x3e\xba\x16\x95\x06\xbd\xa8\xb9\xb1\xa9\xb2\x7a\xd6\x61\x2e\x87\x11\xb9\x78\x40\x89\x75\xdb\x0c\xdc\x90\xe0\xa4\x79\xd6\xd5\x5e\x6e\xd1\x2a\xdb\x34\xf4\x99\x3f\x65\x89\x3b\x46\xc2\x29\x2c\x15\x07\x1c\xc9\x4b\x1a\x54\xf8\x6c\x1e\xaf\x60\x27\x62\x0a\x65\xd5\x9a\xb9\x50\x36\x16\x6e\x71\xf6\x1f\x01\xf7\x12\xa7\xfc\xbf\xf6\x21\xa3\x29\x90\x86\x2d\x77\xde\xbb\x4c\xd4\xcf\xfd\xd2\xcf\x82\x2c\x4d\xd4\xf2\xc2\x2d\xac\xa9\xbe\xea\xc3\x19\x25\x43\xb2\xe5\x9a\x6c\x0d\xc5\x1c\xa5\x8b\xf7\x3f\x30\xaf\xb9\x01\x91\xb7\x69\x12\x12\xe5\x83\x61\xfe\x34\x00\xbe\xf6\x71\x8a\xc7\xeb\x50\x92\xe8\x59\xfe\x15\x91\xeb\x96\x97\xf8\x23\x54\x3f\x2d\x8e\x07\xdf\xee\xda\xb3\x4f\xc8\x3c\x9d\x6f\xdf\x3c\x2c\x43\x57\xa1\x47\x0c\x91\x04\xf4\x75\x4d\xda\x89\x81\xa4\x14\x06\x34\xb9\x98\xc3\xda\xf1\xfd\xed\x33\x36\xd3\x16\x2d\x35\x02\x03\x01\x00\x01"),
["\x34\xbb\x6a\xd6\xc3\xdf\x9c\x03\xee\xa8\xa4\x99\xff\x78\x91\x48\x6c\x9d\x5e\x5c\xac\x92\xd0\x1f\x7b\xfd\x1b\xce\x19\xdb\x48\xef"] = CTInfo($description="StartCom log", $operator="StartSSL", $url="ct.startssl.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x48\xf3\x59\xf3\xf6\x05\x18\xd3\xdb\xb2\xed\x46\x7e\xcf\xc8\x11\xb5\x57\xb1\xa8\xd6\x4c\xe6\x9f\xb7\x4a\x1a\x14\x86\x43\xa9\x48\xb0\xcb\x5a\x3f\x3c\x4a\xca\xdf\xc4\x82\x14\x55\x9a\xf8\xf7\x8e\x40\x55\xdc\xf4\xd2\xaf\xea\x75\x74\xfb\x4e\x7f\x60\x86\x2e\x51"),
["\xe0\x12\x76\x29\xe9\x04\x96\x56\x4e\x3d\x01\x47\x98\x44\x98\xaa\x48\xf8\xad\xb1\x66\x00\xeb\x79\x02\xa1\xef\x99\x09\x90\x62\x73"] = CTInfo($description="PuChuangSiDa CT log", $operator="Beijing PuChuangSiDa Technology Ltd.", $url="www.certificatetransparency.cn/ct/", $maximum_merge_delay=86400, $key="\x30\x82\x01\x22\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01\x01\x05\x00\x03\x82\x01\x0f\x00\x30\x82\x01\x0a\x02\x82\x01\x01\x00\xac\xcf\x2f\x4b\x70\xac\xf1\x0d\x96\xbf\xe8\x0a\xfe\x44\x9d\xd4\x8c\x17\x9d\xc3\x9a\x10\x11\x84\x13\xed\x8c\xf9\x37\x6d\x83\xe4\x00\x6f\xb1\x4b\xc0\xa6\x89\xc7\x61\x8f\x9a\x34\xbb\x56\x52\xca\x03\x56\x50\xef\x24\x7f\x4b\x49\xe9\x35\x81\xdd\xf0\xe7\x17\xf5\x72\xd2\x23\xc5\xe3\x13\x7f\xd7\x8e\x78\x35\x8f\x49\xde\x98\x04\x8a\x63\xaf\xad\xa2\x39\x70\x95\x84\x68\x4b\x91\x33\xfe\x4c\xe1\x32\x17\xc2\xf2\x61\xb8\x3a\x8d\x39\x7f\xd5\x95\x82\x3e\x56\x19\x50\x45\x6f\xcb\x08\x33\x0d\xd5\x19\x42\x08\x1a\x48\x42\x10\xf1\x68\xc3\xc3\x41\x13\xcb\x0d\x1e\xdb\x02\xb7\x24\x7a\x51\x96\x6e\xbc\x08\xea\x69\xaf\x6d\xef\x92\x98\x8e\x55\xf3\x65\xe5\xe8\x9c\xbe\x1a\x47\x60\x30\x7d\x7a\x80\xad\x56\x83\x7a\x93\xc3\xae\x93\x2b\x6a\x28\x8a\xa6\x5f\x63\x19\x0c\xbe\x7c\x7b\x21\x63\x41\x38\xb7\xf7\xe8\x76\x73\x6b\x85\xcc\xbc\x72\x2b\xc1\x52\xd0\x5b\x5d\x31\x4e\x9d\x2a\xf3\x4d\x9b\x64\x14\x99\x26\xc6\x71\xf8\x7b\xf8\x44\xd5\xe3\x23\x20\xf3\x0a\xd7\x8b\x51\x3e\x72\x80\xd2\x78\x78\x35\x2d\x4a\xe7\x40\x99\x11\x95\x34\xd4\x2f\x7f\xf9\x5f\x35\x37\x02\x03\x01\x00\x01"),
["\x55\x81\xd4\xc2\x16\x90\x36\x01\x4a\xea\x0b\x9b\x57\x3c\x53\xf0\xc0\xe4\x38\x78\x70\x25\x08\x17\x2f\xa3\xaa\x1d\x07\x13\xd3\x0c"] = CTInfo($description="Comodo 'Sabre' CT log", $operator="Comodo", $url="sabre.ct.comodo.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xf2\x6f\xd2\x89\x0f\x3f\xc5\xf8\x87\x1e\xab\x65\xb3\xd9\xbb\x17\x23\x8c\x06\x0e\x09\x55\x96\x3d\x0a\x08\xa2\xc5\x71\xb3\xd1\xa9\x2f\x28\x3e\x83\x10\xbf\x12\xd0\x44\x66\x15\xef\x54\xe1\x98\x80\xd0\xce\x24\x6d\x3e\x67\x9a\xe9\x37\x23\xce\x52\x93\x86\xda\x80"),
["\x6f\x53\x76\xac\x31\xf0\x31\x19\xd8\x99\x00\xa4\x51\x15\xff\x77\x15\x1c\x11\xd9\x02\xc1\x00\x29\x06\x8d\xb2\x08\x9a\x37\xd9\x13"] = CTInfo($description="Comodo 'Mammoth' CT log", $operator="Comodo", $url="mammoth.ct.comodo.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xef\xe4\x7d\x74\x2e\x15\x15\xb6\xe9\xbb\x23\x8b\xfb\x2c\xb5\xe1\xc7\x80\x98\x47\xfb\x40\x69\x68\xfc\x49\xad\x61\x4e\x83\x47\x3c\x1a\xb7\x8d\xdf\xff\x7b\x30\xb4\xba\xff\x2f\xcb\xa0\x14\xe3\xad\xd5\x85\x3f\x44\x59\x8c\x8c\x60\x8b\xd7\xb8\xb1\xbf\xae\x8c\x67"),
["\x53\x7b\x69\xa3\x56\x43\x35\xa9\xc0\x49\x04\xe3\x95\x93\xb2\xc2\x98\xeb\x8d\x7a\x6e\x83\x02\x36\x35\xc6\x27\x24\x8c\xd6\xb4\x40"] = CTInfo($description="Nordu 'flimsy' log", $operator="NORDUnet", $url="flimsy.ct.nordu.net:8080/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xe2\xa5\xaa\xe9\xa7\xe1\x05\x48\xb4\x39\xd7\x16\x51\x88\x72\x24\xb3\x57\x4e\x41\xaa\x43\xd3\xcc\x4b\x99\x6a\xa0\x28\x24\x57\x68\x75\x66\xfa\x4d\x8c\x11\xf6\xbb\xc5\x1b\x81\xc3\x90\xc2\xa0\xe8\xeb\xac\xfa\x05\x64\x09\x1a\x89\x68\xcd\x96\x26\x34\x71\x36\x91"),
["\xaa\xe7\x0b\x7f\x3c\xb8\xd5\x66\xc8\x6c\x2f\x16\x97\x9c\x9f\x44\x5f\x69\xab\x0e\xb4\x53\x55\x89\xb2\xf7\x7a\x03\x01\x04\xf3\xcd"] = CTInfo($description="Nordu 'plausible' log", $operator="NORDUnet", $url="plausible.ct.nordu.net/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xf5\x45\x7d\xfa\x33\xb6\x30\x24\xf3\x91\xa6\xe8\x74\xed\x85\xec\xb3\x34\xdc\xc5\x01\x73\xc3\x2b\x74\x0b\x64\x71\x6e\xaf\xe8\x60\x3d\xb5\xa4\xd3\xc3\xd4\x09\xaa\x87\xe6\xd0\x16\xdd\x02\xc6\xed\x24\xbf\xee\x9f\x21\x1f\xd3\x32\x24\x46\x05\xe3\x8f\x36\x98\xa9"),
["\xcf\x55\xe2\x89\x23\x49\x7c\x34\x0d\x52\x06\xd0\x53\x53\xae\xb2\x58\x34\xb5\x2f\x1f\x8d\xc9\x52\x68\x09\xf2\x12\xef\xdd\x7c\xa6"] = CTInfo($description="SHECA CT log 1", $operator="SHECA", $url="ctlog.sheca.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x11\xa9\x60\x2b\xb4\x71\x45\x66\xe0\x2e\xde\xd5\x87\x3b\xd5\xfe\xf0\x92\x37\xf4\x68\xc6\x92\xdd\x3f\x1a\xe2\xbc\x0c\x22\xd6\x99\x63\x29\x6e\x32\x28\x14\xc0\x76\x2c\x80\xa8\x22\x51\x91\xd6\xeb\xa6\xd8\xf1\xec\xf0\x07\x7e\xb0\xfc\x76\x70\x76\x72\x7c\x91\xe9"),
["\x32\xdc\x59\xc2\xd4\xc4\x19\x68\xd5\x6e\x14\xbc\x61\xac\x8f\x0e\x45\xdb\x39\xfa\xf3\xc1\x55\xaa\x42\x52\xf5\x00\x1f\xa0\xc6\x23"] = CTInfo($description="SHECA CT log 2", $operator="SHECA", $url="ct.sheca.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xb1\x8e\x1d\x8a\xaa\x3a\xac\xce\x86\xcb\x53\x76\xe8\xa8\x9d\x59\xbe\x17\x88\x03\x07\xf2\x27\xe0\x82\xbe\xb1\xfc\x67\x3b\x46\xee\xd3\xf1\x8d\xd6\x77\xe8\xa3\xb4\xdb\x09\x5c\xa0\x09\x43\xfc\x5f\xd0\x68\x34\x23\x24\x08\xc2\x4f\xd8\xd2\xb6\x9d\xed\xd5\x8c\xdb"),
["\x96\x06\xc0\x2c\x69\x00\x33\xaa\x1d\x14\x5f\x59\xc6\xe2\x64\x8d\x05\x49\xf0\xdf\x96\xaa\xb8\xdb\x91\x5a\x70\xd8\xec\xf3\x90\xa5"] = CTInfo($description="Akamai CT Log", $operator="Akamai", $url="ct.akamai.com/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x43\x79\xeb\x49\x5c\x50\x2a\x4a\x6a\x8f\x59\x93\xbc\xc3\x42\x76\xc2\x99\xf8\x27\x81\x3c\x06\x6c\xd2\xc8\x04\x8f\x74\x7b\xb4\xb5\x21\xf2\xe3\xa8\xdc\x33\xb9\xfe\x25\xe9\x3d\x04\xfc\x3f\xb4\xae\x40\xe3\x45\x7e\x84\x92\x2a\xd8\x52\xeb\x1f\x3f\x73\x13\xd0\xc8"),
["\x39\x37\x6f\x54\x5f\x7b\x46\x07\xf5\x97\x42\xd7\x68\xcd\x5d\x24\x37\xbf\x34\x73\xb6\x53\x4a\x48\x34\xbc\xf7\x2e\x68\x1c\x83\xc9"] = CTInfo($description="Alpha CT Log", $operator="Matt Palmer", $url="alpha.ctlogs.org/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\xa2\xf7\xed\x13\xe1\xd3\x5c\x02\x08\xc4\x8e\x8b\x9b\x8b\x3b\x39\x68\xc7\x92\x6a\x38\xa1\x4f\x23\xc5\xa5\x6f\x6f\xd7\x65\x81\xf8\xc1\x9b\xf4\x9f\xa9\x8b\x45\xf4\xb9\x4e\x1b\xc9\xa2\x69\x17\xa5\x78\x87\xd9\xce\x88\x6f\x41\x03\xbb\xa3\x2a\xe3\x77\x97\x8d\x78"),
["\x29\x6a\xfa\x2d\x56\x8b\xca\x0d\x2e\xa8\x44\x95\x6a\xe9\x72\x1f\xc3\x5f\xa3\x55\xec\xda\x99\x69\x3a\xaf\xd4\x58\xa7\x1a\xef\xdd"] = CTInfo($description="Let's Encrypt 'Clicky' log", $operator="Let's Encrypt", $url="clicky.ct.letsencrypt.org/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x1f\x1a\x15\x83\x77\x00\x75\x62\xb9\x9f\xf6\x06\x05\xed\x95\x89\x83\x41\x81\x97\xe7\xe0\xd4\x33\xfe\x76\xba\x3b\xc9\x49\xc2\xcd\xf1\xcf\xfe\x12\x70\xd7\xbe\xa8\x22\x5f\xb2\xa4\x67\x02\x7b\x71\xae\x1d\xac\xa8\xe9\xd1\x08\xd5\xce\xef\x33\x7a\xc3\x5f\x00\xdc"),
["\xb0\xb7\x84\xbc\x81\xc0\xdd\xc4\x75\x44\xe8\x83\xf0\x59\x85\xbb\x90\x77\xd1\x34\xd8\xab\x88\xb2\xb2\xe5\x33\x98\x0b\x8e\x50\x8b"] = CTInfo($description="Up In The Air 'Behind the Sofa' log", $operator="Up In The Air Consulting", $url="ct.filippo.io/behindthesofa/", $maximum_merge_delay=86400, $key="\x30\x59\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07\x03\x42\x00\x04\x59\x39\xb2\xa6\x94\xc6\x32\xb9\xfe\x63\x69\x1e\x30\x3b\xa3\x5b\xd5\xb0\x43\xc9\x50\x1e\x95\xa5\x2d\xa7\x4c\x4a\x49\x8e\x8b\x8f\xb7\xf8\xcc\xe2\x5b\x97\x72\xd5\xea\x3f\xb1\x21\x48\xe8\x44\x6b\x7f\xea\xef\x22\xff\xdf\xf4\x5f\x3b\x6d\x77\x04\xb1\xaf\x90\x8f"),
};
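Each index in the `ct_logs` table above is the log's binary log ID. Per RFC 6962, a log's ID is the SHA-256 digest of its DER-encoded public key, which here is the `$key` value of each entry; tables like this are typically generated that way. A minimal sketch (the input bytes below are placeholders, not a real log key):

```python
import hashlib

def ct_log_id(spki_der: bytes) -> bytes:
    # RFC 6962, section 3.2: a log's ID is the SHA-256 hash of its
    # DER-encoded SubjectPublicKeyInfo -- the $key field in the table above.
    return hashlib.sha256(spki_der).digest()

# The 32-byte table index for any entry should equal ct_log_id(key).
assert len(ct_log_id(b"placeholder key bytes")) == 32
```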

View file

@ -1,7 +1,7 @@
signature dpd_ssl_server {
ip-proto == tcp
# Server hello.
payload /^((\x15\x03[\x00\x01\x02\x03]....)?\x16\x03[\x00\x01\x02\x03]..\x02...\x03[\x00\x01\x02\x03]|...?\x04..\x00\x02).*/
payload /^((\x15\x03[\x00\x01\x02\x03]....)?\x16\x03[\x00\x01\x02\x03]..\x02...((\x03[\x00\x01\x02\x03\x04])|(\x7F[\x00-\x50]))|...?\x04..\x00\x02).*/
requires-reverse-signature dpd_ssl_client
enable "ssl"
tcp-state responder
@ -10,7 +10,7 @@ signature dpd_ssl_server {
signature dpd_ssl_client {
ip-proto == tcp
# Client hello.
payload /^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]|...?\x01[\x00\x03][\x00\x01\x02\x03]).*/
payload /^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]|...?\x01[\x00\x03][\x00\x01\x02\x03\x04]).*/
tcp-state originator
}
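The signature change above widens the accepted version bytes so that TLS 1.3 handshakes (`\x04`, plus `\x7F` draft versions on the server side) still trigger the SSL analyzer. The client-hello payload pattern can be exercised with an ordinary bytes regex; this is a sketch assuming Bro signature regexes behave like anchored byte regexes where `.` matches any byte:

```python
import re

# Bytes translation of the updated dpd_ssl_client payload signature.
client_hello = re.compile(
    rb"^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]"
    rb"|...?\x01[\x00\x03][\x00\x01\x02\x03\x04]).*",
    re.DOTALL,
)

# Minimal TLS 1.2 ClientHello prefix: record header (type 0x16, version 3.3,
# 2-byte length), handshake type 0x01, 3-byte length, client version 3.3.
tls12 = b"\x16\x03\x03\x00\x2e\x01\x00\x00\x2a\x03\x03" + b"\x00" * 32
assert client_hello.match(tls12)

# SSLv2-framed hello advertising TLS 1.2 matches the second alternative.
sslv2 = b"\x80\x1f\x01\x03\x03"
assert client_hello.match(sslv2)
```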

View file

@ -11,7 +11,7 @@ export {
## complete signing chain.
cert_chain: vector of Files::Info &optional;
## An ordered vector of all certicate file unique IDs for the
## An ordered vector of all certificate file unique IDs for the
## certificates offered by the server.
cert_chain_fuids: vector of string &optional &log;
@ -19,7 +19,7 @@ export {
## complete signing chain.
client_cert_chain: vector of Files::Info &optional;
## An ordered vector of all certicate file unique IDs for the
## An ordered vector of all certificate file unique IDs for the
## certificates offered by the client.
client_cert_chain_fuids: vector of string &optional &log;
@ -91,11 +91,26 @@ event bro_init() &priority=5
$describe = SSL::describe_file]);
}
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
event file_sniff(f: fa_file, meta: fa_metadata) &priority=5
{
if ( ! c?$ssl )
if ( |f$conns| != 1 )
return;
if ( ! f?$info || ! f$info?$mime_type )
return;
if ( ! ( f$info$mime_type == "application/x-x509-ca-cert" || f$info$mime_type == "application/x-x509-user-cert"
|| f$info$mime_type == "application/pkix-cert" ) )
return;
for ( cid in f$conns )
{
if ( ! f$conns[cid]?$ssl )
return;
local c = f$conns[cid];
}
if ( ! c$ssl?$cert_chain )
{
c$ssl$cert_chain = vector();
@ -104,7 +119,7 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
c$ssl$client_cert_chain_fuids = string_vec();
}
if ( is_orig )
if ( f$is_orig )
{
c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = f$info;
c$ssl$client_cert_chain_fuids[|c$ssl$client_cert_chain_fuids|] = f$id;
@ -114,12 +129,6 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
c$ssl$cert_chain[|c$ssl$cert_chain|] = f$info;
c$ssl$cert_chain_fuids[|c$ssl$cert_chain_fuids|] = f$id;
}
Files::add_analyzer(f, Files::ANALYZER_X509);
# always calculate hashes. They are not necessary for base scripts
# but very useful for identification, and required for policy scripts
Files::add_analyzer(f, Files::ANALYZER_MD5);
Files::add_analyzer(f, Files::ANALYZER_SHA1);
}
event ssl_established(c: connection) &priority=6

View file

@ -44,10 +44,10 @@ export {
## is being resumed. It's not logged.
client_key_exchange_seen: bool &default=F;
## Count to track if the server already sent an application data
## packet fot TLS 1.3. Used to track when a session was established.
## packet for TLS 1.3. Used to track when a session was established.
server_appdata: count &default=0;
## Flag to track if the client already sent an application data
## packet fot TLS 1.3. Used to track when a session was established.
## packet for TLS 1.3. Used to track when a session was established.
client_appdata: bool &default=F;
## Last alert that was seen during the connection.
@ -62,9 +62,8 @@ export {
analyzer_id: count &optional;
## Flag to indicate if this ssl session has been established
## succesfully, or if it was aborted during the handshake.
## successfully, or if it was aborted during the handshake.
established: bool &log &default=F;
## Flag to indicate if this record already has been logged, to
## prevent duplicates.
logged: bool &default=F;
@ -74,6 +73,26 @@ export {
## script sets this to Mozilla's root CA list.
const root_certs: table[string] of string = {} &redef;
## The record type which contains the field for the Certificate
## Transparency log bundle.
type CTInfo: record {
## Description of the Log.
description: string;
## Operator of the Log.
operator: string;
## Public key of the Log.
key: string;
## Maximum merge delay of the Log.
maximum_merge_delay: count;
## URL of the Log.
url: string;
};
## The Certificate Transparency log bundle. By default, the ct-list.bro
## script sets this to the current list of known logs. Entries
## are indexed by (binary) log-id.
const ct_logs: table[string] of CTInfo = {} &redef;
## If true, detach the SSL analyzer from the connection to prevent
## continuing to process encrypted traffic. Helps with performance
## (especially with large file transfers).
@ -90,6 +109,10 @@ export {
## Event that can be handled to access the SSL
## record as it is sent on to the logging framework.
global log_ssl: event(rec: Info);
## Hook that can be used to perform actions right before the log record
## is written.
global ssl_finishing: hook(c: connection);
}
redef record connection += {
@ -281,11 +304,22 @@ event ssl_established(c: connection) &priority=7
c$ssl$established = T;
}
event ssl_established(c: connection) &priority=20
{
hook ssl_finishing(c);
}
event ssl_established(c: connection) &priority=-5
{
finish(c, T);
}
event connection_state_remove(c: connection) &priority=20
{
if ( c?$ssl && ! c$ssl$logged )
hook ssl_finishing(c);
}
event connection_state_remove(c: connection) &priority=-5
{
if ( c?$ssl )

View file

@ -25,6 +25,10 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
case "port":
return cat(port_to_count(to_port(cat(v))));
case "enum":
fallthrough;
case "interval":
fallthrough;
case "addr":
fallthrough;
case "subnet":
@ -35,14 +39,15 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
case "count":
fallthrough;
case "time":
fallthrough;
case "double":
fallthrough;
case "bool":
fallthrough;
case "enum":
return cat(v);
case "double":
return fmt("%.16g", v);
case "bool":
local bval: bool = v;
return bval ? "true" : "false";
default:
break;
}
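The `to_json` change above stops printing doubles with plain `cat()` and instead formats them with `%.16g`, while booleans become literal `true`/`false`. A hypothetical Python mirror of just that scalar handling (not part of Bro) shows why `%.16g` is the interesting case:

```python
def json_scalar(v):
    # Sketch of the reworked to_json scalar cases: bools map to JSON
    # literals, doubles use %.16g, everything else stringifies as before.
    if isinstance(v, bool):
        return "true" if v else "false"
    if isinstance(v, float):
        return "%.16g" % v
    return str(v)

# 16 significant digits round-trips typical doubles without exposing
# binary representation noise such as 0.1000000000000000055511...
assert json_scalar(0.1) == "0.1"
assert json_scalar(True) == "true"
```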

View file

@ -0,0 +1,62 @@
##! Enable logging of OCSP responses.
#
# This script is in policy and not loaded by default because OCSP logging
# does not provide a lot of interesting information in most environments.
module OCSP;
export {
redef enum Log::ID += { LOG };
## The record type which contains the fields of the OCSP log.
type Info: record {
## Time when the OCSP reply was encountered.
ts: time &log;
## File id of the OCSP reply.
id: string &log;
## Hash algorithm used to generate issuerNameHash and issuerKeyHash.
hashAlgorithm: string &log;
## Hash of the issuer's distinguished name.
issuerNameHash: string &log;
## Hash of the issuer's public key.
issuerKeyHash: string &log;
## Serial number of the affected certificate.
serialNumber: string &log;
## Status of the affected certificate.
certStatus: string &log;
## Time at which the certificate was revoked.
revoketime: time &log &optional;
## Reason for which the certificate was revoked.
revokereason: string &log &optional;
## The time at which the status being shown is known to have been correct.
thisUpdate: time &log;
## The latest time at which new information about the status of the certificate will be available.
nextUpdate: time &log &optional;
};
## Event that can be handled to access the OCSP record
## as it is sent to the logging framework.
global log_ocsp: event(rec: Info);
}
event bro_init()
{
Log::create_stream(LOG, [$columns=Info, $ev=log_ocsp, $path="ocsp"]);
Files::register_for_mime_type(Files::ANALYZER_OCSP_REPLY, "application/ocsp-response");
}
event ocsp_response_certificate(f: fa_file, hashAlgorithm: string, issuerNameHash: string, issuerKeyHash: string, serialNumber: string, certStatus: string, revoketime: time, revokereason: string, thisUpdate: time, nextUpdate: time)
{
local wr = OCSP::Info($ts=f$info$ts, $id=f$id, $hashAlgorithm=hashAlgorithm, $issuerNameHash=issuerNameHash,
$issuerKeyHash=issuerKeyHash, $serialNumber=serialNumber, $certStatus=certStatus,
$thisUpdate=thisUpdate);
if ( revokereason != "" )
wr$revokereason = revokereason;
if ( time_to_double(revoketime) != 0 )
wr$revoketime = revoketime;
if ( time_to_double(nextUpdate) != 0 )
wr$nextUpdate = nextUpdate;
Log::write(LOG, wr);
}
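The event handler above treats a `revoketime` or `nextUpdate` of epoch zero as "not present" and leaves the corresponding `&optional` field unset. The same sentinel pattern in Python, as a sketch with a hypothetical helper name:

```python
from datetime import datetime, timezone

def opt_time(t: float):
    # Mirrors the script's time_to_double(x) != 0 check: absent OCSP time
    # fields arrive as the epoch (0.0) and should be left out of the log.
    if t == 0.0:
        return None
    return datetime.fromtimestamp(t, tz=timezone.utc)

assert opt_time(0.0) is None
```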

View file

@ -7,7 +7,7 @@ module Intel;
export {
redef enum Notice::Type += {
## Intel::Notice is a notice that happens when an intelligence
## This notice is generated when an intelligence
## indicator is denoted to be notice-worthy.
Intel::Notice
};

View file

@ -4,6 +4,7 @@
@load ./file-names
@load ./http-headers
@load ./http-url
@load ./pubkey-hashes
@load ./ssl
@load ./smtp
@load ./smtp-url-extraction

View file

@ -48,6 +48,7 @@ export {
["Microsoft-CryptoAPI/6.2"] = [$name="Windows", $version=[$major=6, $minor=2, $addl="8 or Server 2012"]],
["Microsoft-CryptoAPI/6.3"] = [$name="Windows", $version=[$major=6, $minor=3, $addl="8.1 or Server 2012 R2"]],
["Microsoft-CryptoAPI/6.4"] = [$name="Windows", $version=[$major=6, $minor=4, $addl="10 Technical Preview"]],
["Microsoft-CryptoAPI/10.0"] = [$name="Windows", $version=[$major=10, $minor=0]],
} &redef;
}

View file

@ -74,7 +74,7 @@ export {
reassem_file_size: count &log;
## Current size of packet fragment data in reassembly.
reassem_frag_size: count &log;
## Current size of unkown data in reassembly (this is only PIA buffer right now).
## Current size of unknown data in reassembly (this is only PIA buffer right now).
reassem_unknown_size: count &log;
};

View file

@ -1,4 +1,4 @@
##! This script add VLAN information to the connection logs
##! This script adds VLAN information to the connection log.
@load base/protocols/conn

View file

@ -0,0 +1,33 @@
##! Add Kerberos ticket hashes to the krb.log
@load base/protocols/krb
module KRB;
redef record Info += {
## Hash of the ticket used to authorize the request/transaction.
auth_ticket: string &log &optional;
## Hash of the ticket returned by the KDC.
new_ticket: string &log &optional;
};
event krb_ap_request(c: connection, ticket: KRB::Ticket, opts: KRB::AP_Options)
{
# Will be overwritten when request is a TGS
c$krb$request_type = "AP";
if ( ticket?$ciphertext )
c$krb$auth_ticket = md5_hash(ticket$ciphertext);
}
event krb_as_response(c: connection, msg: KDC_Response)
{
if ( msg$ticket?$ciphertext )
c$krb$new_ticket = md5_hash(msg$ticket$ciphertext);
}
event krb_tgs_response(c: connection, msg: KDC_Response)
{
if ( msg$ticket?$ciphertext )
c$krb$new_ticket = md5_hash(msg$ticket$ciphertext);
}
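The handlers above store an MD5 of the ticket ciphertext purely as a short, stable identifier for correlating tickets across `krb.log` entries, not as a security measure. A sketch of the same idea (the ciphertext bytes are hypothetical):

```python
import hashlib

def ticket_hash(ciphertext: bytes) -> str:
    # Same idea as md5_hash(ticket$ciphertext) in the script above: a
    # compact fingerprint for matching reused tickets between log entries.
    return hashlib.md5(ciphertext).hexdigest()

fp = ticket_hash(b"\x01\x02\x03")  # hypothetical ticket ciphertext
assert len(fp) == 32 and all(c in "0123456789abcdef" for c in fp)
```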

View file

@ -28,11 +28,6 @@ export {
PRINT_WRITE,
PRINT_OPEN,
PRINT_CLOSE,
UNKNOWN_READ,
UNKNOWN_WRITE,
UNKNOWN_OPEN,
UNKNOWN_CLOSE,
};
## The file actions which are logged.
@ -43,8 +38,6 @@ export {
PRINT_OPEN,
PRINT_CLOSE,
UNKNOWN_OPEN,
} &redef;
## The server response statuses which are *not* logged.
@ -71,7 +64,7 @@ export {
name : string &log &optional;
## Total size of the file.
size : count &log &default=0;
## If the rename action was seen, this will
## If the rename action was seen, this will be
## the file's previous name.
prev_name : string &log &optional;
## Last time this file was modified.
@ -89,7 +82,7 @@ export {
## Name of the tree path.
path : string &log &optional;
## The type of resource of the tree (disk share, printer share, named pipe, etc.)
## The type of resource of the tree (disk share, printer share, named pipe, etc.).
service : string &log &optional;
## File system of the tree.
native_file_system : string &log &optional;
@ -100,34 +93,34 @@ export {
## This record is for the smb_cmd.log
type CmdInfo: record {
## Timestamp of the command request
## Timestamp of the command request.
ts : time &log;
## Unique ID of the connection the request was sent over
## Unique ID of the connection the request was sent over.
uid : string &log;
## ID of the connection the request was sent over
## ID of the connection the request was sent over.
id : conn_id &log;
## The command sent by the client
## The command sent by the client.
command : string &log;
## The subcommand sent by the client, if present
## The subcommand sent by the client, if present.
sub_command : string &log &optional;
## Command argument sent by the client, if any
## Command argument sent by the client, if any.
argument : string &log &optional;
## Server reply to the client's command
## Server reply to the client's command.
status : string &log &optional;
## Round trip time from the request to the response.
rtt : interval &log &optional;
## Version of SMB for the command
## Version of SMB for the command.
version : string &log;
## Authenticated username, if available
## Authenticated username, if available.
username : string &log &optional;
## If this is related to a tree, this is the tree
## that was used for the current command.
tree : string &log &optional;
## The type of tree (disk share, printer share, named pipe, etc.)
## The type of tree (disk share, printer share, named pipe, etc.).
tree_service : string &log &optional;
## If the command referenced a file, store it here.
@ -173,8 +166,8 @@ export {
smb_state : State &optional;
};
## Internal use only
## Some commands shouldn't be logged by the smb1_message event
## Internal use only.
## Some commands shouldn't be logged by the smb1_message event.
const deferred_logging_cmds: set[string] = {
"NEGOTIATE",
"READ_ANDX",
@ -193,7 +186,7 @@ redef record FileInfo += {
## ID referencing this file.
fid : count &optional;
## UUID referencing this file if DCE/RPC
## UUID referencing this file if DCE/RPC.
uuid : string &optional;
};
@ -202,9 +195,9 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(SMB::CMD_LOG, [$columns=SMB::CmdInfo]);
Log::create_stream(SMB::FILES_LOG, [$columns=SMB::FileInfo]);
Log::create_stream(SMB::MAPPING_LOG, [$columns=SMB::TreeInfo]);
Log::create_stream(SMB::CMD_LOG, [$columns=SMB::CmdInfo, $path="smb_cmd"]);
Log::create_stream(SMB::FILES_LOG, [$columns=SMB::FileInfo, $path="smb_files"]);
Log::create_stream(SMB::MAPPING_LOG, [$columns=SMB::TreeInfo, $path="smb_mapping"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_SMB, ports);
}
@ -225,7 +218,6 @@ function write_file_log(state: State)
{
local f = state$current_file;
if ( f?$name &&
f$name !in pipe_names &&
f$action in logged_file_actions )
{
# Everything in this if statement is to avoid overlogging
@ -252,6 +244,12 @@ function write_file_log(state: State)
}
}
event smb_pipe_connect_heuristic(c: connection) &priority=5
{
c$smb_state$current_tree$path = "<unknown>";
c$smb_state$current_tree$share_type = "PIPE";
}
event file_state_remove(f: fa_file) &priority=-5
{
if ( f$source != "SMB" )
@ -266,4 +264,4 @@ event file_state_remove(f: fa_file) &priority=-5
}
return;
}
}
}

View file

@ -3,7 +3,7 @@
module SMB1;
redef record SMB::CmdInfo += {
## Dialects offered by the client
## Dialects offered by the client.
smb1_offered_dialects: string_vec &optional;
};
@ -108,11 +108,6 @@ event smb1_negotiate_response(c: connection, hdr: SMB1::Header, response: SMB1::
event smb1_negotiate_response(c: connection, hdr: SMB1::Header, response: SMB1::NegotiateResponse) &priority=-5
{
if ( SMB::write_cmd_log &&
c$smb_state$current_cmd$status !in SMB::ignored_command_statuses )
{
Log::write(SMB::CMD_LOG, c$smb_state$current_cmd);
}
}
event smb1_tree_connect_andx_request(c: connection, hdr: SMB1::Header, path: string, service: string) &priority=5
@@ -141,12 +136,6 @@ event smb1_tree_connect_andx_response(c: connection, hdr: SMB1::Header, service:
event smb1_tree_connect_andx_response(c: connection, hdr: SMB1::Header, service: string, native_file_system: string) &priority=-5
{
Log::write(SMB::MAPPING_LOG, c$smb_state$current_tree);
if ( SMB::write_cmd_log &&
c$smb_state$current_cmd$status !in SMB::ignored_command_statuses )
{
Log::write(SMB::CMD_LOG, c$smb_state$current_cmd);
}
}
event smb1_nt_create_andx_request(c: connection, hdr: SMB1::Header, name: string) &priority=5
@@ -192,17 +181,7 @@ event smb1_read_andx_request(c: connection, hdr: SMB1::Header, file_id: count, o
if ( c$smb_state$current_tree?$path && !c$smb_state$current_file?$path )
c$smb_state$current_file$path = c$smb_state$current_tree$path;
# We don't even try to log reads and writes to the files log.
#write_file_log(c$smb_state);
}
event smb1_read_andx_response(c: connection, hdr: SMB1::Header, data_len: count) &priority=5
{
if ( SMB::write_cmd_log &&
c$smb_state$current_cmd$status !in SMB::ignored_command_statuses )
{
Log::write(SMB::CMD_LOG, c$smb_state$current_cmd);
}
SMB::write_file_log(c$smb_state);
}
event smb1_write_andx_request(c: connection, hdr: SMB1::Header, file_id: count, offset: count, data_len: count) &priority=5
@@ -281,11 +260,7 @@ event smb1_session_setup_andx_request(c: connection, hdr: SMB1::Header, request:
event smb1_session_setup_andx_response(c: connection, hdr: SMB1::Header, response: SMB1::SessionSetupAndXResponse) &priority=-5
{
if ( SMB::write_cmd_log &&
c$smb_state$current_cmd$status !in SMB::ignored_command_statuses )
{
Log::write(SMB::CMD_LOG, c$smb_state$current_cmd);
}
# No behavior yet.
}
event smb1_transaction_request(c: connection, hdr: SMB1::Header, name: string, sub_cmd: count)


@@ -3,7 +3,7 @@
module SMB2;
redef record SMB::CmdInfo += {
## Dialects offered by the client
## Dialects offered by the client.
smb2_offered_dialects: index_vec &optional;
};
@@ -101,13 +101,9 @@ event smb2_negotiate_response(c: connection, hdr: SMB2::Header, response: SMB2::
event smb2_negotiate_response(c: connection, hdr: SMB2::Header, response: SMB2::NegotiateResponse) &priority=5
{
if ( SMB::write_cmd_log &&
c$smb_state$current_cmd$status !in SMB::ignored_command_statuses )
{
Log::write(SMB::CMD_LOG, c$smb_state$current_cmd);
}
# No behavior yet.
}
event smb2_tree_connect_request(c: connection, hdr: SMB2::Header, path: string) &priority=5
{
c$smb_state$current_tree$path = path;
@@ -123,6 +119,16 @@ event smb2_tree_connect_response(c: connection, hdr: SMB2::Header, response: SMB
Log::write(SMB::MAPPING_LOG, c$smb_state$current_tree);
}
event smb2_tree_disconnect_request(c: connection, hdr: SMB2::Header) &priority=5
{
if ( hdr$tree_id in c$smb_state$tid_map )
{
delete c$smb_state$tid_map[hdr$tree_id];
delete c$smb_state$current_tree;
delete c$smb_state$current_cmd$referenced_tree;
}
}
event smb2_create_request(c: connection, hdr: SMB2::Header, name: string) &priority=5
{
if ( name == "" )
@@ -142,7 +148,6 @@ event smb2_create_request(c: connection, hdr: SMB2::Header, name: string) &prior
c$smb_state$current_file$action = SMB::PRINT_OPEN;
break;
default:
#c$smb_state$current_file$action = SMB::UNKNOWN_OPEN;
c$smb_state$current_file$action = SMB::FILE_OPEN;
break;
}
@@ -150,6 +155,8 @@ event smb2_create_request(c: connection, hdr: SMB2::Header, name: string) &prior
event smb2_create_response(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID, file_size: count, times: SMB::MACTimes, attrs: SMB2::FileAttrs) &priority=5
{
SMB::set_current_file(c$smb_state, file_id$persistent+file_id$volatile);
c$smb_state$current_file$fid = file_id$persistent+file_id$volatile;
c$smb_state$current_file$size = file_size;
@@ -188,13 +195,14 @@ event smb2_read_request(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID, o
c$smb_state$current_file$action = SMB::PRINT_READ;
break;
default:
c$smb_state$current_file$action = SMB::FILE_OPEN;
c$smb_state$current_file$action = SMB::FILE_READ;
break;
}
}
event smb2_read_request(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID, offset: count, length: count) &priority=-5
{
SMB::write_file_log(c$smb_state);
}
event smb2_write_request(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID, offset: count, length: count) &priority=5
@@ -213,7 +221,6 @@ event smb2_write_request(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID,
c$smb_state$current_file$action = SMB::PRINT_WRITE;
break;
default:
#c$smb_state$current_file$action = SMB::UNKNOWN_WRITE;
c$smb_state$current_file$action = SMB::FILE_WRITE;
break;
}
@@ -221,6 +228,7 @@ event smb2_write_request(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID,
event smb2_write_request(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID, offset: count, length: count) &priority=-5
{
SMB::write_file_log(c$smb_state);
}
event smb2_file_rename(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID, dst_filename: string) &priority=5
@@ -254,7 +262,9 @@ event smb2_file_delete(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID, de
if ( ! delete_pending )
{
print "huh...";
# This is weird because it would mean that someone didn't
# set the delete bit in a delete request.
return;
}
switch ( c$smb_state$current_tree$share_type )
@@ -289,7 +299,6 @@ event smb2_close_request(c: connection, hdr: SMB2::Header, file_id: SMB2::GUID)
c$smb_state$current_file$action = SMB::PRINT_CLOSE;
break;
default:
#c$smb_state$current_file$action = SMB::UNKNOWN_CLOSE;
c$smb_state$current_file$action = SMB::FILE_CLOSE;
break;
}

Some files were not shown because too many files have changed in this diff.