Merge remote-tracking branch 'origin/master' into topic/seth/more-file-type-ident-fixes

This commit is contained in:
Seth Hall 2015-04-09 23:58:52 -04:00
commit 49926ad7bf
284 changed files with 8116 additions and 2701 deletions

165
CHANGES
View file

@@ -1,4 +1,169 @@
2.3-680 | 2015-04-06 16:02:43 -0500
* BIT-1371: remove CMake version check from binary package scripts.
(Jon Siwek)
2.3-679 | 2015-04-06 10:16:36 -0500
* Increase some unit test timeouts. (Jon Siwek)
* Fix Coverity warning in RDP analyzer. (Jon Siwek)
2.3-676 | 2015-04-02 10:10:39 -0500
* BIT-1366: improve checksum offloading warning.
(Frank Meier, Jon Siwek)
2.3-675 | 2015-03-30 17:05:05 -0500
* Add an RDP analyzer. (Josh Liburdi, Seth Hall, Johanna Amann)
2.3-640 | 2015-03-30 13:51:51 -0500
* BIT-1359: Limit maximum number of DTLS fragments to 30. (Johanna Amann)
2.3-637 | 2015-03-30 12:02:07 -0500
* Increase timeout duration in some broker tests. (Jon Siwek)
2.3-636 | 2015-03-30 11:26:32 -0500
* Updates related to SSH analysis. (Jon Siwek)
- Some scripts used wrong SSH module/namespace scoping on events.
- Fix outdated notice documentation related to SSH password guessing.
- Add a unit test for SSH password guessing notice.
2.3-635 | 2015-03-30 11:02:45 -0500
* Fix outdated documentation unit tests. (Jon Siwek)
2.3-634 | 2015-03-30 10:22:45 -0500
* Add a canonifier to a unit test's output. (Jon Siwek)
2.3-633 | 2015-03-25 18:32:59 -0700
* Log::write in signature framework was missing timestamp.
(Andrew Benson/Michel Laterman)
2.3-631 | 2015-03-25 11:03:12 -0700
* New SSH analyzer. (Vlad Grigorescu)
2.3-600 | 2015-03-25 10:23:46 -0700
* Add defensive checks in code to calculate log rotation intervals.
(Pete Nelson).
2.3-597 | 2015-03-23 12:50:04 -0700
* DTLS analyzer. (Johanna Amann)
* Implement correct parsing of TLS record fragmentation. (Johanna
Amann)
2.3-582 | 2015-03-23 11:34:25 -0700
* BIT-1313: In debug builds, "bro -B <x>" now supports "all" and
"help" for "<x>". "all" enables all debug streams. "help" prints a
list of available debug streams. (John Donnelly/Robin Sommer).
* BIT-1324: Allow logging filters to inherit default path from
stream. This allows the path for the default filter to be
specified explicitly through $path="..." when creating a stream.
Adapted the existing Log::create_stream calls to explicitly
specify a path value. (Jon Siwek)
* BIT-1199: Change the way the input framework deals with values it
cannot convert into BroVals, raising error messages instead of
aborting execution. (Johanna Amann)
* BIT-788: Use DNS QR field to better identify flow direction. (Jon
Siwek)
2.3-572 | 2015-03-23 13:04:53 -0500
* BIT-1226: Fix an example in quickstart docs. (Jon Siwek)
2.3-570 | 2015-03-23 09:51:20 -0500
* Correct a spelling error (Daniel Thayer)
* Improvement to SSL analyzer failure mode. (Johanna Amann)
2.3-565 | 2015-03-20 16:27:41 -0500
* BIT-978: Improve documentation of 'for' loop iterator invalidation.
(Jon Siwek)
2.3-564 | 2015-03-20 11:12:02 -0500
* BIT-725: Remove "unmatched_HTTP_reply" weird. (Jon Siwek)
2.3-562 | 2015-03-20 10:31:02 -0500
* BIT-1207: Add unit test to catch breaking changes to local.bro
(Jon Siwek)
* Fix failing sqlite leak test (Johanna Amann)
2.3-560 | 2015-03-19 13:17:39 -0500
* BIT-1255: Increase default values of
"tcp_max_above_hole_without_any_acks" and "tcp_max_initial_window"
from 4096 to 16384 bytes. (Jon Siwek)
2.3-559 | 2015-03-19 12:14:33 -0500
* BIT-849: turn SMTP reporter warnings into weirds,
"smtp_nested_mail_transaction" and "smtp_unmatched_end_of_data".
(Jon Siwek)
2.3-558 | 2015-03-18 22:50:55 -0400
* DNS: Log the type number for the DNS_RR_unknown_type weird. (Vlad Grigorescu)
2.3-555 | 2015-03-17 15:57:13 -0700
* Splitting test-all Makefile target into Bro tests and test-aux.
(Robin Sommer)
2.3-554 | 2015-03-17 15:40:39 -0700
* Deprecate &rotate_interval, &rotate_size, &encrypt. Addresses
BIT-1305. (Jon Siwek)
2.3-549 | 2015-03-17 09:12:18 -0700
* BIT-1077: Fix HTTP::log_server_header_names. Before, it just
re-logged fields from the client side. (Jon Siwek)
2.3-547 | 2015-03-17 09:07:51 -0700
* Update certificate validation script to cache valid intermediate
chains that it encounters on the wire and use those to try to
validate chains that might be missing intermediate certificates.
(Johanna Amann)
2.3-541 | 2015-03-13 15:44:08 -0500
* Make INSTALL a symlink to doc/install/install.rst (Jon Siwek)
* Fix Broxygen coverage. (Jon Siwek)
2.3-539 | 2015-03-13 14:19:27 -0500
* BIT-1335: Include timestamp in default extracted file names.
And add a policy script to extract all files. (Jon Siwek)
* BIT-1311: Identify GRE tunnels as Tunnel::GRE, not Tunnel::IP.
(Jon Siwek)
* BIT-1309: Add Connection class getter methods for flow labels.
(Jon Siwek)
2.3-536 | 2015-03-12 16:16:24 -0500
* Fix Broker leak tests. (Jon Siwek)

View file

@@ -1,3 +0,0 @@
See doc/install/install.rst for installation instructions.

1
INSTALL Symbolic link
View file

@@ -0,0 +1 @@
doc/install/install.rst

View file

@@ -51,13 +51,15 @@ distclean:
$(MAKE) -C testing $@
test:
-@( cd testing && make )
+-@( cd testing && make )
-test-all: test
-test -d aux/broctl && ( cd aux/broctl && make test-all )
-test -d aux/btest && ( cd aux/btest && make test )
-test -d aux/bro-aux && ( cd aux/bro-aux && make test )
-test -d aux/plugins && ( cd aux/plugins && make test-all )
+test-aux:
+-test -d aux/broctl && ( cd aux/broctl && make test-all )
+-test -d aux/btest && ( cd aux/btest && make test )
+-test -d aux/bro-aux && ( cd aux/bro-aux && make test )
+-test -d aux/plugins && ( cd aux/plugins && make test-all )
+test-all: test test-aux
configured:
@test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 )

31
NEWS
View file

@@ -28,6 +28,10 @@ New Functionality
- Bro now has support for the MySQL wire protocol. Activity gets
logged into mysql.log.
- Bro now features a completely rewritten, enhanced SSH analyzer. A lot
more information about SSH sessions is logged. The analyzer is able to
determine if logins failed or succeeded in most circumstances.
- Bro's file analysis now supports reassembly of files that are not
transferred/seen sequentially.
@@ -61,6 +65,12 @@ New Functionality
- [TODO] Add new BroControl features.
- A new icmp_sent_payload event provides access to ICMP payload.
- Bro now parses DTLS traffic.
- Bro now has an RDP analyzer.
Changed Functionality
---------------------
@@ -94,8 +104,29 @@ Changed Functionality
- conn.log gained a new field local_resp that works like local_orig,
just for the responder address of the connection.
- GRE tunnels are now identified as ``Tunnel::GRE`` instead of
``Tunnel::IP``.
- The default name for extracted files changed from extract-protocol-id
to extract-timestamp-protocol-id.
- [TODO] Add changed BroControl features.
- The weird named "unmatched_HTTP_reply" has been removed since it can
be detected at the script-layer and is handled correctly by the
default HTTP scripts.
- When adding a logging filter to a stream, the filter can now inherit
a default ``path`` field from the associated ``Log::Stream`` record.
- When adding a logging filter to a stream, the
``Log::default_path_func`` is now only automatically added to the
filter if it has neither a ``path`` nor a ``path_func`` already
explicitly set. Before, the default path function would always be set
for all filters which didn't specify their own ``path_func``.
- TODO: what SSH events got changed or removed?
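
As a quick sketch of what the inherited-path behavior above looks like in a
script (the ``MyApp`` module and filter name here are invented for
illustration), a filter added without its own ``$path`` now picks up the
stream's value:

    module MyApp;

    export {
        redef enum Log::ID += { LOG };
        type Info: record {
            ts: time &log;
            msg: string &log;
        };
    }

    event bro_init()
        {
        # The stream declares its path once...
        Log::create_stream(MyApp::LOG, [$columns=Info, $path="myapp"]);

        # ...and this filter, which sets neither $path nor $path_func,
        # inherits "myapp" instead of going through default_path_func.
        Log::add_filter(MyApp::LOG, [$name="copy"]);
        }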
Deprecated Functionality
------------------------

View file

@@ -1 +1 @@
-2.3-536
+2.3-680

@@ -1 +1 @@
-Subproject commit 52b273db79298daf5024d2d3d94824e7ab73a782
+Subproject commit 462e300bf9c37dcc39b70a4c2d89d19f7351c804

@@ -1 +1 @@
-Subproject commit 762d2722290ca0004d0da2b0b96baea6a3a7f3f4
+Subproject commit e864a0949e52a797f4000194b5c2980cf3618deb

@@ -1 +1 @@
-Subproject commit 71d820e9d8ca753fea8fb34ea3987993b28d79e4
+Subproject commit 7a14085394e54a950e477eb4fafb3827ff8dbdc3

View file

@@ -15,5 +15,5 @@ export {
event bro_init() &priority=5
{
BrokerComm::enable();
-Log::create_stream(Test::LOG, [$columns=Test::Info, $ev=log_test]);
+Log::create_stream(Test::LOG, [$columns=Test::Info, $ev=log_test, $path="test"]);
}

View file

@@ -344,7 +344,7 @@ example for the ``Foo`` module:
event bro_init() &priority=5
{
# Create the stream. This also adds a default filter automatically.
-Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo]);
+Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo, $path="foo"]);
}
You can also add the state to the :bro:type:`connection` record to make

View file

@@ -88,15 +88,15 @@ directly make modifications to the :bro:see:`Notice::Info` record
given as the argument to the hook.
Here's a simple example which tells Bro to send an email for all notices of
-type :bro:see:`SSH::Password_Guessing` if the server is 10.0.0.1:
+type :bro:see:`SSH::Password_Guessing` if the guesser attempted to log in to
+the server at 192.168.56.103:
-.. code:: bro
-hook Notice::policy(n: Notice::Info)
-{
-if ( n$note == SSH::Password_Guessing && n$id$resp_h == 10.0.0.1 )
-add n$actions[Notice::ACTION_EMAIL];
-}
+.. btest-include:: ${DOC_ROOT}/frameworks/notice_ssh_guesser.bro
+.. btest:: notice_ssh_guesser.bro
+@TEST-EXEC: btest-rst-cmd bro -C -r ${TRACES}/ssh/sshguess.pcap ${DOC_ROOT}/frameworks/notice_ssh_guesser.bro
+@TEST-EXEC: btest-rst-cmd cat notice.log
.. note::
@@ -112,8 +112,7 @@ a hook body to run before default hook bodies might look like this:
hook Notice::policy(n: Notice::Info) &priority=5
{
-if ( n$note == SSH::Password_Guessing && n$id$resp_h == 10.0.0.1 )
-add n$actions[Notice::ACTION_EMAIL];
+# Insert your code here.
}
Hooks can also abort later hook bodies with the ``break`` keyword. This

View file

@@ -0,0 +1,10 @@
@load protocols/ssh/detect-bruteforcing
redef SSH::password_guesses_limit=10;
hook Notice::policy(n: Notice::Info)
{
if ( n$note == SSH::Password_Guessing && /192\.168\.56\.103/ in n$sub )
add n$actions[Notice::ACTION_EMAIL];
}

View file

@@ -30,7 +30,7 @@ export {
event bro_init() &priority=3
{
-Log::create_stream(MimeMetrics::LOG, [$columns=Info]);
+Log::create_stream(MimeMetrics::LOG, [$columns=Info, $path="mime_metrics"]);
local r1: SumStats::Reducer = [$stream="mime.bytes",
$apply=set(SumStats::SUM)];
local r2: SumStats::Reducer = [$stream="mime.hits",

View file

@@ -0,0 +1,24 @@
@load protocols/ssl/expiring-certs
const watched_servers: set[addr] = {
87.98.220.10,
} &redef;
# Site::local_nets usually isn't something you need to modify if
# BroControl automatically sets it up from networks.cfg. It's
# shown here for completeness.
redef Site::local_nets += {
87.98.0.0/16,
};
hook Notice::policy(n: Notice::Info)
{
if ( n$note != SSL::Certificate_Expired )
return;
if ( n$id$resp_h !in watched_servers )
return;
add n$actions[Notice::ACTION_EMAIL];
}

View file

@@ -156,9 +156,11 @@ changes we want to make:
notice that means an SSL connection was established and the server's
certificate couldn't be validated using Bro's default trust roots, but
we want to ignore it.
-2) ``SSH::Login`` is a notice type that is triggered when an SSH connection
-attempt looks like it may have been successful, and we want email when
-that happens, but only for certain servers.
+2) ``SSL::Certificate_Expired`` is a notice type that is triggered when
+an SSL connection was established using an expired certificate. We
+want email when that happens, but only for certain servers on the
+local network (Bro can also proactively monitor for certs that will
+soon expire, but this is just for demonstration purposes).
We've defined *what* we want to do, but need to know *where* to do it.
The answer is to use a script written in the Bro programming language, so
@@ -203,7 +205,7 @@ the variable's value may not change at run-time, but whose initial value can be
modified via the ``redef`` operator at parse-time.
Let's continue on our path to modify the behavior for the two SSL
-and SSH notices. Looking at :doc:`/scripts/base/frameworks/notice/main.bro`,
+notices. Looking at :doc:`/scripts/base/frameworks/notice/main.bro`,
we see that it advertises:
.. code:: bro
@@ -216,7 +218,7 @@ we see that it advertises:
const ignored_types: set[Notice::Type] = {} &redef;
}
-That's exactly what we want to do for the SSL notice. Add to ``local.bro``:
+That's exactly what we want to do for the first notice. Add to ``local.bro``:
.. code:: bro
@@ -248,38 +250,30 @@ is valid before installing it and then restarting the Bro instance:
stopping bro ...
starting bro ...
-Now that the SSL notice is ignored, let's look at how to send an email on
-the SSH notice. The notice framework has a similar option called
-``emailed_types``, but using that would generate email for all SSH servers and
-we only want email for logins to certain ones. There is a ``policy`` hook
-that is actually what is used to implement the simple functionality of
-``ignored_types`` and
-``emailed_types``, but it's extensible such that the condition and action taken
-on notices can be user-defined.
-In ``local.bro``, let's define a new ``policy`` hook handler body
-that takes the email action for SSH logins only for a defined set of servers:
-.. code:: bro
-const watched_servers: set[addr] = {
-192.168.1.100,
-192.168.1.101,
-192.168.1.102,
-} &redef;
-hook Notice::policy(n: Notice::Info)
-{
-if ( n$note == SSH::SUCCESSFUL_LOGIN && n$id$resp_h in watched_servers )
-add n$actions[Notice::ACTION_EMAIL];
-}
+Now that the SSL notice is ignored, let's look at how to send an email
+on the other notice. The notice framework has a similar option called
+``emailed_types``, but using that would generate email for all SSL
+servers with expired certificates and we only want email for connections
+to certain ones. There is a ``policy`` hook that is actually what is
+used to implement the simple functionality of ``ignored_types`` and
+``emailed_types``, but it's extensible such that the condition and
+action taken on notices can be user-defined.
+In ``local.bro``, let's define a new ``policy`` hook handler body:
+.. btest-include:: ${DOC_ROOT}/quickstart/conditional-notice.bro
+.. btest:: conditional-notice
+@TEST-EXEC: btest-rst-cmd bro -r ${TRACES}/tls/tls-expired-cert.trace ${DOC_ROOT}/quickstart/conditional-notice.bro
+@TEST-EXEC: btest-rst-cmd cat notice.log
You'll just have to trust the syntax for now, but what we've done is
first declare our own variable to hold a set of watched addresses,
-``watched_servers``; then added a hook handler body to the policy that will
-generate an email whenever the notice type is an SSH login and the responding
-host stored
-inside the ``Info`` record's connection field is in the set of watched servers.
+``watched_servers``; then added a hook handler body to the policy that
+will generate an email whenever the notice type is an SSL expired
+certificate and the responding host stored inside the ``Info`` record's
+connection field is in the set of watched servers.
.. note:: Record field member access is done with the '$' character
instead of a '.' as might be expected from other languages, in

View file

@@ -43,8 +43,6 @@ The Bro scripting language supports the following attributes.
+-----------------------------+-----------------------------------------------+
| :bro:attr:`&mergeable` |Prefer set union for synchronized state. |
+-----------------------------+-----------------------------------------------+
-| :bro:attr:`&group` |Group event handlers to activate/deactivate. |
-+-----------------------------+-----------------------------------------------+
| :bro:attr:`&error_handler` |Used internally for reporter framework events. |
+-----------------------------+-----------------------------------------------+
| :bro:attr:`&type_column` |Used by input framework for "port" type. |
@@ -198,11 +196,6 @@ Here is a more detailed explanation of each attribute:
inconsistencies and can be avoided by unifying the two sets, rather
than merely overwriting the old value.
-.. bro:attr:: &group
-Groups event handlers such that those in the same group can be
-jointly activated or deactivated.
.. bro:attr:: &error_handler
Internally set on the events that are associated with the reporter

View file

@@ -294,7 +294,10 @@ Here are the statements that the Bro scripting language supports.
.. bro:keyword:: for
A "for" loop iterates over each element in a string, set, vector, or
-table and executes a statement for each iteration.
+table and executes a statement for each iteration. Currently,
+modifying a container's membership while iterating over it may
+result in undefined behavior, so avoid adding or removing elements
+inside the loop.
For each iteration of the loop, a loop variable will be assigned to an
element if the expression evaluates to a string or set, or an index if
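
A small illustrative sketch of the workaround implied by the added sentence
(the table and variable names are invented here): record what to remove while
iterating, and modify the table only after the loop ends.

    event bro_done()
        {
        local ages: table[string] of count = table(["alice"] = 31, ["bob"] = 40);
        local to_delete: set[string];

        # Don't add or delete elements of 'ages' inside this loop...
        for ( name in ages )
            {
            if ( ages[name] > 35 )
                add to_delete[name];
            }

        # ...apply the removals once iteration is finished.
        for ( victim in to_delete )
            delete ages[victim];
        }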

View file

@@ -23,7 +23,7 @@ function factorial(n: count): count
event bro_init()
{
# Create the logging stream.
-Log::create_stream(LOG, [$columns=Info]);
+Log::create_stream(LOG, [$columns=Info, $path="factor"]);
}
event bro_done()

View file

@@ -37,7 +37,7 @@ function mod5(id: Log::ID, path: string, rec: Factor::Info) : string
event bro_init()
{
-Log::create_stream(LOG, [$columns=Info]);
+Log::create_stream(LOG, [$columns=Info, $path="factor"]);
local filter: Log::Filter = [$name="split-mod5s", $path_func=mod5];
Log::add_filter(Factor::LOG, filter);

View file

@@ -22,7 +22,7 @@ function factorial(n: count): count
event bro_init()
{
-Log::create_stream(LOG, [$columns=Info, $ev=log_factor]);
+Log::create_stream(LOG, [$columns=Info, $ev=log_factor, $path="factor"]);
}
event bro_done()

View file

@@ -1,14 +0,0 @@
#!/bin/sh
# CMake/CPack versions before 2.8.3 have bugs that can create bad packages
# Since packages will be built on several different systems, a single
# version of CMake is required to obtain consistency, but can be increased
# as new versions of CMake come out that also produce working packages.
CMAKE_PACK_REQ="cmake version 2.8.6"
CMAKE_VER=`cmake -version`
if [ "${CMAKE_VER}" != "${CMAKE_PACK_REQ}" ]; then
echo "Package creation requires ${CMAKE_PACK_REQ}" >&2
exit 1
fi

View file

@@ -3,8 +3,6 @@
# This script generates binary DEB packages.
# They can be found in ../build/ after running.
-./check-cmake || { exit 1; }
# The DEB CPack generator depends on `dpkg-shlibdeps` to automatically
# determine what dependencies to set for the packages
type dpkg-shlibdeps > /dev/null 2>&1 || {

View file

@@ -3,14 +3,6 @@
# This script creates binary packages for Mac OS X.
# They can be found in ../build/ after running.
-cmake -P /dev/stdin << "EOF"
-if ( ${CMAKE_VERSION} VERSION_LESS 2.8.9 )
-message(FATAL_ERROR "CMake >= 2.8.9 required to build package")
-endif ()
-EOF
-[ $? -ne 0 ] && exit 1;
type sw_vers > /dev/null 2>&1 || {
echo "Unable to get Mac OS X version" >&2;
exit 1;

View file

@@ -3,8 +3,6 @@
# This script generates binary RPM packages.
# They can be found in ../build/ after running.
-./check-cmake || { exit 1; }
# The RPM CPack generator depends on `rpmbuild` to create packages
type rpmbuild > /dev/null 2>&1 || {
echo "\

View file

@@ -53,7 +53,8 @@ function set_limit(f: fa_file, args: Files::AnalyzerArgs, n: count): bool
function on_add(f: fa_file, args: Files::AnalyzerArgs)
{
if ( ! args?$extract_filename )
-args$extract_filename = cat("extract-", f$source, "-", f$id);
+args$extract_filename = cat("extract-", f$last_active, "-", f$source,
+"-", f$id);
f$info$extracted = args$extract_filename;
args$extract_filename = build_path_compressed(prefix, args$extract_filename);
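
For scripts that want to control the name themselves rather than rely on the
new timestamped default, the filename can still be supplied when the
extraction analyzer is attached; a minimal sketch (the naming scheme below is
arbitrary):

    event file_new(f: fa_file)
        {
        # An explicit extract_filename bypasses the default computed
        # in on_add() above.
        Files::add_analyzer(f, Files::ANALYZER_EXTRACT,
                            [$extract_filename=cat("my-extract-", f$source, "-", f$id)]);
        }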

View file

@@ -195,7 +195,7 @@ event Input::end_of_data(name: string, source: string)
event bro_init() &priority=5
{
-Log::create_stream(Unified2::LOG, [$columns=Info, $ev=log_unified2]);
+Log::create_stream(Unified2::LOG, [$columns=Info, $ev=log_unified2, $path="unified2"]);
if ( sid_msg == "" )
{

View file

@@ -36,7 +36,7 @@ export {
event bro_init() &priority=5
{
-Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509]);
+Log::create_stream(X509::LOG, [$columns=Info, $ev=log_x509, $path="x509"]);
}
redef record Files::Info += {

View file

@@ -159,5 +159,5 @@ event bro_init() &priority=5
terminate();
}
-Log::create_stream(Cluster::LOG, [$columns=Info]);
+Log::create_stream(Cluster::LOG, [$columns=Info, $path="cluster"]);
}

View file

@@ -164,7 +164,7 @@ const src_names = {
event bro_init() &priority=5
{
-Log::create_stream(Communication::LOG, [$columns=Info]);
+Log::create_stream(Communication::LOG, [$columns=Info, $path="communication"]);
}
function do_script_log_common(level: count, src: count, msg: string)

View file

@@ -38,7 +38,7 @@ redef record connection += {
event bro_init() &priority=5
{
-Log::create_stream(DPD::LOG, [$columns=Info]);
+Log::create_stream(DPD::LOG, [$columns=Info, $path="dpd"]);
}
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=10

View file

@@ -313,7 +313,7 @@ global analyzer_add_callbacks: table[Files::Tag] of function(f: fa_file, args: A
event bro_init() &priority=5
{
-Log::create_stream(Files::LOG, [$columns=Info, $ev=log_files]);
+Log::create_stream(Files::LOG, [$columns=Info, $ev=log_files, $path="files"]);
}
function set_info(f: fa_file)

View file

@@ -32,6 +32,8 @@ export {
FILE_NAME,
## Certificate SHA-1 hash.
CERT_HASH,
+## Public key MD5 hash. (SSH server host keys are a good example.)
+PUBKEY_HASH,
};
## Data about an :bro:type:`Intel::Item`.
@@ -174,7 +176,7 @@ global min_data_store: MinDataStore &redef;
event bro_init() &priority=5
{
-Log::create_stream(LOG, [$columns=Info, $ev=log_intel]);
+Log::create_stream(LOG, [$columns=Info, $ev=log_intel, $path="intel"]);
}
function find(s: Seen): bool
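
A short sketch of loading an indicator of the new ``PUBKEY_HASH`` type from a
script; the hash value and source label below are placeholders:

    event bro_init()
        {
        # MD5 of a known-bad SSH server host key.
        local item: Intel::Item = [
            $indicator="7192a287cb4ba3deaf48eee9d4193bc1",
            $indicator_type=Intel::PUBKEY_HASH,
            $meta=[$source="local-blacklist", $desc="example SSH host key"]];
        Intel::insert(item);
        }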

View file

@@ -50,11 +50,17 @@ export {
## The event receives a single same parameter, an instance of
## type ``columns``.
ev: any &optional;
+## A path that will be inherited by any filters added to the
+## stream which do not already specify their own path.
+path: string &optional;
};
## Builds the default path values for log filters if not otherwise
## specified by a filter. The default implementation uses *id*
-## to derive a name.
+## to derive a name. Upon adding a filter to a stream, if neither
+## ``path`` nor ``path_func`` is explicitly set by them, then
+## this function is used as the ``path_func``.
##
## id: The ID associated with the log stream.
##
@@ -143,7 +149,9 @@ export {
## to compute the string dynamically. It is ok to return
## different strings for separate calls, but be careful: it's
## easy to flood the disk by returning a new string for each
-## connection.
+## connection. Upon adding a filter to a stream, if neither
+## ``path`` nor ``path_func`` is explicitly set by them, then
+## :bro:see:`default_path_func` is used.
##
## id: The ID associated with the log stream.
##
@@ -379,6 +387,8 @@ export {
global active_streams: table[ID] of Stream = table();
}
+global all_streams: table[ID] of Stream = table();
# We keep a script-level copy of all filters so that we can manipulate them.
global filters: table[ID, string] of Filter;
@@ -463,6 +473,7 @@ function create_stream(id: ID, stream: Stream) : bool
return F;
active_streams[id] = stream;
+all_streams[id] = stream;
return add_default_filter(id);
}
@@ -470,6 +481,7 @@ function create_stream(id: ID, stream: Stream) : bool
function remove_stream(id: ID) : bool
{
delete active_streams[id];
+delete all_streams[id];
return __remove_stream(id);
}
@@ -482,10 +494,12 @@ function disable_stream(id: ID) : bool
function add_filter(id: ID, filter: Filter) : bool
{
-# This is a work-around for the fact that we can't forward-declare
-# the default_path_func and then use it as &default in the record
-# definition.
-if ( ! filter?$path_func )
+local stream = all_streams[id];
+if ( stream?$path && ! filter?$path )
+filter$path = stream$path;
+if ( ! filter?$path && ! filter?$path_func )
filter$path_func = default_path_func;
filters[id, filter$name] = filter;
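
Restating the precedence that add_filter() now applies, with a hypothetical
``Demo`` stream (all names below are invented for the sketch): an explicit
filter ``$path`` is kept, a filter without one inherits the stream's
``$path``, and ``default_path_func`` is only attached when neither ``path``
nor ``path_func`` is given.

    module Demo;

    export {
        redef enum Log::ID += { LOG };
        type Info: record {
            ts: time &log;
            note: string &log;
        };
    }

    event bro_init()
        {
        Log::create_stream(Demo::LOG, [$columns=Info, $path="demo"]);

        # Keeps its own path: writes to "demo-copy".
        Log::add_filter(Demo::LOG, [$name="copy", $path="demo-copy"]);

        # No $path, no $path_func: inherits "demo" from the stream,
        # so default_path_func is never attached to this filter.
        Log::add_filter(Demo::LOG, [$name="plain"]);
        }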

View file

@@ -21,7 +21,7 @@ export {
## underscores and using leading capitals on each word except for
## abbreviations which are kept in all capitals. For example,
## SSH::Password_Guessing is for hosts that have crossed a threshold of
-## heuristically determined failed SSH logins.
+## failed SSH logins.
type Type: enum {
## Notice reporting a count of how often a notice occurred.
Tally,
@@ -349,9 +349,9 @@ function log_mailing_postprocessor(info: Log::RotationInfo): bool
event bro_init() &priority=5
{
-Log::create_stream(Notice::LOG, [$columns=Info, $ev=log_notice]);
-Log::create_stream(Notice::ALARM_LOG, [$columns=Notice::Info]);
+Log::create_stream(Notice::LOG, [$columns=Info, $ev=log_notice, $path="notice"]);
+Log::create_stream(Notice::ALARM_LOG, [$columns=Notice::Info, $path="notice_alarm"]);
# If Bro is configured for mailing notices, set up mailing for alarms.
# Make sure that this alarm log is also output as text so that it can
# be packaged up and emailed later.

View file

@@ -294,7 +294,7 @@ global current_conn: connection;
event bro_init() &priority=5
{
-Log::create_stream(Weird::LOG, [$columns=Info, $ev=log_weird]);
+Log::create_stream(Weird::LOG, [$columns=Info, $ev=log_weird, $path="weird"]);
}
function flow_id_string(src: addr, dst: addr): string

View file

@@ -159,7 +159,7 @@ event filter_change_tracking()
event bro_init() &priority=5
{
-Log::create_stream(PacketFilter::LOG, [$columns=Info]);
+Log::create_stream(PacketFilter::LOG, [$columns=Info, $path="packet_filter"]);
# Preverify the capture and restrict filters to give more granular failure messages.
for ( id in capture_filters )

View file

@@ -45,7 +45,7 @@ export {
event bro_init() &priority=5
{
-Log::create_stream(Reporter::LOG, [$columns=Info]);
+Log::create_stream(Reporter::LOG, [$columns=Info, $path="reporter"]);
}
event reporter_info(t: time, msg: string, location: string) &priority=-5

View file

@@ -142,7 +142,7 @@ global did_sig_log: set[string] &read_expire = 1 hr;
event bro_init()
{
-Log::create_stream(Signatures::LOG, [$columns=Info, $ev=log_signature]);
+Log::create_stream(Signatures::LOG, [$columns=Info, $ev=log_signature, $path="signatures"]);
}
# Returns true if the given signature has already been triggered for the given
@@ -277,7 +277,7 @@ event signature_match(state: signature_state, msg: string, data: string)
orig, sig_id, hcount);
Log::write(Signatures::LOG,
-[$note=Multiple_Sig_Responders,
+[$ts=network_time(), $note=Multiple_Sig_Responders,
$src_addr=orig, $sig_id=sig_id, $event_msg=msg,
$host_count=hcount, $sub_msg=horz_scan_msg]);

View file

@@ -105,7 +105,7 @@ export {
event bro_init() &priority=5
{
-Log::create_stream(Software::LOG, [$columns=Info, $ev=log_software]);
+Log::create_stream(Software::LOG, [$columns=Info, $ev=log_software, $path="software"]);
}
type Description: record {

View file

@@ -89,7 +89,7 @@ redef likely_server_ports += { ayiya_ports, teredo_ports, gtpv1_ports };
event bro_init() &priority=5
{
-Log::create_stream(Tunnel::LOG, [$columns=Info]);
+Log::create_stream(Tunnel::LOG, [$columns=Info, $path="tunnel"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_AYIYA, ayiya_ports);
Analyzer::register_for_ports(Analyzer::ANALYZER_TEREDO, teredo_ports);

View file

@@ -929,7 +929,7 @@ const tcp_storm_interarrival_thresh = 1 sec &redef;
## seeing our peer's ACKs. Set to zero to turn off this determination.
##
## .. bro:see:: tcp_max_above_hole_without_any_acks tcp_excessive_data_without_further_acks
-const tcp_max_initial_window = 4096 &redef;
+const tcp_max_initial_window = 16384 &redef;
## If we're not seeing our peer's ACKs, the maximum volume of data above a
## sequence hole that we'll tolerate before assuming that there's been a packet
@@ -937,7 +937,7 @@ const tcp_max_initial_window = 4096 &redef;
## don't ever give up.
##
## .. bro:see:: tcp_max_initial_window tcp_excessive_data_without_further_acks
-const tcp_max_above_hole_without_any_acks = 4096 &redef;
+const tcp_max_above_hole_without_any_acks = 16384 &redef;
## If we've seen this much data without any of it being acked, we give up
## on that connection to avoid memory exhaustion due to buffering all that
@@ -2216,6 +2216,41 @@ export {
const heartbeat_interval = 1.0 secs &redef;
}
module SSH;
export {
## The client and server each have some preferences for the algorithms used
## in each direction.
type Algorithm_Prefs: record {
## The algorithm preferences for client to server communication
client_to_server: vector of string &optional;
## The algorithm preferences for server to client communication
server_to_client: vector of string &optional;
};
## This record lists the preferences of an SSH endpoint for
## algorithm selection. During the initial :abbr:`SSH (Secure Shell)`
## key exchange, each endpoint lists the algorithms
## that it supports, in order of preference. See
## :rfc:`4253#section-7.1` for details.
type Capabilities: record {
## Key exchange algorithms
kex_algorithms: string_vec;
## The algorithms supported for the server host key
server_host_key_algorithms: string_vec;
## Symmetric encryption algorithm preferences
encryption_algorithms: Algorithm_Prefs;
## Symmetric MAC algorithm preferences
mac_algorithms: Algorithm_Prefs;
## Compression algorithm preferences
compression_algorithms: Algorithm_Prefs;
## Language preferences
languages: Algorithm_Prefs &optional;
## Are these the capabilities of the server?
is_server: bool;
};
}
module GLOBAL;
## An NTP message.
@@ -2849,7 +2884,44 @@ export {
attributes : RADIUS::Attributes &optional;
};
}
module GLOBAL;
module RDP;
export {
type RDP::EarlyCapabilityFlags: record {
support_err_info_pdu: bool;
want_32bpp_session: bool;
support_statusinfo_pdu: bool;
strong_asymmetric_keys: bool;
support_monitor_layout_pdu: bool;
support_netchar_autodetect: bool;
support_dynvc_gfx_protocol: bool;
support_dynamic_time_zone: bool;
support_heartbeat_pdu: bool;
};
type RDP::ClientCoreData: record {
version_major: count;
version_minor: count;
desktop_width: count;
desktop_height: count;
color_depth: count;
sas_sequence: count;
keyboard_layout: count;
client_build: count;
client_name: string;
keyboard_type: count;
keyboard_sub: count;
keyboard_function_key: count;
ime_file_name: string;
post_beta2_color_depth: count &optional;
client_product_id: string &optional;
serial_number: count &optional;
high_color_depth: count &optional;
supported_color_depths: count &optional;
ec_flags: RDP::EarlyCapabilityFlags &optional;
dig_product_id: string &optional;
};
}
@load base/bif/plugins/Bro_SNMP.types.bif
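
A hedged sketch of consuming the new SSH records from a script. It assumes the
rewritten analyzer's ``ssh_capabilities`` event; that event name and signature
are an assumption here, not something shown in this diff, so treat it as
illustrative only:

    # Assumed event; only SSH::Capabilities itself is defined above.
    event ssh_capabilities(c: connection, cookie: string, capabilities: SSH::Capabilities)
        {
        local side = capabilities$is_server ? "server" : "client";

        # Report the endpoint's top key-exchange preference, if any.
        if ( |capabilities$kex_algorithms| > 0 )
            print fmt("%s %s prefers kex %s", c$uid, side,
                      capabilities$kex_algorithms[0]);
        }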

View file

@@ -49,6 +49,7 @@
@load base/protocols/mysql
@load base/protocols/pop3
@load base/protocols/radius
+@load base/protocols/rdp
@load base/protocols/snmp
@load base/protocols/smtp
@load base/protocols/socks

View file

@@ -50,7 +50,7 @@ event ChecksumOffloading::check()
bad_checksum_msg += "UDP";
}
-local message = fmt("Your %s invalid %s checksums, most likely from NIC checksum offloading.", packet_src, bad_checksum_msg);
+local message = fmt("Your %s invalid %s checksums, most likely from NIC checksum offloading. By default, packets with invalid checksums are discarded by Bro unless using the -C command-line option or toggling the 'ignore_checksums' variable. Alternatively, disable checksum offloading by the network adapter to ensure Bro analyzes the actual checksums that are transmitted.", packet_src, bad_checksum_msg);
Reporter::warning(message);
done = T;
}
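
The script-level counterpart of the ``-C`` flag mentioned in the new warning
text is a one-line redef, e.g. in ``local.bro``; a minimal sketch:

    # Accept packets whose TCP/UDP checksums do not validate (the same
    # effect as running "bro -C"), useful when NIC checksum offloading
    # makes captured checksums look wrong.
    redef ignore_checksums = T;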

View file

@@ -127,7 +127,7 @@ redef record connection += {
event bro_init() &priority=5
{
-Log::create_stream(Conn::LOG, [$columns=Info, $ev=log_conn]);
+Log::create_stream(Conn::LOG, [$columns=Info, $ev=log_conn, $path="conn"]);
}
function conn_state(c: connection, trans: transport_proto): string

View file

@@ -49,7 +49,7 @@ redef likely_server_ports += { 67/udp };
event bro_init() &priority=5
{
-Log::create_stream(DHCP::LOG, [$columns=Info, $ev=log_dhcp]);
+Log::create_stream(DHCP::LOG, [$columns=Info, $ev=log_dhcp, $path="dhcp"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DHCP, ports);
}

View file

@@ -36,7 +36,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5
{
-Log::create_stream(DNP3::LOG, [$columns=Info, $ev=log_dnp3]);
+Log::create_stream(DNP3::LOG, [$columns=Info, $ev=log_dnp3, $path="dnp3"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DNP3_TCP, ports);
}

View file

@@ -150,7 +150,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5
{
-Log::create_stream(DNS::LOG, [$columns=Info, $ev=log_dns]);
+Log::create_stream(DNS::LOG, [$columns=Info, $ev=log_dns, $path="dns"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_DNS, ports);
}
@@ -305,6 +305,9 @@ hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
if ( ans$answer_type == DNS_ANS )
{
+if ( ! c$dns?$query )
+c$dns$query = ans$query;
c$dns$AA = msg$AA;
c$dns$RA = msg$RA;

View file

@@ -52,7 +52,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5
{
-Log::create_stream(FTP::LOG, [$columns=Info, $ev=log_ftp]);
+Log::create_stream(FTP::LOG, [$columns=Info, $ev=log_ftp, $path="ftp"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_FTP, ports);
}

View file

@@ -142,7 +142,7 @@ redef likely_server_ports += { ports };
# Initialize the HTTP logging stream and ports.
event bro_init() &priority=5
{
-Log::create_stream(HTTP::LOG, [$columns=Info, $ev=log_http]);
+Log::create_stream(HTTP::LOG, [$columns=Info, $ev=log_http, $path="http"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_HTTP, ports);
}

View file

@@ -43,7 +43,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5
{
-Log::create_stream(IRC::LOG, [$columns=Info, $ev=irc_log]);
+Log::create_stream(IRC::LOG, [$columns=Info, $ev=irc_log, $path="irc"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_IRC, ports);
}

View file

@@ -34,7 +34,7 @@ redef likely_server_ports += { ports };
event bro_init() &priority=5
{
-Log::create_stream(Modbus::LOG, [$columns=Info, $ev=log_modbus]);
+Log::create_stream(Modbus::LOG, [$columns=Info, $ev=log_modbus, $path="modbus"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_MODBUS, ports);
}

View file

@@ -39,7 +39,7 @@ const ports = { 1434/tcp, 3306/tcp };
event bro_init() &priority=5
{
-Log::create_stream(mysql::LOG, [$columns=Info, $ev=log_mysql]);
+Log::create_stream(mysql::LOG, [$columns=Info, $ev=log_mysql, $path="mysql"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_MYSQL, ports);
}

View file

@@ -59,7 +59,7 @@ const ports = { 1812/udp };
event bro_init() &priority=5
{
-Log::create_stream(RADIUS::LOG, [$columns=Info, $ev=log_radius]);
+Log::create_stream(RADIUS::LOG, [$columns=Info, $ev=log_radius, $path="radius"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_RADIUS, ports);
}

View file

@@ -0,0 +1,3 @@
@load ./consts
@load ./main
@load-sigs ./dpd.sig

View file

@@ -0,0 +1,323 @@
module RDP;
export {
# http://www.c-amie.co.uk/technical/mstsc-versions/
const builds = {
[0419] = "RDP 4.0",
[2195] = "RDP 5.0",
[2221] = "RDP 5.0",
[2600] = "RDP 5.1",
[3790] = "RDP 5.2",
[6000] = "RDP 6.0",
[6001] = "RDP 6.1",
[6002] = "RDP 6.2",
[7600] = "RDP 7.0",
[7601] = "RDP 7.1",
[9200] = "RDP 8.0",
[9600] = "RDP 8.1",
[25189] = "RDP 8.0 (Mac)",
[25282] = "RDP 8.0 (Mac)"
} &default = function(n: count): string { return fmt("client_build-%d", n); };
const security_protocols = {
[0x00] = "RDP",
[0x01] = "SSL",
[0x02] = "HYBRID",
[0x08] = "HYBRID_EX"
} &default = function(n: count): string { return fmt("security_protocol-%d", n); };
const failure_codes = {
[0x01] = "SSL_REQUIRED_BY_SERVER",
[0x02] = "SSL_NOT_ALLOWED_BY_SERVER",
[0x03] = "SSL_CERT_NOT_ON_SERVER",
[0x04] = "INCONSISTENT_FLAGS",
[0x05] = "HYBRID_REQUIRED_BY_SERVER",
[0x06] = "SSL_WITH_USER_AUTH_REQUIRED_BY_SERVER"
} &default = function(n: count): string { return fmt("failure_code-%d", n); };
const cert_types = {
[1] = "RSA",
[2] = "X.509"
} &default = function(n: count): string { return fmt("cert_type-%d", n); };
const encryption_methods = {
[0] = "None",
[1] = "40bit",
[2] = "128bit",
[8] = "56bit",
[10] = "FIPS"
} &default = function(n: count): string { return fmt("encryption_method-%d", n); };
const encryption_levels = {
[0] = "None",
[1] = "Low",
[2] = "Client compatible",
[3] = "High",
[4] = "FIPS"
} &default = function(n: count): string { return fmt("encryption_level-%d", n); };
const high_color_depths = {
[0x0004] = "4bit",
[0x0008] = "8bit",
[0x000F] = "15bit",
[0x0010] = "16bit",
[0x0018] = "24bit"
} &default = function(n: count): string { return fmt("high_color_depth-%d", n); };
const color_depths = {
[0x0001] = "24bit",
[0x0002] = "16bit",
[0x0004] = "15bit",
[0x0008] = "32bit"
} &default = function(n: count): string { return fmt("color_depth-%d", n); };
const results = {
[0] = "Success",
[1] = "User rejected",
[2] = "Resources not available",
[3] = "Rejected for symmetry breaking",
[4] = "Locked conference",
} &default = function(n: count): string { return fmt("result-%d", n); };
# http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx
const languages = {
[1078] = "Afrikaans - South Africa",
[1052] = "Albanian - Albania",
[1156] = "Alsatian",
[1118] = "Amharic - Ethiopia",
[1025] = "Arabic - Saudi Arabia",
[5121] = "Arabic - Algeria",
[15361] = "Arabic - Bahrain",
[3073] = "Arabic - Egypt",
[2049] = "Arabic - Iraq",
[11265] = "Arabic - Jordan",
[13313] = "Arabic - Kuwait",
[12289] = "Arabic - Lebanon",
[4097] = "Arabic - Libya",
[6145] = "Arabic - Morocco",
[8193] = "Arabic - Oman",
[16385] = "Arabic - Qatar",
[10241] = "Arabic - Syria",
[7169] = "Arabic - Tunisia",
[14337] = "Arabic - U.A.E.",
[9217] = "Arabic - Yemen",
[1067] = "Armenian - Armenia",
[1101] = "Assamese",
[2092] = "Azeri (Cyrillic)",
[1068] = "Azeri (Latin)",
[1133] = "Bashkir",
[1069] = "Basque",
[1059] = "Belarusian",
[1093] = "Bengali (India)",
[2117] = "Bengali (Bangladesh)",
[5146] = "Bosnian (Bosnia/Herzegovina)",
[1150] = "Breton",
[1026] = "Bulgarian",
[1109] = "Burmese",
[1027] = "Catalan",
[1116] = "Cherokee - United States",
[2052] = "Chinese - People's Republic of China",
[4100] = "Chinese - Singapore",
[1028] = "Chinese - Taiwan",
[3076] = "Chinese - Hong Kong SAR",
[5124] = "Chinese - Macao SAR",
[1155] = "Corsican",
[1050] = "Croatian",
[4122] = "Croatian (Bosnia/Herzegovina)",
[1029] = "Czech",
[1030] = "Danish",
[1164] = "Dari",
[1125] = "Divehi",
[1043] = "Dutch - Netherlands",
[2067] = "Dutch - Belgium",
[1126] = "Edo",
[1033] = "English - United States",
[2057] = "English - United Kingdom",
[3081] = "English - Australia",
[10249] = "English - Belize",
[4105] = "English - Canada",
[9225] = "English - Caribbean",
[15369] = "English - Hong Kong SAR",
[16393] = "English - India",
[14345] = "English - Indonesia",
[6153] = "English - Ireland",
[8201] = "English - Jamaica",
[17417] = "English - Malaysia",
[5129] = "English - New Zealand",
[13321] = "English - Philippines",
[18441] = "English - Singapore",
[7177] = "English - South Africa",
[11273] = "English - Trinidad",
[12297] = "English - Zimbabwe",
[1061] = "Estonian",
[1080] = "Faroese",
[1065] = "Farsi",
[1124] = "Filipino",
[1035] = "Finnish",
[1036] = "French - France",
[2060] = "French - Belgium",
[11276] = "French - Cameroon",
[3084] = "French - Canada",
[9228] = "French - Democratic Rep. of Congo",
[12300] = "French - Cote d'Ivoire",
[15372] = "French - Haiti",
[5132] = "French - Luxembourg",
[13324] = "French - Mali",
[6156] = "French - Monaco",
[14348] = "French - Morocco",
[58380] = "French - North Africa",
[8204] = "French - Reunion",
[10252] = "French - Senegal",
[4108] = "French - Switzerland",
[7180] = "French - West Indies",
[1122] = "French - West Indies",
[1127] = "Fulfulde - Nigeria",
[1071] = "FYRO Macedonian",
[1110] = "Galician",
[1079] = "Georgian",
[1031] = "German - Germany",
[3079] = "German - Austria",
[5127] = "German - Liechtenstein",
[4103] = "German - Luxembourg",
[2055] = "German - Switzerland",
[1032] = "Greek",
[1135] = "Greenlandic",
[1140] = "Guarani - Paraguay",
[1095] = "Gujarati",
[1128] = "Hausa - Nigeria",
[1141] = "Hawaiian - United States",
[1037] = "Hebrew",
[1081] = "Hindi",
[1038] = "Hungarian",
[1129] = "Ibibio - Nigeria",
[1039] = "Icelandic",
[1136] = "Igbo - Nigeria",
[1057] = "Indonesian",
[1117] = "Inuktitut",
[2108] = "Irish",
[1040] = "Italian - Italy",
[2064] = "Italian - Switzerland",
[1041] = "Japanese",
[1158] = "K'iche",
[1099] = "Kannada",
[1137] = "Kanuri - Nigeria",
[2144] = "Kashmiri",
[1120] = "Kashmiri (Arabic)",
[1087] = "Kazakh",
[1107] = "Khmer",
[1159] = "Kinyarwanda",
[1111] = "Konkani",
[1042] = "Korean",
[1088] = "Kyrgyz (Cyrillic)",
[1108] = "Lao",
[1142] = "Latin",
[1062] = "Latvian",
[1063] = "Lithuanian",
[1134] = "Luxembourgish",
[1086] = "Malay - Malaysia",
[2110] = "Malay - Brunei Darussalam",
[1100] = "Malayalam",
[1082] = "Maltese",
[1112] = "Manipuri",
[1153] = "Maori - New Zealand",
[1146] = "Mapudungun",
[1102] = "Marathi",
[1148] = "Mohawk",
[1104] = "Mongolian (Cyrillic)",
[2128] = "Mongolian (Mongolian)",
[1121] = "Nepali",
[2145] = "Nepali - India",
[1044] = "Norwegian (Bokmål)",
[2068] = "Norwegian (Nynorsk)",
[1154] = "Occitan",
[1096] = "Oriya",
[1138] = "Oromo",
[1145] = "Papiamentu",
[1123] = "Pashto",
[1045] = "Polish",
[1046] = "Portuguese - Brazil",
[2070] = "Portuguese - Portugal",
[1094] = "Punjabi",
[2118] = "Punjabi (Pakistan)",
[1131] = "Quecha - Bolivia",
[2155] = "Quecha - Ecuador",
[3179] = "Quecha - Peru CB",
[1047] = "Rhaeto-Romanic",
[1048] = "Romanian",
[2072] = "Romanian - Moldava",
[1049] = "Russian",
[2073] = "Russian - Moldava",
[1083] = "Sami (Lappish)",
[1103] = "Sanskrit",
[1084] = "Scottish Gaelic",
[1132] = "Sepedi",
[3098] = "Serbian (Cyrillic)",
[2074] = "Serbian (Latin)",
[1113] = "Sindhi - India",
[2137] = "Sindhi - Pakistan",
[1115] = "Sinhalese - Sri Lanka",
[1051] = "Slovak",
[1060] = "Slovenian",
[1143] = "Somali",
[1070] = "Sorbian",
[3082] = "Spanish - Spain (Modern Sort)",
[1034] = "Spanish - Spain (Traditional Sort)",
[11274] = "Spanish - Argentina",
[16394] = "Spanish - Bolivia",
[13322] = "Spanish - Chile",
[9226] = "Spanish - Colombia",
[5130] = "Spanish - Costa Rica",
[7178] = "Spanish - Dominican Republic",
[12298] = "Spanish - Ecuador",
[17418] = "Spanish - El Salvador",
[4106] = "Spanish - Guatemala",
[18442] = "Spanish - Honduras",
[22538] = "Spanish - Latin America",
[2058] = "Spanish - Mexico",
[19466] = "Spanish - Nicaragua",
[6154] = "Spanish - Panama",
[15370] = "Spanish - Paraguay",
[10250] = "Spanish - Peru",
[20490] = "Spanish - Puerto Rico",
[21514] = "Spanish - United States",
[14346] = "Spanish - Uruguay",
[8202] = "Spanish - Venezuela",
[1072] = "Sutu",
[1089] = "Swahili",
[1053] = "Swedish",
[2077] = "Swedish - Finland",
[1114] = "Syriac",
[1064] = "Tajik",
[1119] = "Tamazight (Arabic)",
[2143] = "Tamazight (Latin)",
[1097] = "Tamil",
[1092] = "Tatar",
[1098] = "Telugu",
[1054] = "Thai",
[2129] = "Tibetan - Bhutan",
[1105] = "Tibetan - People's Republic of China",
[2163] = "Tigrigna - Eritrea",
[1139] = "Tigrigna - Ethiopia",
[1073] = "Tsonga",
[1074] = "Tswana",
[1055] = "Turkish",
[1090] = "Turkmen",
[1152] = "Uighur - China",
[1058] = "Ukrainian",
[1056] = "Urdu",
[2080] = "Urdu - India",
[2115] = "Uzbek (Cyrillic)",
[1091] = "Uzbek (Latin)",
[1075] = "Venda",
[1066] = "Vietnamese",
[1106] = "Welsh",
[1160] = "Wolof",
[1076] = "Xhosa",
[1157] = "Yakut",
[1144] = "Yi",
[1085] = "Yiddish",
[1130] = "Yoruba",
[1077] = "Zulu",
[1279] = "HID (Human Interface Device)",
} &default = function(n: count): string { return fmt("keyboard-%d", n); };
}

View file

@@ -0,0 +1,12 @@
signature dpd_rdp_client {
ip-proto == tcp
# Client request
payload /.*(Cookie: mstshash\=|Duca.*(rdpdr|rdpsnd|drdynvc|cliprdr))/
requires-reverse-signature dpd_rdp_server
enable "rdp"
}
signature dpd_rdp_server {
ip-proto == tcp
payload /(.{5}\xd0|.*McDn)/
}

View file

@@ -0,0 +1,269 @@
##! Implements base functionality for RDP analysis. Generates the rdp.log file.
@load ./consts
module RDP;
export {
redef enum Log::ID += { LOG };
type Info: record {
## Timestamp for when the event happened.
ts: time &log;
## Unique ID for the connection.
uid: string &log;
## The connection's 4-tuple of endpoint addresses/ports.
id: conn_id &log;
## Cookie value used by the client machine.
## This is typically a username.
cookie: string &log &optional;
## Status result for the connection. It's a mix between
## RDP negotation failure messages and GCC server create
## response messages.
result: string &log &optional;
## Security protocol chosen by the server.
security_protocol: string &log &optional;
## Keyboard layout (language) of the client machine.
keyboard_layout: string &log &optional;
## RDP client version used by the client machine.
client_build: string &log &optional;
## Name of the client machine.
client_name: string &log &optional;
## Product ID of the client machine.
client_dig_product_id: string &log &optional;
## Desktop width of the client machine.
desktop_width: count &log &optional;
## Desktop height of the client machine.
desktop_height: count &log &optional;
## The color depth requested by the client in
## the high_color_depth field.
requested_color_depth: string &log &optional;
## If the connection is being encrypted with native
## RDP encryption, this is the type of cert
## being used.
cert_type: string &log &optional;
## The number of certs seen. X.509 can transfer an
## entire certificate chain.
cert_count: count &log &default=0;
## Indicates if the provided certificate or certificate
## chain is permanent or temporary.
cert_permanent: bool &log &optional;
## Encryption level of the connection.
encryption_level: string &log &optional;
## Encryption method of the connection.
encryption_method: string &log &optional;
};
## If true, detach the RDP analyzer from the connection to prevent
## continuing to process encrypted traffic.
const disable_analyzer_after_detection = F &redef;
## The amount of time to monitor an RDP session from when it is first
## identified. When this interval is reached, the session is logged.
const rdp_check_interval = 10secs &redef;
## Event that can be handled to access the rdp record as it is sent on
## to the logging framework.
global log_rdp: event(rec: Info);
}
# Internal fields that aren't useful externally
redef record Info += {
## The analyzer ID used for the analyzer instance attached
## to each connection. It is not used for logging since it's a
## meaningless arbitrary number.
analyzer_id: count &optional;
## Track status of logging RDP connections.
done: bool &default=F;
};
redef record connection += {
rdp: Info &optional;
};
const ports = { 3389/tcp };
redef likely_server_ports += { ports };
event bro_init() &priority=5
{
Log::create_stream(RDP::LOG, [$columns=RDP::Info, $ev=log_rdp, $path="rdp"]);
Analyzer::register_for_ports(Analyzer::ANALYZER_RDP, ports);
}
function write_log(c: connection)
{
local info = c$rdp;
if ( info$done )
return;
# Mark this record as fully logged and finished.
info$done = T;
# Verify that the RDP session contains
# RDP data before writing it to the log.
if ( info?$cookie || info?$keyboard_layout || info?$result )
Log::write(RDP::LOG, info);
}
event check_record(c: connection)
{
# If the record was logged, then stop processing.
if ( c$rdp$done )
return;
# If the value rdp_check_interval has passed since the
# RDP session was started, then log the record.
local diff = network_time() - c$rdp$ts;
if ( diff > rdp_check_interval )
{
write_log(c);
# Remove the analyzer if it is still attached.
if ( disable_analyzer_after_detection &&
connection_exists(c$id) &&
c$rdp?$analyzer_id )
{
disable_analyzer(c$id, c$rdp$analyzer_id);
}
return;
}
else
{
# If the analyzer is attached and the duration
# to monitor the RDP session was not met, then
# reschedule the logging event.
schedule rdp_check_interval { check_record(c) };
}
}
function set_session(c: connection)
{
if ( ! c?$rdp )
{
c$rdp = [$ts=network_time(),$id=c$id,$uid=c$uid];
# The RDP session is scheduled to be logged from
# the time it is first initiated.
schedule rdp_check_interval { check_record(c) };
}
}
event rdp_connect_request(c: connection, cookie: string) &priority=5
{
set_session(c);
c$rdp$cookie = cookie;
}
event rdp_negotiation_response(c: connection, security_protocol: count) &priority=5
{
set_session(c);
c$rdp$security_protocol = security_protocols[security_protocol];
}
event rdp_negotiation_failure(c: connection, failure_code: count) &priority=5
{
set_session(c);
c$rdp$result = failure_codes[failure_code];
}
event rdp_client_core_data(c: connection, data: RDP::ClientCoreData) &priority=5
{
set_session(c);
c$rdp$keyboard_layout = RDP::languages[data$keyboard_layout];
c$rdp$client_build = RDP::builds[data$client_build];
c$rdp$client_name = data$client_name;
c$rdp$client_dig_product_id = data$dig_product_id;
c$rdp$desktop_width = data$desktop_width;
c$rdp$desktop_height = data$desktop_height;
if ( data?$ec_flags && data$ec_flags$want_32bpp_session )
c$rdp$requested_color_depth = "32bit";
else
c$rdp$requested_color_depth = RDP::high_color_depths[data$high_color_depth];
}
event rdp_gcc_server_create_response(c: connection, result: count) &priority=5
{
set_session(c);
c$rdp$result = RDP::results[result];
}
event rdp_server_security(c: connection, encryption_method: count, encryption_level: count) &priority=5
{
set_session(c);
c$rdp$encryption_method = RDP::encryption_methods[encryption_method];
c$rdp$encryption_level = RDP::encryption_levels[encryption_level];
}
event rdp_server_certificate(c: connection, cert_type: count, permanently_issued: bool) &priority=5
{
set_session(c);
c$rdp$cert_type = RDP::cert_types[cert_type];
# There are no events for proprietary/RSA certs right
# now so we manually count this one.
if ( c$rdp$cert_type == "RSA" )
++c$rdp$cert_count;
c$rdp$cert_permanent = permanently_issued;
}
event rdp_begin_encryption(c: connection, security_protocol: count) &priority=5
{
set_session(c);
if ( ! c$rdp?$result )
{
c$rdp$result = "encrypted";
}
c$rdp$security_protocol = security_protocols[security_protocol];
}
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
{
if ( c?$rdp && f$source == "RDP" )
{
# Count up X509 certs.
++c$rdp$cert_count;
Files::add_analyzer(f, Files::ANALYZER_X509);
Files::add_analyzer(f, Files::ANALYZER_MD5);
Files::add_analyzer(f, Files::ANALYZER_SHA1);
}
}
event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=5
{
if ( atype == Analyzer::ANALYZER_RDP )
{
set_session(c);
c$rdp$analyzer_id = aid;
}
}
event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count, reason: string) &priority=5
{
# If a protocol violation occurs, then log the record immediately.
if ( c?$rdp )
write_log(c);
}
event connection_state_remove(c: connection) &priority=-5
{
# If the connection is removed, then log the record immediately.
if ( c?$rdp )
{
write_log(c);
}
}
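
Both tunables exported by this script are &redef-able; a minimal local.bro sketch for adjusting them (the values below are illustrative only, not recommendations):

    @load base/protocols/rdp
    # Give slow handshakes a little longer before rdp.log is written.
    redef RDP::rdp_check_interval = 30secs;
    # Detach the RDP analyzer once the session is identified, skipping the
    # encrypted remainder of the connection.
    redef RDP::disable_analyzer_after_detection = T;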


@@ -92,7 +92,7 @@ redef likely_server_ports += { ports };
 event bro_init() &priority=5
     {
-    Log::create_stream(SMTP::LOG, [$columns=SMTP::Info, $ev=log_smtp]);
+    Log::create_stream(SMTP::LOG, [$columns=SMTP::Info, $ev=log_smtp, $path="smtp"]);
     Analyzer::register_for_ports(Analyzer::ANALYZER_SMTP, ports);
     }


@@ -66,7 +66,7 @@ redef likely_server_ports += { ports };
 event bro_init() &priority=5
     {
     Analyzer::register_for_ports(Analyzer::ANALYZER_SNMP, ports);
-    Log::create_stream(SNMP::LOG, [$columns=SNMP::Info, $ev=log_snmp]);
+    Log::create_stream(SNMP::LOG, [$columns=SNMP::Info, $ev=log_snmp, $path="snmp"]);
     }

 function init_state(c: connection, h: SNMP::Header): Info


@@ -43,7 +43,7 @@ redef likely_server_ports += { ports };
 event bro_init() &priority=5
     {
-    Log::create_stream(SOCKS::LOG, [$columns=Info, $ev=log_socks]);
+    Log::create_stream(SOCKS::LOG, [$columns=Info, $ev=log_socks, $path="socks"]);
     Analyzer::register_for_ports(Analyzer::ANALYZER_SOCKS, ports);
     }


@@ -1 +0,0 @@
-Support for Secure Shell (SSH) protocol analysis.


@@ -1,3 +1,2 @@
 @load ./main
 @load-sigs ./dpd.sig


@@ -1,6 +1,6 @@
 signature dpd_ssh_client {
     ip-proto == tcp
-    payload /^[sS][sS][hH]-/
+    payload /^[sS][sS][hH]-[12]\./
     requires-reverse-signature dpd_ssh_server
     enable "ssh"
     tcp-state originator
@@ -8,6 +8,6 @@ signature dpd_ssh_client {
 signature dpd_ssh_server {
     ip-proto == tcp
-    payload /^[sS][sS][hH]-/
+    payload /^[sS][sS][hH]-[12]\./
     tcp-state responder
 }


@@ -1,15 +1,5 @@
-##! Base SSH analysis script. The heuristic to blindly determine success or
-##! failure for SSH connections is implemented here. At this time, it only
-##! uses the size of the data being returned from the server to make the
-##! heuristic determination about success of the connection.
-##! Requires that :bro:id:`use_conn_size_analyzer` is set to T! The heuristic
-##! is not attempted if the connection size analyzer isn't enabled.
+##! Implements base functionality for SSH analysis. Generates the ssh.log file.

-@load base/protocols/conn
-@load base/frameworks/notice
-@load base/utils/site
-@load base/utils/thresholds
-@load base/utils/conn-ids
 @load base/utils/directions-and-hosts

 module SSH;
@@ -25,45 +15,63 @@ export {
         uid:          string &log;
         ## The connection's 4-tuple of endpoint addresses/ports.
         id:           conn_id &log;
-        ## Indicates if the login was heuristically guessed to be
-        ## "success", "failure", or "undetermined".
-        status:       string &log &default="undetermined";
+        ## SSH major version (1 or 2)
+        version:      count &log;
+        ## Authentication result (T=success, F=failure, unset=unknown)
+        auth_success: bool &log &optional;
         ## Direction of the connection. If the client was a local host
         ## logging into an external host, this would be OUTBOUND. INBOUND
         ## would be set for the opposite situation.
-        # TODO: handle local-local and remote-remote better.
+        # TODO - handle local-local and remote-remote better.
         direction:    Direction &log &optional;
-        ## Software string from the client.
+        ## The client's version string
         client:       string &log &optional;
-        ## Software string from the server.
+        ## The server's version string
         server:       string &log &optional;
-        ## Indicate if the SSH session is done being watched.
-        done:         bool &default=F;
+        ## The encryption algorithm in use
+        cipher_alg:      string &log &optional;
+        ## The signing (MAC) algorithm in use
+        mac_alg:         string &log &optional;
+        ## The compression algorithm in use
+        compression_alg: string &log &optional;
+        ## The key exchange algorithm in use
+        kex_alg:         string &log &optional;
+        ## The server host key's algorithm
+        host_key_alg:    string &log &optional;
+        ## The server's key fingerprint
+        host_key:        string &log &optional;
     };

-    ## The size in bytes of data sent by the server at which the SSH
-    ## connection is presumed to be successful.
-    const authentication_data_size = 4000 &redef;
+    ## The set of compression algorithms. We can't accurately determine
+    ## authentication success or failure when compression is enabled.
+    const compression_algorithms = set("zlib", "zlib@openssh.com") &redef;

     ## If true, we tell the event engine to not look at further data
     ## packets after the initial SSH handshake. Helps with performance
     ## (especially with large file transfers) but precludes some
-    ## kinds of analyses.
-    const skip_processing_after_detection = F &redef;
+    ## kinds of analyses. Defaults to T.
+    const skip_processing_after_detection = T &redef;

-    ## Event that is generated when the heuristic thinks that a login
-    ## was successful.
-    global heuristic_successful_login: event(c: connection);
-
-    ## Event that is generated when the heuristic thinks that a login
-    ## failed.
-    global heuristic_failed_login: event(c: connection);
-
-    ## Event that can be handled to access the :bro:type:`SSH::Info`
-    ## record as it is sent on to the logging framework.
+    ## Event that can be handled to access the SSH record as it is sent on
+    ## to the logging framework.
     global log_ssh: event(rec: Info);
+
+    ## Event that can be handled when the analyzer sees an SSH server host
+    ## key. This abstracts :bro:id:`ssh1_server_host_key` and
+    ## :bro:id:`ssh2_server_host_key`.
+    global ssh_server_host_key: event(c: connection, hash: string);
 }

+redef record Info += {
+    # This connection has been logged (internal use)
+    logged:       bool &default=F;
+    # Number of failures seen (internal use)
+    num_failures: count &default=0;
+    # Store capabilities from the first host for
+    # comparison with the second (internal use)
+    capabilities: Capabilities &optional;
+};
+
 redef record connection += {
     ssh: Info &optional;
 };
@@ -72,133 +80,152 @@ const ports = { 22/tcp };
 redef likely_server_ports += { ports };

 event bro_init() &priority=5
     {
-    Log::create_stream(SSH::LOG, [$columns=Info, $ev=log_ssh]);
     Analyzer::register_for_ports(Analyzer::ANALYZER_SSH, ports);
+    Log::create_stream(SSH::LOG, [$columns=Info, $ev=log_ssh, $path="ssh"]);
     }

 function set_session(c: connection)
     {
     if ( ! c?$ssh )
         {
-        local info: Info;
-        info$ts=network_time();
-        info$uid=c$uid;
-        info$id=c$id;
+        local info: SSH::Info;
+        info$ts  = network_time();
+        info$uid = c$uid;
+        info$id  = c$id;
         c$ssh = info;
         }
     }

-function check_ssh_connection(c: connection, done: bool)
-    {
-    # If already done watching this connection, just return.
-    if ( c$ssh$done )
-        return;
-
-    if ( done )
-        {
-        # If this connection is done, then we can look to see if
-        # this matches the conditions for a failed login. Failed
-        # logins are only detected at connection state removal.
-
-        if ( # Require originators and responders to have sent at least 50 bytes.
-             c$orig$size > 50 && c$resp$size > 50 &&
-             # Responders must be below 4000 bytes.
-             c$resp$size < authentication_data_size &&
-             # Responder must have sent fewer than 40 packets.
-             c$resp$num_pkts < 40 &&
-             # If there was a content gap we can't reliably do this heuristic.
-             c?$conn && c$conn$missed_bytes == 0 )# &&
-             # Only "normal" connections can count.
-             #c$conn?$conn_state && c$conn$conn_state in valid_states )
-            {
-            c$ssh$status = "failure";
-            event SSH::heuristic_failed_login(c);
-            }
-
-        if ( c$resp$size >= authentication_data_size )
-            {
-            c$ssh$status = "success";
-            event SSH::heuristic_successful_login(c);
-            }
-        }
-    else
-        {
-        # If this connection is still being tracked, then it's possible
-        # to watch for it to be a successful connection.
-        if ( c$resp$size >= authentication_data_size )
-            {
-            c$ssh$status = "success";
-            event SSH::heuristic_successful_login(c);
-            }
-        else
-            # This connection must be tracked longer. Let the scheduled
-            # check happen again.
-            return;
-        }
-
-    # Set the direction for the log.
-    c$ssh$direction = Site::is_local_addr(c$id$orig_h) ? OUTBOUND : INBOUND;
-
-    # Set the "done" flag to prevent the watching event from rescheduling
-    # after detection is done.
-    c$ssh$done=T;
-
-    if ( skip_processing_after_detection )
-        {
-        # Stop watching this connection, we don't care about it anymore.
-        skip_further_processing(c$id);
-        set_record_packets(c$id, F);
-        }
-    }
-
-event heuristic_successful_login(c: connection) &priority=-5
-    {
-    Log::write(SSH::LOG, c$ssh);
-    }
-
-event heuristic_failed_login(c: connection) &priority=-5
-    {
-    Log::write(SSH::LOG, c$ssh);
-    }
-
-event connection_state_remove(c: connection) &priority=-5
-    {
-    if ( c?$ssh )
-        {
-        check_ssh_connection(c, T);
-        if ( c$ssh$status == "undetermined" )
-            Log::write(SSH::LOG, c$ssh);
-        }
-    }
-
-event ssh_watcher(c: connection)
-    {
-    local id = c$id;
-    # don't go any further if this connection is gone already!
-    if ( ! connection_exists(id) )
-        return;
-
-    lookup_connection(c$id);
-    check_ssh_connection(c, F);
-    if ( ! c$ssh$done )
-        schedule +15secs { ssh_watcher(c) };
-    }
-
-event ssh_server_version(c: connection, version: string) &priority=5
+event ssh_server_version(c: connection, version: string)
     {
     set_session(c);
     c$ssh$server = version;
     }

-event ssh_client_version(c: connection, version: string) &priority=5
+event ssh_client_version(c: connection, version: string)
     {
     set_session(c);
     c$ssh$client = version;
-
-    # The heuristic detection for SSH relies on the ConnSize analyzer.
-    # Don't do the heuristics if it's disabled.
-    if ( use_conn_size_analyzer )
-        schedule +15secs { ssh_watcher(c) };
+    if ( ( |version| > 3 ) && ( version[4] == "1" ) )
+        c$ssh$version = 1;
+    if ( ( |version| > 3 ) && ( version[4] == "2" ) )
+        c$ssh$version = 2;
     }
+
+event ssh_auth_successful(c: connection, auth_method_none: bool)
+    {
+    # TODO - what to do here?
+    if ( !c?$ssh || ( c$ssh?$auth_success && c$ssh$auth_success ) )
+        return;
+
+    # We can't accurately tell for compressed streams
+    if ( c$ssh?$compression_alg && ( c$ssh$compression_alg in compression_algorithms ) )
+        return;
+
+    c$ssh$auth_success = T;
+
+    if ( skip_processing_after_detection)
+        {
+        skip_further_processing(c$id);
+        set_record_packets(c$id, F);
+        }
+    }
+
+event ssh_auth_successful(c: connection, auth_method_none: bool) &priority=-5
+    {
+    if ( c?$ssh && !c$ssh$logged )
+        {
+        c$ssh$logged = T;
+        Log::write(SSH::LOG, c$ssh);
+        }
+    }
+
+event ssh_auth_failed(c: connection)
+    {
+    if ( !c?$ssh || ( c$ssh?$auth_success && !c$ssh$auth_success ) )
+        return;
+
+    # We can't accurately tell for compressed streams
+    if ( c$ssh?$compression_alg && ( c$ssh$compression_alg in compression_algorithms ) )
+        return;
+
+    c$ssh$auth_success = F;
+    c$ssh$num_failures += 1;
+    }
+
+# Determine the negotiated algorithm
+function find_alg(client_algorithms: vector of string, server_algorithms: vector of string): string
+    {
+    for ( i in client_algorithms )
+        for ( j in server_algorithms )
+            if ( client_algorithms[i] == server_algorithms[j] )
+                return client_algorithms[i];
+    return "Algorithm negotiation failed";
+    }
+
+# This is a simple wrapper around find_alg for cases where client to server and server to client
+# negotiate different algorithms. This is rare, but provided for completeness.
+function find_bidirectional_alg(client_prefs: Algorithm_Prefs, server_prefs: Algorithm_Prefs): string
+    {
+    local c_to_s = find_alg(client_prefs$client_to_server, server_prefs$client_to_server);
+    local s_to_c = find_alg(client_prefs$server_to_client, server_prefs$server_to_client);
+
+    # Usually these are the same, but if they're not, return the details
+    return c_to_s == s_to_c ? c_to_s : fmt("To server: %s, to client: %s", c_to_s, s_to_c);
+    }
+
+event ssh_capabilities(c: connection, cookie: string, capabilities: Capabilities)
+    {
+    if ( !c?$ssh || ( c$ssh?$capabilities && c$ssh$capabilities$is_server == capabilities$is_server ) )
+        return;
+
+    if ( !c$ssh?$capabilities )
+        {
+        c$ssh$capabilities = capabilities;
+        return;
+        }
+
+    local client_caps = capabilities$is_server ? c$ssh$capabilities : capabilities;
+    local server_caps = capabilities$is_server ? capabilities : c$ssh$capabilities;
+
+    c$ssh$cipher_alg = find_bidirectional_alg(client_caps$encryption_algorithms,
+                                              server_caps$encryption_algorithms);
+    c$ssh$mac_alg = find_bidirectional_alg(client_caps$mac_algorithms,
+                                           server_caps$mac_algorithms);
+    c$ssh$compression_alg = find_bidirectional_alg(client_caps$compression_algorithms,
+                                                   server_caps$compression_algorithms);
+    c$ssh$kex_alg = find_alg(client_caps$kex_algorithms, server_caps$kex_algorithms);
+    c$ssh$host_key_alg = find_alg(client_caps$server_host_key_algorithms,
+                                  server_caps$server_host_key_algorithms);
+    }
+
+event connection_state_remove(c: connection) &priority=-5
+    {
+    if ( c?$ssh && !c$ssh$logged && c$ssh?$client && c$ssh?$server )
+        {
+        c$ssh$logged = T;
+        Log::write(SSH::LOG, c$ssh);
+        }
+    }
+
+function generate_fingerprint(c: connection, key: string)
+    {
+    if ( !c?$ssh )
+        return;
+
+    local lx = str_split(md5_hash(key), vector(2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30));
+    lx[0] = "";
+    c$ssh$host_key = sub(join_string_vec(lx, ":"), /:/, "");
+    }
+
+event ssh1_server_host_key(c: connection, p: string, e: string) &priority=5
+    {
+    generate_fingerprint(c, e + p);
+    }
+
+event ssh2_server_host_key(c: connection, key: string) &priority=5
+    {
+    generate_fingerprint(c, key);
     }
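
A hedged sketch of how a site might use the rewritten analyzer's knobs and events from local.bro (the value and the print statement are illustrative only):

    @load base/protocols/ssh
    # Keep analyzing packets after the handshake, at a performance cost.
    redef SSH::skip_processing_after_detection = F;

    # React to the authentication results the analyzer now reports directly.
    event ssh_auth_failed(c: connection)
        {
        print fmt("SSH authentication failure %s -> %s", c$id$orig_h, c$id$resp_h);
        }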


@@ -6,6 +6,11 @@ export {
     const TLSv10 = 0x0301;
     const TLSv11 = 0x0302;
     const TLSv12 = 0x0303;
+    const DTLSv10 = 0xFEFF;
+    # DTLSv11 does not exist
+    const DTLSv12 = 0xFEFD;

     ## Mapping between the constants and string values for SSL/TLS versions.
     const version_strings: table[count] of string = {
         [SSLv2] = "SSLv2",
@@ -13,6 +18,8 @@ export {
         [TLSv10] = "TLSv10",
         [TLSv11] = "TLSv11",
         [TLSv12] = "TLSv12",
+        [DTLSv10] = "DTLSv10",
+        [DTLSv12] = "DTLSv12"
     } &default=function(i: count):string { return fmt("unknown-%d", i); };

     ## TLS content types:


@@ -13,3 +13,10 @@ signature dpd_ssl_client {
     payload /^(\x16\x03[\x00\x01\x02\x03]..\x01...\x03[\x00\x01\x02\x03]|...?\x01[\x00\x03][\x00\x01\x02\x03]).*/
     tcp-state originator
 }
+
+signature dpd_dtls_client {
+    ip-proto == udp
+    # Client hello.
+    payload /^\x16\xfe[\xff\xfd]\x00\x00\x00\x00\x00\x00\x00...\x01...........\xfe[\xff\xfd].*/
+    enable "dtls"
+}


@@ -85,6 +85,10 @@ event bro_init() &priority=5
     Files::register_protocol(Analyzer::ANALYZER_SSL,
                              [$get_file_handle = SSL::get_file_handle,
                               $describe        = SSL::describe_file]);
+
+    Files::register_protocol(Analyzer::ANALYZER_DTLS,
+                             [$get_file_handle = SSL::get_file_handle,
+                              $describe        = SSL::describe_file]);
     }

 event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5


@@ -92,16 +92,22 @@ redef record Info += {
     delay_tokens: set[string] &optional;
 };

-const ports = {
+const ssl_ports = {
     443/tcp, 563/tcp, 585/tcp, 614/tcp, 636/tcp,
     989/tcp, 990/tcp, 992/tcp, 993/tcp, 995/tcp, 5223/tcp
 };
-redef likely_server_ports += { ports };
+
+# There are no well known DTLS ports at the moment. Let's
+# just add 443 for now for good measure - who knows :)
+const dtls_ports = { 443/udp };
+
+redef likely_server_ports += { ssl_ports, dtls_ports };

 event bro_init() &priority=5
     {
-    Log::create_stream(SSL::LOG, [$columns=Info, $ev=log_ssl]);
-    Analyzer::register_for_ports(Analyzer::ANALYZER_SSL, ports);
+    Log::create_stream(SSL::LOG, [$columns=Info, $ev=log_ssl, $path="ssl"]);
+    Analyzer::register_for_ports(Analyzer::ANALYZER_SSL, ssl_ports);
+    Analyzer::register_for_ports(Analyzer::ANALYZER_DTLS, dtls_ports);
     }

 function set_session(c: connection)
@@ -268,7 +274,7 @@ event connection_state_remove(c: connection) &priority=-5
 event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=5
     {
-    if ( atype == Analyzer::ANALYZER_SSL )
+    if ( atype == Analyzer::ANALYZER_SSL || atype == Analyzer::ANALYZER_DTLS )
         {
         set_session(c);
         c$ssl$analyzer_id = aid;
@@ -278,6 +284,6 @@ event protocol_confirmation(c: connection, atype: Analyzer::Tag, aid: count) &priority=5
 event protocol_violation(c: connection, atype: Analyzer::Tag, aid: count,
                          reason: string) &priority=5
     {
-    if ( c?$ssl )
+    if ( c?$ssl && ( atype == Analyzer::ANALYZER_SSL || atype == Analyzer::ANALYZER_DTLS ) )
         finish(c, T);
     }
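
If DTLS shows up on other UDP ports in a given environment, the analyzer can be registered for them too; a minimal local.bro sketch (4433/udp is an arbitrary example port):

    redef likely_server_ports += { 4433/udp };

    event bro_init() &priority=4
        {
        Analyzer::register_for_ports(Analyzer::ANALYZER_DTLS, set(4433/udp));
        }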


@@ -35,7 +35,7 @@ redef likely_server_ports += { ports };
 event bro_init() &priority=5
     {
-    Log::create_stream(Syslog::LOG, [$columns=Info]);
+    Log::create_stream(Syslog::LOG, [$columns=Info, $path="syslog"]);
     Analyzer::register_for_ports(Analyzer::ANALYZER_SYSLOG, ports);
     }


@@ -5,6 +5,7 @@
 @load frameworks/communication/listen.bro
 @load frameworks/control/controllee.bro
 @load frameworks/control/controller.bro
+@load frameworks/files/extract-all-files.bro
 @load policy/misc/dump-events.bro
 @load ./example.bro


@@ -0,0 +1,8 @@
##! Extract all files to disk.
@load base/files/extract
event file_new(f: fa_file)
{
Files::add_analyzer(f, Files::ANALYZER_EXTRACT);
}


@@ -0,0 +1,11 @@
@load base/frameworks/intel
@load ./where-locations
event ssh_server_host_key(c: connection, hash: string)
{
local seen = Intel::Seen($indicator=hash,
$indicator_type=Intel::PUBKEY_HASH,
$conn=c,
$where=SSH::IN_SERVER_HOST_KEY);
Intel::seen(seen);
}
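
To actually match on server host key hashes, the script above needs intel data carrying the PUBKEY_HASH indicator type; a hedged local.bro sketch (the file path is hypothetical):

    @load frameworks/intel/seen/pubkey-hashes
    redef Intel::read_files += { "/usr/local/bro/share/intel/ssh-keys.intel" };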


@@ -21,6 +21,7 @@ export {
                        SMTP::IN_REPLY_TO,
                        SMTP::IN_X_ORIGINATING_IP_HEADER,
                        SMTP::IN_MESSAGE,
+                       SSH::IN_SERVER_HOST_KEY,
                        SSL::IN_SERVER_NAME,
                        SMTP::IN_HEADER,
                        X509::IN_CERT,


@@ -23,7 +23,7 @@ export {
 event bro_init() &priority=5
     {
-    Log::create_stream(Barnyard2::LOG, [$columns=Info]);
+    Log::create_stream(Barnyard2::LOG, [$columns=Info, $path="barnyard2"]);
     }


@@ -38,7 +38,7 @@ global add_sumstats: hook(id: conn_id, hostname: string, size: count);
 event bro_init() &priority=3
     {
-    Log::create_stream(AppStats::LOG, [$columns=Info]);
+    Log::create_stream(AppStats::LOG, [$columns=Info, $path="app_stats"]);
     local r1: SumStats::Reducer = [$stream="apps.bytes", $apply=set(SumStats::SUM)];
     local r2: SumStats::Reducer = [$stream="apps.hits", $apply=set(SumStats::UNIQUE)];


@@ -76,7 +76,7 @@ event CaptureLoss::take_measurement(last_ts: time, last_acks: count, last_gaps:
 event bro_init() &priority=5
     {
-    Log::create_stream(LOG, [$columns=Info]);
+    Log::create_stream(LOG, [$columns=Info, $path="capture_loss"]);
     # We only schedule the event if we are capturing packets.
     if ( reading_live_traffic() || reading_traces() )


@@ -55,7 +55,7 @@ export {
 event bro_init() &priority=5
     {
-    Log::create_stream(Traceroute::LOG, [$columns=Info, $ev=log_traceroute]);
+    Log::create_stream(Traceroute::LOG, [$columns=Info, $ev=log_traceroute, $path="traceroute"]);
     local r1: SumStats::Reducer = [$stream="traceroute.time_exceeded", $apply=set(SumStats::UNIQUE)];
     local r2: SumStats::Reducer = [$stream="traceroute.low_ttl_packet", $apply=set(SumStats::SUM)];


@@ -38,5 +38,5 @@ export {
 event bro_init()
     {
-    Log::create_stream(Known::DEVICES_LOG, [$columns=DevicesInfo, $ev=log_known_devices]);
+    Log::create_stream(Known::DEVICES_LOG, [$columns=DevicesInfo, $ev=log_known_devices, $path="known_devices"]);
     }


@@ -30,7 +30,7 @@ const depth: table[count] of string = {
 event bro_init() &priority=5
     {
-    Log::create_stream(LoadedScripts::LOG, [$columns=Info]);
+    Log::create_stream(LoadedScripts::LOG, [$columns=Info, $path="loaded_scripts"]);
     }

 event bro_script_loaded(path: string, level: count)


@@ -50,7 +50,7 @@ export {
 event bro_init() &priority=5
     {
-    Log::create_stream(Stats::LOG, [$columns=Info, $ev=log_stats]);
+    Log::create_stream(Stats::LOG, [$columns=Info, $ev=log_stats, $path="stats"]);
     }

 event check_stats(last_ts: time, last_ns: NetStats, last_res: bro_resources)


@@ -38,7 +38,7 @@ export {
 event bro_init()
     {
-    Log::create_stream(Known::HOSTS_LOG, [$columns=HostsInfo, $ev=log_known_hosts]);
+    Log::create_stream(Known::HOSTS_LOG, [$columns=HostsInfo, $ev=log_known_hosts, $path="known_hosts"]);
     }

 event connection_established(c: connection) &priority=5


@@ -49,7 +49,8 @@ redef record connection += {
 event bro_init() &priority=5
     {
     Log::create_stream(Known::SERVICES_LOG, [$columns=ServicesInfo,
-                                             $ev=log_known_services]);
+                                             $ev=log_known_services,
+                                             $path="known_services"]);
     }

 event log_it(ts: time, a: addr, p: port, services: set[string])


@@ -26,16 +26,20 @@ export {
 event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=3
     {
-    if ( ! is_orig || ! c?$http )
+    if ( ! c?$http )
         return;

+    if ( is_orig )
+        {
         if ( log_client_header_names )
             {
             if ( ! c$http?$client_header_names )
                 c$http$client_header_names = vector();
             c$http$client_header_names[|c$http$client_header_names|] = name;
             }
+        }
+    else
+        {
         if ( log_server_header_names )
             {
             if ( ! c$http?$server_header_names )
@@ -43,3 +47,4 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &priority=3
                 c$http$server_header_names[|c$http$server_header_names|] = name;
             }
         }
+    }


@@ -35,7 +35,7 @@ export {
 event bro_init() &priority=5
     {
-    Log::create_stream(Known::MODBUS_LOG, [$columns=ModbusInfo, $ev=log_known_modbus]);
+    Log::create_stream(Known::MODBUS_LOG, [$columns=ModbusInfo, $ev=log_known_modbus, $path="known_modbus"]);
     }

 event modbus_message(c: connection, headers: ModbusHeaders, is_orig: bool)


@@ -54,7 +54,7 @@ redef record Modbus::Info += {
 event bro_init() &priority=5
     {
-    Log::create_stream(Modbus::REGISTER_CHANGE_LOG, [$columns=MemmapInfo]);
+    Log::create_stream(Modbus::REGISTER_CHANGE_LOG, [$columns=MemmapInfo, $path="modbus_register_change"]);
     }

 event modbus_read_holding_registers_request(c: connection, headers: ModbusHeaders, start_address: count, quantity: count)


@@ -0,0 +1,22 @@
##! If an RDP session is "upgraded" to SSL, this will be indicated
##! with this script in a new field added to the RDP log.
@load base/protocols/rdp
@load base/protocols/ssl
module RDP;
export {
redef record RDP::Info += {
## Flag the connection if it was seen over SSL.
ssl: bool &log &default=F;
};
}
event ssl_established(c: connection)
{
if ( c?$rdp )
{
c$rdp$ssl = T;
}
}
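
A hedged sketch of consuming the new field, e.g. from a log_rdp handler in local.bro (the print statement is illustrative only):

    @load protocols/rdp/indicate_ssl

    event RDP::log_rdp(rec: RDP::Info)
        {
        if ( rec$ssl )
            print fmt("RDP session %s was upgraded to SSL", rec$uid);
        }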


@@ -12,10 +12,10 @@ export {
     redef enum Notice::Type += {
         ## Indicates that a host has been identified as crossing the
         ## :bro:id:`SSH::password_guesses_limit` threshold with
-        ## heuristically determined failed logins.
+        ## failed logins.
         Password_Guessing,
         ## Indicates that a host previously identified as a "password
-        ## guesser" has now had a heuristically successful login
+        ## guesser" has now had a successful login
         ## attempt. This is not currently implemented.
         Login_By_Password_Guesser,
     };
@@ -34,8 +34,7 @@ export {
     const guessing_timeout = 30 mins &redef;

     ## This value can be used to exclude hosts or entire networks from being
-    ## tracked as potential "guessers". There are cases where the success
-    ## heuristic fails and this acts as the whitelist. The index represents
+    ## tracked as potential "guessers". The index represents
     ## client subnets and the yield value represents server subnets.
     const ignore_guessers: table[subnet] of subnet &redef;
 }
@@ -70,7 +69,7 @@ event bro_init()
         }]);
     }

-event SSH::heuristic_successful_login(c: connection)
+event ssh_auth_successful(c: connection, auth_method_none: bool)
     {
     local id = c$id;
@@ -79,7 +78,7 @@ event SSH::heuristic_successful_login(c: connection)
                   $where=SSH::SUCCESSFUL_LOGIN]);
     }

-event SSH::heuristic_failed_login(c: connection)
+event ssh_auth_failed(c: connection)
     {
     local id = c$id;
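
The whitelist table shown above keeps known scanners or jump hosts from being reported; a minimal local.bro sketch (the addresses are placeholders):

    # Ignore guesses from this management network against any server.
    redef SSH::ignore_guessers += { [192.168.100.0/24] = 0.0.0.0/0 };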


@@ -30,7 +30,7 @@ function get_location(c: connection): geo_location
     return lookup_location(lookup_ip);
     }

-event SSH::heuristic_successful_login(c: connection) &priority=5
+event ssh_auth_successful(c: connection, auth_method_none: bool) &priority=3
     {
     # Add the location data to the SSH record.
     c$ssh$remote_location = get_location(c);
@@ -45,7 +45,7 @@ event SSH::heuristic_successful_login(c: connection) &priority=5
         }
     }

-event SSH::heuristic_failed_login(c: connection) &priority=5
+event ssh_auth_failed(c: connection) &priority=3
     {
     # Add the location data to the SSH record.
     c$ssh$remote_location = get_location(c);


@@ -27,7 +27,7 @@ export {
         /^ftp[0-9]*\./ &redef;
 }

-event SSH::heuristic_successful_login(c: connection)
+event ssh_auth_successful(c: connection, auth_method_none: bool)
     {
     for ( host in set(c$id$orig_h, c$id$resp_h) )
         {


@@ -43,7 +43,7 @@ export {
 event bro_init() &priority=5
     {
-    Log::create_stream(Known::CERTS_LOG, [$columns=CertsInfo, $ev=log_known_certs]);
+    Log::create_stream(Known::CERTS_LOG, [$columns=CertsInfo, $ev=log_known_certs, $path="known_certs"]);
     }

 event ssl_established(c: connection) &priority=3


@@ -1,4 +1,7 @@
 ##! Perform full certificate chain validation for SSL certificates.
+#
+# Also caches all intermediate certificates encountered so far and use them
+# for future validations.

 @load base/frameworks/notice
 @load base/protocols/ssl
@@ -19,12 +22,107 @@ export {
     };

     ## MD5 hash values for recently validated chains along with the
-    ## validation status message are kept in this table to avoid constant
+    ## validation status are kept in this table to avoid constant
     ## validation every time the same certificate chain is seen.
     global recently_validated_certs: table[string] of string = table()
-        &read_expire=5mins &synchronized &redef;
+        &read_expire=5mins &redef;
+
+    ## Use intermediate CA certificate caching when trying to validate
+    ## certificates. When this is enabled, Bro keeps track of all valid
+    ## intermediate CA certificates that it has seen in the past. When
+    ## encountering a host certificate that cannot be validated because
+    ## of missing intermediate CA certificate, the cached list is used
+    ## to try to validate the cert. This is similar to how Firefox is
+    ## doing certificate validation.
+    ##
+    ## Disabling this will usually greatly increase the number of validation warnings
+    ## that you encounter. Only disable if you want to find misconfigured servers.
+    global ssl_cache_intermediate_ca: bool = T &redef;
+
+    ## Event from a worker to the manager that it has encountered a new
+    ## valid intermediate.
+    global intermediate_add: event(key: string, value: vector of opaque of x509);
+
+    ## Event from the manager to the workers that a new intermediate chain
+    ## is to be added.
+    global new_intermediate: event(key: string, value: vector of opaque of x509);
 }

+global intermediate_cache: table[string] of vector of opaque of x509;
+
+@if ( Cluster::is_enabled() )
+@load base/frameworks/cluster
+redef Cluster::manager2worker_events += /SSL::intermediate_add/;
+redef Cluster::worker2manager_events += /SSL::new_intermediate/;
+@endif
+
+function add_to_cache(key: string, value: vector of opaque of x509)
+    {
+    intermediate_cache[key] = value;
+@if ( Cluster::is_enabled() )
+    event SSL::new_intermediate(key, value);
+@endif
+    }
+
+@if ( Cluster::is_enabled() && Cluster::local_node_type() != Cluster::MANAGER )
+event SSL::intermediate_add(key: string, value: vector of opaque of x509)
+    {
+    intermediate_cache[key] = value;
+    }
+@endif
+
+@if ( Cluster::is_enabled() && Cluster::local_node_type() == Cluster::MANAGER )
+event SSL::new_intermediate(key: string, value: vector of opaque of x509)
+    {
+    if ( key in intermediate_cache )
+        return;
+
+    intermediate_cache[key] = value;
+    event SSL::intermediate_add(key, value);
+    }
+@endif
+
+function cache_validate(chain: vector of opaque of x509): string
+    {
+    local chain_hash: vector of string = vector();
+
+    for ( i in chain )
+        chain_hash[i] = sha1_hash(x509_get_certificate_string(chain[i]));
+
+    local chain_id = join_string_vec(chain_hash, ".");
+
+    # If we tried this certificate recently, just return the cached result.
+    if ( chain_id in recently_validated_certs )
+        return recently_validated_certs[chain_id];
+
+    local result = x509_verify(chain, root_certs);
+    recently_validated_certs[chain_id] = result$result_string;
+
+    # if we have a working chain where we did not store the intermediate certs
+    # in our cache yet - do so
+    if ( ssl_cache_intermediate_ca &&
+         result$result_string == "ok" &&
+         result?$chain_certs &&
+         |result$chain_certs| > 2 )
+        {
+        local result_chain = result$chain_certs;
+        local icert = x509_parse(result_chain[1]);
+        if ( icert$subject !in intermediate_cache )
+            {
+            local cachechain: vector of opaque of x509;
+            for ( i in result_chain )
+                {
+                if ( i >=1 && i<=|result_chain|-2 )
+                    cachechain[i-1] = result_chain[i];
+                }
+            add_to_cache(icert$subject, cachechain);
+            }
+        }
+
+    return result$result_string;
+    }

 event ssl_established(c: connection) &priority=3
     {
     # If there aren't any certs we can't very well do certificate validation.
@@ -32,9 +130,31 @@ event ssl_established(c: connection) &priority=3
          ! c$ssl$cert_chain[0]?$x509 )
         return;

-    local chain_id = join_string_vec(c$ssl$cert_chain_fuids, ".");
+    local intermediate_chain: vector of opaque of x509 = vector();
+    local issuer = c$ssl$cert_chain[0]$x509$certificate$issuer;
     local hash = c$ssl$cert_chain[0]$sha1;
+    local result: string;
+
+    # Look if we already have a working chain for the issuer of this cert.
+    # If yes, try this chain first instead of using the chain supplied from
+    # the server.
+    if ( ssl_cache_intermediate_ca && issuer in intermediate_cache )
+        {
+        intermediate_chain[0] = c$ssl$cert_chain[0]$x509$handle;
+        for ( i in intermediate_cache[issuer] )
+            intermediate_chain[i+1] = intermediate_cache[issuer][i];
+
+        result = cache_validate(intermediate_chain);
+        if ( result == "ok" )
+            {
+            c$ssl$validation_status = result;
+            return;
+            }
+        }
+
+    # Validation with known chains failed or there was no fitting intermediate
+    # in our store.
+    # Fall back to validating the certificate with the server-supplied chain.
     local chain: vector of opaque of x509 = vector();
     for ( i in c$ssl$cert_chain )
         {
@@ -42,18 +162,10 @@ event ssl_established(c: connection) &priority=3
         chain[i] = c$ssl$cert_chain[i]$x509$handle;
         }

-    if ( chain_id in recently_validated_certs )
-        {
-        c$ssl$validation_status = recently_validated_certs[chain_id];
-        }
-    else
-        {
-        local result = x509_verify(chain, root_certs);
-        c$ssl$validation_status = result$result_string;
-        recently_validated_certs[chain_id] = result$result_string;
-        }
+    result = cache_validate(chain);
+    c$ssl$validation_status = result;

-    if ( c$ssl$validation_status != "ok" )
+    if ( result != "ok" )
         {
         local message = fmt("SSL certificate validation failed with (%s)", c$ssl$validation_status);
         NOTICE([$note=Invalid_Server_Cert, $msg=message,
@@ -61,5 +173,3 @@ event ssl_established(c: connection) &priority=3
                 $identifier=cat(c$id$resp_h,c$id$resp_p,hash,c$ssl$validation_status)]);
         }
     }
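
Both caching behaviors above are &redef-able; a small local.bro sketch, assuming the validation policy script is loaded:

    @load protocols/ssl/validate-certs
    # Trade more validation warnings for a view of servers that send
    # incomplete chains, by disabling the intermediate CA cache.
    redef SSL::ssl_cache_intermediate_ca = F;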


@@ -22,12 +22,14 @@
 @load frameworks/intel/seen/file-names.bro
 @load frameworks/intel/seen/http-headers.bro
 @load frameworks/intel/seen/http-url.bro
+@load frameworks/intel/seen/pubkey-hashes.bro
 @load frameworks/intel/seen/smtp-url-extraction.bro
 @load frameworks/intel/seen/smtp.bro
 @load frameworks/intel/seen/ssl.bro
 @load frameworks/intel/seen/where-locations.bro
 @load frameworks/intel/seen/x509.bro
 @load frameworks/files/detect-MHR.bro
+#@load frameworks/files/extract-all-files.bro
 @load frameworks/files/hash-all-files.bro
 @load frameworks/packet-filter/shunt.bro
 @load frameworks/software/version-changes.bro
@@ -77,6 +79,7 @@
 @load protocols/modbus/known-masters-slaves.bro
 @load protocols/modbus/track-memmap.bro
 @load protocols/mysql/software.bro
+@load protocols/rdp/indicate_ssl.bro
 @load protocols/smtp/blocklists.bro
 @load protocols/smtp/detect-suspicious-orig.bro
 @load protocols/smtp/entities-excerpt.bro


@@ -269,6 +269,7 @@ set(bro_SRCS
     ChunkedIO.cc
     CompHash.cc
     Conn.cc
+    ConvertUTF.c
     DFA.cc
     DbgBreakpoint.cc
     DbgHelp.cc


@@ -263,6 +263,9 @@ public:
     void CheckFlowLabel(bool is_orig, uint32 flow_label);

+    uint32 GetOrigFlowLabel()    { return orig_flow_label; }
+    uint32 GetRespFlowLabel()    { return resp_flow_label; }
+
 protected:

     Connection()    { persistent = 0; }

src/ConvertUTF.c (new file, 755 lines added)
@@ -0,0 +1,755 @@
/*===--- ConvertUTF.c - Universal Character Names conversions ---------------===
*
* The LLVM Compiler Infrastructure
*
* This file is distributed under the University of Illinois Open Source
* License:
*
* University of Illinois/NCSA
* Open Source License
*
* Copyright (c) 2003-2014 University of Illinois at Urbana-Champaign.
* All rights reserved.
*
* Developed by:
*
* LLVM Team
*
* University of Illinois at Urbana-Champaign
*
* http://llvm.org
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal with the Software without
* restriction, including without limitation the rights to use,
* copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* * Redistributions of source code must retain the above
* copyright notice, this list of conditions and the
* following disclaimers.
*
* * Redistributions in binary form must reproduce the
* above copyright notice, this list of conditions and
* the following disclaimers in the documentation and/or
* other materials provided with the distribution.
*
* * Neither the names of the LLVM Team, University of
* Illinois at Urbana-Champaign, nor the names of its
* contributors may be used to endorse or promote
* products derived from this Software without specific
* prior written permission.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE CONTRIBUTORS OR
* COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS WITH THE SOFTWARE.
*
*===------------------------------------------------------------------------=*/
/*
* Copyright 2001-2004 Unicode, Inc.
*
* Disclaimer
*
* This source code is provided as is by Unicode, Inc. No claims are
* made as to fitness for any particular purpose. No warranties of any
* kind are expressed or implied. The recipient agrees to determine
* applicability of information provided. If this file has been
* purchased on magnetic or optical media from Unicode, Inc., the
* sole remedy for any claim will be exchange of defective media
* within 90 days of receipt.
*
* Limitations on Rights to Redistribute This Code
*
* Unicode, Inc. hereby grants the right to freely use the information
* supplied in this file in the creation of products supporting the
* Unicode Standard, and to make copies of this file in any form
* for internal or external distribution as long as this notice
* remains attached.
*/
/* ---------------------------------------------------------------------
Conversions between UTF32, UTF-16, and UTF-8. Source code file.
Author: Mark E. Davis, 1994.
Rev History: Rick McGowan, fixes & updates May 2001.
Sept 2001: fixed const & error conditions per
mods suggested by S. Parent & A. Lillich.
June 2002: Tim Dodd added detection and handling of incomplete
source sequences, enhanced error detection, added casts
to eliminate compiler warnings.
July 2003: slight mods to back out aggressive FFFE detection.
Jan 2004: updated switches in from-UTF8 conversions.
Oct 2004: updated to use UNI_MAX_LEGAL_UTF32 in UTF-32 conversions.
See the header file "ConvertUTF.h" for complete documentation.
------------------------------------------------------------------------ */
#include "ConvertUTF.h"
#ifdef CVTUTF_DEBUG
#include <stdio.h>
#endif
#include <assert.h>
static const int halfShift = 10; /* used for shifting by 10 bits */
static const UTF32 halfBase = 0x0010000UL;
static const UTF32 halfMask = 0x3FFUL;
#define UNI_SUR_HIGH_START (UTF32)0xD800
#define UNI_SUR_HIGH_END (UTF32)0xDBFF
#define UNI_SUR_LOW_START (UTF32)0xDC00
#define UNI_SUR_LOW_END (UTF32)0xDFFF
#define false 0
#define true 1
/* --------------------------------------------------------------------- */
/*
* Index into the table below with the first byte of a UTF-8 sequence to
* get the number of trailing bytes that are supposed to follow it.
* Note that *legal* UTF-8 values can't have 4 or 5-bytes. The table is
* left as-is for anyone who may want to do such conversion, which was
* allowed in earlier algorithms.
*/
static const char trailingBytesForUTF8[256] = {
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3,4,4,4,4,5,5,5,5
};
/*
* Magic values subtracted from a buffer value during UTF8 conversion.
* This table contains as many values as there might be trailing bytes
* in a UTF-8 sequence.
*/
static const UTF32 offsetsFromUTF8[6] = { 0x00000000UL, 0x00003080UL, 0x000E2080UL,
0x03C82080UL, 0xFA082080UL, 0x82082080UL };
/*
* Once the bits are split out into bytes of UTF-8, this is a mask OR-ed
* into the first byte, depending on how many bytes follow. There are
* as many entries in this table as there are UTF-8 sequence types.
* (I.e., one byte sequence, two byte... etc.). Remember that sequencs
* for *legal* UTF-8 will be 4 or fewer bytes total.
*/
static const UTF8 firstByteMark[7] = { 0x00, 0x00, 0xC0, 0xE0, 0xF0, 0xF8, 0xFC };
/* --------------------------------------------------------------------- */
/* The interface converts a whole buffer to avoid function-call overhead.
* Constants have been gathered. Loops & conditionals have been removed as
* much as possible for efficiency, in favor of drop-through switches.
* (See "Note A" at the bottom of the file for equivalent code.)
* If your compiler supports it, the "isLegalUTF8" call can be turned
* into an inline function.
*/
/* --------------------------------------------------------------------- */
ConversionResult ConvertUTF32toUTF16 (
const UTF32** sourceStart, const UTF32* sourceEnd,
UTF16** targetStart, UTF16* targetEnd, ConversionFlags flags) {
ConversionResult result = conversionOK;
const UTF32* source = *sourceStart;
UTF16* target = *targetStart;
while (source < sourceEnd) {
UTF32 ch;
if (target >= targetEnd) {
result = targetExhausted; break;
}
ch = *source++;
if (ch <= UNI_MAX_BMP) { /* Target is a character <= 0xFFFF */
/* UTF-16 surrogate values are illegal in UTF-32; 0xffff or 0xfffe are both reserved values */
if (ch >= UNI_SUR_HIGH_START && ch <= UNI_SUR_LOW_END) {
if (flags == strictConversion) {
--source; /* return to the illegal value itself */
result = sourceIllegal;
break;
} else {
*target++ = UNI_REPLACEMENT_CHAR;
}
} else {
*target++ = (UTF16)ch; /* normal case */
}
} else if (ch > UNI_MAX_LEGAL_UTF32) {
if (flags == strictConversion) {
result = sourceIllegal;
} else {
*target++ = UNI_REPLACEMENT_CHAR;
}
} else {
/* target is a character in range 0xFFFF - 0x10FFFF. */
if (target + 1 >= targetEnd) {
--source; /* Back up source pointer! */
result = targetExhausted; break;
}
ch -= halfBase;
*target++ = (UTF16)((ch >> halfShift) + UNI_SUR_HIGH_START);
*target++ = (UTF16)((ch & halfMask) + UNI_SUR_LOW_START);
}
}
*sourceStart = source;
*targetStart = target;
return result;
}
/* --------------------------------------------------------------------- */
ConversionResult ConvertUTF16toUTF32 (
const UTF16** sourceStart, const UTF16* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags) {
ConversionResult result = conversionOK;
const UTF16* source = *sourceStart;
UTF32* target = *targetStart;
UTF32 ch, ch2;
while (source < sourceEnd) {
const UTF16* oldSource = source; /* In case we have to back up because of target overflow. */
ch = *source++;
/* If we have a surrogate pair, convert to UTF32 first. */
if (ch >= UNI_SUR_HIGH_START && ch <= UNI_SUR_HIGH_END) {
/* If the 16 bits following the high surrogate are in the source buffer... */
if (source < sourceEnd) {
ch2 = *source;
/* If it's a low surrogate, convert to UTF32. */
if (ch2 >= UNI_SUR_LOW_START && ch2 <= UNI_SUR_LOW_END) {
ch = ((ch - UNI_SUR_HIGH_START) << halfShift)
+ (ch2 - UNI_SUR_LOW_START) + halfBase;
++source;
} else if (flags == strictConversion) { /* it's an unpaired high surrogate */
--source; /* return to the illegal value itself */
result = sourceIllegal;
break;
}
} else { /* We don't have the 16 bits following the high surrogate. */
--source; /* return to the high surrogate */
result = sourceExhausted;
break;
}
} else if (flags == strictConversion) {
/* UTF-16 surrogate values are illegal in UTF-32 */
if (ch >= UNI_SUR_LOW_START && ch <= UNI_SUR_LOW_END) {
--source; /* return to the illegal value itself */
result = sourceIllegal;
break;
}
}
if (target >= targetEnd) {
source = oldSource; /* Back up source pointer! */
result = targetExhausted; break;
}
*target++ = ch;
}
*sourceStart = source;
*targetStart = target;
#ifdef CVTUTF_DEBUG
if (result == sourceIllegal) {
fprintf(stderr, "ConvertUTF16toUTF32 illegal seq 0x%04x,%04x\n", ch, ch2);
fflush(stderr);
}
#endif
return result;
}
ConversionResult ConvertUTF16toUTF8 (
const UTF16** sourceStart, const UTF16* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags) {
ConversionResult result = conversionOK;
const UTF16* source = *sourceStart;
UTF8* target = *targetStart;
while (source < sourceEnd) {
UTF32 ch;
unsigned short bytesToWrite = 0;
const UTF32 byteMask = 0xBF;
const UTF32 byteMark = 0x80;
const UTF16* oldSource = source; /* In case we have to back up because of target overflow. */
ch = *source++;
/* If we have a surrogate pair, convert to UTF32 first. */
if (ch >= UNI_SUR_HIGH_START && ch <= UNI_SUR_HIGH_END) {
/* If the 16 bits following the high surrogate are in the source buffer... */
if (source < sourceEnd) {
UTF32 ch2 = *source;
/* If it's a low surrogate, convert to UTF32. */
if (ch2 >= UNI_SUR_LOW_START && ch2 <= UNI_SUR_LOW_END) {
ch = ((ch - UNI_SUR_HIGH_START) << halfShift)
+ (ch2 - UNI_SUR_LOW_START) + halfBase;
++source;
} else if (flags == strictConversion) { /* it's an unpaired high surrogate */
--source; /* return to the illegal value itself */
result = sourceIllegal;
break;
}
} else { /* We don't have the 16 bits following the high surrogate. */
--source; /* return to the high surrogate */
result = sourceExhausted;
break;
}
} else if (flags == strictConversion) {
/* UTF-16 surrogate values are illegal in UTF-32 */
if (ch >= UNI_SUR_LOW_START && ch <= UNI_SUR_LOW_END) {
--source; /* return to the illegal value itself */
result = sourceIllegal;
break;
}
}
/* Figure out how many bytes the result will require */
if (ch < (UTF32)0x80) { bytesToWrite = 1;
} else if (ch < (UTF32)0x800) { bytesToWrite = 2;
} else if (ch < (UTF32)0x10000) { bytesToWrite = 3;
} else if (ch < (UTF32)0x110000) { bytesToWrite = 4;
} else { bytesToWrite = 3;
ch = UNI_REPLACEMENT_CHAR;
}
target += bytesToWrite;
if (target > targetEnd) {
source = oldSource; /* Back up source pointer! */
target -= bytesToWrite; result = targetExhausted; break;
}
switch (bytesToWrite) { /* note: everything falls through. */
case 4: *--target = (UTF8)((ch | byteMark) & byteMask); ch >>= 6;
case 3: *--target = (UTF8)((ch | byteMark) & byteMask); ch >>= 6;
case 2: *--target = (UTF8)((ch | byteMark) & byteMask); ch >>= 6;
case 1: *--target = (UTF8)(ch | firstByteMark[bytesToWrite]);
}
target += bytesToWrite;
}
*sourceStart = source;
*targetStart = target;
return result;
}
/* --------------------------------------------------------------------- */
ConversionResult ConvertUTF32toUTF8 (
const UTF32** sourceStart, const UTF32* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags) {
ConversionResult result = conversionOK;
const UTF32* source = *sourceStart;
UTF8* target = *targetStart;
while (source < sourceEnd) {
UTF32 ch;
unsigned short bytesToWrite = 0;
const UTF32 byteMask = 0xBF;
const UTF32 byteMark = 0x80;
ch = *source++;
if (flags == strictConversion ) {
/* UTF-16 surrogate values are illegal in UTF-32 */
if (ch >= UNI_SUR_HIGH_START && ch <= UNI_SUR_LOW_END) {
--source; /* return to the illegal value itself */
result = sourceIllegal;
break;
}
}
/*
* Figure out how many bytes the result will require. Turn any
* illegally large UTF32 things (> Plane 17) into replacement chars.
*/
if (ch < (UTF32)0x80) { bytesToWrite = 1;
} else if (ch < (UTF32)0x800) { bytesToWrite = 2;
} else if (ch < (UTF32)0x10000) { bytesToWrite = 3;
} else if (ch <= UNI_MAX_LEGAL_UTF32) { bytesToWrite = 4;
} else { bytesToWrite = 3;
ch = UNI_REPLACEMENT_CHAR;
result = sourceIllegal;
}
target += bytesToWrite;
if (target > targetEnd) {
--source; /* Back up source pointer! */
target -= bytesToWrite; result = targetExhausted; break;
}
switch (bytesToWrite) { /* note: everything falls through. */
case 4: *--target = (UTF8)((ch | byteMark) & byteMask); ch >>= 6;
case 3: *--target = (UTF8)((ch | byteMark) & byteMask); ch >>= 6;
case 2: *--target = (UTF8)((ch | byteMark) & byteMask); ch >>= 6;
case 1: *--target = (UTF8) (ch | firstByteMark[bytesToWrite]);
}
target += bytesToWrite;
}
*sourceStart = source;
*targetStart = target;
return result;
}
/* --------------------------------------------------------------------- */
/*
* Utility routine to tell whether a sequence of bytes is legal UTF-8.
* This must be called with the length pre-determined by the first byte.
* If not calling this from ConvertUTF8to*, then the length can be set by:
* length = trailingBytesForUTF8[*source]+1;
* and the sequence is illegal right away if there aren't that many bytes
* available.
* If presented with a length > 4, this returns false. The Unicode
* definition of UTF-8 goes up to 4-byte sequences.
*/
static Boolean isLegalUTF8(const UTF8 *source, int length) {
UTF8 a;
const UTF8 *srcptr = source+length;
switch (length) {
default: return false;
/* Everything else falls through when "true"... */
case 4: if ((a = (*--srcptr)) < 0x80 || a > 0xBF) return false;
case 3: if ((a = (*--srcptr)) < 0x80 || a > 0xBF) return false;
case 2: if ((a = (*--srcptr)) < 0x80 || a > 0xBF) return false;
switch (*source) {
/* no fall-through in this inner switch */
case 0xE0: if (a < 0xA0) return false; break;
case 0xED: if (a > 0x9F) return false; break;
case 0xF0: if (a < 0x90) return false; break;
case 0xF4: if (a > 0x8F) return false; break;
default: if (a < 0x80) return false;
}
case 1: if (*source >= 0x80 && *source < 0xC2) return false;
}
if (*source > 0xF4) return false;
return true;
}
/* --------------------------------------------------------------------- */
/*
* Exported function to return whether a UTF-8 sequence is legal or not.
* This is not used here; it's just exported.
*/
Boolean isLegalUTF8Sequence(const UTF8 *source, const UTF8 *sourceEnd) {
int length = trailingBytesForUTF8[*source]+1;
if (length > sourceEnd - source) {
return false;
}
return isLegalUTF8(source, length);
}
/* --------------------------------------------------------------------- */
static unsigned
findMaximalSubpartOfIllFormedUTF8Sequence(const UTF8 *source,
const UTF8 *sourceEnd) {
UTF8 b1, b2, b3;
assert(!isLegalUTF8Sequence(source, sourceEnd));
/*
* Unicode 6.3.0, D93b:
*
* Maximal subpart of an ill-formed subsequence: The longest code unit
* subsequence starting at an unconvertible offset that is either:
* a. the initial subsequence of a well-formed code unit sequence, or
* b. a subsequence of length one.
*/
if (source == sourceEnd)
return 0;
/*
* Perform case analysis. See Unicode 6.3.0, Table 3-7. Well-Formed UTF-8
* Byte Sequences.
*/
b1 = *source;
++source;
if (b1 >= 0xC2 && b1 <= 0xDF) {
/*
* First byte is valid, but we know that this code unit sequence is
* invalid, so the maximal subpart has to end after the first byte.
*/
return 1;
}
if (source == sourceEnd)
return 1;
b2 = *source;
++source;
if (b1 == 0xE0) {
return (b2 >= 0xA0 && b2 <= 0xBF) ? 2 : 1;
}
if (b1 >= 0xE1 && b1 <= 0xEC) {
return (b2 >= 0x80 && b2 <= 0xBF) ? 2 : 1;
}
if (b1 == 0xED) {
return (b2 >= 0x80 && b2 <= 0x9F) ? 2 : 1;
}
if (b1 >= 0xEE && b1 <= 0xEF) {
return (b2 >= 0x80 && b2 <= 0xBF) ? 2 : 1;
}
if (b1 == 0xF0) {
if (b2 >= 0x90 && b2 <= 0xBF) {
if (source == sourceEnd)
return 2;
b3 = *source;
return (b3 >= 0x80 && b3 <= 0xBF) ? 3 : 2;
}
return 1;
}
if (b1 >= 0xF1 && b1 <= 0xF3) {
if (b2 >= 0x80 && b2 <= 0xBF) {
if (source == sourceEnd)
return 2;
b3 = *source;
return (b3 >= 0x80 && b3 <= 0xBF) ? 3 : 2;
}
return 1;
}
if (b1 == 0xF4) {
if (b2 >= 0x80 && b2 <= 0x8F) {
if (source == sourceEnd)
return 2;
b3 = *source;
return (b3 >= 0x80 && b3 <= 0xBF) ? 3 : 2;
}
return 1;
}
assert((b1 >= 0x80 && b1 <= 0xC1) || b1 >= 0xF5);
/*
* There are no valid sequences that start with these bytes. Maximal subpart
* is defined to have length 1 in these cases.
*/
return 1;
}
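/*
 * Worked examples of the rule above (an illustrative note, not part of the
 * original Unicode/LLVM sources; the values follow directly from the case
 * analysis in this function):
 *
 *   <C2 41>    -> maximal subpart of length 1 (valid lead byte, bad continuation)
 *   <E0 9F ..> -> length 1 (after E0 the second byte must be in A0..BF)
 *   <F0 90 41> -> length 2 (F0 90 is a valid prefix; 41 ends it)
 *
 * The lenient UTF-8 to UTF-32 conversion below therefore emits one U+FFFD per
 * maximal subpart, not one per ill-formed byte.
 */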
/* --------------------------------------------------------------------- */
/*
* Exported function to return the total number of bytes in a codepoint
* represented in UTF-8, given the value of the first byte.
*/
unsigned getNumBytesForUTF8(UTF8 first) {
return trailingBytesForUTF8[first] + 1;
}
/* --------------------------------------------------------------------- */
/*
* Exported function to return whether a UTF-8 string is legal or not.
* This is not used here; it's just exported.
*/
Boolean isLegalUTF8String(const UTF8 **source, const UTF8 *sourceEnd) {
while (*source != sourceEnd) {
int length = trailingBytesForUTF8[**source] + 1;
if (length > sourceEnd - *source || !isLegalUTF8(*source, length))
return false;
*source += length;
}
return true;
}
/* --------------------------------------------------------------------- */
ConversionResult ConvertUTF8toUTF16 (
const UTF8** sourceStart, const UTF8* sourceEnd,
UTF16** targetStart, UTF16* targetEnd, ConversionFlags flags) {
ConversionResult result = conversionOK;
const UTF8* source = *sourceStart;
UTF16* target = *targetStart;
while (source < sourceEnd) {
UTF32 ch = 0;
unsigned short extraBytesToRead = trailingBytesForUTF8[*source];
if (extraBytesToRead >= sourceEnd - source) {
result = sourceExhausted; break;
}
/* Do this check whether lenient or strict */
if (!isLegalUTF8(source, extraBytesToRead+1)) {
result = sourceIllegal;
break;
}
/*
* The cases all fall through. See "Note A" below.
*/
switch (extraBytesToRead) {
case 5: ch += *source++; ch <<= 6; /* remember, illegal UTF-8 */
case 4: ch += *source++; ch <<= 6; /* remember, illegal UTF-8 */
case 3: ch += *source++; ch <<= 6;
case 2: ch += *source++; ch <<= 6;
case 1: ch += *source++; ch <<= 6;
case 0: ch += *source++;
}
ch -= offsetsFromUTF8[extraBytesToRead];
if (target >= targetEnd) {
source -= (extraBytesToRead+1); /* Back up source pointer! */
result = targetExhausted; break;
}
if (ch <= UNI_MAX_BMP) { /* Target is a character <= 0xFFFF */
/* UTF-16 surrogate values are illegal in UTF-32 */
if (ch >= UNI_SUR_HIGH_START && ch <= UNI_SUR_LOW_END) {
if (flags == strictConversion) {
source -= (extraBytesToRead+1); /* return to the illegal value itself */
result = sourceIllegal;
break;
} else {
*target++ = UNI_REPLACEMENT_CHAR;
}
} else {
*target++ = (UTF16)ch; /* normal case */
}
} else if (ch > UNI_MAX_UTF16) {
if (flags == strictConversion) {
result = sourceIllegal;
source -= (extraBytesToRead+1); /* return to the start */
break; /* Bail out; shouldn't continue */
} else {
*target++ = UNI_REPLACEMENT_CHAR;
}
} else {
/* target is a character in range 0xFFFF - 0x10FFFF. */
if (target + 1 >= targetEnd) {
source -= (extraBytesToRead+1); /* Back up source pointer! */
result = targetExhausted; break;
}
ch -= halfBase;
*target++ = (UTF16)((ch >> halfShift) + UNI_SUR_HIGH_START);
*target++ = (UTF16)((ch & halfMask) + UNI_SUR_LOW_START);
}
}
*sourceStart = source;
*targetStart = target;
return result;
}
/* --------------------------------------------------------------------- */
static ConversionResult ConvertUTF8toUTF32Impl(
const UTF8** sourceStart, const UTF8* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags,
Boolean InputIsPartial) {
ConversionResult result = conversionOK;
const UTF8* source = *sourceStart;
UTF32* target = *targetStart;
while (source < sourceEnd) {
UTF32 ch = 0;
unsigned short extraBytesToRead = trailingBytesForUTF8[*source];
if (extraBytesToRead >= sourceEnd - source) {
if (flags == strictConversion || InputIsPartial) {
result = sourceExhausted;
break;
} else {
result = sourceIllegal;
/*
* Replace the maximal subpart of ill-formed sequence with
* replacement character.
*/
source += findMaximalSubpartOfIllFormedUTF8Sequence(source,
sourceEnd);
*target++ = UNI_REPLACEMENT_CHAR;
continue;
}
}
if (target >= targetEnd) {
result = targetExhausted; break;
}
/* Do this check whether lenient or strict */
if (!isLegalUTF8(source, extraBytesToRead+1)) {
result = sourceIllegal;
if (flags == strictConversion) {
/* Abort conversion. */
break;
} else {
/*
* Replace the maximal subpart of ill-formed sequence with
* replacement character.
*/
source += findMaximalSubpartOfIllFormedUTF8Sequence(source,
sourceEnd);
*target++ = UNI_REPLACEMENT_CHAR;
continue;
}
}
/*
* The cases all fall through. See "Note A" below.
*/
switch (extraBytesToRead) {
case 5: ch += *source++; ch <<= 6;
case 4: ch += *source++; ch <<= 6;
case 3: ch += *source++; ch <<= 6;
case 2: ch += *source++; ch <<= 6;
case 1: ch += *source++; ch <<= 6;
case 0: ch += *source++;
}
ch -= offsetsFromUTF8[extraBytesToRead];
if (ch <= UNI_MAX_LEGAL_UTF32) {
/*
* UTF-16 surrogate values are illegal in UTF-32, and anything
* over Plane 17 (> 0x10FFFF) is illegal.
*/
if (ch >= UNI_SUR_HIGH_START && ch <= UNI_SUR_LOW_END) {
if (flags == strictConversion) {
source -= (extraBytesToRead+1); /* return to the illegal value itself */
result = sourceIllegal;
break;
} else {
*target++ = UNI_REPLACEMENT_CHAR;
}
} else {
*target++ = ch;
}
} else { /* i.e., ch > UNI_MAX_LEGAL_UTF32 */
result = sourceIllegal;
*target++ = UNI_REPLACEMENT_CHAR;
}
}
*sourceStart = source;
*targetStart = target;
return result;
}
ConversionResult ConvertUTF8toUTF32Partial(const UTF8 **sourceStart,
const UTF8 *sourceEnd,
UTF32 **targetStart,
UTF32 *targetEnd,
ConversionFlags flags) {
return ConvertUTF8toUTF32Impl(sourceStart, sourceEnd, targetStart, targetEnd,
flags, /*InputIsPartial=*/true);
}
ConversionResult ConvertUTF8toUTF32(const UTF8 **sourceStart,
const UTF8 *sourceEnd, UTF32 **targetStart,
UTF32 *targetEnd, ConversionFlags flags) {
return ConvertUTF8toUTF32Impl(sourceStart, sourceEnd, targetStart, targetEnd,
flags, /*InputIsPartial=*/false);
}
/* ---------------------------------------------------------------------
Note A.
The fall-through switches in UTF-8 reading code save a
temp variable, some decrements & conditionals. The switches
are equivalent to the following loop:
{
int tmpBytesToRead = extraBytesToRead+1;
do {
ch += *source++;
--tmpBytesToRead;
if (tmpBytesToRead) ch <<= 6;
} while (tmpBytesToRead > 0);
}
In UTF-8 writing code, the switches on "bytesToWrite" are
similarly unrolled loops.
--------------------------------------------------------------------- */
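A short driver makes the partial-input behaviour above concrete. The sketch below is illustrative only (the input bytes and buffer sizes are made up) and relies solely on the API declared in src/ConvertUTF.h, shown next: a chunk cut off in the middle of a code point comes back from the *Partial entry point as sourceExhausted, with the source pointer parked at the incomplete tail so the caller can retry once more data arrives.

#include <stdio.h>
#include "ConvertUTF.h"

int main(void) {
    /* Illustrative input: "A" plus the first two bytes of U+20AC (E2 82 AC),
     * i.e. a buffer cut off mid code point, as happens when reading a stream. */
    const UTF8 chunk[] = { 'A', 0xE2, 0x82 };
    const UTF8 *src = chunk;

    UTF32 out[8];
    UTF32 *dst = out;

    ConversionResult res = ConvertUTF8toUTF32Partial(&src, chunk + sizeof(chunk),
                                                     &dst, out + 8,
                                                     lenientConversion);

    /* Expected: one code point (U+0041) converted, res == sourceExhausted, and
     * src left at the E2 82 tail. The non-Partial ConvertUTF8toUTF32 would
     * instead flag the tail as ill-formed and, under lenientConversion, emit
     * U+FFFD for it. */
    printf("result=%d, %ld code point(s) converted, %ld byte(s) left over\n",
           (int)res, (long)(dst - out), (long)(chunk + sizeof(chunk) - src));
    return 0;
}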

src/ConvertUTF.h (new file, 230 lines)
@@ -0,0 +1,230 @@
/*===--- ConvertUTF.h - Universal Character Names conversions ---------------===
*
* The LLVM Compiler Infrastructure
*
* This file is distributed under the University of Illinois Open Source
* License:
*
* University of Illinois/NCSA
* Open Source License
*
* Copyright (c) 2003-2014 University of Illinois at Urbana-Champaign.
* All rights reserved.
*
* Developed by:
*
* LLVM Team
*
* University of Illinois at Urbana-Champaign
*
* http://llvm.org
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal with the Software without
* restriction, including without limitation the rights to use,
* copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* * Redistributions of source code must retain the above
* copyright notice, this list of conditions and the
* following disclaimers.
*
* * Redistributions in binary form must reproduce the
* above copyright notice, this list of conditions and
* the following disclaimers in the documentation and/or
* other materials provided with the distribution.
*
* * Neither the names of the LLVM Team, University of
* Illinois at Urbana-Champaign, nor the names of its
* contributors may be used to endorse or promote
* products derived from this Software without specific
* prior written permission.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE CONTRIBUTORS OR
* COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
* USE OR OTHER DEALINGS WITH THE SOFTWARE.
*
*==------------------------------------------------------------------------==*/
/*
* Copyright 2001-2004 Unicode, Inc.
*
* Disclaimer
*
* This source code is provided as is by Unicode, Inc. No claims are
* made as to fitness for any particular purpose. No warranties of any
* kind are expressed or implied. The recipient agrees to determine
* applicability of information provided. If this file has been
* purchased on magnetic or optical media from Unicode, Inc., the
* sole remedy for any claim will be exchange of defective media
* within 90 days of receipt.
*
* Limitations on Rights to Redistribute This Code
*
* Unicode, Inc. hereby grants the right to freely use the information
* supplied in this file in the creation of products supporting the
* Unicode Standard, and to make copies of this file in any form
* for internal or external distribution as long as this notice
* remains attached.
*/
/* ---------------------------------------------------------------------
Conversions between UTF32, UTF-16, and UTF-8. Header file.
Several functions are included here, forming a complete set of
conversions between the three formats. UTF-7 is not included
here, but is handled in a separate source file.
Each of these routines takes pointers to input buffers and output
buffers. The input buffers are const.
Each routine converts the text between *sourceStart and sourceEnd,
putting the result into the buffer between *targetStart and
targetEnd. Note: the end pointers are *after* the last item: e.g.
*(sourceEnd - 1) is the last item.
The return result indicates whether the conversion was successful,
and if not, whether the problem was in the source or target buffers.
(Only the first encountered problem is indicated.)
After the conversion, *sourceStart and *targetStart are both
updated to point to the end of last text successfully converted in
the respective buffers.
Input parameters:
sourceStart - pointer to a pointer to the source buffer.
The contents of this are modified on return so that
it points at the next thing to be converted.
targetStart - similarly, pointer to pointer to the target buffer.
sourceEnd, targetEnd - respectively pointers to the ends of the
two buffers, for overflow checking only.
These conversion functions take a ConversionFlags argument. When this
flag is set to strict, both irregular sequences and isolated surrogates
will cause an error. When the flag is set to lenient, both irregular
sequences and isolated surrogates are converted.
Whether the flag is strict or lenient, all illegal sequences will cause
an error return. This includes sequences such as: <F4 90 80 80>, <C0 80>,
or <A0> in UTF-8, and values above 0x10FFFF in UTF-32. Conformant code
must check for illegal sequences.
When the flag is set to lenient, characters over 0x10FFFF are converted
to the replacement character; otherwise (when the flag is set to strict)
they constitute an error.
Output parameters:
The value "sourceIllegal" is returned from some routines if the input
sequence is malformed. When "sourceIllegal" is returned, the source
value will point to the illegal value that caused the problem. E.g.,
in UTF-8 when a sequence is malformed, it points to the start of the
malformed sequence.
Author: Mark E. Davis, 1994.
Rev History: Rick McGowan, fixes & updates May 2001.
Fixes & updates, Sept 2001.
------------------------------------------------------------------------ */
#ifndef LLVM_SUPPORT_CONVERTUTF_H
#define LLVM_SUPPORT_CONVERTUTF_H
/* ---------------------------------------------------------------------
The following 4 definitions are compiler-specific.
The C standard does not guarantee that wchar_t has at least
16 bits, so wchar_t is no less portable than unsigned short!
All should be unsigned values to avoid sign extension during
bit mask & shift operations.
------------------------------------------------------------------------ */
typedef unsigned int UTF32; /* at least 32 bits */
typedef unsigned short UTF16; /* at least 16 bits */
typedef unsigned char UTF8; /* typically 8 bits */
typedef unsigned char Boolean; /* 0 or 1 */
/* Some fundamental constants */
#define UNI_REPLACEMENT_CHAR (UTF32)0x0000FFFD
#define UNI_MAX_BMP (UTF32)0x0000FFFF
#define UNI_MAX_UTF16 (UTF32)0x0010FFFF
#define UNI_MAX_UTF32 (UTF32)0x7FFFFFFF
#define UNI_MAX_LEGAL_UTF32 (UTF32)0x0010FFFF
#define UNI_MAX_UTF8_BYTES_PER_CODE_POINT 4
#define UNI_UTF16_BYTE_ORDER_MARK_NATIVE 0xFEFF
#define UNI_UTF16_BYTE_ORDER_MARK_SWAPPED 0xFFFE
typedef enum {
conversionOK, /* conversion successful */
sourceExhausted, /* partial character in source, but hit end */
targetExhausted, /* insuff. room in target for conversion */
sourceIllegal /* source sequence is illegal/malformed */
} ConversionResult;
typedef enum {
strictConversion = 0,
lenientConversion
} ConversionFlags;
/* This is for C++ and does no harm in C */
#ifdef __cplusplus
extern "C" {
#endif
ConversionResult ConvertUTF8toUTF16 (
const UTF8** sourceStart, const UTF8* sourceEnd,
UTF16** targetStart, UTF16* targetEnd, ConversionFlags flags);
/**
* Convert a partial UTF8 sequence to UTF32. If the sequence ends in an
* incomplete code unit sequence, returns \c sourceExhausted.
*/
ConversionResult ConvertUTF8toUTF32Partial(
const UTF8** sourceStart, const UTF8* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags);
/**
* Convert a partial UTF8 sequence to UTF32. If the sequence ends in an
* incomplete code unit sequence, returns \c sourceIllegal.
*/
ConversionResult ConvertUTF8toUTF32(
const UTF8** sourceStart, const UTF8* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags);
ConversionResult ConvertUTF16toUTF8 (
const UTF16** sourceStart, const UTF16* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags);
ConversionResult ConvertUTF32toUTF8 (
const UTF32** sourceStart, const UTF32* sourceEnd,
UTF8** targetStart, UTF8* targetEnd, ConversionFlags flags);
ConversionResult ConvertUTF16toUTF32 (
const UTF16** sourceStart, const UTF16* sourceEnd,
UTF32** targetStart, UTF32* targetEnd, ConversionFlags flags);
ConversionResult ConvertUTF32toUTF16 (
const UTF32** sourceStart, const UTF32* sourceEnd,
UTF16** targetStart, UTF16* targetEnd, ConversionFlags flags);
Boolean isLegalUTF8Sequence(const UTF8 *source, const UTF8 *sourceEnd);
Boolean isLegalUTF8String(const UTF8 **source, const UTF8 *sourceEnd);
unsigned getNumBytesForUTF8(UTF8 firstByte);
#ifdef __cplusplus
}
#endif
/* --------------------------------------------------------------------- */
#endif
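The calling convention spelled out in the comment block above is the same for every routine in this header. A minimal sketch, with made-up byte values and buffer sizes:

#include <stdio.h>
#include "ConvertUTF.h"

int main(void) {
    /* U+03C0 GREEK SMALL LETTER PI (CF 80) followed by ASCII "x";
     * both start pointers below get advanced by the call. */
    const UTF8 src[] = { 0xCF, 0x80, 'x' };
    const UTF8 *srcStart = src;

    UTF16 out[8];
    UTF16 *outStart = out;

    ConversionResult res = ConvertUTF8toUTF16(&srcStart, src + sizeof(src),
                                              &outStart, out + 8,
                                              strictConversion);

    if (res == conversionOK)
        printf("converted %ld UTF-16 code unit(s)\n", (long)(outStart - out));
    else if (res == sourceExhausted)
        printf("input ended in the middle of a code point\n");
    else if (res == targetExhausted)
        printf("output buffer too small\n");
    else /* sourceIllegal */
        printf("ill-formed input\n");

    return 0;
}

Because *sourceStart and *targetStart are advanced to the end of the successfully converted text, a caller that sees sourceExhausted can top up the source buffer and call again; that resumable pattern is what the ConvertUTF8toUTF32Partial declaration above is intended for.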

src/DebugLogger.cc
@@ -55,32 +55,81 @@ DebugLogger::~DebugLogger()
     fclose(file);
     }
 
+void DebugLogger::ShowStreamsHelp()
+    {
+    fprintf(stderr, "\n");
+    fprintf(stderr, "Enable debug output into debug.log with -B <streams>.\n");
+    fprintf(stderr, "<streams> is a comma-separated list of streams to enable.\n");
+    fprintf(stderr, "\n");
+    fprintf(stderr, "Available streams:\n");
+
+    for ( int i = 0; i < NUM_DBGS; ++i )
+        fprintf(stderr," %s\n", streams[i].prefix);
+
+    fprintf(stderr, "\n");
+    fprintf(stderr, " plugin-<plugin-name> (replace '::' in name with '-'; e.g., '-B plugin-Bro-Netmap')\n");
+    fprintf(stderr, "\n");
+    fprintf(stderr, "Pseudo streams\n");
+    fprintf(stderr, " verbose Increase verbosity.\n");
+    fprintf(stderr, " all Enable all streams at maximum verbosity.\n");
+    fprintf(stderr, "\n");
+    }
+
 void DebugLogger::EnableStreams(const char* s)
     {
+    char* tmp = copy_string(s);
     char* brkt;
-    char* tmp = copy_string(s);
     char* tok = strtok(tmp, ",");
 
     while ( tok )
         {
+        if ( strcasecmp("all", tok) == 0 )
+            {
+            for ( int i = 0; i < NUM_DBGS; ++i )
+                {
+                streams[i].enabled = true;
+                enabled_streams.insert(streams[i].prefix);
+                }
+
+            verbose = true;
+            goto next;
+            }
+
+        if ( strcasecmp("verbose", tok) == 0 )
+            {
+            verbose = true;
+            goto next;
+            }
+
+        if ( strcasecmp("help", tok) == 0 )
+            {
+            ShowStreamsHelp();
+            exit(0);
+            }
+
+        if ( strncmp(tok, "plugin-", strlen("plugin-")) == 0 )
+            {
+            // Cannot verify this at this time, plugins may not
+            // have been loaded.
+            enabled_streams.insert(tok);
+            goto next;
+            }
+
         int i;
         for ( i = 0; i < NUM_DBGS; ++i )
+            {
             if ( strcasecmp(streams[i].prefix, tok) == 0 )
                 {
                 streams[i].enabled = true;
-                break;
-                }
-
-        if ( i == NUM_DBGS )
-            {
-            if ( strcasecmp("verbose", tok) == 0 )
-                verbose = true;
-            else if ( strncmp(tok, "plugin-", 7) != 0 )
-                reporter->FatalError("unknown debug stream %s\n", tok);
-            }
-
-        enabled_streams.insert(tok);
+                enabled_streams.insert(tok);
+                goto next;
+                }
+            }
+
+        reporter->FatalError("unknown debug stream '%s', try -B help.\n", tok);
+
+next:
         tok = strtok(0, ",");
         }

src/DebugLogger.h
@@ -78,6 +78,8 @@ public:
     void SetVerbose(bool arg_verbose) { verbose = arg_verbose; }
     bool IsVerbose() const { return verbose; }
 
+    void ShowStreamsHelp();
+
 private:
     FILE* file;
     bool verbose;

src/Sessions.cc
@@ -466,6 +466,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
     id.src_addr = ip_hdr->SrcAddr();
     id.dst_addr = ip_hdr->DstAddr();
     Dictionary* d = 0;
+    BifEnum::Tunnel::Type tunnel_type = BifEnum::Tunnel::IP;
 
     switch ( proto ) {
     case IPPROTO_TCP:
@@ -606,6 +607,8 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
         // Treat GRE tunnel like IP tunnels, fallthrough to logic below now
         // that GRE header is stripped and only payload packet remains.
+        // The only thing different is the tunnel type enum value to use.
+        tunnel_type = BifEnum::Tunnel::GRE;
         }
 
     case IPPROTO_IPV4:
@@ -653,7 +656,8 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr,
     if ( it == ip_tunnels.end() )
         {
-        EncapsulatingConn ec(ip_hdr->SrcAddr(), ip_hdr->DstAddr());
+        EncapsulatingConn ec(ip_hdr->SrcAddr(), ip_hdr->DstAddr(),
+                             tunnel_type);
         ip_tunnels[tunnel_idx] = TunnelActivity(ec, network_time);
         timer_mgr->Add(new IPTunnelTimer(network_time, tunnel_idx));
         }

Some files were not shown because too many files have changed in this diff.