diff --git a/.gitignore b/.gitignore
index d59a62b7e1..fa397f98d2 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,3 @@
build
tmp
+*.gcov
diff --git a/CHANGES b/CHANGES
index 07e64b4a0f..af9f1b88f5 100644
--- a/CHANGES
+++ b/CHANGES
@@ -1,4 +1,90 @@
+2.5-842 | 2018-08-15 11:00:20 -0500
+
+ * Fix seg fault on trying to type-cast invalid/nil Broker::Data
+ (Jon Siwek, Corelight)
+
+2.5-841 | 2018-08-14 16:45:09 -0500
+
+ * BIT-1798: fix PPTP GRE tunnel decapsulation (Jon Siwek, Corelight)
+
+2.5-840 | 2018-08-13 17:40:06 -0500
+
+ * Fix SumStats::observe key normalization logic
+ (reported by Jim Mellander and fixed by Jon Siwek, Corelight)
+
+2.5-839 | 2018-08-13 10:51:43 -0500
+
+ * Make options redef-able by default. (Johanna Amann, Corelight)
+
+ * Fix incorrect input framework warnings when parsing ports.
+ (Johanna Amann, Corelight)
+
+ * Allow input framework to accept 0 and 1 as valid boolean values.
+ (Johanna Amann, Corelight)
+
+ * Improve the travis-job script to work outside of Travis (Daniel Thayer)
+
+ * Fix validate-certs.bro comments (Jon Siwek, Corelight)
+
+2.5-831 | 2018-08-10 17:12:53 -0500
+
+ * Immediately apply broker subscriptions made during bro_init()
+ (Jon Siwek, Corelight)
+
+ * Update default broker threading configuration to use 4 threads and allow
+ tuning via BRO_BROKER_MAX_THREADS env. variable (Jon Siwek, Corelight)
+
+ * Misc. unit test improvements (Jon Siwek, Corelight)
+
+2.5-826 | 2018-08-08 13:09:27 -0700
+
+ * Add support for code coverage statistics for Bro source files after
+ running the btest test suite
+
+ This adds an --enable-coverage flag to configure Bro with gcov.
+ A new directory named /testing/coverage/ contains a new
+ coverage target. By default a coverage.log is created; running
+ make html in testing/coverage creates an HTML report.
+ (Chung Min Kim, Corelight)
+
+2.5-819 | 2018-08-08 13:03:22 -0500
+
+ * Fix cluster layout graphic and doc warnings (Jon Siwek, Corelight)
+
+ * Added missing tcp-state for signature dpd_rfb_server (Zhongjie Wang)
+
+2.5-815 | 2018-08-06 17:07:56 -0500
+
+ * Fix an "uninitialized" compiler warning (Jon Siwek, Corelight)
+
+ * Fix (non)suppression of proxy-bound events in known-*.bro scripts
+ (Jon Siwek, Corelight)
+
+2.5-811 | 2018-08-03 11:33:57 -0500
+
+ * Update scripts to use vector "+=" append operation (Vern Paxson, Corelight)
+
+ * Add vector "+=" append operation (Vern Paxson, Corelight)
+
+ * Improve a travis output message in pull request builds (Daniel Thayer)
+
+ * Use default version of OpenSSL on all travis docker containers
+ (Daniel Thayer)
+
+2.5-802 | 2018-08-02 10:40:36 -0500
+
+ * Add set operations: union, intersection, difference, comparison
+ (Vern Paxson, Corelight)
+
+2.5-796 | 2018-08-01 16:31:25 -0500
+
+ * Add 'W' connection history indicator for zero windows
+ (Vern Paxson, Corelight)
+
+ * Allow logarithmic 'T'/'C'/'W' connection history repetitions, which
+ also now raise their own events (Vern Paxson, Corelight)
+
2.5-792 | 2018-08-01 12:15:31 -0500
* fix NTLM NegotiateFlags field offsets (Jeffrey Bencteux)
diff --git a/NEWS b/NEWS
index 5a2ad70c56..4c1ebc6baa 100644
--- a/NEWS
+++ b/NEWS
@@ -283,6 +283,39 @@ New Functionality
- Bro now supports OpenSSL 1.1.
+- The new connection/conn.log history character 'W' indicates that
+ the originator ('w' = responder) advertised a TCP zero window
+ (instructing the peer to not send any data until receiving a
+ non-zero window).
+
+- The connection/conn.log history characters 'C' (checksum error seen),
+ 'T' (retransmission seen), and 'W' (zero window advertised) are now
+ repeated in a logarithmic fashion upon seeing multiple instances
+ of the corresponding behavior. Thus a connection with 2 C's in its
+ history means that the originator sent >= 10 packets with checksum
+ errors; 3 C's means >= 100, etc.
+
+- The above connection history behaviors occurring multiple times
+ (i.e., starting at 10 instances, then again for 100 instances,
+ etc.) generate corresponding events: tcp_multiple_checksum_errors,
+ udp_multiple_checksum_errors, tcp_multiple_zero_windows, and
+ tcp_multiple_retransmissions. Each has the same form, e.g.
+
+ event tcp_multiple_retransmissions(c: connection, is_orig: bool,
+ threshold: count);
+
+- Added support for set union, intersection, difference, and comparison
+ operations. The corresponding operators for the first three are
+ "s1 | s2", "s1 & s2", and "s1 - s2". Relationals are in terms
+ of subsets, so "s1 < s2" yields true if s1 is a proper subset of s2
+ and "s1 == s2" if the two sets have exactly the same elements.
+ "s1 <= s2" holds for subsets or equality, and similarly "s1 != s2",
+ "s1 > s2", and "s1 >= s2" have the expected meanings in terms
+ of non-equality, proper superset, and superset-or-equal.
+
+- An expression of the form "v += e" will append the value of the expression
+ "e" to the end of the vector "v" (of course assuming type-compatibility).
+
Changed Functionality
---------------------
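As a quick illustration of the set operators and vector append described in the NEWS entries above, here is a sketch in Bro script. The identifiers (`s1`, `s2`, `v`) are illustrative only, and the comments describe set contents rather than exact print formatting:

```bro
event bro_init()
    {
    local s1: set[count] = set(1, 2, 3);
    local s2: set[count] = set(2, 3, 4);

    # Union, intersection and difference via the new operators.
    local u = s1 | s2;    # contains 1, 2, 3, 4
    local i = s1 & s2;    # contains 2, 3
    local d = s1 - s2;    # contains 1

    # Relationals are in terms of subsets.
    print i < s1;     # T: proper subset
    print s1 <= s1;   # T: subset-or-equal holds for equal sets

    # Vector append, equivalent to v[|v|] = 5.
    local v = vector(1, 2, 3, 4);
    v += 5;
    print |v|;        # 5
    }
```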
diff --git a/VERSION b/VERSION
index 67eaa8d910..4744b3cd09 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-2.5-792
+2.5-842
diff --git a/aux/bifcl b/aux/bifcl
index f648ad79b2..e99152c00a 160000
--- a/aux/bifcl
+++ b/aux/bifcl
@@ -1 +1 @@
-Subproject commit f648ad79b20baba4f80259d059044ae78d56d7c4
+Subproject commit e99152c00aad8f81c684a01bc4d40790a295f85c
diff --git a/aux/binpac b/aux/binpac
index 7e6b47ee90..74cf55ace0 160000
--- a/aux/binpac
+++ b/aux/binpac
@@ -1 +1 @@
-Subproject commit 7e6b47ee90c5b6ab80b0e6f93d5cf835fd86ce4e
+Subproject commit 74cf55ace0de2bf061bbbf285ccf47cba122955f
diff --git a/aux/bro-aux b/aux/bro-aux
index 1b27ec4c24..53aae82024 160000
--- a/aux/bro-aux
+++ b/aux/bro-aux
@@ -1 +1 @@
-Subproject commit 1b27ec4c24bb13443f1a5e8f4249ff4e20e06dd1
+Subproject commit 53aae820242c02790089e384a9fe2d3174799ab1
diff --git a/aux/broccoli b/aux/broccoli
index 80a4aa6892..edf754ea6e 160000
--- a/aux/broccoli
+++ b/aux/broccoli
@@ -1 +1 @@
-Subproject commit 80a4aa68927c2f60ece1200268106edc27f50338
+Subproject commit edf754ea6e89a84ad74eff69a454c5e285c4b81b
diff --git a/aux/broctl b/aux/broctl
index d900149ef6..70a8b2e151 160000
--- a/aux/broctl
+++ b/aux/broctl
@@ -1 +1 @@
-Subproject commit d900149ef6e2f744599a9575b67fb7155953bd4a
+Subproject commit 70a8b2e15105f4c238765a882151718162e46208
diff --git a/aux/broker b/aux/broker
index 488dc806d0..e0f9f6504d 160000
--- a/aux/broker
+++ b/aux/broker
@@ -1 +1 @@
-Subproject commit 488dc806d0f777dcdee04f3794f04121ded08b8e
+Subproject commit e0f9f6504db9285a48e0be490abddf959999a404
diff --git a/cmake b/cmake
index ec66fd8fbd..4cc3e344cf 160000
--- a/cmake
+++ b/cmake
@@ -1 +1 @@
-Subproject commit ec66fd8fbd81fd8b25ced0214016f4c89604a15c
+Subproject commit 4cc3e344cf2698010a46684d32a2907a943430e3
diff --git a/configure b/configure
index 46aaeb3bbd..90dda2fdd7 100755
--- a/configure
+++ b/configure
@@ -45,6 +45,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
Optional Features:
--enable-debug compile in debugging mode (like --build-type=Debug)
+ --enable-coverage compile with code coverage support (implies debugging mode)
--enable-mobile-ipv6 analyze mobile IPv6 features defined by RFC 6275
--enable-perftools force use of Google perftools on non-Linux systems
(automatically on when perftools is present on Linux)
@@ -141,6 +142,8 @@ append_cache_entry INSTALL_BROCTL BOOL true
append_cache_entry CPACK_SOURCE_IGNORE_FILES STRING
append_cache_entry ENABLE_MOBILE_IPV6 BOOL false
append_cache_entry DISABLE_PERFTOOLS BOOL false
+append_cache_entry DISABLE_RUBY_BINDINGS BOOL true
+append_cache_entry ENABLE_COVERAGE BOOL false
# parse arguments
while [ $# -ne 0 ]; do
@@ -196,6 +199,10 @@ while [ $# -ne 0 ]; do
--logdir=*)
append_cache_entry BRO_LOG_DIR PATH $optarg
;;
+ --enable-coverage)
+ append_cache_entry ENABLE_COVERAGE BOOL true
+ append_cache_entry ENABLE_DEBUG BOOL true
+ ;;
--enable-debug)
append_cache_entry ENABLE_DEBUG BOOL true
;;
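Taken together with the CHANGES entry above, a coverage run might look like the following sketch. The exact make invocations under testing/ are assumptions based on that entry, not verified against the build system:

```
./configure --enable-coverage    # implies debugging mode (--build-type=Debug)
make
make -C testing                  # run the btest suite (assumed target)
make -C testing/coverage         # writes coverage.log by default
make -C testing/coverage html    # generates an HTML report
```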
diff --git a/doc/frameworks/broker/cluster-layout.png b/doc/frameworks/broker/cluster-layout.png
index 8016c7321b..3813bfbfda 100644
Binary files a/doc/frameworks/broker/cluster-layout.png and b/doc/frameworks/broker/cluster-layout.png differ
diff --git a/doc/frameworks/broker/cluster-layout.xml b/doc/frameworks/broker/cluster-layout.xml
index d0fe32437f..4269c6723f 100644
--- a/doc/frameworks/broker/cluster-layout.xml
+++ b/doc/frameworks/broker/cluster-layout.xml
@@ -1,2 +1,2 @@
-7Vxdc6M2FP01flwPkpCAx0022T60MzuTTrt9lEGx2cXIg0ns9NdXGAkjAbExH3a6+CXWRRIgnXN075WcGbpf778mdLP6gwcsmkEr2M/QlxkUH9cRfzLLW25xLDs3LJMwyE3gaHgK/2XSaEnrSxiwrVYx5TxKw41u9HkcMz/VbDRJ+E6v9swj/a4bumQVw5NPo6r17zBIV9IKLOt44TcWLlfy1i6WFxbU/7lM+Ess7zeD6PnwyS+vqepL1t+uaMB3JRN6mKH7hPM0/7be37MoG1s1bHm7x4arxXMnLE7PasAoAo7jWQH0qeeiTwDlXbzS6IWpdzg8afqmRocFYrBkMeax+HN3eGWWdQpEaZWuI/k1ogsW3RWjcs8jnhybbVOapJ+zCTNsj2GU9WCpsoQIFmUWB6qFH9HtNvT/XIVxfkE2A3mp1OgHS9M3WaYvKRcmnqQrvuQxjX7nfCNbbdOE/2TqKcXsecRBn0lxRaEhq/vM4/SRrsMoA/lfLAloTKVZ3glAWS51aB0+wh7zh9I4Hh55H6bfs7eeY1n6R47B8Vll1eo8y6nf8pfEZ02TK6lEkyVLG+p4eZ1sjksdS/R8ZXzN0uRNVEhYRNPwVScMlbxbFvVkUzFj9K1UYcPDON2Wev6WGUQFJSGKaVJAkAN1HJv1ERSDVm5h23a5hfiSP4MqlV7maDqw41ym2BNTfmmmoJtgCiDtmAKgMzpTKkRZCwAsWTKDJBKje7fIvi3TYrbKDIoisehnaN+twpQ9behhznbC79Dpc+TVgQpqXc0u+Xwd+vLCKZbVgJo2gFowqFSTgQAzpzvYaRQu44yxAq9ihN4B8CtLUrZ/F3rqKtJnHCl13R2dG0+aViW3RkGrDqwluLRDA6yRzRwG2w2NFRA2Cd+/fYJ1CMkt4r7l+rcGnMDxFpZ1DnCenxnx/dsEDoQ6cACuAsfBVeDgIYBT55lWgRPxpVCWTHM+Bk7IguCzBEYEEfBWceLpa5BNrggTrwITCYka92ya+x4WFw9fbfKnYFUD1BfHu2tadLoBp3C4S+52/uBixr6XC8oRrzrtDa654T+399XhGb56L655xZe2CZp7jo0wsICNXAc5GhM8OEeQeBhjghwM9d7zV5IdvhfZemBORC8WcsQdsHLWixXZtudZ7454S2IDixi3yUelcpvmKEOjXjvHvy4gbvLswOTZdVXtikTXUKdRtY2ocUzHDk+iPa5oq4xJRbSt0UTbvppGE2jkRxC5TIix8jZUR8ToaEypJROHRuJQwZU5shyNL3OL1FKmPjlZS6ZDZ99YEoqpz5T9MJTDMcwbhGHINhgG8GUMs1HfDGtIgVpGxAK0TbLOCVBnIufIUUmFaSXaEtdgLTxvoeuRmxeFLGgYsiJX55jlXEZW5OgdAWh01BNZIR6WrEo4S+TU44YSE5uj3YvJcediG9eToy7mzpE+mJvmVXFpDwJDc3/XNrcaGmDYWuqhHi57PYOHtFN2E0+nHPRRcDYmmoZxSdqiAiEj+rWMUzIn6qsweKj6vWuc2w2mx8UUjyp8lWX9qlCFwygh0dfR7krYIaxEoBtQavSshJmqos07OZaTppU1xMzondI00lKjOtbvW9Oq23x1yeYdT37mW38fNNu8aMa4mW12fXar2eYi6TvGhjCsQcYUfl9jU9Ca26a/MG7gfU5SbIqzNToOJNfTufLb2fWZE2Jmsj9ITmyYBDZRbm+RwHYv4yp29Y4ANNazvnJi7sA5MVzh5gU+/yCh4fhbl4CcAcxhYkPXzJKBgfCk9kQHypI57aS+AWHVPOiVIscGEB4eaUCMDeOoAAtqc4/71qxT8eipePLM671roNcPaN/bmholmzaqMPayWzBMasvqZz6N2bwtCfqYAnRiK6inRa5tzr9luqtT9d6jrbrzQY3ZsY97FvP/kR0b89cSdWnTXzgMf0eFB0qOXXVnYpgjltA4T3XxEUvbPElmQr6vA2DGT1aMtaC771j9xevEslGTXa1
cqDnKkKLlwwrPvKdk1znu1TDkLE5jKlfDMzh1LjmLnwgoF8fsqC9ykl7JKYrHf6eSVz/+zxr08B8=
\ No newline at end of file
+7VxLc6M4EP41Po4LSUjAcZJJZg+7VVOVrd3ZowwKZgYjFyaxvb9+hZEwEhA/eITs2JdYrZYE0ve1ultyZuh+tfua0vXyDx6weAatYDdDX2ZQfDxL/Mkl+0Li2G4hCNMoKETgKHiK/mVSKNuFL1HANppixnmcRWtd6PMkYX6myWia8q2u9sxjfdQ1DVlN8OTTuC79OwqypZQCyzpW/MaicCmHdrGsWFD/Z5jyl0SON4Po+fApqldU9SX1N0sa8G1FhB5m6D7lPCu+rXb3LM7nVk1b0e6xpbZ87pQl2VkNGEXAcTwrgD71XPQJoKKLVxq/MPUOhyfN9mp2WCAmSxYTnog/d4dXZnmnQJSW2SqWX2O6YPFdOSv3PObpsdkmo2n2OV8wQ/YYxXkPlipLiGBRZkmgWvgx3Wwi/89llBQVshkoSpVGP1iW7WWZvmRciHiaLXnIExr/zvlattpkKf/J1FOK1fOIgz6TskahIdd95kn2SFdRnIP8L5YGNKFSLEcCUJYrHVqHj5An/KEyj4dH3kXZ9/yt51iW/pFzcHxWqVpfZ7n0G/6S+qxtcSWVaBqyrEXHK3TyNa50LNHzlfEVy9K9UEhZTLPoVScMlbwLSz3ZVKwY3VcU1jxKsk2l52+5QCgoE6KYJg0IcqCOY1MfQTFp1Ra2bVdbiC/FM6hS5WWOogM7zmWKfWPKL80UNAmmAHIZUwB0RmdKjSgrAYCQpTNIYjG7d4v8W5iVq1VlUByLTT9H+3YZZexpTQ9rthV+h06fI68OVFD7al7l81Xky4pTLGsANW0BtWBQRZOBADOnO9hpHIVJzliBVzFDbwD4laUZ270JPVWL9BVHyrpuj86NctmWFbdGQasJrBW4XIYG2GA2Cxhs1jRRQFinfLf/BJsQUkjEuFX9qQEncLyFZZ0DnOdnRnx/msCBUAcOwHXgOLgOHDwEcJo80zpwYh4Ky5LbnI+BE7Ig+CwDI4IIOFWcePoeZJN3hIlXg4mERIN7dlv7HjYXD7/b4t+CVQ1QXxzvrm3T6Qac0uGuuNvFg4sV+14tKEe87rT35JrDM1xzMIhrXvOlbYLmnmMjDCxgI9dBjsYED84RJB7GmCAHQ7334h1lh29Fth6YE9GLhRwxAlbOerkj2/Y8790Rr01sYBFjmGKaasO0Rxka9S5z/JsC4jbPDtw8u3e12kbUOKZjh29Ge1yjrTImNaNtDWW07enYaAKN/Agi1xlirLwN1RExOhrT1JIbh0biUMmVObIcjS9zizRSpjk52UimQ2ffWBqJpc8t+2Eqe2PYMKn8GjGQbTAM4OsYZqO+GdaSArWMiAVoh2SdE6DOjZwjRyU1plVoS1yDtfC8je56bl4VsgxzmlAnK3J1jlnOdWRFjt4RgEZHPZEV4mHJqixphZx63FBhYnu0ezU57lxs42ZyNMXcBdL7ctO8Oi7tcWBonu/a5lFDCwwvNvVQD5e9nsFDLrPsJp5OOeij4GxANE30dgFCRvRrGbdkTuirMHgo/d5tnNsNpsfNFI9q+Grb+phQhSNZQqLvo90tYYewEoFuQGmwZxXM1C3avJNjORGbNo17IMjM6J2yaeRCG9VR37BpdR4YUSS2+7WB9WPBpuT0lqc/i6PCD5qdXrRzwsxOuz6bana6TBKPcYAMG5BxC9ff4xDRmtumfzFooH5OEu0WlzeP4wwblt/uoU/nlGhOiJn5nmYObaSEN1Fucpnwdq/jKnb1jgA09rO+cmjuwDk0XOPmFTHCIKHk4EedgJwBzJFiSdfMqoGB8KTOUAfKqjmXmfoWhNXzpu8UabaA8PBI/WFsJOMHjPN0bOYrumLsVPx6Kv48s/5UPFp7z57jURWQdgX5W0dfo2TrhjSkw5xGDJM6s/pZT2M1p2WyejVYI0VW4NRRU0+b4qVnChem0zqp9x6dNd0/as2mfdy7nv+PbNqYv8ZoSrP+wmH7G1Z4oGTamCcfI13hhMZ9rauvcNrmTTUT8n1dMDN+EmP
sBd19x/ovam8sGzU5dpELNUc5UrT8WemZX5ccO8e9Gomc5W1P5Wp4BqfOJWf5EwTl4pgd9UVO0is5RfH471oK9eP/xEEP/wE=
\ No newline at end of file
diff --git a/doc/intro/index.rst b/doc/intro/index.rst
index cf448a0c84..b58a4dbb5b 100644
--- a/doc/intro/index.rst
+++ b/doc/intro/index.rst
@@ -169,7 +169,7 @@ History
Bro's history goes back much further than many people realize. `Vern
Paxson `_ designed and implemented the
-initial version almost two decades ago.
+initial version more than two decades ago.
Vern began work on the code in 1995 as a researcher at the `Lawrence
Berkeley National Laboratory (LBNL) `_. Berkeley
Lab began operational deployment in 1996, and the USENIX Security
diff --git a/doc/script-reference/types.rst b/doc/script-reference/types.rst
index f1d81f63ab..cfb47270ff 100644
--- a/doc/script-reference/types.rst
+++ b/doc/script-reference/types.rst
@@ -544,6 +544,15 @@ Here is a more detailed description of each type:
|s|
+ You can compute the union, intersection, or difference of two sets
+ using the ``|``, ``&``, and ``-`` operators. You can compare
+ sets for equality (they have exactly the same elements) using ``==``.
+ The ``<`` operator returns ``T`` if the lefthand operand is a proper
+ subset of the righthand operand. Similarly, ``<=`` returns ``T``
+ if the lefthand operand is a subset (not necessarily proper, i.e.,
+ it may be equal to the righthand operand). The operators ``!=``, ``>``
+ and ``>=`` provide the expected complementary operations.
+
See the :bro:keyword:`for` statement for info on how to iterate over
the elements in a set.
@@ -599,6 +608,20 @@ Here is a more detailed description of each type:
|v|
+ A particularly common operation on a vector is to append an element
+ to its end. You can do so using:
+
+ .. code:: bro
+
+ v += e;
+
+ where, if ``e`` has type ``X``, ``v`` has type ``vector of X``. Note that
+ this expression is equivalent to:
+
+ .. code:: bro
+
+ v[|v|] = e;
+
Vectors of integral types (``int`` or ``count``) support the pre-increment
(``++``) and pre-decrement operators (``--``), which will increment or
decrement each element in the vector.
diff --git a/doc/scripting/data_struct_vector_declaration.bro b/doc/scripting/data_struct_vector_declaration.bro
index 6d684d09b1..e9b31880f6 100644
--- a/doc/scripting/data_struct_vector_declaration.bro
+++ b/doc/scripting/data_struct_vector_declaration.bro
@@ -3,10 +3,10 @@ event bro_init()
local v1: vector of count;
local v2 = vector(1, 2, 3, 4);
- v1[|v1|] = 1;
- v1[|v1|] = 2;
- v1[|v1|] = 3;
- v1[|v1|] = 4;
+ v1 += 1;
+ v1 += 2;
+ v1 += 3;
+ v1 += 4;
print fmt("contents of v1: %s", v1);
print fmt("length of v1: %d", |v1|);
diff --git a/scripts/base/files/pe/main.bro b/scripts/base/files/pe/main.bro
index b2723e4138..972e8a31c8 100644
--- a/scripts/base/files/pe/main.bro
+++ b/scripts/base/files/pe/main.bro
@@ -126,7 +126,7 @@ event pe_section_header(f: fa_file, h: PE::SectionHeader) &priority=5
if ( ! f$pe?$section_names )
f$pe$section_names = vector();
- f$pe$section_names[|f$pe$section_names|] = h$name;
+ f$pe$section_names += h$name;
}
event file_state_remove(f: fa_file) &priority=-5
diff --git a/scripts/base/files/x509/main.bro b/scripts/base/files/x509/main.bro
index 30f60f1362..b6fdde5494 100644
--- a/scripts/base/files/x509/main.bro
+++ b/scripts/base/files/x509/main.bro
@@ -66,7 +66,7 @@ event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certifi
event x509_extension(f: fa_file, ext: X509::Extension) &priority=5
{
if ( f$info?$x509 )
- f$info$x509$extensions[|f$info$x509$extensions|] = ext;
+ f$info$x509$extensions += ext;
}
event x509_ext_basic_constraints(f: fa_file, ext: X509::BasicConstraints) &priority=5
diff --git a/scripts/base/frameworks/broker/main.bro b/scripts/base/frameworks/broker/main.bro
index 0fdf289ee6..645c2b4382 100644
--- a/scripts/base/frameworks/broker/main.bro
+++ b/scripts/base/frameworks/broker/main.bro
@@ -56,9 +56,11 @@ export {
## control mechanisms).
const congestion_queue_size = 200 &redef;
- ## Max number of threads to use for Broker/CAF functionality.
- ## Using zero will cause this to be automatically determined
- ## based on number of available CPUs.
+ ## Max number of threads to use for Broker/CAF functionality. Setting
+ ## this to zero means the value of the BRO_BROKER_MAX_THREADS environment
+ ## variable is used if set, or else a default of 4 (actually 2 threads
+ ## when simply reading offline pcaps, as no communication is expected
+ ## and more threads would just add overhead).
const max_threads = 0 &redef;
## Max number of microseconds for under-utilized Broker/CAF
@@ -259,7 +261,8 @@ export {
global publish_id: function(topic: string, id: string): bool;
## Register interest in all peer event messages that use a certain topic
- ## prefix.
+ ## prefix. Note that subscriptions may not be altered immediately after
+ ## calling (except during :bro:see:`bro_init`).
##
## topic_prefix: a prefix to match against remote message topics.
## e.g. an empty prefix matches everything and "a" matches
@@ -269,6 +272,8 @@ export {
global subscribe: function(topic_prefix: string): bool;
## Unregister interest in all peer event messages that use a topic prefix.
+ ## Note that subscriptions may not be altered immediately after calling
+ ## (except during :bro:see:`bro_init`).
##
## topic_prefix: a prefix previously supplied to a successful call to
## :bro:see:`Broker::subscribe`.
diff --git a/scripts/base/frameworks/cluster/main.bro b/scripts/base/frameworks/cluster/main.bro
index 8eb4bf90bf..779f0c159a 100644
--- a/scripts/base/frameworks/cluster/main.bro
+++ b/scripts/base/frameworks/cluster/main.bro
@@ -251,7 +251,7 @@ function nodes_with_type(node_type: NodeType): vector of NamedNode
local names: vector of string = vector();
for ( name in Cluster::nodes )
- names[|names|] = name;
+ names += name;
names = sort(names, strcmp);
@@ -263,7 +263,7 @@ function nodes_with_type(node_type: NodeType): vector of NamedNode
if ( n$node_type != node_type )
next;
- rval[|rval|] = NamedNode($name=name, $node=n);
+ rval += NamedNode($name=name, $node=n);
}
return rval;
diff --git a/scripts/base/frameworks/cluster/pools.bro b/scripts/base/frameworks/cluster/pools.bro
index fb45594adc..ac8673b7e8 100644
--- a/scripts/base/frameworks/cluster/pools.bro
+++ b/scripts/base/frameworks/cluster/pools.bro
@@ -157,7 +157,7 @@ global registered_pools: vector of Pool = vector();
function register_pool(spec: PoolSpec): Pool
{
local rval = Pool($spec = spec);
- registered_pools[|registered_pools|] = rval;
+ registered_pools += rval;
return rval;
}
@@ -276,7 +276,7 @@ function init_pool_node(pool: Pool, name: string): bool
local pn = PoolNode($name=name, $alias=alias, $site_id=site_id,
$alive=Cluster::node == name);
pool$nodes[name] = pn;
- pool$node_list[|pool$node_list|] = pn;
+ pool$node_list += pn;
if ( pn$alive )
++pool$alive_count;
@@ -366,7 +366,7 @@ event bro_init() &priority=-5
if ( |mgr| > 0 )
{
local eln = pool_eligibility[Cluster::LOGGER]$eligible_nodes;
- eln[|eln|] = mgr[0];
+ eln += mgr[0];
}
}
@@ -423,7 +423,7 @@ event bro_init() &priority=-5
if ( j < e )
next;
- nen[|nen|] = pet$eligible_nodes[j];
+ nen += pet$eligible_nodes[j];
}
pet$eligible_nodes = nen;
diff --git a/scripts/base/frameworks/config/main.bro b/scripts/base/frameworks/config/main.bro
index b11e67659a..30ddeaf3b9 100644
--- a/scripts/base/frameworks/config/main.bro
+++ b/scripts/base/frameworks/config/main.bro
@@ -120,14 +120,14 @@ function format_value(value: any) : string
{
local it: set[bool] = value;
for ( sv in it )
- part[|part|] = cat(sv);
+ part += cat(sv);
return join_string_vec(part, ",");
}
else if ( /^vector/ in tn )
{
local vit: vector of any = value;
for ( i in vit )
- part[|part|] = cat(vit[i]);
+ part += cat(vit[i]);
return join_string_vec(part, ",");
}
else if ( tn == "string" )
diff --git a/scripts/base/frameworks/netcontrol/main.bro b/scripts/base/frameworks/netcontrol/main.bro
index 3e9b35fa8c..a9418508af 100644
--- a/scripts/base/frameworks/netcontrol/main.bro
+++ b/scripts/base/frameworks/netcontrol/main.bro
@@ -555,19 +555,19 @@ function quarantine_host(infected: addr, dns: addr, quarantine: addr, t: interva
local orules: vector of string = vector();
local edrop: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected))];
local rdrop: Rule = [$ty=DROP, $target=FORWARD, $entity=edrop, $expire=t, $location=location];
- orules[|orules|] = add_rule(rdrop);
+ orules += add_rule(rdrop);
local todnse: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected), $dst_h=addr_to_subnet(dns), $dst_p=53/udp)];
local todnsr = Rule($ty=MODIFY, $target=FORWARD, $entity=todnse, $expire=t, $location=location, $mod=FlowMod($dst_h=quarantine), $priority=+5);
- orules[|orules|] = add_rule(todnsr);
+ orules += add_rule(todnsr);
local fromdnse: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(dns), $src_p=53/udp, $dst_h=addr_to_subnet(infected))];
local fromdnsr = Rule($ty=MODIFY, $target=FORWARD, $entity=fromdnse, $expire=t, $location=location, $mod=FlowMod($src_h=dns), $priority=+5);
- orules[|orules|] = add_rule(fromdnsr);
+ orules += add_rule(fromdnsr);
local wle: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected), $dst_h=addr_to_subnet(quarantine), $dst_p=80/tcp)];
local wlr = Rule($ty=WHITELIST, $target=FORWARD, $entity=wle, $expire=t, $location=location, $priority=+5);
- orules[|orules|] = add_rule(wlr);
+ orules += add_rule(wlr);
return orules;
}
@@ -637,7 +637,7 @@ event NetControl::init() &priority=-20
function activate_impl(p: PluginState, priority: int)
{
p$_priority = priority;
- plugins[|plugins|] = p;
+ plugins += p;
sort(plugins, function(p1: PluginState, p2: PluginState) : int { return p2$_priority - p1$_priority; });
plugin_ids[plugin_counter] = p;
@@ -734,7 +734,7 @@ function find_rules_subnet(sn: subnet) : vector of Rule
for ( rule_id in rules_by_subnets[sn_entry] )
{
if ( rule_id in rules )
- ret[|ret|] = rules[rule_id];
+ ret += rules[rule_id];
else
Reporter::error("find_rules_subnet - internal data structure error, missing rule");
}
diff --git a/scripts/base/frameworks/netcontrol/plugins/openflow.bro b/scripts/base/frameworks/netcontrol/plugins/openflow.bro
index c528a1ba3e..f1403a70a8 100644
--- a/scripts/base/frameworks/netcontrol/plugins/openflow.bro
+++ b/scripts/base/frameworks/netcontrol/plugins/openflow.bro
@@ -158,17 +158,17 @@ function entity_to_match(p: PluginState, e: Entity): vector of OpenFlow::ofp_mat
if ( e$ty == CONNECTION )
{
- v[|v|] = OpenFlow::match_conn(e$conn); # forward and...
- v[|v|] = OpenFlow::match_conn(e$conn, T); # reverse
+ v += OpenFlow::match_conn(e$conn); # forward and...
+ v += OpenFlow::match_conn(e$conn, T); # reverse
return openflow_match_pred(p, e, v);
}
if ( e$ty == MAC )
{
- v[|v|] = OpenFlow::ofp_match(
+ v += OpenFlow::ofp_match(
$dl_src=e$mac
);
- v[|v|] = OpenFlow::ofp_match(
+ v += OpenFlow::ofp_match(
$dl_dst=e$mac
);
@@ -182,12 +182,12 @@ function entity_to_match(p: PluginState, e: Entity): vector of OpenFlow::ofp_mat
if ( is_v6_subnet(e$ip) )
dl_type = OpenFlow::ETH_IPv6;
- v[|v|] = OpenFlow::ofp_match(
+ v += OpenFlow::ofp_match(
$dl_type=dl_type,
$nw_src=e$ip
);
- v[|v|] = OpenFlow::ofp_match(
+ v += OpenFlow::ofp_match(
$dl_type=dl_type,
$nw_dst=e$ip
);
@@ -231,7 +231,7 @@ function entity_to_match(p: PluginState, e: Entity): vector of OpenFlow::ofp_mat
m$tp_dst = port_to_count(f$dst_p);
}
- v[|v|] = m;
+ v += m;
return openflow_match_pred(p, e, v);
}
diff --git a/scripts/base/frameworks/openflow/plugins/ryu.bro b/scripts/base/frameworks/openflow/plugins/ryu.bro
index f022fe0f03..cc400293a0 100644
--- a/scripts/base/frameworks/openflow/plugins/ryu.bro
+++ b/scripts/base/frameworks/openflow/plugins/ryu.bro
@@ -88,7 +88,7 @@ function ryu_flow_mod(state: OpenFlow::ControllerState, match: ofp_match, flow_m
local flow_actions: vector of ryu_flow_action = vector();
for ( i in flow_mod$actions$out_ports )
- flow_actions[|flow_actions|] = ryu_flow_action($_type="OUTPUT", $_port=flow_mod$actions$out_ports[i]);
+ flow_actions += ryu_flow_action($_type="OUTPUT", $_port=flow_mod$actions$out_ports[i]);
# Generate our ryu_flow_mod record for the ReST API call.
local mod: ryu_ofp_flow_mod = ryu_ofp_flow_mod(
diff --git a/scripts/base/frameworks/sumstats/main.bro b/scripts/base/frameworks/sumstats/main.bro
index edd80ede0f..69a853fd5a 100644
--- a/scripts/base/frameworks/sumstats/main.bro
+++ b/scripts/base/frameworks/sumstats/main.bro
@@ -267,7 +267,7 @@ function add_observe_plugin_dependency(calc: Calculation, depends_on: Calculatio
{
if ( calc !in calc_deps )
calc_deps[calc] = vector();
- calc_deps[calc][|calc_deps[calc]|] = depends_on;
+ calc_deps[calc] += depends_on;
}
event bro_init() &priority=100000
@@ -348,7 +348,7 @@ function add_calc_deps(calcs: vector of Calculation, c: Calculation)
{
if ( calc_deps[c][i] in calc_deps )
add_calc_deps(calcs, calc_deps[c][i]);
- calcs[|c|] = calc_deps[c][i];
+ calcs += calc_deps[c][i];
#print fmt("add dep for %s [%s] ", c, calc_deps[c][i]);
}
}
@@ -387,7 +387,7 @@ function create(ss: SumStat)
skip_calc=T;
}
if ( ! skip_calc )
- reducer$calc_funcs[|reducer$calc_funcs|] = calc;
+ reducer$calc_funcs += calc;
}
if ( reducer$stream !in reducer_store )
@@ -399,7 +399,7 @@ function create(ss: SumStat)
schedule ss$epoch { SumStats::finish_epoch(ss) };
}
-function observe(id: string, key: Key, obs: Observation)
+function observe(id: string, orig_key: Key, obs: Observation)
{
if ( id !in reducer_store )
return;
@@ -407,8 +407,7 @@ function observe(id: string, key: Key, obs: Observation)
# Try to add the data to all of the defined reducers.
for ( r in reducer_store[id] )
{
- if ( r?$normalize_key )
- key = r$normalize_key(copy(key));
+ local key = r?$normalize_key ? r$normalize_key(copy(orig_key)) : orig_key;
# If this reducer has a predicate, run the predicate
# and skip this key if the predicate return false.
diff --git a/scripts/base/frameworks/sumstats/non-cluster.bro b/scripts/base/frameworks/sumstats/non-cluster.bro
index 57785a03b2..100e8dad4a 100644
--- a/scripts/base/frameworks/sumstats/non-cluster.bro
+++ b/scripts/base/frameworks/sumstats/non-cluster.bro
@@ -11,7 +11,7 @@ event SumStats::process_epoch_result(ss: SumStat, now: time, data: ResultTable)
for ( key in data )
{
ss$epoch_result(now, key, data[key]);
- keys_to_delete[|keys_to_delete|] = key;
+ keys_to_delete += key;
if ( --i == 0 )
break;
diff --git a/scripts/base/frameworks/sumstats/plugins/sample.bro b/scripts/base/frameworks/sumstats/plugins/sample.bro
index 0200e85949..2f96c5eb30 100644
--- a/scripts/base/frameworks/sumstats/plugins/sample.bro
+++ b/scripts/base/frameworks/sumstats/plugins/sample.bro
@@ -43,7 +43,7 @@ function sample_add_sample(obs:Observation, rv: ResultVal)
++rv$sample_elements;
if ( |rv$samples| < rv$num_samples )
- rv$samples[|rv$samples|] = obs;
+ rv$samples += obs;
else
{
local ra = rand(rv$sample_elements);
diff --git a/scripts/base/init-bare.bro b/scripts/base/init-bare.bro
index b39490ed7e..8febc9dae3 100644
--- a/scripts/base/init-bare.bro
+++ b/scripts/base/init-bare.bro
@@ -872,7 +872,7 @@ type geo_location: record {
longitude: double &optional; ##< Longitude.
} &log;
-## The directory containing MaxMind DB (*.mmdb) files to use for GeoIP support.
+## The directory containing MaxMind DB (.mmdb) files to use for GeoIP support.
const mmdb_dir: string = "" &redef;
## Computed entropy values. The record captures a number of measures that are
diff --git a/scripts/base/protocols/conn/main.bro b/scripts/base/protocols/conn/main.bro
index 0e9661dea3..e96b27873c 100644
--- a/scripts/base/protocols/conn/main.bro
+++ b/scripts/base/protocols/conn/main.bro
@@ -86,8 +86,9 @@ export {
## d packet with payload ("data")
## f packet with FIN bit set
## r packet with RST bit set
- ## c packet with a bad checksum
+ ## c packet with a bad checksum (applies to UDP too)
## t packet with retransmitted payload
+ ## w packet with a zero window advertisement
## i inconsistent packet (e.g. FIN+RST bits set)
## q multi-flag packet (SYN+FIN or SYN+RST bits set)
## ^ connection direction was flipped by Bro's heuristic
@@ -95,12 +96,15 @@ export {
##
## If the event comes from the originator, the letter is in
## upper-case; if it comes from the responder, it's in
- ## lower-case. The 'a', 'c', 'd', 'i', 'q', and 't' flags are
+ ## lower-case. The 'a', 'd', 'i' and 'q' flags are
## recorded a maximum of one time in either direction regardless
- ## of how many are actually seen. However, 'f', 'h', 'r', or
- ## 's' may be recorded multiple times for either direction and
- ## only compressed when sharing a sequence number with the
+ ## of how many are actually seen. 'f', 'h', 'r' and
+ ## 's' can be recorded multiple times for either direction
+ ## if the associated sequence number differs from the
## last-seen packet of the same flag type.
+ ## 'c', 't' and 'w' are recorded in a logarithmic fashion:
+ ## the second instance represents that the event was seen
+ ## (at least) 10 times; the third instance, 100 times; etc.
history: string &log &optional;
## Number of packets that the originator sent.
## Only set if :bro:id:`use_conn_size_analyzer` = T.
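The logarithmic 'c'/'t'/'w' history behavior documented above also raises an event at each threshold (10, 100, 1000, ...). A minimal handler sketch for one of them, using the signature given in NEWS; the message wording is illustrative:

```bro
event tcp_multiple_retransmissions(c: connection, is_orig: bool, threshold: count)
    {
    # Fires when one direction reaches 10, 100, 1000, ... retransmissions.
    print fmt("%s: %s side passed %d retransmissions",
              c$uid, is_orig ? "originator" : "responder", threshold);
    }
```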
diff --git a/scripts/base/protocols/dhcp/main.bro b/scripts/base/protocols/dhcp/main.bro
index 5e33d269f3..ae102e6085 100644
--- a/scripts/base/protocols/dhcp/main.bro
+++ b/scripts/base/protocols/dhcp/main.bro
@@ -178,7 +178,7 @@ event DHCP::aggregate_msgs(ts: time, id: conn_id, uid: string, is_orig: bool, ms
if ( uid !in log_info$uids )
add log_info$uids[uid];
- log_info$msg_types[|log_info$msg_types|] = DHCP::message_types[msg$m_type];
+ log_info$msg_types += DHCP::message_types[msg$m_type];
# Let's watch for messages in any DHCP message type
# and split them out based on client and server.
diff --git a/scripts/base/protocols/dns/main.bro b/scripts/base/protocols/dns/main.bro
index a8946e871e..127a06b5a0 100644
--- a/scripts/base/protocols/dns/main.bro
+++ b/scripts/base/protocols/dns/main.bro
@@ -324,11 +324,11 @@ hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)
{
if ( ! c$dns?$answers )
c$dns$answers = vector();
- c$dns$answers[|c$dns$answers|] = reply;
+ c$dns$answers += reply;
if ( ! c$dns?$TTLs )
c$dns$TTLs = vector();
- c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
+ c$dns$TTLs += ans$TTL;
}
}
}
diff --git a/scripts/base/protocols/http/entities.bro b/scripts/base/protocols/http/entities.bro
index bec89b536d..3670d7879a 100644
--- a/scripts/base/protocols/http/entities.bro
+++ b/scripts/base/protocols/http/entities.bro
@@ -87,14 +87,14 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
if ( ! c$http?$orig_fuids )
c$http$orig_fuids = string_vec(f$id);
else
- c$http$orig_fuids[|c$http$orig_fuids|] = f$id;
+ c$http$orig_fuids += f$id;
if ( f$info?$filename )
{
if ( ! c$http?$orig_filenames )
c$http$orig_filenames = string_vec(f$info$filename);
else
- c$http$orig_filenames[|c$http$orig_filenames|] = f$info$filename;
+ c$http$orig_filenames += f$info$filename;
}
}
@@ -103,14 +103,14 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori
if ( ! c$http?$resp_fuids )
c$http$resp_fuids = string_vec(f$id);
else
- c$http$resp_fuids[|c$http$resp_fuids|] = f$id;
+ c$http$resp_fuids += f$id;
if ( f$info?$filename )
{
if ( ! c$http?$resp_filenames )
c$http$resp_filenames = string_vec(f$info$filename);
else
- c$http$resp_filenames[|c$http$resp_filenames|] = f$info$filename;
+ c$http$resp_filenames += f$info$filename;
}
}
@@ -130,14 +130,14 @@ event file_sniff(f: fa_file, meta: fa_metadata) &priority=5
if ( ! f$http?$orig_mime_types )
f$http$orig_mime_types = string_vec(meta$mime_type);
else
- f$http$orig_mime_types[|f$http$orig_mime_types|] = meta$mime_type;
+ f$http$orig_mime_types += meta$mime_type;
}
else
{
if ( ! f$http?$resp_mime_types )
f$http$resp_mime_types = string_vec(meta$mime_type);
else
- f$http$resp_mime_types[|f$http$resp_mime_types|] = meta$mime_type;
+ f$http$resp_mime_types += meta$mime_type;
}
}
diff --git a/scripts/base/protocols/http/utils.bro b/scripts/base/protocols/http/utils.bro
index 88549f8404..67f13f2640 100644
--- a/scripts/base/protocols/http/utils.bro
+++ b/scripts/base/protocols/http/utils.bro
@@ -47,7 +47,7 @@ function extract_keys(data: string, kv_splitter: pattern): string_vec
{
local key_val = split_string1(parts[part_index], /=/);
if ( 0 in key_val )
- key_vec[|key_vec|] = key_val[0];
+ key_vec += key_val[0];
}
return key_vec;
}
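The script changes above (and throughout this patch) replace the old `v[|v|] = e` append idiom with the new `v += e` operator. Per the `AddToExpr::Eval` change later in this patch, the operator is a plain tail append: it assigns at index `|v|`, one past the last element. A minimal Python sketch of that semantics (analogy only; Bro vectors are not Python lists):

```python
def vector_append(v: list, e) -> list:
    """What Bro's new `v += e` does for vectors: assign at index |v|,
    i.e. one past the last element (AddToExpr::Eval calls
    vv->Assign(vv->Size(), v2))."""
    v[len(v):] = [e]  # same effect as v.append(e)
    return v

xs = []
vector_append(xs, "a")
vector_append(xs, "b")
assert xs == ["a", "b"]
```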
diff --git a/scripts/base/protocols/rfb/dpd.sig b/scripts/base/protocols/rfb/dpd.sig
index 40793ad590..c105070b24 100644
--- a/scripts/base/protocols/rfb/dpd.sig
+++ b/scripts/base/protocols/rfb/dpd.sig
@@ -1,6 +1,7 @@
signature dpd_rfb_server {
ip-proto == tcp
payload /^RFB/
+ tcp-state responder
requires-reverse-signature dpd_rfb_client
enable "rfb"
}
@@ -9,4 +10,4 @@ signature dpd_rfb_client {
ip-proto == tcp
payload /^RFB/
tcp-state originator
-}
\ No newline at end of file
+}
diff --git a/scripts/base/protocols/sip/main.bro b/scripts/base/protocols/sip/main.bro
index f629049928..f4dba22876 100644
--- a/scripts/base/protocols/sip/main.bro
+++ b/scripts/base/protocols/sip/main.bro
@@ -226,7 +226,7 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
c$sip$user_agent = value;
break;
case "VIA", "V":
- c$sip$request_path[|c$sip$request_path|] = split_string1(value, /;[ ]?branch/)[0];
+ c$sip$request_path += split_string1(value, /;[ ]?branch/)[0];
break;
}
@@ -256,7 +256,7 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
c$sip$response_to = value;
break;
case "VIA", "V":
- c$sip$response_path[|c$sip$response_path|] = split_string1(value, /;[ ]?branch/)[0];
+ c$sip$response_path += split_string1(value, /;[ ]?branch/)[0];
break;
}
diff --git a/scripts/base/protocols/smtp/files.bro b/scripts/base/protocols/smtp/files.bro
index 352c2025a3..a65b90b528 100644
--- a/scripts/base/protocols/smtp/files.bro
+++ b/scripts/base/protocols/smtp/files.bro
@@ -49,5 +49,5 @@ event bro_init() &priority=5
event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
{
if ( c?$smtp && !c$smtp$tls )
- c$smtp$fuids[|c$smtp$fuids|] = f$id;
+ c$smtp$fuids += f$id;
}
diff --git a/scripts/base/protocols/smtp/main.bro b/scripts/base/protocols/smtp/main.bro
index cd0e730d8e..18c75a93c0 100644
--- a/scripts/base/protocols/smtp/main.bro
+++ b/scripts/base/protocols/smtp/main.bro
@@ -295,7 +295,7 @@ event mime_one_header(c: connection, h: mime_header_rec) &priority=3
c$smtp$process_received_from = F;
}
if ( c$smtp$path[|c$smtp$path|-1] != ip )
- c$smtp$path[|c$smtp$path|] = ip;
+ c$smtp$path += ip;
}
event connection_state_remove(c: connection) &priority=-5
diff --git a/scripts/base/protocols/ssl/files.bro b/scripts/base/protocols/ssl/files.bro
index 8750645b36..d0d89561e3 100644
--- a/scripts/base/protocols/ssl/files.bro
+++ b/scripts/base/protocols/ssl/files.bro
@@ -121,13 +121,13 @@ event file_sniff(f: fa_file, meta: fa_metadata) &priority=5
if ( f$is_orig )
{
- c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = f$info;
- c$ssl$client_cert_chain_fuids[|c$ssl$client_cert_chain_fuids|] = f$id;
+ c$ssl$client_cert_chain += f$info;
+ c$ssl$client_cert_chain_fuids += f$id;
}
else
{
- c$ssl$cert_chain[|c$ssl$cert_chain|] = f$info;
- c$ssl$cert_chain_fuids[|c$ssl$cert_chain_fuids|] = f$id;
+ c$ssl$cert_chain += f$info;
+ c$ssl$cert_chain_fuids += f$id;
}
}
diff --git a/scripts/base/utils/addrs.bro b/scripts/base/utils/addrs.bro
index e8fd746e5e..9d165936ef 100644
--- a/scripts/base/utils/addrs.bro
+++ b/scripts/base/utils/addrs.bro
@@ -118,7 +118,7 @@ function extract_ip_addresses(input: string): string_vec
for ( i in parts )
{
if ( i % 2 == 1 && is_valid_ip(parts[i]) )
- output[|output|] = parts[i];
+ output += parts[i];
}
return output;
}
diff --git a/scripts/base/utils/email.bro b/scripts/base/utils/email.bro
index 08e8db8500..4feed351b4 100644
--- a/scripts/base/utils/email.bro
+++ b/scripts/base/utils/email.bro
@@ -10,7 +10,7 @@ function extract_email_addrs_vec(str: string): string_vec
local raw_addrs = find_all(str, /(^|[<,:[:blank:]])[^<,:[:blank:]@]+"@"[^>,;[:blank:]]+([>,;[:blank:]]|$)/);
for ( raw_addr in raw_addrs )
- addrs[|addrs|] = gsub(raw_addr, /[<>,:;[:blank:]]/, "");
+ addrs += gsub(raw_addr, /[<>,:;[:blank:]]/, "");
return addrs;
}
diff --git a/scripts/base/utils/exec.bro b/scripts/base/utils/exec.bro
index a926775bda..61488a1249 100644
--- a/scripts/base/utils/exec.bro
+++ b/scripts/base/utils/exec.bro
@@ -69,14 +69,14 @@ event Exec::line(description: Input::EventDescription, tpe: Input::Event, s: str
if ( ! result?$stderr )
result$stderr = vector(s);
else
- result$stderr[|result$stderr|] = s;
+ result$stderr += s;
}
else
{
if ( ! result?$stdout )
result$stdout = vector(s);
else
- result$stdout[|result$stdout|] = s;
+ result$stdout += s;
}
}
@@ -93,7 +93,7 @@ event Exec::file_line(description: Input::EventDescription, tpe: Input::Event, s
if ( track_file !in result$files )
result$files[track_file] = vector(s);
else
- result$files[track_file][|result$files[track_file]|] = s;
+ result$files[track_file] += s;
}
event Input::end_of_data(orig_name: string, source:string)
diff --git a/scripts/base/utils/json.bro b/scripts/base/utils/json.bro
index 05dcff1e27..45248e3ea2 100644
--- a/scripts/base/utils/json.bro
+++ b/scripts/base/utils/json.bro
@@ -66,7 +66,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
if ( field_desc?$value && (!only_loggable || field_desc$log) )
{
local onepart = cat("\"", field, "\": ", to_json(field_desc$value, only_loggable));
- rec_parts[|rec_parts|] = onepart;
+ rec_parts += onepart;
}
}
return cat("{", join_string_vec(rec_parts, ", "), "}");
@@ -79,7 +79,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
local sa: set[bool] = v;
for ( sv in sa )
{
- set_parts[|set_parts|] = to_json(sv, only_loggable);
+ set_parts += to_json(sv, only_loggable);
}
return cat("[", join_string_vec(set_parts, ", "), "]");
}
@@ -91,7 +91,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
{
local ts = to_json(ti);
local if_quotes = (ts[0] == "\"") ? "" : "\"";
- tab_parts[|tab_parts|] = cat(if_quotes, ts, if_quotes, ": ", to_json(ta[ti], only_loggable));
+ tab_parts += cat(if_quotes, ts, if_quotes, ": ", to_json(ta[ti], only_loggable));
}
return cat("{", join_string_vec(tab_parts, ", "), "}");
}
@@ -101,7 +101,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
local va: vector of any = v;
for ( vi in va )
{
- vec_parts[|vec_parts|] = to_json(va[vi], only_loggable);
+ vec_parts += to_json(va[vi], only_loggable);
}
return cat("[", join_string_vec(vec_parts, ", "), "]");
}
diff --git a/scripts/policy/frameworks/notice/extend-email/hostnames.bro b/scripts/policy/frameworks/notice/extend-email/hostnames.bro
index d8dac39e43..9ee58d3e0b 100644
--- a/scripts/policy/frameworks/notice/extend-email/hostnames.bro
+++ b/scripts/policy/frameworks/notice/extend-email/hostnames.bro
@@ -35,7 +35,7 @@ hook notice(n: Notice::Info) &priority=10
when ( local src_name = lookup_addr(n$src) )
{
output = string_cat("orig/src hostname: ", src_name, "\n");
- tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output;
+ tmp_notice_storage[uid]$email_body_sections += output;
delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-src"];
}
}
@@ -45,7 +45,7 @@ hook notice(n: Notice::Info) &priority=10
when ( local dst_name = lookup_addr(n$dst) )
{
output = string_cat("resp/dst hostname: ", dst_name, "\n");
- tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output;
+ tmp_notice_storage[uid]$email_body_sections += output;
delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-dst"];
}
}
diff --git a/scripts/policy/misc/load-balancing.bro b/scripts/policy/misc/load-balancing.bro
index 928d0e74c0..40bbe238ca 100644
--- a/scripts/policy/misc/load-balancing.bro
+++ b/scripts/policy/misc/load-balancing.bro
@@ -40,7 +40,7 @@ event bro_init() &priority=5
# Sort nodes list so that every node iterates over it in same order.
for ( name in Cluster::nodes )
- sorted_node_names[|sorted_node_names|] = name;
+ sorted_node_names += name;
sort(sorted_node_names, strcmp);
diff --git a/scripts/policy/protocols/conn/known-hosts.bro b/scripts/policy/protocols/conn/known-hosts.bro
index b920912f11..410ed9edfe 100644
--- a/scripts/policy/protocols/conn/known-hosts.bro
+++ b/scripts/policy/protocols/conn/known-hosts.bro
@@ -138,6 +138,9 @@ event Known::host_found(info: HostsInfo)
if ( use_host_store )
return;
+ if ( info$host in Known::hosts )
+ return;
+
Cluster::publish_hrw(Cluster::proxy_pool, info$host, known_host_add, info);
event known_host_add(info);
}
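The membership check added above avoids re-publishing a host that is already in the local `Known::hosts` cache. The publish itself uses rendezvous (HRW) hashing, which deterministically maps each key to one proxy so per-key state lands in one place. A minimal Python sketch of HRW selection, with hypothetical node names (this is the general technique, not Bro's actual implementation):

```python
import hashlib

def hrw_pick(nodes, key):
    """Pick the node with the highest hash weight for this key.

    Every caller hashing the same key over the same node set picks
    the same node, so state for a key is owned by one proxy."""
    def weight(node):
        h = hashlib.sha1((node + "|" + key).encode()).digest()
        return int.from_bytes(h[:8], "big")
    return max(nodes, key=weight)

proxies = ["proxy-1", "proxy-2", "proxy-3"]
owner = hrw_pick(proxies, "192.168.1.10")
# The mapping is stable while the pool is unchanged.
assert hrw_pick(proxies, "192.168.1.10") == owner
assert owner in proxies
```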
diff --git a/scripts/policy/protocols/conn/known-services.bro b/scripts/policy/protocols/conn/known-services.bro
index d737c9bad0..7a829214c1 100644
--- a/scripts/policy/protocols/conn/known-services.bro
+++ b/scripts/policy/protocols/conn/known-services.bro
@@ -159,6 +159,9 @@ event service_info_commit(info: ServicesInfo)
if ( Known::use_service_store )
return;
+ if ( [info$host, info$port_num] in Known::services )
+ return;
+
local key = cat(info$host, info$port_num);
Cluster::publish_hrw(Cluster::proxy_pool, key, known_service_add, info);
event known_service_add(info);
diff --git a/scripts/policy/protocols/dhcp/msg-orig.bro b/scripts/policy/protocols/dhcp/msg-orig.bro
index beb7c7d78b..d2350192b5 100644
--- a/scripts/policy/protocols/dhcp/msg-orig.bro
+++ b/scripts/policy/protocols/dhcp/msg-orig.bro
@@ -17,5 +17,5 @@ export {
event DHCP::aggregate_msgs(ts: time, id: conn_id, uid: string, is_orig: bool, msg: DHCP::Msg, options: DHCP::Options) &priority=3
{
- log_info$msg_orig[|log_info$msg_orig|] = is_orig ? id$orig_h : id$resp_h;
+ log_info$msg_orig += is_orig ? id$orig_h : id$resp_h;
}
diff --git a/scripts/policy/protocols/http/header-names.bro b/scripts/policy/protocols/http/header-names.bro
index ed3f9380a7..1b256226dd 100644
--- a/scripts/policy/protocols/http/header-names.bro
+++ b/scripts/policy/protocols/http/header-names.bro
@@ -35,7 +35,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
{
if ( ! c$http?$client_header_names )
c$http$client_header_names = vector();
- c$http$client_header_names[|c$http$client_header_names|] = name;
+ c$http$client_header_names += name;
}
}
else
@@ -44,7 +44,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
{
if ( ! c$http?$server_header_names )
c$http$server_header_names = vector();
- c$http$server_header_names[|c$http$server_header_names|] = name;
+ c$http$server_header_names += name;
}
}
}
diff --git a/scripts/policy/protocols/ssl/heartbleed.bro b/scripts/policy/protocols/ssl/heartbleed.bro
index 783961bef2..e1073b3ff0 100644
--- a/scripts/policy/protocols/ssl/heartbleed.bro
+++ b/scripts/policy/protocols/ssl/heartbleed.bro
@@ -50,33 +50,33 @@ event bro_init()
# Minimum length a heartbeat packet must have for different cipher suites.
# Note - tls 1.1f and 1.0 have different lengths :(
# This should be all cipher suites usually supported by vulnerable servers.
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_256_GCM_SHA384$/, $min_length=43];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_128_GCM_SHA256$/, $min_length=43];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA384$/, $min_length=96];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA256$/, $min_length=80];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA$/, $min_length=64];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA256$/, $min_length=80];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA$/, $min_length=64];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=48];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=64];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=48];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES_CBC_SHA$/, $min_length=48];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=48];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
- min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=48];
- min_lengths[|min_lengths|] = [$cipher=/_256_CBC_SHA$/, $min_length=48];
- min_lengths[|min_lengths|] = [$cipher=/_128_CBC_SHA$/, $min_length=48];
- min_lengths[|min_lengths|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=40];
- min_lengths[|min_lengths|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=48];
- min_lengths[|min_lengths|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=40];
- min_lengths[|min_lengths|] = [$cipher=/_DES_CBC_SHA$/, $min_length=40];
- min_lengths[|min_lengths|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=40];
- min_lengths[|min_lengths|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
- min_lengths[|min_lengths|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
- min_lengths[|min_lengths|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
- min_lengths[|min_lengths|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=40];
+ min_lengths_tls11 += [$cipher=/_AES_256_GCM_SHA384$/, $min_length=43];
+ min_lengths_tls11 += [$cipher=/_AES_128_GCM_SHA256$/, $min_length=43];
+ min_lengths_tls11 += [$cipher=/_256_CBC_SHA384$/, $min_length=96];
+ min_lengths_tls11 += [$cipher=/_256_CBC_SHA256$/, $min_length=80];
+ min_lengths_tls11 += [$cipher=/_256_CBC_SHA$/, $min_length=64];
+ min_lengths_tls11 += [$cipher=/_128_CBC_SHA256$/, $min_length=80];
+ min_lengths_tls11 += [$cipher=/_128_CBC_SHA$/, $min_length=64];
+ min_lengths_tls11 += [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=48];
+ min_lengths_tls11 += [$cipher=/_SEED_CBC_SHA$/, $min_length=64];
+ min_lengths_tls11 += [$cipher=/_IDEA_CBC_SHA$/, $min_length=48];
+ min_lengths_tls11 += [$cipher=/_DES_CBC_SHA$/, $min_length=48];
+ min_lengths_tls11 += [$cipher=/_DES40_CBC_SHA$/, $min_length=48];
+ min_lengths_tls11 += [$cipher=/_RC4_128_SHA$/, $min_length=39];
+ min_lengths_tls11 += [$cipher=/_RC4_128_MD5$/, $min_length=35];
+ min_lengths_tls11 += [$cipher=/_RC4_40_MD5$/, $min_length=35];
+ min_lengths_tls11 += [$cipher=/_RC2_CBC_40_MD5$/, $min_length=48];
+ min_lengths += [$cipher=/_256_CBC_SHA$/, $min_length=48];
+ min_lengths += [$cipher=/_128_CBC_SHA$/, $min_length=48];
+ min_lengths += [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=40];
+ min_lengths += [$cipher=/_SEED_CBC_SHA$/, $min_length=48];
+ min_lengths += [$cipher=/_IDEA_CBC_SHA$/, $min_length=40];
+ min_lengths += [$cipher=/_DES_CBC_SHA$/, $min_length=40];
+ min_lengths += [$cipher=/_DES40_CBC_SHA$/, $min_length=40];
+ min_lengths += [$cipher=/_RC4_128_SHA$/, $min_length=39];
+ min_lengths += [$cipher=/_RC4_128_MD5$/, $min_length=35];
+ min_lengths += [$cipher=/_RC4_40_MD5$/, $min_length=35];
+ min_lengths += [$cipher=/_RC2_CBC_40_MD5$/, $min_length=40];
}
event ssl_heartbeat(c: connection, is_orig: bool, length: count, heartbeat_type: count, payload_length: count, payload: string)
diff --git a/scripts/policy/protocols/ssl/known-certs.bro b/scripts/policy/protocols/ssl/known-certs.bro
index 25365eb4b4..e45a243dfd 100644
--- a/scripts/policy/protocols/ssl/known-certs.bro
+++ b/scripts/policy/protocols/ssl/known-certs.bro
@@ -127,6 +127,9 @@ event Known::cert_found(info: CertsInfo, hash: string)
if ( Known::use_cert_store )
return;
+ if ( [info$host, hash] in Known::certs )
+ return;
+
local key = cat(info$host, hash);
Cluster::publish_hrw(Cluster::proxy_pool, key, known_cert_add, info, hash);
event known_cert_add(info, hash);
@@ -140,6 +143,7 @@ event Cluster::node_up(name: string, id: string)
if ( Cluster::local_node_type() != Cluster::WORKER )
return;
+ # Drop local suppression cache on workers to force HRW key repartitioning.
Known::certs = table();
}
@@ -151,6 +155,7 @@ event Cluster::node_down(name: string, id: string)
if ( Cluster::local_node_type() != Cluster::WORKER )
return;
+ # Drop local suppression cache on workers to force HRW key repartitioning.
Known::certs = table();
}
diff --git a/scripts/policy/protocols/ssl/notary.bro b/scripts/policy/protocols/ssl/notary.bro
index 07f2cdebc4..4406dd9629 100644
--- a/scripts/policy/protocols/ssl/notary.bro
+++ b/scripts/policy/protocols/ssl/notary.bro
@@ -56,7 +56,7 @@ event ssl_established(c: connection) &priority=3
local waits_already = digest in waitlist;
if ( ! waits_already )
waitlist[digest] = vector();
- waitlist[digest][|waitlist[digest]|] = c$ssl;
+ waitlist[digest] += c$ssl;
if ( waits_already )
return;
diff --git a/scripts/policy/protocols/ssl/validate-certs.bro b/scripts/policy/protocols/ssl/validate-certs.bro
index 451388da24..3f0d18a1c5 100644
--- a/scripts/policy/protocols/ssl/validate-certs.bro
+++ b/scripts/policy/protocols/ssl/validate-certs.bro
@@ -50,11 +50,11 @@ export {
## and is thus disabled by default.
global ssl_store_valid_chain: bool = F &redef;
- ## Event from a worker to the manager that it has encountered a new
- ## valid intermediate.
+ ## Event from a manager to workers when encountering a new, valid
+ ## intermediate.
global intermediate_add: event(key: string, value: vector of opaque of x509);
- ## Event from the manager to the workers that a new intermediate chain
+ ## Event from workers to the manager when a new intermediate chain
## is to be added.
global new_intermediate: event(key: string, value: vector of opaque of x509);
}
diff --git a/scripts/policy/protocols/ssl/validate-sct.bro b/scripts/policy/protocols/ssl/validate-sct.bro
index f4d1646ae8..0ce11b63ff 100644
--- a/scripts/policy/protocols/ssl/validate-sct.bro
+++ b/scripts/policy/protocols/ssl/validate-sct.bro
@@ -76,7 +76,7 @@ event bro_init()
event ssl_extension_signed_certificate_timestamp(c: connection, is_orig: bool, version: count, logid: string, timestamp: count, signature_and_hashalgorithm: SSL::SignatureAndHashAlgorithm, signature: string) &priority=5
{
- c$ssl$ct_proofs[|c$ssl$ct_proofs|] = SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_and_hashalgorithm$SignatureAlgorithm, $hash_alg=signature_and_hashalgorithm$HashAlgorithm, $signature=signature, $source=SCT_TLS_EXT);
+ c$ssl$ct_proofs += SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_and_hashalgorithm$SignatureAlgorithm, $hash_alg=signature_and_hashalgorithm$HashAlgorithm, $signature=signature, $source=SCT_TLS_EXT);
}
event x509_ocsp_ext_signed_certificate_timestamp(f: fa_file, version: count, logid: string, timestamp: count, hash_algorithm: count, signature_algorithm: count, signature: string) &priority=5
@@ -103,7 +103,7 @@ event x509_ocsp_ext_signed_certificate_timestamp(f: fa_file, version: count, log
local c = f$conns[cid];
}
- c$ssl$ct_proofs[|c$ssl$ct_proofs|] = SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_algorithm, $hash_alg=hash_algorithm, $signature=signature, $source=src);
+ c$ssl$ct_proofs += SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_algorithm, $hash_alg=hash_algorithm, $signature=signature, $source=src);
}
# Priority = 19 will be handled after validation is done
diff --git a/src/3rdparty b/src/3rdparty
index b7c6be774b..02c5b1d6a3 160000
--- a/src/3rdparty
+++ b/src/3rdparty
@@ -1 +1 @@
-Subproject commit b7c6be774b922be1e15f53571201c3be2bc28b75
+Subproject commit 02c5b1d6a3990ca989377798bc7e89eacf4713aa
diff --git a/src/Conn.cc b/src/Conn.cc
index 1edecde0b9..447f730418 100644
--- a/src/Conn.cc
+++ b/src/Conn.cc
@@ -289,6 +289,50 @@ bool Connection::IsReuse(double t, const u_char* pkt)
return root_analyzer && root_analyzer->IsReuse(t, pkt);
}
+bool Connection::ScaledHistoryEntry(char code, uint32& counter,
+ uint32& scaling_threshold,
+ uint32 scaling_base)
+ {
+ if ( ++counter == scaling_threshold )
+ {
+ AddHistory(code);
+
+ auto new_threshold = scaling_threshold * scaling_base;
+
+ if ( new_threshold <= scaling_threshold )
+ // This can happen due to wrap-around. In that
+ // case, reset the counter but leave the threshold
+ // unchanged.
+ counter = 0;
+
+ else
+ scaling_threshold = new_threshold;
+
+ return true;
+ }
+
+ return false;
+ }
+
+void Connection::HistoryThresholdEvent(EventHandlerPtr e, bool is_orig,
+ uint32 threshold)
+ {
+ if ( ! e )
+ return;
+
+ if ( threshold == 1 )
+ // This will be far and away the most common case,
+ // and at this stage it's not a *multiple* instance.
+ return;
+
+ val_list* vl = new val_list;
+ vl->append(BuildConnVal());
+ vl->append(new Val(is_orig, TYPE_BOOL));
+ vl->append(new Val(threshold, TYPE_COUNT));
+
+ ConnectionEvent(e, 0, vl);
+ }
+
void Connection::DeleteTimer(double /* t */)
{
if ( is_active )
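The counting logic in the new `Connection::ScaledHistoryEntry` above records a history code only when the counter hits the current threshold, then multiplies the threshold by the scaling base, so codes land at occurrence 1, 10, 100, ... rather than on every event. A small Python sketch of that logic (the C++ overflow handling, which resets the counter instead of raising the threshold, is noted but not exercised here):

```python
def scaled_history_entry(counter, threshold, base=10):
    """Mirror of ScaledHistoryEntry's counting: returns
    (crossed, counter, threshold). `crossed` is True when the
    incremented counter reaches the threshold, at which point the
    threshold scales up by `base`."""
    counter += 1
    if counter == threshold:
        # The C++ code also guards against wrap-around here and
        # resets the counter if the multiply would overflow.
        return True, counter, threshold * base
    return False, counter, threshold

crossings = []
counter, threshold = 0, 1
for i in range(1, 1001):
    crossed, counter, threshold = scaled_history_entry(counter, threshold)
    if crossed:
        crossings.append(i)
# History codes are emitted at powers of the base.
assert crossings == [1, 10, 100, 1000]
```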
diff --git a/src/Conn.h b/src/Conn.h
index db0cb2fe55..07765ee474 100644
--- a/src/Conn.h
+++ b/src/Conn.h
@@ -240,6 +240,17 @@ public:
return true;
}
+ // Increments the passed counter and adds it as a history
+ // code if it has crossed the next scaling threshold. Scaling
+ // is done in terms of powers of the third argument.
+ // Returns true if the threshold was crossed, false otherwise.
+ bool ScaledHistoryEntry(char code, uint32& counter,
+ uint32& scaling_threshold,
+ uint32 scaling_base = 10);
+
+ void HistoryThresholdEvent(EventHandlerPtr e, bool is_orig,
+ uint32 threshold);
+
void AddHistory(char code) { history += code; }
void DeleteTimer(double t);
diff --git a/src/Expr.cc b/src/Expr.cc
index a86f86b9c4..07034db1a8 100644
--- a/src/Expr.cc
+++ b/src/Expr.cc
@@ -16,6 +16,8 @@
#include "Trigger.h"
#include "IPAddr.h"
+#include "broker/Data.h"
+
const char* expr_name(BroExprTag t)
{
static const char* expr_names[int(NUM_EXPRS)] = {
@@ -672,6 +674,9 @@ Val* BinaryExpr::Fold(Val* v1, Val* v2) const
if ( v1->Type()->Tag() == TYPE_PATTERN )
return PatternFold(v1, v2);
+ if ( v1->Type()->IsSet() )
+ return SetFold(v1, v2);
+
if ( it == TYPE_INTERNAL_ADDR )
return AddrFold(v1, v2);
@@ -858,6 +863,7 @@ Val* BinaryExpr::StringFold(Val* v1, Val* v2) const
return new Val(result, TYPE_BOOL);
}
+
Val* BinaryExpr::PatternFold(Val* v1, Val* v2) const
{
const RE_Matcher* re1 = v1->AsPattern();
@@ -873,6 +879,61 @@ Val* BinaryExpr::PatternFold(Val* v1, Val* v2) const
return new PatternVal(res);
}
+Val* BinaryExpr::SetFold(Val* v1, Val* v2) const
+ {
+ TableVal* tv1 = v1->AsTableVal();
+ TableVal* tv2 = v2->AsTableVal();
+ TableVal* result;
+ bool res = false;
+
+ switch ( tag ) {
+ case EXPR_AND:
+ return tv1->Intersect(tv2);
+
+ case EXPR_OR:
+ result = v1->Clone()->AsTableVal();
+
+ if ( ! tv2->AddTo(result, false, false) )
+ reporter->InternalError("set union failed to type check");
+ return result;
+
+ case EXPR_SUB:
+ result = v1->Clone()->AsTableVal();
+
+ if ( ! tv2->RemoveFrom(result) )
+ reporter->InternalError("set difference failed to type check");
+ return result;
+
+ case EXPR_EQ:
+ res = tv1->EqualTo(tv2);
+ break;
+
+ case EXPR_NE:
+ res = ! tv1->EqualTo(tv2);
+ break;
+
+ case EXPR_LT:
+ res = tv1->IsSubsetOf(tv2) && tv1->Size() < tv2->Size();
+ break;
+
+ case EXPR_LE:
+ res = tv1->IsSubsetOf(tv2);
+ break;
+
+ case EXPR_GE:
+ case EXPR_GT:
+	// These shouldn't happen due to canonicalization.
+ reporter->InternalError("confusion over canonicalization in set comparison");
+ break;
+
+ default:
+ BadTag("BinaryExpr::SetFold", expr_name(tag));
+ return 0;
+ }
+
+ return new Val(res, TYPE_BOOL);
+ }
+
Val* BinaryExpr::AddrFold(Val* v1, Val* v2) const
{
IPAddr a1 = v1->AsAddr();
@@ -1390,7 +1451,8 @@ bool AddExpr::DoUnserialize(UnserialInfo* info)
}
AddToExpr::AddToExpr(Expr* arg_op1, Expr* arg_op2)
-: BinaryExpr(EXPR_ADD_TO, arg_op1->MakeLvalue(), arg_op2)
+: BinaryExpr(EXPR_ADD_TO,
+ is_vector(arg_op1) ? arg_op1 : arg_op1->MakeLvalue(), arg_op2)
{
if ( IsError() )
return;
@@ -1404,6 +1466,32 @@ AddToExpr::AddToExpr(Expr* arg_op1, Expr* arg_op2)
SetType(base_type(bt1));
else if ( BothInterval(bt1, bt2) )
SetType(base_type(bt1));
+
+ else if ( IsVector(bt1) )
+ {
+ bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+
+ if ( IsArithmetic(bt1) )
+ {
+ if ( IsArithmetic(bt2) )
+ {
+ if ( bt2 != bt1 )
+ op2 = new ArithCoerceExpr(op2, bt1);
+
+ SetType(op1->Type()->Ref());
+ }
+
+ else
+ ExprError("appending non-arithmetic to arithmetic vector");
+ }
+
+ else if ( bt1 != bt2 )
+ ExprError("incompatible vector append");
+
+ else
+ SetType(op1->Type()->Ref());
+ }
+
else
ExprError("requires two arithmetic or two string operands");
}
@@ -1421,6 +1509,14 @@ Val* AddToExpr::Eval(Frame* f) const
return 0;
}
+ if ( is_vector(v1) )
+ {
+ VectorVal* vv = v1->AsVectorVal();
+ if ( ! vv->Assign(vv->Size(), v2) )
+ reporter->Error("type-checking failed in vector append");
+ return v1;
+ }
+
Val* result = Fold(v1, v2);
Unref(v1);
Unref(v2);
@@ -1454,24 +1550,39 @@ SubExpr::SubExpr(Expr* arg_op1, Expr* arg_op2)
if ( IsError() )
return;
- TypeTag bt1 = op1->Type()->Tag();
- if ( IsVector(bt1) )
- bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+ const BroType* t1 = op1->Type();
+ const BroType* t2 = op2->Type();
- TypeTag bt2 = op2->Type()->Tag();
+ TypeTag bt1 = t1->Tag();
+ if ( IsVector(bt1) )
+ bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+ TypeTag bt2 = t2->Tag();
if ( IsVector(bt2) )
- bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+ bt2 = t2->AsVectorType()->YieldType()->Tag();
BroType* base_result_type = 0;
if ( bt1 == TYPE_TIME && bt2 == TYPE_INTERVAL )
base_result_type = base_type(bt1);
+
else if ( bt1 == TYPE_TIME && bt2 == TYPE_TIME )
SetType(base_type(TYPE_INTERVAL));
+
else if ( bt1 == TYPE_INTERVAL && bt2 == TYPE_INTERVAL )
base_result_type = base_type(bt1);
+
+ else if ( t1->IsSet() && t2->IsSet() )
+ {
+ if ( same_type(t1, t2) )
+ SetType(op1->Type()->Ref());
+ else
+ ExprError("incompatible \"set\" operands");
+ }
+
else if ( BothArithmetic(bt1, bt2) )
PromoteType(max_type(bt1, bt2), is_vector(op1) || is_vector(op2));
+
else
ExprError("requires arithmetic operands");
@@ -1888,13 +1999,16 @@ BitExpr::BitExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
if ( IsError() )
return;
- TypeTag bt1 = op1->Type()->Tag();
- if ( IsVector(bt1) )
- bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+ const BroType* t1 = op1->Type();
+ const BroType* t2 = op2->Type();
- TypeTag bt2 = op2->Type()->Tag();
+ TypeTag bt1 = t1->Tag();
+ if ( IsVector(bt1) )
+ bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+ TypeTag bt2 = t2->Tag();
if ( IsVector(bt2) )
- bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+ bt2 = t2->AsVectorType()->YieldType()->Tag();
if ( (bt1 == TYPE_COUNT || bt1 == TYPE_COUNTER) &&
(bt2 == TYPE_COUNT || bt2 == TYPE_COUNTER) )
@@ -1917,8 +2031,16 @@ BitExpr::BitExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
SetType(base_type(TYPE_PATTERN));
}
+ else if ( t1->IsSet() && t2->IsSet() )
+ {
+ if ( same_type(t1, t2) )
+ SetType(op1->Type()->Ref());
+ else
+ ExprError("incompatible \"set\" operands");
+ }
+
else
- ExprError("requires \"count\" operands");
+ ExprError("requires \"count\" or compatible \"set\" operands");
}
IMPLEMENT_SERIAL(BitExpr, SER_BIT_EXPR);
@@ -1943,13 +2065,16 @@ EqExpr::EqExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
Canonicize();
- TypeTag bt1 = op1->Type()->Tag();
- if ( IsVector(bt1) )
- bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+ const BroType* t1 = op1->Type();
+ const BroType* t2 = op2->Type();
- TypeTag bt2 = op2->Type()->Tag();
+ TypeTag bt1 = t1->Tag();
+ if ( IsVector(bt1) )
+ bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+ TypeTag bt2 = t2->Tag();
if ( IsVector(bt2) )
- bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+ bt2 = t2->AsVectorType()->YieldType()->Tag();
if ( is_vector(op1) || is_vector(op2) )
SetType(new VectorType(base_type(TYPE_BOOL)));
@@ -1979,10 +2104,20 @@ EqExpr::EqExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
break;
case TYPE_ENUM:
- if ( ! same_type(op1->Type(), op2->Type()) )
+ if ( ! same_type(t1, t2) )
ExprError("illegal enum comparison");
break;
+ case TYPE_TABLE:
+ if ( t1->IsSet() && t2->IsSet() )
+ {
+ if ( ! same_type(t1, t2) )
+ ExprError("incompatible sets in comparison");
+ break;
+ }
+
+ // FALL THROUGH
+
default:
ExprError("illegal comparison");
}
@@ -2045,13 +2180,16 @@ RelExpr::RelExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
Canonicize();
- TypeTag bt1 = op1->Type()->Tag();
- if ( IsVector(bt1) )
- bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+ const BroType* t1 = op1->Type();
+ const BroType* t2 = op2->Type();
- TypeTag bt2 = op2->Type()->Tag();
+ TypeTag bt1 = t1->Tag();
+ if ( IsVector(bt1) )
+ bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+ TypeTag bt2 = t2->Tag();
if ( IsVector(bt2) )
- bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+ bt2 = t2->AsVectorType()->YieldType()->Tag();
if ( is_vector(op1) || is_vector(op2) )
SetType(new VectorType(base_type(TYPE_BOOL)));
@@ -2061,6 +2199,12 @@ RelExpr::RelExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
if ( BothArithmetic(bt1, bt2) )
PromoteOps(max_type(bt1, bt2));
+ else if ( t1->IsSet() && t2->IsSet() )
+ {
+ if ( ! same_type(t1, t2) )
+ ExprError("incompatible sets in comparison");
+ }
+
else if ( bt1 != bt2 )
ExprError("operands must be of the same type");
@@ -5361,11 +5505,16 @@ Val* CastExpr::Eval(Frame* f) const
}
ODesc d;
- d.Add("cannot cast value of type '");
+ d.Add("invalid cast of value with type '");
v->Type()->Describe(&d);
d.Add("' to type '");
Type()->Describe(&d);
d.Add("'");
+
+ if ( same_type(v->Type(), bro_broker::DataVal::ScriptDataType()) &&
+ ! v->AsRecordVal()->Lookup(0) )
+ d.Add(" (nil $data field)");
+
Unref(v);
reporter->ExprRuntimeError(this, "%s", d.Description());
return 0; // not reached.
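The new `BinaryExpr::SetFold` above gives Bro sets intersection, union, difference, equality, and subset comparisons. Python's built-in set operators happen to have the same semantics, so they make a convenient reference for what each `EXPR_*` case computes (analogy only; Bro sets are `TableVal`s internally, and `EXPR_LT` is a *proper* subset test, since it also requires a strictly smaller size):

```python
# Semantics of the new set operators, shown with Python sets.
a = {1, 2, 3}
b = {2, 3, 4}

assert a & b == {2, 3}        # EXPR_AND: intersection
assert a | b == {1, 2, 3, 4}  # EXPR_OR:  union
assert a - b == {1}           # EXPR_SUB: difference
assert (a == b) is False      # EXPR_EQ
assert (a != b) is True       # EXPR_NE
assert {2, 3} < a             # EXPR_LT: proper subset
assert {1, 2, 3} <= a         # EXPR_LE: subset (not necessarily proper)
```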
diff --git a/src/Expr.h b/src/Expr.h
index c21547d91c..8ac547c534 100644
--- a/src/Expr.h
+++ b/src/Expr.h
@@ -332,6 +332,9 @@ protected:
// Same for when the constants are patterns.
virtual Val* PatternFold(Val* v1, Val* v2) const;
+ // Same for when the constants are sets.
+ virtual Val* SetFold(Val* v1, Val* v2) const;
+
// Same for when the constants are addresses or subnets.
virtual Val* AddrFold(Val* v1, Val* v2) const;
virtual Val* SubNetFold(Val* v1, Val* v2) const;
diff --git a/src/ID.cc b/src/ID.cc
index 4216422225..a68abb6264 100644
--- a/src/ID.cc
+++ b/src/ID.cc
@@ -294,6 +294,22 @@ void ID::RemoveAttr(attr_tag a)
}
}
+void ID::SetOption()
+ {
+ if ( is_option )
+ return;
+
+ is_option = true;
+
+ // option implied redefinable
+ if ( ! IsRedefinable() )
+ {
+ attr_list* attr = new attr_list;
+ attr->append(new Attr(ATTR_REDEF));
+ AddAttrs(new Attributes(attr, Type(), false));
+ }
+ }
+
void ID::EvalFunc(Expr* ef, Expr* ev)
{
Expr* arg1 = new ConstExpr(val->Ref());
diff --git a/src/ID.h b/src/ID.h
index 442a13dfcc..18754584df 100644
--- a/src/ID.h
+++ b/src/ID.h
@@ -60,7 +60,7 @@ public:
void SetConst() { is_const = true; }
bool IsConst() const { return is_const; }
- void SetOption() { is_option = true; }
+ void SetOption();
bool IsOption() const { return is_option; }
void SetEnumConst() { is_enum_const = true; }
diff --git a/src/Sessions.cc b/src/Sessions.cc
index 9dc569daa7..876988361d 100644
--- a/src/Sessions.cc
+++ b/src/Sessions.cc
@@ -532,7 +532,7 @@ void NetSessions::DoNextPacket(double t, const Packet* pkt, const IP_Hdr* ip_hdr
// If a carried packet has ethernet, this will help skip it.
unsigned int eth_len = 0;
unsigned int gre_len = gre_header_len(flags_ver);
- unsigned int ppp_len = gre_version == 1 ? 1 : 0;
+ unsigned int ppp_len = gre_version == 1 ? 4 : 0;
if ( gre_version != 0 && gre_version != 1 )
{
@@ -598,7 +598,7 @@ void NetSessions::DoNextPacket(double t, const Packet* pkt, const IP_Hdr* ip_hdr
if ( gre_version == 1 )
{
- int ppp_proto = *((uint8*)(data + gre_len));
+ uint16 ppp_proto = ntohs(*((uint16*)(data + gre_len + 2)));
if ( ppp_proto != 0x0021 && ppp_proto != 0x0057 )
{
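
The PPTP fix above can be summarized as: the protocol field of a GRE-v1-carried PPP frame sits two bytes in, after the address and control bytes, and the whole PPP header is four bytes. A minimal standalone sketch (the `ppp_protocol()` helper is illustrative, not Bro API, and assumes uncompressed address/control fields):

```cpp
#include <cstdint>

// A PPTP-carried PPP frame starts with address (0xff) and control
// (0x03) bytes, followed by a 2-byte protocol field in network byte
// order -- hence the offset of 2 and the header length of 4 in the
// patch. The old code read a single byte at offset 0, which lands on
// the address byte, never on the protocol field.
uint16_t ppp_protocol(const uint8_t* payload)
	{
	return static_cast<uint16_t>((payload[2] << 8) | payload[3]);
	}
```

A protocol value of 0x0021 indicates IPv4 and 0x0057 indicates IPv6, matching the two values `DoNextPacket()` accepts.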
diff --git a/src/Val.cc b/src/Val.cc
index 36269ad9c9..7879d282b2 100644
--- a/src/Val.cc
+++ b/src/Val.cc
@@ -1706,9 +1706,11 @@ int TableVal::RemoveFrom(Val* val) const
HashKey* k;
while ( tbl->NextEntry(k, c) )
{
- Val* index = RecoverIndex(k);
-
- Unref(index);
+ // Not sure that this is 100% sound, since the HashKey
+ // comes from one table but is being used in another.
+ // OTOH, they are both the same type, so as long as
+ // we don't have hash keys that are keyed per dictionary,
+ // it should work ...
Unref(t->Delete(k));
delete k;
}
@@ -1716,6 +1718,91 @@ int TableVal::RemoveFrom(Val* val) const
return 1;
}
+TableVal* TableVal::Intersect(const TableVal* tv) const
+ {
+ TableVal* result = new TableVal(table_type);
+
+ const PDict(TableEntryVal)* t0 = AsTable();
+ const PDict(TableEntryVal)* t1 = tv->AsTable();
+ PDict(TableEntryVal)* t2 = result->AsNonConstTable();
+
+ // Figure out which is smaller; assign it to t1.
+ if ( t1->Length() > t0->Length() )
+ { // Swap.
+ const PDict(TableEntryVal)* tmp = t1;
+ t1 = t0;
+ t0 = tmp;
+ }
+
+ IterCookie* c = t1->InitForIteration();
+ HashKey* k;
+ while ( t1->NextEntry(k, c) )
+ {
+ // Here we leverage the same assumption about consistent
+ // hashes as in TableVal::RemoveFrom above.
+ if ( t0->Lookup(k) )
+ t2->Insert(k, new TableEntryVal(0));
+
+ delete k;
+ }
+
+ return result;
+ }
+
+bool TableVal::EqualTo(const TableVal* tv) const
+ {
+ const PDict(TableEntryVal)* t0 = AsTable();
+ const PDict(TableEntryVal)* t1 = tv->AsTable();
+
+ if ( t0->Length() != t1->Length() )
+ return false;
+
+ IterCookie* c = t0->InitForIteration();
+ HashKey* k;
+ while ( t0->NextEntry(k, c) )
+ {
+ // Here we leverage the same assumption about consistent
+ // hashes as in TableVal::RemoveFrom above.
+ if ( ! t1->Lookup(k) )
+ {
+ delete k;
+ t0->StopIteration(c);
+ return false;
+ }
+
+ delete k;
+ }
+
+ return true;
+ }
+
+bool TableVal::IsSubsetOf(const TableVal* tv) const
+ {
+ const PDict(TableEntryVal)* t0 = AsTable();
+ const PDict(TableEntryVal)* t1 = tv->AsTable();
+
+ if ( t0->Length() > t1->Length() )
+ return false;
+
+ IterCookie* c = t0->InitForIteration();
+ HashKey* k;
+ while ( t0->NextEntry(k, c) )
+ {
+ // Here we leverage the same assumption about consistent
+ // hashes as in TableVal::RemoveFrom above.
+ if ( ! t1->Lookup(k) )
+ {
+ delete k;
+ t0->StopIteration(c);
+ return false;
+ }
+
+ delete k;
+ }
+
+ return true;
+ }
+
int TableVal::ExpandAndInit(Val* index, Val* new_val)
{
BroType* index_type = index->Type();
@@ -3545,6 +3632,10 @@ Val* cast_value_to_type(Val* v, BroType* t)
if ( same_type(v->Type(), bro_broker::DataVal::ScriptDataType()) )
{
auto dv = v->AsRecordVal()->Lookup(0);
+
+ if ( ! dv )
+ return 0;
+
return static_cast<bro_broker::DataVal*>(dv)->castTo(t);
}
@@ -3567,6 +3658,10 @@ bool can_cast_value_to_type(const Val* v, BroType* t)
if ( same_type(v->Type(), bro_broker::DataVal::ScriptDataType()) )
{
auto dv = v->AsRecordVal()->Lookup(0);
+
+ if ( ! dv )
+ return false;
+
return static_cast<const bro_broker::DataVal*>(dv)->canCastTo(t);
}
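
The strategy behind the new `TableVal::Intersect()` and `TableVal::IsSubsetOf()` above can be sketched independently of Bro's `PDict`/`HashKey` machinery. A minimal illustration with `std::unordered_set<std::string>` standing in for the dictionary and its hash keys (names here are illustrative, not Bro API):

```cpp
#include <string>
#include <unordered_set>
#include <utility>

using Set = std::unordered_set<std::string>;

// Iterate the smaller operand and probe the larger one by key, so the
// cost is bounded by the smaller set -- the same swap trick as in
// TableVal::Intersect().
Set intersect(const Set& a, const Set& b)
	{
	const Set* small = &a;
	const Set* large = &b;

	if ( small->size() > large->size() )
		std::swap(small, large);

	Set result;

	for ( const auto& k : *small )
		if ( large->count(k) )
			result.insert(k);

	return result;
	}

bool is_subset_of(const Set& a, const Set& b)
	{
	// Quick length-based reject, as in TableVal::IsSubsetOf().
	if ( a.size() > b.size() )
		return false;

	for ( const auto& k : a )
		if ( ! b.count(k) )
			return false;

	return true;
	}
```

As the comments in the patch note, this relies on hash keys being computed consistently across tables of the same type; the sketch sidesteps that concern because `std::unordered_set` hashes values directly.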
diff --git a/src/Val.h b/src/Val.h
index 771ed40dd1..bb18dceb4f 100644
--- a/src/Val.h
+++ b/src/Val.h
@@ -809,6 +809,22 @@ public:
// Returns true if the addition typechecked, false if not.
int RemoveFrom(Val* v) const override;
+ // Returns a new table that is the intersection of this
+ // table and the given table. Intersection is just done
+ // on index, not on yield value, so this really only makes
+ // sense for sets.
+ TableVal* Intersect(const TableVal* v) const;
+
+ // Returns true if this set contains the same members as the
+ // given set. Note that comparisons are done using hash keys,
+ // so errors can arise for compound sets such as sets-of-sets.
+ // See https://bro-tracker.atlassian.net/browse/BIT-1949.
+ bool EqualTo(const TableVal* v) const;
+
+ // Returns true if this set is a subset (not necessarily proper)
+ // of the given set.
+ bool IsSubsetOf(const TableVal* v) const;
+
// Expands any lists in the index into multiple initializations.
// Returns true if the initializations typecheck, false if not.
int ExpandAndInit(Val* index, Val* new_val);
@@ -1015,8 +1031,6 @@ public:
// Returns false if the type of the argument was wrong.
// The vector will automatically grow to accommodate the index.
- // 'assigner" is the expression that is doing the assignment;
- // it's just used for pinpointing errors.
//
// Note: does NOT Ref() the element! Remember to do so unless
// the element was just created and thus has refcount 1.
diff --git a/src/analyzer/protocol/tcp/TCP.cc b/src/analyzer/protocol/tcp/TCP.cc
index 791cf9f779..08dd56190c 100644
--- a/src/analyzer/protocol/tcp/TCP.cc
+++ b/src/analyzer/protocol/tcp/TCP.cc
@@ -459,7 +459,7 @@ bool TCP_Analyzer::ValidateChecksum(const struct tcphdr* tp,
! endpoint->ValidChecksum(tp, len) )
{
Weird("bad_TCP_checksum");
- endpoint->CheckHistory(HIST_CORRUPT_PKT, 'C');
+ endpoint->ChecksumError();
return false;
}
else
@@ -579,16 +579,38 @@ static void init_window(TCP_Endpoint* endpoint, TCP_Endpoint* peer,
static void update_window(TCP_Endpoint* endpoint, unsigned int window,
uint32 base_seq, uint32 ack_seq, TCP_Flags flags)
{
- // Note, the offered window on an initial SYN is unscaled, even
- // if the SYN includes scaling, so we need to do the following
- // test *before* updating the scaling information below. (Hmmm,
- // how does this work for windows on SYN/ACKs? ###)
+ // Note, applying scaling here would be incorrect for an initial SYN,
+ // whose window value is always unscaled. However, we don't
+ // check the window's value for recision in that case anyway, so
+ // no-harm-no-foul.
int scale = endpoint->window_scale;
window = window << scale;
+ // Zero windows are boring if either (1) they come with a RST packet
+ // or after a RST packet, or (2) they come after the peer has sent
+ // a FIN (because there's no relevant window at that point anyway).
+ // (They're also boring if they come after the peer has sent a RST,
+ // but *nothing* should be sent in response to a RST, so we ignore
+ // that case.)
+ //
+ // However, they *are* potentially interesting if sent by an
+ // endpoint that's already sent a FIN, since that FIN meant "I'm
+ // not going to send any more", but doesn't mean "I won't receive
+ // any more".
+ if ( window == 0 && ! flags.RST() &&
+ endpoint->peer->state != TCP_ENDPOINT_CLOSED &&
+ endpoint->state != TCP_ENDPOINT_RESET )
+ endpoint->ZeroWindow();
+
// Don't analyze window values off of SYNs, they're sometimes
- // immediately rescinded.
- if ( ! flags.SYN() )
+ // immediately rescinded. Also don't do so for FINs or RSTs,
+ // or if the connection has already been partially closed, since
+ // such recisions occur frequently in practice, probably as the
+ // receiver loses buffer memory due to its process going away.
+
+ if ( ! flags.SYN() && ! flags.FIN() && ! flags.RST() &&
+ endpoint->state != TCP_ENDPOINT_CLOSED &&
+ endpoint->state != TCP_ENDPOINT_RESET )
{
// ### Decide whether to accept new window based on Active
// Mapping policy.
@@ -601,21 +623,12 @@ static void update_window(TCP_Endpoint* endpoint, unsigned int window,
if ( advance < 0 )
{
- // A window recision. We don't report these
- // for FINs or RSTs, or if the connection
- // has already been partially closed, since
- // such recisions occur frequently in practice,
- // probably as the receiver loses buffer memory
- // due to its process going away.
- //
- // We also, for window scaling, allow a bit
- // of slop ###. This is because sometimes
- // there will be an apparent recision due
- // to the granularity of the scaling.
- if ( ! flags.FIN() && ! flags.RST() &&
- endpoint->state != TCP_ENDPOINT_CLOSED &&
- endpoint->state != TCP_ENDPOINT_RESET &&
- (-advance) >= (1 << scale) )
+ // An apparent window recision. Allow a
+ // bit of slop for window scaling. This is
+ // because sometimes there will be an
+ // apparent recision due to the granularity
+ // of the scaling.
+ if ( (-advance) >= (1 << scale) )
endpoint->Conn()->Weird("window_recision");
}
@@ -1206,7 +1219,7 @@ static int32 update_last_seq(TCP_Endpoint* endpoint, uint32 last_seq,
endpoint->UpdateLastSeq(last_seq);
else if ( delta_last < 0 && len > 0 )
- endpoint->CheckHistory(HIST_RXMIT, 'T');
+ endpoint->DidRxmit();
return delta_last;
}
diff --git a/src/analyzer/protocol/tcp/TCP_Endpoint.cc b/src/analyzer/protocol/tcp/TCP_Endpoint.cc
index c3175ec9f5..fb736d80f1 100644
--- a/src/analyzer/protocol/tcp/TCP_Endpoint.cc
+++ b/src/analyzer/protocol/tcp/TCP_Endpoint.cc
@@ -32,6 +32,9 @@ TCP_Endpoint::TCP_Endpoint(TCP_Analyzer* arg_analyzer, int arg_is_orig)
tcp_analyzer = arg_analyzer;
is_orig = arg_is_orig;
+ chk_cnt = rxmt_cnt = win0_cnt = 0;
+ chk_thresh = rxmt_thresh = win0_thresh = 1;
+
hist_last_SYN = hist_last_FIN = hist_last_RST = 0;
src_addr = is_orig ? Conn()->RespAddr() : Conn()->OrigAddr();
@@ -284,3 +287,29 @@ void TCP_Endpoint::AddHistory(char code)
Conn()->AddHistory(code);
}
+void TCP_Endpoint::ChecksumError()
+ {
+ uint32 t = chk_thresh;
+ if ( Conn()->ScaledHistoryEntry(IsOrig() ? 'C' : 'c',
+ chk_cnt, chk_thresh) )
+ Conn()->HistoryThresholdEvent(tcp_multiple_checksum_errors,
+ IsOrig(), t);
+ }
+
+void TCP_Endpoint::DidRxmit()
+ {
+ uint32 t = rxmt_thresh;
+ if ( Conn()->ScaledHistoryEntry(IsOrig() ? 'T' : 't',
+ rxmt_cnt, rxmt_thresh) )
+ Conn()->HistoryThresholdEvent(tcp_multiple_retransmissions,
+ IsOrig(), t);
+ }
+
+void TCP_Endpoint::ZeroWindow()
+ {
+ uint32 t = win0_thresh;
+ if ( Conn()->ScaledHistoryEntry(IsOrig() ? 'W' : 'w',
+ win0_cnt, win0_thresh) )
+ Conn()->HistoryThresholdEvent(tcp_multiple_zero_windows,
+ IsOrig(), t);
+ }
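
The three new helpers above share one accounting pattern: bump a counter on every occurrence, but only record a history character and raise an event when the counter reaches a threshold that then grows geometrically. A standalone sketch of that logic (assuming a scaling base of 10; the real `Connection::ScaledHistoryEntry()` may differ in detail):

```cpp
#include <cstdint>

// Returns true when the occurrence count crosses the current threshold,
// then multiplies the threshold by the scaling base. With a base of 10,
// the 1st, 10th, 100th, ... occurrence each triggers one report, so a
// noisy flow produces O(log n) history entries rather than O(n).
bool scaled_history_entry(uint32_t& counter, uint32_t& threshold,
                          uint32_t scaling_base = 10)
	{
	if ( ++counter == threshold )
		{
		threshold *= scaling_base;
		return true;  // caller records the history char / raises the event
		}

	return false;
	}
```

This is why `ChecksumError()`, `DidRxmit()`, and `ZeroWindow()` each save the threshold into `t` before the call: the event should report the threshold that was crossed, not the already-scaled successor.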
diff --git a/src/analyzer/protocol/tcp/TCP_Endpoint.h b/src/analyzer/protocol/tcp/TCP_Endpoint.h
index 2e8a8a041e..4c38aadd93 100644
--- a/src/analyzer/protocol/tcp/TCP_Endpoint.h
+++ b/src/analyzer/protocol/tcp/TCP_Endpoint.h
@@ -166,6 +166,15 @@ public:
int ValidChecksum(const struct tcphdr* tp, int len) const;
+ // Called to inform endpoint that it has generated a checksum error.
+ void ChecksumError();
+
+ // Called to inform endpoint that it has generated a retransmission.
+ void DidRxmit();
+
+ // Called to inform endpoint that it has offered a zero window.
+ void ZeroWindow();
+
// Returns true if the data was used (and hence should be recorded
// in the save file), false otherwise.
int DataSent(double t, uint64 seq, int len, int caplen, const u_char* data,
@@ -188,6 +197,7 @@ public:
#define HIST_MULTI_FLAG_PKT 0x40
#define HIST_CORRUPT_PKT 0x80
#define HIST_RXMIT 0x100
+#define HIST_WIN0 0x200
int CheckHistory(uint32 mask, char code);
void AddHistory(char code);
@@ -202,7 +212,7 @@ public:
double start_time, last_time;
IPAddr src_addr; // the other endpoint
IPAddr dst_addr; // this endpoint
- uint32 window; // current congestion window (*scaled*, not pre-scaling)
+ uint32 window; // current advertised window (*scaled*, not pre-scaling)
int window_scale; // from the TCP option
uint32 window_ack_seq; // at which ack_seq number did we record 'window'
uint32 window_seq; // at which sending sequence number did we record 'window'
@@ -225,6 +235,11 @@ protected:
uint32 last_seq, ack_seq; // in host order
uint32 seq_wraps, ack_wraps; // Number of times 32-bit TCP sequence space
// has wrapped around (overflowed).
+
+ // Performance history accounting.
+ uint32 chk_cnt, chk_thresh;
+ uint32 rxmt_cnt, rxmt_thresh;
+ uint32 win0_cnt, win0_thresh;
};
#define ENDIAN_UNKNOWN 0
diff --git a/src/analyzer/protocol/tcp/events.bif b/src/analyzer/protocol/tcp/events.bif
index 5cf2710804..d93ebe4819 100644
--- a/src/analyzer/protocol/tcp/events.bif
+++ b/src/analyzer/protocol/tcp/events.bif
@@ -290,6 +290,43 @@ event tcp_contents%(c: connection, is_orig: bool, seq: count, contents: string%)
## TODO.
event tcp_rexmit%(c: connection, is_orig: bool, seq: count, len: count, data_in_flight: count, window: count%);
+## Generated if a TCP flow crosses a checksum-error threshold, per
+## 'C'/'c' history reporting.
+##
+## c: The connection record for the TCP connection.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: The threshold that was crossed.
+##
+## .. bro:see:: udp_multiple_checksum_errors
+## tcp_multiple_zero_windows tcp_multiple_retransmissions
+event tcp_multiple_checksum_errors%(c: connection, is_orig: bool, threshold: count%);
+
+## Generated if a TCP flow crosses a zero-window threshold, per
+## 'W'/'w' history reporting.
+##
+## c: The connection record for the TCP connection.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: The threshold that was crossed.
+##
+## .. bro:see:: tcp_multiple_checksum_errors tcp_multiple_retransmissions
+event tcp_multiple_zero_windows%(c: connection, is_orig: bool, threshold: count%);
+
+## Generated if a TCP flow crosses a retransmission threshold, per
+## 'T'/'t' history reporting.
+##
+## c: The connection record for the TCP connection.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: The threshold that was crossed.
+##
+## .. bro:see:: tcp_multiple_checksum_errors tcp_multiple_zero_windows
+event tcp_multiple_retransmissions%(c: connection, is_orig: bool, threshold: count%);
+
## Generated when failing to write contents of a TCP stream to a file.
##
## c: The connection whose contents are being recorded.
diff --git a/src/analyzer/protocol/udp/UDP.cc b/src/analyzer/protocol/udp/UDP.cc
index ca46b88339..b3a334b76b 100644
--- a/src/analyzer/protocol/udp/UDP.cc
+++ b/src/analyzer/protocol/udp/UDP.cc
@@ -20,6 +20,9 @@ UDP_Analyzer::UDP_Analyzer(Connection* conn)
conn->EnableStatusUpdateTimer();
conn->SetInactivityTimeout(udp_inactivity_timeout);
request_len = reply_len = -1; // -1 means "haven't seen any activity"
+
+ req_chk_cnt = rep_chk_cnt = 0;
+ req_chk_thresh = rep_chk_thresh = 1;
}
UDP_Analyzer::~UDP_Analyzer()
@@ -77,9 +80,19 @@ void UDP_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig,
Weird("bad_UDP_checksum");
if ( is_orig )
- Conn()->CheckHistory(HIST_ORIG_CORRUPT_PKT, 'C');
+ {
+ uint32 t = req_chk_thresh;
+ if ( Conn()->ScaledHistoryEntry('C', req_chk_cnt,
+ req_chk_thresh) )
+ ChecksumEvent(is_orig, t);
+ }
else
- Conn()->CheckHistory(HIST_RESP_CORRUPT_PKT, 'c');
+ {
+ uint32 t = rep_chk_thresh;
+ if ( Conn()->ScaledHistoryEntry('c', rep_chk_cnt,
+ rep_chk_thresh) )
+ ChecksumEvent(is_orig, t);
+ }
return;
}
@@ -209,6 +222,12 @@ unsigned int UDP_Analyzer::MemoryAllocation() const
return Analyzer::MemoryAllocation() + padded_sizeof(*this) - 24;
}
+void UDP_Analyzer::ChecksumEvent(bool is_orig, uint32 threshold)
+ {
+ Conn()->HistoryThresholdEvent(udp_multiple_checksum_errors,
+ is_orig, threshold);
+ }
+
bool UDP_Analyzer::ValidateChecksum(const IP_Hdr* ip, const udphdr* up, int len)
{
uint32 sum;
diff --git a/src/analyzer/protocol/udp/UDP.h b/src/analyzer/protocol/udp/UDP.h
index 2c7f3ce150..7e07902a7e 100644
--- a/src/analyzer/protocol/udp/UDP.h
+++ b/src/analyzer/protocol/udp/UDP.h
@@ -31,6 +31,8 @@ protected:
bool IsReuse(double t, const u_char* pkt) override;
unsigned int MemoryAllocation() const override;
+ void ChecksumEvent(bool is_orig, uint32 threshold);
+
// Returns true if the checksum is valid, false if not
static bool ValidateChecksum(const IP_Hdr* ip, const struct udphdr* up,
int len);
@@ -44,6 +46,10 @@ private:
#define HIST_RESP_DATA_PKT 0x2
#define HIST_ORIG_CORRUPT_PKT 0x4
#define HIST_RESP_CORRUPT_PKT 0x8
+
+ // For tracking checksum history.
+ uint32 req_chk_cnt, req_chk_thresh;
+ uint32 rep_chk_cnt, rep_chk_thresh;
};
} } // namespace analyzer::*
diff --git a/src/analyzer/protocol/udp/events.bif b/src/analyzer/protocol/udp/events.bif
index 394181cf5d..afcace330b 100644
--- a/src/analyzer/protocol/udp/events.bif
+++ b/src/analyzer/protocol/udp/events.bif
@@ -36,3 +36,16 @@ event udp_reply%(u: connection%);
## udp_content_deliver_all_orig udp_content_deliver_all_resp
## udp_content_delivery_ports_orig udp_content_delivery_ports_resp
event udp_contents%(u: connection, is_orig: bool, contents: string%);
+
+## Generated if a UDP flow crosses a checksum-error threshold, per
+## 'C'/'c' history reporting.
+##
+## u: The connection record for the corresponding UDP flow.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: The threshold that was crossed.
+##
+## .. bro:see:: udp_reply udp_request udp_session_done
+## tcp_multiple_checksum_errors
+event udp_multiple_checksum_errors%(u: connection, is_orig: bool, threshold: count%);
diff --git a/src/broker/Manager.cc b/src/broker/Manager.cc
index 9712516b74..ca5ac53c96 100644
--- a/src/broker/Manager.cc
+++ b/src/broker/Manager.cc
@@ -137,6 +137,7 @@ Manager::Manager(bool arg_reading_pcaps)
{
bound_port = 0;
reading_pcaps = arg_reading_pcaps;
+ after_bro_init = false;
peer_count = 0;
log_topic_func = nullptr;
vector_of_data_type = nullptr;
@@ -184,22 +185,29 @@ void Manager::InitPostScript()
config.set("scheduler.max-threads", max_threads);
else
{
- // On high-core-count systems, spawning one thread per core
- // can lead to significant performance problems even if most
- // threads are under-utilized. Related:
- // https://github.com/actor-framework/actor-framework/issues/699
- if ( reading_pcaps )
- config.set("scheduler.max-threads", 2u);
+ auto max_threads_env = getenv("BRO_BROKER_MAX_THREADS");
+
+ if ( max_threads_env )
+ config.set("scheduler.max-threads", atoi(max_threads_env));
else
{
- auto hc = std::thread::hardware_concurrency();
-
- if ( hc > 8u )
- hc = 8u;
- else if ( hc < 4u)
- hc = 4u;
-
- config.set("scheduler.max-threads", hc);
+ // On high-core-count systems, letting CAF spawn a thread per core
+ // can lead to significant performance problems even if most
+ // threads are under-utilized. Related:
+ // https://github.com/actor-framework/actor-framework/issues/699
+ if ( reading_pcaps )
+ config.set("scheduler.max-threads", 2u);
+ else
+ // If the goal was to map threads to actors, 4 threads seems
+ // like a minimal default that could make sense -- the main
+ // actors that should be doing work are (1) the core,
+ // (2) the subscriber, (3) data stores (actually made of
+ // a frontend + proxy actor). Number of data stores may
+			// actually vary, but lumped together for simplicity. A (4)
+ // may be CAF's multiplexing or other internals...
+ // 4 is also the minimum number that CAF uses by default,
+ // even for systems with less than 4 cores.
+ config.set("scheduler.max-threads", 4u);
}
}
@@ -840,14 +848,14 @@ RecordVal* Manager::MakeEvent(val_list* args, Frame* frame)
bool Manager::Subscribe(const string& topic_prefix)
{
DBG_LOG(DBG_BROKER, "Subscribing to topic prefix %s", topic_prefix.c_str());
- bstate->subscriber.add_topic(topic_prefix);
+ bstate->subscriber.add_topic(topic_prefix, ! after_bro_init);
return true;
}
bool Manager::Unsubscribe(const string& topic_prefix)
{
DBG_LOG(DBG_BROKER, "Unsubscribing from topic prefix %s", topic_prefix.c_str());
- bstate->subscriber.remove_topic(topic_prefix);
+ bstate->subscriber.remove_topic(topic_prefix, ! after_bro_init);
return true;
}
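
The new thread-count selection in `Manager::InitPostScript()` boils down to: an explicit `BRO_BROKER_MAX_THREADS` environment variable wins, otherwise 2 threads while reading pcaps and 4 threads otherwise. A minimal standalone sketch of that decision (the real code writes the value into CAF's `scheduler.max-threads` config option rather than returning it):

```cpp
#include <cstdlib>

// Resolve the Broker/CAF scheduler thread count: env override first,
// then a pcap-reading special case, then the 4-thread default that
// roughly maps one thread per main actor (core, subscriber, data
// stores, CAF internals).
unsigned broker_max_threads(bool reading_pcaps)
	{
	if ( auto env = std::getenv("BRO_BROKER_MAX_THREADS") )
		return static_cast<unsigned>(std::atoi(env));

	return reading_pcaps ? 2u : 4u;
	}
```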
diff --git a/src/broker/Manager.h b/src/broker/Manager.h
index b5faaee345..415dd00a2c 100644
--- a/src/broker/Manager.h
+++ b/src/broker/Manager.h
@@ -66,6 +66,9 @@ public:
*/
void InitPostScript();
+ void BroInitDone()
+ { after_bro_init = true; }
+
/**
* Shuts Broker down at termination.
*/
@@ -404,6 +407,7 @@ private:
uint16_t bound_port;
bool reading_pcaps;
+ bool after_bro_init;
int peer_count;
Func* log_topic_func;
diff --git a/src/main.cc b/src/main.cc
index 2e9a89ddd1..757b09351f 100644
--- a/src/main.cc
+++ b/src/main.cc
@@ -1182,6 +1182,7 @@ int main(int argc, char** argv)
// Drain the event queue here to support the protocols framework configuring DPM
mgr.Drain();
+ broker_mgr->BroInitDone();
analyzer_mgr->DumpDebug();
have_pending_timers = ! reading_traces && timer_mgr->Size() > 0;
diff --git a/src/threading/formatters/Ascii.cc b/src/threading/formatters/Ascii.cc
index e96290e793..0ea7d07d16 100644
--- a/src/threading/formatters/Ascii.cc
+++ b/src/threading/formatters/Ascii.cc
@@ -227,9 +227,9 @@ threading::Value* Ascii::ParseValue(const string& s, const string& name, TypeTag
}
case TYPE_BOOL:
- if ( s == "T" )
+ if ( s == "T" || s == "1" )
val->val.int_val = 1;
- else if ( s == "F" )
+ else if ( s == "F" || s == "0" )
val->val.int_val = 0;
else
{
@@ -261,8 +261,10 @@ threading::Value* Ascii::ParseValue(const string& s, const string& name, TypeTag
break;
case TYPE_PORT:
+ {
val->val.port_val.proto = TRANSPORT_UNKNOWN;
pos = s.find('/');
+ string numberpart;
if ( pos != std::string::npos && s.length() > pos + 1 )
{
auto proto = s.substr(pos+1);
@@ -272,10 +274,21 @@ threading::Value* Ascii::ParseValue(const string& s, const string& name, TypeTag
val->val.port_val.proto = TRANSPORT_UDP;
else if ( strtolower(proto) == "icmp" )
val->val.port_val.proto = TRANSPORT_ICMP;
+ else if ( strtolower(proto) == "unknown" )
+ val->val.port_val.proto = TRANSPORT_UNKNOWN;
+ else
+ GetThread()->Warning(GetThread()->Fmt("Port '%s' contained unknown protocol '%s'", s.c_str(), proto.c_str()));
+ }
+
+ if ( pos != std::string::npos && pos > 0 )
+ {
+ numberpart = s.substr(0, pos);
+ start = numberpart.c_str();
}
val->val.port_val.port = strtoull(start, &end, 10);
if ( CheckNumberError(start, end) )
goto parse_error;
+ }
break;
case TYPE_SUBNET:
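
The port-parsing fix above comes down to converting only the text before the slash, instead of handing the full `"number/proto"` string to `strtoull`. A simplified standalone sketch (the real code also maps the protocol suffix to a transport enum and warns on unknown protocols):

```cpp
#include <cstdint>
#include <string>

// Split "80/tcp" at the slash and convert only the number part; a bare
// number with no protocol suffix is still accepted.
uint64_t parse_port_number(const std::string& s)
	{
	auto pos = s.find('/');
	auto numberpart = pos == std::string::npos ? s : s.substr(0, pos);
	return std::stoull(numberpart);
	}
```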
diff --git a/testing/Makefile b/testing/Makefile
index e83ec09396..98c6b239a2 100644
--- a/testing/Makefile
+++ b/testing/Makefile
@@ -8,6 +8,7 @@ brief: make-brief coverage
distclean:
@rm -f coverage.log
$(MAKE) -C btest $@
+ $(MAKE) -C coverage $@
make-verbose:
@for repo in $(DIRS); do (cd $$repo && make -s ); done
@@ -22,4 +23,6 @@ coverage:
@echo "Complete test suite code coverage:"
@./scripts/coverage-calc "brocov.tmp.*" coverage.log `pwd`/../scripts
@rm -f brocov.tmp.*
+ @cd coverage && make coverage
+.PHONY: coverage
diff --git a/testing/btest/Baseline/bifs.hll_large_estimate/out b/testing/btest/Baseline/bifs.hll_large_estimate/out
index 6897673f4e..c0bbe2b31d 100644
--- a/testing/btest/Baseline/bifs.hll_large_estimate/out
+++ b/testing/btest/Baseline/bifs.hll_large_estimate/out
@@ -1,3 +1,3 @@
Ok error
-171249.90868
+167377.950902
Ok error
diff --git a/testing/btest/Baseline/core.option-errors-4/.stderr b/testing/btest/Baseline/core.option-errors-4/.stderr
deleted file mode 100644
index b443da2eb9..0000000000
--- a/testing/btest/Baseline/core.option-errors-4/.stderr
+++ /dev/null
@@ -1 +0,0 @@
-error in /Users/johanna/corelight/bro/testing/btest/.tmp/core.option-errors-4/option-errors.bro, line 2 and /Users/johanna/corelight/bro/testing/btest/.tmp/core.option-errors-4/option-errors.bro, line 3: already defined (testopt)
diff --git a/testing/btest/Baseline/core.option-redef/.stdout b/testing/btest/Baseline/core.option-redef/.stdout
index 1e8b314962..baf1966653 100644
--- a/testing/btest/Baseline/core.option-redef/.stdout
+++ b/testing/btest/Baseline/core.option-redef/.stdout
@@ -1 +1,2 @@
6
+7
diff --git a/testing/btest/Baseline/core.pppoe-over-qinq/conn.log b/testing/btest/Baseline/core.pppoe-over-qinq/conn.log
index 6450d8f2f7..028dd982fb 100644
--- a/testing/btest/Baseline/core.pppoe-over-qinq/conn.log
+++ b/testing/btest/Baseline/core.pppoe-over-qinq/conn.log
@@ -3,8 +3,8 @@
#empty_field (empty)
#unset_field -
#path conn
-#open 2018-07-06-12-25-54
+#open 2018-08-01-20-09-03
#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig local_resp missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
#types time string addr port addr port enum string interval count count string bool bool count string count count count count set[string]
-1523351398.449222 CHhAvVGS1DHFjwGM9 1.1.1.1 20394 2.2.2.2 443 tcp - 273.626833 11352 4984 SF - - 0 ShADdtaTFf 44 25283 42 13001 -
-#close 2018-07-06-12-25-54
+1523351398.449222 CHhAvVGS1DHFjwGM9 1.1.1.1 20394 2.2.2.2 443 tcp - 273.626833 11352 4984 SF - - 0 ShADdtaTTFf 44 25283 42 13001 -
+#close 2018-08-01-20-09-03
diff --git a/testing/btest/Baseline/core.tunnels.gre-pptp/conn.log b/testing/btest/Baseline/core.tunnels.gre-pptp/conn.log
new file mode 100644
index 0000000000..20c0dc7317
--- /dev/null
+++ b/testing/btest/Baseline/core.tunnels.gre-pptp/conn.log
@@ -0,0 +1,10 @@
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path conn
+#open 2018-08-14-21-42-31
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig local_resp missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
+#types time string addr port addr port enum string interval count count string bool bool count string count count count count set[string]
+1417577703.821897 C4J4Th3PJpwUYZZ6gc 172.16.44.3 40768 8.8.8.8 53 udp dns 0.213894 71 146 SF - - 0 Dd 1 99 1 174 ClEkJM2Vm5giqnMf4h
+#close 2018-08-14-21-42-31
diff --git a/testing/btest/Baseline/core.tunnels.gre-pptp/dns.log b/testing/btest/Baseline/core.tunnels.gre-pptp/dns.log
new file mode 100644
index 0000000000..01875c2ff9
--- /dev/null
+++ b/testing/btest/Baseline/core.tunnels.gre-pptp/dns.log
@@ -0,0 +1,10 @@
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path dns
+#open 2018-08-14-21-42-31
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id rtt query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected
+#types time string addr port addr port enum count interval string count string count string count string bool bool bool bool count vector[string] vector[interval] bool
+1417577703.821897 C4J4Th3PJpwUYZZ6gc 172.16.44.3 40768 8.8.8.8 53 udp 42540 - xqt-detect-mode2-97712e88-167a-45b9-93ee-913140e76678 1 C_INTERNET 28 AAAA 3 NXDOMAIN F F T F 0 - - F
+#close 2018-08-14-21-42-31
diff --git a/testing/btest/Baseline/core.tunnels.gre-pptp/tunnel.log b/testing/btest/Baseline/core.tunnels.gre-pptp/tunnel.log
new file mode 100644
index 0000000000..780ea33f59
--- /dev/null
+++ b/testing/btest/Baseline/core.tunnels.gre-pptp/tunnel.log
@@ -0,0 +1,11 @@
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path tunnel
+#open 2018-08-14-21-42-31
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action
+#types time string addr port addr port enum enum
+1417577703.821897 CHhAvVGS1DHFjwGM9 2402:f000:1:8e01::5555 0 2607:fcd0:100:2300::b108:2a6b 0 Tunnel::IP Tunnel::DISCOVER
+1417577703.821897 ClEkJM2Vm5giqnMf4h 16.0.0.200 0 192.52.166.154 0 Tunnel::GRE Tunnel::DISCOVER
+#close 2018-08-14-21-42-31
diff --git a/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_vector_declaration_bro/output b/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_vector_declaration_bro/output
index 4f1260e4ed..22790f45fe 100644
--- a/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_vector_declaration_bro/output
+++ b/testing/btest/Baseline/doc.sphinx.include-doc_scripting_data_struct_vector_declaration_bro/output
@@ -7,10 +7,10 @@ event bro_init()
local v1: vector of count;
local v2 = vector(1, 2, 3, 4);
- v1[|v1|] = 1;
- v1[|v1|] = 2;
- v1[|v1|] = 3;
- v1[|v1|] = 4;
+ v1 += 1;
+ v1 += 2;
+ v1 += 3;
+ v1 += 4;
print fmt("contents of v1: %s", v1);
print fmt("length of v1: %d", |v1|);
diff --git a/testing/btest/Baseline/language.set/out b/testing/btest/Baseline/language.set/out
index fc157cf7d9..0128420cbf 100644
--- a/testing/btest/Baseline/language.set/out
+++ b/testing/btest/Baseline/language.set/out
@@ -42,3 +42,30 @@ remove element (PASS)
!in operator (PASS)
remove element (PASS)
!in operator (PASS)
+union (PASS)
+intersection (FAIL)
+difference (PASS)
+difference (PASS)
+union/inter. (PASS)
+relational (PASS)
+relational (PASS)
+subset (FAIL)
+subset (FAIL)
+subset (PASS)
+superset (FAIL)
+superset (FAIL)
+superset (FAIL)
+superset (PASS)
+non-ordering (FAIL)
+non-ordering (PASS)
+superset (PASS)
+superset (FAIL)
+superset (PASS)
+superset (PASS)
+superset (PASS)
+superset (FAIL)
+equality (PASS)
+equality (FAIL)
+non-equality (PASS)
+equality (FAIL)
+magnitude (FAIL)
diff --git a/testing/btest/Baseline/language.type-cast-error-dynamic/output b/testing/btest/Baseline/language.type-cast-error-dynamic/output
index 10d92ac199..8ebf0cc90e 100644
--- a/testing/btest/Baseline/language.type-cast-error-dynamic/output
+++ b/testing/btest/Baseline/language.type-cast-error-dynamic/output
@@ -1,2 +1,4 @@
-expression error in /home/robin/bro/lang-ext/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: cannot cast value of type 'count' to type 'string' [a as string]
-expression error in /home/robin/bro/lang-ext/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: cannot cast value of type 'record { a:addr; b:port; }' to type 'string' [a as string]
+expression error in /Users/jon/projects/bro/bro/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: invalid cast of value with type 'count' to type 'string' [a as string]
+expression error in /Users/jon/projects/bro/bro/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: invalid cast of value with type 'record { a:addr; b:port; }' to type 'string' [a as string]
+expression error in /Users/jon/projects/bro/bro/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: invalid cast of value with type 'record { data:opaque of Broker::Data; }' to type 'string' (nil $data field) [a as string]
+data is string, F
diff --git a/testing/btest/Baseline/language.vector/out b/testing/btest/Baseline/language.vector/out
index 0aa3ab0a8f..0fdcc1fa24 100644
--- a/testing/btest/Baseline/language.vector/out
+++ b/testing/btest/Baseline/language.vector/out
@@ -57,3 +57,4 @@ access element (PASS)
% operator (PASS)
&& operator (PASS)
|| operator (PASS)
++= operator (PASS)
diff --git a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/logger-1..stdout b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/logger-1..stdout
index e10770a5cc..15baa652c9 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/logger-1..stdout
+++ b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/logger-1..stdout
@@ -2,4 +2,10 @@ Connected to a peer
Connected to a peer
Connected to a peer
Connected to a peer
+got fully_connected event from, worker-1
Connected to a peer
+got fully_connected event from, proxy-1
+got fully_connected event from, proxy-2
+got fully_connected event from, manager-1
+got fully_connected event from, worker-2
+termination condition met: shutting down
diff --git a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/manager-1..stdout b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/manager-1..stdout
index e10770a5cc..b7b8f3e3b6 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/manager-1..stdout
+++ b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/manager-1..stdout
@@ -3,3 +3,4 @@ Connected to a peer
Connected to a peer
Connected to a peer
Connected to a peer
+sent fully_connected event
diff --git a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-1..stdout b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-1..stdout
index 7c8eb5ee83..328d7c91a3 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-1..stdout
+++ b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-1..stdout
@@ -2,3 +2,4 @@ Connected to a peer
Connected to a peer
Connected to a peer
Connected to a peer
+sent fully_connected event
diff --git a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-2..stdout b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-2..stdout
index 7c8eb5ee83..328d7c91a3 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-2..stdout
+++ b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/proxy-2..stdout
@@ -2,3 +2,4 @@ Connected to a peer
Connected to a peer
Connected to a peer
Connected to a peer
+sent fully_connected event
diff --git a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-1..stdout b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-1..stdout
index 7c8eb5ee83..328d7c91a3 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-1..stdout
+++ b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-1..stdout
@@ -2,3 +2,4 @@ Connected to a peer
Connected to a peer
Connected to a peer
Connected to a peer
+sent fully_connected event
diff --git a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-2..stdout b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-2..stdout
index 7c8eb5ee83..328d7c91a3 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-2..stdout
+++ b/testing/btest/Baseline/scripts.base.frameworks.cluster.start-it-up-logger/worker-2..stdout
@@ -2,3 +2,4 @@ Connected to a peer
Connected to a peer
Connected to a peer
Connected to a peer
+sent fully_connected event
diff --git a/testing/btest/Baseline/scripts.base.frameworks.config.basic/bro..stderr b/testing/btest/Baseline/scripts.base.frameworks.config.basic/bro..stderr
new file mode 100644
index 0000000000..977e8fc37a
--- /dev/null
+++ b/testing/btest/Baseline/scripts.base.frameworks.config.basic/bro..stderr
@@ -0,0 +1 @@
+received termination signal
diff --git a/testing/btest/Baseline/scripts.base.frameworks.config.basic/bro.config.log b/testing/btest/Baseline/scripts.base.frameworks.config.basic/bro.config.log
index b1e03411e5..0d96d0f111 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.config.basic/bro.config.log
+++ b/testing/btest/Baseline/scripts.base.frameworks.config.basic/bro.config.log
@@ -3,21 +3,23 @@
#empty_field (empty)
#unset_field -
#path config
-#open 2017-10-11-20-23-11
+#open 2018-08-10-18-16-52
#fields ts id old_value new_value location
#types time string string string string
-1507753391.587107 testbool T F ../configfile
-1507753391.587107 testcount 0 1 ../configfile
-1507753391.587107 testcount 1 2 ../configfile
-1507753391.587107 testint 0 -1 ../configfile
-1507753391.587107 testenum SSH::LOG Conn::LOG ../configfile
-1507753391.587107 testport 42/tcp 45/unknown ../configfile
-1507753391.587107 testaddr 127.0.0.1 127.0.0.1 ../configfile
-1507753391.587107 testaddr 127.0.0.1 2607:f8b0:4005:801::200e ../configfile
-1507753391.587107 testinterval 1.0 sec 60.0 ../configfile
-1507753391.587107 testtime 0.0 1507321987.0 ../configfile
-1507753391.587107 test_set (empty) b,c,a,d,erdbeerschnitzel ../configfile
-1507753391.587107 test_vector (empty) 1,2,3,4,5,6 ../configfile
-1507753391.587107 test_set b,c,a,d,erdbeerschnitzel (empty) ../configfile
-1507753391.587107 test_set (empty) \x2d ../configfile
-#close 2017-10-11-20-23-11
+1533925012.140634 testbool T F ../configfile
+1533925012.140634 testcount 0 1 ../configfile
+1533925012.140634 testcount 1 2 ../configfile
+1533925012.140634 testint 0 -1 ../configfile
+1533925012.140634 testenum SSH::LOG Conn::LOG ../configfile
+1533925012.140634 testport 42/tcp 45/unknown ../configfile
+1533925012.140634 testporttcp 40/udp 42/tcp ../configfile
+1533925012.140634 testportudp 40/tcp 42/udp ../configfile
+1533925012.140634 testaddr 127.0.0.1 127.0.0.1 ../configfile
+1533925012.140634 testaddr 127.0.0.1 2607:f8b0:4005:801::200e ../configfile
+1533925012.140634 testinterval 1.0 sec 60.0 ../configfile
+1533925012.140634 testtime 0.0 1507321987.0 ../configfile
+1533925012.140634 test_set (empty) b,c,a,d,erdbeerschnitzel ../configfile
+1533925012.140634 test_vector (empty) 1,2,3,4,5,6 ../configfile
+1533925012.140634 test_set b,c,a,d,erdbeerschnitzel (empty) ../configfile
+1533925012.140634 test_set (empty) \x2d ../configfile
+#close 2018-08-10-18-16-52
diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.basic/out b/testing/btest/Baseline/scripts.base.frameworks.input.basic/out
index 3f288d5c54..5cc19d85a2 100644
--- a/testing/btest/Baseline/scripts.base.frameworks.input.basic/out
+++ b/testing/btest/Baseline/scripts.base.frameworks.input.basic/out
@@ -1,5 +1,5 @@
{
-[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, pp=5/icmp, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, ns=4242, sc={
+[-42] = [b=T, bt=T, e=SSH::LOG, c=21, p=123/unknown, pp=5/icmp, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, ns=4242, sc={
2,
4,
1,
diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.port-embedded/bro..stderr b/testing/btest/Baseline/scripts.base.frameworks.input.port-embedded/bro..stderr
new file mode 100644
index 0000000000..fee70a8699
--- /dev/null
+++ b/testing/btest/Baseline/scripts.base.frameworks.input.port-embedded/bro..stderr
@@ -0,0 +1,2 @@
+warning: ../input.log/Input::READER_ASCII: Port '50/trash' contained unknown protocol 'trash'
+received termination signal
diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.port-embedded/bro..stdout b/testing/btest/Baseline/scripts.base.frameworks.input.port-embedded/bro..stdout
new file mode 100644
index 0000000000..d1d886b370
--- /dev/null
+++ b/testing/btest/Baseline/scripts.base.frameworks.input.port-embedded/bro..stdout
@@ -0,0 +1,4 @@
+[i=1.2.3.4], [p=80/tcp]
+[i=1.2.3.5], [p=52/udp]
+[i=1.2.3.6], [p=30/unknown]
+[i=1.2.3.7], [p=50/unknown]
diff --git a/testing/btest/Makefile b/testing/btest/Makefile
index c9bcfff5ee..c6f2438ad1 100644
--- a/testing/btest/Makefile
+++ b/testing/btest/Makefile
@@ -21,6 +21,7 @@ coverage:
cleanup:
@rm -f $(DIAG)
@rm -rf $(SCRIPT_COV)*
+ @find ../../ -name "*.gcda" -exec rm {} \;
distclean: cleanup
@rm -rf .btest.failed.dat \
diff --git a/testing/btest/Traces/tunnels/gre-pptp.pcap b/testing/btest/Traces/tunnels/gre-pptp.pcap
new file mode 100644
index 0000000000..45216c7f7a
Binary files /dev/null and b/testing/btest/Traces/tunnels/gre-pptp.pcap differ
diff --git a/testing/btest/bifs/hll_large_estimate.bro b/testing/btest/bifs/hll_large_estimate.bro
index 2059e47568..b17b50678d 100644
--- a/testing/btest/bifs/hll_large_estimate.bro
+++ b/testing/btest/bifs/hll_large_estimate.bro
@@ -8,7 +8,7 @@
event bro_init()
{
- local cp: opaque of cardinality = hll_cardinality_init(0.1, 0.99);
+ local cp: opaque of cardinality = hll_cardinality_init(0.1, 1.0);
local base: count = 2130706432; # 127.0.0.0
local i: count = 0;
while ( ++i < 170000 )
diff --git a/testing/btest/core/leaks/broker/data.bro b/testing/btest/core/leaks/broker/data.bro
index d79155fa30..590d041ff1 100644
--- a/testing/btest/core/leaks/broker/data.bro
+++ b/testing/btest/core/leaks/broker/data.bro
@@ -91,7 +91,7 @@ function broker_to_bro_vector_recurse(it: opaque of Broker::VectorIterator,
if ( Broker::vector_iterator_last(it) )
return rval;
- rval[|rval|] = Broker::vector_iterator_value(it) as string;
+ rval += Broker::vector_iterator_value(it) as string;
Broker::vector_iterator_next(it);
return broker_to_bro_vector_recurse(it, rval);
}
diff --git a/testing/btest/core/leaks/set.bro b/testing/btest/core/leaks/set.bro
new file mode 100644
index 0000000000..b3f2200d28
--- /dev/null
+++ b/testing/btest/core/leaks/set.bro
@@ -0,0 +1,194 @@
+# @TEST-GROUP: leaks
+# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks
+
+# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local btest-bg-run bro bro -m -b -r $TRACES/http/get.trace %INPUT
+# @TEST-EXEC: btest-bg-wait 60
+
+function test_case(msg: string, expect: bool)
+ {
+ print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL");
+ }
+
+# Note: only global sets can be initialized with curly braces
+global sg1: set[string] = { "curly", "braces" };
+global sg2: set[port, string, bool] = { [10/udp, "curly", F],
+ [11/udp, "braces", T] };
+global sg3 = { "more", "curly", "braces" };
+
+global did_once = F;
+
+event new_connection(cc: connection)
+ {
+ if ( did_once )
+ return;
+
+ did_once = T;
+
+ local s1: set[string] = set( "test", "example" );
+ local s2: set[string] = set();
+ local s3: set[string];
+ local s4 = set( "type inference" );
+ local s5: set[port, string, bool] = set( [1/tcp, "test", T],
+ [2/tcp, "example", F] );
+ local s6: set[port, string, bool] = set();
+ local s7: set[port, string, bool];
+ local s8 = set( [8/tcp, "type inference", T] );
+
+ # Type inference tests
+
+ test_case( "type inference", type_name(s4) == "set[string]" );
+ test_case( "type inference", type_name(s8) == "set[port,string,bool]" );
+ test_case( "type inference", type_name(sg3) == "set[string]" );
+
+ # Test the size of each set
+
+ test_case( "cardinality", |s1| == 2 );
+ test_case( "cardinality", |s2| == 0 );
+ test_case( "cardinality", |s3| == 0 );
+ test_case( "cardinality", |s4| == 1 );
+ test_case( "cardinality", |s5| == 2 );
+ test_case( "cardinality", |s6| == 0 );
+ test_case( "cardinality", |s7| == 0 );
+ test_case( "cardinality", |s8| == 1 );
+ test_case( "cardinality", |sg1| == 2 );
+ test_case( "cardinality", |sg2| == 2 );
+ test_case( "cardinality", |sg3| == 3 );
+
+ # Test iterating over each set
+
+ local ct: count;
+ ct = 0;
+ for ( c in s1 )
+ {
+ if ( type_name(c) != "string" )
+ print "Error: wrong set element type";
+ ++ct;
+ }
+ test_case( "iterate over set", ct == 2 );
+
+ ct = 0;
+ for ( c in s2 )
+ {
+ ++ct;
+ }
+ test_case( "iterate over set", ct == 0 );
+
+ ct = 0;
+ for ( [c1,c2,c3] in s5 )
+ {
+ ++ct;
+ }
+ test_case( "iterate over set", ct == 2 );
+
+ ct = 0;
+ for ( [c1,c2,c3] in sg2 )
+ {
+ ++ct;
+ }
+ test_case( "iterate over set", ct == 2 );
+
+ # Test adding elements to each set (Note: cannot add elements to sets
+ # of multiple types)
+
+ add s1["added"];
+ add s1["added"]; # element already exists (nothing happens)
+ test_case( "add element", |s1| == 3 );
+ test_case( "in operator", "added" in s1 );
+
+ add s2["another"];
+ test_case( "add element", |s2| == 1 );
+ add s2["test"];
+ test_case( "add element", |s2| == 2 );
+ test_case( "in operator", "another" in s2 );
+ test_case( "in operator", "test" in s2 );
+
+ add s3["foo"];
+ test_case( "add element", |s3| == 1 );
+ test_case( "in operator", "foo" in s3 );
+
+ add s4["local"];
+ test_case( "add element", |s4| == 2 );
+ test_case( "in operator", "local" in s4 );
+
+ add sg1["global"];
+ test_case( "add element", |sg1| == 3 );
+ test_case( "in operator", "global" in sg1 );
+
+ add sg3["more global"];
+ test_case( "add element", |sg3| == 4 );
+ test_case( "in operator", "more global" in sg3 );
+
+ # Test removing elements from each set (Note: cannot remove elements
+ # from sets of multiple types)
+
+ delete s1["test"];
+ delete s1["foobar"]; # element does not exist (nothing happens)
+ test_case( "remove element", |s1| == 2 );
+ test_case( "!in operator", "test" !in s1 );
+
+ delete s2["test"];
+ test_case( "remove element", |s2| == 1 );
+ test_case( "!in operator", "test" !in s2 );
+
+ delete s3["foo"];
+ test_case( "remove element", |s3| == 0 );
+ test_case( "!in operator", "foo" !in s3 );
+
+ delete s4["type inference"];
+ test_case( "remove element", |s4| == 1 );
+ test_case( "!in operator", "type inference" !in s4 );
+
+ delete sg1["braces"];
+ test_case( "remove element", |sg1| == 2 );
+ test_case( "!in operator", "braces" !in sg1 );
+
+ delete sg3["curly"];
+ test_case( "remove element", |sg3| == 3 );
+ test_case( "!in operator", "curly" !in sg3 );
+
+
+ local a = set(1,5,7,9,8,14);
+ local b = set(1,7,9,2);
+
+ local a_plus_b = set(1,2,5,7,9,8,14);
+ local a_also_b = set(1,7,9);
+ local a_sans_b = set(5,8,14);
+ local b_sans_a = set(2);
+
+ local a_or_b = a | b;
+ local a_and_b = a & b;
+
+ test_case( "union", a_or_b == a_plus_b );
+ test_case( "intersection", a_and_b == a_plus_b );
+ test_case( "difference", a - b == a_sans_b );
+ test_case( "difference", b - a == b_sans_a );
+
+ test_case( "union/inter.", |b & set(1,7,9,2)| == |b | set(1,7,2,9)| );
+ test_case( "relational", |b & a_or_b| == |b| && |b| < |a_or_b| );
+ test_case( "relational", b < a_or_b && a < a_or_b && a_or_b > a_and_b );
+
+ test_case( "subset", b < a );
+ test_case( "subset", a < b );
+ test_case( "subset", b < (a | set(2)) );
+ test_case( "superset", b > a );
+ test_case( "superset", b > (a | set(2)) );
+ test_case( "superset", b | set(8, 14, 5) > (a | set(2)) );
+ test_case( "superset", b | set(8, 14, 99, 5) > (a | set(2)) );
+
+ test_case( "non-ordering", (a <= b) || (a >= b) );
+ test_case( "non-ordering", (a <= a_or_b) && (a_or_b >= b) );
+
+ test_case( "superset", (b | set(14, 5)) > a - set(8) );
+ test_case( "superset", (b | set(14)) > a - set(8) );
+ test_case( "superset", (b | set(14)) > a - set(8,5) );
+ test_case( "superset", b >= a - set(5,8,14) );
+ test_case( "superset", b > a - set(5,8,14) );
+ test_case( "superset", (b - set(2)) > a - set(5,8,14) );
+ test_case( "equality", a == a | set(5) );
+ test_case( "equality", a == a | set(5,11) );
+ test_case( "non-equality", a != a | set(5,11) );
+ test_case( "equality", a == a | set(5,11) );
+
+ test_case( "magnitude", |a_and_b| == |a_or_b|);
+ }
+
diff --git a/testing/btest/core/option-errors.bro b/testing/btest/core/option-errors.bro
index 6a53598650..6a9a8f1db6 100644
--- a/testing/btest/core/option-errors.bro
+++ b/testing/btest/core/option-errors.bro
@@ -11,8 +11,3 @@ option testbool : bool;
option testopt = 5;
testopt = 6;
-
-@TEST-START-NEXT
-
-option testopt = 5;
-redef testopt = 6;
diff --git a/testing/btest/core/option-redef.bro b/testing/btest/core/option-redef.bro
index 05706ab48b..3d67a9a755 100644
--- a/testing/btest/core/option-redef.bro
+++ b/testing/btest/core/option-redef.bro
@@ -2,11 +2,15 @@
# @TEST-EXEC: btest-diff .stdout
# options are allowed to be redef-able.
+# And they are even redef-able by default.
option testopt = 5 &redef;
redef testopt = 6;
+option anotheropt = 6;
+redef anotheropt = 7;
event bro_init() {
print testopt;
+ print anotheropt;
}
diff --git a/testing/btest/core/tunnels/gre-pptp.test b/testing/btest/core/tunnels/gre-pptp.test
new file mode 100644
index 0000000000..a5fa8c0d19
--- /dev/null
+++ b/testing/btest/core/tunnels/gre-pptp.test
@@ -0,0 +1,4 @@
+# @TEST-EXEC: bro -r $TRACES/tunnels/gre-pptp.pcap
+# @TEST-EXEC: btest-diff conn.log
+# @TEST-EXEC: btest-diff tunnel.log
+# @TEST-EXEC: btest-diff dns.log
diff --git a/testing/btest/doc/sphinx/include-doc_scripting_data_struct_vector_declaration_bro.btest b/testing/btest/doc/sphinx/include-doc_scripting_data_struct_vector_declaration_bro.btest
index 4f1260e4ed..22790f45fe 100644
--- a/testing/btest/doc/sphinx/include-doc_scripting_data_struct_vector_declaration_bro.btest
+++ b/testing/btest/doc/sphinx/include-doc_scripting_data_struct_vector_declaration_bro.btest
@@ -7,10 +7,10 @@ event bro_init()
local v1: vector of count;
local v2 = vector(1, 2, 3, 4);
- v1[|v1|] = 1;
- v1[|v1|] = 2;
- v1[|v1|] = 3;
- v1[|v1|] = 4;
+ v1 += 1;
+ v1 += 2;
+ v1 += 3;
+ v1 += 4;
print fmt("contents of v1: %s", v1);
print fmt("length of v1: %d", |v1|);
diff --git a/testing/btest/language/ipv6-literals.bro b/testing/btest/language/ipv6-literals.bro
index 004d104c6e..bf888b29e1 100644
--- a/testing/btest/language/ipv6-literals.bro
+++ b/testing/btest/language/ipv6-literals.bro
@@ -3,30 +3,30 @@
local v: vector of addr = vector();
-v[|v|] = [::1];
-v[|v|] = [::ffff];
-v[|v|] = [::ffff:ffff];
-v[|v|] = [::0a0a:ffff];
-v[|v|] = [1::1];
-v[|v|] = [1::a];
-v[|v|] = [1::1:1];
-v[|v|] = [1::1:a];
-v[|v|] = [a::a];
-v[|v|] = [a::1];
-v[|v|] = [a::a:a];
-v[|v|] = [a::a:1];
-v[|v|] = [a:a::a];
-v[|v|] = [aaaa:0::ffff];
-v[|v|] = [::ffff:192.168.1.100];
-v[|v|] = [ffff::192.168.1.100];
-v[|v|] = [::192.168.1.100];
-v[|v|] = [::ffff:0:192.168.1.100];
-v[|v|] = [805B:2D9D:DC28::FC57:212.200.31.255];
-v[|v|] = [0xaaaa::bbbb];
-v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222];
-v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:ffff:1:2222];
-v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:ffff:0:2222];
-v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:0:0:2222];
+v += [::1];
+v += [::ffff];
+v += [::ffff:ffff];
+v += [::0a0a:ffff];
+v += [1::1];
+v += [1::a];
+v += [1::1:1];
+v += [1::1:a];
+v += [a::a];
+v += [a::1];
+v += [a::a:a];
+v += [a::a:1];
+v += [a:a::a];
+v += [aaaa:0::ffff];
+v += [::ffff:192.168.1.100];
+v += [ffff::192.168.1.100];
+v += [::192.168.1.100];
+v += [::ffff:0:192.168.1.100];
+v += [805B:2D9D:DC28::FC57:212.200.31.255];
+v += [0xaaaa::bbbb];
+v += [aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222];
+v += [aaaa:bbbb:cccc:dddd:eeee:ffff:1:2222];
+v += [aaaa:bbbb:cccc:dddd:eeee:ffff:0:2222];
+v += [aaaa:bbbb:cccc:dddd:eeee:0:0:2222];
for (i in v)
print v[i];
diff --git a/testing/btest/language/record-default-coercion.bro b/testing/btest/language/record-default-coercion.bro
index 822b845f65..9d8babf571 100644
--- a/testing/btest/language/record-default-coercion.bro
+++ b/testing/btest/language/record-default-coercion.bro
@@ -43,6 +43,6 @@ print_bar(bar6);
local r: MyRecord = [$c=13];
print r;
print |r$v|;
-r$v[|r$v|] = "test";
+r$v += "test";
print r;
print |r$v|;
diff --git a/testing/btest/language/set.bro b/testing/btest/language/set.bro
index d1eef7e6f0..56cd649b49 100644
--- a/testing/btest/language/set.bro
+++ b/testing/btest/language/set.bro
@@ -136,5 +136,50 @@ event bro_init()
delete sg3["curly"];
test_case( "remove element", |sg3| == 3 );
test_case( "!in operator", "curly" !in sg3 );
+
+
+ local a = set(1,5,7,9,8,14);
+ local b = set(1,7,9,2);
+
+ local a_plus_b = set(1,2,5,7,9,8,14);
+ local a_also_b = set(1,7,9);
+ local a_sans_b = set(5,8,14);
+ local b_sans_a = set(2);
+
+ local a_or_b = a | b;
+ local a_and_b = a & b;
+
+ test_case( "union", a_or_b == a_plus_b );
+ test_case( "intersection", a_and_b == a_plus_b );
+ test_case( "difference", a - b == a_sans_b );
+ test_case( "difference", b - a == b_sans_a );
+
+ test_case( "union/inter.", |b & set(1,7,9,2)| == |b | set(1,7,2,9)| );
+ test_case( "relational", |b & a_or_b| == |b| && |b| < |a_or_b| );
+ test_case( "relational", b < a_or_b && a < a_or_b && a_or_b > a_and_b );
+
+ test_case( "subset", b < a );
+ test_case( "subset", a < b );
+ test_case( "subset", b < (a | set(2)) );
+ test_case( "superset", b > a );
+ test_case( "superset", b > (a | set(2)) );
+ test_case( "superset", b | set(8, 14, 5) > (a | set(2)) );
+ test_case( "superset", b | set(8, 14, 99, 5) > (a | set(2)) );
+
+ test_case( "non-ordering", (a <= b) || (a >= b) );
+ test_case( "non-ordering", (a <= a_or_b) && (a_or_b >= b) );
+
+ test_case( "superset", (b | set(14, 5)) > a - set(8) );
+ test_case( "superset", (b | set(14)) > a - set(8) );
+ test_case( "superset", (b | set(14)) > a - set(8,5) );
+ test_case( "superset", b >= a - set(5,8,14) );
+ test_case( "superset", b > a - set(5,8,14) );
+ test_case( "superset", (b - set(2)) > a - set(5,8,14) );
+ test_case( "equality", a == a | set(5) );
+ test_case( "equality", a == a | set(5,11) );
+ test_case( "non-equality", a != a | set(5,11) );
+ test_case( "equality", a == a | set(5,11) );
+
+ test_case( "magnitude", |a_and_b| == |a_or_b|);
}
diff --git a/testing/btest/language/type-cast-error-dynamic.bro b/testing/btest/language/type-cast-error-dynamic.bro
index 45f1d1fb5f..91fa212ce4 100644
--- a/testing/btest/language/type-cast-error-dynamic.bro
+++ b/testing/btest/language/type-cast-error-dynamic.bro
@@ -18,6 +18,8 @@ event bro_init()
cast_to_string(42);
cast_to_string(x);
+ cast_to_string(Broker::Data());
+ print "data is string", Broker::Data() is string;
}
diff --git a/testing/btest/language/vector.bro b/testing/btest/language/vector.bro
index 76fc8b69e3..85bed8eae2 100644
--- a/testing/btest/language/vector.bro
+++ b/testing/btest/language/vector.bro
@@ -163,5 +163,10 @@ event bro_init()
test_case( "&& operator", v14[0] == F && v14[1] == F && v14[2] == T );
test_case( "|| operator", v15[0] == T && v15[1] == F && v15[2] == T );
+ # Test += operator.
+ local v16 = v6;
+ v16 += 40;
+ test_case( "+= operator", all_set(v16 == vector( 10, 20, 30, 40 )) );
+
}
diff --git a/testing/btest/scripts/base/frameworks/cluster/start-it-up-logger.bro b/testing/btest/scripts/base/frameworks/cluster/start-it-up-logger.bro
index 6bb9dcbc03..b973705c97 100644
--- a/testing/btest/scripts/base/frameworks/cluster/start-it-up-logger.bro
+++ b/testing/btest/scripts/base/frameworks/cluster/start-it-up-logger.bro
@@ -1,13 +1,13 @@
# @TEST-SERIALIZE: comm
#
-# @TEST-EXEC: btest-bg-run logger-1 CLUSTER_NODE=logger-1 BROPATH=$BROPATH:.. bro %INPUT
+# @TEST-EXEC: btest-bg-run logger-1 CLUSTER_NODE=logger-1 BROPATH=$BROPATH:.. bro %INPUT
# @TEST-EXEC: btest-bg-run manager-1 CLUSTER_NODE=manager-1 BROPATH=$BROPATH:.. bro %INPUT
-# @TEST-EXEC: btest-bg-run proxy-1 CLUSTER_NODE=proxy-1 BROPATH=$BROPATH:.. bro %INPUT
-# @TEST-EXEC: btest-bg-run proxy-2 CLUSTER_NODE=proxy-2 BROPATH=$BROPATH:.. bro %INPUT
-# @TEST-EXEC: btest-bg-run worker-1 CLUSTER_NODE=worker-1 BROPATH=$BROPATH:.. bro %INPUT
-# @TEST-EXEC: btest-bg-run worker-2 CLUSTER_NODE=worker-2 BROPATH=$BROPATH:.. bro %INPUT
+# @TEST-EXEC: btest-bg-run proxy-1 CLUSTER_NODE=proxy-1 BROPATH=$BROPATH:.. bro %INPUT
+# @TEST-EXEC: btest-bg-run proxy-2 CLUSTER_NODE=proxy-2 BROPATH=$BROPATH:.. bro %INPUT
+# @TEST-EXEC: btest-bg-run worker-1 CLUSTER_NODE=worker-1 BROPATH=$BROPATH:.. bro %INPUT
+# @TEST-EXEC: btest-bg-run worker-2 CLUSTER_NODE=worker-2 BROPATH=$BROPATH:.. bro %INPUT
# @TEST-EXEC: btest-bg-wait 30
-# @TEST-EXEC: btest-diff logger-1/.stdout
+# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-sort btest-diff logger-1/.stdout
# @TEST-EXEC: btest-diff manager-1/.stdout
# @TEST-EXEC: btest-diff proxy-1/.stdout
# @TEST-EXEC: btest-diff proxy-2/.stdout
@@ -30,20 +30,27 @@ redef Cluster::retry_interval = 1sec;
redef Broker::default_listen_retry = 1sec;
redef Broker::default_connect_retry = 1sec;
-global fully_connected: event();
-
global peer_count = 0;
global fully_connected_nodes = 0;
-event fully_connected()
+event fully_connected(n: string)
{
++fully_connected_nodes;
if ( Cluster::node == "logger-1" )
{
+ print "got fully_connected event from", n;
+
if ( peer_count == 5 && fully_connected_nodes == 5 )
+ {
+ print "termination condition met: shutting down";
terminate();
+ }
+ }
+ else
+ {
+ print "sent fully_connected event";
}
}
@@ -60,17 +67,20 @@ event Broker::peer_added(endpoint: Broker::EndpointInfo, msg: string)
if ( Cluster::node == "logger-1" )
{
if ( peer_count == 5 && fully_connected_nodes == 5 )
+ {
+ print "termination condition met: shutting down";
terminate();
+ }
}
else if ( Cluster::node == "manager-1" )
{
if ( peer_count == 5 )
- event fully_connected();
+ event fully_connected(Cluster::node);
}
else
{
if ( peer_count == 4 )
- event fully_connected();
+ event fully_connected(Cluster::node);
}
}
diff --git a/testing/btest/scripts/base/frameworks/config/basic.bro b/testing/btest/scripts/base/frameworks/config/basic.bro
index 3b72f6572d..f5a02983fd 100644
--- a/testing/btest/scripts/base/frameworks/config/basic.bro
+++ b/testing/btest/scripts/base/frameworks/config/basic.bro
@@ -1,6 +1,7 @@
# @TEST-EXEC: btest-bg-run bro bro -b %INPUT
# @TEST-EXEC: btest-bg-wait 10
# @TEST-EXEC: btest-diff bro/config.log
+# @TEST-EXEC: btest-diff bro/.stderr
@load base/frameworks/config
@load base/protocols/conn
@@ -16,6 +17,8 @@ testcount 2
testint -1
testenum Conn::LOG
testport 45
+testporttcp 42/tcp
+testportudp 42/udp
testaddr 127.0.0.1
testaddr 2607:f8b0:4005:801::200e
testinterval 60
@@ -35,6 +38,8 @@ export {
option testint: int = 0;
option testenum = SSH::LOG;
option testport = 42/tcp;
+ option testporttcp = 40/udp;
+ option testportudp = 40/tcp;
option testaddr = 127.0.0.1;
option testtime = network_time();
option testinterval = 1sec;
diff --git a/testing/btest/scripts/base/frameworks/config/updates.bro b/testing/btest/scripts/base/frameworks/config/updates.bro
index 1e523c752f..a4ee557e27 100644
--- a/testing/btest/scripts/base/frameworks/config/updates.bro
+++ b/testing/btest/scripts/base/frameworks/config/updates.bro
@@ -1,11 +1,11 @@
# @TEST-EXEC: btest-bg-run bro bro -b %INPUT
-# @TEST-EXEC: $SCRIPTS/wait-for-file bro/got1 5 || (btest-bg-wait -k 1 && false)
+# @TEST-EXEC: $SCRIPTS/wait-for-file bro/got1 10 || (btest-bg-wait -k 1 && false)
# @TEST-EXEC: mv configfile2 configfile
# @TEST-EXEC: touch configfile
-# @TEST-EXEC: $SCRIPTS/wait-for-file bro/got2 5 || (btest-bg-wait -k 1 && false)
+# @TEST-EXEC: $SCRIPTS/wait-for-file bro/got2 10 || (btest-bg-wait -k 1 && false)
# @TEST-EXEC: mv configfile3 configfile
# @TEST-EXEC: touch configfile
-# @TEST-EXEC: $SCRIPTS/wait-for-file bro/got3 5 || (btest-bg-wait -k 1 && false)
+# @TEST-EXEC: $SCRIPTS/wait-for-file bro/got3 10 || (btest-bg-wait -k 1 && false)
# @TEST-EXEC: mv configfile4 configfile
# @TEST-EXEC: touch configfile
# @TEST-EXEC: btest-bg-wait 10
diff --git a/testing/btest/scripts/base/frameworks/input/basic.bro b/testing/btest/scripts/base/frameworks/input/basic.bro
index e77a418f0d..356b87d70b 100644
--- a/testing/btest/scripts/base/frameworks/input/basic.bro
+++ b/testing/btest/scripts/base/frameworks/input/basic.bro
@@ -7,9 +7,9 @@ redef exit_only_after_terminate = T;
@TEST-START-FILE input.log
#separator \x09
#path ssh
-#fields b i e c p pp sn a d t iv s sc ss se vc ve ns
+#fields b bt i e c p pp sn a d t iv s sc ss se vc ve ns
#types bool int enum count port port subnet addr double time interval string table table table vector vector string
-T -42 SSH::LOG 21 123 5/icmp 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY 4242
+T 1 -42 SSH::LOG 21 123 5/icmp 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY 4242
@TEST-END-FILE
@load base/protocols/ssh
@@ -26,6 +26,7 @@ type Idx: record {
type Val: record {
b: bool;
+ bt: bool;
e: Log::ID;
c: count;
p: port;
diff --git a/testing/btest/scripts/base/frameworks/input/port-embedded.bro b/testing/btest/scripts/base/frameworks/input/port-embedded.bro
new file mode 100644
index 0000000000..8aab733069
--- /dev/null
+++ b/testing/btest/scripts/base/frameworks/input/port-embedded.bro
@@ -0,0 +1,44 @@
+# @TEST-EXEC: btest-bg-run bro bro -b %INPUT
+# @TEST-EXEC: btest-bg-wait 10
+# @TEST-EXEC: btest-diff bro/.stdout
+# @TEST-EXEC: btest-diff bro/.stderr
+
+@TEST-START-FILE input.log
+#fields i p
+1.2.3.4 80/tcp
+1.2.3.5 52/udp
+1.2.3.6 30/unknown
+1.2.3.7 50/trash
+@TEST-END-FILE
+
+redef exit_only_after_terminate = T;
+
+redef InputAscii::empty_field = "EMPTY";
+
+module A;
+
+type Idx: record {
+ i: addr;
+};
+
+type Val: record {
+ p: port;
+};
+
+global servers: table[addr] of Val = table();
+
+event line(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val)
+ {
+ print left, right;
+ }
+
+event bro_init()
+ {
+ Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $ev=line, $destination=servers]);
+ }
+
+event Input::end_of_data(name: string, source: string)
+ {
+ Input::remove("input");
+ terminate();
+ }
diff --git a/testing/btest/scripts/base/frameworks/input/reread.bro b/testing/btest/scripts/base/frameworks/input/reread.bro
index 4199093543..e4bb09df39 100644
--- a/testing/btest/scripts/base/frameworks/input/reread.bro
+++ b/testing/btest/scripts/base/frameworks/input/reread.bro
@@ -43,7 +43,7 @@ T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz
F -43 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
F -44 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
F -45 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
-F -46 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
+0 -46 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
F -47 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
F -48 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
@TEST-END-FILE
diff --git a/testing/btest/scripts/base/frameworks/netcontrol/delete-internal-state.bro b/testing/btest/scripts/base/frameworks/netcontrol/delete-internal-state.bro
index 9b8c995fac..29cb439a64 100644
--- a/testing/btest/scripts/base/frameworks/netcontrol/delete-internal-state.bro
+++ b/testing/btest/scripts/base/frameworks/netcontrol/delete-internal-state.bro
@@ -43,10 +43,10 @@ event dump_info()
event connection_established(c: connection)
{
local id = c$id;
- rules[|rules|] = NetControl::shunt_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 0secs);
- rules[|rules|] = NetControl::drop_address(id$orig_h, 0secs);
- rules[|rules|] = NetControl::whitelist_address(id$orig_h, 0secs);
- rules[|rules|] = NetControl::redirect_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 5, 0secs);
+ rules += NetControl::shunt_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 0secs);
+ rules += NetControl::drop_address(id$orig_h, 0secs);
+ rules += NetControl::whitelist_address(id$orig_h, 0secs);
+ rules += NetControl::redirect_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 5, 0secs);
schedule 1sec { remove_all() };
schedule 2sec { dump_info() };
diff --git a/testing/btest/scripts/base/frameworks/netcontrol/multiple.bro b/testing/btest/scripts/base/frameworks/netcontrol/multiple.bro
index 56a764f2e9..d56c8e2468 100644
--- a/testing/btest/scripts/base/frameworks/netcontrol/multiple.bro
+++ b/testing/btest/scripts/base/frameworks/netcontrol/multiple.bro
@@ -27,10 +27,10 @@ event remove_all()
event connection_established(c: connection)
{
local id = c$id;
- rules[|rules|] = NetControl::shunt_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 0secs);
- rules[|rules|] = NetControl::drop_address(id$orig_h, 0secs);
- rules[|rules|] = NetControl::whitelist_address(id$orig_h, 0secs);
- rules[|rules|] = NetControl::redirect_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 5, 0secs);
+ rules += NetControl::shunt_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 0secs);
+ rules += NetControl::drop_address(id$orig_h, 0secs);
+ rules += NetControl::whitelist_address(id$orig_h, 0secs);
+ rules += NetControl::redirect_flow([$src_h=id$orig_h, $src_p=id$orig_p, $dst_h=id$resp_h, $dst_p=id$resp_p], 5, 0secs);
schedule 1sec { remove_all() };
}
diff --git a/testing/btest/scripts/base/frameworks/sumstats/sample-cluster.bro b/testing/btest/scripts/base/frameworks/sumstats/sample-cluster.bro
index 6426b42680..52fce96dba 100644
--- a/testing/btest/scripts/base/frameworks/sumstats/sample-cluster.bro
+++ b/testing/btest/scripts/base/frameworks/sumstats/sample-cluster.bro
@@ -31,7 +31,7 @@ event bro_init() &priority=5
print fmt("Host: %s Sampled observations: %d", key$host, r$sample_elements);
local sample_nums: vector of count = vector();
for ( sample in r$samples )
- sample_nums[|sample_nums|] =r$samples[sample]$num;
+ sample_nums += r$samples[sample]$num;
print fmt(" %s", sort(sample_nums));
},
diff --git a/testing/coverage/Makefile b/testing/coverage/Makefile
new file mode 100644
index 0000000000..7f458a4f9c
--- /dev/null
+++ b/testing/coverage/Makefile
@@ -0,0 +1,12 @@
+coverage: cleanup
+ @./code_coverage.sh
+
+cleanup:
+ @rm -f coverage.log
+ @find ../../ -name "*.gcov" -exec rm {} \;
+
+distclean: cleanup
+ @find ../../ -name "*.gcno" -exec rm {} \;
+
+html:
+ @./lcov_html.sh $(COVERAGE_HTML_DIR)
diff --git a/testing/coverage/README b/testing/coverage/README
new file mode 100644
index 0000000000..d1352640f2
--- /dev/null
+++ b/testing/coverage/README
@@ -0,0 +1,21 @@
+On a Bro build configured with --enable-coverage, the scripts in this
+directory produce a code coverage report after Bro has been invoked. The
+intended application is after the btest test suite has run. This combination
+(btests first, coverage computation afterward) happens automatically when
+running "make" in the testing directory. The scripts put .gcov files (which
+are included in .gitignore) alongside the corresponding source files.
+
+This depends on gcov, which ships with gcc. If gcov is not installed,
+code_coverage.sh aborts with an error message.
+
+After running `make all` in the parent testing directory, run `make html` in
+this directory to generate the HTML report via lcov. By default, the HTML
+files are placed in a directory named "coverage-html" in the base
+directory. To set a custom name, use `make html
+COVERAGE_HTML_DIR=custom-dir-name`.
+
+The script code_coverage.sh is triggered by `make coverage` (included in `make`
+in /testing), and its goal is to automate code coverage testing.
+
+The script lcov_html.sh is triggered by `make html`, and its goal is to create
+html files from the aforementioned coverage data.
diff --git a/testing/coverage/code_coverage.sh b/testing/coverage/code_coverage.sh
new file mode 100755
index 0000000000..758b2fa915
--- /dev/null
+++ b/testing/coverage/code_coverage.sh
@@ -0,0 +1,146 @@
+#!/usr/bin/env bash
+#
+# On a Bro build configured with --enable-coverage, this script
+# produces a code coverage report after Bro has been invoked. The
+# intended application of this script is after the btest testsuite has
+# run. This combination (btests first, coverage computation afterward)
+# happens automatically when running "make" in the testing directory.
+#
+# This depends on gcov, which should come with your gcc.
+#
+# AUTOMATES CODE COVERAGE TESTING
+# 1. Run test suite
+# 2. Check for .gcda files existing.
+# 3a. Run gcov (-p to preserve path)
+# 3b. Prune .gcov files for objects outside of the Bro tree
+# 4a. Analyze .gcov files generated and create summary file
+# 4b. Send .gcov files to appropriate path
+#
+CURR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" # Location of script
+BASE="$( cd "$CURR" && cd ../../ && pwd )"
+TMP="${CURR}/tmp.$$"
+mkdir -p $TMP
+
+# DEFINE CLEANUP PROCESS
+function finish {
+ rm -rf $TMP
+}
+trap finish EXIT
+
+# DEFINE CRUCIAL FUNCTIONS FOR COVERAGE CHECKING
+function check_file_coverage {
+ GCOVDIR="$1"
+
+ for i in $GCOVDIR/*.gcov; do
+ # Effective # of lines: starts with a number (# of runs in line) or ##### (line never run)
+ TOTAL=$(cut -d: -f 1 "$i" | sed 's/ //g' | grep -v "^[[:alpha:]]" | grep -v "-" | wc -l)
+
+ # Count number of lines never run
+ UNRUN=$(grep "#####" "$i" | wc -l)
+
+ # Lines in code are either run or unrun
+ RUN=$(($TOTAL - $UNRUN))
+
+ # Avoid division-by-zero problems:
+ PERCENTAGE=0.000
+ [ $RUN -gt 0 ] && PERCENTAGE=$(bc <<< "scale=3; 100*$RUN/$TOTAL")
+
+ # Find correlation between % of lines run vs. "Runs"
+ echo -e "$PERCENTAGE\t$RUN\t$TOTAL\t$(grep "0:Runs" "$i" | sed 's/.*://')\t$i"
+ done
+}
+
+function check_group_coverage {
+ DATA="$1" # FILE CONTAINING COVERAGE DATA
+ SRC_FOLDER="$2" # WHERE BRO WAS COMPILED
+ OUTPUT="$3"
+
+ # Prints all the relevant directories
+ DIRS=$(for i in $(cut -f 5 "$DATA"); do basename "$i" | sed 's/#[^#]*$//'; done \
+ | sort | uniq | sed 's/^.*'"${SRC_FOLDER}"'//' | grep "^#s\+" )
+ # "Generalize" folders unless it's from analyzers
+ DIRS=$(for i in $DIRS; do
+ if !(echo "$i" | grep "src#analyzer"); then
+ echo "$i" | cut -d "#" -f 1,2,3
+ fi
+ done | sort | uniq )
+
+ for i in $DIRS; do
+ # For elements in #src, we only care about the files directly in the directory.
+ if [[ "$i" = "#src" ]]; then
+ RUN=$(echo $(grep "$i#[^#]\+$" $DATA | grep "$SRC_FOLDER$i\|build$i" | cut -f 2) | tr " " "+" | bc)
+ TOTAL=$(echo $(grep "$i#[^#]\+$" $DATA | grep "$SRC_FOLDER$i\|build$i" | cut -f 3) | tr " " "+" | bc)
+ else
+ RUN=$(echo $(grep "$i" $DATA | cut -f 2) | tr " " "+" | bc)
+ TOTAL=$(echo $(grep "$i" $DATA | cut -f 3) | tr " " "+" | bc)
+ fi
+
+ PERCENTAGE=$( echo "scale=3;100*$RUN/$TOTAL" | bc | tr "\n" " " )
+ printf "%-50s\t%12s\t%6s %%\n" "$i" "$RUN/$TOTAL" $PERCENTAGE \
+ | sed 's|#|/|g' >>$OUTPUT
+ done
+}
+
+# 1. Run test suite
+# SHOULD HAVE ALREADY BEEN RUN BEFORE THIS SCRIPT (BASED ON MAKEFILE TARGETS)
+
+# 2. Check for .gcno and .gcda file presence
+echo -n "Checking for coverage files... "
+for pat in gcda gcno; do
+ if [ -z "$(find "$BASE" -name "*.$pat" 2>/dev/null)" ]; then
+ echo "no .$pat files, nothing to do"
+ exit 0
+ fi
+done
+echo "ok"
+
+# 3a. Run gcov (-p to preserve path) and move into tmp directory
+# ... if system does not have gcov installed, exit with message.
+echo -n "Creating coverage files... "
+if which gcov > /dev/null 2>&1; then
+ ( cd "$TMP" && find "$BASE" -name "*.o" -exec gcov -p {} > /dev/null 2>&1 \; )
+ NUM_GCOVS=$(find "$TMP" -name "*.gcov" | wc -l)
+ if [ $NUM_GCOVS -eq 0 ]; then
+ echo "no gcov files produced, aborting"
+ exit 1
+ fi
+
+ # Account for '^' that occurs in macOS due to LLVM
+ # This character seems to be equivalent to ".." (up 1 dir)
+ for file in $(ls $TMP/*.gcov | grep '\^'); do
+ mv $file "$(sed 's/#[^#]*#\^//g' <<< "$file")"
+ done
+
+ echo "ok, $NUM_GCOVS coverage files"
+else
+ echo "gcov is not installed on system, aborting"
+ exit 1
+fi
+
+# 3b. Prune gcov files that fall outside of the Bro tree:
+# Look for files containing gcov's slash substitution character "#"
+# and remove any that don't contain the Bro path root.
+echo -n "Pruning out-of-tree coverage files... "
+PREFIX=$(echo "$BASE" | sed 's|/|#|g')
+for i in "$TMP"/*#*.gcov; do
+ if ! [[ "$i" = *$PREFIX* ]]; then
+ rm -f $i
+ fi
+done
+NUM_GCOVS=$(ls "$TMP"/*.gcov | wc -l)
+echo "ok, $NUM_GCOVS coverage files remain"
+
+# 4a. Analyze .gcov files generated and create summary file
+echo -n "Creating summary file... "
+DATA="${TMP}/data.txt"
+SUMMARY="$CURR/coverage.log"
+check_file_coverage "$TMP" > "$DATA"
+check_group_coverage "$DATA" ${BASE##*/} $SUMMARY
+echo "ok"
+
+# 4b. Send .gcov files to appropriate path
+echo -n "Sending coverage files to respective directories... "
+for i in "$TMP"/*#*.gcov; do
+ mv $i $(echo $(basename $i) | sed 's/#/\//g')
+done
+echo "ok"
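Step 4b above relies on gcov's `-p` naming convention: each `/` in a source path is encoded as `#` in the generated file name, and the script reverses that with sed. A standalone sketch (the path is hypothetical):

```shell
# gcov -p encodes '/' as '#' in output file names, so a .gcov file for
# /home/user/bro/src/main.cc is named "#home#user#bro#src#main.cc.gcov".
# code_coverage.sh restores the real path with the sed substitution below.
mangled="#home#user#bro#src#main.cc.gcov"

# basename is a no-op here (the mangled name contains no slashes),
# but mirrors what the script does before translating '#' back to '/'.
restored=$(echo "$(basename "$mangled")" | sed 's/#/\//g')
echo "$restored"
```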
diff --git a/testing/coverage/lcov_html.sh b/testing/coverage/lcov_html.sh
new file mode 100755
index 0000000000..c729b2145c
--- /dev/null
+++ b/testing/coverage/lcov_html.sh
@@ -0,0 +1,61 @@
+#!/usr/bin/env bash
+#
+# On a Bro build configured with --enable-coverage, this script
+# produces a code coverage report in HTML format after Bro has been invoked. The
+# intended application of this script is after the btest testsuite has run.
+
+# This depends on lcov to run.
+
+function die {
+ echo "$@"
+ exit 1
+}
+function finish {
+ rm -rf "$TMP"
+}
+function verify_run {
+ if bash -c "$1" > /dev/null 2>&1; then
+ echo ${2:-"ok"}
+ else
+ die ${3:-"error, abort"}
+ fi
+}
+trap finish EXIT
+
+TMP=".tmp.$$"
+COVERAGE_FILE="./$TMP/coverage.info"
+COVERAGE_HTML_DIR="${1:-"coverage-html"}"
+REMOVE_TARGETS="*.yy *.ll *.y *.l */bro.dir/* *.bif"
+
+# 1. Move to base dir, create tmp dir
+cd ../../;
+mkdir "$TMP"
+
+# 2. Check for .gcno and .gcda file presence
+echo -n "Checking for coverage files... "
+for pat in gcda gcno; do
+ if [ -z "$(find . -name "*.$pat" 2>/dev/null)" ]; then
+ echo "no .$pat files, nothing to do"
+ exit 0
+ fi
+done
+echo "ok"
+
+# 3. If lcov does not exist, abort process.
+echo -n "Checking for lcov... "
+verify_run "which lcov" \
+ "lcov installed on system, continue" \
+ "lcov not installed, abort"
+
+# 4. Create a "tracefile" through lcov, which is necessary to create html files later on.
+echo -n "Creating tracefile for html generation... "
+verify_run "lcov --no-external --capture --directory . --output-file $COVERAGE_FILE"
+
+for TARGET in $REMOVE_TARGETS; do
+ echo -n "Getting rid of $TARGET files from tracefile... "
+ verify_run "lcov --remove $COVERAGE_FILE $TARGET --output-file $COVERAGE_FILE"
+done
+
+# 5. Create HTML files.
+echo -n "Creating HTML files... "
+verify_run "genhtml -o $COVERAGE_HTML_DIR $COVERAGE_FILE"
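The verify_run helper in lcov_html.sh is a small, reusable idiom: run a command quietly, print a status word on success, or abort with a message on failure. A standalone sketch of the same pattern:

```shell
# Standalone sketch of the verify_run idiom used in lcov_html.sh above.
function die { echo "$@"; exit 1; }

function verify_run {
    # Run the command silently; report the optional success message
    # (default "ok") or die with the optional failure message.
    if bash -c "$1" > /dev/null 2>&1; then
        echo "${2:-ok}"
    else
        die "${3:-error, abort}"
    fi
}

ok_msg=$(verify_run "true" "command succeeded")
default_msg=$(verify_run "test 1 -eq 1")
echo "$ok_msg"
echo "$default_msg"
```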
diff --git a/testing/external/scripts/diff-all b/testing/external/scripts/diff-all
index e84416c088..d51f3b294f 100755
--- a/testing/external/scripts/diff-all
+++ b/testing/external/scripts/diff-all
@@ -22,7 +22,7 @@ files_cwd=`ls $@`
files_baseline=`cd $TEST_BASELINE && ls $@`
for i in `echo $files_cwd $files_baseline | sort | uniq`; do
- if [[ "$i" != "loaded_scripts.log" && "$i" != "prof.log" && "$i" != "debug.log" && "$i" != "stats.log" ]]; then
+ if [[ "$i" != "loaded_scripts.log" && "$i" != "prof.log" && "$i" != "debug.log" && "$i" != "stats.log" && "$i" != broker_*.log ]]; then
if [[ "$i" == "reporter.log" ]]; then
# Do not diff the reporter.log if it only complains about missing
diff --git a/testing/scripts/travis-job b/testing/scripts/travis-job
index c6221d76a2..01065900dd 100644
--- a/testing/scripts/travis-job
+++ b/testing/scripts/travis-job
@@ -2,73 +2,97 @@
#
# This script (along with the .travis.yml file) is used by Travis CI to
# build Bro and run the tests.
+#
+# This script can also be used outside of Travis (the "all" build step is
+# especially convenient in this case). Note that if you use this script
+# outside of Travis then you will need to fetch the private tests manually
+# (if you don't, then the private tests will be skipped).
+
+usage() {
+ echo "usage: $0 CMD DISTRO"
+ echo " CMD is a build step:"
+ echo " install: install prereqs"
+ echo " build: build bro"
+ echo " run: run the tests"
+ echo " all: do all of the above"
+ echo " DISTRO is a Linux distro, 'travis' to run without docker, or 'coverity' to run a coverity scan"
+}
if [ $# -ne 2 ]; then
- echo "usage: $0 CMD DISTRO"
- echo " CMD is a build step (install, build, or run)"
- echo " DISTRO is a Linux distro, or 'travis' to run in Travis without docker"
+ usage
exit 1
fi
step=$1
distro=$2
-# Build Bro with the coverity tools.
-build_coverity() {
- # Get the coverity tools
- set -e
+case $step in
+ install) ;;
+ build) ;;
+ run) ;;
+ all) ;;
+ *) echo "Error: unknown build step: $step"; usage; exit 1 ;;
+esac
- if [ -z "${COV_TOKEN}" ]; then
- echo "Error: COV_TOKEN is not defined (should be defined in environment variables section of Travis settings for this repo)"
- exit 1
- fi
+# Install the coverity tools.
+install_coverity() {
+ rm -rf coverity_tool.tgz coverity-tools cov-analysis*
+
+ echo "Downloading coverity tools..."
wget -nv https://scan.coverity.com/download/cxx/linux64 --post-data "token=${COV_TOKEN}&project=Bro" -O coverity_tool.tgz
tar xzf coverity_tool.tgz
- mv cov-analysis* coverity-tools
rm coverity_tool.tgz
+ mv cov-analysis* coverity-tools
+}
- # Configure Bro
- ./configure --prefix=`pwd`/build/root --enable-debug --disable-perftools
- # Build Bro with coverity tools
+# Build Bro with the coverity tools.
+build_coverity() {
+ # Clean up any previous build (this is really only necessary if running this
+ # outside of Travis).
+ make distclean > /dev/null
+
+ ./configure --prefix=`pwd`/build/root --enable-debug --disable-perftools --disable-broker-tests --disable-python --disable-broctl
+
export PATH=`pwd`/coverity-tools/bin:$PATH
cd build
cov-build --dir cov-int make -j 4
+ cd ..
}
+
# Create a tar file and send it to coverity.
run_coverity() {
- set -e
-
EMAIL=bro-commits-internal@bro.org
- FILE=myproject.bz2
+ FILE=myproject.tgz
VER=`cat VERSION`
DESC=`git rev-parse HEAD`
cd build
- tar cjf ${FILE} cov-int
+ echo "Creating tar file and sending to coverity..."
+ tar czf ${FILE} cov-int
curl --form token=${COV_TOKEN} --form email=${EMAIL} --form file=@${FILE} --form "version=${VER}" --form "description=${DESC}" https://scan.coverity.com/builds?project=Bro
}
-# Setup a docker container.
-setup_docker() {
+# Create a docker container, and install all packages needed to build Bro.
+install_in_docker() {
case $distro in
centos_7)
distro_cmds="yum -y install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel git openssl which"
;;
debian_9)
- distro_cmds="apt-get update; apt-get -y install cmake make gcc g++ flex bison python libpcap-dev libssl1.0-dev zlib1g-dev git sqlite3 curl bsdmainutils"
+ distro_cmds="apt-get update; apt-get -y install cmake make gcc g++ flex bison python libpcap-dev libssl-dev zlib1g-dev git sqlite3 curl bsdmainutils"
;;
fedora_28)
- distro_cmds="yum -y install cmake make gcc gcc-c++ flex bison libpcap-devel compat-openssl10-devel git sqlite findutils which; ln -s /usr/bin/python3 /usr/local/bin/python"
+ distro_cmds="yum -y install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel git sqlite findutils which; ln -s /usr/bin/python3 /usr/local/bin/python"
;;
ubuntu_16.04)
distro_cmds="apt-get update; apt-get -y install cmake make gcc g++ flex bison python libpcap-dev libssl-dev zlib1g-dev git sqlite3 curl bsdmainutils"
;;
ubuntu_18.04)
- distro_cmds="apt-get update; apt-get -y install cmake make gcc g++ flex bison python3 libpcap-dev libssl1.0-dev zlib1g-dev git sqlite3 curl bsdmainutils; ln -s /usr/bin/python3 /usr/local/bin/python"
+ distro_cmds="apt-get update; apt-get -y install cmake make gcc g++ flex bison python3 libpcap-dev libssl-dev zlib1g-dev git sqlite3 curl bsdmainutils; ln -s /usr/bin/python3 /usr/local/bin/python"
;;
*)
echo "Error: distro ${distro} is not recognized by this script"
@@ -83,20 +107,24 @@ setup_docker() {
# Build bro in a docker container.
-build_docker() {
- docker exec -e TRAVIS brotest sh testing/scripts/travis-job $step travis
+build_in_docker() {
+ docker exec brotest sh testing/scripts/travis-job build travis
}
# Run Bro tests in a docker container.
-run_docker() {
+run_in_docker() {
prepare_env
- docker exec -t -e TRAVIS -e TRAVIS_PULL_REQUEST -e trav_key -e trav_iv brotest sh testing/scripts/travis-job $step travis
+ docker exec -t -e TRAVIS -e TRAVIS_PULL_REQUEST -e trav_key -e trav_iv brotest sh testing/scripts/travis-job run travis
}
# Build Bro.
build() {
+ # Clean up any previous build (this is really only necessary if running this
+ # outside of Travis).
+ make distclean > /dev/null
+
# Skip building broker tests, python bindings, and broctl, as these are
# not needed by the bro tests.
./configure --build-type=Release --disable-broker-tests --disable-python --disable-broctl && make -j 2
@@ -107,6 +135,9 @@ build() {
# hard-coded multiple times in this script.
prepare_env() {
if [ -z "$trav_key" ]; then
+ # This hash value is found by logging into the Travis CI website,
+ # and looking at the settings in the bro repo (look in the
+ # "Environment Variables" section).
hash=6a6fe747ff7b
eval "trav_key=\$encrypted_${hash}_key"
eval "trav_iv=\$encrypted_${hash}_iv"
@@ -117,27 +148,15 @@ prepare_env() {
}
-# Run Bro tests.
-run() {
- echo
- echo "Running unit tests ##################################################"
- echo
- cd testing/btest
- # Must specify a value for "-j" option, otherwise Travis uses a huge value.
- ../../aux/btest/btest -j 4 -d
- ret=$?
-
- echo
- echo "Getting external tests ##############################################"
- echo
- cd ../external
-
- set -e
-
- make init
+# Get the private tests.
+get_private_tests() {
prepare_env
- if [ -n "$trav_key" ] && [ -n "$trav_iv" ]; then
+ if [ "${TRAVIS}" != "true" ]; then
+ # When not running in the Travis environment, just skip trying to get
+ # the private tests.
+ echo "Note: skipping private tests (to run them, do a git clone of the private testing repo in the 'testing/external' directory before running this script)."
+ elif [ -n "$trav_key" ] && [ -n "$trav_iv" ]; then
curl https://www.bro.org/static/travis-ci/travis_key.enc -o travis_key.enc
openssl aes-256-cbc -K $trav_key -iv $trav_iv -in travis_key.enc -out travis_key -d
chmod 600 travis_key
@@ -149,11 +168,39 @@ run() {
elif [ -n "${TRAVIS_PULL_REQUEST}" ] && [ "${TRAVIS_PULL_REQUEST}" != "false" ]; then
# For pull request builds, the private key is not available, so skip
# the private tests to avoid failing.
- echo "Note: skipping private tests because encrypted env. variables are not defined."
+ echo "Note: skipping private tests because encrypted env. variables are not available in pull request builds."
else
echo "Error: cannot get private tests because encrypted env. variables are not defined."
exit 1
fi
+}
+
+
+# Run Bro tests.
+run() {
+ echo
+ echo "Running unit tests ##################################################"
+ echo
+ cd testing/btest
+
+ set +e
+ # Must specify a value for "-j" option, otherwise Travis uses a huge value.
+ ../../aux/btest/btest -j 4 -d
+ ret=$?
+ set -e
+
+ echo
+ echo "Getting external tests ##############################################"
+ echo
+ cd ../external
+
+ if [ ! -d bro-testing ]; then
+ make init
+ fi
+
+ if [ ! -d bro-testing-private ]; then
+ get_private_tests
+ fi
echo
echo "Running external tests ##############################################"
@@ -164,9 +211,8 @@ run() {
exit $ret
}
-# Output the contents of diag.log when a test fails.
+# Show failed tests (not skipped tests) from diag.log when a test fails.
showdiag() {
- # Show failed tests only, not skipped tests.
f=bro-testing/diag.log
grep -qs '... failed$' $f && \
@@ -178,18 +224,22 @@ showdiag() {
exit 1
}
-if [ "$step" != "install" ] && [ "$step" != "build" ] && [ "$step" != "run" ]; then
- echo "Error: unknown build step: $step"
+# Remove the docker container.
+remove_container() {
+ echo "Removing the docker container..."
+ docker rm -f brotest > /dev/null
+}
+
+
+if [ ! -f testing/scripts/travis-job ]; then
+ echo "Error: must change directory to root of bro source tree before running this script."
exit 1
fi
-if [ "${TRAVIS}" != "true" ]; then
- echo "$0: this script is intended for Travis CI"
- exit 1
-fi
+set -e
if [ "${TRAVIS_EVENT_TYPE}" = "cron" ]; then
- # Run the coverity scan from a Travis CI cron job.
+ # This is a Travis CI cron job, so check the job number.
# Extract second component of the job number.
if [ -z "${TRAVIS_JOB_NUMBER}" ]; then
@@ -204,14 +254,35 @@ if [ "${TRAVIS_EVENT_TYPE}" = "cron" ]; then
echo "Coverity scan is performed only in the first job of this build"
exit 0
fi
+fi
- # This is split up into two steps because the build outputs thousands of
- # lines (which are conveniently collapsed into a single line in the
- # "Job log" on the Travis CI web site).
- if [ "$step" = "build" ]; then
+
+if [ "${TRAVIS_EVENT_TYPE}" = "cron" ] || [ "$distro" = "coverity" ]; then
+ # Run coverity scan when this script is run from a Travis cron job, or
+ # if the user specifies the "coverity" distro.
+
+ # Check if the project token is available (this is a secret value and
+ # should not be hard-coded in this script). This value can be found by
+ # logging into the coverity scan web site and looking in the project
+ # settings.
+ if [ -z "${COV_TOKEN}" ]; then
+ echo "Error: COV_TOKEN is not defined (should be defined in environment variables section of Travis settings for this repo)"
+ exit 1
+ fi
+
+ # The "build" and "run" steps are split up into separate steps because the
+ # build outputs thousands of lines (which are conveniently collapsed into
+ # a single line when viewing the "Job log" on the Travis CI web site).
+ if [ "$step" = "install" ]; then
+ install_coverity
+ elif [ "$step" = "build" ]; then
build_coverity
elif [ "$step" = "run" ]; then
run_coverity
+ elif [ "$step" = "all" ]; then
+ install_coverity
+ build_coverity
+ run_coverity
fi
elif [ "$distro" = "travis" ]; then
# Build bro and run tests.
@@ -223,15 +294,24 @@ elif [ "$distro" = "travis" ]; then
build
elif [ "$step" = "run" ]; then
run
+ elif [ "$step" = "all" ]; then
+ build
+ run
fi
else
# Build bro and run tests in a docker container.
if [ "$step" = "install" ]; then
- setup_docker
+ install_in_docker
elif [ "$step" = "build" ]; then
- build_docker
+ build_in_docker
elif [ "$step" = "run" ]; then
- run_docker
+ run_in_docker
+ elif [ "$step" = "all" ]; then
+ install_in_docker
+ build_in_docker
+ run_in_docker
+ # If all tests pass, then remove the docker container.
+ remove_container
fi
fi
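The travis-job changes add an up-front `case` check that rejects anything other than the four known build steps. A standalone sketch of that validation pattern (the function name validate_step is hypothetical; the script inlines this check):

```shell
# Sketch of the build-step validation the updated travis-job performs:
# a case statement that accepts only install, build, run, or all.
validate_step() {
    case "$1" in
        install|build|run|all) echo "ok" ;;
        *) echo "unknown step: $1" ;;
    esac
}

validate_step all
validate_step frobnicate
```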