Mirror of https://github.com/zeek/zeek.git (synced 2025-10-02 06:38:20 +00:00)
Merge remote-tracking branch 'origin/master' into topic/dnthayer/doc-fixes-for-2.6
This commit is contained in:
commit
9291fef6d2
128 changed files with 1729 additions and 355 deletions
.gitignore (vendored, 1 line changed)

@@ -1,2 +1,3 @@
 build
 tmp
+*.gcov
CHANGES (86 lines changed)

@@ -1,4 +1,90 @@

2.5-842 | 2018-08-15 11:00:20 -0500

  * Fix seg fault on trying to type-cast invalid/nil Broker::Data
    (Jon Siwek, Corelight)

2.5-841 | 2018-08-14 16:45:09 -0500

  * BIT-1798: fix PPTP GRE tunnel decapsulation (Jon Siwek, Corelight)

2.5-840 | 2018-08-13 17:40:06 -0500

  * Fix SumStats::observe key normalization logic
    (reported by Jim Mellander and fixed by Jon Siwek, Corelight)

2.5-839 | 2018-08-13 10:51:43 -0500

  * Make options redef-able by default. (Johanna Amann, Corelight)

  * Fix incorrect input framework warnings when parsing ports.
    (Johanna Amann, Corelight)

  * Allow input framework to accept 0 and 1 as valid boolean values.
    (Johanna Amann, Corelight)

  * Improve the travis-job script to work outside of Travis (Daniel Thayer)

  * Fix validate-certs.bro comments (Jon Siwek, Corelight)

2.5-831 | 2018-08-10 17:12:53 -0500

  * Immediately apply broker subscriptions made during bro_init()
    (Jon Siwek, Corelight)

  * Update default broker threading configuration to use 4 threads and allow
    tuning via BRO_BROKER_MAX_THREADS env. variable (Jon Siwek, Corelight)

  * Misc. unit test improvements (Jon Siwek, Corelight)

2.5-826 | 2018-08-08 13:09:27 -0700

  * Add support for code coverage statistics for bro source files after
    running the btest test suite

    This adds an --enable-coverage flag to configure Bro with gcov.
    A new directory named /testing/coverage/ contains a new
    coverage target. By default a coverage.log is created; running
    "make html" in testing/coverage creates an HTML report.
    (Chung Min Kim, Corelight)

2.5-819 | 2018-08-08 13:03:22 -0500

  * Fix cluster layout graphic and doc warnings (Jon Siwek, Corelight)

  * Added missing tcp-state for signature dpd_rfb_server (Zhongjie Wang)

2.5-815 | 2018-08-06 17:07:56 -0500

  * Fix an "uninitialized" compiler warning (Jon Siwek, Corelight)

  * Fix (non)suppression of proxy-bound events in known-*.bro scripts
    (Jon Siwek, Corelight)

2.5-811 | 2018-08-03 11:33:57 -0500

  * Update scripts to use vector "+=" append operation (Vern Paxson, Corelight)

  * Add vector "+=" append operation (Vern Paxson, Corelight)

  * Improve a travis output message in pull request builds (Daniel Thayer)

  * Use default version of OpenSSL on all travis docker containers
    (Daniel Thayer)

2.5-802 | 2018-08-02 10:40:36 -0500

  * Add set operations: union, intersection, difference, comparison
    (Vern Paxson, Corelight)

2.5-796 | 2018-08-01 16:31:25 -0500

  * Add 'W' connection history indicator for zero windows
    (Vern Paxson, Corelight)

  * Allow logarithmic 'T'/'C'/'W' connection history repetitions, which
    also now raise their own events (Vern Paxson, Corelight)

2.5-792 | 2018-08-01 12:15:31 -0500

  * fix NTLM NegotiateFlags field offsets (Jeffrey Bencteux)
NEWS (33 lines changed)
@@ -283,6 +283,39 @@ New Functionality

- Bro now supports OpenSSL 1.1.

- The new connection/conn.log history character 'W' indicates that
  the originator ('w' = responder) advertised a TCP zero window
  (instructing the peer to not send any data until receiving a
  non-zero window).

- The connection/conn.log history characters 'C' (checksum error seen),
  'T' (retransmission seen), and 'W' (zero window advertised) are now
  repeated in a logarithmic fashion upon seeing multiple instances
  of the corresponding behavior. Thus a connection with 2 C's in its
  history means that the originator sent >= 10 packets with checksum
  errors; 3 C's means >= 100, etc.

- The above connection history behaviors occurring multiple times
  (i.e., starting at 10 instances, then again for 100 instances,
  etc.) generate corresponding events: tcp_multiple_checksum_errors,
  udp_multiple_checksum_errors, tcp_multiple_zero_windows, and
  tcp_multiple_retransmissions. Each has the same form, e.g.

      event tcp_multiple_retransmissions(c: connection, is_orig: bool,
                                         threshold: count);
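A minimal sketch of hooking one of the new thresholded history events; the event signature is as given above, while the message format is illustrative:

```bro
event tcp_multiple_retransmissions(c: connection, is_orig: bool, threshold: count)
	{
	# Raised when the endpoint crosses 10, 100, 1000, ... retransmissions.
	print fmt("%s: %s has retransmitted >= %d times", c$uid,
	          is_orig ? "originator" : "responder", threshold);
	}
```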
- Added support for set union, intersection, difference, and comparison
  operations. The corresponding operators for the first three are
  "s1 | s2", "s1 & s2", and "s1 - s2". Relationals are in terms
  of subsets, so "s1 < s2" yields true if s1 is a proper subset of s2
  and "s1 == s2" if the two sets have exactly the same elements.
  "s1 <= s2" holds for subsets or equality, and similarly "s1 != s2",
  "s1 > s2", and "s1 >= s2" have the expected meanings in terms
  of non-equality, proper superset, and superset-or-equal.

- An expression of the form "v += e" will append the value of the expression
  "e" to the end of the vector "v" (of course assuming type-compatibility).

Changed Functionality
---------------------
VERSION (2 lines changed)

@@ -1 +1 @@
-2.5-792
+2.5-842
@@ -1 +1 @@
-Subproject commit f648ad79b20baba4f80259d059044ae78d56d7c4
+Subproject commit e99152c00aad8f81c684a01bc4d40790a295f85c

@@ -1 +1 @@
-Subproject commit 7e6b47ee90c5b6ab80b0e6f93d5cf835fd86ce4e
+Subproject commit 74cf55ace0de2bf061bbbf285ccf47cba122955f

@@ -1 +1 @@
-Subproject commit 1b27ec4c24bb13443f1a5e8f4249ff4e20e06dd1
+Subproject commit 53aae820242c02790089e384a9fe2d3174799ab1

@@ -1 +1 @@
-Subproject commit 80a4aa68927c2f60ece1200268106edc27f50338
+Subproject commit edf754ea6e89a84ad74eff69a454c5e285c4b81b

@@ -1 +1 @@
-Subproject commit d900149ef6e2f744599a9575b67fb7155953bd4a
+Subproject commit 70a8b2e15105f4c238765a882151718162e46208

@@ -1 +1 @@
-Subproject commit 488dc806d0f777dcdee04f3794f04121ded08b8e
+Subproject commit e0f9f6504db9285a48e0be490abddf959999a404
cmake (2 lines changed)

@@ -1 +1 @@
-Subproject commit ec66fd8fbd81fd8b25ced0214016f4c89604a15c
+Subproject commit 4cc3e344cf2698010a46684d32a2907a943430e3
configure (vendored, 7 lines changed)

@@ -45,6 +45,7 @@ Usage: $0 [OPTION]... [VAR=VALUE]...

 Optional Features:
   --enable-debug          compile in debugging mode (like --build-type=Debug)
+  --enable-coverage       compile with code coverage support (implies debugging mode)
   --enable-mobile-ipv6    analyze mobile IPv6 features defined by RFC 6275
   --enable-perftools      force use of Google perftools on non-Linux systems
                           (automatically on when perftools is present on Linux)

@@ -141,6 +142,8 @@ append_cache_entry INSTALL_BROCTL BOOL true
 append_cache_entry CPACK_SOURCE_IGNORE_FILES STRING
 append_cache_entry ENABLE_MOBILE_IPV6 BOOL false
 append_cache_entry DISABLE_PERFTOOLS BOOL false
 append_cache_entry DISABLE_RUBY_BINDINGS BOOL true
+append_cache_entry ENABLE_COVERAGE BOOL false

 # parse arguments
 while [ $# -ne 0 ]; do

@@ -196,6 +199,10 @@ while [ $# -ne 0 ]; do
         --logdir=*)
             append_cache_entry BRO_LOG_DIR PATH $optarg
             ;;
+        --enable-coverage)
+            append_cache_entry ENABLE_COVERAGE BOOL true
+            append_cache_entry ENABLE_DEBUG BOOL true
+            ;;
        --enable-debug)
            append_cache_entry ENABLE_DEBUG BOOL true
            ;;
Binary file not shown.
Before Width: | Height: | Size: 52 KiB  After Width: | Height: | Size: 55 KiB

@@ -1,2 +1,2 @@
[draw.io XML source for the cluster layout diagram: the embedded base64-encoded diagram payload and editor metadata were regenerated; the full payload is not reproduced here.]
@@ -169,7 +169,7 @@ History

 Bro's history goes back much further than many people realize. `Vern
 Paxson <http://www.icir.org/vern>`_ designed and implemented the
-initial version almost two decades ago.
+initial version more than two decades ago.
 Vern began work on the code in 1995 as a researcher at the `Lawrence
 Berkeley National Laboratory (LBNL) <http://www.lbl.gov>`_. Berkeley
 Lab began operational deployment in 1996, and the USENIX Security
@@ -544,6 +544,15 @@ Here is a more detailed description of each type:

|s|

You can compute the union, intersection, or difference of two sets
using the ``|``, ``&``, and ``-`` operators. You can compare
sets for equality (they have exactly the same elements) using ``==``.
The ``<`` operator returns ``T`` if the lefthand operand is a proper
subset of the righthand operand. Similarly, ``<=`` returns ``T``
if the lefthand operand is a subset (not necessarily proper, i.e.,
it may be equal to the righthand operand). The operators ``!=``, ``>``
and ``>=`` provide the expected complementary operations.

See the :bro:keyword:`for` statement for info on how to iterate over
the elements in a set.
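The set operators described above can be exercised in a short script; a sketch (the element values are illustrative):

```bro
event bro_init()
	{
	local s1 = set(1, 2, 3);
	local s2 = set(2, 3, 4);
	print s1 | s2;   # the union of the two sets
	print s1 & s2;   # the intersection
	print s1 - s2;   # elements of s1 not in s2
	print s1 < s2;   # proper-subset test: F here, since 1 is not in s2
	print s1 == s1;  # equality test: T
	}
```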
@@ -599,6 +608,20 @@ Here is a more detailed description of each type:

|v|

A particularly common operation on a vector is to append an element
to its end. You can do so using:

.. code:: bro

    v += e;

where if e's type is ``X``, v's type is ``vector of X``. Note that
this expression is equivalent to:

.. code:: bro

    v[|v|] = e;

Vectors of integral types (``int`` or ``count``) support the pre-increment
(``++``) and pre-decrement operators (``--``), which will increment or
decrement each element in the vector.
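Taken together, append and element-wise increment look like the following sketch (semantics as described in the documentation text above):

```bro
event bro_init()
	{
	local v: vector of count = vector(1, 2, 3);
	v += 4;    # append: equivalent to v[|v|] = 4
	++v;       # pre-increment applies to each element of the vector
	print v;
	}
```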
@@ -3,10 +3,10 @@ event bro_init()
     local v1: vector of count;
     local v2 = vector(1, 2, 3, 4);

-    v1[|v1|] = 1;
-    v1[|v1|] = 2;
-    v1[|v1|] = 3;
-    v1[|v1|] = 4;
+    v1 += 1;
+    v1 += 2;
+    v1 += 3;
+    v1 += 4;

     print fmt("contents of v1: %s", v1);
     print fmt("length of v1: %d", |v1|);
@@ -126,7 +126,7 @@ event pe_section_header(f: fa_file, h: PE::SectionHeader) &priority=5

     if ( ! f$pe?$section_names )
         f$pe$section_names = vector();
-    f$pe$section_names[|f$pe$section_names|] = h$name;
+    f$pe$section_names += h$name;
     }

 event file_state_remove(f: fa_file) &priority=-5
@@ -66,7 +66,7 @@ event x509_certificate(f: fa_file, cert_ref: opaque of x509, cert: X509::Certifi

 event x509_extension(f: fa_file, ext: X509::Extension) &priority=5
     {
     if ( f$info?$x509 )
-        f$info$x509$extensions[|f$info$x509$extensions|] = ext;
+        f$info$x509$extensions += ext;
     }

 event x509_ext_basic_constraints(f: fa_file, ext: X509::BasicConstraints) &priority=5
@@ -56,9 +56,11 @@ export {
     ## control mechanisms).
     const congestion_queue_size = 200 &redef;

-    ## Max number of threads to use for Broker/CAF functionality.
-    ## Using zero will cause this to be automatically determined
-    ## based on number of available CPUs.
+    ## Max number of threads to use for Broker/CAF functionality. Setting to
+    ## zero implies using the value of BRO_BROKER_MAX_THREADS environment
+    ## variable, if set, or else typically defaults to 4 (actually 2 threads
+    ## when simply reading offline pcaps as there's not expected to be any
+    ## communication and more threads just adds more overhead).
     const max_threads = 0 &redef;

     ## Max number of microseconds for under-utilized Broker/CAF
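A site that wants a fixed thread count rather than the environment-driven default can override the option; a minimal sketch (the value 8 is arbitrary):

```bro
# Pin the Broker/CAF thread pool instead of relying on the default of 4
# or the BRO_BROKER_MAX_THREADS environment variable.
redef Broker::max_threads = 8;
```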
@@ -259,7 +261,8 @@ export {
     global publish_id: function(topic: string, id: string): bool;

     ## Register interest in all peer event messages that use a certain topic
-    ## prefix.
+    ## prefix. Note that subscriptions may not be altered immediately after
+    ## calling (except during :bro:see:`bro_init`).
     ##
     ## topic_prefix: a prefix to match against remote message topics.
     ##               e.g. an empty prefix matches everything and "a" matches

@@ -269,6 +272,8 @@ export {
     global subscribe: function(topic_prefix: string): bool;

     ## Unregister interest in all peer event messages that use a topic prefix.
+    ## Note that subscriptions may not be altered immediately after calling
+    ## (except during :bro:see:`bro_init`).
     ##
     ## topic_prefix: a prefix previously supplied to a successful call to
     ##               :bro:see:`Broker::subscribe`.
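Since subscriptions made during bro_init() now take effect immediately, that event is the natural place to register them; a sketch with an illustrative topic prefix:

```bro
event bro_init()
	{
	# Applied immediately per this change; outside bro_init() the
	# subscription may not take effect right away.
	Broker::subscribe("bro/events/my_topic/");
	}
```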
@@ -251,7 +251,7 @@ function nodes_with_type(node_type: NodeType): vector of NamedNode

     local names: vector of string = vector();

     for ( name in Cluster::nodes )
-        names[|names|] = name;
+        names += name;

     names = sort(names, strcmp);

@@ -263,7 +263,7 @@ function nodes_with_type(node_type: NodeType): vector of NamedNode

         if ( n$node_type != node_type )
             next;

-        rval[|rval|] = NamedNode($name=name, $node=n);
+        rval += NamedNode($name=name, $node=n);
         }

     return rval;
@@ -157,7 +157,7 @@ global registered_pools: vector of Pool = vector();

 function register_pool(spec: PoolSpec): Pool
     {
     local rval = Pool($spec = spec);
-    registered_pools[|registered_pools|] = rval;
+    registered_pools += rval;
     return rval;
     }

@@ -276,7 +276,7 @@ function init_pool_node(pool: Pool, name: string): bool

     local pn = PoolNode($name=name, $alias=alias, $site_id=site_id,
                         $alive=Cluster::node == name);
     pool$nodes[name] = pn;
-    pool$node_list[|pool$node_list|] = pn;
+    pool$node_list += pn;

     if ( pn$alive )
         ++pool$alive_count;
@@ -366,7 +366,7 @@ event bro_init() &priority=-5

     if ( |mgr| > 0 )
         {
         local eln = pool_eligibility[Cluster::LOGGER]$eligible_nodes;
-        eln[|eln|] = mgr[0];
+        eln += mgr[0];
         }
     }

@@ -423,7 +423,7 @@ event bro_init() &priority=-5

         if ( j < e )
             next;

-        nen[|nen|] = pet$eligible_nodes[j];
+        nen += pet$eligible_nodes[j];
         }

     pet$eligible_nodes = nen;
@@ -120,14 +120,14 @@ function format_value(value: any) : string

     {
     local it: set[bool] = value;
     for ( sv in it )
-        part[|part|] = cat(sv);
+        part += cat(sv);
     return join_string_vec(part, ",");
     }
 else if ( /^vector/ in tn )
     {
     local vit: vector of any = value;
     for ( i in vit )
-        part[|part|] = cat(vit[i]);
+        part += cat(vit[i]);
     return join_string_vec(part, ",");
     }
 else if ( tn == "string" )
@@ -555,19 +555,19 @@ function quarantine_host(infected: addr, dns: addr, quarantine: addr, t: interva

     local orules: vector of string = vector();
     local edrop: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected))];
     local rdrop: Rule = [$ty=DROP, $target=FORWARD, $entity=edrop, $expire=t, $location=location];
-    orules[|orules|] = add_rule(rdrop);
+    orules += add_rule(rdrop);

     local todnse: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected), $dst_h=addr_to_subnet(dns), $dst_p=53/udp)];
     local todnsr = Rule($ty=MODIFY, $target=FORWARD, $entity=todnse, $expire=t, $location=location, $mod=FlowMod($dst_h=quarantine), $priority=+5);
-    orules[|orules|] = add_rule(todnsr);
+    orules += add_rule(todnsr);

     local fromdnse: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(dns), $src_p=53/udp, $dst_h=addr_to_subnet(infected))];
     local fromdnsr = Rule($ty=MODIFY, $target=FORWARD, $entity=fromdnse, $expire=t, $location=location, $mod=FlowMod($src_h=dns), $priority=+5);
-    orules[|orules|] = add_rule(fromdnsr);
+    orules += add_rule(fromdnsr);

     local wle: Entity = [$ty=FLOW, $flow=Flow($src_h=addr_to_subnet(infected), $dst_h=addr_to_subnet(quarantine), $dst_p=80/tcp)];
     local wlr = Rule($ty=WHITELIST, $target=FORWARD, $entity=wle, $expire=t, $location=location, $priority=+5);
-    orules[|orules|] = add_rule(wlr);
+    orules += add_rule(wlr);

     return orules;
     }

@@ -637,7 +637,7 @@ event NetControl::init() &priority=-20

 function activate_impl(p: PluginState, priority: int)
     {
     p$_priority = priority;
-    plugins[|plugins|] = p;
+    plugins += p;
     sort(plugins, function(p1: PluginState, p2: PluginState) : int { return p2$_priority - p1$_priority; });

     plugin_ids[plugin_counter] = p;
@@ -734,7 +734,7 @@ function find_rules_subnet(sn: subnet) : vector of Rule

     for ( rule_id in rules_by_subnets[sn_entry] )
         {
         if ( rule_id in rules )
-            ret[|ret|] = rules[rule_id];
+            ret += rules[rule_id];
         else
             Reporter::error("find_rules_subnet - internal data structure error, missing rule");
         }
@@ -158,17 +158,17 @@ function entity_to_match(p: PluginState, e: Entity): vector of OpenFlow::ofp_mat

     if ( e$ty == CONNECTION )
         {
-        v[|v|] = OpenFlow::match_conn(e$conn); # forward and...
-        v[|v|] = OpenFlow::match_conn(e$conn, T); # reverse
+        v += OpenFlow::match_conn(e$conn); # forward and...
+        v += OpenFlow::match_conn(e$conn, T); # reverse
         return openflow_match_pred(p, e, v);
         }

     if ( e$ty == MAC )
         {
-        v[|v|] = OpenFlow::ofp_match(
+        v += OpenFlow::ofp_match(
             $dl_src=e$mac
             );
-        v[|v|] = OpenFlow::ofp_match(
+        v += OpenFlow::ofp_match(
             $dl_dst=e$mac
             );

@@ -182,12 +182,12 @@ function entity_to_match(p: PluginState, e: Entity): vector of OpenFlow::ofp_mat

     if ( is_v6_subnet(e$ip) )
         dl_type = OpenFlow::ETH_IPv6;

-    v[|v|] = OpenFlow::ofp_match(
+    v += OpenFlow::ofp_match(
         $dl_type=dl_type,
         $nw_src=e$ip
         );

-    v[|v|] = OpenFlow::ofp_match(
+    v += OpenFlow::ofp_match(
         $dl_type=dl_type,
         $nw_dst=e$ip
         );

@@ -231,7 +231,7 @@ function entity_to_match(p: PluginState, e: Entity): vector of OpenFlow::ofp_mat

         m$tp_dst = port_to_count(f$dst_p);
         }

-    v[|v|] = m;
+    v += m;

     return openflow_match_pred(p, e, v);
     }
@@ -88,7 +88,7 @@ function ryu_flow_mod(state: OpenFlow::ControllerState, match: ofp_match, flow_m

     local flow_actions: vector of ryu_flow_action = vector();

     for ( i in flow_mod$actions$out_ports )
-        flow_actions[|flow_actions|] = ryu_flow_action($_type="OUTPUT", $_port=flow_mod$actions$out_ports[i]);
+        flow_actions += ryu_flow_action($_type="OUTPUT", $_port=flow_mod$actions$out_ports[i]);

     # Generate our ryu_flow_mod record for the ReST API call.
     local mod: ryu_ofp_flow_mod = ryu_ofp_flow_mod(
@@ -267,7 +267,7 @@ function add_observe_plugin_dependency(calc: Calculation, depends_on: Calculatio

     {
     if ( calc !in calc_deps )
         calc_deps[calc] = vector();
-    calc_deps[calc][|calc_deps[calc]|] = depends_on;
+    calc_deps[calc] += depends_on;
     }

 event bro_init() &priority=100000

@@ -348,7 +348,7 @@ function add_calc_deps(calcs: vector of Calculation, c: Calculation)

     {
     if ( calc_deps[c][i] in calc_deps )
         add_calc_deps(calcs, calc_deps[c][i]);
-    calcs[|c|] = calc_deps[c][i];
+    calcs += calc_deps[c][i];
     #print fmt("add dep for %s [%s] ", c, calc_deps[c][i]);
     }
 }

@@ -387,7 +387,7 @@ function create(ss: SumStat)

         skip_calc=T;
         }
     if ( ! skip_calc )
-        reducer$calc_funcs[|reducer$calc_funcs|] = calc;
+        reducer$calc_funcs += calc;
     }

 if ( reducer$stream !in reducer_store )
@@ -399,7 +399,7 @@ function create(ss: SumStat)

     schedule ss$epoch { SumStats::finish_epoch(ss) };
     }

-function observe(id: string, key: Key, obs: Observation)
+function observe(id: string, orig_key: Key, obs: Observation)
     {
     if ( id !in reducer_store )
         return;

@@ -407,8 +407,7 @@ function observe(id: string, key: Key, obs: Observation)

     # Try to add the data to all of the defined reducers.
     for ( r in reducer_store[id] )
         {
-        if ( r?$normalize_key )
-            key = r$normalize_key(copy(key));
+        local key = r?$normalize_key ? r$normalize_key(copy(orig_key)) : orig_key;

         # If this reducer has a predicate, run the predicate
         # and skip this key if the predicate return false.
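The bug fixed here was that one reducer's normalized key carried over into the next reducer's iteration; each reducer now derives its key from the caller's original key. A sketch of a reducer that supplies a normalize_key function (the stream name and the normalization itself are illustrative, and it assumes string-valued keys):

```bro
local r1 = SumStats::Reducer($stream="example.stream",
                             $apply=set(SumStats::SUM),
                             $normalize_key=function(key: SumStats::Key): SumStats::Key
                                 {
                                 # Fold case so "Host.A" and "host.a" aggregate together.
                                 return SumStats::Key($str=to_lower(key$str));
                                 });
```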
@@ -11,7 +11,7 @@ event SumStats::process_epoch_result(ss: SumStat, now: time, data: ResultTable)

     for ( key in data )
         {
         ss$epoch_result(now, key, data[key]);
-        keys_to_delete[|keys_to_delete|] = key;
+        keys_to_delete += key;

         if ( --i == 0 )
             break;
@@ -43,7 +43,7 @@ function sample_add_sample(obs:Observation, rv: ResultVal)

     ++rv$sample_elements;

     if ( |rv$samples| < rv$num_samples )
-        rv$samples[|rv$samples|] = obs;
+        rv$samples += obs;
     else
         {
         local ra = rand(rv$sample_elements);
@@ -872,7 +872,7 @@ type geo_location: record {

     longitude: double &optional; ##< Longitude.
 } &log;

-## The directory containing MaxMind DB (*.mmdb) files to use for GeoIP support.
+## The directory containing MaxMind DB (.mmdb) files to use for GeoIP support.
 const mmdb_dir: string = "" &redef;

 ## Computed entropy values. The record captures a number of measures that are
@@ -86,8 +86,9 @@ export {
     ##     d  packet with payload ("data")
     ##     f  packet with FIN bit set
     ##     r  packet with RST bit set
-    ##     c  packet with a bad checksum
+    ##     c  packet with a bad checksum (applies to UDP too)
     ##     t  packet with retransmitted payload
+    ##     w  packet with a zero window advertisement
     ##     i  inconsistent packet (e.g. FIN+RST bits set)
     ##     q  multi-flag packet (SYN+FIN or SYN+RST bits set)
     ##     ^  connection direction was flipped by Bro's heuristic

@@ -95,12 +96,15 @@ export {
     ##
     ## If the event comes from the originator, the letter is in
     ## upper-case; if it comes from the responder, it's in
-    ## lower-case. The 'a', 'c', 'd', 'i', 'q', and 't' flags are
+    ## lower-case. The 'a', 'd', 'i' and 'q' flags are
     ## recorded a maximum of one time in either direction regardless
-    ## of how many are actually seen. However, 'f', 'h', 'r', or
-    ## 's' may be recorded multiple times for either direction and
-    ## only compressed when sharing a sequence number with the
+    ## of how many are actually seen. 'f', 'h', 'r' and
+    ## 's' can be recorded multiple times for either direction
+    ## if the associated sequence number differs from the
     ## last-seen packet of the same flag type.
+    ## 'c', 't' and 'w' are recorded in a logarithmic fashion:
+    ## the second instance represents that the event was seen
+    ## (at least) 10 times; the third instance, 100 times; etc.
     history: string &log &optional;
     ## Number of packets that the originator sent.
     ## Only set if :bro:id:`use_conn_size_analyzer` = T.
@@ -178,7 +178,7 @@ event DHCP::aggregate_msgs(ts: time, id: conn_id, uid: string, is_orig: bool, ms

     if ( uid !in log_info$uids )
         add log_info$uids[uid];

-    log_info$msg_types[|log_info$msg_types|] = DHCP::message_types[msg$m_type];
+    log_info$msg_types += DHCP::message_types[msg$m_type];

     # Let's watch for messages in any DHCP message type
     # and split them out based on client and server.
@@ -324,11 +324,11 @@ hook DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string)

         {
         if ( ! c$dns?$answers )
             c$dns$answers = vector();
-        c$dns$answers[|c$dns$answers|] = reply;
+        c$dns$answers += reply;

         if ( ! c$dns?$TTLs )
             c$dns$TTLs = vector();
-        c$dns$TTLs[|c$dns$TTLs|] = ans$TTL;
+        c$dns$TTLs += ans$TTL;
         }
     }
 }
@@ -87,14 +87,14 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori

     if ( ! c$http?$orig_fuids )
         c$http$orig_fuids = string_vec(f$id);
     else
-        c$http$orig_fuids[|c$http$orig_fuids|] = f$id;
+        c$http$orig_fuids += f$id;

     if ( f$info?$filename )
         {
         if ( ! c$http?$orig_filenames )
             c$http$orig_filenames = string_vec(f$info$filename);
         else
-            c$http$orig_filenames[|c$http$orig_filenames|] = f$info$filename;
+            c$http$orig_filenames += f$info$filename;
         }
     }

@@ -103,14 +103,14 @@ event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priori

     if ( ! c$http?$resp_fuids )
         c$http$resp_fuids = string_vec(f$id);
     else
-        c$http$resp_fuids[|c$http$resp_fuids|] = f$id;
+        c$http$resp_fuids += f$id;

     if ( f$info?$filename )
         {
         if ( ! c$http?$resp_filenames )
             c$http$resp_filenames = string_vec(f$info$filename);
         else
-            c$http$resp_filenames[|c$http$resp_filenames|] = f$info$filename;
+            c$http$resp_filenames += f$info$filename;
         }
     }

@@ -130,14 +130,14 @@ event file_sniff(f: fa_file, meta: fa_metadata) &priority=5

     if ( ! f$http?$orig_mime_types )
         f$http$orig_mime_types = string_vec(meta$mime_type);
     else
-        f$http$orig_mime_types[|f$http$orig_mime_types|] = meta$mime_type;
+        f$http$orig_mime_types += meta$mime_type;
     }
 else
     {
     if ( ! f$http?$resp_mime_types )
         f$http$resp_mime_types = string_vec(meta$mime_type);
     else
-        f$http$resp_mime_types[|f$http$resp_mime_types|] = meta$mime_type;
+        f$http$resp_mime_types += meta$mime_type;
     }
 }
@@ -47,7 +47,7 @@ function extract_keys(data: string, kv_splitter: pattern): string_vec

     {
     local key_val = split_string1(parts[part_index], /=/);
     if ( 0 in key_val )
-        key_vec[|key_vec|] = key_val[0];
+        key_vec += key_val[0];
     }
 return key_vec;
 }
@@ -1,6 +1,7 @@
 signature dpd_rfb_server {
     ip-proto == tcp
     payload /^RFB/
+    tcp-state responder
     requires-reverse-signature dpd_rfb_client
     enable "rfb"
 }
@@ -226,7 +226,7 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
 			c$sip$user_agent = value;
 			break;
 		case "VIA", "V":
-			c$sip$request_path[|c$sip$request_path|] = split_string1(value, /;[ ]?branch/)[0];
+			c$sip$request_path += split_string1(value, /;[ ]?branch/)[0];
 			break;
 		}

@@ -256,7 +256,7 @@ event sip_header(c: connection, is_request: bool, name: string, value: string) &
 			c$sip$response_to = value;
 			break;
 		case "VIA", "V":
-			c$sip$response_path[|c$sip$response_path|] = split_string1(value, /;[ ]?branch/)[0];
+			c$sip$response_path += split_string1(value, /;[ ]?branch/)[0];
 			break;
 		}
@@ -49,5 +49,5 @@ event bro_init() &priority=5
 event file_over_new_connection(f: fa_file, c: connection, is_orig: bool) &priority=5
 	{
 	if ( c?$smtp && !c$smtp$tls )
-		c$smtp$fuids[|c$smtp$fuids|] = f$id;
+		c$smtp$fuids += f$id;
 	}

@@ -295,7 +295,7 @@ event mime_one_header(c: connection, h: mime_header_rec) &priority=3
 			c$smtp$process_received_from = F;
 			}
 		if ( c$smtp$path[|c$smtp$path|-1] != ip )
-			c$smtp$path[|c$smtp$path|] = ip;
+			c$smtp$path += ip;
 	}

 event connection_state_remove(c: connection) &priority=-5
@@ -121,13 +121,13 @@ event file_sniff(f: fa_file, meta: fa_metadata) &priority=5

 	if ( f$is_orig )
 		{
-		c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = f$info;
-		c$ssl$client_cert_chain_fuids[|c$ssl$client_cert_chain_fuids|] = f$id;
+		c$ssl$client_cert_chain += f$info;
+		c$ssl$client_cert_chain_fuids += f$id;
 		}
 	else
 		{
-		c$ssl$cert_chain[|c$ssl$cert_chain|] = f$info;
-		c$ssl$cert_chain_fuids[|c$ssl$cert_chain_fuids|] = f$id;
+		c$ssl$cert_chain += f$info;
+		c$ssl$cert_chain_fuids += f$id;
 		}
 	}
@@ -118,7 +118,7 @@ function extract_ip_addresses(input: string): string_vec
 	for ( i in parts )
 		{
 		if ( i % 2 == 1 && is_valid_ip(parts[i]) )
-			output[|output|] = parts[i];
+			output += parts[i];
 		}
 	return output;
 	}

@@ -10,7 +10,7 @@ function extract_email_addrs_vec(str: string): string_vec

 	local raw_addrs = find_all(str, /(^|[<,:[:blank:]])[^<,:[:blank:]@]+"@"[^>,;[:blank:]]+([>,;[:blank:]]|$)/);
 	for ( raw_addr in raw_addrs )
-		addrs[|addrs|] = gsub(raw_addr, /[<>,:;[:blank:]]/, "");
+		addrs += gsub(raw_addr, /[<>,:;[:blank:]]/, "");

 	return addrs;
 	}
@@ -69,14 +69,14 @@ event Exec::line(description: Input::EventDescription, tpe: Input::Event, s: str
 		if ( ! result?$stderr )
 			result$stderr = vector(s);
 		else
-			result$stderr[|result$stderr|] = s;
+			result$stderr += s;
 		}
 	else
 		{
 		if ( ! result?$stdout )
 			result$stdout = vector(s);
 		else
-			result$stdout[|result$stdout|] = s;
+			result$stdout += s;
 		}
 	}

@@ -93,7 +93,7 @@ event Exec::file_line(description: Input::EventDescription, tpe: Input::Event, s
 	if ( track_file !in result$files )
 		result$files[track_file] = vector(s);
 	else
-		result$files[track_file][|result$files[track_file]|] = s;
+		result$files[track_file] += s;
 	}

 event Input::end_of_data(orig_name: string, source:string)
@@ -66,7 +66,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
 		if ( field_desc?$value && (!only_loggable || field_desc$log) )
 			{
 			local onepart = cat("\"", field, "\": ", to_json(field_desc$value, only_loggable));
-			rec_parts[|rec_parts|] = onepart;
+			rec_parts += onepart;
 			}
 		}
 	return cat("{", join_string_vec(rec_parts, ", "), "}");

@@ -79,7 +79,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
 	local sa: set[bool] = v;
 	for ( sv in sa )
 		{
-		set_parts[|set_parts|] = to_json(sv, only_loggable);
+		set_parts += to_json(sv, only_loggable);
 		}
 	return cat("[", join_string_vec(set_parts, ", "), "]");
 	}

@@ -91,7 +91,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
 		{
 		local ts = to_json(ti);
 		local if_quotes = (ts[0] == "\"") ? "" : "\"";
-		tab_parts[|tab_parts|] = cat(if_quotes, ts, if_quotes, ": ", to_json(ta[ti], only_loggable));
+		tab_parts += cat(if_quotes, ts, if_quotes, ": ", to_json(ta[ti], only_loggable));
 		}
 	return cat("{", join_string_vec(tab_parts, ", "), "}");
 	}

@@ -101,7 +101,7 @@ function to_json(v: any, only_loggable: bool &default=F, field_escape_pattern: p
 	local va: vector of any = v;
 	for ( vi in va )
 		{
-		vec_parts[|vec_parts|] = to_json(va[vi], only_loggable);
+		vec_parts += to_json(va[vi], only_loggable);
 		}
 	return cat("[", join_string_vec(vec_parts, ", "), "]");
 	}
@@ -35,7 +35,7 @@ hook notice(n: Notice::Info) &priority=10
 		when ( local src_name = lookup_addr(n$src) )
 			{
 			output = string_cat("orig/src hostname: ", src_name, "\n");
-			tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output;
+			tmp_notice_storage[uid]$email_body_sections += output;
 			delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-src"];
 			}
 		}

@@ -45,7 +45,7 @@ hook notice(n: Notice::Info) &priority=10
 		when ( local dst_name = lookup_addr(n$dst) )
 			{
 			output = string_cat("resp/dst hostname: ", dst_name, "\n");
-			tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output;
+			tmp_notice_storage[uid]$email_body_sections += output;
 			delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-dst"];
 			}
 		}
@@ -40,7 +40,7 @@ event bro_init() &priority=5

 	# Sort nodes list so that every node iterates over it in same order.
 	for ( name in Cluster::nodes )
-		sorted_node_names[|sorted_node_names|] = name;
+		sorted_node_names += name;

 	sort(sorted_node_names, strcmp);

@@ -138,6 +138,9 @@ event Known::host_found(info: HostsInfo)
 	if ( use_host_store )
 		return;

+	if ( info$host in Known::hosts )
+		return;
+
 	Cluster::publish_hrw(Cluster::proxy_pool, info$host, known_host_add, info);
 	event known_host_add(info);
 	}

@@ -159,6 +159,9 @@ event service_info_commit(info: ServicesInfo)
 	if ( Known::use_service_store )
 		return;

+	if ( [info$host, info$port_num] in Known::services )
+		return;
+
 	local key = cat(info$host, info$port_num);
 	Cluster::publish_hrw(Cluster::proxy_pool, key, known_service_add, info);
 	event known_service_add(info);
@@ -17,5 +17,5 @@ export {

 event DHCP::aggregate_msgs(ts: time, id: conn_id, uid: string, is_orig: bool, msg: DHCP::Msg, options: DHCP::Options) &priority=3
 	{
-	log_info$msg_orig[|log_info$msg_orig|] = is_orig ? id$orig_h : id$resp_h;
+	log_info$msg_orig += is_orig ? id$orig_h : id$resp_h;
 	}
@@ -35,7 +35,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
 			{
 			if ( ! c$http?$client_header_names )
 				c$http$client_header_names = vector();
-			c$http$client_header_names[|c$http$client_header_names|] = name;
+			c$http$client_header_names += name;
 			}
 		}
 	else

@@ -44,7 +44,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr
 			{
 			if ( ! c$http?$server_header_names )
 				c$http$server_header_names = vector();
-			c$http$server_header_names[|c$http$server_header_names|] = name;
+			c$http$server_header_names += name;
 			}
 		}
 	}
@@ -50,33 +50,33 @@ event bro_init()
 	# Minimum length a heartbeat packet must have for different cipher suites.
 	# Note - tls 1.1f and 1.0 have different lengths :(
 	# This should be all cipher suites usually supported by vulnerable servers.
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_256_GCM_SHA384$/, $min_length=43];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_AES_128_GCM_SHA256$/, $min_length=43];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA384$/, $min_length=96];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA256$/, $min_length=80];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_256_CBC_SHA$/, $min_length=64];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA256$/, $min_length=80];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_128_CBC_SHA$/, $min_length=64];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=48];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=64];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=48];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES_CBC_SHA$/, $min_length=48];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=48];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
-	min_lengths_tls11[|min_lengths_tls11|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=48];
-	min_lengths[|min_lengths|] = [$cipher=/_256_CBC_SHA$/, $min_length=48];
-	min_lengths[|min_lengths|] = [$cipher=/_128_CBC_SHA$/, $min_length=48];
-	min_lengths[|min_lengths|] = [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=40];
-	min_lengths[|min_lengths|] = [$cipher=/_SEED_CBC_SHA$/, $min_length=48];
-	min_lengths[|min_lengths|] = [$cipher=/_IDEA_CBC_SHA$/, $min_length=40];
-	min_lengths[|min_lengths|] = [$cipher=/_DES_CBC_SHA$/, $min_length=40];
-	min_lengths[|min_lengths|] = [$cipher=/_DES40_CBC_SHA$/, $min_length=40];
-	min_lengths[|min_lengths|] = [$cipher=/_RC4_128_SHA$/, $min_length=39];
-	min_lengths[|min_lengths|] = [$cipher=/_RC4_128_MD5$/, $min_length=35];
-	min_lengths[|min_lengths|] = [$cipher=/_RC4_40_MD5$/, $min_length=35];
-	min_lengths[|min_lengths|] = [$cipher=/_RC2_CBC_40_MD5$/, $min_length=40];
+	min_lengths_tls11 += [$cipher=/_AES_256_GCM_SHA384$/, $min_length=43];
+	min_lengths_tls11 += [$cipher=/_AES_128_GCM_SHA256$/, $min_length=43];
+	min_lengths_tls11 += [$cipher=/_256_CBC_SHA384$/, $min_length=96];
+	min_lengths_tls11 += [$cipher=/_256_CBC_SHA256$/, $min_length=80];
+	min_lengths_tls11 += [$cipher=/_256_CBC_SHA$/, $min_length=64];
+	min_lengths_tls11 += [$cipher=/_128_CBC_SHA256$/, $min_length=80];
+	min_lengths_tls11 += [$cipher=/_128_CBC_SHA$/, $min_length=64];
+	min_lengths_tls11 += [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=48];
+	min_lengths_tls11 += [$cipher=/_SEED_CBC_SHA$/, $min_length=64];
+	min_lengths_tls11 += [$cipher=/_IDEA_CBC_SHA$/, $min_length=48];
+	min_lengths_tls11 += [$cipher=/_DES_CBC_SHA$/, $min_length=48];
+	min_lengths_tls11 += [$cipher=/_DES40_CBC_SHA$/, $min_length=48];
+	min_lengths_tls11 += [$cipher=/_RC4_128_SHA$/, $min_length=39];
+	min_lengths_tls11 += [$cipher=/_RC4_128_MD5$/, $min_length=35];
+	min_lengths_tls11 += [$cipher=/_RC4_40_MD5$/, $min_length=35];
+	min_lengths_tls11 += [$cipher=/_RC2_CBC_40_MD5$/, $min_length=48];
+	min_lengths += [$cipher=/_256_CBC_SHA$/, $min_length=48];
+	min_lengths += [$cipher=/_128_CBC_SHA$/, $min_length=48];
+	min_lengths += [$cipher=/_3DES_EDE_CBC_SHA$/, $min_length=40];
+	min_lengths += [$cipher=/_SEED_CBC_SHA$/, $min_length=48];
+	min_lengths += [$cipher=/_IDEA_CBC_SHA$/, $min_length=40];
+	min_lengths += [$cipher=/_DES_CBC_SHA$/, $min_length=40];
+	min_lengths += [$cipher=/_DES40_CBC_SHA$/, $min_length=40];
+	min_lengths += [$cipher=/_RC4_128_SHA$/, $min_length=39];
+	min_lengths += [$cipher=/_RC4_128_MD5$/, $min_length=35];
+	min_lengths += [$cipher=/_RC4_40_MD5$/, $min_length=35];
+	min_lengths += [$cipher=/_RC2_CBC_40_MD5$/, $min_length=40];
 	}

 event ssl_heartbeat(c: connection, is_orig: bool, length: count, heartbeat_type: count, payload_length: count, payload: string)
@@ -127,6 +127,9 @@ event Known::cert_found(info: CertsInfo, hash: string)
 	if ( Known::use_cert_store )
 		return;

+	if ( [info$host, hash] in Known::certs )
+		return;
+
 	local key = cat(info$host, hash);
 	Cluster::publish_hrw(Cluster::proxy_pool, key, known_cert_add, info, hash);
 	event known_cert_add(info, hash);

@@ -140,6 +143,7 @@ event Cluster::node_up(name: string, id: string)
 	if ( Cluster::local_node_type() != Cluster::WORKER )
 		return;

+	# Drop local suppression cache on workers to force HRW key repartitioning.
 	Known::certs = table();
 	}

@@ -151,6 +155,7 @@ event Cluster::node_down(name: string, id: string)
 	if ( Cluster::local_node_type() != Cluster::WORKER )
 		return;

+	# Drop local suppression cache on workers to force HRW key repartitioning.
 	Known::certs = table();
 	}
@@ -56,7 +56,7 @@ event ssl_established(c: connection) &priority=3
 	local waits_already = digest in waitlist;
 	if ( ! waits_already )
 		waitlist[digest] = vector();
-	waitlist[digest][|waitlist[digest]|] = c$ssl;
+	waitlist[digest] += c$ssl;
 	if ( waits_already )
 		return;
@@ -50,11 +50,11 @@ export {
 	## and is thus disabled by default.
 	global ssl_store_valid_chain: bool = F &redef;

-	## Event from a worker to the manager that it has encountered a new
-	## valid intermediate.
+	## Event from a manager to workers when encountering a new, valid
+	## intermediate.
 	global intermediate_add: event(key: string, value: vector of opaque of x509);

-	## Event from the manager to the workers that a new intermediate chain
+	## Event from workers to the manager when a new intermediate chain
 	## is to be added.
 	global new_intermediate: event(key: string, value: vector of opaque of x509);
 }
@@ -76,7 +76,7 @@ event bro_init()

 event ssl_extension_signed_certificate_timestamp(c: connection, is_orig: bool, version: count, logid: string, timestamp: count, signature_and_hashalgorithm: SSL::SignatureAndHashAlgorithm, signature: string) &priority=5
 	{
-	c$ssl$ct_proofs[|c$ssl$ct_proofs|] = SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_and_hashalgorithm$SignatureAlgorithm, $hash_alg=signature_and_hashalgorithm$HashAlgorithm, $signature=signature, $source=SCT_TLS_EXT);
+	c$ssl$ct_proofs += SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_and_hashalgorithm$SignatureAlgorithm, $hash_alg=signature_and_hashalgorithm$HashAlgorithm, $signature=signature, $source=SCT_TLS_EXT);
 	}

 event x509_ocsp_ext_signed_certificate_timestamp(f: fa_file, version: count, logid: string, timestamp: count, hash_algorithm: count, signature_algorithm: count, signature: string) &priority=5

@@ -103,7 +103,7 @@ event x509_ocsp_ext_signed_certificate_timestamp(f: fa_file, version: count, log
 		local c = f$conns[cid];
 		}

-	c$ssl$ct_proofs[|c$ssl$ct_proofs|] = SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_algorithm, $hash_alg=hash_algorithm, $signature=signature, $source=src);
+	c$ssl$ct_proofs += SctInfo($version=version, $logid=logid, $timestamp=timestamp, $sig_alg=signature_algorithm, $hash_alg=hash_algorithm, $signature=signature, $source=src);
 	}

 # Priority = 19 will be handled after validation is done
@@ -1 +1 @@
-Subproject commit b7c6be774b922be1e15f53571201c3be2bc28b75
+Subproject commit 02c5b1d6a3990ca989377798bc7e89eacf4713aa
src/Conn.cc (44 changed lines)
@@ -289,6 +289,50 @@ bool Connection::IsReuse(double t, const u_char* pkt)
 	return root_analyzer && root_analyzer->IsReuse(t, pkt);
 	}

+bool Connection::ScaledHistoryEntry(char code, uint32& counter,
+                                    uint32& scaling_threshold,
+                                    uint32 scaling_base)
+	{
+	if ( ++counter == scaling_threshold )
+		{
+		AddHistory(code);
+
+		auto new_threshold = scaling_threshold * scaling_base;
+
+		if ( new_threshold <= scaling_threshold )
+			// This can happen due to wrap-around.  In that
+			// case, reset the counter but leave the threshold
+			// unchanged.
+			counter = 0;
+
+		else
+			scaling_threshold = new_threshold;
+
+		return true;
+		}
+
+	return false;
+	}
+
+void Connection::HistoryThresholdEvent(EventHandlerPtr e, bool is_orig,
+                                       uint32 threshold)
+	{
+	if ( ! e )
+		return;
+
+	if ( threshold == 1 )
+		// This will be far and away the most common case,
+		// and at this stage it's not a *multiple* instance.
+		return;
+
+	val_list* vl = new val_list;
+	vl->append(BuildConnVal());
+	vl->append(new Val(is_orig, TYPE_BOOL));
+	vl->append(new Val(threshold, TYPE_COUNT));
+
+	ConnectionEvent(e, 0, vl);
+	}
+
 void Connection::DeleteTimer(double /* t */)
 	{
 	if ( is_active )
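The scaled-history bookkeeping added above records a history code only at the 1st, `base`-th, `base^2`-th, ... occurrence of an event. The following is a minimal standalone sketch of that logic (our own rewrite for illustration; the names `ScaledHistory`, `Entry`, and `recorded_after` are not part of the Bro source):

```cpp
#include <cstdint>
#include <string>

// Sketch of the ScaledHistoryEntry logic from the diff above, detached
// from the Connection class so it can run on its own.  A code is
// recorded at occurrence 1, base, base^2, ... of the tracked event.
struct ScaledHistory {
    std::string history;       // accumulated history codes
    uint32_t counter = 0;      // occurrences seen so far
    uint32_t threshold = 1;    // next count at which to record
    uint32_t base = 10;        // scaling factor between recordings

    // Returns true when the threshold was crossed (code recorded).
    bool Entry(char code) {
        if (++counter == threshold) {
            history += code;
            uint32_t next = threshold * base;
            if (next <= threshold)
                counter = 0;   // wrap-around: reset counter, keep threshold
            else
                threshold = next;
            return true;
        }
        return false;
    }
};

// Helper: feed n occurrences, return how many were recorded.
inline int recorded_after(int n) {
    ScaledHistory h;
    int hits = 0;
    for (int i = 0; i < n; ++i)
        if (h.Entry('C'))
            ++hits;
    return hits;
}
```

With the default base of 10, a connection generating 1000 repeated events yields only four history entries instead of 1000, which is the point of the scaling.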
src/Conn.h (11 changed lines)
@@ -240,6 +240,17 @@ public:
 		return true;
 		}

+	// Increments the passed counter and adds it as a history
+	// code if it has crossed the next scaling threshold.  Scaling
+	// is done in terms of powers of the third argument.
+	// Returns true if the threshold was crossed, false otherwise.
+	bool ScaledHistoryEntry(char code, uint32& counter,
+	                        uint32& scaling_threshold,
+	                        uint32 scaling_base = 10);
+
+	void HistoryThresholdEvent(EventHandlerPtr e, bool is_orig,
+	                           uint32 threshold);
+
 	void AddHistory(char code)	{ history += code; }

 	void DeleteTimer(double t);
src/Expr.cc (197 changed lines)
@@ -16,6 +16,8 @@
 #include "Trigger.h"
 #include "IPAddr.h"

+#include "broker/Data.h"
+
 const char* expr_name(BroExprTag t)
 	{
 	static const char* expr_names[int(NUM_EXPRS)] = {

@@ -672,6 +674,9 @@ Val* BinaryExpr::Fold(Val* v1, Val* v2) const
 	if ( v1->Type()->Tag() == TYPE_PATTERN )
 		return PatternFold(v1, v2);

+	if ( v1->Type()->IsSet() )
+		return SetFold(v1, v2);
+
 	if ( it == TYPE_INTERNAL_ADDR )
 		return AddrFold(v1, v2);

@@ -858,6 +863,7 @@ Val* BinaryExpr::StringFold(Val* v1, Val* v2) const
 	return new Val(result, TYPE_BOOL);
 	}

+
 Val* BinaryExpr::PatternFold(Val* v1, Val* v2) const
 	{
 	const RE_Matcher* re1 = v1->AsPattern();
@@ -873,6 +879,61 @@ Val* BinaryExpr::PatternFold(Val* v1, Val* v2) const
 	return new PatternVal(res);
 	}

+Val* BinaryExpr::SetFold(Val* v1, Val* v2) const
+	{
+	TableVal* tv1 = v1->AsTableVal();
+	TableVal* tv2 = v2->AsTableVal();
+	TableVal* result;
+	bool res = false;
+
+	switch ( tag ) {
+	case EXPR_AND:
+		return tv1->Intersect(tv2);
+
+	case EXPR_OR:
+		result = v1->Clone()->AsTableVal();
+
+		if ( ! tv2->AddTo(result, false, false) )
+			reporter->InternalError("set union failed to type check");
+		return result;
+
+	case EXPR_SUB:
+		result = v1->Clone()->AsTableVal();
+
+		if ( ! tv2->RemoveFrom(result) )
+			reporter->InternalError("set difference failed to type check");
+		return result;
+
+	case EXPR_EQ:
+		res = tv1->EqualTo(tv2);
+		break;
+
+	case EXPR_NE:
+		res = ! tv1->EqualTo(tv2);
+		break;
+
+	case EXPR_LT:
+		res = tv1->IsSubsetOf(tv2) && tv1->Size() < tv2->Size();
+		break;
+
+	case EXPR_LE:
+		res = tv1->IsSubsetOf(tv2);
+		break;
+
+	case EXPR_GE:
+	case EXPR_GT:
+		// These should't happen due to canonicalization.
+		reporter->InternalError("confusion over canonicalization in set comparison");
+		break;
+
+	default:
+		BadTag("BinaryExpr::SetFold", expr_name(tag));
+		return 0;
+	}
+
+	return new Val(res, TYPE_BOOL);
+	}
+
 Val* BinaryExpr::AddrFold(Val* v1, Val* v2) const
 	{
 	IPAddr a1 = v1->AsAddr();
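The new `SetFold` cases give script-level set values intersection (`&`), union (`|`), difference (`-`), and subset-style comparisons (`<`, `<=`, `==`). Those semantics can be mirrored with `std::set` as a quick sanity model (illustration only; the helper names here are ours, not part of the Bro source):

```cpp
#include <algorithm>
#include <iterator>
#include <set>

// Mirrors the semantics the SetFold cases above give Bro sets:
// EXPR_AND -> intersection, EXPR_OR -> union, EXPR_SUB -> difference,
// EXPR_LE -> subset, EXPR_LT -> proper subset (subset and strictly smaller).
using IntSet = std::set<int>;

inline IntSet set_and(const IntSet& a, const IntSet& b) {
    IntSet r;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::inserter(r, r.end()));
    return r;
}

inline IntSet set_or(const IntSet& a, const IntSet& b) {
    IntSet r = a;                 // clone the left operand, like v1->Clone()
    r.insert(b.begin(), b.end()); // then add the right operand into it
    return r;
}

inline IntSet set_sub(const IntSet& a, const IntSet& b) {
    IntSet r;
    std::set_difference(a.begin(), a.end(), b.begin(), b.end(),
                        std::inserter(r, r.end()));
    return r;
}

inline bool set_le(const IntSet& a, const IntSet& b) {  // subset
    return std::includes(b.begin(), b.end(), a.begin(), a.end());
}

inline bool set_lt(const IntSet& a, const IntSet& b) {  // proper subset
    return set_le(a, b) && a.size() < b.size();
}
```

Note how `EXPR_LT` is subset plus a strict size check, exactly as in the diff; `>` and `>=` never reach `SetFold` because canonicalization rewrites them in terms of `<` and `<=`.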
@@ -1390,7 +1451,8 @@ bool AddExpr::DoUnserialize(UnserialInfo* info)
 	}

 AddToExpr::AddToExpr(Expr* arg_op1, Expr* arg_op2)
-: BinaryExpr(EXPR_ADD_TO, arg_op1->MakeLvalue(), arg_op2)
+: BinaryExpr(EXPR_ADD_TO,
+             is_vector(arg_op1) ? arg_op1 : arg_op1->MakeLvalue(), arg_op2)
 	{
 	if ( IsError() )
 		return;

@@ -1404,6 +1466,32 @@ AddToExpr::AddToExpr(Expr* arg_op1, Expr* arg_op2)
 		SetType(base_type(bt1));
 	else if ( BothInterval(bt1, bt2) )
 		SetType(base_type(bt1));
+
+	else if ( IsVector(bt1) )
+		{
+		bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+
+		if ( IsArithmetic(bt1) )
+			{
+			if ( IsArithmetic(bt2) )
+				{
+				if ( bt2 != bt1 )
+					op2 = new ArithCoerceExpr(op2, bt1);
+
+				SetType(op1->Type()->Ref());
+				}
+
+			else
+				ExprError("appending non-arithmetic to arithmetic vector");
+			}
+
+		else if ( bt1 != bt2 )
+			ExprError("incompatible vector append");
+
+		else
+			SetType(op1->Type()->Ref());
+		}
+
 	else
 		ExprError("requires two arithmetic or two string operands");
 	}
@@ -1421,6 +1509,14 @@ Val* AddToExpr::Eval(Frame* f) const
 		return 0;
 		}

+	if ( is_vector(v1) )
+		{
+		VectorVal* vv = v1->AsVectorVal();
+		if ( ! vv->Assign(vv->Size(), v2) )
+			reporter->Error("type-checking failed in vector append");
+		return v1;
+		}
+
 	Val* result = Fold(v1, v2);
 	Unref(v1);
 	Unref(v2);
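This is the core of the new `v += e` vector-append support that the many script-side hunks above rely on: appending is evaluated as an assignment at index `|v|` (the current size), after coercing an arithmetic element to the vector's yield type. A tiny model of that evaluation step (our own illustration, not Bro's types):

```cpp
#include <vector>

// Models what the vector branch of AddToExpr::Eval does for "v += e":
// assign at index |v|, i.e. append.  The double parameter stands in for
// the arithmetic coercion the constructor inserts (ArithCoerceExpr),
// here provided by C++'s implicit conversion at the call site.
inline std::vector<double> append(std::vector<double> v, double e) {
    v.resize(v.size() + 1);
    v[v.size() - 1] = e;   // the Assign(vv->Size(), v2) in the diff
    return v;
}
```

In Bro script terms, `v += e` now replaces the older `v[|v|] = e` idiom seen throughout the script changes in this commit.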
@@ -1454,24 +1550,39 @@ SubExpr::SubExpr(Expr* arg_op1, Expr* arg_op2)
 	if ( IsError() )
 		return;

-	TypeTag bt1 = op1->Type()->Tag();
-	if ( IsVector(bt1) )
-		bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+	const BroType* t1 = op1->Type();
+	const BroType* t2 = op2->Type();

-	TypeTag bt2 = op2->Type()->Tag();
+	TypeTag bt1 = t1->Tag();
+	if ( IsVector(bt1) )
+		bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+	TypeTag bt2 = t2->Tag();
 	if ( IsVector(bt2) )
-		bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+		bt2 = t2->AsVectorType()->YieldType()->Tag();

 	BroType* base_result_type = 0;

 	if ( bt1 == TYPE_TIME && bt2 == TYPE_INTERVAL )
 		base_result_type = base_type(bt1);

 	else if ( bt1 == TYPE_TIME && bt2 == TYPE_TIME )
 		SetType(base_type(TYPE_INTERVAL));

 	else if ( bt1 == TYPE_INTERVAL && bt2 == TYPE_INTERVAL )
 		base_result_type = base_type(bt1);

+	else if ( t1->IsSet() && t2->IsSet() )
+		{
+		if ( same_type(t1, t2) )
+			SetType(op1->Type()->Ref());
+		else
+			ExprError("incompatible \"set\" operands");
+		}
+
 	else if ( BothArithmetic(bt1, bt2) )
 		PromoteType(max_type(bt1, bt2), is_vector(op1) || is_vector(op2));

 	else
 		ExprError("requires arithmetic operands");
@@ -1888,13 +1999,16 @@ BitExpr::BitExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
 	if ( IsError() )
 		return;

-	TypeTag bt1 = op1->Type()->Tag();
-	if ( IsVector(bt1) )
-		bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+	const BroType* t1 = op1->Type();
+	const BroType* t2 = op2->Type();

-	TypeTag bt2 = op2->Type()->Tag();
+	TypeTag bt1 = t1->Tag();
+	if ( IsVector(bt1) )
+		bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+	TypeTag bt2 = t2->Tag();
 	if ( IsVector(bt2) )
-		bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+		bt2 = t2->AsVectorType()->YieldType()->Tag();

 	if ( (bt1 == TYPE_COUNT || bt1 == TYPE_COUNTER) &&
 	     (bt2 == TYPE_COUNT || bt2 == TYPE_COUNTER) )

@@ -1917,8 +2031,16 @@ BitExpr::BitExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
 		SetType(base_type(TYPE_PATTERN));
 		}

+	else if ( t1->IsSet() && t2->IsSet() )
+		{
+		if ( same_type(t1, t2) )
+			SetType(op1->Type()->Ref());
+		else
+			ExprError("incompatible \"set\" operands");
+		}
+
 	else
-		ExprError("requires \"count\" operands");
+		ExprError("requires \"count\" or compatible \"set\" operands");
 	}

 IMPLEMENT_SERIAL(BitExpr, SER_BIT_EXPR);
@@ -1943,13 +2065,16 @@ EqExpr::EqExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)

 	Canonicize();

-	TypeTag bt1 = op1->Type()->Tag();
-	if ( IsVector(bt1) )
-		bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+	const BroType* t1 = op1->Type();
+	const BroType* t2 = op2->Type();

-	TypeTag bt2 = op2->Type()->Tag();
+	TypeTag bt1 = t1->Tag();
+	if ( IsVector(bt1) )
+		bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+	TypeTag bt2 = t2->Tag();
 	if ( IsVector(bt2) )
-		bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+		bt2 = t2->AsVectorType()->YieldType()->Tag();

 	if ( is_vector(op1) || is_vector(op2) )
 		SetType(new VectorType(base_type(TYPE_BOOL)));

@@ -1979,10 +2104,20 @@ EqExpr::EqExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
 			break;

 		case TYPE_ENUM:
-			if ( ! same_type(op1->Type(), op2->Type()) )
+			if ( ! same_type(t1, t2) )
 				ExprError("illegal enum comparison");
 			break;

+		case TYPE_TABLE:
+			if ( t1->IsSet() && t2->IsSet() )
+				{
+				if ( ! same_type(t1, t2) )
+					ExprError("incompatible sets in comparison");
+				break;
+				}
+
+			// FALL THROUGH
+
 		default:
 			ExprError("illegal comparison");
 		}
@@ -2045,13 +2180,16 @@ RelExpr::RelExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)

 	Canonicize();

-	TypeTag bt1 = op1->Type()->Tag();
-	if ( IsVector(bt1) )
-		bt1 = op1->Type()->AsVectorType()->YieldType()->Tag();
+	const BroType* t1 = op1->Type();
+	const BroType* t2 = op2->Type();

-	TypeTag bt2 = op2->Type()->Tag();
+	TypeTag bt1 = t1->Tag();
+	if ( IsVector(bt1) )
+		bt1 = t1->AsVectorType()->YieldType()->Tag();
+
+	TypeTag bt2 = t2->Tag();
 	if ( IsVector(bt2) )
-		bt2 = op2->Type()->AsVectorType()->YieldType()->Tag();
+		bt2 = t2->AsVectorType()->YieldType()->Tag();

 	if ( is_vector(op1) || is_vector(op2) )
 		SetType(new VectorType(base_type(TYPE_BOOL)));

@@ -2061,6 +2199,12 @@ RelExpr::RelExpr(BroExprTag arg_tag, Expr* arg_op1, Expr* arg_op2)
 	if ( BothArithmetic(bt1, bt2) )
 		PromoteOps(max_type(bt1, bt2));

+	else if ( t1->IsSet() && t2->IsSet() )
+		{
+		if ( ! same_type(t1, t2) )
+			ExprError("incompatible sets in comparison");
+		}
+
 	else if ( bt1 != bt2 )
 		ExprError("operands must be of the same type");
@@ -5361,11 +5505,16 @@ Val* CastExpr::Eval(Frame* f) const
 	}

 	ODesc d;
-	d.Add("cannot cast value of type '");
+	d.Add("invalid cast of value with type '");
 	v->Type()->Describe(&d);
 	d.Add("' to type '");
 	Type()->Describe(&d);
 	d.Add("'");

+	if ( same_type(v->Type(), bro_broker::DataVal::ScriptDataType()) &&
+	     ! v->AsRecordVal()->Lookup(0) )
+		d.Add(" (nil $data field)");
+
 	Unref(v);
 	reporter->ExprRuntimeError(this, "%s", d.Description());
 	return 0; // not reached.
@@ -332,6 +332,9 @@ protected:
 	// Same for when the constants are patterns.
 	virtual Val* PatternFold(Val* v1, Val* v2) const;

+	// Same for when the constants are sets.
+	virtual Val* SetFold(Val* v1, Val* v2) const;
+
 	// Same for when the constants are addresses or subnets.
 	virtual Val* AddrFold(Val* v1, Val* v2) const;
 	virtual Val* SubNetFold(Val* v1, Val* v2) const;
src/ID.cc (16 changed lines)
@@ -294,6 +294,22 @@ void ID::RemoveAttr(attr_tag a)
 		}
 	}

+void ID::SetOption()
+	{
+	if ( is_option )
+		return;
+
+	is_option = true;
+
+	// option implied redefinable
+	if ( ! IsRedefinable() )
+		{
+		attr_list* attr = new attr_list;
+		attr->append(new Attr(ATTR_REDEF));
+		AddAttrs(new Attributes(attr, Type(), false));
+		}
+	}
+
 void ID::EvalFunc(Expr* ef, Expr* ev)
 	{
 	Expr* arg1 = new ConstExpr(val->Ref());
src/ID.h (2 changed lines)
@@ -60,7 +60,7 @@ public:
 	void SetConst()		{ is_const = true; }
 	bool IsConst() const	{ return is_const; }

-	void SetOption()	{ is_option = true; }
+	void SetOption();
 	bool IsOption() const	{ return is_option; }

 	void SetEnumConst()	{ is_enum_const = true; }
@@ -532,7 +532,7 @@ void NetSessions::DoNextPacket(double t, const Packet* pkt, const IP_Hdr* ip_hdr
 		// If a carried packet has ethernet, this will help skip it.
 		unsigned int eth_len = 0;
 		unsigned int gre_len = gre_header_len(flags_ver);
-		unsigned int ppp_len = gre_version == 1 ? 1 : 0;
+		unsigned int ppp_len = gre_version == 1 ? 4 : 0;

 		if ( gre_version != 0 && gre_version != 1 )
 			{

@@ -598,7 +598,7 @@ void NetSessions::DoNextPacket(double t, const Packet* pkt, const IP_Hdr* ip_hdr

 		if ( gre_version == 1 )
 			{
-			int ppp_proto = *((uint8*)(data + gre_len));
+			uint16 ppp_proto = ntohs(*((uint16*)(data + gre_len + 2)));

 			if ( ppp_proto != 0x0021 && ppp_proto != 0x0057 )
 				{
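The PPTP decapsulation fix above boils down to where the PPP protocol field sits: in version-1 (enhanced) GRE the payload is a PPP frame whose header is 4 bytes, with a 16-bit protocol number in network byte order 2 bytes in, so the old single-byte read at `data + gre_len` saw the wrong field. A minimal sketch of the corrected extraction (the helper names and the sample packet bytes are ours, for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch of the corrected PPP protocol extraction from the diff above.
// The 16-bit protocol sits 2 bytes into the PPP header that follows the
// GRE header, and the full PPP header is 4 bytes (hence ppp_len = 4).
// An explicit big-endian load sidesteps the alignment and endianness
// concerns of casting the raw packet pointer.
inline uint16_t ppp_proto(const uint8_t* data, size_t gre_len) {
    const uint8_t* p = data + gre_len + 2;   // skip PPP address + control
    return static_cast<uint16_t>((p[0] << 8) | p[1]);  // network byte order
}

// 0x0021 is IPv4 and 0x0057 is IPv6, matching the check in the diff.
inline bool is_ip_payload(uint16_t proto) {
    return proto == 0x0021 || proto == 0x0057;
}

// Convenience for testing: protocol parsed from a made-up version-1 GRE
// packet with a 4-byte GRE header and a PPP-encapsulated IPv4 payload.
inline uint16_t sample_proto() {
    static const uint8_t pkt[] = { 0x30, 0x81, 0x88, 0x0b,   // GRE (hypothetical)
                                   0xff, 0x03, 0x00, 0x21 }; // PPP: IPv4
    return ppp_proto(pkt, 4);
}
```

With the old code, the byte at `data + gre_len` (the PPP address byte, typically 0xff) was compared against 0x0021/0x0057, so every PPTP tunnel failed the check.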
101
src/Val.cc
101
src/Val.cc
|
@@ -1706,9 +1706,11 @@ int TableVal::RemoveFrom(Val* val) const
 	HashKey* k;
 	while ( tbl->NextEntry(k, c) )
 		{
-		Val* index = RecoverIndex(k);
-
-		Unref(index);
+		// Not sure that this is 100% sound, since the HashKey
+		// comes from one table but is being used in another.
+		// OTOH, they are both the same type, so as long as
+		// we don't have hash keys that are keyed per dictionary,
+		// it should work ...
 		Unref(t->Delete(k));
 		delete k;
 		}
@@ -1716,6 +1718,91 @@ int TableVal::RemoveFrom(Val* val) const
 	return 1;
 	}

+TableVal* TableVal::Intersect(const TableVal* tv) const
+	{
+	TableVal* result = new TableVal(table_type);
+
+	const PDict(TableEntryVal)* t0 = AsTable();
+	const PDict(TableEntryVal)* t1 = tv->AsTable();
+	PDict(TableEntryVal)* t2 = result->AsNonConstTable();
+
+	// Figure out which is smaller; assign it to t1.
+	if ( t1->Length() > t0->Length() )
+		{ // Swap.
+		const PDict(TableEntryVal)* tmp = t1;
+		t1 = t0;
+		t0 = tmp;
+		}
+
+	IterCookie* c = t1->InitForIteration();
+	HashKey* k;
+	while ( t1->NextEntry(k, c) )
+		{
+		// Here we leverage the same assumption about consistent
+		// hashes as in TableVal::RemoveFrom above.
+		if ( t0->Lookup(k) )
+			t2->Insert(k, new TableEntryVal(0));
+
+		delete k;
+		}
+
+	return result;
+	}
+
+bool TableVal::EqualTo(const TableVal* tv) const
+	{
+	const PDict(TableEntryVal)* t0 = AsTable();
+	const PDict(TableEntryVal)* t1 = tv->AsTable();
+
+	if ( t0->Length() != t1->Length() )
+		return false;
+
+	IterCookie* c = t0->InitForIteration();
+	HashKey* k;
+	while ( t0->NextEntry(k, c) )
+		{
+		// Here we leverage the same assumption about consistent
+		// hashes as in TableVal::RemoveFrom above.
+		if ( ! t1->Lookup(k) )
+			{
+			delete k;
+			t0->StopIteration(c);
+			return false;
+			}
+
+		delete k;
+		}
+
+	return true;
+	}
+
+bool TableVal::IsSubsetOf(const TableVal* tv) const
+	{
+	const PDict(TableEntryVal)* t0 = AsTable();
+	const PDict(TableEntryVal)* t1 = tv->AsTable();
+
+	if ( t0->Length() > t1->Length() )
+		return false;
+
+	IterCookie* c = t0->InitForIteration();
+	HashKey* k;
+	while ( t0->NextEntry(k, c) )
+		{
+		// Here we leverage the same assumption about consistent
+		// hashes as in TableVal::RemoveFrom above.
+		if ( ! t1->Lookup(k) )
+			{
+			delete k;
+			t0->StopIteration(c);
+			return false;
+			}
+
+		delete k;
+		}
+
+	return true;
+	}
+
 int TableVal::ExpandAndInit(Val* index, Val* new_val)
 	{
 	BroType* index_type = index->Type();
@@ -3545,6 +3632,10 @@ Val* cast_value_to_type(Val* v, BroType* t)
 	if ( same_type(v->Type(), bro_broker::DataVal::ScriptDataType()) )
 		{
 		auto dv = v->AsRecordVal()->Lookup(0);
+
+		if ( ! dv )
+			return 0;
+
 		return static_cast<bro_broker::DataVal *>(dv)->castTo(t);
 		}

@@ -3567,6 +3658,10 @@ bool can_cast_value_to_type(const Val* v, BroType* t)
 	if ( same_type(v->Type(), bro_broker::DataVal::ScriptDataType()) )
 		{
 		auto dv = v->AsRecordVal()->Lookup(0);
+
+		if ( ! dv )
+			return false;
+
 		return static_cast<const bro_broker::DataVal *>(dv)->canCastTo(t);
 		}

src/Val.h
@@ -809,6 +809,22 @@ public:
 	// Returns true if the addition typechecked, false if not.
 	int RemoveFrom(Val* v) const override;

+	// Returns a new table that is the intersection of this
+	// table and the given table. Intersection is just done
+	// on index, not on yield value, so this really only makes
+	// sense for sets.
+	TableVal* Intersect(const TableVal* v) const;
+
+	// Returns true if this set contains the same members as the
+	// given set. Note that comparisons are done using hash keys,
+	// so errors can arise for compound sets such as sets-of-sets.
+	// See https://bro-tracker.atlassian.net/browse/BIT-1949.
+	bool EqualTo(const TableVal* v) const;
+
+	// Returns true if this set is a subset (not necessarily proper)
+	// of the given set.
+	bool IsSubsetOf(const TableVal* v) const;
+
 	// Expands any lists in the index into multiple initializations.
 	// Returns true if the initializations typecheck, false if not.
 	int ExpandAndInit(Val* index, Val* new_val);
@@ -1015,8 +1031,6 @@ public:

 	// Returns false if the type of the argument was wrong.
 	// The vector will automatically grow to accomodate the index.
-	// 'assigner" is the expression that is doing the assignment;
-	// it's just used for pinpointing errors.
 	//
 	// Note: does NOT Ref() the element! Remember to do so unless
 	// the element was just created and thus has refcount 1.
@@ -459,7 +459,7 @@ bool TCP_Analyzer::ValidateChecksum(const struct tcphdr* tp,
 	     ! endpoint->ValidChecksum(tp, len) )
 		{
 		Weird("bad_TCP_checksum");
-		endpoint->CheckHistory(HIST_CORRUPT_PKT, 'C');
+		endpoint->ChecksumError();
 		return false;
 		}
 	else
@@ -579,16 +579,38 @@ static void init_window(TCP_Endpoint* endpoint, TCP_Endpoint* peer,
 static void update_window(TCP_Endpoint* endpoint, unsigned int window,
 			uint32 base_seq, uint32 ack_seq, TCP_Flags flags)
 	{
-	// Note, the offered window on an initial SYN is unscaled, even
-	// if the SYN includes scaling, so we need to do the following
-	// test *before* updating the scaling information below. (Hmmm,
-	// how does this work for windows on SYN/ACKs? ###)
+	// Note, applying scaling here would be incorrect for an initial SYN,
+	// whose window value is always unscaled. However, we don't
+	// check the window's value for recision in that case anyway, so
+	// no-harm-no-foul.
 	int scale = endpoint->window_scale;
 	window = window << scale;

+	// Zero windows are boring if either (1) they come with a RST packet
+	// or after a RST packet, or (2) they come after the peer has sent
+	// a FIN (because there's no relevant window at that point anyway).
+	// (They're also boring if they come after the peer has sent a RST,
+	// but *nothing* should be sent in response to a RST, so we ignore
+	// that case.)
+	//
+	// However, they *are* potentially interesting if sent by an
+	// endpoint that's already sent a FIN, since that FIN meant "I'm
+	// not going to send any more", but doesn't mean "I won't receive
+	// any more".
+	if ( window == 0 && ! flags.RST() &&
+	     endpoint->peer->state != TCP_ENDPOINT_CLOSED &&
+	     endpoint->state != TCP_ENDPOINT_RESET )
+		endpoint->ZeroWindow();
+
 	// Don't analyze window values off of SYNs, they're sometimes
-	// immediately rescinded.
-	if ( ! flags.SYN() )
+	// immediately rescinded. Also don't do so for FINs or RSTs,
+	// or if the connection has already been partially closed, since
+	// such recisions occur frequently in practice, probably as the
+	// receiver loses buffer memory due to its process going away.
+
+	if ( ! flags.SYN() && ! flags.FIN() && ! flags.RST() &&
+	     endpoint->state != TCP_ENDPOINT_CLOSED &&
+	     endpoint->state != TCP_ENDPOINT_RESET )
 		{
 		// ### Decide whether to accept new window based on Active
 		// Mapping policy.
@@ -601,21 +623,12 @@ static void update_window(TCP_Endpoint* endpoint, unsigned int window,

 		if ( advance < 0 )
 			{
-			// A window recision. We don't report these
-			// for FINs or RSTs, or if the connection
-			// has already been partially closed, since
-			// such recisions occur frequently in practice,
-			// probably as the receiver loses buffer memory
-			// due to its process going away.
-			//
-			// We also, for window scaling, allow a bit
-			// of slop ###. This is because sometimes
-			// there will be an apparent recision due
-			// to the granularity of the scaling.
-			if ( ! flags.FIN() && ! flags.RST() &&
-			     endpoint->state != TCP_ENDPOINT_CLOSED &&
-			     endpoint->state != TCP_ENDPOINT_RESET &&
-			     (-advance) >= (1 << scale) )
+			// An apparent window recision. Allow a
+			// bit of slop for window scaling. This is
+			// because sometimes there will be an
+			// apparent recision due to the granularity
+			// of the scaling.
+			if ( (-advance) >= (1 << scale) )
 				endpoint->Conn()->Weird("window_recision");
 			}

@@ -1206,7 +1219,7 @@ static int32 update_last_seq(TCP_Endpoint* endpoint, uint32 last_seq,
 		endpoint->UpdateLastSeq(last_seq);

 	else if ( delta_last < 0 && len > 0 )
-		endpoint->CheckHistory(HIST_RXMIT, 'T');
+		endpoint->DidRxmit();

 	return delta_last;
 	}
@@ -32,6 +32,9 @@ TCP_Endpoint::TCP_Endpoint(TCP_Analyzer* arg_analyzer, int arg_is_orig)
 	tcp_analyzer = arg_analyzer;
 	is_orig = arg_is_orig;

+	chk_cnt = rxmt_cnt = win0_cnt = 0;
+	chk_thresh = rxmt_thresh = win0_thresh = 1;
+
 	hist_last_SYN = hist_last_FIN = hist_last_RST = 0;

 	src_addr = is_orig ? Conn()->RespAddr() : Conn()->OrigAddr();
@@ -284,3 +287,29 @@ void TCP_Endpoint::AddHistory(char code)
 	Conn()->AddHistory(code);
 	}

+void TCP_Endpoint::ChecksumError()
+	{
+	uint32 t = chk_thresh;
+	if ( Conn()->ScaledHistoryEntry(IsOrig() ? 'C' : 'c',
+	                                chk_cnt, chk_thresh) )
+		Conn()->HistoryThresholdEvent(tcp_multiple_checksum_errors,
+		                              IsOrig(), t);
+	}
+
+void TCP_Endpoint::DidRxmit()
+	{
+	uint32 t = rxmt_thresh;
+	if ( Conn()->ScaledHistoryEntry(IsOrig() ? 'T' : 't',
+	                                rxmt_cnt, rxmt_thresh) )
+		Conn()->HistoryThresholdEvent(tcp_multiple_retransmissions,
+		                              IsOrig(), t);
+	}
+
+void TCP_Endpoint::ZeroWindow()
+	{
+	uint32 t = win0_thresh;
+	if ( Conn()->ScaledHistoryEntry(IsOrig() ? 'W' : 'w',
+	                                win0_cnt, win0_thresh) )
+		Conn()->HistoryThresholdEvent(tcp_multiple_zero_windows,
+		                              IsOrig(), t);
+	}
@@ -166,6 +166,15 @@ public:

 	int ValidChecksum(const struct tcphdr* tp, int len) const;

+	// Called to inform endpoint that it has generated a checksum error.
+	void ChecksumError();
+
+	// Called to inform endpoint that it has generated a retransmission.
+	void DidRxmit();
+
+	// Called to inform endpoint that it has offered a zero window.
+	void ZeroWindow();
+
 	// Returns true if the data was used (and hence should be recorded
 	// in the save file), false otherwise.
 	int DataSent(double t, uint64 seq, int len, int caplen, const u_char* data,
@@ -188,6 +197,7 @@ public:
 #define HIST_MULTI_FLAG_PKT 0x40
 #define HIST_CORRUPT_PKT 0x80
 #define HIST_RXMIT 0x100
+#define HIST_WIN0 0x200
 	int CheckHistory(uint32 mask, char code);
 	void AddHistory(char code);

@@ -202,7 +212,7 @@ public:
 	double start_time, last_time;
 	IPAddr src_addr; // the other endpoint
 	IPAddr dst_addr; // this endpoint
-	uint32 window; // current congestion window (*scaled*, not pre-scaling)
+	uint32 window; // current advertised window (*scaled*, not pre-scaling)
 	int window_scale; // from the TCP option
 	uint32 window_ack_seq; // at which ack_seq number did we record 'window'
 	uint32 window_seq; // at which sending sequence number did we record 'window'
@@ -225,6 +235,11 @@ protected:
 	uint32 last_seq, ack_seq; // in host order
 	uint32 seq_wraps, ack_wraps; // Number of times 32-bit TCP sequence space
 	                             // has wrapped around (overflowed).
+
+	// Performance history accounting.
+	uint32 chk_cnt, chk_thresh;
+	uint32 rxmt_cnt, rxmt_thresh;
+	uint32 win0_cnt, win0_thresh;
 };

 #define ENDIAN_UNKNOWN 0
@@ -290,6 +290,43 @@ event tcp_contents%(c: connection, is_orig: bool, seq: count, contents: string%)
 ## TODO.
 event tcp_rexmit%(c: connection, is_orig: bool, seq: count, len: count, data_in_flight: count, window: count%);

+## Generated if a TCP flow crosses a checksum-error threshold, per
+## 'C'/'c' history reporting.
+##
+## c: The connection record for the TCP connection.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: the threshold that was crossed
+##
+## .. bro:see:: udp_multiple_checksum_errors
+##    tcp_multiple_zero_windows tcp_multiple_retransmissions
+event tcp_multiple_checksum_errors%(c: connection, is_orig: bool, threshold: count%);
+
+## Generated if a TCP flow crosses a zero-window threshold, per
+## 'W'/'w' history reporting.
+##
+## c: The connection record for the TCP connection.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: the threshold that was crossed
+##
+## .. bro:see:: tcp_multiple_checksum_errors tcp_multiple_retransmissions
+event tcp_multiple_zero_windows%(c: connection, is_orig: bool, threshold: count%);
+
+## Generated if a TCP flow crosses a retransmission threshold, per
+## 'T'/'t' history reporting.
+##
+## c: The connection record for the TCP connection.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: the threshold that was crossed
+##
+## .. bro:see:: tcp_multiple_checksum_errors tcp_multiple_zero_windows
+event tcp_multiple_retransmissions%(c: connection, is_orig: bool, threshold: count%);
+
 ## Generated when failing to write contents of a TCP stream to a file.
 ##
 ## c: The connection whose contents are being recorded.
@@ -20,6 +20,9 @@ UDP_Analyzer::UDP_Analyzer(Connection* conn)
 	conn->EnableStatusUpdateTimer();
 	conn->SetInactivityTimeout(udp_inactivity_timeout);
 	request_len = reply_len = -1;	// -1 means "haven't seen any activity"
+
+	req_chk_cnt = rep_chk_cnt = 0;
+	req_chk_thresh = rep_chk_thresh = 1;
 	}

 UDP_Analyzer::~UDP_Analyzer()
@@ -77,9 +80,19 @@ void UDP_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig,
 		Weird("bad_UDP_checksum");

 		if ( is_orig )
-			Conn()->CheckHistory(HIST_ORIG_CORRUPT_PKT, 'C');
+			{
+			uint32 t = req_chk_thresh;
+			if ( Conn()->ScaledHistoryEntry('C', req_chk_cnt,
+			                                req_chk_thresh) )
+				ChecksumEvent(is_orig, t);
+			}
 		else
-			Conn()->CheckHistory(HIST_RESP_CORRUPT_PKT, 'c');
+			{
+			uint32 t = rep_chk_thresh;
+			if ( Conn()->ScaledHistoryEntry('c', rep_chk_cnt,
+			                                rep_chk_thresh) )
+				ChecksumEvent(is_orig, t);
+			}

 		return;
 		}
@@ -209,6 +222,12 @@ unsigned int UDP_Analyzer::MemoryAllocation() const
 	return Analyzer::MemoryAllocation() + padded_sizeof(*this) - 24;
 	}

+void UDP_Analyzer::ChecksumEvent(bool is_orig, uint32 threshold)
+	{
+	Conn()->HistoryThresholdEvent(udp_multiple_checksum_errors,
+	                              is_orig, threshold);
+	}
+
 bool UDP_Analyzer::ValidateChecksum(const IP_Hdr* ip, const udphdr* up, int len)
 	{
 	uint32 sum;
@@ -31,6 +31,8 @@ protected:
 	bool IsReuse(double t, const u_char* pkt) override;
 	unsigned int MemoryAllocation() const override;

+	void ChecksumEvent(bool is_orig, uint32 threshold);
+
 	// Returns true if the checksum is valid, false if not
 	static bool ValidateChecksum(const IP_Hdr* ip, const struct udphdr* up,
 	                             int len);
@@ -44,6 +46,10 @@ private:
 #define HIST_RESP_DATA_PKT 0x2
 #define HIST_ORIG_CORRUPT_PKT 0x4
 #define HIST_RESP_CORRUPT_PKT 0x8
+
+	// For tracking checksum history.
+	uint32 req_chk_cnt, req_chk_thresh;
+	uint32 rep_chk_cnt, rep_chk_thresh;
 };

 } } // namespace analyzer::*
@@ -36,3 +36,16 @@ event udp_reply%(u: connection%);
 ##    udp_content_deliver_all_orig udp_content_deliver_all_resp
 ##    udp_content_delivery_ports_orig udp_content_delivery_ports_resp
 event udp_contents%(u: connection, is_orig: bool, contents: string%);
+
+## Generated if a UDP flow crosses a checksum-error threshold, per
+## 'C'/'c' history reporting.
+##
+## u: The connection record for the corresponding UDP flow.
+##
+## is_orig: True if the event is raised for the originator side.
+##
+## threshold: the threshold that was crossed
+##
+## .. bro:see:: udp_reply udp_request udp_session_done
+##    tcp_multiple_checksum_errors
+event udp_multiple_checksum_errors%(u: connection, is_orig: bool, threshold: count%);
@@ -137,6 +137,7 @@ Manager::Manager(bool arg_reading_pcaps)
 	{
 	bound_port = 0;
 	reading_pcaps = arg_reading_pcaps;
+	after_bro_init = false;
 	peer_count = 0;
 	log_topic_func = nullptr;
 	vector_of_data_type = nullptr;
@@ -184,22 +185,29 @@ void Manager::InitPostScript()
 		config.set("scheduler.max-threads", max_threads);
 	else
 		{
-		// On high-core-count systems, spawning one thread per core
-		// can lead to significant performance problems even if most
-		// threads are under-utilized. Related:
-		// https://github.com/actor-framework/actor-framework/issues/699
-		if ( reading_pcaps )
-			config.set("scheduler.max-threads", 2u);
-		else
-			{
-			auto hc = std::thread::hardware_concurrency();
-
-			if ( hc > 8u )
-				hc = 8u;
-			else if ( hc < 4u)
-				hc = 4u;
-
-			config.set("scheduler.max-threads", hc);
+		auto max_threads_env = getenv("BRO_BROKER_MAX_THREADS");
+
+		if ( max_threads_env )
+			config.set("scheduler.max-threads", atoi(max_threads_env));
+		else
+			{
+			// On high-core-count systems, letting CAF spawn a thread per core
+			// can lead to significant performance problems even if most
+			// threads are under-utilized. Related:
+			// https://github.com/actor-framework/actor-framework/issues/699
+			if ( reading_pcaps )
+				config.set("scheduler.max-threads", 2u);
+			else
+				{
+				// If the goal was to map threads to actors, 4 threads seems
+				// like a minimal default that could make sense -- the main
+				// actors that should be doing work are (1) the core,
+				// (2) the subscriber, (3) data stores (actually made of
+				// a frontend + proxy actor). Number of data stores may
+				// actually vary, but lumped togather for simplicity. A (4)
+				// may be CAF's multiplexing or other internals...
+				// 4 is also the minimum number that CAF uses by default,
+				// even for systems with less than 4 cores.
+				config.set("scheduler.max-threads", 4u);
+				}
 			}
 		}

@@ -840,14 +848,14 @@ RecordVal* Manager::MakeEvent(val_list* args, Frame* frame)
 bool Manager::Subscribe(const string& topic_prefix)
 	{
 	DBG_LOG(DBG_BROKER, "Subscribing to topic prefix %s", topic_prefix.c_str());
-	bstate->subscriber.add_topic(topic_prefix);
+	bstate->subscriber.add_topic(topic_prefix, ! after_bro_init);
 	return true;
 	}

 bool Manager::Unsubscribe(const string& topic_prefix)
 	{
 	DBG_LOG(DBG_BROKER, "Unsubscribing from topic prefix %s", topic_prefix.c_str());
-	bstate->subscriber.remove_topic(topic_prefix);
+	bstate->subscriber.remove_topic(topic_prefix, ! after_bro_init);
 	return true;
 	}
@@ -66,6 +66,9 @@ public:
 	 */
 	void InitPostScript();

+	void BroInitDone()
+		{ after_bro_init = true; }
+
 	/**
 	 * Shuts Broker down at termination.
 	 */
@@ -404,6 +407,7 @@ private:

 	uint16_t bound_port;
 	bool reading_pcaps;
+	bool after_bro_init;
 	int peer_count;

 	Func* log_topic_func;
@@ -1182,6 +1182,7 @@ int main(int argc, char** argv)
 	// Drain the event queue here to support the protocols framework configuring DPM
 	mgr.Drain();

+	broker_mgr->BroInitDone();
 	analyzer_mgr->DumpDebug();

 	have_pending_timers = ! reading_traces && timer_mgr->Size() > 0;
@@ -227,9 +227,9 @@ threading::Value* Ascii::ParseValue(const string& s, const string& name, TypeTag
 		}

 	case TYPE_BOOL:
-		if ( s == "T" )
+		if ( s == "T" || s == "1" )
 			val->val.int_val = 1;
-		else if ( s == "F" )
+		else if ( s == "F" || s == "0" )
 			val->val.int_val = 0;
 		else
 			{
@@ -261,8 +261,10 @@ threading::Value* Ascii::ParseValue(const string& s, const string& name, TypeTag
 		break;

 	case TYPE_PORT:
+		{
 		val->val.port_val.proto = TRANSPORT_UNKNOWN;
 		pos = s.find('/');
+		string numberpart;
 		if ( pos != std::string::npos && s.length() > pos + 1 )
 			{
 			auto proto = s.substr(pos+1);
@@ -272,10 +274,21 @@ threading::Value* Ascii::ParseValue(const string& s, const string& name, TypeTag
 				val->val.port_val.proto = TRANSPORT_UDP;
 			else if ( strtolower(proto) == "icmp" )
 				val->val.port_val.proto = TRANSPORT_ICMP;
+			else if ( strtolower(proto) == "unknown" )
+				val->val.port_val.proto = TRANSPORT_UNKNOWN;
 			else
 				GetThread()->Warning(GetThread()->Fmt("Port '%s' contained unknown protocol '%s'", s.c_str(), proto.c_str()));
 			}

+		if ( pos != std::string::npos && pos > 0 )
+			{
+			numberpart = s.substr(0, pos);
+			start = numberpart.c_str();
+			}
 		val->val.port_val.port = strtoull(start, &end, 10);
 		if ( CheckNumberError(start, end) )
 			goto parse_error;
+		}
 		break;

 	case TYPE_SUBNET:
@@ -8,6 +8,7 @@ brief: make-brief coverage
 distclean:
 	@rm -f coverage.log
 	$(MAKE) -C btest $@
+	$(MAKE) -C coverage $@

 make-verbose:
 	@for repo in $(DIRS); do (cd $$repo && make -s ); done
@@ -22,4 +23,6 @@ coverage:
 	@echo "Complete test suite code coverage:"
 	@./scripts/coverage-calc "brocov.tmp.*" coverage.log `pwd`/../scripts
 	@rm -f brocov.tmp.*
+	@cd coverage && make coverage
+
 .PHONY: coverage
@@ -1,3 +1,3 @@
 Ok error
-171249.90868
+167377.950902
 Ok error

@@ -1 +0,0 @@
-error in /Users/johanna/corelight/bro/testing/btest/.tmp/core.option-errors-4/option-errors.bro, line 2 and /Users/johanna/corelight/bro/testing/btest/.tmp/core.option-errors-4/option-errors.bro, line 3: already defined (testopt)

@@ -1 +1,2 @@
 6
+7

@@ -3,8 +3,8 @@
 #empty_field (empty)
 #unset_field -
 #path conn
-#open 2018-07-06-12-25-54
+#open 2018-08-01-20-09-03
 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig local_resp missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
 #types time string addr port addr port enum string interval count count string bool bool count string count count count count set[string]
-1523351398.449222 CHhAvVGS1DHFjwGM9 1.1.1.1 20394 2.2.2.2 443 tcp - 273.626833 11352 4984 SF - - 0 ShADdtaTFf 44 25283 42 13001 -
-#close 2018-07-06-12-25-54
+1523351398.449222 CHhAvVGS1DHFjwGM9 1.1.1.1 20394 2.2.2.2 443 tcp - 273.626833 11352 4984 SF - - 0 ShADdtaTTFf 44 25283 42 13001 -
+#close 2018-08-01-20-09-03

testing/btest/Baseline/core.tunnels.gre-pptp/conn.log (new file)
@@ -0,0 +1,10 @@
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path conn
+#open 2018-08-14-21-42-31
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig local_resp missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
+#types time string addr port addr port enum string interval count count string bool bool count string count count count count set[string]
+1417577703.821897 C4J4Th3PJpwUYZZ6gc 172.16.44.3 40768 8.8.8.8 53 udp dns 0.213894 71 146 SF - - 0 Dd 1 99 1 174 ClEkJM2Vm5giqnMf4h
+#close 2018-08-14-21-42-31

testing/btest/Baseline/core.tunnels.gre-pptp/dns.log (new file)
@@ -0,0 +1,10 @@
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path dns
+#open 2018-08-14-21-42-31
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id rtt query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected
+#types time string addr port addr port enum count interval string count string count string count string bool bool bool bool count vector[string] vector[interval] bool
+1417577703.821897 C4J4Th3PJpwUYZZ6gc 172.16.44.3 40768 8.8.8.8 53 udp 42540 - xqt-detect-mode2-97712e88-167a-45b9-93ee-913140e76678 1 C_INTERNET 28 AAAA 3 NXDOMAIN F F T F 0 - - F
+#close 2018-08-14-21-42-31

testing/btest/Baseline/core.tunnels.gre-pptp/tunnel.log (new file)
@@ -0,0 +1,11 @@
+#separator \x09
+#set_separator ,
+#empty_field (empty)
+#unset_field -
+#path tunnel
+#open 2018-08-14-21-42-31
+#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action
+#types time string addr port addr port enum enum
+1417577703.821897 CHhAvVGS1DHFjwGM9 2402:f000:1:8e01::5555 0 2607:fcd0:100:2300::b108:2a6b 0 Tunnel::IP Tunnel::DISCOVER
+1417577703.821897 ClEkJM2Vm5giqnMf4h 16.0.0.200 0 192.52.166.154 0 Tunnel::GRE Tunnel::DISCOVER
+#close 2018-08-14-21-42-31

@@ -7,10 +7,10 @@ event bro_init()
 	local v1: vector of count;
 	local v2 = vector(1, 2, 3, 4);

-	v1[|v1|] = 1;
-	v1[|v1|] = 2;
-	v1[|v1|] = 3;
-	v1[|v1|] = 4;
+	v1 += 1;
+	v1 += 2;
+	v1 += 3;
+	v1 += 4;

 	print fmt("contents of v1: %s", v1);
 	print fmt("length of v1: %d", |v1|);

@@ -42,3 +42,30 @@ remove element (PASS)
 !in operator (PASS)
 remove element (PASS)
 !in operator (PASS)
+union (PASS)
+intersection (FAIL)
+difference (PASS)
+difference (PASS)
+union/inter. (PASS)
+relational (PASS)
+relational (PASS)
+subset (FAIL)
+subset (FAIL)
+subset (PASS)
+superset (FAIL)
+superset (FAIL)
+superset (FAIL)
+superset (PASS)
+non-ordering (FAIL)
+non-ordering (PASS)
+superset (PASS)
+superset (FAIL)
+superset (PASS)
+superset (PASS)
+superset (PASS)
+superset (FAIL)
+equality (PASS)
+equality (FAIL)
+non-equality (PASS)
+equality (FAIL)
+magnitude (FAIL)

@@ -1,2 +1,4 @@
-expression error in /home/robin/bro/lang-ext/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: cannot cast value of type 'count' to type 'string' [a as string]
-expression error in /home/robin/bro/lang-ext/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: cannot cast value of type 'record { a:addr; b:port; }' to type 'string' [a as string]
+expression error in /Users/jon/projects/bro/bro/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: invalid cast of value with type 'count' to type 'string' [a as string]
+expression error in /Users/jon/projects/bro/bro/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: invalid cast of value with type 'record { a:addr; b:port; }' to type 'string' [a as string]
+expression error in /Users/jon/projects/bro/bro/testing/btest/.tmp/language.type-cast-error-dynamic/type-cast-error-dynamic.bro, line 11: invalid cast of value with type 'record { data:opaque of Broker::Data; }' to type 'string' (nil $data field) [a as string]
+data is string, F

@@ -57,3 +57,4 @@ access element (PASS)
 % operator (PASS)
 && operator (PASS)
 || operator (PASS)
++= operator (PASS)

@@ -2,4 +2,10 @@ Connected to a peer
 Connected to a peer
 Connected to a peer
 Connected to a peer
+got fully_connected event from, worker-1
 Connected to a peer
+got fully_connected event from, proxy-1
+got fully_connected event from, proxy-2
+got fully_connected event from, manager-1
+got fully_connected event from, worker-2
+termination condition met: shutting down

@@ -3,3 +3,4 @@ Connected to a peer
 Connected to a peer
 Connected to a peer
 Connected to a peer
+sent fully_connected event

@@ -2,3 +2,4 @@ Connected to a peer
 Connected to a peer
 Connected to a peer
 Connected to a peer
+sent fully_connected event

@@ -2,3 +2,4 @@ Connected to a peer
 Connected to a peer
 Connected to a peer
 Connected to a peer
+sent fully_connected event

@@ -2,3 +2,4 @@ Connected to a peer
 Connected to a peer
 Connected to a peer
 Connected to a peer
+sent fully_connected event

@@ -2,3 +2,4 @@ Connected to a peer
 Connected to a peer
 Connected to a peer
 Connected to a peer
+sent fully_connected event

@@ -0,0 +1 @@
+received termination signal

@@ -3,21 +3,23 @@
 #empty_field (empty)
 #unset_field -
 #path config
-#open 2017-10-11-20-23-11
+#open 2018-08-10-18-16-52
 #fields ts id old_value new_value location
 #types time string string string string
-1507753391.587107 testbool T F ../configfile
-1507753391.587107 testcount 0 1 ../configfile
-1507753391.587107 testcount 1 2 ../configfile
-1507753391.587107 testint 0 -1 ../configfile
-1507753391.587107 testenum SSH::LOG Conn::LOG ../configfile
-1507753391.587107 testport 42/tcp 45/unknown ../configfile
-1507753391.587107 testaddr 127.0.0.1 127.0.0.1 ../configfile
-1507753391.587107 testaddr 127.0.0.1 2607:f8b0:4005:801::200e ../configfile
-1507753391.587107 testinterval 1.0 sec 60.0 ../configfile
-1507753391.587107 testtime 0.0 1507321987.0 ../configfile
-1507753391.587107 test_set (empty) b,c,a,d,erdbeerschnitzel ../configfile
-1507753391.587107 test_vector (empty) 1,2,3,4,5,6 ../configfile
-1507753391.587107 test_set b,c,a,d,erdbeerschnitzel (empty) ../configfile
-1507753391.587107 test_set (empty) \x2d ../configfile
-#close 2017-10-11-20-23-11
+1533925012.140634 testbool T F ../configfile
+1533925012.140634 testcount 0 1 ../configfile
+1533925012.140634 testcount 1 2 ../configfile
+1533925012.140634 testint 0 -1 ../configfile
+1533925012.140634 testenum SSH::LOG Conn::LOG ../configfile
+1533925012.140634 testport 42/tcp 45/unknown ../configfile
+1533925012.140634 testporttcp 40/udp 42/tcp ../configfile
+1533925012.140634 testportudp 40/tcp 42/udp ../configfile
+1533925012.140634 testaddr 127.0.0.1 127.0.0.1 ../configfile
+1533925012.140634 testaddr 127.0.0.1 2607:f8b0:4005:801::200e ../configfile
+1533925012.140634 testinterval 1.0 sec 60.0 ../configfile
+1533925012.140634 testtime 0.0 1507321987.0 ../configfile
+1533925012.140634 test_set (empty) b,c,a,d,erdbeerschnitzel ../configfile
+1533925012.140634 test_vector (empty) 1,2,3,4,5,6 ../configfile
+1533925012.140634 test_set b,c,a,d,erdbeerschnitzel (empty) ../configfile
+1533925012.140634 test_set (empty) \x2d ../configfile
+#close 2018-08-10-18-16-52

@@ -1,5 +1,5 @@
 {
-[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, pp=5/icmp, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, ns=4242, sc={
+[-42] = [b=T, bt=T, e=SSH::LOG, c=21, p=123/unknown, pp=5/icmp, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, ns=4242, sc={
 2,
 4,
 1,

@@ -0,0 +1,2 @@
+warning: ../input.log/Input::READER_ASCII: Port '50/trash' contained unknown protocol 'trash'
+received termination signal

@@ -0,0 +1,4 @@
+[i=1.2.3.4], [p=80/tcp]
+[i=1.2.3.5], [p=52/udp]
+[i=1.2.3.6], [p=30/unknown]
+[i=1.2.3.7], [p=50/unknown]

@@ -21,6 +21,7 @@ coverage:
 cleanup:
 	@rm -f $(DIAG)
 	@rm -rf $(SCRIPT_COV)*
+	@find ../../ -name "*.gcda" -exec rm {} \;

 distclean: cleanup
 	@rm -rf .btest.failed.dat \
Some files were not shown because too many files have changed in this diff.