diff --git a/CHANGES b/CHANGES index 54858ea180..aec6862400 100644 --- a/CHANGES +++ b/CHANGES @@ -1,4 +1,1659 @@ +2.1-84 | 2012-10-19 15:12:56 -0700 + + * Added a BiF strptime() to wrap the corresponding C function. (Seth + Hall) + +2.1-82 | 2012-10-19 15:05:40 -0700 + + * Add IPv6 support to signature header conditions. (Jon Siwek) + + - "src-ip" and "dst-ip" conditions can now use IPv6 addresses/subnets. + They must be written in colon-hexadecimal representation and enclosed + in square brackets (e.g. [fe80::1]). Addresses #774. + + - "icmp6" is now a valid protocol for use with "ip-proto" and "header" + conditions. This allows signatures to be written that can match + against ICMPv6 payloads. Addresses #880. + + - "ip6" is now a valid protocol for use with the "header" condition. + (also the "ip-proto" condition, but it results in a no-op in that + case since signatures apply only to the inner-most IP packet when + packets are tunneled). This allows signatures to match specifically + against IPv6 packets (whereas "ip" only matches against IPv4 packets). + + - "ip-proto" conditions can now match against IPv6 packets. Before, + IPv6 packets were just silently ignored, which meant DPD based on + signatures did not function for IPv6 -- protocol analyzers would only + get attached to a connection over IPv6 based on the well-known ports + set in the "dpd_config" table. + +2.1-80 | 2012-10-19 14:48:42 -0700 + + * Change how "gridftp" gets added to the service field of connection + records. In addition to checking for a finished SSL handshake over + an FTP connection, it now also requires that the SSL handshake + occurs after the FTP client requested AUTH GSSAPI, more + specifically identifying the characteristics of GridFTP control + channels. Addresses #891. (Jon Siwek) + + * Allow faster rebuilds in certain cases. Previously, when + rebuilding with a different "--prefix" or "--scriptdir", all Bro + source files were recompiled. With this change, only util.cc is + recompiled. (Daniel Thayer) + +2.1-76 | 2012-10-12 10:32:39 -0700 + + * Add support for recognizing GridFTP connections as an extension to + the standard FTP analyzer. (Jon Siwek) + + This is enabled by default and includes: + + - An analyzer for the GSI mechanism of the GSSAPI FTP AUTH method. GSI + authentication involves an encoded TLS/SSL handshake over the + FTP control session. For FTP sessions that attempt GSI + authentication, the *service* field of the connection log will + include "gridftp" (as well as "ftp" and "ssl"). + + - Add an example of a GridFTP data channel detection script. It + relies on the heuristics that GridFTP data channels commonly + default to SSL mutual authentication with a NULL bulk cipher + and that they usually transfer large datasets (the script's + default threshold is 1 GB). The script also defaults to calling + skip_further_processing() after detection to try to save + cycles analyzing the large, benign connection. + + For identified GridFTP data channels, the *service* field of + the connection log will include "gridftp-data". + + * Add *client_subject* and *client_issuer_subject* as &log'd fields + to the SSL::Info record. Also add *client_cert* and + *client_cert_chain* fields to track the client cert chain. (Jon Siwek) + + * Add a script in base/protocols/conn/polling that generalizes the + process of polling a connection for interesting features. The + GridFTP data channel detection script depends on it to monitor + bytes transferred.
(Jon Siwek) + +2.1-68 | 2012-10-12 09:46:41 -0700 + + * Rename the Input Framework's update_finished event to end_of_data. + It will now not only fire after table-reads have been completed, + but also after the last event of a whole-file-read (or + whole-db-read, etc.). (Bernhard Amann) + + * Fix for DNS log problem when a DNS response is seen with 0 RRs. + (Seth Hall) + +2.1-64 | 2012-10-12 09:36:41 -0700 + + * Teach --disable-dataseries/--disable-elasticsearch to ./configure. + Addresses #877. (Jon Siwek) + + * Add --with-curl option to ./configure. Addresses #877. (Jon Siwek) + +2.1-61 | 2012-10-12 09:32:48 -0700 + + * Fix bug in the input framework: the config table did not work. + (Bernhard Amann) + +2.1-58 | 2012-10-08 10:10:09 -0700 + + * Fix a problem with non-manager cluster nodes applying + Notice::policy. This could, for example, result in duplicate + emails being sent if Notice::emailed_types is redef'd in local.bro + (or any script that gets loaded on all cluster nodes). (Jon Siwek) + +2.1-56 | 2012-10-03 16:04:52 -0700 + + * Add general FAQ entry about upgrading Bro. (Jon Siwek) + +2.1-53 | 2012-10-03 16:00:40 -0700 + + * Add new Tunnel::delay_teredo_confirmation option that indicates + that the Teredo analyzer should wait until it sees both sides of a + connection using a valid Teredo encapsulation before issuing a + protocol_confirmation. Default is on. Addresses #890. (Jon Siwek) + +2.1-50 | 2012-10-02 12:06:08 -0700 + + * Fix a typing issue that prevented the ElasticSearch timeout to + work. (Matthias Vallentin) + + * Use second granularity for ElasticSearch timeouts. (Matthias + Vallentin) + + * Fix compile issues with older versions of libcurl, which don't + offer *_MS timeout constants. (Matthias Vallentin) + +2.1-47 | 2012-10-02 11:59:29 -0700 + + * Fix for the input framework: BroStrings were constructed without a + final \0, which makes them unusable by basically all internal + functions (like to_count). (Bernhard Amann) + + * Remove deprecated script functionality (see NEWS for details). + (Daniel Thayer) + +2.1-39 | 2012-09-29 14:09:16 -0700 + + * Reliability adjustments to istate tests with network + communication. (Jon Siwek) + +2.1-37 | 2012-09-25 14:21:37 -0700 + + * Reenable some tests that previously would cause Bro to exit with + an error. (Daniel Thayer) + + * Fix parsing of large integers on 32-bit systems. (Daniel Thayer) + + * Serialize language.when unit test with the "comm" group. (Jon + Siwek) + +2.1-32 | 2012-09-24 16:24:34 -0700 + + * Fix race condition in language/when.bro test. (Daniel Thayer) + +2.1-26 | 2012-09-23 08:46:03 -0700 + + * Add an item to FAQ page about broctl options. (Daniel Thayer) + + * Add more language tests. We now have tests of all built-in Bro + data types (including different representations of constant + values, and max./min. values), keywords, and operators (including + special properties of certain operators, such as short-circuit + evaluation and associativity). (Daniel Thayer) + + * Fix construction of ip6_ah (Authentication Header) record values. + + Authentication Headers with a Payload Len field set to zero would + cause a crash due to invalid memory allocation because the + previous code assumed Payload Len would always be great enough to + contain all mandatory fields of the header. (Jon Siwek) + + * Update compile/dependency docs for OS X. (Jon Siwek) + + * Adjusting Mac binary packaging script. 
Setting CMAKE_PREFIX_PATH + helps link against standard system libs instead of ones that come + from other package managers (e.g. MacPorts). (Jon Siwek) + + * Adjusting some unit tests that do cluster communication. (Jon Siwek) + + * Small change to non-blocking DNS initialization. (Jon Siwek) + + * Reorder a few statements in scan.l to make 1.5msecs etc. work. + Addresses #872. (Bernhard Amann) + +2.1-6 | 2012-09-06 23:23:14 -0700 + + * Fixed a bug where "a -= b" (both operands are intervals) was not + allowed in Bro scripts (although "a = a - b" is allowed). (Daniel + Thayer) + + * Fixed a bug where the "!=" operator with subnet operands was + treated the same as the "==" operator. (Daniel Thayer) + + * Add sleeps to configuration_update test for better reliability. + (Jon Siwek) + + * Fix a segfault when iterating over a set when using a malformed + index. (Daniel Thayer) + +2.1 | 2012-08-28 16:46:42 -0700 + + * Make bif.identify_magic robust against FreeBSD's libmagic config. + (Robin Sommer) + + * Remove automatic use of gperftools on non-Linux systems. + --enable-perftools must now explicitly be supplied to ./configure + on non-Linux systems to link against the tcmalloc library. + + * Fix uninitialized value for 'is_partial' in TCP analyzer. (Jon + Siwek) + + * Parse 64-bit consts in Bro scripts correctly. (Bernhard Amann) + + * Output 64-bit counts correctly on 32-bit machines. (Bernhard Amann) + + * Input framework fixes, including: (Bernhard Amann) + + - One of the change events got the wrong parameters. + + - Escape commas in sets and vectors that were unescaped before + tokenization. + + - Handling of zero-length strings as the last element in a set was + broken (sets ending with a ","). + + - Hashing of lines just containing zero-length strings was broken. + + - Make set_separators different from "," work for the input framework. + + - Input framework was not handling counts and ints out of + 32-bit range correctly. + + - Errors in single lines do not kill processing; instead the line is + logged and ignored, and processing continues. + + * Update documentation for builtin types. (Daniel Thayer) + + - Add missing description of the interval "msec" unit. + + - Improved description of pattern by clarifying the issue of + operand order and the difference between exact and embedded + matching. + + * Documentation fixes for signature 'eval' conditions. (Jon Siwek) + + * Remove orphaned 1.5 unit tests. (Jon Siwek) + + * Add type checking for signature 'eval' condition functions. (Jon + Siwek) + + * Adding an identifier to the SMTP blocklist notices for duplicate + suppression. (Seth Hall) + +2.1-beta-45 | 2012-08-22 16:11:10 -0700 + + * Add an option to the input framework that allows the user to choose + not to die upon encountering files/functions. (Bernhard Amann) + +2.1-beta-41 | 2012-08-22 16:05:21 -0700 + + * Add test serialization to "leak" unit tests that use + communication. (Jon Siwek) + + * Change to metrics/basic-cluster unit test for reliability. (Jon + Siwek) + + * Fixed ack tracking, which could overflow quickly in some + situations. (Seth Hall) + + * Minor tweak to coverage.bare-mode-errors unit test to work with a + symlinked 'scripts' dir. (Jon Siwek) + +2.1-beta-35 | 2012-08-22 08:44:52 -0700 + + * Add testcase for input framework reading sets (rather than + tables). (Bernhard Amann) + +2.1-beta-31 | 2012-08-21 15:46:05 -0700 + + * Tweak to rotate-custom.bro unit test. (Jon Siwek) + + * Ignore small mem leak every rotation interval for dataseries logs.
(Jon Siwek) + +2.1-beta-28 | 2012-08-21 08:32:42 -0700 + + * Linking ES docs into logging document. (Robin Sommer) + +2.1-beta-27 | 2012-08-20 20:06:20 -0700 + + * Add the Stream record to Log::active_streams to make more dynamic + logging possible. (Seth Hall) + + * Fix portability of printing to files returned by + open("/dev/stderr"). (Jon Siwek) + + * Fix mime type diff canonifier to also skip mime_desc columns. (Jon + Siwek) + + * Unit test tweaks/fixes. (Jon Siwek) + + - Some baselines for tests in "leaks" group were outdated. + + - Changed a few of the cluster/communication tests to terminate + more explicitly instead of relying on btest-bg-wait to kill + processes. This makes the tests finish faster in the success case + and makes the reason for failing clearer in the failure case. + + * Fix memory leak of serialized IDs when compiled with + --enable-debug. (Jon Siwek) + +2.1-beta-21 | 2012-08-16 11:48:56 -0700 + + * Installing a handler for running out of memory in "new". Bro will + now print an error message in that case rather than abort with an + uncaught exception. (Robin Sommer) + +2.1-beta-20 | 2012-08-16 11:43:31 -0700 + + * Fixed potential problems with ElasticSearch output plugin. (Seth + Hall) + +2.1-beta-13 | 2012-08-10 12:28:04 -0700 + + * Reporter warnings and errors now print to stderr by default. New + options Reporter::warnings_to_stderr and + Reporter::errors_to_stderr to disable. (Seth Hall) + +2.1-beta-9 | 2012-08-10 12:24:29 -0700 + + * Add more BIF tests. (Daniel Thayer) + +2.1-beta-6 | 2012-08-10 12:22:52 -0700 + + * Fix bug in input framework with an edge case. (Bernhard Amann) + + * Fix small bug in input framework test script. (Bernhard Amann) + +2.1-beta-3 | 2012-08-03 10:46:49 -0700 + + * Merge branch 'master' of ssh://git.bro-ids.org/bro (Robin Sommer) + + * Fix configure script to exit with non-zero status on error. (Jon + Siwek) + + * Improve ASCII output performance. (Robin Sommer) + +2.1-beta | 2012-07-30 11:59:53 -0700 + + * Improve log filter compatibility with remote logging. Addresses + #842. (Jon Siwek) + +2.0-907 | 2012-07-30 09:13:36 -0700 + + * Add missing breaks to switch cases in + ElasticSearch::HTTPReceive(). (Jon Siwek) + +2.0-905 | 2012-07-28 16:24:34 -0700 + + * Fix log manager hanging while waiting for pending file rotations, + plus writer API tweak for failed rotations. Addresses #860. (Jon + Siwek and Robin Sommer) + + * Tweaking logs-to-elasticsearch.bro so that it doesn't do anything + if the ES server is unset. (Robin Sommer) + +2.0-902 | 2012-07-27 12:42:13 -0700 + + * New variable in logging framework Log::active_streams to indicate + Log::ID enums which are currently active. (Seth Hall) + + * Reworked how the logs-to-elasticsearch script works to stop + abusing the logging framework. (Seth Hall) + + * Fix input test for recent default change on fastpath. (Robin + Sommer) + +2.0-898 | 2012-07-27 12:22:03 -0700 + + * Small (potential performance) improvement for logging framework. (Seth Hall) + + * Script-level rotation postprocessor fix. This fixes a problem with + writers that don't have a postprocessor. (Seth Hall) + + * Update input framework documentation to reflect want_record + change. (Bernhard Amann) + + * Fix crash when encountering an InterpreterException in a predicate + in the logging or input framework. (Bernhard Amann) + + * Input framework: Make want_record=T the default for events. + (Bernhard Amann) + + * Changing the start/end markers in logs to open/close, now + reflecting wall clock time.
(Robin Sommer) + +2.0-891 | 2012-07-26 17:15:10 -0700 + + * Reader/writer API: preventing plugins from receiving further + messages after a failure. (Robin Sommer) + + * New test for input framework that fails to find a file. (Robin + Sommer) + + * Improving error handling for threads. (Robin Sommer) + + * Tweaking the custom-rotate test to produce stable output. (Robin + Sommer) + +2.0-884 | 2012-07-26 14:33:21 -0700 + + * Add comprehensive error handling for close() calls. (Jon Siwek) + + * Add more test cases for input framework. (Bernhard Amann) + + * Input framework: make error output for non-matching event types + much more verbose. (Bernhard Amann) + +2.0-877 | 2012-07-25 17:20:34 -0700 + + * Fix double close() in FileSerializer class. (Jon Siwek) + + * Fix build warnings. (Daniel Thayer) + + * Fixes to ElasticSearch plugin to make libcurl handle HTTP + responses correctly. (Seth Hall) + + * Fixing FreeBSD compiler error. (Robin Sommer) + + * Silencing compiler warnings. (Robin Sommer) + +2.0-871 | 2012-07-25 13:08:00 -0700 + + * Fix complaint from valgrind about uninitialized memory usage. (Jon + Siwek) + + * Prevent differing log filters of streams from writing to the same + writer/path (which now produces a warning, but is otherwise + skipped for the second). Addresses #842. (Jon Siwek) + + * Fix tests and error message for to_double BIF. (Daniel Thayer) + + * Compile fix. (Robin Sommer) + +2.0-866 | 2012-07-24 16:02:07 -0700 + + * Correct a typo in usage message. (Daniel Thayer) + + * Fix file permissions of log files (which were created with execute + permissions after a recent change). (Daniel Thayer) + +2.0-862 | 2012-07-24 15:22:52 -0700 + + * Fix initialization problem in logging class. (Jon Siwek) + + * Input framework now accepts escaped ASCII values as input (\x##), + and unescapes appropriately. (Bernhard Amann) + + * Make reading ASCII logfiles work when the input separator is + different from \t. (Bernhard Amann) + + * A number of smaller fixes for input framework. (Bernhard Amann) + +2.0-851 | 2012-07-24 15:04:14 -0700 + + * New built-in function to_double(s: string). (Scott Campbell) + +2.0-849 | 2012-07-24 11:06:16 -0700 + + * Adding missing include needed on some systems. (Robin Sommer) + +2.0-846 | 2012-07-23 16:36:37 -0700 + + * Fix WriterBackend::WriterInfo serialization, reenable ascii + start/end tags. (Jon Siwek) + +2.0-844 | 2012-07-23 16:20:59 -0700 + + * Reworking parts of the internal threading/logging/input APIs for + thread-safety. (Robin Sommer) + + * Bugfix for SSL version check. (Bernhard Amann) + + * Changing an HTTP DPD from port 3138 to 3128. Addresses #857. (Robin + Sommer) + + * ElasticSearch logging writer. See logging-elasticsearch.rst for + more information. (Vlad Grigorescu and Seth Hall). + + * Give configure a --disable-perftools option to disable Perftools + support even if found. (Robin Sommer) + + * The ASCII log writer now includes "#start" and "#end" + lines in each file. (Robin Sommer) + + * Renamed ASCII logger "header" options to "meta". (Robin Sommer) + + * ASCII logs now escape '#' at the beginning of log lines. Addresses + #763. (Robin Sommer) + + * Fix bug where in dns.log rcode was always set to 0/NOERROR when + no reply packet was seen. (Bernhard Amann) + + * Updating to Mozilla's current certificate bundle. (Seth Hall) + +2.0-769 | 2012-07-13 16:17:33 -0700 + + * Fix some Info:Record field documentation. (Vlad Grigorescu) + + * Fix overrides of TCP_ApplicationAnalyzer::EndpointEOF.
(Jon Siwek) + + * Fix segfault when incrementing whole vector values. Also removed + the RefExpr::Eval(Val*) method since it was never called. (Jon Siwek) + + * Remove baselines for some leak-detecting unit tests. (Jon Siwek) + + * Unblock SIGFPE, SIGILL, SIGSEGV and SIGBUS for threads, so that + they now propagate to the main thread. Addresses #848. (Bernhard + Amann) + +2.0-761 | 2012-07-12 08:14:38 -0700 + + * Some small fixes to further reduce SOCKS false positive logs. (Seth Hall) + + * Calls to pthread_mutex_unlock now log the reason for failures. + (Bernhard Amann) + +2.0-757 | 2012-07-11 08:30:19 -0700 + + * Fixing memory leak. (Seth Hall) + +2.0-755 | 2012-07-10 16:25:16 -0700 + + * Add sorting canonifier to rotate-custom unit test. Addresses #846. + (Jon Siwek) + + * Fix many compiler warnings. (Daniel Thayer) + + * Fix segfault when there's an error/timeout resolving DNS requests. + Addresses #846. (Jon Siwek) + + * Remove a non-portable test case. (Daniel Thayer) + + * Fix typos in input framework doc. (Daniel Thayer) + + * Fix typos in DataSeries documentation. (Daniel Thayer) + + * Bugfix making custom rotate functions work again. (Robin Sommer) + + * Tiny bugfix for returning writer name. (Robin Sommer) + + * Moving make target update-doc-sources from top-level Makefile to + btest Makefile. (Robin Sommer) + +2.0-733 | 2012-07-02 15:31:24 -0700 + + * Extending the input reader DoInit() API. (Bernhard Amann). It now + provides an Info struct similar to what we introduced for log + writers, including a corresponding "config" key/value table. + + * Fix to make writer-info work when debugging is enabled. (Bernhard + Amann) + +2.0-726 | 2012-07-02 15:19:15 -0700 + + * Extending the log writer DoInit() API. (Robin Sommer) + + We now pass in an Info struct that contains: + + - the path name (as before) + - the rotation interval + - the log_rotate_base_time in seconds + - a table of key/value pairs with further configuration options. + + To fill the table, log filters have a new field "config: table[string] + of strings". This gives a way to pass arbitrary values from + script-land to writers. Interpretation is left up to the writer. + + * Split calc_next_rotate() into two functions, one of which is + thread-safe and can be used with the log_rotate_base_time value + from DoInit(). + + * Updates to the None writer. (Robin Sommer) + + - It gets its own script writers/none.bro. + + - New bool option LogNone::debug to enable debug output. It then + prints out all the values passed to DoInit(). + + - Fixed a bug that prevented Bro from terminating. + +2.0-723 | 2012-07-02 15:02:56 -0700 + + * Extract ICMPv6 NDP options and include them in ICMP events. This adds + a new parameter of type "icmp6_nd_options" to the ICMPv6 neighbor + discovery events. Addresses #833. (Jon Siwek) + + * Set input frontend type before starting the thread. This means + that the thread type will be output correctly in the error + message. (Bernhard Amann) + +2.0-719 | 2012-07-02 14:49:03 -0700 + + * Fix inconsistencies in random number generation. The + srand()/rand() interface was being intermixed with the + srandom()/random() one. The latter is now used throughout. (Jon + Siwek) + + * Changed the srand() and rand() BIFs to work deterministically if + Bro was given a seed file. Addresses #825. (Jon Siwek) + + * Updating input framework unit tests to make them more reliable and + execute more quickly. (Jon Siwek) + + * Fixed race condition in writer and reader initializations.
(Jon + Siwek) + + * Small tweak to make a test complete more quickly. (Jon Siwek) + + * Drain events before terminating log/thread managers. (Jon Siwek) + + * Fix strict-aliasing warning in RemoteSerializer.cc. Addresses + #834. (Jon Siwek) + + * Fix typos in event documentation. (Daniel Thayer) + + * Fix typos in NEWS for Bro 2.1 beta. (Daniel Thayer) + +2.0-709 | 2012-06-21 10:14:24 -0700 + + * Fix exceptions thrown in event handlers preventing others from running. (Jon Siwek) + + * Add another SOCKS command. (Seth Hall) + + * Fixed some problems with the SOCKS analyzer and tests. (Seth Hall) + + * Updating NEWS in preparation for beta. (Robin Sommer) + + * Accepting different AF_INET6 values for loopback link headers. + (Robin Sommer) + +2.0-698 | 2012-06-20 14:30:40 -0700 + + * Updates for the SOCKS analyzer (Seth Hall). + + - A SOCKS log! + + - Now supports SOCKSv5 in the analyzer and the DPD sigs. + + - Added protocol violations. + + * Updates to the tunnels framework. (Seth Hall) + + - Make the uid field optional since it's conceptually incorrect + for proxies being treated as tunnels to have it. + + - Reordered two fields in the log. + + - Reduced the default tunnel expiration interval to something + more reasonable (1 hour). + + * Make Teredo bubble packet parsing more lenient. (Jon Siwek) + + * Fix a crash in NetSessions::ParseIPPacket(). (Jon Siwek) + +2.0-690 | 2012-06-18 16:01:33 -0700 + + * Support for decapsulating tunnels via the new tunnel framework in + base/frameworks/tunnels. + + Bro currently supports Teredo, AYIYA, IP-in-IP (both IPv4 and + IPv6), and SOCKS. For all these, it logs the outer tunnel + connections in both conn.log and tunnel.log, and proceeds to + analyze the inner payload as if it were not tunneled, including + also logging it in conn.log (with a new tunnel_parents column + pointing back to the outer connection(s)). (Jon Siwek, Seth Hall, + Gregor Maier) + + * The options "tunnel_port" and "parse_udp_tunnels" have been + removed. (Jon Siwek) + +2.0-623 | 2012-06-15 16:24:52 -0700 + + * Changing an error in the input framework to a warning. (Robin + Sommer) + +2.0-622 | 2012-06-15 15:38:43 -0700 + + * Input framework updates. (Bernhard Amann) + + - Disable streaming reads from executed commands. This led to + hanging Bros because pclose apparently can wait forever if + things go wrong. + + - Automatically delete disabled input streams. + + - Documentation. + +2.0-614 | 2012-06-15 15:19:49 -0700 + + * Remove an old, unused diff canonifier. (Jon Siwek) + + * Improve an error message in ICMP analyzer. (Jon Siwek) + + * Fix a warning message when building docs. (Daniel Thayer) + + * Fix many errors in the event documentation. (Daniel Thayer) + +2.0-608 | 2012-06-11 15:59:00 -0700 + + * Add more error handling code to logging of enum vals. Addresses + #829. (Jon Siwek) + +2.0-606 | 2012-06-11 15:55:56 -0700 + + * Fix summary lines for BIF documentation and correct the + description of "fmt" and "floor" BIFs. (Daniel Thayer) + + * Fix val_size BIF tests and improve docs. (Daniel Thayer) + +2.0-602 | 2012-06-07 15:06:19 -0700 + + * Include header for usleep(); its absence caused a compile failure on Archlinux. (Jon Siwek) + + * Revert "Fixed a bug with the MIME analyzer not removing whitespace + on wrapped headers." Needs discussion. (Robin Sommer) + +2.0-598 | 2012-06-06 11:47:00 -0700 + + * Add @load-sigs directive for loading signature files (addresses + #551). This can be used to load signatures relative to the current + scripts (e.g., "@load-sigs ./foo.sig").
(Jon Siwek) + + +2.0-596 | 2012-06-06 11:41:00 -0700 + + * Fixes for some BiFs and their documentation. (Daniel Thayer) + + * Many new unit tests for BiFs. (Daniel Thayer) + +2.0-579 | 2012-06-06 11:04:46 -0700 + + * Memory leak fixes for bad usages of VectorVal ctor. (Jon Siwek) + + * Fixed a bug with the MIME analyzer not removing whitespace on + wrapped headers. (Seth Hall) + + * Change Input::update_finished lookup to happen at init time. (Jon Siwek) + + * Fix going through the internal_handler() function which will now + set the event as "used" (i.e. it's marked as being raised + somewhere). Addresses #823. (Jon Siwek) + + * Fix format specifier on RemoteSerializer::Connect. This caused + 32-bit systems to show a warning at compile-time, and fail when + connecting to peers. (Jon Siwek) + + * Fixes for running tests in parallel. (Robin Sommer) + +2.0-571 | 2012-05-30 19:12:43 -0700 + + * Updating submodule(s). + +2.0-570 | 2012-05-30 19:08:18 -0700 + + * A new input framework enables scripts to read in external data + dynamically on the fly as Bro is processing network traffic. + (Bernhard Amann) + + Currently, the framework supports reading ASCII input that's + structured similar as Bro's log files as well as raw blobs of + data. Other formats will come in the future. + + See doc/input.rst for more information (this will be extended + further soon). + +2.0-395 | 2012-05-30 17:03:31 -0700 + + * Remove unnecessary assert in ICMP analyzer which could lead to + aborts. Addresses #822. + + * Improve script debugger backtrace and print commands. (Jon Siwek) + + * Switching default DS compression to gzip. (Robin Sommer) + + * Improve availability of IPv6 flow label in connection records. + This adds a "flow_label" field to the "endpoint" record type, + which is used for both the "orig" and "resp" fields of + "connection" records. The new "connection_flow_label_changed" + event also allows tracking of changes in flow labels: it's raised + each time one direction of the connection starts using a different + label. (Jon Siwek) + + * Add unit tests for Broccoli SSL and Broccoli IPv6 connectivity. + (Jon Siwek) + + * Remove AI_ADDRCONFIG getaddrinfo hints flag for listening sockets. + (Jon Siwek) + + * Undo unnecessary communication protocol version bump. (Jon Siwek) + + * Add support to Bro for connecting with peers over IPv6. (Jon Siwek) + + - Communication::listen_ipv6 needs to be redef'd to true in order + for IPv6 listening sockets to be opened. + + - Added Communication::listen_retry option as an interval at which + to retry binding to socket addresses that were already in use. + + - Added some explicit baselines to check in the istate.events and + istate.events-ssl tests -- the SSL test was incorrectly passing + because it compared two empty files. (The files being empty + because "http/base" was given as an argument to Bro which it + couldn't handle because that script doesn't exist anymore). + + - Support for communication over non-global IPv6 addresses. This + usually requires specifying an additional zone identifier (see + RFC 4007). The connect() and listen() BIFs have been changed to + accept this zone identifier as an argument. + + +2.0-377 | 2012-05-24 16:46:06 -0700 + + * Documentation fixes. (Jon Siwek and Daniel Thayer) + +2.0-372 | 2012-05-17 13:59:45 -0700 + + * Fix compile errors. (Jon Siwek) + + * Linking in the DS docs. (Robin Sommer) + + * Fix mobility checksums unit test. 
(Jon Siwek) + +2.0-367 | 2012-05-17 12:42:30 -0700 + + * Adding support for binary output via DataSeries. See + logging-dataseries.rst for more information. (Gilbert Clark and + Robin Sommer) + + * Adding target update-doc-sources to top-level Makefile that runs + genDocSourcesList.sh. (Robin Sommer) + + * Moving trace for rotation test into traces directory. (Robin Sommer) + + * Fixing a rotation race condition at termination. (Robin Sommer) + + * Extending log post-processor call to include the name of the + writer. (Robin Sommer) + + * In threads, an internal error now immediately aborts. Otherwise, + the error won't make it back to the main thread for a while and + subsequent code in the thread would still execute. (Robin Sommer) + + * DataSeries cleanup. (Robin Sommer) + + * Fixing threads' DoFinish() method. It wasn't called reliably. Now, + it's always called before the thread is destroyed (assuming + processing has gone normally so far). (Robin Sommer) + +2.0-341 | 2012-05-17 09:54:30 -0700 + + * Add a comment to explain the ICMPv6 error message types. (Daniel Thayer) + + * Quieting external test output somewhat. (Robin Sommer) + +2.0-336 | 2012-05-14 17:15:44 -0700 + + * Don't print the various "weird" events to stderr. Addresses #805. + (Daniel Thayer) + + * Generate icmp_error_message event for ICMPv6 error msgs. + Previously, icmp_sent was being generated, but icmp_error_message + contains more info. + + * Improved documentation comments for icmp-related events. (Daniel + Thayer) + +2.0-330 | 2012-05-14 17:05:56 -0700 + + * Add `addr_to_uri` script-level function that adds brackets to an + address if it's IPv6 and will be included in a URI or when a + ":" needs to be appended to it. (Jon Siwek) + + * Also add a test case for content extraction. (Jon Siwek) + + * Fix typos and improve INSTALL document. (Daniel Thayer) + + * Switching to new btest command TEST-SERIALIZE for communication + tests. (Robin Sommer) + +2.0-323 | 2012-05-04 21:04:34 -0700 + + * Add SHA1 and SHA256 hashing BIFs. Addresses #542. + + * Refactor all internal MD5 stuff to use OpenSSL's. (Jon Siwek) + + * Changes to open-file caching limits and uncached file unserialization. (Jon Siwek) + + - Unserializing files that were previously kicked out of the open-file + cache would cause them to be fopen'd with the original access + permissions, which is usually 'w' and causes truncation. They + are now opened in 'a' mode. (addresses #780) + + - Add 'max_files_in_cache' script option to manually set the maximum + number of open files to keep cached. Mainly this just helped + to create a simple test case for the above change. + + - Remove unused NO_HAVE_SETRLIMIT preprocessor switch. + + - On systems that don't enforce a limit on the number of files opened for + the process, raise the default max size of the open-file cache from + 32 to 512. + +2.0-319 | 2012-05-03 13:24:44 -0700 + + * SSL bugfixes and cleanup. (Seth Hall) + + - SSL-related files and classes renamed to remove the "binpac" term. + + - A small fix for DPD scripts to make the DPD log more helpful if + there are multiple continued failures. + + - Fixed the SSL analyzer to make it stop doing repeated violation + messages for some handshake failures. + + - Added an $issuer_subject field to the SSL log. + + - Created a basic test for SSL. + + - Fixed parsing of TLS server extensions. (Seth Hall) + +2.0-315 | 2012-05-03 11:44:17 -0700 + + * Add two more TLS extension values that we see in live traffic.
+ (Bernhard Amann) + + * Fixed IPv6 link-local unicast CIDR and added IPv6 loopback to + private address space. (Seth Hall) + + * Fixed a problem where cluster workers were still processing + notices in some cases. (Seth Hall) + + * Added a configure option to specify the 'etc' directory. Addresses + #801. (Daniel Thayer) + + +2.0-306 | 2012-04-24 14:37:00 -0700 + + * Add further TLS extension values "extended_random" and + "heartbeat". (Seth Hall) + + * Fix problem with extracting FTP passwords and add "ftpuser" as + another anonymous username. (Seth Hall, discovered by Patrik + Lundin). + +2.0-303 | 2012-04-19 10:01:06 -0700 + + * Changes related to ICMPv6 Neighbor Discovery messages. (Jon Siwek) + + - The 'icmp_conn' record now contains an 'hlim' field since hop limit + in the IP header is an interesting field for at least these ND + messages. + + - Fixed and extended 'icmp_router_advertisement' event parameters. + + - Changed 'icmp_neighbor_advertisement' event parameters to add + more of the known boolean flags. + +2.0-301 | 2012-04-17 17:58:55 -0700 + + * Bro now supports ICMPv6. (Matti Mantere, Jon Siwek, Robin Sommer, + Daniel Thayer). + + Overall, Bro now raises the following ICMP events for v4 and v6 as + appropriate: + + event icmp_sent(c: connection, icmp: icmp_conn); + event icmp_echo_request(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string); + event icmp_echo_reply(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string); + event icmp_error_message(c: connection, icmp: icmp_conn, code: count, context: icmp_context); + event icmp_unreachable(c: connection, icmp: icmp_conn, code: count, context: icmp_context); + event icmp_packet_too_big(c: connection, icmp: icmp_conn, code: count, context: icmp_context); + event icmp_time_exceeded(c: connection, icmp: icmp_conn, code: count, context: icmp_context); + event icmp_parameter_problem(c: connection, icmp: icmp_conn, code: count, context: icmp_context); + event icmp_router_solicitation(c: connection, icmp: icmp_conn); + event icmp_router_advertisement(c: connection, icmp: icmp_conn, hop_limit: count, managed: bool, router_lifetime: count, reachable_time: interval, retrans_timer: interval); + event icmp_neighbor_solicitation(c: connection, icmp: icmp_conn, tgt: addr); + event icmp_neighbor_advertisement(c: connection, icmp: icmp_conn, tgt: addr); + event icmp_redirect(c: connection, icmp: icmp_conn, tgt: addr, dest: addr); + + The `icmp_conn` record got a new boolean field 'v6' that indicates + whether the ICMP message is v4 or v6. + + This change also includes further low-level work on existing IP + and ICMP code, including a reorganization of how ICMPv4 is + handled. + +2.0-281 | 2012-04-17 17:40:39 -0700 + + * Small updates for the bittorrent analyzer to support 64-bit types + in binpac. (Seth Hall) + + * Removed the attempt at bittorrent resynchronization. (Seth Hall) + +2.0-276 | 2012-04-17 17:35:56 -0700 + + * Add more support for 's that lack some structure + definitions. (Jon Siwek) + +2.0-273 | 2012-04-16 18:08:56 -0700 + + * Removing QR flag from DNS log in response, which should not have + been there in the first place. (Seth Hall) + + * Sync up patricia.c/h with pysubnettree repo. (Daniel Thayer) + + * Adding missing leak groups to a couple of tests. Also activating leak + checking for proxy in basic-cluster test. (Robin Sommer) + +2.0-267 | 2012-04-09 17:47:28 -0700 + + * Add support for mobile IPv6 Mobility Header (RFC 6275).
(Jon + Siwek) + + - Enabled through a new --enable-mobile-ipv6 configure-time + option. If not enabled, the mobility header (routing type 2) and + Home Address Destination option are ignored. + + - Accessible at script-layer through 'mobile_ipv6_message' event. + + * Refactor IP_Hdr routing header handling, add MobileIPv6 Home + Address handling. Packets that use the Home Address Destination + option use that option's address as the connection's originator. + (Jon Siwek) + + * Revert TCP checksumming to cache common data, like it did before. + (Jon Siwek) + + * Improve handling of IPv6 routing type 0 extension headers. (Jon + Siwek) + + - flow_weird event with name argument value of "routing0_hdr" is raised + for packets containing an IPv6 routing type 0 header because this + type of header is now deprecated according to RFC 5095. + + - Packets with a routing type 0 header and non-zero segments left + now use the last address in that header in order to associate + with a connection/flow and for calculating TCP/UDP checksums. + + - Added a set of IPv4/IPv6 TCP/UDP checksum unit tests (Jon Siwek) + + * Fix table expiry for values assigned in bro_init() when reading + live. (Jon Siwek) + +2.0-257 | 2012-04-05 15:32:43 -0700 + + * Fix CMake from warning about unused ENABLE_PERFTOOLS_DEBUG + variable. (Jon Siwek) + + * Fix handling of IPv6 atomic fragments. (Jon Siwek) + + * Fix that prevents Bro processes that do neither local logging nor + request remote logs from spawning threads. (Robin Sommer) + + * Fixing perftools-debug support. (Robin Sommer) + + * Reverting SocketComm change tuning I/O behaviour. (Robin Sommer) + + * Adding notice_policy.log canonification for external tests. (Robin Sommer) + + +2.0-245 | 2012-04-04 17:25:20 -0700 + + * Internal restructuring of the logging framework: we now spawn + threads doing the I/O. From a user's perspective not much should + change, except that the OS may now show a bunch of Bro threads. + (Gilbert Clark and Robin Sommer). + + * When building Bro, we now always link in tcmalloc if it's found at + configure time. If it's installed but not picked up, + --with-perftools may help. (Robin Sommer) + + * Renaming the configure option --enable-perftools to + --enable-perftool-debug to indicate that the switch is only + relevant for debugging the heap. It's not needed to pick up + tcmalloc for better performance. (Robin Sommer) + +2.0-184 | 2012-03-28 15:11:11 -0700 + + * Improve handling of IPv6 Routing Type 0 headers. (Jon Siwek) + + - For RH0 headers with non-zero segments left, a + "routing0_segleft" flow_weird event is raised (with a + destination indicating the last address in the routing header), + and an "rh0_segleft" event can also be handled if the other + contents of the packet header are of interest. No further + analysis is done as the complexity required to correctly + identify destination endpoints of connections doesn't seem worth + it as RH0 has been deprecated by RFC 5095. + + - For RH0 headers without any segments left, a "routing0_header" + flow_weird event is raised, but further analysis still occurs as + normal. + +2.0-182 | 2012-03-28 15:01:57 -0700 + + * Remove dead tcp_checksum function from net_util. (Jon Siwek) + + * Change routing0_data_to_addrs BIF to return vector of addresses. + The order of addresses in type 0 routing headers is + interesting/important. (Jon Siwek) + + +2.0-179 | 2012-03-23 17:43:31 -0700 + + * Remove the default "tcp or udp or icmp" filter. 
In default mode, + Bro would load the packet filter script framework which installs a + filter that allows all packets, but in bare mode (the -b option), + this old filter would not follow IPv6 protocol chains and thus + filter out packets with extension headers. (Jon Siwek) + + * Update PacketFilter/Discarder code for IP version independence. + (Jon Siwek) + + * Fix some IPv6 header related bugs. (Jon Siwek) + + * Add IPv6 fragment reassembly. (Jon Siwek) + + * Add handling for IPv6 extension header chains. Addresses #531. + (Jon Siwek) + + - The script-layer 'pkt_hdr' type is extended with a new 'ip6' field + representing the full IPv6 header chain. + + - The 'new_packet' event is now raised for IPv6 packets. Addresses + #523. + + - A new event called 'ipv6_ext_header' is raised for any IPv6 + packet containing extension headers. + + - A new event called 'esp_packet' is raised for any packets using + ESP ('new_packet' and 'ipv6_ext_header' events provide + connection info, but that info can't be provided here since the + upper-layer payload is encrypted). + + - The 'unknown_protocol' weird is now raised more reliably when + Bro sees a transport protocol or IPv6 extension header it can't + handle. Addresses #522. + + * Add unit tests for IPv6 fragment reassembly, ipv6_ext_headers and + esp_packet events. (Jon Siwek) + + * Adapt FreeBSD's inet_ntop implementation for internal use. Now we + get consistent text representations of IPv6 addresses across + platforms. (Jon Siwek) + + * Update documentation for new syntax of IPv6 literals. (Jon Siwek) + + +2.0-150 | 2012-03-13 16:16:22 -0700 + + * Changing the regular expression to allow Site::local_nets in + signatures. (Julien Sentier) + + * Removing a line of dead code. Found by . Closes #786. (Julien + Sentier) + +2.0-146 | 2012-03-13 15:39:38 -0700 + + * Change IPv6 literal constant syntax to require encasing square + brackets. (Jon Siwek) + +2.0-145 | 2012-03-09 15:10:35 -0800 + + * Remove the match expression. 'match' and 'using' are no longer + keywords. Addressed #753. (Jon Siwek) + +2.0-143 | 2012-03-09 15:07:42 -0800 + + * Fix a BRO_PROFILER_FILE/mkstemp portability issue. Addresses #794. + (Jon Siwek) + +2.0-139 | 2012-03-02 09:33:04 -0800 + + * Changes to how script coverage integrates with test suites. (Jon Siwek) + + - BRO_PROFILER_FILE now passes .X* templated filenames to mkstemp + for generating unique coverage state files. + + - Rearranging Makefile targets. The general rule is that if the + all/brief target fails out due to a test failure, then the dependent + coverage target won't run, but can still be invoked directly later. + (e.g. make brief || make coverage) + + * Standardized on the &default function for SSL constants. (Seth + Hall) + + * Adding btest group "leaks" to leak tests. (Robin Sommer) + + * Adding btest group "comm" to communication tests for parallelizing + execution with new btest version. (Robin Sommer) + + * Sorting all output for diffing in the external tests. (Robin + Sommer) + + * Cleaned up dead code from the old SSL analyzers. Reported by + Julien Sentier. (Seth Hall) + + * Update/add tests for broccoli IPv6 addr/subnet support. Addresses + #448. (Jon Siwek) + + * Remove connection compressor. Addresses #559. (Jon Siwek) + + * Refactor IP_Hdr class ctors. Addresses #532. (Jon Siwek) + + +2.0-121 | 2012-02-24 16:34:17 -0800 + + * A number of smaller memory fixes and code cleanups. (Julien + Sentier) + + * Add to_subnet bif. Fixes #782). 
(Jon Siwek) + + * Fix IPAddr::Mask/ReverseMask not allowing argument of 0. (Jon + Siwek) + + * Refactor IPAddr v4 initialization from string. Fixes #775. (Jon Siwek) + + * Parse the dotted address string directly instead of canonicalizing + and passing to inet_pton. (Jon Siwek) + + +2.0-108 | 2012-02-24 15:21:07 -0800 + + * Refactoring a number of usages of new IPAddr class. (Jon Siwek) + + * Fixed a bug in remask_addr bif. (Jon Siwek) + +2.0-106 | 2012-02-24 15:02:20 -0800 + + * Raise minimum required CMake version to 2.6.3. (Jon Siwek) + +2.0-104 | 2012-02-24 14:59:12 -0800 + + * Add test case for FTP over IPv4. (Daniel Thayer) + + * Fix IPv6 URLs in ftp.log. (Daniel Thayer) + + * Add a test for FTP over IPv6 (Daniel Thayer) + + * Fix parsing of FTP EPRT command and EPSV response. (Daniel Thayer) + +2.0-95 | 2012-02-22 05:27:34 -0800 + + * GeoIP installation documentation update. (Seth Hall) + + * Decrease strictness of parsing IPv4 strings into addrs. Fixes #775. (Jon Siwek) + + * Fix memory leak in DNS manager. Fixes #777. (Jon Siwek) + + * Fix IPAddr/IPPrefix serialization bugs. (Jon Siwek) + + * Fix compile error. (Jon Siwek) + +2.0-86 | 2012-02-17 15:41:06 -0800 + + * Changing ARP detection to always kick in even if no analyzer is + activated. (Robin Sommer) + + * DNS name lookups performed by Bro now also query AAAA records. + DNS_Mgr handles combining the results of the A and AAAA queries + for a given hostname such that at the scripting layer, the name + resolution can yield a set with both IPv4 and IPv6 addresses. (Jon + Siwek) + + * Add counts_to_addr and addr_to_counts conversion BIFs. (Jon Siwek) + + * Change HashKey threshold for using H3 to 36 bytes. (Jon Siwek) + + * Remove mention of --enable-brov6 in docs. (Daniel Thayer) + + * Remove --enable-brov6 from configure usage text (Daniel Thayer) + + * Add a test and baseline for addr_to_ptr_name BiF. (Daniel Thayer) + + * Adding a test and baseline for ptr_name_to_addr BiF. (Seth Hall) + + * Fix the ptr_name_to_addr BiF to work with IPv6 (Daniel Thayer) + + * Fix a memory leak that perftools now complains about. (Jon Siwek) + + * Remove --enable-brov6 flag, IPv6 now supported by default. (Jon Siwek) + + Some script-layer changes of note: + + - dns_AAAA_reply event signature changed: the string representation + of an IPv6 addr is easily derived from the addr value, it doesn't + need to be another parameter. This event also now generated directly + by the DNS analyzer instead of being "faked" into a dns_A_reply event. + + - Removed addr_to_count BIF. It used to return the host-order + count representation of IPv4 addresses only. To make it more + generic, we might later add a BIF to return a vector of counts + in order to support IPv6. + + - Changed the result of enclosing addr variables in vertical pipes + (e.g. |my_addr|) to return the bit-width of the address type which + is 128 for IPv6 and 32 for IPv4. It used to function the same + way as addr_to_count mentioned above. + + - Remove bro_has_ipv6 BIF + +2.0-57 | 2012-02-10 00:02:35 -0800 + + * Fix typos in the documentation. (Daniel Thayer) + + * Fix compiler warning about Brofiler ctor init list order. (Jon Siwek) + + * Fix missing optional field access in webapp signature_match handler. (Jon Siwek) + +2.0-41 | 2012-02-03 04:10:53 -0500 + + * Updates to the Software framework to simplify the API. (Bernhard + Amann) + +2.0-40 | 2012-02-03 01:55:27 -0800 + + * Fix typos in documentation. (Daniel Thayer) + + * Fix sorting of lines in Brofiler coverage.log. 
(Daniel Thayer) + +2.0-38 | 2012-01-31 11:50:53 -0800 + + * Canonify sorting of lines in Brofiler coverage.log. (Daniel + Thayer) + +2.0-36 | 2012-01-27 10:38:14 -0800 + + * New "Brofiler" mode that tracks and records script statements + executed during runtime. (Jon Siwek) + + Use the BROFILER_FILE environment variable to point to a file in + which statement usage statistics from the Bro script layer can be + output. + + Script statements that should be ignored can be marked with a "# + @no-test" comment. For example: + + print "don't cover"; # @no-test + + if ( F ) + { # @no-test + ... + } + + * Integrated coverage measurement into the test suite. (Jon Siwek) + +2.0-20 | 2012-01-25 16:34:51 -0800 + + * BiF cleanup. (Matthias Vallentin) + + - Rename NFS3::mode2string to a more generic file_mode(). + + - Unify do_profiling()/make_connection_persistent()/expect_connection() + to return any (i.e., nothing) instead of bools. + + - Perform type checking on count-to-port conversion. Related to #684. + + - Remove redundant connection_record() BiF. The same + functionality is provided by lookup_connection(). + + - Remove redundant active_connection() BiF. The same + functionality is provided by connection_exists(). + + - exit() now takes the exit code as an argument. + + - to_port() now receives a string instead of a count. + +2.0-9 | 2012-01-25 13:47:13 -0800 + + * Allow local table variables to be initialized with {} list + expressions. (Jon Siwek) + +2.0-7 | 2012-01-25 13:38:09 -0800 + + * Teach CompHash to allow indexing by records with vector/table/set + fields. Addresses #464. (Jon Siwek) + +2.0-5 | 2012-01-25 13:25:19 -0800 + + * Fixed a bug resulting in over-logging of detected webapps. (Seth Hall) + + * Make communication log baseline test more reliable. (Jon Siwek) + + * Fixed some broken links in documentation. (Daniel Thayer) + +2.0 | 2012-01-11 13:52:22 -0800 + + * Adding script reference documentation. (The Team). + +2.0-beta-194 | 2012-01-10 10:44:32 -0800 + + * Added an option for filtering out URLs before they are turned into + HTTP::Incorrect_File_Type notices. (Seth Hall) + + * Fix ref counting bug in BIFs that call internal_type. Addresses + #740. (Jon Siwek) + + * Adding back the stats.bro file. (Seth Hall) + + +2.0-beta-188 | 2012-01-10 09:49:29 -0800 + + * Change SFTP/SCP log rotators to use a 4-digit year in filenames. + Fixes #745. (Jon Siwek) + + * Adding back the stats.bro file. Addresses #656. (Seth Hall) + +2.0-beta-185 | 2012-01-09 18:00:50 -0800 + + * Tweaks for OpenBSD support. (Jon Siwek) + +2.0-beta-181 | 2012-01-08 20:49:04 -0800 + + * Add SFTP log postprocessor that transfers logs to remote hosts. + Addresses #737. (Jon Siwek) + + * Add FAQ entry about disabling NIC offloading features. (Jon Siwek) + + * Add a file NEWS with release notes. (Robin Sommer) + +2.0-beta-177 | 2012-01-05 15:01:07 -0800 + + * Replace the --snaplen/-l command line option with a + scripting-layer option called "snaplen" (which can also be + redefined on the command line, e.g. `bro -i eth0 snaplen=65535`). + + * Reduce snaplen default from 65535 to the old default of 8192. Fixes + #720. (Jon Siwek) + +2.0-beta-174 | 2012-01-04 12:47:10 -0800 + + * SSL improvements. (Seth Hall) + + - Added the ssl_session_ticket_handshake event back. + + - Fixed a few bugs. + + - Removed the SSLv2.cc file since it's not used. + +2.0-beta-169 | 2012-01-04 12:44:39 -0800 + + * Tuning the pretty-printed alarm mails, which now include the + covered time range in the subject.
(Robin Sommer) + + * Adding top-level "test" target to Makefile. (Robin Sommer) + + * Adding SWIG as dependency to INSTALL. (Robin Sommer) + +2.0-beta-155 | 2012-01-03 15:42:32 -0800 + + * Remove dead code related to record type inheritance. (Jon Siwek) + +2.0-beta-152 | 2012-01-03 14:51:34 -0800 + + * Notices now record the transport-layer protocol. (Bernhard Amann) + +2.0-beta-150 | 2012-01-03 14:42:45 -0800 + + * CMake 2.6 top-level 'install' target compat. Fixes #729. (Jon Siwek) + + * Minor fixes to test process. Addresses #298. + + * Increase timeout interval of communication-related btests. (Jon Siwek) + +2.0-beta-145 | 2011-12-19 11:37:15 -0800 + + * Empty fields are now logged as "(empty)" by default. (Robin + Sommer) + + * In log headers, only escape information when necessary. (Robin + Sommer) + +2.0-beta-139 | 2011-12-19 07:06:29 -0800 + + * The hostname notice email extension works now, plus a general + mechanism for adding delayed information to notices. (Seth Hall) + + * Fix &default fields in records not being initialized in coerced + assignments. Addresses #722. (Jon Siwek) + + * Make log headers include the type of data stored inside a set or + vector ("vector[string]"). (Bernhard Amann) + +2.0-beta-126 | 2011-12-18 15:18:05 -0800 + + * DNS updates. (Seth Hall) + + - Fixed some bugs with capturing data in the base DNS script. + + - Answers and TTLs are now vectors. + + - A warning that was being generated (dns_reply_seen_after_done) + from transaction ID reuse is fixed. + + * SSL updates. (Seth Hall) + + - Added is_orig fields to the SSL events and adapted script. + + - Added a field named last_alert to the SSL log. + + - The x509_certificate function has an is_orig field now instead + of is_server and its position in the argument list has moved. + + - A bit of reorganization and cleanup in the core analyzer. (Seth + Hall) + +2.0-beta-121 | 2011-12-18 15:10:15 -0800 + + * Enable warnings for malformed Broxygen xref roles. (Jon Siwek) + + * Fix Broxygen confusing scoped IDs at start of line as function + parameter. (Jon Siwek) + + * Allow Broxygen markup "##<" for more general use. (Jon Siwek) + +2.0-beta-116 | 2011-12-16 02:38:27 -0800 + + * Cleanup some misc Broxygen css/js stuff. (Jon Siwek) + + * Add search box to Broxygen docs. Fixes #726. (Jon Siwek) + + * Fixed major bug with cluster synchronization, which was not + working. (Seth Hall) + + * Fix missing action in notice policy for looking up GeoIP data. + (Jon Siwek) + + * Better persistent state configuration warning messages (fixes + #433). (Jon Siwek) + + * Renaming HTTP::SQL_Injection_Attack_Against to + HTTP::SQL_Injection_Victim. (Seth Hall). + + * Fixed DPD signatures for IRC. Fixes #311. (Seth Hall) + + * Removing Off_Port_Protocol_Found notice. (Seth Hall) + + * Teach Broxygen to more generally reference attribute values by name. (Jon Siwek) + + * SSH::Interesting_Hostname_Login cleanup. Fixes #664. (Seth Hall) + + * Fixed bug that was causing the malware hash registry script to + break. (Seth Hall) + + * Remove remnant of libmagic optionality. (Jon Siwek) + +2.0-beta-98 | 2011-12-07 08:12:08 -0800 + + * Adapting test-suite's diff-all so that it expands globs in both + current and baseline directory. Closes #677. (Robin Sommer) + +2.0-beta-97 | 2011-12-06 11:49:29 -0800 + + * Omit loading local-.bro scripts from base cluster framework. + Addresses #663 (Jon Siwek) + +2.0-beta-94 | 2011-12-03 15:57:19 -0800 + + * Adapting attribute serialization when talking to Broccoli. 
(Robin + Sommer) + +2.0-beta-92 | 2011-12-03 15:56:03 -0800 + + * Changes to Broxygen master script package index. (Jon Siwek) + + - Now only lists packages as those directories in the script hierarchy + that contain an __load__.bro file. + + - Script packages (dirs with a __load__.bro file), can now include + a README (in reST format) that will automatically be appended + under the link to a specific package in the master package + index. + +2.0-beta-88 | 2011-12-02 17:00:58 -0800 + + * Teach LogWriterAscii to use BRO_LOG_SUFFIX environemt variable. + Addresses #704. (Jon Siwek) + + * Fix double-free of DNS_Mgr_Request object. Addresses #661. + + * Add a remote_log_peer event which comes with an event_peer record + parameter. Addresses #493. (Jon Siwek) + + * Remove example redef of SMTP::entity_excerpt_len from local.bro. + Fixes error emitted when loading local.bro in bare mode. (Jon + Siwek) + + * Add missing doc targets to top Makefile; remove old doc/Makefile. + Fixes #705. (Jon Siwek) + + * Turn some globals into constants. Addresses #633. (Seth Hall) + + * Rearrange packet filter and DPD documentation. (Jon Siwek) + +2.0-beta-72 | 2011-11-30 20:16:09 -0800 + + * Fine-tuning the Sphinx layout to better match www. (Jon Siwek and + Robin Sommer) + +2.0-beta-69 | 2011-11-29 16:55:31 -0800 + + * Fixing ASCII logger to escape the unset-field place holder if + written out literally. (Robin Sommer) + +2.0-beta-68 | 2011-11-29 15:23:12 -0800 + + * Lots of documentation polishing. (Jon Siwek) + + * Teach Broxygen the ".. bro:see::" directive. (Jon Siwek) + + * Teach Broxygen :bro:see: role for referencing any identifier in + the Bro domain. (Jon Siwek) + + * Teach Broxygen to generate an index of Bro notices. (Jon Siwek) + + * Fix order of include directories. (Jon Siwek) + + * Catch if logged vectors do not contain only atomic types. + (Bernhard Amann) + +2.0-beta-47 | 2011-11-16 08:24:33 -0800 + + * Catch if logged sets do not contain only atomic types. (Bernhard + Amann) + + * Promote libz and libmagic to required dependencies. (Jon Siwek) + + * Fix parallel make from top-level to work on more platforms. (Jon + Siwek) + + * Add decode_base64_custom(). Addresses #670 (Jon Siwek) + + * A bunch of Sphinx-doc reorgs and polishing. (Jon Siwek) + +2.0-beta-28 | 2011-11-14 20:09:28 -0800 + + * Binary packaging script tweaks. We now require CMake 2.8.6. (Jon Siwek) + + * More default "weird" tuning for the "SYN_with_data" notice. (Seth + Hall) + + * Tiny bugfix for http file extraction along with test. (Seth Hall) + 2.0-beta-21 | 2011-11-06 19:27:22 -0800 * Quickstart doc fixes. 
(Jon Siwek) diff --git a/CMakeLists.txt b/CMakeLists.txt index 78e9344a4c..17ba34ab3b 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -1,5 +1,5 @@ project(Bro C CXX) -cmake_minimum_required(VERSION 2.6 FATAL_ERROR) +cmake_minimum_required(VERSION 2.6.3 FATAL_ERROR) include(cmake/CommonCMakeConfig.cmake) ######################################################################## @@ -53,6 +53,8 @@ FindRequiredPackage(BISON) FindRequiredPackage(PCAP) FindRequiredPackage(OpenSSL) FindRequiredPackage(BIND) +FindRequiredPackage(LibMagic) +FindRequiredPackage(ZLIB) if (NOT BinPAC_ROOT_DIR AND EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/aux/binpac/CMakeLists.txt) @@ -72,26 +74,12 @@ include_directories(BEFORE ${OpenSSL_INCLUDE_DIR} ${BIND_INCLUDE_DIR} ${BinPAC_INCLUDE_DIR} + ${LibMagic_INCLUDE_DIR} + ${ZLIB_INCLUDE_DIR} ) # Optional Dependencies -set(HAVE_LIBMAGIC false) -find_package(LibMagic) -if (LIBMAGIC_FOUND) - set(HAVE_LIBMAGIC true) - include_directories(BEFORE ${LibMagic_INCLUDE_DIR}) - list(APPEND OPTLIBS ${LibMagic_LIBRARY}) -endif () - -set(HAVE_LIBZ false) -find_package(ZLIB) -if (ZLIB_FOUND) - set(HAVE_LIBZ true) - include_directories(BEFORE ${ZLIB_INCLUDE_DIR}) - list(APPEND OPTLIBS ${ZLIB_LIBRARY}) -endif () - set(USE_GEOIP false) find_package(LibGeoIP) if (LIBGEOIP_FOUND) @@ -100,16 +88,75 @@ if (LIBGEOIP_FOUND) list(APPEND OPTLIBS ${LibGeoIP_LIBRARY}) endif () -set(USE_PERFTOOLS false) -if (ENABLE_PERFTOOLS) - find_package(GooglePerftools) - if (GOOGLEPERFTOOLS_FOUND) - set(USE_PERFTOOLS true) - include_directories(BEFORE ${GooglePerftools_INCLUDE_DIR}) - list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES}) +set(HAVE_PERFTOOLS false) +set(USE_PERFTOOLS_DEBUG false) +set(USE_PERFTOOLS_TCMALLOC false) + +if (NOT DISABLE_PERFTOOLS) + find_package(GooglePerftools) +endif () + +if (GOOGLEPERFTOOLS_FOUND) + set(HAVE_PERFTOOLS true) + # Non-Linux systems may not be well-supported by gperftools, so + # require explicit request from user to enable it in that case. + if (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ENABLE_PERFTOOLS) + set(USE_PERFTOOLS_TCMALLOC true) + + if (ENABLE_PERFTOOLS_DEBUG) + # Enable heap debugging with perftools. + set(USE_PERFTOOLS_DEBUG true) + include_directories(BEFORE ${GooglePerftools_INCLUDE_DIR}) + list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES_DEBUG}) + else () + # Link in tcmalloc for better performance. 
+ list(APPEND OPTLIBS ${GooglePerftools_LIBRARIES}) + endif () endif () endif () +set(USE_DATASERIES false) +find_package(Lintel) +find_package(DataSeries) +find_package(LibXML2) + +if (NOT DISABLE_DATASERIES AND + LINTEL_FOUND AND DATASERIES_FOUND AND LIBXML2_FOUND) + set(USE_DATASERIES true) + include_directories(BEFORE ${Lintel_INCLUDE_DIR}) + include_directories(BEFORE ${DataSeries_INCLUDE_DIR}) + include_directories(BEFORE ${LibXML2_INCLUDE_DIR}) + list(APPEND OPTLIBS ${Lintel_LIBRARIES}) + list(APPEND OPTLIBS ${DataSeries_LIBRARIES}) + list(APPEND OPTLIBS ${LibXML2_LIBRARIES}) +endif() + +set(USE_ELASTICSEARCH false) +set(USE_CURL false) +find_package(LibCURL) + +if (NOT DISABLE_ELASTICSEARCH AND LIBCURL_FOUND) + set(USE_ELASTICSEARCH true) + set(USE_CURL true) + include_directories(BEFORE ${LibCURL_INCLUDE_DIR}) + list(APPEND OPTLIBS ${LibCURL_LIBRARIES}) +endif() + +if (ENABLE_PERFTOOLS_DEBUG) + # Just a no op to prevent CMake from complaining about manually-specified + # ENABLE_PERFTOOLS_DEBUG not being used if google perftools weren't found +endif () + +set(brodeps + ${BinPAC_LIBRARY} + ${PCAP_LIBRARY} + ${OpenSSL_LIBRARIES} + ${BIND_LIBRARY} + ${LibMagic_LIBRARY} + ${ZLIB_LIBRARY} + ${OPTLIBS} +) + ######################################################################## ## System Introspection @@ -184,9 +231,13 @@ message( "\nAux. Tools: ${INSTALL_AUX_TOOLS}" "\n" "\nGeoIP: ${USE_GEOIP}" - "\nlibz: ${HAVE_LIBZ}" - "\nlibmagic: ${HAVE_LIBMAGIC}" - "\nGoogle perftools: ${USE_PERFTOOLS}" + "\ngperftools found: ${HAVE_PERFTOOLS}" + "\n tcmalloc: ${USE_PERFTOOLS_TCMALLOC}" + "\n debugging: ${USE_PERFTOOLS_DEBUG}" + "\ncURL: ${USE_CURL}" + "\n" + "\nDataSeries: ${USE_DATASERIES}" + "\nElasticSearch: ${USE_ELASTICSEARCH}" "\n" "\n================================================================\n" ) diff --git a/COPYING b/COPYING index 5ae3c62e7a..7b0a94a03b 100644 --- a/COPYING +++ b/COPYING @@ -1,4 +1,4 @@ -Copyright (c) 1995-2011, The Regents of the University of California +Copyright (c) 1995-2012, The Regents of the University of California through the Lawrence Berkeley National Laboratory and the International Computer Science Institute. All rights reserved. diff --git a/DocSourcesList.cmake b/DocSourcesList.cmake new file mode 100644 index 0000000000..1743b0258f --- /dev/null +++ b/DocSourcesList.cmake @@ -0,0 +1,144 @@ +# DO NOT EDIT +# This file is auto-generated from the genDocSourcesList.sh script. +# +# This is a list of Bro script sources for which to generate reST documentation. +# It will be included inline in the CMakeLists.txt found in the same directory +# in order to create Makefile targets that define how to generate reST from +# a given Bro script. +# +# Note: any path prefix of the script (2nd argument of rest_target macro) +# will be used to derive what path under scripts/ the generated documentation +# will be placed. 
+ +set(psd ${PROJECT_SOURCE_DIR}/scripts) + +rest_target(${CMAKE_CURRENT_SOURCE_DIR} example.bro internal) +rest_target(${psd} base/init-default.bro internal) +rest_target(${psd} base/init-bare.bro internal) + +rest_target(${CMAKE_BINARY_DIR}/src base/bro.bif.bro) +rest_target(${CMAKE_BINARY_DIR}/src base/const.bif.bro) +rest_target(${CMAKE_BINARY_DIR}/src base/event.bif.bro) +rest_target(${CMAKE_BINARY_DIR}/src base/logging.bif.bro) +rest_target(${CMAKE_BINARY_DIR}/src base/reporter.bif.bro) +rest_target(${CMAKE_BINARY_DIR}/src base/strings.bif.bro) +rest_target(${CMAKE_BINARY_DIR}/src base/types.bif.bro) +rest_target(${psd} base/frameworks/cluster/main.bro) +rest_target(${psd} base/frameworks/cluster/nodes/manager.bro) +rest_target(${psd} base/frameworks/cluster/nodes/proxy.bro) +rest_target(${psd} base/frameworks/cluster/nodes/worker.bro) +rest_target(${psd} base/frameworks/cluster/setup-connections.bro) +rest_target(${psd} base/frameworks/communication/main.bro) +rest_target(${psd} base/frameworks/control/main.bro) +rest_target(${psd} base/frameworks/dpd/main.bro) +rest_target(${psd} base/frameworks/intel/main.bro) +rest_target(${psd} base/frameworks/logging/main.bro) +rest_target(${psd} base/frameworks/logging/postprocessors/scp.bro) +rest_target(${psd} base/frameworks/logging/postprocessors/sftp.bro) +rest_target(${psd} base/frameworks/logging/writers/ascii.bro) +rest_target(${psd} base/frameworks/logging/writers/dataseries.bro) +rest_target(${psd} base/frameworks/metrics/cluster.bro) +rest_target(${psd} base/frameworks/metrics/main.bro) +rest_target(${psd} base/frameworks/metrics/non-cluster.bro) +rest_target(${psd} base/frameworks/notice/actions/add-geodata.bro) +rest_target(${psd} base/frameworks/notice/actions/drop.bro) +rest_target(${psd} base/frameworks/notice/actions/email_admin.bro) +rest_target(${psd} base/frameworks/notice/actions/page.bro) +rest_target(${psd} base/frameworks/notice/actions/pp-alarms.bro) +rest_target(${psd} base/frameworks/notice/cluster.bro) +rest_target(${psd} base/frameworks/notice/extend-email/hostnames.bro) +rest_target(${psd} base/frameworks/notice/main.bro) +rest_target(${psd} base/frameworks/notice/weird.bro) +rest_target(${psd} base/frameworks/packet-filter/main.bro) +rest_target(${psd} base/frameworks/packet-filter/netstats.bro) +rest_target(${psd} base/frameworks/reporter/main.bro) +rest_target(${psd} base/frameworks/signatures/main.bro) +rest_target(${psd} base/frameworks/software/main.bro) +rest_target(${psd} base/protocols/conn/contents.bro) +rest_target(${psd} base/protocols/conn/inactivity.bro) +rest_target(${psd} base/protocols/conn/main.bro) +rest_target(${psd} base/protocols/dns/consts.bro) +rest_target(${psd} base/protocols/dns/main.bro) +rest_target(${psd} base/protocols/ftp/file-extract.bro) +rest_target(${psd} base/protocols/ftp/main.bro) +rest_target(${psd} base/protocols/ftp/utils-commands.bro) +rest_target(${psd} base/protocols/http/file-extract.bro) +rest_target(${psd} base/protocols/http/file-hash.bro) +rest_target(${psd} base/protocols/http/file-ident.bro) +rest_target(${psd} base/protocols/http/main.bro) +rest_target(${psd} base/protocols/http/utils.bro) +rest_target(${psd} base/protocols/irc/dcc-send.bro) +rest_target(${psd} base/protocols/irc/main.bro) +rest_target(${psd} base/protocols/smtp/entities-excerpt.bro) +rest_target(${psd} base/protocols/smtp/entities.bro) +rest_target(${psd} base/protocols/smtp/main.bro) +rest_target(${psd} base/protocols/ssh/main.bro) +rest_target(${psd} base/protocols/ssl/consts.bro) 
+rest_target(${psd} base/protocols/ssl/main.bro) +rest_target(${psd} base/protocols/ssl/mozilla-ca-list.bro) +rest_target(${psd} base/protocols/syslog/consts.bro) +rest_target(${psd} base/protocols/syslog/main.bro) +rest_target(${psd} base/utils/addrs.bro) +rest_target(${psd} base/utils/conn-ids.bro) +rest_target(${psd} base/utils/directions-and-hosts.bro) +rest_target(${psd} base/utils/files.bro) +rest_target(${psd} base/utils/numbers.bro) +rest_target(${psd} base/utils/paths.bro) +rest_target(${psd} base/utils/patterns.bro) +rest_target(${psd} base/utils/site.bro) +rest_target(${psd} base/utils/strings.bro) +rest_target(${psd} base/utils/thresholds.bro) +rest_target(${psd} policy/frameworks/communication/listen.bro) +rest_target(${psd} policy/frameworks/control/controllee.bro) +rest_target(${psd} policy/frameworks/control/controller.bro) +rest_target(${psd} policy/frameworks/dpd/detect-protocols.bro) +rest_target(${psd} policy/frameworks/dpd/packet-segment-logging.bro) +rest_target(${psd} policy/frameworks/metrics/conn-example.bro) +rest_target(${psd} policy/frameworks/metrics/http-example.bro) +rest_target(${psd} policy/frameworks/metrics/ssl-example.bro) +rest_target(${psd} policy/frameworks/software/version-changes.bro) +rest_target(${psd} policy/frameworks/software/vulnerable.bro) +rest_target(${psd} policy/integration/barnyard2/main.bro) +rest_target(${psd} policy/integration/barnyard2/types.bro) +rest_target(${psd} policy/misc/analysis-groups.bro) +rest_target(${psd} policy/misc/capture-loss.bro) +rest_target(${psd} policy/misc/loaded-scripts.bro) +rest_target(${psd} policy/misc/profiling.bro) +rest_target(${psd} policy/misc/stats.bro) +rest_target(${psd} policy/misc/trim-trace-file.bro) +rest_target(${psd} policy/protocols/conn/known-hosts.bro) +rest_target(${psd} policy/protocols/conn/known-services.bro) +rest_target(${psd} policy/protocols/conn/weirds.bro) +rest_target(${psd} policy/protocols/dns/auth-addl.bro) +rest_target(${psd} policy/protocols/dns/detect-external-names.bro) +rest_target(${psd} policy/protocols/ftp/detect.bro) +rest_target(${psd} policy/protocols/ftp/software.bro) +rest_target(${psd} policy/protocols/http/detect-MHR.bro) +rest_target(${psd} policy/protocols/http/detect-intel.bro) +rest_target(${psd} policy/protocols/http/detect-sqli.bro) +rest_target(${psd} policy/protocols/http/detect-webapps.bro) +rest_target(${psd} policy/protocols/http/header-names.bro) +rest_target(${psd} policy/protocols/http/software-browser-plugins.bro) +rest_target(${psd} policy/protocols/http/software.bro) +rest_target(${psd} policy/protocols/http/var-extraction-cookies.bro) +rest_target(${psd} policy/protocols/http/var-extraction-uri.bro) +rest_target(${psd} policy/protocols/smtp/blocklists.bro) +rest_target(${psd} policy/protocols/smtp/detect-suspicious-orig.bro) +rest_target(${psd} policy/protocols/smtp/software.bro) +rest_target(${psd} policy/protocols/ssh/detect-bruteforcing.bro) +rest_target(${psd} policy/protocols/ssh/geo-data.bro) +rest_target(${psd} policy/protocols/ssh/interesting-hostnames.bro) +rest_target(${psd} policy/protocols/ssh/software.bro) +rest_target(${psd} policy/protocols/ssl/cert-hash.bro) +rest_target(${psd} policy/protocols/ssl/expiring-certs.bro) +rest_target(${psd} policy/protocols/ssl/extract-certs-pem.bro) +rest_target(${psd} policy/protocols/ssl/known-certs.bro) +rest_target(${psd} policy/protocols/ssl/validate-certs.bro) +rest_target(${psd} policy/tuning/defaults/packet-fragments.bro) +rest_target(${psd} policy/tuning/defaults/warnings.bro) 
+rest_target(${psd} policy/tuning/track-all-assets.bro) +rest_target(${psd} site/local-manager.bro) +rest_target(${psd} site/local-proxy.bro) +rest_target(${psd} site/local-worker.bro) +rest_target(${psd} site/local.bro) +rest_target(${psd} test-all-policy.bro) diff --git a/INSTALL b/INSTALL index d1351a502f..d9f7963ec4 100644 --- a/INSTALL +++ b/INSTALL @@ -5,33 +5,44 @@ Installing Bro Prerequisites ============= -Bro relies on the following libraries and tools, which need to be installed +Bro requires the following libraries and tools to be installed before you begin: - * CMake 2.6 or greater http://www.cmake.org + * CMake 2.6.3 or greater http://www.cmake.org - * Libpcap (headers and libraries) http://www.tcpdump.org + * Perl (used only during the Bro build process) - * OpenSSL (headers and libraries) http://www.openssl.org + * Libpcap headers and libraries http://www.tcpdump.org -Bro can make uses of some optional libraries if they are found at -installation time: + * OpenSSL headers and libraries http://www.openssl.org - * Libmagic For identifying file types (e.g., in FTP transfers). + * BIND8 headers and libraries - * LibGeoIP For geo-locating IP addresses. + * Libmagic - * Libz For decompressing HTTP bodies by the HTTP analyzer, and for - compressed Bro-to-Bro communication. + * Libz + * SWIG http://www.swig.org -Bro also needs the following tools, but on most systems they will -already come preinstalled: - - * BIND8 (headers and libraries) * Bison (GNU Parser Generator) + * Flex (Fast Lexical Analyzer) - * Perl (Used only during the Bro build process) + + * Bash (for BroControl) + + +Bro can make use of some optional libraries and tools if they are found at +build time: + + * LibGeoIP (for geo-locating IP addresses) + + * gperftools (tcmalloc is used to improve memory and CPU usage) + + * sendmail (for BroControl) + + * ipsumdump (for trace-summary) http://www.cs.ucla.edu/~kohler/ipsumdump + + * Ruby executable, library, and headers (for Broccoli Ruby bindings) Installation @@ -39,18 +50,18 @@ Installation To build and install into ``/usr/local/bro``:: - > ./configure - > make - > make install + ./configure + make + make install -This will first build Bro into a directory inside the distribution +This will first build Bro in a directory inside the distribution called ``build/``, using default build options. It then installs all required files into ``/usr/local/bro``, including the Bro binary in ``/usr/local/bro/bin/bro``. You can specify a different installation directory with:: - > ./configure --prefix= + ./configure --prefix= Note that ``/usr`` and ``/opt/bro`` are the standard prefixes for binary Bro packages to be installed, so those are typically not good @@ -59,23 +70,23 @@ choices unless you are creating such a package. Run ``./configure --help`` for more options. Depending on the Bro package you downloaded, there may be auxiliary -tools and libraries available in the ``aux/`` directory. All of them -except for ``aux/bro-aux`` will also be built and installed by doing -``make install``. To install the programs that come in the -``aux/bro-aux`` directory, use ``make install-aux``. There are +tools and libraries available in the ``aux/`` directory. Some of them +will be automatically built and installed along with Bro. There are ``--disable-*`` options that can be given to the configure script to -turn off unwanted auxiliary projects. +turn off unwanted auxiliary projects that would otherwise be installed +automatically. 
Finally, use ``make install-aux`` to install some of +the other programs that are in the ``aux/bro-aux`` directory. +OpenBSD users, please see our FAQ at +http://www.bro-ids.org/documentation/faq.html if you are having +problems installing Bro. Running Bro =========== Bro is a complex program and it takes a bit of time to get familiar -with it. A good place for newcomers to start is the quick start guide -available here: - - http://www.bro-ids.org/documentation/quickstart.html - +with it. A good place for newcomers to start is the Quick Start Guide +at http://www.bro-ids.org/documentation/quickstart.html. For developers that wish to run Bro directly from the ``build/`` directory (i.e., without performing ``make install``), they will have @@ -83,9 +94,9 @@ to first adjust ``BROPATH`` to look for scripts inside the build directory. Sourcing either ``build/bro-path-dev.sh`` or ``build/bro-path-dev.csh`` as appropriate for the current shell accomplishes this and also augments your ``PATH`` so you can use the -Bro binary directly: +Bro binary directly:: - > ./configure - > make - > source build/bro-path-dev.sh - > bro + ./configure + make + source build/bro-path-dev.sh + bro diff --git a/Makefile b/Makefile index c736ecdbcb..455fa6ed88 100644 --- a/Makefile +++ b/Makefile @@ -2,7 +2,7 @@ # A simple static wrapper for a number of standard Makefile targets, # mostly just forwarding to build/Makefile. This is provided only for # convenience and supports only a subset of what CMake's Makefile -# to offer. For more, execute that one directly. +# offers. For more, execute that one directly. # BUILD=build @@ -12,22 +12,34 @@ VERSION_MIN=$(REPO)-`cat VERSION`-minimal HAVE_MODULES=git submodule | grep -v cmake >/dev/null all: configured - ( cd $(BUILD) && make ) + $(MAKE) -C $(BUILD) $@ -install: configured - ( cd $(BUILD) && make install ) +install: configured all + $(MAKE) -C $(BUILD) $@ install-aux: configured - ( cd $(BUILD) && make install-aux ) + $(MAKE) -C $(BUILD) $@ clean: configured docclean - ( cd $(BUILD) && make clean ) + $(MAKE) -C $(BUILD) $@ doc: configured - ( cd $(BUILD) && make doc ) + $(MAKE) -C $(BUILD) $@ docclean: configured - ( cd $(BUILD) && make docclean ) + $(MAKE) -C $(BUILD) $@ + +restdoc: configured + $(MAKE) -C $(BUILD) $@ + +restclean: configured + $(MAKE) -C $(BUILD) $@ + +broxygen: configured + $(MAKE) -C $(BUILD) $@ + +broxygenclean: configured + $(MAKE) -C $(BUILD) $@ dist: @rm -rf $(VERSION_FULL) $(VERSION_FULL).tgz @@ -48,6 +60,9 @@ bindist: distclean: rm -rf $(BUILD) +test: + @(cd testing && make ) + configured: @test -d $(BUILD) || ( echo "Error: No build/ directory found. Did you run configure?" && exit 1 ) @test -e $(BUILD)/Makefile || ( echo "Error: No build/Makefile found. Did you run configure?" && exit 1 ) diff --git a/NEWS b/NEWS new file mode 100644 index 0000000000..3be3b7b4cc --- /dev/null +++ b/NEWS @@ -0,0 +1,257 @@ + +Release Notes +============= + +This document summarizes the most important changes in the current Bro +release. For a complete list of changes, see the ``CHANGES`` file +(note that submodules, such as BroControl and Broccoli, come with +their own CHANGES.) + +Bro 2.2 +------- + +New Functionality +~~~~~~~~~~~~~~~~~ + +- GridFTP support. TODO: Extend. + +- ssl.log now also records the subject client and issuer certificates. + +Changed Functionality +~~~~~~~~~~~~~~~~~~~~~ + +- We removed the following, already deprecated, functionality: + + * Scripting language: + - &disable_print_hook attribute. 
+ + * BiF functions: + - parse_dotted_addr(), dump_config(), + make_connection_persistent(), generate_idmef(), + split_complete() + +- Removed a now unused argument from the "do_split" helper function. + +- "this" is no longer a reserved keyword. + +- The Input Framework's update_finished event has been renamed to + end_of_data. It will now not only fire after table-reads have been + completed, but also after the last event of a whole-file-read (or + whole-db-read, etc.). + +Bro 2.1 +------- + +New Functionality +~~~~~~~~~~~~~~~~~ + +- Bro now comes with extensive IPv6 support. Past versions offered + only basic IPv6 functionality that was rarely used in practice as it + had to be enabled explicitly. IPv6 support is now fully integrated + into all parts of Bro including protocol analysis and the scripting + language. It's on by default and no longer requires any special + configuration. + + Some of the most significant enhancements include support for IPv6 + fragment reassembly, support for following IPv6 extension header + chains, and support for tunnel decapsulation (6to4 and Teredo). The + DNS analyzer now handles AAAA records properly, and DNS lookups that + Bro itself performs now include AAAA queries, so that, for example, + the result returned by script-level lookups is a set that can + contain both IPv4 and IPv6 addresses. Support for the most common + ICMPv6 message types has been added. Also, the FTP EPSV and EPRT + commands are now handled properly. Internally, the way IP addresses + are stored has been improved, so Bro can handle both IPv4 + and IPv6 by default without any special configuration. + + In addition to Bro itself, the other Bro components have also been + made IPv6-aware by default. In particular, significant changes were + made to trace-summary, PySubnetTree, and Broccoli to support IPv6. + +- Bro now decapsulates tunnels via its new tunnel framework located in + scripts/base/frameworks/tunnels. It currently supports Teredo, + AYIYA, IP-in-IP (both IPv4 and IPv6), and SOCKS. For all these, it + logs the outer tunnel connections in both conn.log and tunnel.log, + and then proceeds to analyze the inner payload as if it were not + tunneled, including also logging that session in conn.log. For + SOCKS, it additionally generates a new socks.log with more + information. + +- Bro now features a flexible input framework that allows users to + integrate external information in real-time into Bro while it's + processing network traffic. The most direct use-case at the moment + is reading data from ASCII files into Bro tables, with updates + picked up automatically when the file changes during runtime. See + doc/input.rst for more information. + + Internally, the input framework is structured around the notion of + "reader plugins" that make it easy to interface to different data + sources. We will add more in the future.
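+
+  As a brief illustration (a sketch only; the file name, record types,
+  and table below are hypothetical placeholders, not shipped with the
+  distribution), reading such an ASCII file into a Bro table with
+  automatic re-reads could look like this::
+
+    # Hypothetical index/value types for a tab-separated input file.
+    type Idx: record {
+        ip: addr;
+    };
+
+    type Val: record {
+        reason: string;
+    };
+
+    global blacklist: table[addr] of Val = table();
+
+    event bro_init()
+        {
+        # Read the file into the table and re-read it whenever it changes.
+        Input::add_table([$source="blacklist.file", $name="blacklist",
+                          $idx=Idx, $val=Val, $destination=blacklist,
+                          $mode=Input::REREAD]);
+        }
+
+    # end_of_data (formerly update_finished, see the 2.2 notes above)
+    # fires each time the table has been fully (re-)populated.
+    event Input::end_of_data(name: string, source: string)
+        {
+        print fmt("%s now has %d entries", source, |blacklist|);
+        }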
+ +- BroControl now has built-in support for host-based load-balancing + when using either PF_RING, Myricom cards, or individual interfaces. + Instead of adding a separate worker entry in node.cfg for each Bro + worker process on each worker host, it is now possible to just + specify the number of worker processes on each host and BroControl + configures everything correctly (including any necessary environment + variables for the balancers). + + This change adds three new keywords to the node.cfg file (to be used + with worker entries): lb_procs (specifies the number of workers on a + host), lb_method (specifies what type of load balancing to use: + pf_ring, myricom, or interfaces), and lb_interfaces (used only with + "lb_method=interfaces" to specify which interfaces to load-balance + on). + +- Bro's default ASCII log format is not exactly the most efficient way + for storing and searching large volumes of data. As alternatives, + Bro now comes with experimental support for two additional output + formats: + + * DataSeries: an efficient binary format for recording structured + bulk data. DataSeries is developed and maintained at HP Labs. + See doc/logging-dataseries for more information. + + * ElasticSearch: a distributed, RESTful storage and search + engine built on top of Apache Lucene. It scales very well, both + for distributed indexing and distributed searching. See + doc/logging-elasticsearch.rst for more information. + + Note that at this point, we consider Bro's support for these two + formats as prototypes for collecting experience with alternative + outputs. We do not yet recommend them for production (but welcome + feedback!). + + +Changed Functionality +~~~~~~~~~~~~~~~~~~~~~ + +The following summarizes the most important differences in existing +functionality. Note that this list is not complete; see CHANGES for +the full set. + +- Changes in dependencies: + + * Bro now requires CMake >= 2.6.3. + + * On Linux, Bro now links in tcmalloc (part of Google perftools) + if found at configure time. Doing so can significantly improve + memory and CPU use. + + On the other platforms, the new configure option + --enable-perftools can be used to enable linking to tcmalloc. + (Note that perftools's support for non-Linux platforms may be + less reliable). + +- The configure switch --enable-brov6 is gone. + +- DNS name lookups performed by Bro now also query AAAA records. The + results of the A and AAAA queries for a given hostname are combined + such that at the scripting layer, the name resolution can yield a + set with both IPv4 and IPv6 addresses.
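+
+  For example (a sketch only; the host name is just an illustrative
+  placeholder), an asynchronous lookup now yields a set that may mix
+  both address families::
+
+    event bro_init()
+        {
+        when ( local addrs = lookup_hostname("www.bro-ids.org") )
+            {
+            # addrs is a set[addr]; with AAAA queries included it can
+            # hold IPv4 and IPv6 addresses at the same time.
+            for ( a in addrs )
+                print a;
+            }
+        }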
+ +- The connection compressor was already deprecated in 2.0 and has now + been removed from the code base. + +- We removed the "match" statement, which was no longer used by any of + the default scripts, nor was it likely to be used by anybody anytime + soon. With that, "match" and "using" are no longer reserved keywords. + +- The syntax for IPv6 literals changed from "2607:f8b0:4009:802::1012" + to "[2607:f8b0:4009:802::1012]". + +- Bro now spawns threads for doing its logging. From a user's + perspective not much should change, except that the OS may now show + a bunch of Bro threads. + +- We renamed the configure option --enable-perftools to + --enable-perftools-debug to indicate that the switch is only relevant + for debugging the heap. + +- Bro's ICMP analyzer now handles both IPv4 and IPv6 messages with a + joint set of events. The `icmp_conn` record got a new boolean field + 'v6' that indicates whether the ICMP message is v4 or v6. + +- Log postprocessor scripts get an additional argument indicating the + type of the log writer in use (e.g., "ascii"). + +- BroControl's make-archive-name script also receives the writer + type, but as its 2nd(!) argument. If you're using a custom version + of that script, you need to adapt it. See the shipped version for + details. + +- Signature files can now be loaded via the new "@load-sigs" + directive. In contrast to the existing (and still supported) + signature_files constant, this can be used to load signatures + relative to the current script (e.g., "@load-sigs ./foo.sig"). + +- The options "tunnel_port" and "parse_udp_tunnels" have been removed. + Bro now supports decapsulating tunnels directly for protocols it + understands. + +- ASCII logs now record the time when they were opened/closed at the + beginning and end of the file, respectively (wall clock). The + options LogAscii::header_prefix and LogAscii::include_header have + been renamed to LogAscii::meta_prefix and LogAscii::include_meta, + respectively. + +- The ASCII writer's "header_*" options have been renamed to "meta_*" + (because there's now also a footer). + + +Bro 2.0 +------- + +As the version number jump suggests, Bro 2.0 is a major upgrade and +lots of things have changed. We have assembled a separate upgrade +guide with the most important changes compared to Bro 1.5 at +http://www.bro-ids.org/documentation/upgrade.html. You can find +the offline version of that document in ``doc/upgrade.rst``. + +Compared to the earlier 2.0 Beta version, the major changes in the +final release are: + + * The default scripts now come with complete reference + documentation. See + http://www.bro-ids.org/documentation/index.html. + + * libz and libmagic are now required dependencies. + + * Reduced the snaplen default from 65535 to the old default of 8192. The + large value was introducing performance problems on many + systems. + + * Replaced the --snaplen/-l command line option with a + scripting-layer option called "snaplen". The new option can also + be redefined on the command line, e.g. ``bro -i eth0 + snaplen=65535``. + + * Reintroduced the BRO_LOG_SUFFIX environment variable that the + ASCII logger now respects to add a suffix to the log files it + creates. + + * The ASCII logs now include further header information, and + fields set to an empty value are now logged as ``(empty)`` by + default (instead of ``-``, which is already used for fields that + are not set at all). + + * Some NOTICES were renamed, and the signatures of some SSL events + have changed. + + * bro-cut got some new capabilities: + + - If no field names are given on the command line, we now pass + through all fields. + + - New options -u/-U for time output in UTC. + + - New option -F to specify the output field separator. + + * Broccoli supports more types internally, allowing complex + records to be sent. + + * Many smaller bug fixes, portability improvements, and general + polishing across all modules. + + + diff --git a/README b/README index 435e60225a..c837afaf92 100644 --- a/README +++ b/README @@ -4,13 +4,15 @@ Bro Network Security Monitor Bro is a powerful framework for network analysis and security monitoring. Please see the INSTALL file for installation instructions -and pointers for getting started. For more documentation, research -publications, and community contact information, see Bro's home page: +and pointers for getting started. NEWS contains release notes for the +current version, and CHANGES has the complete history of changes. +Please see COPYING for licensing information. + +For more documentation, research publications, and community contact +information, please see Bro's home page: http://www.bro-ids.org -Please see COPYING for licensing information.
- On behalf of the Bro Development Team, Vern Paxson & Robin Sommer, diff --git a/VERSION b/VERSION index c7a306b6f3..c69ab9646c 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -2.0-beta-21 +2.1-84 diff --git a/aux/binpac b/aux/binpac index 777b8a21c4..74e6a5401c 160000 --- a/aux/binpac +++ b/aux/binpac @@ -1 +1 @@ -Subproject commit 777b8a21c4c74e1f62e8b9896b082e8c059b539f +Subproject commit 74e6a5401c4228d5293c0e309283f43c389e7c12 diff --git a/aux/bro-aux b/aux/bro-aux index 906f970df5..01bb93cb23 160000 --- a/aux/bro-aux +++ b/aux/bro-aux @@ -1 +1 @@ -Subproject commit 906f970df5f708582c7002069b787d5af586b46f +Subproject commit 01bb93cb23f31a98fb400584e8d2f2fbe8a589ef diff --git a/aux/broccoli b/aux/broccoli index e02e3cc89a..907210ce14 160000 --- a/aux/broccoli +++ b/aux/broccoli @@ -1 +1 @@ -Subproject commit e02e3cc89a3efb3d7ec376154e24835b4b828be8 +Subproject commit 907210ce1470724fb386f939cc1b10a4caa2ae39 diff --git a/aux/broctl b/aux/broctl index 288c8568d7..fd0e7e0b0c 160000 --- a/aux/broctl +++ b/aux/broctl @@ -1 +1 @@ -Subproject commit 288c8568d7aaa38cf7c05833c133a91cbadbfce4 +Subproject commit fd0e7e0b0cf50131efaf536a5683266cfe169455 diff --git a/aux/btest b/aux/btest index 7230a09a8c..44a43e6245 160000 --- a/aux/btest +++ b/aux/btest @@ -1 +1 @@ -Subproject commit 7230a09a8c220d2117e491fdf293bf5c19819b65 +Subproject commit 44a43e62452302277f88e8fac08d1f979dc53f98 diff --git a/cmake b/cmake index 704e255d7e..14537f56d6 160000 --- a/cmake +++ b/cmake @@ -1 +1 @@ -Subproject commit 704e255d7ef2faf926836c1c64d16c5b8a02b063 +Subproject commit 14537f56d66b18ab9d5024f798caf4d1f356fc67 diff --git a/config.h.in b/config.h.in index 3783a8390d..2d065f755e 100644 --- a/config.h.in +++ b/config.h.in @@ -1,6 +1,3 @@ -/* enable IPV6 processing */ -#cmakedefine BROv6 - /* Old libpcap versions (< 0.6.1) need defining pcap_freecode and pcap_compile_nopcap */ #cmakedefine DONT_HAVE_LIBPCAP_PCAP_FREECODE @@ -14,18 +11,9 @@ /* Define if you have the `getopt_long' function. */ #cmakedefine HAVE_GETOPT_LONG -/* Define if you have the `magic' library (-lmagic). */ -#cmakedefine HAVE_LIBMAGIC - -/* Define if you have the `z' library (-lz). */ -#cmakedefine HAVE_LIBZ - /* We are on a Linux system */ #cmakedefine HAVE_LINUX -/* Define if you have the header file. */ -#cmakedefine HAVE_MAGIC_H - /* Define if you have the `mallinfo' function. */ #cmakedefine HAVE_MALLINFO @@ -41,8 +29,8 @@ /* Define if you have the header file. */ #cmakedefine HAVE_NET_ETHERNET_H -/* We are on a OpenBSD system */ -#cmakedefine HAVE_OPENBSD +/* Define if you have the header file. */ +#cmakedefine HAVE_NET_ETHERTYPES_H /* have os-proto.h */ #cmakedefine HAVE_OS_PROTO_H @@ -121,7 +109,19 @@ #cmakedefine HAVE_GEOIP_CITY_EDITION_REV0_V6 /* Use Google's perftools */ -#cmakedefine USE_PERFTOOLS +#cmakedefine USE_PERFTOOLS_DEBUG + +/* Analyze Mobile IPv6 traffic */ +#cmakedefine ENABLE_MOBILE_IPV6 + +/* Use libCurl. */ +#cmakedefine USE_CURL + +/* Use the DataSeries writer. */ +#cmakedefine USE_DATASERIES + +/* Use the ElasticSearch writer. */ +#cmakedefine USE_ELASTICSEARCH /* Version number of package */ #define VERSION "@VERSION@" @@ -154,3 +154,58 @@ /* Define u_int8_t */ #cmakedefine u_int8_t @u_int8_t@ + +/* OpenBSD's bpf.h may not declare this data link type, but it's supposed to be + used consistently for the same purpose on all platforms. 
*/ +#cmakedefine HAVE_DLT_PPP_SERIAL +#ifndef HAVE_DLT_PPP_SERIAL +#define DLT_PPP_SERIAL @DLT_PPP_SERIAL@ +#endif + +/* IPv6 Next Header values defined by RFC 3542 */ +#cmakedefine HAVE_IPPROTO_HOPOPTS +#ifndef HAVE_IPPROTO_HOPOPTS +#define IPPROTO_HOPOPTS 0 +#endif +#cmakedefine HAVE_IPPROTO_IPV6 +#ifndef HAVE_IPPROTO_IPV6 +#define IPPROTO_IPV6 41 +#endif +#cmakedefine HAVE_IPPROTO_IPV4 +#ifndef HAVE_IPPROTO_IPV4 +#define IPPROTO_IPV4 4 +#endif +#cmakedefine HAVE_IPPROTO_ROUTING +#ifndef HAVE_IPPROTO_ROUTING +#define IPPROTO_ROUTING 43 +#endif +#cmakedefine HAVE_IPPROTO_FRAGMENT +#ifndef HAVE_IPPROTO_FRAGMENT +#define IPPROTO_FRAGMENT 44 +#endif +#cmakedefine HAVE_IPPROTO_ESP +#ifndef HAVE_IPPROTO_ESP +#define IPPROTO_ESP 50 +#endif +#cmakedefine HAVE_IPPROTO_AH +#ifndef HAVE_IPPROTO_AH +#define IPPROTO_AH 51 +#endif +#cmakedefine HAVE_IPPROTO_ICMPV6 +#ifndef HAVE_IPPROTO_ICMPV6 +#define IPPROTO_ICMPV6 58 +#endif +#cmakedefine HAVE_IPPROTO_NONE +#ifndef HAVE_IPPROTO_NONE +#define IPPROTO_NONE 59 +#endif +#cmakedefine HAVE_IPPROTO_DSTOPTS +#ifndef HAVE_IPPROTO_DSTOPTS +#define IPPROTO_DSTOPTS 60 +#endif + +/* IPv6 options structure defined by RFC 3542 */ +#cmakedefine HAVE_IP6_OPT + +/* Common IPv6 extension structure */ +#cmakedefine HAVE_IP6_EXT diff --git a/configure b/configure index 0f74674c0f..6c557a22d0 100755 --- a/configure +++ b/configure @@ -1,7 +1,7 @@ #!/bin/sh # Convenience wrapper for easily viewing/setting options that # the project's CMake scripts will recognize - +set -e command="$0 $*" # check for `cmake` command @@ -24,16 +24,22 @@ Usage: $0 [OPTION]... [VAR=VALUE]... --prefix=PREFIX installation directory [/usr/local/bro] --scriptdir=PATH root installation directory for Bro scripts [PREFIX/share/bro] + --conf-files-dir=PATH config files installation directory [PREFIX/etc] Optional Features: --enable-debug compile in debugging mode - --enable-brov6 enable IPv6 processing - --enable-perftools use Google's perftools + --enable-mobile-ipv6 analyze mobile IPv6 features defined by RFC 6275 + --enable-perftools force use of Google perftools on non-Linux systems + (automatically on when perftools is present on Linux) + --enable-perftools-debug use Google's perftools for debugging --disable-broccoli don't build or install the Broccoli library --disable-broctl don't install Broctl - --disable-auxtools don't build or install auxilliary tools + --disable-auxtools don't build or install auxiliary tools + --disable-perftools don't try to build with Google Perftools --disable-python don't try to build python bindings for broccoli --disable-ruby don't try to build ruby bindings for broccoli + --disable-dataseries don't use the optional DataSeries log writer + --disable-elasticsearch don't use the optional ElasticSearch log writer Required Packages in Non-Standard Locations: --with-openssl=PATH path to OpenSSL install root @@ -55,6 +61,9 @@ Usage: $0 [OPTION]... [VAR=VALUE]... 
--with-ruby-lib=PATH path to ruby library --with-ruby-inc=PATH path to ruby headers --with-swig=PATH path to SWIG executable + --with-dataseries=PATH path to DataSeries and Lintel libraries + --with-xml2=PATH path to libxml2 installation (for DataSeries) + --with-curl=PATH path to libcurl install root (for ElasticSearch) Packaging Options (for developers): --binary-package toggle special logic for binary packaging @@ -86,20 +95,24 @@ append_cache_entry () { # set defaults builddir=build +prefix=/usr/local/bro CMakeCacheEntries="" -append_cache_entry CMAKE_INSTALL_PREFIX PATH /usr/local/bro -append_cache_entry BRO_ROOT_DIR PATH /usr/local/bro -append_cache_entry PY_MOD_INSTALL_DIR PATH /usr/local/bro/lib/broctl -append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING /usr/local/bro/share/bro +append_cache_entry CMAKE_INSTALL_PREFIX PATH $prefix +append_cache_entry BRO_ROOT_DIR PATH $prefix +append_cache_entry PY_MOD_INSTALL_DIR PATH $prefix/lib/broctl +append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro +append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc append_cache_entry ENABLE_DEBUG BOOL false -append_cache_entry BROv6 BOOL false append_cache_entry ENABLE_PERFTOOLS BOOL false +append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL false append_cache_entry BinPAC_SKIP_INSTALL BOOL true append_cache_entry BUILD_SHARED_LIBS BOOL true append_cache_entry INSTALL_AUX_TOOLS BOOL true append_cache_entry INSTALL_BROCCOLI BOOL true append_cache_entry INSTALL_BROCTL BOOL true append_cache_entry CPACK_SOURCE_IGNORE_FILES STRING +append_cache_entry ENABLE_MOBILE_IPV6 BOOL false +append_cache_entry DISABLE_PERFTOOLS BOOL false # parse arguments while [ $# -ne 0 ]; do @@ -120,26 +133,32 @@ while [ $# -ne 0 ]; do CMakeGenerator="$optarg" ;; --prefix=*) + prefix=$optarg append_cache_entry CMAKE_INSTALL_PREFIX PATH $optarg append_cache_entry BRO_ROOT_DIR PATH $optarg append_cache_entry PY_MOD_INSTALL_DIR PATH $optarg/lib/broctl - if [ "$user_set_scriptdir" != "true" ]; then - append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $optarg/share/bro - fi ;; --scriptdir=*) append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $optarg user_set_scriptdir="true" ;; + --conf-files-dir=*) + append_cache_entry BRO_ETC_INSTALL_DIR PATH $optarg + user_set_conffilesdir="true" + ;; --enable-debug) append_cache_entry ENABLE_DEBUG BOOL true ;; - --enable-brov6) - append_cache_entry BROv6 BOOL true + --enable-mobile-ipv6) + append_cache_entry ENABLE_MOBILE_IPV6 BOOL true ;; --enable-perftools) append_cache_entry ENABLE_PERFTOOLS BOOL true ;; + --enable-perftools-debug) + append_cache_entry ENABLE_PERFTOOLS BOOL true + append_cache_entry ENABLE_PERFTOOLS_DEBUG BOOL true + ;; --disable-broccoli) append_cache_entry INSTALL_BROCCOLI BOOL false ;; @@ -149,12 +168,21 @@ while [ $# -ne 0 ]; do --disable-auxtools) append_cache_entry INSTALL_AUX_TOOLS BOOL false ;; + --disable-perftools) + append_cache_entry DISABLE_PERFTOOLS BOOL true + ;; --disable-python) append_cache_entry DISABLE_PYTHON_BINDINGS BOOL true ;; --disable-ruby) append_cache_entry DISABLE_RUBY_BINDINGS BOOL true ;; + --disable-dataseries) + append_cache_entry DISABLE_DATASERIES BOOL true + ;; + --disable-elasticsearch) + append_cache_entry DISABLE_ELASTICSEARCH BOOL true + ;; --with-openssl=*) append_cache_entry OpenSSL_ROOT_DIR PATH $optarg ;; @@ -183,7 +211,6 @@ while [ $# -ne 0 ]; do append_cache_entry LibGeoIP_ROOT_DIR PATH $optarg ;; --with-perftools=*) - append_cache_entry ENABLE_PERFTOOLS BOOL true append_cache_entry GooglePerftools_ROOT_DIR PATH $optarg ;; 
--with-python=*) @@ -209,6 +236,16 @@ while [ $# -ne 0 ]; do --with-swig=*) append_cache_entry SWIG_EXECUTABLE PATH $optarg ;; + --with-dataseries=*) + append_cache_entry DataSeries_ROOT_DIR PATH $optarg + append_cache_entry Lintel_ROOT_DIR PATH $optarg + ;; + --with-xml2=*) + append_cache_entry LibXML2_ROOT_DIR PATH $optarg + ;; + --with-curl=*) + append_cache_entry LibCURL_ROOT_DIR PATH $optarg + ;; --binary-package) append_cache_entry BINARY_PACKAGING_MODE BOOL true ;; @@ -232,6 +269,14 @@ while [ $# -ne 0 ]; do shift done +if [ "$user_set_scriptdir" != "true" ]; then + append_cache_entry BRO_SCRIPT_INSTALL_PATH STRING $prefix/share/bro +fi + +if [ "$user_set_conffilesdir" != "true" ]; then + append_cache_entry BRO_ETC_INSTALL_DIR PATH $prefix/etc +fi + if [ -d $builddir ]; then # If build directory exists, check if it has a CMake cache if [ -f $builddir/CMakeCache.txt ]; then diff --git a/doc/.gitignore b/doc/.gitignore index 1936cc1d44..15972ee82a 100644 --- a/doc/.gitignore +++ b/doc/.gitignore @@ -1 +1,2 @@ html +*.pyc diff --git a/doc/CHANGES b/doc/CHANGES new file mode 120000 index 0000000000..3e8bc8c0c8 --- /dev/null +++ b/doc/CHANGES @@ -0,0 +1 @@ +../CHANGES \ No newline at end of file diff --git a/doc/CMakeLists.txt b/doc/CMakeLists.txt index acef059ce8..bdbb0e7b69 100644 --- a/doc/CMakeLists.txt +++ b/doc/CMakeLists.txt @@ -1,4 +1,75 @@ -add_custom_target(doc) -add_custom_target(docclean) +set(BIF_SRC_DIR ${PROJECT_SOURCE_DIR}/src) +set(RST_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/rest_output) +set(DOC_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/out) +set(DOC_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}) +set(DOC_SOURCE_WORKDIR ${CMAKE_CURRENT_BINARY_DIR}/sphinx-sources) + +set(MASTER_POLICY_INDEX ${CMAKE_CURRENT_BINARY_DIR}/scripts/policy_index) +set(MASTER_PACKAGE_INDEX ${CMAKE_CURRENT_BINARY_DIR}/scripts/pkg_index) + +file(GLOB_RECURSE DOC_SOURCES FOLLOW_SYMLINKS "*") + +# configure the Sphinx config file (expand variables CMake might know about) +configure_file(${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in + ${CMAKE_CURRENT_BINARY_DIR}/conf.py + @ONLY) add_subdirectory(scripts) + +# The "broxygen" target generates reST documentation for any outdated bro +# scripts and then uses Sphinx to generate HTML documentation from the reST +add_custom_target(broxygen + # copy the template documentation to the build directory + # to give as input for sphinx + COMMAND "${CMAKE_COMMAND}" -E copy_directory + ${DOC_SOURCE_DIR} + ${DOC_SOURCE_WORKDIR} + # copy generated policy script documentation into the + # working copy of the template documentation + COMMAND "${CMAKE_COMMAND}" -E copy_directory + ${RST_OUTPUT_DIR} + ${DOC_SOURCE_WORKDIR}/scripts + # append to the master index of all policy scripts + COMMAND cat ${MASTER_POLICY_INDEX} >> + ${DOC_SOURCE_WORKDIR}/scripts/index.rst + # append to the master index of all policy packages + COMMAND cat ${MASTER_PACKAGE_INDEX} >> + ${DOC_SOURCE_WORKDIR}/scripts/packages.rst + # construct a reST file for each group + COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/bin/group_index_generator.py + ${CMAKE_CURRENT_BINARY_DIR}/scripts/group_list + ${CMAKE_CURRENT_BINARY_DIR}/scripts + ${DOC_SOURCE_WORKDIR}/scripts + # tell sphinx to generate html + COMMAND sphinx-build + -b html + -c ${CMAKE_CURRENT_BINARY_DIR} + -d ${DOC_OUTPUT_DIR}/doctrees + ${DOC_SOURCE_WORKDIR} + ${DOC_OUTPUT_DIR}/html + # create symlink to the html output directory for convenience + COMMAND "${CMAKE_COMMAND}" -E create_symlink + ${DOC_OUTPUT_DIR}/html + ${CMAKE_BINARY_DIR}/html + # copy Broccoli API 
reference into output dir if it exists + COMMAND test -d ${CMAKE_BINARY_DIR}/aux/broccoli/doc/html && ( rm -rf ${CMAKE_BINARY_DIR}/html/broccoli-api && cp -r ${CMAKE_BINARY_DIR}/aux/broccoli/doc/html ${CMAKE_BINARY_DIR}/html/broccoli-api ) || true + WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR} + COMMENT "[Sphinx] Generating HTML policy script docs" + # SOURCES just adds stuff to IDE projects as a convenience + SOURCES ${DOC_SOURCES}) + +# The "broxygenclean" target removes just the Sphinx input/output directories +# from the build directory. +add_custom_target(broxygenclean + COMMAND "${CMAKE_COMMAND}" -E remove_directory + ${DOC_SOURCE_WORKDIR} + COMMAND "${CMAKE_COMMAND}" -E remove_directory + ${DOC_OUTPUT_DIR} + VERBATIM) + +add_dependencies(broxygen broxygenclean restdoc) + +add_custom_target(doc) +add_custom_target(docclean) +add_dependencies(doc broxygen) +add_dependencies(docclean broxygenclean restclean) diff --git a/doc/INSTALL.rst b/doc/INSTALL.rst new file mode 120000 index 0000000000..99d491b4f8 --- /dev/null +++ b/doc/INSTALL.rst @@ -0,0 +1 @@ +../INSTALL \ No newline at end of file diff --git a/doc/Makefile b/doc/Makefile deleted file mode 100644 index 2756093a27..0000000000 --- a/doc/Makefile +++ /dev/null @@ -1,7 +0,0 @@ - -all: - test -d html || mkdir html - for i in *.rst; do echo "$$i ..."; ./bin/rst2html.py $$i >html/`echo $$i | sed 's/rst$$/html/g'`; done - -clean: - rm -rf html diff --git a/doc/README b/doc/README index 1e30d5ccc9..0ba0a8587f 100644 --- a/doc/README +++ b/doc/README @@ -2,32 +2,41 @@ Documentation ============= -This directory contains Bro documentation in reStructured text format +This directory contains Bro documentation in reStructuredText format (see http://docutils.sourceforge.net/rst.html). -Please note that for now these files are primarily intended for use on -http://www.bro-ids.org. While the Bro build process will render local -versions into ``build/doc/`` (if docutils is found), the resulting -HTML is very minimalistic and some features are not supported. In -particular, some links will be broken. +It is the root of a Sphinx source tree and can be modified to add more +common/general documentation, style sheets, JavaScript, etc. The Sphinx +config file is produced from ``conf.py.in``, and can be edited to change +various Sphinx options. +There is also a custom Sphinx domain implemented in ``source/ext/bro.py`` +which adds some reST directives and roles that aid in generating useful +index entries and cross-references. Other extensions can be added in +a similar fashion. + +Either the ``make doc`` or ``make broxygen`` targets in the top-level +Makefile can be used to locally render the reST files into HTML. +Those targets depend on: + +* Python interpreter >= 2.5 +* `Sphinx `_ >= 1.0.1 + +After completion, HTML documentation is symlinked in ``build/html``. + +There are also ``make docclean`` and ``make broxygenclean`` targets to +clean the resulting documentation. Notes for Writing Documentation ------------------------------- -* If you want to refer to a Bro script that's part of the - distribution, use {{'`foo.bro - <{{autodoc_bro_scripts}}/path/to/foo.html>`_'}}. For example, - ``{{'{{autodoc_bro_scripts}}/scripts/base/frameworks/notice/main.html}}'}}``. +* If you want to refer to a document that's part of the + distribution, it currently needs to be copied or otherwise symlinked + somewhere into this Sphinx source tree. Then, it can be referenced + in a toc tree or with the :doc: role.
Use the :download: role to + refer to static files that will not undergo sphinx rendering. -* If you want to refer to a page on the Bro web site, use the - ``docroot`` macro (e.g., - ``{{'href="{{docroot}}/download/index.html"'}}). Make sure to - include the ``index.html`` for the main pages, just as in the - example. - -* If you want to refer to page inside this directory, use a relative - path with HTML extension. (e.g., ``href="quickstart.html``). +* If you want to refer to a page on the Bro web site, use an HTTP URL. Guidelines ---------- diff --git a/doc/_static/960.css b/doc/_static/960.css new file mode 100644 index 0000000000..22c5e18180 --- /dev/null +++ b/doc/_static/960.css @@ -0,0 +1 @@ +body{min-width:960px}.container_12,.container_16{margin-left:auto;margin-right:auto;width:960px}.grid_1,.grid_2,.grid_3,.grid_4,.grid_5,.grid_6,.grid_7,.grid_8,.grid_9,.grid_10,.grid_11,.grid_12,.grid_13,.grid_14,.grid_15,.grid_16{display:inline;float:left;margin-left:10px;margin-right:10px}.push_1,.pull_1,.push_2,.pull_2,.push_3,.pull_3,.push_4,.pull_4,.push_5,.pull_5,.push_6,.pull_6,.push_7,.pull_7,.push_8,.pull_8,.push_9,.pull_9,.push_10,.pull_10,.push_11,.pull_11,.push_12,.pull_12,.push_13,.pull_13,.push_14,.pull_14,.push_15,.pull_15{position:relative}.container_12 .grid_3,.container_16 .grid_4{width:220px}.container_12 .grid_6,.container_16 .grid_8{width:460px}.container_12 .grid_9,.container_16 .grid_12{width:700px}.container_12 .grid_12,.container_16 .grid_16{width:940px}.alpha{margin-left:0}.omega{margin-right:0}.container_12 .grid_1{width:60px}.container_12 .grid_2{width:140px}.container_12 .grid_4{width:300px}.container_12 .grid_5{width:380px}.container_12 .grid_7{width:540px}.container_12 .grid_8{width:620px}.container_12 .grid_10{width:780px}.container_12 .grid_11{width:860px}.container_16 .grid_1{width:40px}.container_16 .grid_2{width:100px}.container_16 .grid_3{width:160px}.container_16 .grid_5{width:280px}.container_16 .grid_6{width:340px}.container_16 .grid_7{width:400px}.container_16 .grid_9{width:520px}.container_16 .grid_10{width:580px}.container_16 .grid_11{width:640px}.container_16 .grid_13{width:760px}.container_16 .grid_14{width:820px}.container_16 .grid_15{width:880px}.container_12 .prefix_3,.container_16 .prefix_4{padding-left:240px}.container_12 .prefix_6,.container_16 .prefix_8{padding-left:480px}.container_12 .prefix_9,.container_16 .prefix_12{padding-left:720px}.container_12 .prefix_1{padding-left:80px}.container_12 .prefix_2{padding-left:160px}.container_12 .prefix_4{padding-left:320px}.container_12 .prefix_5{padding-left:400px}.container_12 .prefix_7{padding-left:560px}.container_12 .prefix_8{padding-left:640px}.container_12 .prefix_10{padding-left:800px}.container_12 .prefix_11{padding-left:880px}.container_16 .prefix_1{padding-left:60px}.container_16 .prefix_2{padding-left:120px}.container_16 .prefix_3{padding-left:180px}.container_16 .prefix_5{padding-left:300px}.container_16 .prefix_6{padding-left:360px}.container_16 .prefix_7{padding-left:420px}.container_16 .prefix_9{padding-left:540px}.container_16 .prefix_10{padding-left:600px}.container_16 .prefix_11{padding-left:660px}.container_16 .prefix_13{padding-left:780px}.container_16 .prefix_14{padding-left:840px}.container_16 .prefix_15{padding-left:900px}.container_12 .suffix_3,.container_16 .suffix_4{padding-right:240px}.container_12 .suffix_6,.container_16 .suffix_8{padding-right:480px}.container_12 .suffix_9,.container_16 .suffix_12{padding-right:720px}.container_12 .suffix_1{padding-right:80px}.container_12 
.suffix_2{padding-right:160px}.container_12 .suffix_4{padding-right:320px}.container_12 .suffix_5{padding-right:400px}.container_12 .suffix_7{padding-right:560px}.container_12 .suffix_8{padding-right:640px}.container_12 .suffix_10{padding-right:800px}.container_12 .suffix_11{padding-right:880px}.container_16 .suffix_1{padding-right:60px}.container_16 .suffix_2{padding-right:120px}.container_16 .suffix_3{padding-right:180px}.container_16 .suffix_5{padding-right:300px}.container_16 .suffix_6{padding-right:360px}.container_16 .suffix_7{padding-right:420px}.container_16 .suffix_9{padding-right:540px}.container_16 .suffix_10{padding-right:600px}.container_16 .suffix_11{padding-right:660px}.container_16 .suffix_13{padding-right:780px}.container_16 .suffix_14{padding-right:840px}.container_16 .suffix_15{padding-right:900px}.container_12 .push_3,.container_16 .push_4{left:240px}.container_12 .push_6,.container_16 .push_8{left:480px}.container_12 .push_9,.container_16 .push_12{left:720px}.container_12 .push_1{left:80px}.container_12 .push_2{left:160px}.container_12 .push_4{left:320px}.container_12 .push_5{left:400px}.container_12 .push_7{left:560px}.container_12 .push_8{left:640px}.container_12 .push_10{left:800px}.container_12 .push_11{left:880px}.container_16 .push_1{left:60px}.container_16 .push_2{left:120px}.container_16 .push_3{left:180px}.container_16 .push_5{left:300px}.container_16 .push_6{left:360px}.container_16 .push_7{left:420px}.container_16 .push_9{left:540px}.container_16 .push_10{left:600px}.container_16 .push_11{left:660px}.container_16 .push_13{left:780px}.container_16 .push_14{left:840px}.container_16 .push_15{left:900px}.container_12 .pull_3,.container_16 .pull_4{left:-240px}.container_12 .pull_6,.container_16 .pull_8{left:-480px}.container_12 .pull_9,.container_16 .pull_12{left:-720px}.container_12 .pull_1{left:-80px}.container_12 .pull_2{left:-160px}.container_12 .pull_4{left:-320px}.container_12 .pull_5{left:-400px}.container_12 .pull_7{left:-560px}.container_12 .pull_8{left:-640px}.container_12 .pull_10{left:-800px}.container_12 .pull_11{left:-880px}.container_16 .pull_1{left:-60px}.container_16 .pull_2{left:-120px}.container_16 .pull_3{left:-180px}.container_16 .pull_5{left:-300px}.container_16 .pull_6{left:-360px}.container_16 .pull_7{left:-420px}.container_16 .pull_9{left:-540px}.container_16 .pull_10{left:-600px}.container_16 .pull_11{left:-660px}.container_16 .pull_13{left:-780px}.container_16 .pull_14{left:-840px}.container_16 .pull_15{left:-900px}.clear{clear:both;display:block;overflow:hidden;visibility:hidden;width:0;height:0}.clearfix:before,.clearfix:after{content:'\0020';display:block;overflow:hidden;visibility:hidden;width:0;height:0}.clearfix:after{clear:both}.clearfix{zoom:1} diff --git a/doc/_static/basic.css b/doc/_static/basic.css new file mode 100644 index 0000000000..1332c7b048 --- /dev/null +++ b/doc/_static/basic.css @@ -0,0 +1,513 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar input[type="text"] { + width: 170px; +} + +div.sphinxsidebar input[type="submit"] { + width: 30px; +} + +img { + border: 0; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li div.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable dl, table.indextable dd { + margin-top: 0; + margin-bottom: 0; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- general body styles --------------------------------------------------- */ + +a.headerlink { + visibility: hidden; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.field-list ul { + padding-left: 1em; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + 
text-align: left; +} + +.align-center { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px 7px 0 7px; + background-color: #ffe; + width: 40%; + float: right; +} + +p.sidebar-title { + font-weight: bold; +} + +/* -- topics ---------------------------------------------------------------- */ + +div.topic { + border: 1px solid #ccc; + padding: 7px 7px 0 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +div.admonition dl { + margin-bottom: 0; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +table.footnote td, table.footnote th { + border: 0 !important; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +dd p { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +dt:target, .highlighted { + background-color: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.refcount { + color: #060; +} + +.optional { + font-size: 1.3em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +td.linenos pre { + padding: 5px 0px; + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + margin-left: 0.5em; +} + +table.highlighttable td { + padding: 0 0.5em 0 0.5em; +} + +tt.descname { + background-color: transparent; + font-weight: bold; +# font-size: 1.2em; +} + +tt.descclassname { + background-color: transparent; +} + +tt.xref, a tt { + background-color: transparent; +# font-weight: bold; +} + +h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + 
+.viewcode-back { + float: right; + font-family: sans-serif; +} + +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} diff --git a/doc/_static/broxygen-extra.css b/doc/_static/broxygen-extra.css new file mode 100644 index 0000000000..051e12e0be --- /dev/null +++ b/doc/_static/broxygen-extra.css @@ -0,0 +1,160 @@ + +a.toc-backref { + color: #333; +} + +h1, h2, h3, h4, h5, h6, +h1 a, h2 a, h3 a, h4 a, h5 a, h6 a { + padding:0 0 0px 0; +} + +ul { + padding-bottom: 0px; +} + +h1 { + font-weight: bold; + font-size: 32px; + line-height:32px; + text-align: center; + padding-top: 3px; + margin-bottom: 30px; + font-family: Palatino,'Palatino Linotype',Georgia,serif;; + color: #000; + border-bottom: 0px; +} + +th.field-name +{ + white-space:nowrap; +} + +h2 { + margin-top: 50px; + padding-bottom: 5px; + margin-bottom: 30px; + border-bottom: 1px solid; + border-color: #aaa; + font-style: normal; +} + +div.section h3 { + font-style: normal; + } + +h3 { + font-size: 20px; + margin-top: 40px; + margin-bottom: 0¡px; + font-weight: bold; + font-style: normal; +} + +h3.widgettitle { + font-style: normal; +} + +h4 { + font-size:18px; + font-style: normal; + margin-bottom: 0em; + margin-top: 40px; + font-style: italic; +} + +h5 { + font-size:16px; +} + +h6 { + font-size:15px; +} + +.toc-backref { + color: #333; +} + +.contents ul { + padding-bottom: 1em; +} + +dl.namespace { + display: none; +} + +dl dt { + font-weight: normal; +} + +table.docutils tbody { + margin: 1em 1em 1em 1em; +} + +table.docutils td { + padding: 5pt 5pt 5pt 5pt; + font-size: 14px; + border-left: 0; + border-right: 0; +} + +dl pre { + font-size: 14px; +} + +table.docutils th { + padding: 5pt 5pt 5pt 5pt; + font-size: 14px; + font-style: normal; + border-left: 0; + border-right: 0; +} + +table.docutils tr:first-child td { + #border-top: 1px solid #aaa; +} + +.download { + font-family:"Courier New", Courier, mono; + font-weight: normal; +} + +dt:target, .highlighted { + background-color: #ccc; +} + +p { + padding-bottom: 0px; +} + +p.last { + margin-bottom: 0px; +} + +dl { + padding: 1em 1em 1em 1em; + background: #fffff0; + border: 1px solid #aaa; + +} + +dl { + margin-bottom: 10px; +} + + +table.docutils { + background: #fffff0; + border-collapse: collapse; + border: 1px solid #ddd; +} + +dl table.docutils { + border: 0; +} + +table.docutils dl { + border: 1px dashed #666; +} + + + diff --git a/doc/_static/broxygen-extra.js b/doc/_static/broxygen-extra.js new file mode 100644 index 0000000000..e69de29bb2 diff --git a/doc/_static/broxygen.css b/doc/_static/broxygen.css new file mode 100644 index 0000000000..967dcd6eaa --- /dev/null +++ b/doc/_static/broxygen.css @@ -0,0 +1,437 @@ +/* Automatically generated. Do not edit. 
*/ + + + + + +#bro-main, #bro-standalone-main { + padding: 0 0 0 0; + position:relative; + z-index:1; +} + +#bro-main { + margin-bottom: 2em; + } + +#bro-standalone-main { + margin-bottom: 0em; + padding-left: 50px; + padding-right: 50px; + } + +#bro-outer { + color: #333; + background: #ffffff; +} + +#bro-title { + font-weight: bold; + font-size: 32px; + line-height:32px; + text-align: center; + padding-top: 3px; + margin-bottom: 30px; + font-family: Palatino,'Palatino Linotype',Georgia,serif;; + color: #000; + } + +.opening:first-letter { + font-size: 24px; + font-weight: bold; + letter-spacing: 0.05em; + } + +.opening { + font-size: 17px; +} + +.version { + text-align: right; + font-size: 12px; + color: #aaa; + line-height: 0; + height: 0; +} + +.git-info-version { + position: relative; + height: 2em; + top: -1em; + color: #ccc; + float: left; + font-size: 12px; +} + +.git-info-date { + position: relative; + height: 2em; + top: -1em; + color: #ccc; + float: right; + font-size: 12px; +} + +body { + font-family:Arial, Helvetica, sans-serif; + font-size:15px; + line-height:22px; + color: #333; + margin: 0px; +} + +h1, h2, h3, h4, h5, h6, +h1 a, h2 a, h3 a, h4 a, h5 a, h6 a { + padding:0 0 20px 0; + font-weight:bold; + text-decoration:none; +} + +div.section h3, div.section h4, div.section h5, div.section h6 { + font-style: italic; +} + +h1, h2 { + font-size:27px; + letter-spacing:-1px; +} + +h3 { + margin-top: 1em; + font-size:18px; +} + +h4 { + font-size:16px; +} + +h5 { + font-size:15px; +} + +h6 { + font-size:12px; +} + +p { + padding:0 0 20px 0; +} + +hr { + background:none; + height:1px; + line-height:1px; + border:0; + margin:0 0 20px 0; +} + +ul, ol { + margin:0 20px 20px 0; + padding-left:40px; +} + +ul.simple, ol.simple { + margin:0 0px 0px 0; +} + +blockquote { + margin:0 0 0 40px; +} + +strong, dfn { + font-weight:bold; +} + +em, dfn { + font-style:italic; +} + +sup, sub { + line-height:0; +} + +pre { + white-space:pre; +} + +pre, code, tt { + font-family:"Courier New", Courier, mono; +} + +dl { + margin: 0 0 20px 0; +} + +dl dt { + font-weight: bold; +} + +dd { + margin:0 0 20px 20px; +} + +small { + font-size:75%; +} + +a:link, +a:visited, +a:active +{ + color: #2a85a7; +} + +a:hover +{ + color:#c24444; +} + +h1, h2, h3, h4, h5, h6, +h1 a, h2 a, h3 a, h4 a, h5 a, h6 a +{ + color: #333; +} + +hr { + border-bottom:1px solid #ddd; +} + +pre { + color: #333; + background: #FFFAE2; + padding: 7px 5px 3px 5px; + margin-bottom: 25px; + margin-top: 0px; +} + +ul { + padding-bottom: 5px; + } + +h1, h2 { + margin-top: 30px; + } + +h1 { + margin-bottom: 50px; + margin-bottom: 20px; + padding-bottom: 5px; + border-bottom: 1px solid; + border-color: #aaa; + } + +h2 { + font-size: 24px; + } + +pre { + -moz-box-shadow:0 0 6px #ddd; + -webkit-box-shadow:0 0 6px #ddd; + box-shadow:0 0 6px #ddd; +} + +a { + text-decoration:none; + } + +p { + padding-bottom: 15px; + } + +p, dd, li { + text-align: justify; + } + +li { + margin-bottom: 5px; + } + + + +#footer .widget_links ul a, +#footer .widget_links ol a +{ + color: #ddd; +} + +#footer .widget_links ul a:hover, +#footer .widget_links ol a:hover +{ + color:#c24444; +} + + +#footer .widget li { + padding-bottom:10px; +} + +#footer .widget_links li { + padding-bottom:1px; +} + +#footer .widget li:last-child { + padding-bottom:0; +} + +#footer .widgettitle { + color: #ddd; +} + + +.widget { + margin:0 0 40px 0; +} + +.widget, .widgettitle { + font-size:12px; + line-height:18px; +} + +.widgettitle { + font-weight:bold; + text-transform:uppercase; + 
padding:0 0 10px 0; + margin:0 0 20px 0; + line-height:100%; +} + +.widget UL, .widget OL { + list-style-type:none; + margin:0; + padding:0; +} + +.widget p { + padding:0; +} + +.widget li { + padding-bottom:10px; +} + +.widget a { + text-decoration:none; +} + +#bro-main .widgettitle, +{ + color: #333; +} + + +.widget img.left { + padding:5px 10px 10px 0; +} + +.widget img.right { + padding:5px 0 10px 10px; +} + +.ads .widgettitle { + margin-right:16px; +} + +.widget { + margin-left: 1em; +} + +.widgettitle { + color: #333; +} + +.widgettitle { + border-bottom:1px solid #ddd; +} + + +.sidebar-toc ul li { + padding-bottom: 0px; + text-align: left; + list-style-type: square; + list-style-position: inside; + padding-left: 1em; + text-indent: -1em; + } + +.sidebar-toc ul li li { + margin-left: 1em; + margin-bottom: 0px; + list-style-type: square; + } + +.sidebar-toc ul li li a { + font-size: 8pt; +} + +.contents { + padding: 10px; + background: #FFFAE2; + margin: 20px; + } + +.topic-title { + font-size: 20px; + font-weight: bold; + padding: 0px 0px 5px 0px; + text-align: center; + padding-top: .5em; +} + +.contents li { + margin-bottom: 0px; + list-style-type: square; +} + +.contents ul ul li { + margin-left: 0px; + padding-left: 0px; + padding-top: 0em; + font-size: 90%; + list-style-type: square; + font-weight: normal; +} + +.contents ul ul ul li { + list-style-type: none; +} + +.contents ul ul ul ul li { + display:none; +} + +.contents ul li { + padding-top: 1em; + list-style-type: none; + font-weight: bold; +} + +.contents ul { + margin-left: 0px; + padding-left: 2em; + margin: 0px 0px 0px 0px; +} + +.note, .warning, .error { + margin-left: 2em; + margin-right: 2em; + margin-top: 1.5em; + margin-bottom: 1.5em; + padding: 0.5em 1em 0.5em 1em; + overflow: auto; + border-left: solid 3px #aaa; + font-size: 15px; + color: #333; +} + +.admonition p { + margin-left: 1em; + } + +.admonition-title { + font-size: 16px; + font-weight: bold; + color: #000; + padding-bottom: 0em; + margin-bottom: .5em; + margin-top: 0em; +} \ No newline at end of file diff --git a/doc/_static/logo-bro.png b/doc/_static/logo-bro.png new file mode 100644 index 0000000000..96cc5d443c Binary files /dev/null and b/doc/_static/logo-bro.png differ diff --git a/doc/_static/pygments.css b/doc/_static/pygments.css new file mode 100644 index 0000000000..3c96f6ae4e --- /dev/null +++ b/doc/_static/pygments.css @@ -0,0 +1,58 @@ +.hll { background-color: #ffffcc } +.c { color: #aaaaaa; font-style: italic } /* Comment */ +.err { color: #F00000; background-color: #F0A0A0 } /* Error */ +.k { color: #0000aa } /* Keyword */ +.cm { color: #aaaaaa; font-style: italic } /* Comment.Multiline */ +.cp { color: #4c8317 } /* Comment.Preproc */ +.c1 { color: #aaaaaa; font-style: italic } /* Comment.Single */ +.cs { color: #0000aa; font-style: italic } /* Comment.Special */ +.gd { color: #aa0000 } /* Generic.Deleted */ +.ge { font-style: italic } /* Generic.Emph */ +.gr { color: #aa0000 } /* Generic.Error */ +.gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.gi { color: #00aa00 } /* Generic.Inserted */ +.go { color: #888888 } /* Generic.Output */ +.gp { color: #555555 } /* Generic.Prompt */ +.gs { font-weight: bold } /* Generic.Strong */ +.gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.gt { color: #aa0000 } /* Generic.Traceback */ +.kc { color: #0000aa } /* Keyword.Constant */ +.kd { color: #0000aa } /* Keyword.Declaration */ +.kn { color: #0000aa } /* Keyword.Namespace */ +.kp { color: #0000aa } /* 
Keyword.Pseudo */ +.kr { color: #0000aa } /* Keyword.Reserved */ +.kt { color: #00aaaa } /* Keyword.Type */ +.m { color: #009999 } /* Literal.Number */ +.s { color: #aa5500 } /* Literal.String */ +.na { color: #1e90ff } /* Name.Attribute */ +.nb { color: #00aaaa } /* Name.Builtin */ +.nc { color: #00aa00; text-decoration: underline } /* Name.Class */ +.no { color: #aa0000 } /* Name.Constant */ +.nd { color: #888888 } /* Name.Decorator */ +.ni { color: #800000; font-weight: bold } /* Name.Entity */ +.nf { color: #00aa00 } /* Name.Function */ +.nn { color: #00aaaa; text-decoration: underline } /* Name.Namespace */ +.nt { color: #1e90ff; font-weight: bold } /* Name.Tag */ +.nv { color: #aa0000 } /* Name.Variable */ +.ow { color: #0000aa } /* Operator.Word */ +.w { color: #bbbbbb } /* Text.Whitespace */ +.mf { color: #009999 } /* Literal.Number.Float */ +.mh { color: #009999 } /* Literal.Number.Hex */ +.mi { color: #009999 } /* Literal.Number.Integer */ +.mo { color: #009999 } /* Literal.Number.Oct */ +.sb { color: #aa5500 } /* Literal.String.Backtick */ +.sc { color: #aa5500 } /* Literal.String.Char */ +.sd { color: #aa5500 } /* Literal.String.Doc */ +.s2 { color: #aa5500 } /* Literal.String.Double */ +.se { color: #aa5500 } /* Literal.String.Escape */ +.sh { color: #aa5500 } /* Literal.String.Heredoc */ +.si { color: #aa5500 } /* Literal.String.Interpol */ +.sx { color: #aa5500 } /* Literal.String.Other */ +.sr { color: #009999 } /* Literal.String.Regex */ +.s1 { color: #aa5500 } /* Literal.String.Single */ +.ss { color: #0000aa } /* Literal.String.Symbol */ +.bp { color: #00aaaa } /* Name.Builtin.Pseudo */ +.vc { color: #aa0000 } /* Name.Variable.Class */ +.vg { color: #aa0000 } /* Name.Variable.Global */ +.vi { color: #aa0000 } /* Name.Variable.Instance */ +.il { color: #009999 } /* Literal.Number.Integer.Long */ diff --git a/doc/_templates/layout.html b/doc/_templates/layout.html new file mode 100644 index 0000000000..77d9d1de1c --- /dev/null +++ b/doc/_templates/layout.html @@ -0,0 +1,113 @@ +{% extends "!layout.html" %} + +{% block extrahead %} + + + + + + +{% endblock %} + +{% block header %} + +{% endblock %} + +{% block relbar2 %}{% endblock %} +{% block relbar1 %}{% endblock %} + +{% block content %} + +
+
+ +
+ +
+ {{ relbar() }} +
+ +
+ {% block body %} + {% endblock %} +
+
+ + +
+ +
+ +
+
+ + + + + {% if next %} +
+

+ Next Page +

+

+ {{ next.title }} +

+
+ {% endif %} + + {% if prev %} +
+

+ Previous Page +

+

+ {{ prev.title }} +

+
+ {% endif %} + + {%- if pagename != "search" %} + + + {%- endif %} + +
+
+ +
+
+
+ + Copyright {{ copyright }}. + Last updated on {{ last_updated }}. + Created using Sphinx {{ sphinx_version }}. + +
+
+
+
+ + +{% endblock %} + +{% block footer %} + +{% endblock %} diff --git a/doc/scripts/group_index_generator.py b/doc/bin/group_index_generator.py similarity index 97% rename from doc/scripts/group_index_generator.py rename to doc/bin/group_index_generator.py index e9e17f3ca9..720081e8b5 100755 --- a/doc/scripts/group_index_generator.py +++ b/doc/bin/group_index_generator.py @@ -49,6 +49,7 @@ with open(group_list, 'r') as f_group_list: if not os.path.exists(os.path.dirname(group_file)): os.makedirs(os.path.dirname(group_file)) with open(group_file, 'w') as f_group_file: + f_group_file.write(":orphan:\n\n") title = "Package Index: %s\n" % os.path.dirname(group) f_group_file.write(title); for n in range(len(title)): diff --git a/doc/bin/rst2html.py b/doc/bin/rst2html.py deleted file mode 100755 index 79c835d6c4..0000000000 --- a/doc/bin/rst2html.py +++ /dev/null @@ -1,62 +0,0 @@ -#!/usr/bin/env python -# -# Derived from docutils standard rst2html.py. -# -# $Id: rst2html.py 4564 2006-05-21 20:44:42Z wiemann $ -# Author: David Goodger -# Copyright: This module has been placed in the public domain. -# -# -# Extension: we add to dummy directorives "code" and "console" to be -# compatible with Bro's web site setup. - -try: - import locale - locale.setlocale(locale.LC_ALL, '') -except: - pass - -import textwrap - -from docutils.core import publish_cmdline, default_description - -from docutils import nodes -from docutils.parsers.rst import directives, Directive -from docutils.parsers.rst.directives.body import LineBlock - -class Literal(Directive): - #max_line_length = 68 - max_line_length = 0 - - required_arguments = 0 - optional_arguments = 1 - final_argument_whitespace = True - has_content = True - - def wrapped_content(self): - content = [] - - if Literal.max_line_length: - for line in self.content: - content += textwrap.wrap(line, Literal.max_line_length, subsequent_indent=" ") - else: - content = self.content - - return u'\n'.join(content) - - def run(self): - self.assert_has_content() - content = self.wrapped_content() - literal = nodes.literal_block(content, content) - return [literal] - -directives.register_directive('code', Literal) -directives.register_directive('console', Literal) - -description = ('Generates (X)HTML documents from standalone reStructuredText ' - 'sources. ' + default_description) - -publish_cmdline(writer_name='html', description=description) - - - diff --git a/doc/cluster.rst b/doc/cluster.rst index afd5f2fe35..ba26471edc 100644 --- a/doc/cluster.rst +++ b/doc/cluster.rst @@ -11,9 +11,7 @@ Architecture The figure below illustrates the main components of a Bro cluster. -.. {{git_pull('bro:doc/deployment.png')}} - -.. image:: deployment.bro.png +.. image:: images/deployment.png Tap *** @@ -47,7 +45,7 @@ This is the Bro process that sniffs network traffic and does protocol analysis o The rule of thumb we have followed recently is to allocate approximately 1 core for every 80Mbps of traffic that is being analyzed, however this estimate could be extremely traffic mix specific. It has generally worked for mixed traffic with many users and servers. For example, if your traffic peaks around 2Gbps (combined) and you want to handle traffic at peak load, you may want to have 26 cores available (2048 / 80 == 25.6). If the 80Mbps estimate works for your traffic, this could be handled by 3 physical hosts dedicated to being workers with each one containing dual 6-core processors. 
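As a rough illustration of the sizing arithmetic above, the following Python sketch (not part of the Bro distribution; the helper name and per-host core count are assumptions) turns a peak traffic figure into a worker core and host estimate:

.. code:: python

    import math

    def workers_needed(peak_mbps, mbps_per_core=80, cores_per_host=12):
        """Back-of-the-envelope estimate based on the 1 core per 80Mbps rule of thumb."""
        cores = int(math.ceil(peak_mbps / float(mbps_per_core)))  # 2048 / 80 -> 25.6 -> 26
        hosts = int(math.ceil(cores / float(cores_per_host)))     # dual 6-core boxes -> 3
        return cores, hosts

    print(workers_needed(2048))  # (26, 3), matching the example above

Treat the output only as a starting point; as noted above, the 80Mbps estimate is highly dependent on the traffic mix.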
-Once a flow based load balancer is put into place this model is extremely easy to scale as well so it’s recommended that you guess at the amount of hardware you will need to fully analyze your traffic. If it turns out that you need more, it’s relatively easy to easy increase the size of the cluster in most cases. +Once a flow based load balancer is put into place this model is extremely easy to scale as well so it’s recommended that you guess at the amount of hardware you will need to fully analyze your traffic. If it turns out that you need more, it’s relatively easy to increase the size of the cluster in most cases. Frontend Options ---------------- @@ -60,7 +58,7 @@ Discrete hardware flow balancers cPacket ^^^^^^^ -If you are monitoring one or more 10G physical interfaces, the recommended solution is to use either a cFlow or cVu device from cPacket because they are currently being used very successfully at a number of sites. These devices will perform layer-2 load balancing by rewriting the destination ethernet MAC address to cause each packet associated with a particular flow to have the same destination MAC. The packets can then be passed directly to a monitoring host where each worker has a BPF filter to limit it's visibility to only that stream of flows or onward to a commodity switch to split the traffic out to multiple 1G interfaces for the workers. This can ultimately greatly reduce costs since workers can use relatively inexpensive 1G interfaces. +If you are monitoring one or more 10G physical interfaces, the recommended solution is to use either a cFlow or cVu device from cPacket because they are currently being used very successfully at a number of sites. These devices will perform layer-2 load balancing by rewriting the destination ethernet MAC address to cause each packet associated with a particular flow to have the same destination MAC. The packets can then be passed directly to a monitoring host where each worker has a BPF filter to limit its visibility to only that stream of flows or onward to a commodity switch to split the traffic out to multiple 1G interfaces for the workers. This can ultimately greatly reduce costs since workers can use relatively inexpensive 1G interfaces. OpenFlow Switches ^^^^^^^^^^^^^^^^^ @@ -78,7 +76,7 @@ The PF_RING software for Linux has a “clustering” feature which will do flow Netmap ^^^^^^ -FreeBSD has an in-progress project named Netmap which will enabled flow based load balancing as well. When it becomes viable for real world use, this document will be updated. +FreeBSD has an in-progress project named Netmap which will enable flow based load balancing as well. When it becomes viable for real world use, this document will be updated. Click! 
Software Router ^^^^^^^^^^^^^^^^^^^^^^ diff --git a/doc/components/binpac/README.rst b/doc/components/binpac/README.rst new file mode 120000 index 0000000000..4eb90ef658 --- /dev/null +++ b/doc/components/binpac/README.rst @@ -0,0 +1 @@ +../../../aux/binpac/README \ No newline at end of file diff --git a/doc/components/bro-aux/README.rst b/doc/components/bro-aux/README.rst new file mode 120000 index 0000000000..628879525d --- /dev/null +++ b/doc/components/bro-aux/README.rst @@ -0,0 +1 @@ +../../../aux/bro-aux/README \ No newline at end of file diff --git a/doc/components/broccoli-python/README.rst b/doc/components/broccoli-python/README.rst new file mode 120000 index 0000000000..4187e87202 --- /dev/null +++ b/doc/components/broccoli-python/README.rst @@ -0,0 +1 @@ +../../../aux/broccoli/bindings/broccoli-python/README \ No newline at end of file diff --git a/doc/components/broccoli-ruby/README.rst b/doc/components/broccoli-ruby/README.rst new file mode 120000 index 0000000000..da71663099 --- /dev/null +++ b/doc/components/broccoli-ruby/README.rst @@ -0,0 +1 @@ +../../../aux/broccoli/bindings/broccoli-ruby/README \ No newline at end of file diff --git a/doc/components/broccoli/README.rst b/doc/components/broccoli/README.rst new file mode 120000 index 0000000000..d32c70ccd9 --- /dev/null +++ b/doc/components/broccoli/README.rst @@ -0,0 +1 @@ +../../../aux/broccoli/README \ No newline at end of file diff --git a/doc/components/broccoli/broccoli-manual.rst b/doc/components/broccoli/broccoli-manual.rst new file mode 120000 index 0000000000..bd5e8d711f --- /dev/null +++ b/doc/components/broccoli/broccoli-manual.rst @@ -0,0 +1 @@ +../../../aux/broccoli/doc/broccoli-manual.rst \ No newline at end of file diff --git a/doc/components/broctl/README.rst b/doc/components/broctl/README.rst new file mode 120000 index 0000000000..cba305f48a --- /dev/null +++ b/doc/components/broctl/README.rst @@ -0,0 +1 @@ +../../../aux/broctl/doc/broctl.rst \ No newline at end of file diff --git a/doc/components/btest/README.rst b/doc/components/btest/README.rst new file mode 120000 index 0000000000..0da2935df1 --- /dev/null +++ b/doc/components/btest/README.rst @@ -0,0 +1 @@ +../../../aux/btest/README \ No newline at end of file diff --git a/doc/components/capstats/README.rst b/doc/components/capstats/README.rst new file mode 120000 index 0000000000..cb2380145d --- /dev/null +++ b/doc/components/capstats/README.rst @@ -0,0 +1 @@ +../../../aux/broctl/aux/capstats/README \ No newline at end of file diff --git a/doc/components/pysubnettree/README.rst b/doc/components/pysubnettree/README.rst new file mode 120000 index 0000000000..42ce17d303 --- /dev/null +++ b/doc/components/pysubnettree/README.rst @@ -0,0 +1 @@ +../../../aux/broctl/aux/pysubnettree/README \ No newline at end of file diff --git a/doc/components/trace-summary/README.rst b/doc/components/trace-summary/README.rst new file mode 120000 index 0000000000..78778364bd --- /dev/null +++ b/doc/components/trace-summary/README.rst @@ -0,0 +1 @@ +../../../aux/broctl/aux/trace-summary/README \ No newline at end of file diff --git a/doc/scripts/conf.py.in b/doc/conf.py.in similarity index 90% rename from doc/scripts/conf.py.in rename to doc/conf.py.in index 419209f580..2e93e82502 100644 --- a/doc/scripts/conf.py.in +++ b/doc/conf.py.in @@ -15,7 +15,7 @@ import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. 
If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. -sys.path.insert(0, os.path.abspath('source/ext')) +sys.path.insert(0, os.path.abspath('sphinx-sources/ext')) # -- General configuration ----------------------------------------------------- @@ -24,10 +24,10 @@ sys.path.insert(0, os.path.abspath('source/ext')) # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. -extensions = ['bro'] +extensions = ['bro', 'rst_directive', 'sphinx.ext.todo', 'adapt-toc'] # Add any paths that contain templates here, relative to this directory. -templates_path = ['source/_templates'] +templates_path = ['sphinx-sources/_templates', 'sphinx-sources/_static'] # The suffix of source filenames. source_suffix = '.rst' @@ -40,7 +40,7 @@ master_doc = 'index' # General information about the project. project = u'Bro' -copyright = u'2011, The Bro Project' +copyright = u'2012, The Bro Project' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the @@ -59,7 +59,7 @@ release = '@VERSION@' # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. -#today_fmt = '%B %d, %Y' +today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. @@ -90,18 +90,20 @@ pygments_style = 'sphinx' # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. -html_theme = 'default' +html_theme = 'basic' + +html_last_updated_fmt = '%B %d, %Y' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. -#html_theme_options = {} +html_theme_options = { } # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to -# " v documentation". +# " v Documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. @@ -119,7 +121,7 @@ html_theme = 'default' # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ['source/_static'] +html_static_path = ['sphinx-sources/_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. @@ -130,7 +132,9 @@ html_static_path = ['source/_static'] #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. -#html_sidebars = {} +html_sidebars = { +'**': ['localtoc.html', 'sourcelink.html', 'searchbox.html'], +} # Additional templates that should be rendered to pages, maps page names to # template names. @@ -163,8 +167,9 @@ html_static_path = ['source/_static'] #html_file_suffix = None # Output file base name for HTML help builder. -htmlhelp_basename = 'Brodoc' +htmlhelp_basename = 'Broxygen' +html_add_permalinks = None # -- Options for LaTeX output -------------------------------------------------- @@ -204,7 +209,6 @@ latex_documents = [ # If false, no module index is generated. 
#latex_domain_indices = True - # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples @@ -213,3 +217,6 @@ man_pages = [ ('index', 'bro', u'Bro Documentation', [u'The Bro Project'], 1) ] + +# -- Options for todo plugin -------------------------------------------- +todo_include_todos=True diff --git a/doc/ext/adapt-toc.py b/doc/ext/adapt-toc.py new file mode 100644 index 0000000000..12ee006977 --- /dev/null +++ b/doc/ext/adapt-toc.py @@ -0,0 +1,29 @@ + +import sys +import re + +# Removes the first TOC level, which is just the page title. +def process_html_toc(app, pagename, templatename, context, doctree): + + if not "toc" in context: + return + + toc = context["toc"] + + lines = toc.strip().split("\n") + lines = lines[2:-2] + + toc = "\n".join(lines) + toc = "
    " + toc + + context["toc"] = toc + + # print >>sys.stderr, pagename + # print >>sys.stderr, context["toc"] + # print >>sys.stderr, "-----" + # print >>sys.stderr, toc + # print >>sys.stderr, "====" + +def setup(app): + app.connect('html-page-context', process_html_toc) + diff --git a/doc/scripts/source/ext/bro.py b/doc/ext/bro.py similarity index 58% rename from doc/scripts/source/ext/bro.py rename to doc/ext/bro.py index 7ec02f76a8..9bdd86bd9a 100644 --- a/doc/scripts/source/ext/bro.py +++ b/doc/ext/bro.py @@ -4,6 +4,9 @@ def setup(Sphinx): Sphinx.add_domain(BroDomain) + Sphinx.add_node(see) + Sphinx.add_directive_to_domain('bro', 'see', SeeDirective) + Sphinx.connect('doctree-resolved', process_see_nodes) from sphinx import addnodes from sphinx.domains import Domain, ObjType, Index @@ -18,7 +21,57 @@ from docutils.parsers.rst import Directive from docutils.parsers.rst import directives from docutils.parsers.rst.roles import set_classes +class see(nodes.General, nodes.Element): + refs = [] + +class SeeDirective(Directive): + has_content = True + + def run(self): + n = see('') + n.refs = string.split(string.join(self.content)) + return [n] + +def process_see_nodes(app, doctree, fromdocname): + for node in doctree.traverse(see): + content = [] + para = nodes.paragraph() + para += nodes.Text("See also:", "See also:") + for name in node.refs: + join_str = " " + if name != node.refs[0]: + join_str = ", " + link_txt = join_str + name; + + if name not in app.env.domaindata['bro']['idtypes']: + # Just create the text and issue warning + app.env.warn(fromdocname, + 'unknown target for ".. bro:see:: %s"' % (name)) + para += nodes.Text(link_txt, link_txt) + else: + # Create a reference + typ = app.env.domaindata['bro']['idtypes'][name] + todocname = app.env.domaindata['bro']['objects'][(typ, name)] + + newnode = nodes.reference('', '') + innernode = nodes.literal(_(name), _(name)) + newnode['refdocname'] = todocname + newnode['refuri'] = app.builder.get_relative_uri( + fromdocname, todocname) + newnode['refuri'] += '#' + typ + '-' + name + newnode.append(innernode) + para += nodes.Text(join_str, join_str) + para += newnode + + content.append(para) + node.replace_self(content) + class BroGeneric(ObjectDescription): + def update_type_map(self, idname): + if 'idtypes' not in self.env.domaindata['bro']: + self.env.domaindata['bro']['idtypes'] = {} + self.env.domaindata['bro']['idtypes'][idname] = self.objtype + def add_target_and_index(self, name, sig, signode): targetname = self.objtype + '-' + name if targetname not in self.state.document.ids: @@ -29,9 +82,6 @@ class BroGeneric(ObjectDescription): objects = self.env.domaindata['bro']['objects'] key = (self.objtype, name) -# this is commented out mostly just to avoid having a special directive -# for events in order to avoid the duplicate warnings in that case - """ if key in objects: self.env.warn(self.env.docname, 'duplicate description of %s %s, ' % @@ -39,8 +89,9 @@ class BroGeneric(ObjectDescription): 'other instance in ' + self.env.doc2path(objects[key]), self.lineno) - """ objects[key] = self.env.docname + self.update_type_map(name) + indextext = self.get_index_text(self.objtype, name) if indextext: self.indexnode['entries'].append(('single', indextext, @@ -65,6 +116,8 @@ class BroNamespace(BroGeneric): objects = self.env.domaindata['bro']['objects'] key = (self.objtype, name) objects[key] = self.env.docname + self.update_type_map(name) + indextext = self.get_index_text(self.objtype, name) self.indexnode['entries'].append(('single', 
indextext, targetname, targetname)) @@ -91,10 +144,17 @@ class BroEnum(BroGeneric): objects = self.env.domaindata['bro']['objects'] key = (self.objtype, name) objects[key] = self.env.docname + self.update_type_map(name) + indextext = self.get_index_text(self.objtype, name) #self.indexnode['entries'].append(('single', indextext, # targetname, targetname)) m = sig.split() + if m[1] == "Notice::Type": + if 'notices' not in self.env.domaindata['bro']: + self.env.domaindata['bro']['notices'] = [] + self.env.domaindata['bro']['notices'].append( + (m[0], self.env.docname, targetname)) self.indexnode['entries'].append(('single', "%s (enum values); %s" % (m[1], m[0]), targetname, targetname)) @@ -113,6 +173,26 @@ class BroAttribute(BroGeneric): def get_index_text(self, objectname, name): return _('%s (attribute)') % (name) +class BroNotices(Index): + """ + Index subclass to provide the Bro notices index. + """ + + name = 'noticeindex' + localname = l_('Bro Notice Index') + shortname = l_('notices') + + def generate(self, docnames=None): + content = {} + for n in self.domain.env.domaindata['bro']['notices']: + modname = n[0].split("::")[0] + entries = content.setdefault(modname, []) + entries.append([n[0], 0, n[1], n[2], '', '', '']) + + content = sorted(content.iteritems()) + + return content, False + class BroDomain(Domain): """Bro domain.""" name = 'bro' @@ -140,8 +220,13 @@ class BroDomain(Domain): 'id': XRefRole(), 'enum': XRefRole(), 'attr': XRefRole(), + 'see': XRefRole(), } + indices = [ + BroNotices, + ] + initial_data = { 'objects': {}, # fullname -> docname, objtype } @@ -154,13 +239,27 @@ class BroDomain(Domain): def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): objects = self.data['objects'] - objtypes = self.objtypes_for_role(typ) - for objtype in objtypes: - if (objtype, target) in objects: - return make_refnode(builder, fromdocname, - objects[objtype, target], - objtype + '-' + target, - contnode, target + ' ' + objtype) + if typ == "see": + if target not in self.data['idtypes']: + self.env.warn(fromdocname, + 'unknown target for ":bro:see:`%s`"' % (target)) + return [] + objtype = self.data['idtypes'][target] + return make_refnode(builder, fromdocname, + objects[objtype, target], + objtype + '-' + target, + contnode, target + ' ' + objtype) + else: + objtypes = self.objtypes_for_role(typ) + for objtype in objtypes: + if (objtype, target) in objects: + return make_refnode(builder, fromdocname, + objects[objtype, target], + objtype + '-' + target, + contnode, target + ' ' + objtype) + else: + self.env.warn(fromdocname, + 'unknown target for ":bro:%s:`%s`"' % (typ, target)) def get_objects(self): for (typ, name), docname in self.data['objects'].iteritems(): diff --git a/doc/ext/bro_lexer/__init__.py b/doc/ext/bro_lexer/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/doc/ext/bro_lexer/__init__.pyc b/doc/ext/bro_lexer/__init__.pyc new file mode 100644 index 0000000000..1b03eaf1c1 Binary files /dev/null and b/doc/ext/bro_lexer/__init__.pyc differ diff --git a/doc/ext/bro_lexer/bro.py b/doc/ext/bro_lexer/bro.py new file mode 100644 index 0000000000..ae2566a8de --- /dev/null +++ b/doc/ext/bro_lexer/bro.py @@ -0,0 +1,76 @@ +from pygments.lexer import RegexLexer, bygroups, include +from pygments.token import * + +__all__ = ["BroLexer"] + +class BroLexer(RegexLexer): + name = 'Bro' + aliases = ['bro'] + filenames = ['*.bro'] + + _hex = r'[0-9a-fA-F_]+' + _float = r'((\d*\.?\d+)|(\d+\.?\d*))([eE][-+]?\d+)?' 
+ _h = r'[A-Za-z0-9][-A-Za-z0-9]*' + + tokens = { + 'root': [ + # Whitespace + ('^@.*?\n', Comment.Preproc), + (r'#.*?\n', Comment.Single), + (r'\n', Text), + (r'\s+', Text), + (r'\\\n', Text), + # Keywords + (r'(add|alarm|break|case|const|continue|delete|do|else|enum|event' + r'|export|for|function|if|global|local|module|next' + r'|of|print|redef|return|schedule|when|while)\b', Keyword), + (r'(addr|any|bool|count|counter|double|file|int|interval|net' + r'|pattern|port|record|set|string|subnet|table|time|timer' + r'|vector)\b', Keyword.Type), + (r'(T|F)\b', Keyword.Constant), + (r'(&)((?:add|delete|expire)_func|attr|(create|read|write)_expire' + r'|default|raw_output|encrypt|group|log' + r'|mergeable|optional|persistent|priority|redef' + r'|rotate_(?:interval|size)|synchronized)\b', bygroups(Punctuation, + Keyword)), + (r'\s+module\b', Keyword.Namespace), + # Addresses, ports and networks + (r'\d+/(tcp|udp|icmp|unknown)\b', Number), + (r'(\d+\.){3}\d+', Number), + (r'(' + _hex + r'){7}' + _hex, Number), + (r'0x' + _hex + r'(' + _hex + r'|:)*::(' + _hex + r'|:)*', Number), + (r'((\d+|:)(' + _hex + r'|:)*)?::(' + _hex + r'|:)*', Number), + (r'(\d+\.\d+\.|(\d+\.){2}\d+)', Number), + # Hostnames + (_h + r'(\.' + _h + r')+', String), + # Numeric + (_float + r'\s+(day|hr|min|sec|msec|usec)s?\b', Literal.Date), + (r'0[xX]' + _hex, Number.Hex), + (_float, Number.Float), + (r'\d+', Number.Integer), + (r'/', String.Regex, 'regex'), + (r'"', String, 'string'), + # Operators + (r'[!%*/+-:<=>?~|]', Operator), + (r'([-+=&|]{2}|[+-=!><]=)', Operator), + (r'(in|match)\b', Operator.Word), + (r'[{}()\[\]$.,;]', Punctuation), + # Identfier + (r'([_a-zA-Z]\w*)(::)', bygroups(Name, Name.Namespace)), + (r'[a-zA-Z_][a-zA-Z_0-9]*', Name) + ], + 'string': [ + (r'"', String, '#pop'), + (r'\\([\\abfnrtv"\']|x[a-fA-F0-9]{2,4}|[0-7]{1,3})', String.Escape), + (r'[^\\"\n]+', String), + (r'\\\n', String), + (r'\\', String) + ], + 'regex': [ + (r'/', String.Regex, '#pop'), + (r'\\[\\nt/]', String.Regex), # String.Escape is too intense. + (r'[^\\/\n]+', String.Regex), + (r'\\\n', String.Regex), + (r'\\', String.Regex) + ] + } diff --git a/doc/ext/bro_lexer/bro.pyc b/doc/ext/bro_lexer/bro.pyc new file mode 100644 index 0000000000..c7b4fde790 Binary files /dev/null and b/doc/ext/bro_lexer/bro.pyc differ diff --git a/doc/ext/rst_directive.py b/doc/ext/rst_directive.py new file mode 100644 index 0000000000..434eef2c61 --- /dev/null +++ b/doc/ext/rst_directive.py @@ -0,0 +1,180 @@ +def setup(app): + pass + +# -*- coding: utf-8 -*- +""" + +Modified version of the the Pygments reStructuredText directive. -Robin + +This provides two new directives: + + - .. code:: [] + + Highlights the following code block according to if + given (e.g., "c", "python", etc.). + + - .. console:: + + Highlits the following code block as a shell session. + + For compatibility with the original version, "sourcecode" is + equivalent to "code". + +Original comment: + + The Pygments reStructuredText directive + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + This fragment is a Docutils_ 0.5 directive that renders source code + (to HTML only, currently) via Pygments. + + To use it, adjust the options below and copy the code into a module + that you import on initialization. The code then automatically + registers a ``sourcecode`` directive that you can use instead of + normal code blocks like this:: + + .. sourcecode:: python + + My code goes here. + + If you want to have different code styles, e.g. 
one with line numbers + and one without, add formatters with their names in the VARIANTS dict + below. You can invoke them instead of the DEFAULT one by using a + directive option:: + + .. sourcecode:: python + :linenos: + + My code goes here. + + Look at the `directive documentation`_ to get all the gory details. + + .. _Docutils: http://docutils.sf.net/ + .. _directive documentation: + http://docutils.sourceforge.net/docs/howto/rst-directives.html + + :copyright: Copyright 2006-2010 by the Pygments team, see AUTHORS. + :license: BSD, see LICENSE for details. +""" + +# Options +# ~~~~~~~ + +# Set to True if you want inline CSS styles instead of classes +INLINESTYLES = False + +from pygments.formatters import HtmlFormatter + +class MyHtmlFormatter(HtmlFormatter): + def format_unencoded(self, tokensource, outfile): + + # A NOP currently. + new_tokens = [] + for (i, piece) in tokensource: + new_tokens += [(i, piece)] + + return super(MyHtmlFormatter, self).format_unencoded(new_tokens, outfile) + +# The default formatter +DEFAULT = MyHtmlFormatter(noclasses=INLINESTYLES, cssclass="pygments") + +# Add name -> formatter pairs for every variant you want to use +VARIANTS = { + # 'linenos': HtmlFormatter(noclasses=INLINESTYLES, linenos=True), +} + + +import textwrap + +from docutils import nodes +from docutils.parsers.rst import directives, Directive + +from pygments import highlight +from pygments.lexers import get_lexer_by_name, guess_lexer, TextLexer +from pygments.token import Text, Keyword, Error, Operator, Name +from pygments.filter import Filter + +# Ugly hack to register the Bro lexer. I'm sure there's a better way to do it, +# but it's not obvious ... +from bro_lexer.bro import BroLexer +from pygments.lexers._mapping import LEXERS +LEXERS['BroLexer'] = ('bro_lexer.bro', BroLexer.name, BroLexer.aliases, BroLexer.filenames, ()) + +class Pygments(Directive): + """ Source code syntax hightlighting. + """ + #max_line_length = 68 + max_line_length = 0 + + required_arguments = 0 + optional_arguments = 1 + final_argument_whitespace = True + option_spec = dict([(key, directives.flag) for key in VARIANTS]) + has_content = True + + def wrapped_content(self): + content = [] + + if Console.max_line_length: + for line in self.content: + content += textwrap.wrap(line, Console.max_line_length, subsequent_indent=" ") + else: + content = self.content + + return u'\n'.join(content) + + def run(self): + self.assert_has_content() + + content = self.wrapped_content() + + if len(self.arguments) > 0: + try: + lexer = get_lexer_by_name(self.arguments[0]) + except (ValueError, IndexError): + # lexer not found, use default. + lexer = TextLexer() + else: + lexer = guess_lexer(content) + + # import sys + # print >>sys.stderr, self.arguments, lexer.__class__ + + # take an arbitrary option if more than one is given + formatter = self.options and VARIANTS[self.options.keys()[0]] or DEFAULT + parsed = highlight(content, lexer, formatter) + return [nodes.raw('', parsed, format='html')] + +class MyFilter(Filter): + def filter(self, lexer, stream): + + bol = True + + for (ttype, value) in stream: + # Color the '>' prompt sign. + if bol and ttype is Text and value == ">": + ttype = Name.Variable.Class # This gives us a nice red. + + # Discolor builtin, that can look funny. 
+ if ttype is Name.Builtin: + ttype = Text + + bol = value.endswith("\n") + + yield (ttype, value) + +class Console(Pygments): + required_arguments = 0 + optional_arguments = 0 + + def run(self): + self.assert_has_content() + content = self.wrapped_content() + lexer = get_lexer_by_name("sh") + lexer.add_filter(MyFilter()) + parsed = highlight(content, lexer, DEFAULT) + return [nodes.raw('', parsed, format='html')] + +directives.register_directive('sourcecode', Pygments) +directives.register_directive('code', Pygments) +directives.register_directive('console', Console) diff --git a/doc/faq.rst b/doc/faq.rst new file mode 100644 index 0000000000..76f81cc618 --- /dev/null +++ b/doc/faq.rst @@ -0,0 +1,217 @@ + +========================== +Frequently Asked Questions +========================== + +.. raw:: html + +
    + +.. contents:: + +Installation and Configuration +============================== + +How do I upgrade to a new version of Bro? +----------------------------------------- + +There's two suggested approaches, either install Bro using the same +installation prefix directory as before, or pick a new prefix and copy +local customizations over. + +Re-Use Previous Install Prefix +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you choose to configure and install Bro with the same prefix +directory as before, local customization and configuration to files in +``$prefix/share/bro/site`` and ``$prefix/etc`` won't be overwritten +(``$prefix`` indicating the root of where Bro was installed). Also, logs +generated at run-time won't be touched by the upgrade. (But making +a backup of local changes before proceeding is still recommended.) + +After upgrading, remember to check ``$prefix/share/bro/site`` and +``$prefix/etc`` for ``.example`` files, which indicate the +distribution's version of the file differs from the local one, which may +include local changes. Review the differences, and make adjustments +as necessary (for differences that aren't the result of a local change, +use the new version's). + +Pick a New Install prefix +^^^^^^^^^^^^^^^^^^^^^^^^^ + +If you want to install the newer version in a different prefix +directory than before, you can just copy local customization and +configuration files from ``$prefix/share/bro/site`` and ``$prefix/etc`` +to the new location (``$prefix`` indicating the root of where Bro was +originally installed). Make sure to review the files for difference +before copying and make adjustments as necessary (for differences that +aren't the result of a local change, use the new version's). Of +particular note, the copied version of ``$prefix/etc/broctl.cfg`` is +likely to need changes to the ``SpoolDir`` and ``LogDir`` settings. + +How can I tune my operating system for best capture performance? +---------------------------------------------------------------- + +Here are some pointers to more information: + +* Fabian Schneider's research on `high performance packet capture + `_ + +* `NSMWiki `_ has page on + *Collecting Data*. + +* An `IMC 2010 paper + `_ by + Lothar Braun et. al evaluates packet capture performance on + commodity hardware + +Are there any gotchas regarding interface configuration for live capture? Or why might I be seeing abnormally large packets much greater than interface MTU? +------------------------------------------------------------------------------------------------------------------------------------------------------------- + +Some NICs offload the reassembly of traffic into "superpackets" so that +fewer packets are then passed up the stack (e.g. "TCP segmentation +offload", or "generic segmentation offload"). The result is that the +capturing application will observe packets much larger than the MTU size +of the interface they were captured from and may also interfere with the +maximum packet capture length, ``snaplen``, so it's a good idea to disable +an interface's offloading features. + +You can use the ``ethtool`` program on Linux to view and disable +offloading features of an interface. See this page for more explicit +directions: + +http://securityonion.blogspot.com/2011/10/when-is-full-packet-capture-not-full.html + +What does an error message like ``internal error: NB-DNS error`` mean? +---------------------------------------------------------------------- + +That often means that DNS is not set up correctly on the system +running Bro. 
Try verifying from the command line that DNS lookups +work, e.g., ``host www.google.com``. + +I am using OpenBSD and having problems installing Bro? +------------------------------------------------------ + +One potential issue is that the top-level Makefile may not work with +OpenBSD's default make program, in which case you can either install +the ``gmake`` package and use it instead or first change into the +``build/`` directory before doing either ``make`` or ``make install`` +such that the CMake-generated Makefile's are used directly. + +Generally, please note that we do not regularly test OpenBSD builds. +We appreciate any patches that improve Bro's support for this +platform. + +How do BroControl options affect Bro script variables? +------------------------------------------------------ + +Some (but not all) BroControl options override a corresponding Bro script variable. +For example, setting the BroControl option "LogRotationInterval" will override +the value of the Bro script variable "Log::default_rotation_interval". +See the :doc:`BroControl Documentation ` to find out +which BroControl options override Bro script variables, and for more discussion +on site-specific customization. + +Usage +===== + +How can I identify backscatter? +------------------------------- + +Identifying backscatter via connections labeled as ``OTH`` is not a reliable +means to detect backscatter. Backscatter is however visible by interpreting +the contents of the ``history`` field in the ``conn.log`` file. The basic idea +is to watch for connections that never had an initial ``SYN`` but started +instead with a ``SYN-ACK`` or ``RST`` (though this latter generally is just +discarded). Here are some history fields which provide backscatter examples: +``hAFf``, ``r``. Refer to the conn protocol analysis scripts to interpret the +individual character meanings in the history field. + +Is there help for understanding Bro's resource consumption? +----------------------------------------------------------- + +There are two scripts that collect statistics on resource usage: +``misc/stats.bro`` and ``misc/profiling.bro``. The former is quite +lightweight, while the latter should only be used for debugging. + +How can I capture packets as an unprivileged user? +-------------------------------------------------- + +Normally, unprivileged users cannot capture packets from a network interface, +which means they would not be able to use Bro to read/analyze live traffic. +However, there are operating system specific ways to enable packet capture +permission for non-root users, which is worth doing in the context of using +Bro to monitor live traffic. + +With Linux Capabilities +^^^^^^^^^^^^^^^^^^^^^^^ + +Fully implemented since Linux kernel 2.6.24, capabilities are a way of +parceling superuser privileges into distinct units. Attach capabilities +required to capture packets to the ``bro`` executable file like this: + +.. console:: + + sudo setcap cap_net_raw,cap_net_admin=eip /path/to/bro + +Now any unprivileged user should have the capability to capture packets +using Bro provided that they have the traditional file permissions to +read/execute the ``bro`` binary. + +With BPF Devices +^^^^^^^^^^^^^^^^ + +Systems using Berkeley Packet Filter (BPF) (e.g. FreeBSD & Mac OS X) +can allow users with read access to a BPF device to capture packets from +it using libpcap. + +* Example of manually changing BPF device permissions to allow users in + the ``admin`` group to capture packets: + +.. 
console:: + + sudo chgrp admin /dev/bpf* + sudo chmod g+r /dev/bpf* + +* Example of configuring devfs to set permissions of BPF devices, adding + entries to ``/etc/devfs.conf`` to grant ``admin`` group permission to + capture packets: + +.. console:: + + sudo sh -c 'echo "own bpf root:admin" >> /etc/devfs.conf' + sudo sh -c 'echo "perm bpf 0640" >> /etc/devfs.conf' + sudo service devfs restart + +.. note:: As of Mac OS X 10.6, the BPF device is on devfs, but the used version + of devfs isn't capable of setting the device permissions. The permissions + can be changed manually, but they will not survive a reboot. + +Why isn't Bro producing the logs I expect? (A Note About Checksums) +------------------------------------------------------------------- + +Normally, Bro's event engine will discard packets which don't have valid +checksums. This can be a problem if one wants to analyze locally +generated/captured traffic on a system that offloads checksumming to the +network adapter. In that case, all transmitted/captured packets will have +bad checksums because they haven't yet been calculated by the NIC, thus +such packets will not undergo analysis defined in Bro policy scripts as they +normally would. Bad checksums in traces may also be a result of some packet +alteration tools. + +Bro has two options to workaround such situations and ignore bad checksums: + +1) The ``-C`` command line option to ``bro``. +2) An option called ``ignore_checksums`` that can be redefined at the + policy script layer (e.g. in your ``$PREFIX/share/bro/site/local.bro``): + + .. code:: bro + + redef ignore_checksums = T; + +The other alternative is to disable checksum offloading for your +network adapter, but this is not always possible or desirable. + +.. raw:: html + +
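As an aside on the upgrade answer earlier in this FAQ: reviewing the ``.example`` files that an upgrade leaves behind can be scripted. The sketch below is only an illustration using the Python standard library; the default prefix path and the function name are assumptions, and running ``diff`` by hand works just as well.

.. code:: python

    import difflib
    import os

    def show_example_diffs(prefix="/usr/local/bro"):
        """Diff each shipped *.example file against the corresponding local file."""
        for d in (os.path.join(prefix, "share", "bro", "site"),
                  os.path.join(prefix, "etc")):
            for root, _, files in os.walk(d):
                for name in files:
                    if not name.endswith(".example"):
                        continue
                    shipped = os.path.join(root, name)
                    local = shipped[:-len(".example")]
                    if not os.path.exists(local):
                        continue
                    with open(shipped) as new, open(local) as old:
                        for line in difflib.unified_diff(old.readlines(),
                                                         new.readlines(),
                                                         fromfile=local,
                                                         tofile=shipped):
                            print(line.rstrip("\n"))

    show_example_diffs()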
    diff --git a/doc/geoip.rst b/doc/geoip.rst index 53413b5bee..bd9ae0c08d 100644 --- a/doc/geoip.rst +++ b/doc/geoip.rst @@ -3,7 +3,7 @@ GeoLocation =========== -.. class:: opening +.. rst-class:: opening During the process of creating policy scripts the need may arise to find the geographic location for an IP address. Bro has support diff --git a/doc/deployment.png b/doc/images/deployment.png similarity index 100% rename from doc/deployment.png rename to doc/images/deployment.png diff --git a/doc/index.rst b/doc/index.rst index c7c85bd21f..ed14be1dd2 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -1,50 +1,90 @@ +.. Bro documentation master file - +================= Bro Documentation ================= -`Getting Started <{{git('bro:doc/quickstart.rst')}}>`_ - A quick introduction into using Bro 2.x. +Guides +------ -`Bro 1.5 to 2.0 Upgrade Guide <{{git('bro:doc/upgrade.rst')}}>`_ - Guidelines and notes about upgrading from Bro 1.5 to 2.x. Lots of - things have changed, so make sure to read this when upgrading. +.. toctree:: + :maxdepth: 1 -`BroControl <{{git('broctl:doc/broctl.rst')}}>`_ - An interactive console for managing Bro installations. - -`Script Reference <{{autodoc_bro_scripts}}/index.html>`_ - A complete reference of all policy scripts shipped with Bro. - -`FAQ <{{docroot}}/documentation/faq.html>`_ - A list with frequently asked questions. - -`How to Report a Problem <{{docroot}}/documentation/reporting-problems.html>`_ - Some advice for when you see Bro doing something you believe it - shouldn't. + INSTALL + quickstart + upgrade + faq + reporting-problems Frameworks ---------- -Bro comes with a number of frameworks, some of which are described in -more detail here: +.. toctree:: + :maxdepth: 1 -`Notice <{{git('bro:doc/notice.rst')}}>`_ - The notice framework. - -`Logging <{{git('bro:doc/logging.rst')}}>`_ - Customizing and extensing Bro's logging. - -`Cluster <{{git('bro:doc/cluster.rst')}}>`_ - Setting up a Bro Cluster when a single box can't handle the traffic anymore. - -`Signatures <{{git('bro:doc/signatures.rst')}}>`_ - Bro has support for traditional NIDS signatures as well. + notice + logging + input + cluster + signatures How-Tos ------- -We also collect more specific How-Tos on specific topics: +.. toctree:: + :maxdepth: 1 -`Using GeoIP in Bro scripts <{{git('bro:doc/geoip.rst')}}>`_ - Installation and usage of the the GeoIP library. + geoip + +Script Reference +---------------- + +.. toctree:: + :maxdepth: 1 + + scripts/packages + scripts/index + scripts/builtins + scripts/bifs + +Other Bro Components +-------------------- + +The following are snapshots of documentation for components that come +with this version of Bro (|version|). Since they can also be used +independently, see the `download page +`_ for documentation of any +current, independent component releases. + +.. toctree:: + :maxdepth: 1 + + BinPAC - A protocol parser generator + Broccoli - The Bro Client Communication Library (README) + Broccoli - User Manual + Broccoli Python Bindings + Broccoli Ruby Bindings + BroControl - Interactive Bro management shell + Bro-Aux - Small auxiliary tools for Bro + BTest - A unit testing framework + Capstats - Command-line packet statistic tool + PySubnetTree - Python module for CIDR lookups + trace-summary - Script for generating break-downs of network traffic + +The `Broccoli API Reference `_ may also be of +interest. 
+ +Other Indices and References +---------------------------- + +* :ref:`General Index ` +* `Notice Index `_ +* :ref:`search` + +Internal References +------------------- + +.. toctree:: + :maxdepth: 1 + + scripts/internal diff --git a/doc/input.rst b/doc/input.rst new file mode 100644 index 0000000000..2945918733 --- /dev/null +++ b/doc/input.rst @@ -0,0 +1,407 @@ +============================================== +Loading Data into Bro with the Input Framework +============================================== + +.. rst-class:: opening + + Bro now features a flexible input framework that allows users + to import data into Bro. Data is either read into Bro tables or + converted to events which can then be handled by scripts. + This document gives an overview of how to use the input framework + with some examples. For more complex scenarios it is + worthwhile to take a look at the unit tests in + ``testing/btest/scripts/base/frameworks/input/``. + +.. contents:: + +Reading Data into Tables +======================== + +Probably the most interesting use-case of the input framework is to +read data into a Bro table. + +By default, the input framework reads the data in the same format +as it is written by the logging framework in Bro - a tab-separated +ASCII file. + +We will show the ways to read files into Bro with a simple example. +For this example we assume that we want to import data from a blacklist +that contains server IP addresses as well as the timestamp and the reason +for the block. + +An example input file could look like this: + +:: + + #fields ip timestamp reason + 192.168.17.1 1333252748 Malware host + 192.168.27.2 1330235733 Botnet server + 192.168.250.3 1333145108 Virus detected + +To read a file into a Bro table, two record types have to be defined. +One contains the types and names of the columns that should constitute the +table keys and the second contains the types and names of the columns that +should constitute the table values. + +In our case, we want to be able to lookup IPs. Hence, our key record +only contains the server IP. All other elements should be stored as +the table content. + +The two records are defined as: + +.. code:: bro + + type Idx: record { + ip: addr; + }; + + type Val: record { + timestamp: time; + reason: string; + }; + +Note that the names of the fields in the record definitions have to correspond +to the column names listed in the '#fields' line of the log file, in this +case 'ip', 'timestamp', and 'reason'. + +The log file is read into the table with a simple call of the ``add_table`` +function: + +.. code:: bro + + global blacklist: table[addr] of Val = table(); + + Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist]); + Input::remove("blacklist"); + +With these three lines we first create an empty table that should contain the +blacklist data and then instruct the input framework to open an input stream +named ``blacklist`` to read the data into the table. The third line removes the +input stream again, because we do not need it any more after the data has been +read. + +Because some data files can - potentially - be rather big, the input framework +works asynchronously. A new thread is created for each new input stream. +This thread opens the input data file, converts the data into a Bro format and +sends it back to the main Bro thread. + +Because of this, the data is not immediately accessible. 
Depending on the +size of the data source it might take from a few milliseconds up to a few +seconds until all data is present in the table. Please note that this means +that when Bro is running without an input source or on very short captured +files, it might terminate before the data is present in the system (because +Bro already handled all packets before the import thread finished). + +Subsequent calls to an input source are queued until the previous action has +been completed. Because of this, it is, for example, possible to call +``add_table`` and ``remove`` in two subsequent lines: the ``remove`` action +will remain queued until the first read has been completed. + +Once the input framework finishes reading from a data source, it fires +the ``end_of_data`` event. Once this event has been received all data +from the input file is available in the table. + +.. code:: bro + + event Input::end_of_data(name: string, source: string) { + # now all data is in the table + print blacklist; + } + +The table can also already be used while the data is still being read - it +just might not contain all lines in the input file when the event has not +yet fired. After it has been populated it can be used like any other Bro +table and blacklist entries can easily be tested: + +.. code:: bro + + if ( 192.168.18.12 in blacklist ) + # take action + + +Re-reading and streaming data +----------------------------- + +For many data sources, like for many blacklists, the source data is continually +changing. For these cases, the Bro input framework supports several ways to +deal with changing data files. + +The first, very basic method is an explicit refresh of an input stream. When +an input stream is open, the function ``force_update`` can be called. This +will trigger a complete refresh of the table; any changed elements from the +file will be updated. After the update is finished the ``end_of_data`` +event will be raised. + +In our example the call would look like: + +.. code:: bro + + Input::force_update("blacklist"); + +The input framework also supports two automatic refresh modes. The first mode +continually checks if a file has been changed. If the file has been changed, it +is re-read and the data in the Bro table is updated to reflect the current +state. Each time a change has been detected and all the new data has been +read into the table, the ``end_of_data`` event is raised. + +The second mode is a streaming mode. This mode assumes that the source data +file is an append-only file to which new data is continually appended. Bro +continually checks for new data at the end of the file and will add the new +data to the table. If newer lines in the file have the same index as previous +lines, they will overwrite the values in the output table. Because of the +nature of streaming reads (data is continually added to the table), +the ``end_of_data`` event is never raised when using streaming reads. + +The reading mode can be selected by setting the ``mode`` option of the +add_table call. Valid values are ``MANUAL`` (the default), ``REREAD`` +and ``STREAM``. + +Hence, when adding ``$mode=Input::REREAD`` to the previous example, the +blacklist table will always reflect the state of the blacklist input file. + +.. 
code:: bro + + Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD]); + +Receiving change events +----------------------- + +When re-reading files, it might be interesting to know exactly which lines in +the source files have changed. + +For this reason, the input framework can raise an event each time when a data +item is added to, removed from or changed in a table. + +The event definition looks like this: + +.. code:: bro + + event entry(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) { + # act on values + } + +The event has to be specified in ``$ev`` in the ``add_table`` call: + +.. code:: bro + + Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, $ev=entry]); + +The ``description`` field of the event contains the arguments that were +originally supplied to the add_table call. Hence, the name of the stream can, +for example, be accessed with ``description$name``. ``tpe`` is an enum +containing the type of the change that occurred. + +If a line that was not previously present in the table has been added, +then ``tpe`` will contain ``Input::EVENT_NEW``. In this case ``left`` contains +the index of the added table entry and ``right`` contains the values of the +added entry. + +If a table entry that already was present is altered during the re-reading or +streaming read of a file, ``tpe`` will contain ``Input::EVENT_CHANGED``. In +this case ``left`` contains the index of the changed table entry and ``right`` +contains the values of the entry before the change. The reason for this is +that the table already has been updated when the event is raised. The current +value in the table can be ascertained by looking up the current table value. +Hence it is possible to compare the new and the old values of the table. + +If a table element is removed because it was no longer present during a +re-read, then ``tpe`` will contain ``Input::REMOVED``. In this case ``left`` +contains the index and ``right`` the values of the removed element. + + +Filtering data during import +---------------------------- + +The input framework also allows a user to filter the data during the import. +To this end, predicate functions are used. A predicate function is called +before a new element is added/changed/removed from a table. The predicate +can either accept or veto the change by returning true for an accepted +change and false for a rejected change. Furthermore, it can alter the data +before it is written to the table. + +The following example filter will reject to add entries to the table when +they were generated over a month ago. It will accept all changes and all +removals of values that are already present in the table. + +.. code:: bro + + Input::add_table([$source="blacklist.file", $name="blacklist", $idx=Idx, $val=Val, $destination=blacklist, $mode=Input::REREAD, + $pred(typ: Input::Event, left: Idx, right: Val) = { + if ( typ != Input::EVENT_NEW ) { + return T; + } + return ( ( current_time() - right$timestamp ) < (30 day) ); + }]); + +To change elements while they are being imported, the predicate function can +manipulate ``left`` and ``right``. Note that predicate functions are called +before the change is committed to the table. 
Hence, when a table element is changed (``tpe`` is ``Input::EVENT_CHANGED``),
+``left`` and ``right`` contain the new values, but the destination
+(``blacklist`` in our example) still contains the old values. This allows
+predicate functions to examine the changes between the old and the new
+version before deciding if they should be allowed.
+
+Different readers
+-----------------
+
+The input framework supports different kinds of readers for different kinds
+of source data files. At the moment, the default reader reads ASCII files
+formatted in the Bro log file format (tab-separated values). In addition,
+Bro comes with two other readers. The ``RAW`` reader reads a file that is
+split by a specified record separator (usually newline). The contents are
+returned line-by-line as strings; it can, for example, be used to read
+configuration files and the like and is probably only useful in event mode
+and not for reading data into tables.
+
+Another included reader is the ``BENCHMARK`` reader, which is used to
+optimize the speed of the input framework. It can generate arbitrary
+amounts of semi-random data in all Bro data types supported by the input
+framework.
+
+In the future, the input framework will get support for new data sources
+such as different databases.
+
+Add_table options
+-----------------
+
+This section lists all possible options that can be used for the add_table
+function and gives a short explanation of their use. Most of the options
+have already been discussed in the previous sections; a combined example
+is sketched after the list.
+
+The possible fields that can be set for a table stream are:
+
+    ``source``
+        A mandatory string identifying the source of the data.
+        For the ASCII reader this is the filename.
+
+    ``name``
+        A mandatory name for the stream that can later be used
+        to manipulate it further.
+
+    ``idx``
+        Record type that defines the index of the table.
+
+    ``val``
+        Record type that defines the values of the table.
+
+    ``reader``
+        The reader used for this stream. Default is ``READER_ASCII``.
+
+    ``mode``
+        The mode in which the stream is opened. Possible values are
+        ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
+        ``MANUAL`` means that the file is not updated after it has
+        been read. Changes to the file will not be reflected in the
+        data Bro knows. ``REREAD`` means that the whole file is read
+        again each time a change is found. This should be used for
+        files that are mapped to a table where individual lines can
+        change. ``STREAM`` means that the data from the file is
+        streamed. Events / table entries will be generated as new
+        data is appended to the file.
+
+    ``destination``
+        The destination table.
+
+    ``ev``
+        Optional event that is raised when values are added to,
+        changed in, or deleted from the table. Events are passed an
+        Input::Event description as the first argument, the index
+        record as the second argument and the values as the third
+        argument.
+
+    ``pred``
+        Optional predicate that can prevent entries from being added
+        to the table and events from being sent.
+
+    ``want_record``
+        Boolean value that defines if the event wants to receive the
+        fields inside of a single record value or individually
+        (default). This can be used if ``val`` is a record
+        containing only one type. In this case, if ``want_record`` is
+        set to false, the table will contain elements of the type
+        contained in ``val``.
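+
+As an illustration of these options, the following sketch (not a verbatim
+excerpt from the Bro distribution) combines several of them in a single
+call, reusing the ``Idx``, ``Val``, ``entry`` and ``blacklist`` names from
+the earlier examples; adapt the names and options to your own script:
+
+.. code:: bro
+
+    Input::add_table([$source="blacklist.file",
+                      $name="blacklist",
+                      $idx=Idx,
+                      $val=Val,
+                      $destination=blacklist,
+                      $mode=Input::REREAD,
+                      $ev=entry]);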
+
+Reading Data to Events
+======================
+
+The second mode supported by the input framework is reading data into Bro
+events, using event streams, instead of reading it into a table.
+
+Event streams work very similarly to the table streams that were already
+discussed in detail. To read the blacklist of the previous example
+into an event stream, the following Bro code could be used:
+
+.. code:: bro
+
+    type Val: record {
+        ip: addr;
+        timestamp: time;
+        reason: string;
+    };
+
+    event blacklistentry(description: Input::EventDescription, tpe: Input::Event, ip: addr, timestamp: time, reason: string) {
+        # work with event data
+    }
+
+    event bro_init() {
+        Input::add_event([$source="blacklist.file", $name="blacklist", $fields=Val, $ev=blacklistentry]);
+    }
+
+
+The main difference in the declaration of the event stream is that an event
+stream needs no separate index and value declarations -- instead, all source
+data types are provided in a single record definition.
+
+Apart from this, event streams work exactly the same as table streams and
+support most of the options that are also supported for table streams.
+
+The options that can be set when creating an event stream with
+``add_event`` are:
+
+    ``source``
+        A mandatory string identifying the source of the data.
+        For the ASCII reader this is the filename.
+
+    ``name``
+        A mandatory name for the stream that can later be used
+        to remove it.
+
+    ``fields``
+        Name of a record type containing the fields that should be
+        retrieved from the input stream.
+
+    ``ev``
+        The event that is fired after a line has been read from the
+        input source. The first argument that is passed to the event
+        is an Input::Event structure, followed by the data, either
+        inside of a record (if ``want_record`` is set) or as
+        individual fields. The Input::Event structure indicates
+        whether the received line is ``NEW``, has been ``CHANGED``
+        or ``DELETED``. Since the ASCII reader cannot track this
+        information for event streams, the value is always ``NEW``
+        at the moment.
+
+    ``mode``
+        The mode in which the stream is opened. Possible values are
+        ``MANUAL``, ``REREAD`` and ``STREAM``. Default is ``MANUAL``.
+        ``MANUAL`` means that the file is not updated after it has
+        been read. Changes to the file will not be reflected in the
+        data Bro knows. ``REREAD`` means that the whole file is read
+        again each time a change is found. This should be used for
+        files that are mapped to a table where individual lines can
+        change. ``STREAM`` means that the data from the file is
+        streamed. Events / table entries will be generated as new
+        data is appended to the file.
+
+    ``reader``
+        The reader used for this stream. Default is ``READER_ASCII``.
+
+    ``want_record``
+        Boolean value that defines if the event wants to receive the
+        fields inside of a single record value or individually
+        (default). If this is set to true, the event will receive a
+        single record of the type provided in ``fields``.
+
+
diff --git a/doc/logging-dataseries.rst b/doc/logging-dataseries.rst
new file mode 100644
index 0000000000..139a13f813
--- /dev/null
+++ b/doc/logging-dataseries.rst
@@ -0,0 +1,186 @@
+
+=============================
+Binary Output with DataSeries
+=============================
+
+.. rst-class:: opening
+
+   Bro's default ASCII log format is not exactly the most efficient
+   way for storing and searching large volumes of data.
An an + alternative, Bro comes with experimental support for `DataSeries + `_ + output, an efficient binary format for recording structured bulk + data. DataSeries is developed and maintained at HP Labs. + +.. contents:: + +Installing DataSeries +--------------------- + +To use DataSeries, its libraries must be available at compile-time, +along with the supporting *Lintel* package. Generally, both are +distributed on `HP Labs' web site +`_. Currently, however, you need +to use recent development versions for both packages, which you can +download from github like this:: + + git clone http://github.com/dataseries/Lintel + git clone http://github.com/dataseries/DataSeries + +To build and install the two into ````, do:: + + ( cd Lintel && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX= .. && make && make install ) + ( cd DataSeries && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX= .. && make && make install ) + +Please refer to the packages' documentation for more information about +the installation process. In particular, there's more information on +required and optional `dependencies for Lintel +`_ +and `dependencies for DataSeries +`_. +For users on RedHat-style systems, you'll need the following:: + + yum install libxml2-devel boost-devel + +Compiling Bro with DataSeries Support +------------------------------------- + +Once you have installed DataSeries, Bro's ``configure`` should pick it +up automatically as long as it finds it in a standard system location. +Alternatively, you can specify the DataSeries installation prefix +manually with ``--with-dataseries=``. Keep an eye on +``configure``'s summary output, if it looks like the following, Bro +found DataSeries and will compile in the support:: + + # ./configure --with-dataseries=/usr/local + [...] + ====================| Bro Build Summary |===================== + [...] + DataSeries: true + [...] + ================================================================ + +Activating DataSeries +--------------------- + +The direct way to use DataSeries is to switch *all* log files over to +the binary format. To do that, just add ``redef +Log::default_writer=Log::WRITER_DATASERIES;`` to your ``local.bro``. +For testing, you can also just pass that on the command line:: + + bro -r trace.pcap Log::default_writer=Log::WRITER_DATASERIES + +With that, Bro will now write all its output into DataSeries files +``*.ds``. You can inspect these using DataSeries's set of command line +tools, which its installation process installs into ``/bin``. +For example, to convert a file back into an ASCII representation:: + + $ ds2txt conn.log + [... We skip a bunch of metadata here ...] + ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes + 1300475167.096535 CRCC5OdDlXe 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0 + 1300475167.097012 o7XBsfvo3U1 fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0 + 1300475167.099816 pXPi1kPMgxb 141.142.220.50 5353 224.0.0.251 5353 udp 0.000000 0 0 S0 F 0 D 1 179 0 0 + 1300475168.853899 R7sOc16woCj 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 38 89 SF F 0 Dd 1 66 1 117 + 1300475168.854378 Z6dfHVmt0X7 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 52 99 SF F 0 Dd 1 80 1 127 + 1300475168.854837 k6T92WxgNAh 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 38 183 SF F 0 Dd 1 66 1 211 + [...] 
+ +(``--skip-all`` suppresses the metadata.) + +Note that the ASCII conversion is *not* equivalent to Bro's default +output format. + +You can also switch only individual files over to DataSeries by adding +code like this to your ``local.bro``: + +.. code:: bro + + event bro_init() + { + local f = Log::get_filter(Conn::LOG, "default"); # Get default filter for connection log. + f$writer = Log::WRITER_DATASERIES; # Change writer type. + Log::add_filter(Conn::LOG, f); # Replace filter with adapted version. + } + +Bro's DataSeries writer comes with a few tuning options, see +:doc:`scripts/base/frameworks/logging/writers/dataseries`. + +Working with DataSeries +======================= + +Here are a few examples of using DataSeries command line tools to work +with the output files. + +* Printing CSV:: + + $ ds2txt --csv conn.log + ts,uid,id.orig_h,id.orig_p,id.resp_h,id.resp_p,proto,service,duration,orig_bytes,resp_bytes,conn_state,local_orig,missed_bytes,history,orig_pkts,orig_ip_bytes,resp_pkts,resp_ip_bytes + 1258790493.773208,ZTtgbHvf4s3,192.168.1.104,137,192.168.1.255,137,udp,dns,3.748891,350,0,S0,F,0,D,7,546,0,0 + 1258790451.402091,pOY6Rw7lhUd,192.168.1.106,138,192.168.1.255,138,udp,,0.000000,0,0,S0,F,0,D,1,229,0,0 + 1258790493.787448,pn5IiEslca9,192.168.1.104,138,192.168.1.255,138,udp,,2.243339,348,0,S0,F,0,D,2,404,0,0 + 1258790615.268111,D9slyIu3hFj,192.168.1.106,137,192.168.1.255,137,udp,dns,3.764626,350,0,S0,F,0,D,7,546,0,0 + [...] + + Add ``--separator=X`` to set a different separator. + +* Extracting a subset of columns:: + + $ ds2txt --select '*' ts,id.resp_h,id.resp_p --skip-all conn.log + 1258790493.773208 192.168.1.255 137 + 1258790451.402091 192.168.1.255 138 + 1258790493.787448 192.168.1.255 138 + 1258790615.268111 192.168.1.255 137 + 1258790615.289842 192.168.1.255 138 + [...] + +* Filtering rows:: + + $ ds2txt --where '*' 'duration > 5 && id.resp_p > 1024' --skip-all conn.ds + 1258790631.532888 V8mV5WLITu5 192.168.1.105 55890 239.255.255.250 1900 udp 15.004568 798 0 S0 F 0 D 6 966 0 0 + 1258792413.439596 tMcWVWQptvd 192.168.1.105 55890 239.255.255.250 1900 udp 15.004581 798 0 S0 F 0 D 6 966 0 0 + 1258794195.346127 cQwQMRdBrKa 192.168.1.105 55890 239.255.255.250 1900 udp 15.005071 798 0 S0 F 0 D 6 966 0 0 + 1258795977.253200 i8TEjhWd2W8 192.168.1.105 55890 239.255.255.250 1900 udp 15.004824 798 0 S0 F 0 D 6 966 0 0 + 1258797759.160217 MsLsBA8Ia49 192.168.1.105 55890 239.255.255.250 1900 udp 15.005078 798 0 S0 F 0 D 6 966 0 0 + 1258799541.068452 TsOxRWJRGwf 192.168.1.105 55890 239.255.255.250 1900 udp 15.004082 798 0 S0 F 0 D 6 966 0 0 + [...] + +* Calculate some statistics: + + Mean/stddev/min/max over a column:: + + $ dsstatgroupby '*' basic duration from conn.ds + # Begin DSStatGroupByModule + # processed 2159 rows, where clause eliminated 0 rows + # count(*), mean(duration), stddev, min, max + 2159, 42.7938, 1858.34, 0, 86370 + [...] + + Quantiles of total connection volume:: + + $ dsstatgroupby '*' quantile 'orig_bytes + resp_bytes' from conn.ds + [...] + 2159 data points, mean 24616 +- 343295 [0,1.26615e+07] + quantiles about every 216 data points: + 10%: 0, 124, 317, 348, 350, 350, 601, 798, 1469 + tails: 90%: 1469, 95%: 7302, 99%: 242629, 99.5%: 1226262 + [...] + +The ``man`` pages for these tools show further options, and their +``-h`` option gives some more information (either can be a bit cryptic +unfortunately though). + +Deficiencies +------------ + +Due to limitations of the DataSeries format, one cannot inspect its +files before they have been fully written. 
In other words, when using DataSeries, it's currently not possible to
+inspect the live log files inside the spool directory before they are
+rotated to their final location. It seems that this could be fixed with
+some effort, and we will work with the DataSeries development team on that
+if the format gains traction among Bro users.
+
+Likewise, we're considering writing custom command line tools for
+interacting with DataSeries files, making that a bit more convenient
+than what the standard utilities provide.
diff --git a/doc/logging-elasticsearch.rst b/doc/logging-elasticsearch.rst
new file mode 100644
index 0000000000..7571c68219
--- /dev/null
+++ b/doc/logging-elasticsearch.rst
@@ -0,0 +1,89 @@
+
+=========================================
+Indexed Logging Output with ElasticSearch
+=========================================
+
+.. rst-class:: opening
+
+   Bro's default ASCII log format is not exactly the most efficient
+   way for searching large volumes of data. ElasticSearch
+   is a new data storage technology for dealing with tons of data.
+   It's also a search engine built on top of Apache's Lucene
+   project. It scales very well, both for distributed indexing and
+   distributed searching.
+
+.. contents::
+
+Warning
+-------
+
+This writer plugin is still in testing and is not yet recommended for
+production use! The approach to how logs are handled in the plugin is "fire
+and forget" at this time; there is no error handling if the server fails to
+respond successfully to the insertion request.
+
+Installing ElasticSearch
+------------------------
+
+Download the latest version from: .
+Once extracted, start ElasticSearch with::
+
+  # ./bin/elasticsearch
+
+For more detailed information, refer to the ElasticSearch installation
+documentation: http://www.elasticsearch.org/guide/reference/setup/installation.html
+
+Compiling Bro with ElasticSearch Support
+----------------------------------------
+
+First, ensure that you have libcurl installed, then run configure::
+
+  # ./configure
+  [...]
+  ====================| Bro Build Summary |=====================
+  [...]
+  cURL: true
+  [...]
+  ElasticSearch: true
+  [...]
+  ================================================================
+
+Activating ElasticSearch
+------------------------
+
+The easiest way to enable ElasticSearch output is to load the
+tuning/logs-to-elasticsearch.bro script. If you are using BroControl, the
+following line in local.bro will enable it:
+
+.. console::
+
+   @load tuning/logs-to-elasticsearch
+
+With that, Bro will now write most of its logs into ElasticSearch in addition
+to maintaining the ASCII logs like it does by default. That script has
+some tunable options for choosing which logs to send to ElasticSearch; refer
+to the autogenerated script documentation for those options.
+
+There is an interface named Brownian being written specifically to integrate
+with the data that Bro outputs into ElasticSearch. It can be found here::
+
+   https://github.com/grigorescu/Brownian
+
+Tuning
+------
+
+A common problem encountered with ElasticSearch is too many files being held
+open. The ElasticSearch website has some suggestions on how to increase the
+open file limit:
+
+ - http://www.elasticsearch.org/tutorials/2011/04/06/too-many-open-files.html
+
+TODO
+----
+
+Lots.
+
+- Perform multicast discovery for server.
+- Better error detection.
+- Better defaults (don't index loaded-plugins, for instance).
+- diff --git a/doc/logging.rst b/doc/logging.rst index e7151b7f47..7fb4205b9a 100644 --- a/doc/logging.rst +++ b/doc/logging.rst @@ -2,7 +2,7 @@ Customizing Bro's Logging ========================== -.. class:: opening +.. rst-class:: opening Bro comes with a flexible key-value based logging interface that allows fine-grained control of what gets logged and how it is @@ -43,13 +43,14 @@ Basics ====== The data fields that a stream records are defined by a record type -specified when it is created. Let's look at the script generating -Bro's connection summaries as an example, -``base/protocols/conn/main.bro``. It defines a record ``Conn::Info`` -that lists all the fields that go into ``conn.log``, each marked with -a ``&log`` attribute indicating that it is part of the information -written out. To write a log record, the script then passes an instance -of ``Conn::Info`` to the logging framework's ``Log::write`` function. +specified when it is created. Let's look at the script generating Bro's +connection summaries as an example, +:doc:`scripts/base/protocols/conn/main`. It defines a record +:bro:type:`Conn::Info` that lists all the fields that go into +``conn.log``, each marked with a ``&log`` attribute indicating that it +is part of the information written out. To write a log record, the +script then passes an instance of :bro:type:`Conn::Info` to the logging +framework's :bro:id:`Log::write` function. By default, each stream automatically gets a filter named ``default`` that generates the normal output by recording all record fields into a @@ -62,11 +63,11 @@ to work with. Filtering --------- -To create new a new output file for an existing stream, you can add a +To create a new output file for an existing stream, you can add a new filter. A filter can, e.g., restrict the set of fields being logged: -.. code:: bro: +.. code:: bro event bro_init() { @@ -85,14 +86,15 @@ Note the fields that are set for the filter: ``path`` The filename for the output file, without any extension (which may be automatically added by the writer). Default path values - are generated by taking the stream's ID and munging it - slightly. ``Conn::LOG`` is converted into ``conn``, - ``PacketFilter::LOG`` is converted into ``packet_filter``, and - ``Notice::POLICY_LOG`` is converted into ``notice_policy``. + are generated by taking the stream's ID and munging it slightly. + :bro:enum:`Conn::LOG` is converted into ``conn``, + :bro:enum:`PacketFilter::LOG` is converted into + ``packet_filter``, and :bro:enum:`Notice::POLICY_LOG` is + converted into ``notice_policy``. ``include`` A set limiting the fields to the ones given. The names - correspond to those in the ``Conn::LOG`` record, with + correspond to those in the :bro:type:`Conn::Info` record, with sub-records unrolled by concatenating fields (separated with dots). @@ -158,10 +160,10 @@ further for example to log information by subnets or even by IP address. Be careful, however, as it is easy to create many files very quickly ... -.. sidebar: +.. sidebar:: A More Generic Path Function - The show ``split_log`` method has one draw-back: it can be used - only with the ``Conn::Log`` stream as the record type is hardcoded + The ``split_log`` method has one draw-back: it can be used + only with the :bro:enum:`Conn::LOG` stream as the record type is hardcoded into its argument list. 
However, Bro allows to do a more generic variant: @@ -193,7 +195,7 @@ predicate that will be called for each log record: Log::add_filter(Conn::LOG, filter); } -This will results in a log file ``conn-http.log`` that contains only +This will result in a log file ``conn-http.log`` that contains only traffic detected and analyzed as HTTP traffic. Extending @@ -201,8 +203,8 @@ Extending You can add further fields to a log stream by extending the record type that defines its content. Let's say we want to add a boolean -field ``is_private`` to ``Conn::Info`` that indicates whether the -originator IP address is part of the RFC1918 space: +field ``is_private`` to :bro:type:`Conn::Info` that indicates whether the +originator IP address is part of the :rfc:`1918` space: .. code:: bro @@ -234,10 +236,10 @@ Notes: - For extending logs this way, one needs a bit of knowledge about how the script that creates the log stream is organizing its state keeping. Most of the standard Bro scripts attach their log state to - the ``connection`` record where it can then be accessed, just as the - ``c$conn`` above. For example, the HTTP analysis adds a field ``http - : HTTP::Info`` to the ``connection`` record. See the script - reference for more information. + the :bro:type:`connection` record where it can then be accessed, just + as the ``c$conn`` above. For example, the HTTP analysis adds a field + ``http`` of type :bro:type:`HTTP::Info` to the :bro:type:`connection` + record. See the script reference for more information. - When extending records as shown above, the new fields must always be declared either with a ``&default`` value or as ``&optional``. @@ -251,8 +253,8 @@ Sometimes it is helpful to do additional analysis of the information being logged. For these cases, a stream can specify an event that will be generated every time a log record is written to it. All of Bro's default log streams define such an event. For example, the connection -log stream raises the event ``Conn::log_conn(rec: Conn::Info)``: You -could use that for example for flagging when an a connection to +log stream raises the event :bro:id:`Conn::log_conn`. You +could use that for example for flagging when a connection to a specific destination exceeds a certain duration: .. code:: bro @@ -267,7 +269,7 @@ specific destination exceeds a certain duration: { if ( rec$duration > 5mins ) NOTICE([$note=Long_Conn_Found, - $msg=fmt("unsually long conn to %s", rec$id$resp_h), + $msg=fmt("unusually long conn to %s", rec$id$resp_h), $id=rec$id]); } @@ -279,11 +281,32 @@ real-time. Rotation -------- +By default, no log rotation occurs, but it's globally controllable for all +filters by redefining the :bro:id:`Log::default_rotation_interval` option: + +.. code:: bro + + redef Log::default_rotation_interval = 1 hr; + +Or specifically for certain :bro:type:`Log::Filter` instances by setting +their ``interv`` field. Here's an example of changing just the +:bro:enum:`Conn::LOG` stream's default filter rotation. + +.. code:: bro + + event bro_init() + { + local f = Log::get_filter(Conn::LOG, "default"); + f$interv = 1 min; + Log::remove_filter(Conn::LOG, "default"); + Log::add_filter(Conn::LOG, f); + } + ASCII Writer Configuration -------------------------- The ASCII writer has a number of options for customizing the format of -its output, see XXX.bro. +its output, see :doc:`scripts/base/frameworks/logging/writers/ascii`. 
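+
+For example, to switch the ASCII logs to comma-separated output, the
+writer's separator option could be redefined in ``local.bro``. This is an
+illustrative sketch only; the option name ``LogAscii::separator`` is assumed
+here, so check the script reference above for the authoritative names and
+defaults:
+
+.. code:: bro
+
+    # Assumed option name; see the ASCII writer reference for details.
+    redef LogAscii::separator = ",";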
Adding Streams ============== @@ -296,7 +319,7 @@ example for the ``Foo`` module: module Foo; export { - # Create an ID for the our new stream. By convention, this is + # Create an ID for our new stream. By convention, this is # called "LOG". redef enum Log::ID += { LOG }; @@ -321,8 +344,8 @@ example for the ``Foo`` module: Log::create_stream(Foo::LOG, [$columns=Info, $ev=log_foo]); } -You can also the state to the ``connection`` record to make it easily -accessible across event handlers: +You can also add the state to the :bro:type:`connection` record to make +it easily accessible across event handlers: .. code:: bro @@ -330,7 +353,7 @@ accessible across event handlers: foo: Info &optional; } -Now you can use the ``Log::write`` method to output log records and +Now you can use the :bro:id:`Log::write` method to output log records and save the logged ``Foo::Info`` record into the connection record: .. code:: bro @@ -343,10 +366,21 @@ save the logged ``Foo::Info`` record into the connection record: } See the existing scripts for how to work with such a new connection -field. A simple example is ``base/protocols/syslog/main.bro``. +field. A simple example is :doc:`scripts/base/protocols/syslog/main`. -When you are developing scripts that add data to the ``connection`` +When you are developing scripts that add data to the :bro:type:`connection` record, care must be given to when and how long data is stored. Normally data saved to the connection record will remain there for the duration of the connection and from a practical perspective it's not uncommon to need to delete that data before the end of the connection. + +Other Writers +------------- + +Bro supports the following output formats other than ASCII: + +.. toctree:: + :maxdepth: 1 + + logging-dataseries + logging-elasticsearch diff --git a/doc/notice.rst b/doc/notice.rst index 1322d44585..5849e605a9 100644 --- a/doc/notice.rst +++ b/doc/notice.rst @@ -2,7 +2,7 @@ Notice Framework ================ -.. class:: opening +.. rst-class:: opening One of the easiest ways to customize Bro is writing a local notice policy. Bro can detect a large number of potentially interesting @@ -21,7 +21,7 @@ Let's start with a little bit of background on Bro's philosophy on reporting things. Bro ships with a large number of policy scripts which perform a wide variety of analyses. Most of these scripts monitor for activity which might be of interest for the user. However, none of these scripts determines the -importance of what it finds itself. Instead, the scripts only flags situations +importance of what it finds itself. Instead, the scripts only flag situations as *potentially* interesting, leaving it to the local configuration to define which of them are in fact actionable. This decoupling of detection and reporting allows Bro to address the different needs that sites have: @@ -29,17 +29,18 @@ definitions of what constitutes an attack or even a compromise differ quite a bit between environments, and activity deemed malicious at one site might be fully acceptable at another. -Whenever one of Bro's analysis scripts sees something potentially interesting -it flags the situation by calling the ``NOTICE`` function and giving it a -single ``Notice::Info`` record. A Notice has a ``Notice::Type``, which -reflects the kind of activity that has been seen, and it is usually also -augmented with further context about the situation. 
+Whenever one of Bro's analysis scripts sees something potentially +interesting it flags the situation by calling the :bro:see:`NOTICE` +function and giving it a single :bro:see:`Notice::Info` record. A Notice +has a :bro:see:`Notice::Type`, which reflects the kind of activity that +has been seen, and it is usually also augmented with further context +about the situation. More information about raising notices can be found in the `Raising Notices`_ section. Once a notice is raised, it can have any number of actions applied to it by -the ``Notice::policy`` set which is described in the `Notice Policy`_ +the :bro:see:`Notice::policy` set which is described in the `Notice Policy`_ section below. Such actions can be to send a mail to the configured address(es) or to simply ignore the notice. Currently, the following actions are defined: @@ -52,20 +53,20 @@ are defined: - Description * - Notice::ACTION_LOG - - Write the notice to the ``Notice::LOG`` logging stream. + - Write the notice to the :bro:see:`Notice::LOG` logging stream. * - Notice::ACTION_ALARM - - Log into the ``Notice::ALARM_LOG`` stream which will rotate + - Log into the :bro:see:`Notice::ALARM_LOG` stream which will rotate hourly and email the contents to the email address or addresses - defined in the ``Notice::mail_dest`` variable. + defined in the :bro:see:`Notice::mail_dest` variable. * - Notice::ACTION_EMAIL - Send the notice in an email to the email address or addresses given in - the ``Notice::mail_dest`` variable. + the :bro:see:`Notice::mail_dest` variable. * - Notice::ACTION_PAGE - Send an email to the email address or addresses given in the - ``Notice::mail_page_dest`` variable. + :bro:see:`Notice::mail_page_dest` variable. * - Notice::ACTION_NO_SUPPRESS - This action will disable the built in notice suppression for the @@ -82,15 +83,17 @@ Processing Notices Notice Policy ************* -The predefined set ``Notice::policy`` provides the mechanism for applying -actions and other behavior modifications to notices. Each entry of -``Notice::policy`` is a record of the type ``Notice::PolicyItem`` which -defines a condition to be matched against all raised notices and one or more -of a variety of behavior modifiers. The notice policy is defined by adding any -number of ``Notice::PolicyItem`` records to the ``Notice::policy`` set. +The predefined set :bro:see:`Notice::policy` provides the mechanism for +applying actions and other behavior modifications to notices. Each entry +of :bro:see:`Notice::policy` is a record of the type +:bro:see:`Notice::PolicyItem` which defines a condition to be matched +against all raised notices and one or more of a variety of behavior +modifiers. The notice policy is defined by adding any number of +:bro:see:`Notice::PolicyItem` records to the :bro:see:`Notice::policy` +set. Here's a simple example which tells Bro to send an email for all notices of -type ``SSH::Login`` if the server is 10.0.0.1: +type :bro:see:`SSH::Login` if the server is 10.0.0.1: .. code:: bro @@ -113,11 +116,11 @@ flexibility due to having access to Bro's full programming language. Predicate Field ^^^^^^^^^^^^^^^ -The ``Notice::PolicyItem`` record type has a field name ``$pred`` which -defines the entry's condition in the form of a predicate written as a Bro -function. The function is passed the notice as a ``Notice::Info`` record and -it returns a boolean value indicating if the entry is applicable to that -particular notice. 
+The :bro:see:`Notice::PolicyItem` record type has a field name ``$pred`` +which defines the entry's condition in the form of a predicate written +as a Bro function. The function is passed the notice as a +:bro:see:`Notice::Info` record and it returns a boolean value indicating +if the entry is applicable to that particular notice. .. note:: @@ -125,14 +128,14 @@ particular notice. (``T``) since an implicit false (``F``) value would never be used. Bro evaluates the predicates of each entry in the order defined by the -``$priority`` field in ``Notice::PolicyItem`` records. The valid values are -0-10 with 10 being earliest evaluated. If ``$priority`` is omitted, the -default priority is 5. +``$priority`` field in :bro:see:`Notice::PolicyItem` records. The valid +values are 0-10 with 10 being earliest evaluated. If ``$priority`` is +omitted, the default priority is 5. Behavior Modification Fields ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -There are a set of fields in the ``Notice::PolicyItem`` record type that +There are a set of fields in the :bro:see:`Notice::PolicyItem` record type that indicate ways that either the notice or notice processing should be modified if the predicate field (``$pred``) evaluated to true (``T``). Those fields are explained in more detail in the following table. @@ -146,8 +149,8 @@ explained in more detail in the following table. - Example * - ``$action=`` - - Each Notice::PolicyItem can have a single action applied to the notice - with this field. + - Each :bro:see:`Notice::PolicyItem` can have a single action + applied to the notice with this field. - ``$action = Notice::ACTION_EMAIL`` * - ``$suppress_for=`` @@ -162,9 +165,9 @@ explained in more detail in the following table. - This field can be used for modification of the notice policy evaluation. To stop processing of notice policy items before evaluating all of them, set this field to ``T`` and make the ``$pred`` - field return ``T``. ``Notice::PolicyItem`` records defined at a higher - priority as defined by the ``$priority`` field will still be evaluated - but those at a lower priority won't. + field return ``T``. :bro:see:`Notice::PolicyItem` records defined at + a higher priority as defined by the ``$priority`` field will still be + evaluated but those at a lower priority won't. - ``$halt = T`` @@ -186,11 +189,11 @@ Notice Policy Shortcuts Although the notice framework provides a great deal of flexibility and configurability there are many times that the full expressiveness isn't needed and actually becomes a hindrance to achieving results. The framework provides -a default ``Notice::policy`` suite as a way of giving users the +a default :bro:see:`Notice::policy` suite as a way of giving users the shortcuts to easily apply many common actions to notices. These are implemented as sets and tables indexed with a -``Notice::Type`` enum value. The following table shows and describes +:bro:see:`Notice::Type` enum value. The following table shows and describes all of the variables available for shortcut configuration of the notice framework. @@ -201,40 +204,44 @@ framework. * - Variable name - Description - * - Notice::ignored_types - - Adding a ``Notice::Type`` to this set results in the notice + * - :bro:see:`Notice::ignored_types` + - Adding a :bro:see:`Notice::Type` to this set results in the notice being ignored. It won't have any other action applied to it, not even - ``Notice::ACTION_LOG``. + :bro:see:`Notice::ACTION_LOG`. 
- * - Notice::emailed_types - - Adding a ``Notice::Type`` to this set results in - ``Notice::ACTION_EMAIL`` being applied to the notices of that type. + * - :bro:see:`Notice::emailed_types` + - Adding a :bro:see:`Notice::Type` to this set results in + :bro:see:`Notice::ACTION_EMAIL` being applied to the notices of + that type. - * - Notice::alarmed_types - - Adding a Notice::Type to this set results in - ``Notice::ACTION_ALARM`` being applied to the notices of that type. + * - :bro:see:`Notice::alarmed_types` + - Adding a :bro:see:`Notice::Type` to this set results in + :bro:see:`Notice::ACTION_ALARM` being applied to the notices of + that type. - * - Notice::not_suppressed_types - - Adding a ``Notice::Type`` to this set results in that notice no longer - undergoing the normal notice suppression that would take place. Be - careful when using this in production it could result in a dramatic - increase in the number of notices being processed. + * - :bro:see:`Notice::not_suppressed_types` + - Adding a :bro:see:`Notice::Type` to this set results in that notice + no longer undergoing the normal notice suppression that would + take place. Be careful when using this in production it could + result in a dramatic increase in the number of notices being + processed. - * - Notice::type_suppression_intervals - - This is a table indexed on ``Notice::Type`` and yielding an interval. - It can be used as an easy way to extend the default suppression - interval for an entire ``Notice::Type`` without having to create a - whole ``Notice::policy`` entry and setting the ``$suppress_for`` - field. + * - :bro:see:`Notice::type_suppression_intervals` + - This is a table indexed on :bro:see:`Notice::Type` and yielding an + interval. It can be used as an easy way to extend the default + suppression interval for an entire :bro:see:`Notice::Type` + without having to create a whole :bro:see:`Notice::policy` entry + and setting the ``$suppress_for`` field. Raising Notices --------------- -A script should raise a notice for any occurrence that a user may want to be -notified about or take action on. For example, whenever the base SSH analysis -scripts sees an SSH session where it is heuristically guessed to be a -successful login, it raises a Notice of the type ``SSH::Login``. The code in -the base SSH analysis script looks like this: +A script should raise a notice for any occurrence that a user may want +to be notified about or take action on. For example, whenever the base +SSH analysis scripts sees an SSH session where it is heuristically +guessed to be a successful login, it raises a Notice of the type +:bro:see:`SSH::Login`. The code in the base SSH analysis script looks +like this: .. code:: bro @@ -242,10 +249,10 @@ the base SSH analysis script looks like this: $msg="Heuristically detected successful SSH login.", $conn=c]); -``NOTICE`` is a normal function in the global namespace which wraps a function -within the ``Notice`` namespace. It takes a single argument of the -``Notice::Info`` record type. The most common fields used when raising notices -are described in the following table: +:bro:see:`NOTICE` is a normal function in the global namespace which +wraps a function within the ``Notice`` namespace. It takes a single +argument of the :bro:see:`Notice::Info` record type. The most common +fields used when raising notices are described in the following table: .. 
list-table:: :widths: 32 40 @@ -263,21 +270,21 @@ are described in the following table: information about this particular instance of the notice type. * - ``$sub`` - - This is a sub-message which meant for human readability but will + - This is a sub-message meant for human readability but will frequently also be used to contain data meant to be matched with the ``Notice::policy``. * - ``$conn`` - If a connection record is available when the notice is being raised - and the notice represents some attribute of the connection the + and the notice represents some attribute of the connection, then the connection record can be given here. Other fields such as ``$id`` and ``$src`` will automatically be populated from this value. * - ``$id`` - If a conn_id record is available when the notice is being raised and - the notice represents some attribute of the connection, the connection - be given here. Other fields such as ``$src`` will automatically be - populated from this value. + the notice represents some attribute of the connection, then the + connection can be given here. Other fields such as ``$src`` will + automatically be populated from this value. * - ``$src`` - If the notice represents an attribute of a single host then it's @@ -295,9 +302,10 @@ are described in the following table: * - ``$suppress_for`` - This field can be set if there is a natural suppression interval for - the notice that may be different than the default value. The value set - to this field can also be modified by a user's ``Notice::policy`` so - the value is not set permanently and unchangeably. + the notice that may be different than the default value. The + value set to this field can also be modified by a user's + :bro:see:`Notice::policy` so the value is not set permanently + and unchangeably. When writing Bro scripts which raise notices, some thought should be given to what the notice represents and what data should be provided to give a consumer @@ -305,7 +313,7 @@ of the notice the best information about the notice. If the notice is representative of many connections and is an attribute of a host (e.g. a scanning host) it probably makes most sense to fill out the ``$src`` field and not give a connection or conn_id. If a notice is representative of a -connection attribute (e.g. an apparent SSH login) the it makes sense to fill +connection attribute (e.g. an apparent SSH login) then it makes sense to fill out either ``$conn`` or ``$id`` based on the data that is available when the notice is raised. Using care when inserting data into a notice will make later analysis easier when only the data to fully represent the occurrence that @@ -325,7 +333,7 @@ The notice framework supports suppression for notices if the author of the script that is generating the notice has indicated to the notice framework how to identify notices that are intrinsically the same. Identification of these "intrinsically duplicate" notices is implemented with an optional field in -``Notice::Info`` records named ``$identifier`` which is a simple string. +:bro:see:`Notice::Info` records named ``$identifier`` which is a simple string. If the ``$identifier`` and ``$type`` fields are the same for two notices, the notice framework actually considers them to be the same thing and can use that information to suppress duplicates for a configurable period of time. @@ -337,12 +345,13 @@ information to suppress duplicates for a configurable period of time. could be completely legitimate usage if no notices could ever be considered to be duplicates. 
-The ``$identifier`` field is typically comprised of several pieces of data -related to the notice that when combined represent a unique instance of that -notice. Here is an example of the script -``policy/protocols/ssl/validate-certs.bro`` raising a notice for session -negotiations where the certificate or certificate chain did not validate -successfully against the available certificate authority certificates. +The ``$identifier`` field is typically comprised of several pieces of +data related to the notice that when combined represent a unique +instance of that notice. Here is an example of the script +:doc:`scripts/policy/protocols/ssl/validate-certs` raising a notice +for session negotiations where the certificate or certificate chain did +not validate successfully against the available certificate authority +certificates. .. code:: bro @@ -369,7 +378,7 @@ it's assumed that the script author who is raising the notice understands the full problem set and edge cases of the notice which may not be readily apparent to users. If users don't want the suppression to take place or simply want a different interval, they can always modify it with the -``Notice::policy``. +:bro:see:`Notice::policy`. Extending Notice Framework diff --git a/doc/quickstart.rst b/doc/quickstart.rst index 22523e1618..3780eb982a 100644 --- a/doc/quickstart.rst +++ b/doc/quickstart.rst @@ -1,14 +1,16 @@ .. _CMake: http://www.cmake.org .. _SWIG: http://www.swig.org +.. _Xcode: https://developer.apple.com/xcode/ .. _MacPorts: http://www.macports.org .. _Fink: http://www.finkproject.org .. _Homebrew: http://mxcl.github.com/homebrew +.. _bro downloads page: http://bro-ids.org/download/index.html ================= Quick Start Guide ================= -.. class:: opening +.. rst-class:: opening The short story for getting Bro up and running in a simple configuration for analysis of either live traffic from a network interface or a packet @@ -26,24 +28,23 @@ source code forms. Pre-Built Binary Release Packages --------------------------------- -See the `downloads page <{{docroot}}/download/index.html>`_ for currently -supported/targeted platforms. +See the `bro downloads page`_ for currently supported/targeted platforms. * RPM -.. console:: + .. console:: - > sudo yum localinstall Bro-all*.rpm + sudo yum localinstall Bro-*.rpm * DEB -.. console:: + .. console:: - > sudo gdebi Bro-all-*.deb + sudo gdebi Bro-*.deb * MacOS Disk Image with Installer - Just open the ``Bro-all-*.dmg`` and then run the ``.pkg`` installer. + Just open the ``Bro-*.dmg`` and then run the ``.pkg`` installer. Everything installed by the package will go into ``/opt/bro``. The primary install prefix for binary packages is ``/opt/bro``. @@ -56,89 +57,103 @@ Building From Source Required Dependencies ~~~~~~~~~~~~~~~~~~~~~ +The following dependencies are required to build Bro: + * RPM/RedHat-based Linux: -.. console:: + .. console:: - > sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig + sudo yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig zlib-devel file-devel * DEB/Debian-based Linux: -.. console:: + .. console:: - > sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig + sudo apt-get install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev libmagic-dev * FreeBSD Most required dependencies should come with a minimal FreeBSD install except for the following. -.. console:: + .. 
console:: - > sudo pkg_add -r cmake swig bison python + sudo pkg_add -r bash cmake swig bison python + + Note that ``bash`` needs to be in ``PATH``, which by default it is + not. The FreeBSD package installs the binary into + ``/usr/local/bin``. * Mac OS X - Snow Leopard (10.6) comes with all required dependencies except for CMake_. + Compiling source code on Macs requires first downloading Xcode_, + then going through its "Preferences..." -> "Downloads" menus to + install the "Command Line Tools" component. - Lion (10.7) comes with all required dependencies except for CMake_ and SWIG_. + Lion (10.7) and Mountain Lion (10.8) come with all required + dependencies except for CMake_, SWIG_, and ``libmagic``. - Distributions of these dependencies can be obtained from the project websites - linked above, but they're also likely available from your preferred Mac OS X - package management system (e.g. MacPorts_, Fink_, or Homebrew_). + Distributions of these dependencies can be obtained from the project + websites linked above, but they're also likely available from your + preferred Mac OS X package management system (e.g. MacPorts_, Fink_, + or Homebrew_). - Note that the MacPorts ``swig`` package may not include any specific - language support so you may need to also install ``swig-ruby`` and - ``swig-python``. + Specifically for MacPorts, the ``swig``, ``swig-ruby``, ``swig-python`` + and ``file`` packages provide the required dependencies. Optional Dependencies ~~~~~~~~~~~~~~~~~~~~~ -Bro can use libmagic for identifying file types, libGeoIP for geo-locating -IP addresses, libz for (de)compression during analysis and communication, -and sendmail for sending emails. +Bro can use libGeoIP for geo-locating IP addresses, and sendmail for +sending emails. -* RPM/RedHat-based Linux: +* RedHat Enterprise Linux: -.. console:: + .. console:: - > sudo yum install zlib-devel file-devel GeoIP-devel sendmail + sudo yum install geoip-devel sendmail + +* CentOS Linux: + + .. console:: + + sudo yum install GeoIP-devel sendmail * DEB/Debian-based Linux: -.. console:: + .. console:: - > sudo apt-get install zlib1g-dev libmagic-dev libgeoip-dev sendmail + sudo apt-get install libgeoip-dev sendmail * Ports-based FreeBSD -.. console:: + .. console:: - > sudo pkg_add -r GeoIP + sudo pkg_add -r GeoIP - libz, libmagic, and sendmail are typically already available. + sendmail is typically already available. * Mac OS X Vanilla OS X installations don't ship with libmagic or libGeoIP, but if installed from your preferred package management system (e.g. MacPorts, - Fink Homebrew), they should be automatically detected and Bro will compile + Fink, or Homebrew), they should be automatically detected and Bro will compile against them. -Additional steps may be needed to `get the right GeoIP database -<{{git('bro:doc/geoip.rst')}}>`_. +Additional steps may be needed to :doc:`get the right GeoIP database ` Compiling Bro Source Code ~~~~~~~~~~~~~~~~~~~~~~~~~ Bro releases are bundled into source packages for convenience and -available from the `downloads page <{{docroot}}/download/index.html>`_. +available from the `bro downloads page`_. -The latest Bro development versions are obtainable through git repositories -hosted at `git.bro-ids.org `_. 
See our `git development -documentation <{{docroot}}/development/process.html>`_ for comprehensive -information on Bro's use of git revision control, but the short story for -downloading the full source code experience for Bro via git is: +The latest Bro development versions are obtainable through git +repositories hosted at `git.bro-ids.org `_. See +our `git development documentation +`_ for comprehensive +information on Bro's use of git revision control, but the short story +for downloading the full source code experience for Bro via git is: .. console:: @@ -146,8 +161,8 @@ downloading the full source code experience for Bro via git is: .. note:: If you choose to clone the ``bro`` repository non-recursively for a "minimal Bro experience", be aware that compiling it depends on - BinPAC, which has it's own ``binpac`` repository. Either install it - first or initizalize/update the cloned ``bro`` repository's + BinPAC, which has its own ``binpac`` repository. Either install it + first or initialize/update the cloned ``bro`` repository's ``aux/binpac`` submodule. See the ``INSTALL`` file included with the source code for more information @@ -157,9 +172,9 @@ desired root install path): .. console:: - > ./configure --prefix=/desired/install/path - > make - > make install + ./configure --prefix=/desired/install/path + make + make install The default installation prefix is ``/usr/local/bro``, which would typically require root privileges when doing the ``make install``. @@ -174,13 +189,13 @@ Bourne-Shell Syntax: .. console:: - > export PATH=/usr/local/bro/bin:$PATH + export PATH=/usr/local/bro/bin:$PATH C-Shell Syntax: .. console:: - > setenv PATH /usr/local/bro/bin:$PATH + setenv PATH /usr/local/bro/bin:$PATH Or substitute ``/opt/bro/bin`` instead if you installed from a binary package. @@ -191,7 +206,7 @@ BroControl is an interactive shell for easily operating/managing Bro installations on a single system or even across multiple systems in a traffic-monitoring cluster. -.. note:: Below, ``$PREFIX``, is used to reference the Bro installation +.. note:: Below, ``$PREFIX`` is used to reference the Bro installation root directory. A Minimal Starting Configuration @@ -211,7 +226,7 @@ Now start the BroControl shell like: .. console:: - > broctl + broctl Since this is the first-time use of the shell, perform an initial installation of the BroControl configuration: @@ -233,10 +248,9 @@ policy and output the results in ``$PREFIX/logs``. .. note:: The user starting BroControl needs permission to capture network traffic. If you are not root, you may need to grant further - privileges to the account you're using; see the `FAQ - <{{docroot}}/documentation/faq.html>`_. Also, if it - looks like Bro is not seeing any traffic, check out the FAQ entry - checksum offloading. + privileges to the account you're using; see the :doc:`FAQ `. + Also, if it looks like Bro is not seeing any traffic, check out + the FAQ entry on checksum offloading. You can leave it running for now, but to stop this Bro instance you would do: @@ -244,9 +258,7 @@ You can leave it running for now, but to stop this Bro instance you would do: [BroControl] > stop -We also recommend to insert the following entry into `crontab`: - -.. console:: +We also recommend to insert the following entry into `crontab`:: 0-59/5 * * * * $PREFIX/bin/broctl cron @@ -373,9 +385,7 @@ the variable's value may not change at run-time, but whose initial value can be modified via the ``redef`` operator at parse-time. 
So let's continue on our path to modify the behavior for the two SSL -and SSH notices. Looking at -`$PREFIX/share/bro/base/frameworks/notice/main.bro -<{{autodoc_bro_scripts}}/scripts/base/frameworks/notice/main.html>`_, +and SSH notices. Looking at :doc:`scripts/base/frameworks/notice/main`, we see that it advertises: .. code:: bro @@ -449,7 +459,7 @@ that only takes the email action for SSH logins to a defined set of servers: ] }; -You'll just have to trust the syntax for now, but what we've done is first +You'll just have to trust the syntax for now, but what we've done is first declare our own variable to hold a set of watched addresses, ``watched_servers``; then added a record to the policy that will generate an email on the condition that the predicate function evaluates to true, which @@ -477,8 +487,7 @@ tweak the most basic options. Here's some suggestions on what to explore next: * Reading the code of scripts that ship with Bro is also a great way to gain understanding of the language and how you can start writing your own custom analysis. -* Review the `FAQ <{{docroot}}/documentation/faq.html>`_. -* Check out more `documentation <{{docroot}}/documentation/index.html>`_. +* Review the :doc:`FAQ `. * Continue reading below for another mini-tutorial on using Bro as a standalone command-line utility. @@ -496,7 +505,7 @@ Analyzing live traffic from an interface is simple: .. console:: - > bro -i en0 + bro -i en0 ``en0`` can be replaced by the interface of your choice and for the list of scripts, you can just use "all" for now to perform all the default analysis @@ -504,7 +513,7 @@ that's available. Bro will output log files into the working directory. -.. note:: The `FAQ <{{docroot}}/documentation/faq.html>`_ entries about +.. note:: The :doc:`FAQ ` entries about capturing as an unprivileged user and checksum offloading are particularly relevant at this point. @@ -513,7 +522,7 @@ command-line: .. console:: - > bro -i en0 local + bro -i en0 local This will cause Bro to print a warning about lacking the ``Site::local_nets`` variable being configured. You can supply this @@ -522,7 +531,7 @@ in place of the example subnets): .. console:: - > bro -r mypackets.trace local "Site::local_nets += { 1.2.3.0/24, 5.6.7.0/24 }" + bro -r mypackets.trace local "Site::local_nets += { 1.2.3.0/24, 5.6.7.0/24 }" Reading Packet Capture (pcap) Files @@ -533,7 +542,7 @@ like this: .. console:: - > sudo tcpdump -i en0 -s 0 -w mypackets.trace + sudo tcpdump -i en0 -s 0 -w mypackets.trace Where ``en0`` can be replaced by the correct interface for your system as shown by e.g. ``ifconfig``. (The ``-s 0`` argument tells it to capture @@ -544,7 +553,7 @@ and tell Bro to perform all the default analysis on the capture which primarily .. console:: - > bro -r mypackets.trace + bro -r mypackets.trace Bro will output log files into the working directory. @@ -553,7 +562,7 @@ script that we include as a suggested configuration: .. console:: - > bro -r mypackets.trace local + bro -r mypackets.trace local Telling Bro Which Scripts to Load @@ -563,7 +572,7 @@ A command-line invocation of Bro typically looks like: .. console:: - > bro + bro Where the last arguments are the specific policy scripts that this Bro instance will load. These arguments don't have to include the ``.bro`` @@ -578,7 +587,7 @@ logging) and adds SSL certificate validation. .. 
console:: - > bro -r mypackets.trace protocols/ssl/validate-certs + bro -r mypackets.trace protocols/ssl/validate-certs You might notice that a script you load from the command line uses the ``@load`` directive in the Bro language to declare dependence on other scripts. diff --git a/doc/reporting-problems.rst b/doc/reporting-problems.rst new file mode 100644 index 0000000000..5e55b2ac90 --- /dev/null +++ b/doc/reporting-problems.rst @@ -0,0 +1,194 @@ + +Reporting Problems +================== + +.. rst-class:: opening + + Here we summarize some steps to follow when you see Bro doing + something it shouldn't. To provide help, it is often crucial for + us to have a way of reliably reproducing the effect you're seeing. + Unfortunately, reproducing problems can be rather tricky with Bro + because more often than not, they occur only in either very rare + situations or only after Bro has been running for some time. In + particular, getting a small trace showing a specific effect can be + a real problem. In the following, we'll summarize some strategies + to this end. + +Reporting Problems +------------------ + +Generally, when you encounter a problem with Bro, the best thing to do +is opening a new ticket in `Bro's issue tracker +`__ and include information on how to +reproduce the issue. Ideally, your ticket should come with the +following: + +* The Bro version you're using (if working directly from the git + repository, the branch and revision number.) + +* The output you're seeing along with a description of what you'd expect + Bro to do instead. + +* A *small* trace in `libpcap format `__ + demonstrating the effect (assuming the problem doesn't happen right + at startup already). + +* The exact command-line you're using to run Bro with that trace. If + you can, please try to run the Bro binary directly from the command + line rather than using BroControl. + +* Any non-standard scripts you're using (but please only those really + necessary; just a small code snippet triggering the problem would + be perfect). + +* If you encounter a crash, information from the core dump, such as + the stack backtrace, can be very helpful. See below for more on + this. + + +How Do I Get a Trace File? +-------------------------- + +As Bro is usually running live, coming up with a small trace file that +reproduces a problem can turn out to be quite a challenge. Often it +works best to start with a large trace that triggers the problem, +and then successively thin it out as much as possible. + +To get to the initial large trace, here are a few things you can try: + +* Capture a trace with `tcpdump `__, either + on the same interface Bro is running on, or on another host where + you can generate traffic of the kind likely triggering the problem + (e.g., if you're seeing problems with the HTTP analyzer, record some + of your Web browsing on your desktop.) When using tcpdump, don't + forget to record *complete* packets (``tcpdump -s 0 ...``). You can + reduce the amount of traffic captured by using a suitable BPF filter + (e.g., for HTTP only, try ``port 80``). + +* Bro's command-line option ``-w `` records all packets it + processes into the given file. You can then later run Bro + offline on this trace and it will process the packets in the same + way as it did live. This is particularly helpful with problems that + only occur after Bro has already been running for some time. For + example, sometimes a crash may be triggered by a particular kind of + traffic only occurring rarely. 
Running Bro live with ``-w`` and
+ then, after the crash, offline on the recorded trace might, with a
+ little bit of luck, reproduce the problem reliably. However, be
+ careful with ``-w``: it can result in huge trace files, quickly
+ filling up your disk. (One way to mitigate the space issues is to
+ periodically delete the trace file by configuring
+ ``rotate-logs.bro`` accordingly. BroControl does that for you if you
+ set its ``SaveTraces`` option.)
+
+* Finally, you can try running Bro on a publicly available trace
+ file, such as `anonymized FTP traffic `__, `headers-only enterprise traffic
+ `__, or
+ `Defcon traffic `__. Some of these
+ particularly stress certain components of Bro (e.g., the Defcon
+ traces contain tons of scans).
+
+Once you have a trace that demonstrates the effect, you will often
+notice that it's pretty big, in particular if recorded from the link
+you're monitoring. Therefore, the next step is to shrink its size as
+much as possible. Here are a few things you can try to this end:
+
+* Very often, a single connection is able to demonstrate the problem.
+ If you can identify which one it is (e.g., from one of Bro's
+ ``*.log`` files) you can extract the connection's packets from the
+ trace using tcpdump by filtering for the corresponding 4-tuple of
+ addresses and ports:
+
+ .. console::
+
+ > tcpdump -r large.trace -w small.trace host <ip1> and port <port1> and host <ip2> and port <port2>
+
+* If you can't reduce the problem to a connection, try to identify
+ either a host pair or a single host triggering it, and filter down
+ the trace accordingly.
+
+* You can try to extract a smaller time slice from the trace using
+ `TCPslice `__. For example, to
+ extract the first 100 seconds from the trace:
+
+ .. console::
+
+ > tcpslice +100 out
+
+Alternatively, tcpdump extracts the first ``n`` packets with its
+option ``-c <n>``.
+
+
+Getting More Information After a Crash
+--------------------------------------
+
+If Bro crashes, a *core dump* can be very helpful to nail down the
+problem. Examining a core is not for the faint of heart but can reveal
+extremely useful information.
+
+First, you should configure Bro with the option ``--enable-debug`` and
+recompile; this will disable all compiler optimizations and thus make
+the core dump more useful (don't expect great performance with this
+version though; compiling Bro without optimization has a noticeable
+impact on its CPU usage). Then enable core dumps if you haven't
+already (e.g., ``ulimit -c unlimited`` if you're using bash).
+
+Once Bro has crashed, start gdb with the Bro binary and the file
+containing the core dump. (Alternatively, you can also run Bro
+directly inside gdb instead of working from a core file.) The first
+helpful information to include with your tracker ticket is a stack
+backtrace, which you get with gdb's ``bt`` command:
+
+.. console::
+
+ > gdb bro core
+ [...]
+ > bt
+
+
+If the crash occurs inside Bro's script interpreter, the next thing to
+do is to identify the line of script code processed just before the
+abnormal termination. Look for methods in the stack backtrace which
+belong to any of the script interpreter's classes. Roughly speaking,
+these are all classes with names ending in ``Expr``, ``Stmt``, or
+``Val``. Then climb up the stack with ``up`` until you reach the first
+of these methods. The object to which ``this`` is pointing will have a
+``Location`` object, which in turn contains the file name and line
+number of the corresponding piece of script code.
Continuing the +example from above, here's how to get that information: + +.. console:: + + [in gdb] + > up + > ... + > up + > print this->location->filename + > print this->location->first_line + + +If the crash occurs while processing input packets but you cannot +directly tell which connection is responsible (and thus not extract +its packets from the trace as suggested above), try getting the +4-tuple of the connection currently being processed from the core dump +by again examining the stack backtrace, this time looking for methods +belonging to the ``Connection`` class. That class has members +``orig_addr``/``resp_addr`` and ``orig_port``/``resp_port`` storing +(pointers to) the IP addresses and ports respectively: + +.. console:: + + [in gdb] + > up + > ... + > up + > printf "%08x:%04x %08x:%04x\n", *this->orig_addr, this->orig_port, *this->resp_addr, this->resp_port + + +Note that these values are stored in `network byte order +`__ +so you will need to flip the bytes around if you are on a low-endian +machine (which is why the above example prints them in hex). For +example, if an IP address prints as ``0100007f`` , that's 127.0.0.1 . + diff --git a/doc/scripts/CMakeLists.txt b/doc/scripts/CMakeLists.txt index 1cf2f768ec..33d473b005 100644 --- a/doc/scripts/CMakeLists.txt +++ b/doc/scripts/CMakeLists.txt @@ -1,16 +1,3 @@ -set(BIF_SRC_DIR ${PROJECT_SOURCE_DIR}/src) -set(RST_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/rest_output) -set(DOC_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/out) -set(DOC_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/source) -set(DOC_SOURCE_WORKDIR ${CMAKE_CURRENT_BINARY_DIR}/source) - -file(GLOB_RECURSE DOC_SOURCES FOLLOW_SYMLINKS "*") - -# configure the Sphinx config file (expand variables CMake might know about) -configure_file(${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in - ${CMAKE_CURRENT_BINARY_DIR}/conf.py - @ONLY) - # find out what BROPATH to use when executing bro execute_process(COMMAND ${CMAKE_BINARY_DIR}/bro-path-dev OUTPUT_VARIABLE BROPATH @@ -48,7 +35,7 @@ endif () # which summary text can be extracted at build time # ${group}_doc_names: a running list of reST style document names that can be # given to a :doc: role, shared indices with ${group}_files -# + macro(REST_TARGET srcDir broInput) set(absSrcPath ${srcDir}/${broInput}) get_filename_component(basename ${broInput} NAME) @@ -86,12 +73,14 @@ macro(REST_TARGET srcDir broInput) elseif (${extension} STREQUAL ".bif.bro") set(group bifs) elseif (relDstDir) - set(pkgIndex scripts/${relDstDir}/index) - set(group ${pkgIndex}) + set(group ${relDstDir}/index) # add package index to master package list if not already in it - list(FIND MASTER_PKG_LIST ${pkgIndex} _found) + # and if a __load__.bro exists in the original script directory + list(FIND MASTER_PKG_LIST ${relDstDir} _found) if (_found EQUAL -1) - list(APPEND MASTER_PKG_LIST ${pkgIndex}) + if (EXISTS ${CMAKE_SOURCE_DIR}/scripts/${relDstDir}/__load__.bro) + list(APPEND MASTER_PKG_LIST ${relDstDir}) + endif () endif () else () set(group "") @@ -134,6 +123,7 @@ macro(REST_TARGET srcDir broInput) ARGS -rf .state *.log *.rst DEPENDS bro DEPENDS ${absSrcPath} + WORKING_DIRECTORY ${CMAKE_BINARY_DIR} COMMENT "[Bro] Generating reST docs for ${broInput}" ) @@ -143,19 +133,21 @@ endmacro(REST_TARGET) include(DocSourcesList.cmake) # create temporary list of all docs to include in the master policy/index file -set(MASTER_POLICY_INDEX ${CMAKE_CURRENT_BINARY_DIR}/policy_index) file(WRITE ${MASTER_POLICY_INDEX} "${MASTER_POLICY_INDEX_TEXT}") # create the temporary list of all 
packages to include in the master # policy/packages.rst file -set(MASTER_PACKAGE_INDEX ${CMAKE_CURRENT_BINARY_DIR}/pkg_index) set(MASTER_PKG_INDEX_TEXT "") foreach (pkg ${MASTER_PKG_LIST}) - # strip of the trailing /index for the link name - get_filename_component(lnktxt ${pkg} PATH) - # pretty-up the link name by removing common scripts/ prefix - string(REPLACE "scripts/" "" lnktxt "${lnktxt}") - set(MASTER_PKG_INDEX_TEXT "${MASTER_PKG_INDEX_TEXT}\n ${lnktxt} <${pkg}>") + set(MASTER_PKG_INDEX_TEXT + "${MASTER_PKG_INDEX_TEXT}\n:doc:`${pkg} <${pkg}/index>`\n") + if (EXISTS ${CMAKE_SOURCE_DIR}/scripts/${pkg}/README) + file(STRINGS ${CMAKE_SOURCE_DIR}/scripts/${pkg}/README pkgreadme) + foreach (line ${pkgreadme}) + set(MASTER_PKG_INDEX_TEXT "${MASTER_PKG_INDEX_TEXT}\n ${line}") + endforeach () + set(MASTER_PKG_INDEX_TEXT "${MASTER_PKG_INDEX_TEXT}\n") + endif () endforeach () file(WRITE ${MASTER_PACKAGE_INDEX} "${MASTER_PKG_INDEX_TEXT}") @@ -186,11 +178,11 @@ if (EXISTS ${RST_OUTPUT_DIR}) list(FIND ALL_REST_OUTPUTS ${_doc} _found) if (_found EQUAL -1) file(REMOVE ${_doc}) - message(STATUS "AutoDoc: remove stale reST doc: ${_doc}") + message(STATUS "Broxygen: remove stale reST doc: ${_doc}") string(REPLACE .rst .bro _brofile ${_doc}) if (EXISTS ${_brofile}) file(REMOVE ${_brofile}) - message(STATUS "AutoDoc: remove stale bro source: ${_brofile}") + message(STATUS "Broxygen: remove stale bro source: ${_brofile}") endif () endif () endforeach () @@ -211,57 +203,3 @@ add_custom_target(restclean COMMAND "${CMAKE_COMMAND}" -E remove_directory ${RST_OUTPUT_DIR} VERBATIM) - -# The "sphinxdoc" target generates reST documentation for any outdated bro -# scripts and then uses Sphinx to generate HTML documentation from the reST -add_custom_target(sphinxdoc - # copy the template documentation to the build directory - # to give as input for sphinx - COMMAND "${CMAKE_COMMAND}" -E copy_directory - ${DOC_SOURCE_DIR} - ${DOC_SOURCE_WORKDIR} - # copy generated policy script documentation into the - # working copy of the template documentation - COMMAND "${CMAKE_COMMAND}" -E copy_directory - ${RST_OUTPUT_DIR} - ${DOC_SOURCE_WORKDIR}/scripts - # append to the master index of all policy scripts - COMMAND cat ${MASTER_POLICY_INDEX} >> - ${DOC_SOURCE_WORKDIR}/scripts/index.rst - # append to the master index of all policy packages - COMMAND cat ${MASTER_PACKAGE_INDEX} >> - ${DOC_SOURCE_WORKDIR}/packages.rst - # construct a reST file for each group - COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/group_index_generator.py - ${CMAKE_CURRENT_BINARY_DIR}/group_list - ${CMAKE_CURRENT_BINARY_DIR} - ${DOC_SOURCE_WORKDIR} - # tell sphinx to generate html - COMMAND sphinx-build - -b html - -c ${CMAKE_CURRENT_BINARY_DIR} - -d ${DOC_OUTPUT_DIR}/doctrees - ${DOC_SOURCE_WORKDIR} - ${DOC_OUTPUT_DIR}/html - # create symlink to the html output directory for convenience - COMMAND "${CMAKE_COMMAND}" -E create_symlink - ${DOC_OUTPUT_DIR}/html - ${CMAKE_BINARY_DIR}/html - WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR} - COMMENT "[Sphinx] Generating HTML policy script docs" - # SOURCES just adds stuff to IDE projects as a convenience - SOURCES ${DOC_SOURCES}) - -# The "sphinxclean" target removes just the Sphinx input/output directories -# from the build directory. 
-add_custom_target(sphinxclean - COMMAND "${CMAKE_COMMAND}" -E remove_directory - ${DOC_SOURCE_WORKDIR} - COMMAND "${CMAKE_COMMAND}" -E remove_directory - ${DOC_OUTPUT_DIR} - VERBATIM) - -add_dependencies(sphinxdoc sphinxclean restdoc) - -add_dependencies(doc sphinxdoc) -add_dependencies(docclean sphinxclean restclean) diff --git a/doc/scripts/DocSourcesList.cmake b/doc/scripts/DocSourcesList.cmake index 9d99effc02..b127e1526d 100644 --- a/doc/scripts/DocSourcesList.cmake +++ b/doc/scripts/DocSourcesList.cmake @@ -19,6 +19,7 @@ rest_target(${psd} base/init-bare.bro internal) rest_target(${CMAKE_BINARY_DIR}/src base/bro.bif.bro) rest_target(${CMAKE_BINARY_DIR}/src base/const.bif.bro) rest_target(${CMAKE_BINARY_DIR}/src base/event.bif.bro) +rest_target(${CMAKE_BINARY_DIR}/src base/input.bif.bro) rest_target(${CMAKE_BINARY_DIR}/src base/logging.bif.bro) rest_target(${CMAKE_BINARY_DIR}/src base/reporter.bif.bro) rest_target(${CMAKE_BINARY_DIR}/src base/strings.bif.bro) @@ -31,10 +32,18 @@ rest_target(${psd} base/frameworks/cluster/setup-connections.bro) rest_target(${psd} base/frameworks/communication/main.bro) rest_target(${psd} base/frameworks/control/main.bro) rest_target(${psd} base/frameworks/dpd/main.bro) +rest_target(${psd} base/frameworks/input/main.bro) +rest_target(${psd} base/frameworks/input/readers/ascii.bro) +rest_target(${psd} base/frameworks/input/readers/benchmark.bro) +rest_target(${psd} base/frameworks/input/readers/raw.bro) rest_target(${psd} base/frameworks/intel/main.bro) rest_target(${psd} base/frameworks/logging/main.bro) rest_target(${psd} base/frameworks/logging/postprocessors/scp.bro) +rest_target(${psd} base/frameworks/logging/postprocessors/sftp.bro) rest_target(${psd} base/frameworks/logging/writers/ascii.bro) +rest_target(${psd} base/frameworks/logging/writers/dataseries.bro) +rest_target(${psd} base/frameworks/logging/writers/elasticsearch.bro) +rest_target(${psd} base/frameworks/logging/writers/none.bro) rest_target(${psd} base/frameworks/metrics/cluster.bro) rest_target(${psd} base/frameworks/metrics/main.bro) rest_target(${psd} base/frameworks/metrics/non-cluster.bro) @@ -52,12 +61,15 @@ rest_target(${psd} base/frameworks/packet-filter/netstats.bro) rest_target(${psd} base/frameworks/reporter/main.bro) rest_target(${psd} base/frameworks/signatures/main.bro) rest_target(${psd} base/frameworks/software/main.bro) +rest_target(${psd} base/frameworks/tunnels/main.bro) rest_target(${psd} base/protocols/conn/contents.bro) rest_target(${psd} base/protocols/conn/inactivity.bro) rest_target(${psd} base/protocols/conn/main.bro) +rest_target(${psd} base/protocols/conn/polling.bro) rest_target(${psd} base/protocols/dns/consts.bro) rest_target(${psd} base/protocols/dns/main.bro) rest_target(${psd} base/protocols/ftp/file-extract.bro) +rest_target(${psd} base/protocols/ftp/gridftp.bro) rest_target(${psd} base/protocols/ftp/main.bro) rest_target(${psd} base/protocols/ftp/utils-commands.bro) rest_target(${psd} base/protocols/http/file-extract.bro) @@ -70,6 +82,8 @@ rest_target(${psd} base/protocols/irc/main.bro) rest_target(${psd} base/protocols/smtp/entities-excerpt.bro) rest_target(${psd} base/protocols/smtp/entities.bro) rest_target(${psd} base/protocols/smtp/main.bro) +rest_target(${psd} base/protocols/socks/consts.bro) +rest_target(${psd} base/protocols/socks/main.bro) rest_target(${psd} base/protocols/ssh/main.bro) rest_target(${psd} base/protocols/ssl/consts.bro) rest_target(${psd} base/protocols/ssl/main.bro) @@ -102,6 +116,7 @@ rest_target(${psd} 
policy/misc/analysis-groups.bro) rest_target(${psd} policy/misc/capture-loss.bro) rest_target(${psd} policy/misc/loaded-scripts.bro) rest_target(${psd} policy/misc/profiling.bro) +rest_target(${psd} policy/misc/stats.bro) rest_target(${psd} policy/misc/trim-trace-file.bro) rest_target(${psd} policy/protocols/conn/known-hosts.bro) rest_target(${psd} policy/protocols/conn/known-services.bro) @@ -133,6 +148,7 @@ rest_target(${psd} policy/protocols/ssl/known-certs.bro) rest_target(${psd} policy/protocols/ssl/validate-certs.bro) rest_target(${psd} policy/tuning/defaults/packet-fragments.bro) rest_target(${psd} policy/tuning/defaults/warnings.bro) +rest_target(${psd} policy/tuning/logs-to-elasticsearch.bro) rest_target(${psd} policy/tuning/track-all-assets.bro) rest_target(${psd} site/local-manager.bro) rest_target(${psd} site/local-proxy.bro) diff --git a/doc/scripts/README b/doc/scripts/README index ca7f28492b..a15812609c 100644 --- a/doc/scripts/README +++ b/doc/scripts/README @@ -1,6 +1,6 @@ This directory contains scripts and templates that can be used to automate the generation of Bro script documentation. Several build targets are defined -by CMake: +by CMake and available in the top-level Makefile: ``restdoc`` @@ -15,29 +15,10 @@ by CMake: ``build/`` directory inside ``reST`` (a symlink to ``doc/scripts/rest_output``). -``doc`` - - This target depends on a Python interpreter (>=2.5) and - `Sphinx `_ being installed. Sphinx can be - installed like:: - - > sudo easy_install sphinx - - This target will first build ``restdoc`` target and then copy the - resulting reST files as an input directory to Sphinx. - - After completion, HTML documentation can be located in the CMake - ``build/`` directory inside ``html`` (a symlink to - ``doc/scripts/out/html``) - ``restclean`` This target removes any reST documentation that has been generated so far. -``docclean`` - - This target removes Sphinx inputs and outputs from the CMake ``build/`` dir. - The ``genDocSourcesList.sh`` script can be run to automatically generate ``DocSourcesList.cmake``, which is the file CMake uses to define the list of documentation targets. This script should be run after adding new @@ -54,18 +35,10 @@ script's name to the blacklist, then append a ``rest_target()`` to the ``statictext`` variable where the first argument is the source directory containing the policy script to document, the second argument is the file name of the policy script, and the third argument is the path/name of a -pre-created reST document in the ``source/`` directory to which the +pre-created reST document in the ``../`` source directory to which the ``make doc`` process can append script documentation references. This pre-created reST document should also then be linked to from the TOC tree -in ``source/index.rst``. - -The Sphinx source tree template in ``source/`` can be modified to add more -common/general documentation, style sheets, JavaScript, etc. The Sphinx -config file is produced from ``conf.py.in``, so that can be edited to change -various Sphinx options, like setting the default HTML rendering theme. -There is also a custom Sphinx domain implemented in ``source/ext/bro.py`` -which adds some reST directives and roles that aid in generating useful -index entries and cross-references. +in ``../index.rst``. See ``example.bro`` for an example of how to document a Bro script such that ``make doc`` will be able to produce reST/HTML documentation for it. 
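For instance, a minimal sketch of that documentation-comment style (the
module and identifier names are made up for illustration; ``example.bro``
is the authoritative reference):

.. code:: bro

    ##! One-line summary of what this script provides; ``##!`` comments
    ##! end up in the generated reST document's summary section.

    module Demo;

    export {
        ## A ``##`` comment documents the identifier declared right after it.
        const some_option = T &redef;
    }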
diff --git a/doc/scripts/bifs.rst b/doc/scripts/bifs.rst new file mode 100644 index 0000000000..eaae0e13b8 --- /dev/null +++ b/doc/scripts/bifs.rst @@ -0,0 +1,5 @@ +.. This is a stub doc to which broxygen appends during the build process + +Built-In Functions (BIFs) +========================= + diff --git a/doc/scripts/builtins.rst b/doc/scripts/builtins.rst new file mode 100644 index 0000000000..d274de6b7b --- /dev/null +++ b/doc/scripts/builtins.rst @@ -0,0 +1,638 @@ +Builtin Types and Attributes +============================ + +Types +----- + +The Bro scripting language supports the following built-in types. + +.. bro:type:: void + + An internal Bro type representing an absence of a type. Should + most often be seen as a possible function return type. + +.. bro:type:: bool + + Reflects a value with one of two meanings: true or false. The two + ``bool`` constants are ``T`` and ``F``. + +.. bro:type:: int + + A numeric type representing a signed integer. An ``int`` constant + is a string of digits preceded by a ``+`` or ``-`` sign, e.g. + ``-42`` or ``+5``. When using type inferencing use care so that the + intended type is inferred, e.g. ``local size_difference = 0`` will + infer :bro:type:`count`, while ``local size_difference = +0`` + will infer :bro:type:`int`. + +.. bro:type:: count + + A numeric type representing an unsigned integer. A ``count`` + constant is a string of digits, e.g. ``1234`` or ``0``. + +.. bro:type:: counter + + An alias to :bro:type:`count`. + +.. TODO: is there anything special about this type? + +.. bro:type:: double + + A numeric type representing a double-precision floating-point + number. Floating-point constants are written as a string of digits + with an optional decimal point, optional scale-factor in scientific + notation, and optional ``+`` or ``-`` sign. Examples are ``-1234``, + ``-1234e0``, ``3.14159``, and ``.003e-23``. + +.. bro:type:: time + + A temporal type representing an absolute time. There is currently + no way to specify a ``time`` constant, but one can use the + :bro:id:`current_time` or :bro:id:`network_time` built-in functions + to assign a value to a ``time``-typed variable. + +.. bro:type:: interval + + A temporal type representing a relative time. An ``interval`` + constant can be written as a numeric constant followed by a time + unit where the time unit is one of ``usec``, ``msec``, ``sec``, ``min``, + ``hr``, or ``day`` which respectively represent microseconds, milliseconds, + seconds, minutes, hours, and days. Whitespace between the numeric + constant and time unit is optional. Appending the letter "s" to the + time unit in order to pluralize it is also optional (to no semantic + effect). Examples of ``interval`` constants are ``3.5 min`` and + ``3.5mins``. An ``interval`` can also be negated, for example ``- + 12 hr`` represents "twelve hours in the past". Intervals also + support addition, subtraction, multiplication, division, and + comparison operations. + +.. bro:type:: string + + A type used to hold character-string values which represent text. + String constants are created by enclosing text in double quotes (") + and the backslash character (\\) introduces escape sequences. + + Note that Bro represents strings internally as a count and vector of + bytes rather than a NUL-terminated byte string (although string + constants are also automatically NUL-terminated). This is because + network traffic can easily introduce NULs into strings either by + nature of an application, inadvertently, or maliciously. 
And while + NULs are allowed in Bro strings, when present in strings passed as + arguments to many functions, a run-time error can occur as their + presence likely indicates a sort of problem. In that case, the + string will also only be represented to the user as the literal + "" string. + +.. bro:type:: pattern + + A type representing regular-expression patterns which can be used + for fast text-searching operations. Pattern constants are created + by enclosing text within forward slashes (/) and is the same syntax + as the patterns supported by the `flex lexical analyzer + `_. The speed of + regular expression matching does not depend on the complexity or + size of the patterns. Patterns support two types of matching, exact + and embedded. + + In exact matching the ``==`` equality relational operator is used + with one :bro:type:`pattern` operand and one :bro:type:`string` + operand (order of operands does not matter) to check whether the full + string exactly matches the pattern. In exact matching, the ``^`` + beginning-of-line and ``$`` end-of-line anchors are redundant since + the pattern is implicitly anchored to the beginning and end of the + line to facilitate an exact match. For example:: + + /foo|bar/ == "foo" + + yields true, while:: + + /foo|bar/ == "foobar" + + yields false. The ``!=`` operator would yield the negation of ``==``. + + In embedded matching the ``in`` operator is used with one + :bro:type:`pattern` operand (which must be on the left-hand side) and + one :bro:type:`string` operand, but tests whether the pattern + appears anywhere within the given string. For example:: + + /foo|bar/ in "foobar" + + yields true, while:: + + /^oob/ in "foobar" + + is false since "oob" does not appear at the start of "foobar". The + ``!in`` operator would yield the negation of ``in``. + +.. bro:type:: enum + + A type allowing the specification of a set of related values that + have no further structure. The only operations allowed on + enumerations are equality comparisons and they do not have + associated values or ordering. An example declaration: + + .. code:: bro + + type color: enum { Red, White, Blue, }; + + The last comma after ``Blue`` is optional. + +.. bro:type:: timer + +.. TODO: is this a type that's exposed to users? + +.. bro:type:: port + + A type representing transport-level port numbers. Besides TCP and + UDP ports, there is a concept of an ICMP "port" where the source + port is the ICMP message type and the destination port the ICMP + message code. A ``port`` constant is written as an unsigned integer + followed by one of ``/tcp``, ``/udp``, ``/icmp``, or ``/unknown``. + + Ports can be compared for equality and also for ordering. When + comparing order across transport-level protocols, ``unknown`` < + ``tcp`` < ``udp`` < ``icmp``, for example ``65535/tcp`` is smaller + than ``0/udp``. + +.. bro:type:: addr + + A type representing an IP address. + + IPv4 address constants are written in "dotted quad" format, + ``A1.A2.A3.A4``, where Ai all lie between 0 and 255. + + IPv6 address constants are written as colon-separated hexadecimal form + as described by :rfc:`2373`, but additionally encased in square brackets. + The mixed notation with embedded IPv4 addresses as dotted-quads in the + lower 32 bits is also allowed. + Some examples: ``[2001:db8::1]``, ``[::ffff:192.168.1.100]``, or + ``[aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222]``. 
+
+ Hostname constants can also be used, but since a hostname can
+ correspond to multiple IP addresses, the type of such a variable is a
+ :bro:type:`set` of :bro:type:`addr` elements. For example:
+
+ .. code:: bro
+
+ local a = www.google.com;
+
+ Addresses can be compared for (in)equality using ``==`` and ``!=``.
+ They can also be masked with ``/`` to produce a :bro:type:`subnet`:
+
+ .. code:: bro
+
+ local a: addr = 192.168.1.100;
+ local s: subnet = 192.168.0.0/16;
+ if ( a/16 == s )
+ print "true";
+
+ And checked for inclusion within a :bro:type:`subnet` using ``in``:
+
+ .. code:: bro
+
+ local a: addr = 192.168.1.100;
+ local s: subnet = 192.168.0.0/16;
+ if ( a in s )
+ print "true";
+
+.. bro:type:: subnet
+
+ A type representing a block of IP addresses in CIDR notation. A
+ ``subnet`` constant is written as an :bro:type:`addr` followed by a
+ slash (/) and then the network prefix size specified as a decimal
+ number. For example, ``192.168.0.0/16`` or ``[fe80::]/64``.
+
+.. bro:type:: any
+
+ Used to bypass strong typing. For example, a function can take an
+ argument of type ``any`` when it may be of different types.
+
+.. bro:type:: table
+
+ An associative array that maps from one set of values to another. The
+ values being mapped are termed the *index* or *indices* and the
+ result of the mapping is called the *yield*. Indexing into tables
+ is very efficient, and internally it is just a single hash table
+ lookup.
+
+ The table declaration syntax is::
+
+ table [ type^+ ] of type
+
+ where *type^+* is one or more types, separated by commas. For example:
+
+ .. code:: bro
+
+ global a: table[count] of string;
+
+ declares a table indexed by :bro:type:`count` values and yielding
+ :bro:type:`string` values. The yield type can also be more complex:
+
+ .. code:: bro
+
+ global a: table[count] of table[addr, port] of string;
+
+ which declares a table indexed by :bro:type:`count` and yielding
+ another :bro:type:`table` which is indexed by an :bro:type:`addr`
+ and :bro:type:`port` to yield a :bro:type:`string`.
+
+ Initialization of tables occurs by enclosing a set of initializers within
+ braces, for example:
+
+ .. code:: bro
+
+ global t: table[count] of string = {
+ [11] = "eleven",
+ [5] = "five",
+ };
+
+ Accessing table elements is provided by enclosing values within square
+ brackets (``[]``), for example:
+
+ .. code:: bro
+
+ t[13] = "thirteen";
+
+ And membership can be tested with ``in``:
+
+ .. code:: bro
+
+ if ( 13 in t )
+ ...
+
+ Iterate over tables with a ``for`` loop:
+
+ .. code:: bro
+
+ local t: table[count] of string;
+ for ( n in t )
+ ...
+
+ local services: table[addr, port] of string;
+ for ( [a, p] in services )
+ ...
+
+ Remove individual table elements with ``delete``:
+
+ .. code:: bro
+
+ delete t[13];
+
+ Nothing happens if the element with value ``13`` isn't present in
+ the table.
+
+ Table size can be obtained by placing the table identifier between
+ vertical pipe (|) characters:
+
+ .. code:: bro
+
+ |t|
+
+.. bro:type:: set
+
+ A set is like a :bro:type:`table`, but it is a collection of indices
+ that do not map to any yield value. They are declared with the
+ syntax::
+
+ set [ type^+ ]
+
+ where *type^+* is one or more types separated by commas.
+
+ Sets are initialized by listing elements enclosed by curly braces:
+
+ .. 
code:: bro
+
+ global s: set[port] = { 21/tcp, 23/tcp, 80/tcp, 443/tcp };
+ global s2: set[port, string] = { [21/tcp, "ftp"], [23/tcp, "telnet"] };
+
+ The types are explicitly shown in the example above, but they could
+ have been left to type inference.
+
+ Set membership is tested with ``in``:
+
+ .. code:: bro
+
+ if ( 21/tcp in s )
+ ...
+
+ Elements are added with ``add``:
+
+ .. code:: bro
+
+ add s[22/tcp];
+
+ And removed with ``delete``:
+
+ .. code:: bro
+
+ delete s[21/tcp];
+
+ Set size can be obtained by placing the set identifier between
+ vertical pipe (|) characters:
+
+ .. code:: bro
+
+ |s|
+
+.. bro:type:: vector
+
+ A vector is like a :bro:type:`table`, except it's always indexed by a
+ :bro:type:`count`. A vector is declared like:
+
+ .. code:: bro
+
+ global v: vector of string;
+
+ And can be initialized with the vector constructor:
+
+ .. code:: bro
+
+ global v: vector of string = vector("one", "two", "three");
+
+ Adding an element to a vector involves accessing/assigning it:
+
+ .. code:: bro
+
+ v[3] = "four";
+
+ Note how the vector indexing is 0-based.
+
+ Vector size can be obtained by placing the vector identifier between
+ vertical pipe (|) characters:
+
+ .. code:: bro
+
+ |v|
+
+.. bro:type:: record
+
+ A ``record`` is a collection of values. Each value has a field name
+ and a type. Values do not need to have the same type and the types
+ have no restrictions. An example record type definition:
+
+ .. code:: bro
+
+ type MyRecordType: record {
+ c: count;
+ s: string &optional;
+ };
+
+ Access to a record field uses the dollar sign (``$``) operator:
+
+ .. code:: bro
+
+ global r: MyRecordType;
+ r$c = 13;
+
+ Record assignment can be done field by field or as a whole like:
+
+ .. code:: bro
+
+ r = [$c = 13, $s = "thirteen"];
+
+ When assigning a whole record value, all fields that are not
+ :bro:attr:`&optional` and do not have a :bro:attr:`&default` attribute
+ must be specified.
+
+ To test for existence of a field that is :bro:attr:`&optional`, use the
+ ``?$`` operator:
+
+ .. code:: bro
+
+ if ( r?$s )
+ ...
+
+.. bro:type:: file
+
+ Bro supports writing to files, but not reading from them. For
+ example, declare, open, and write to a file and finally close it
+ like:
+
+ .. code:: bro
+
+ global f: file = open("myfile");
+ print f, "hello, world";
+ close(f);
+
+ Writing to files like this for logging usually isn't recommended; for better
+ logging support see :doc:`/logging`.
+
+.. bro:type:: func
+
+ See :bro:type:`function`.
+
+.. bro:type:: function
+
+ Function types in Bro are declared using::
+
+ function( argument* ): type
+
+ where *argument* is a (possibly empty) comma-separated list of
+ arguments, and *type* is an optional return type. For example:
+
+ .. code:: bro
+
+ global greeting: function(name: string): string;
+
+ Here ``greeting`` is an identifier with a certain function type.
+ The function body is not defined yet and ``greeting`` could even
+ have different function body values at different times. To define
+ a function including a body value, the syntax is like:
+
+ .. code:: bro
+
+ function greeting(name: string): string
+ {
+ return "Hello, " + name;
+ }
+
+ Note that in the definition above, it's not necessary for us to have
+ done the first (forward) declaration of ``greeting`` as a function
+ type, but when it is, the argument list and return type must match
+ exactly.
+
+ Function types don't need to have a name and can be assigned anonymously:
+
+ .. 
code:: bro
+
+ greeting = function(name: string): string { return "Hi, " + name; };
+
+ And finally, the function can be called like:
+
+ .. code:: bro
+
+ print greeting("Dave");
+
+.. bro:type:: event
+
+ Event handlers are nearly identical in both syntax and semantics to
+ a :bro:type:`function`, with the two differences being that event
+ handlers have no return type since they never return a value, and
+ you cannot call an event handler. Instead of directly calling an
+ event handler from a script, event handler bodies are executed when
+ they are invoked by one of three different methods:
+
+ - From the event engine
+
+ When the event engine detects an event for which you have
+ defined a corresponding event handler, it queues an event for
+ that handler. The handler is invoked as soon as the event
+ engine finishes processing the current packet and flushing the
+ invocation of other event handlers that were queued first.
+
+ - With the ``event`` statement from a script
+
+ Immediately queuing invocation of an event handler occurs like:
+
+ .. code:: bro
+
+ event password_exposed(user, password);
+
+ This assumes that ``password_exposed`` was previously declared
+ as an event handler type with compatible arguments.
+
+ - Via the ``schedule`` expression in a script
+
+ This delays the invocation of event handlers until some time in
+ the future. For example:
+
+ .. code:: bro
+
+ schedule 5 secs { password_exposed(user, password) };
+
+ Multiple event handler bodies can be defined for the same event handler
+ identifier and the body of each will be executed in turn. Ordering
+ of execution can be influenced with :bro:attr:`&priority`.
+
+Attributes
+----------
+
+Attributes occur at the end of type/event declarations and change their
+behavior. The syntax is ``&key`` or ``&key=val``, e.g., ``type T:
+set[count] &read_expire=5min`` or ``event foo() &priority=-3``. The Bro
+scripting language supports the following built-in attributes.
+
+.. bro:attr:: &optional
+
+ Allows a record field to be missing. For example, the type ``record {
+ a: addr, b: port &optional }`` could be instantiated either as
+ singleton ``[$a=127.0.0.1]`` or pair ``[$a=127.0.0.1, $b=80/tcp]``.
+
+.. bro:attr:: &default
+
+ Uses a default value for a record field or container elements. For
+ example, ``table[int] of string &default="foo"`` would create a
+ table that returns the :bro:type:`string` ``"foo"`` for any
+ non-existing index.
+
+.. bro:attr:: &redef
+
+ Allows for redefinition of initial object values. This is typically
+ used with constants, for example, ``const clever = T &redef;`` would
+ allow the constant to be redefined at some later point during script
+ execution.
+
+.. bro:attr:: &rotate_interval
+
+ Rotates a file after a specified interval.
+
+.. bro:attr:: &rotate_size
+
+ Rotates a file after it has reached a given size in bytes.
+
+.. bro:attr:: &add_func
+
+.. TODO: needs to be documented.
+
+.. bro:attr:: &delete_func
+
+.. TODO: needs to be documented.
+
+.. bro:attr:: &expire_func
+
+ Called right before a container element expires. The function's
+ first parameter has the same type as the container and the second
+ parameter has the same type as the container's index. The return
+ value is an :bro:type:`interval` indicating the amount of additional
+ time to wait before expiring the container element at the given
+ index (which will trigger another execution of this function).
+
+.. bro:attr:: &read_expire
+
+ Specifies a read expiration timeout for container elements.
That is,
+ the element expires after the given amount of time since the last
+ time it has been read. Note that a write also counts as a read.
+
+.. bro:attr:: &write_expire
+
+ Specifies a write expiration timeout for container elements. That
+ is, the element expires after the given amount of time since the
+ last time it has been written.
+
+.. bro:attr:: &create_expire
+
+ Specifies a creation expiration timeout for container elements. That
+ is, the element expires after the given amount of time since it has
+ been inserted into the container, regardless of any reads or writes.
+
+.. bro:attr:: &persistent
+
+ Makes a variable persistent, i.e., its value is written to disk (by
+ default at shutdown time).
+
+.. bro:attr:: &synchronized
+
+ Synchronizes variable accesses across nodes. The value of a
+ ``&synchronized`` variable is automatically propagated to all peers
+ when it changes.
+
+.. bro:attr:: &postprocessor
+
+.. TODO: needs to be documented.
+
+.. bro:attr:: &encrypt
+
+ Encrypts files right before writing them to disk.
+
+.. TODO: needs to be documented in more detail.
+
+.. bro:attr:: &match
+
+.. TODO: needs to be documented.
+
+.. bro:attr:: &raw_output
+
+ Opens a file in raw mode, i.e., non-ASCII characters are not
+ escaped.
+
+.. bro:attr:: &mergeable
+
+ Prefers set union to assignment for synchronized state. This
+ attribute is used in conjunction with :bro:attr:`&synchronized`
+ container types: when the same container is updated at two peers
+ with different values, the propagation of the state causes a race
+ condition, where the last update succeeds. This can cause
+ inconsistencies and can be avoided by unifying the two sets, rather
+ than merely overwriting the old value.
+
+.. bro:attr:: &priority
+
+ Specifies the execution priority of an event handler. Higher values
+ are executed before lower ones. The default value is 0.
+
+.. bro:attr:: &group
+
+ Groups event handlers such that those in the same group can be
+ jointly activated or deactivated.
+
+.. bro:attr:: &log
+
+ Writes a record field to the associated log stream.
+
+.. bro:attr:: &error_handler
+
+.. TODO: needs to be documented.
+
+.. bro:attr:: (&tracked)
+
+.. TODO: needs to be documented or removed if it's not used anywhere. diff --git a/doc/scripts/example.bro b/doc/scripts/example.bro index c239f2d2a2..9f6f656ee1 100644 --- a/doc/scripts/example.bro +++ b/doc/scripts/example.bro @@ -1,10 +1,20 @@ -##! This is an example script that demonstrates how to document. Comments -##! of the form ``##!`` are for the script summary. The contents of +##! This is an example script that demonstrates documentation features.
+##! Comments of the form ``##!`` are for the script summary. The contents of
 ##! these comments are transferred directly into the auto-generated
 ##! `reStructuredText `_
 ##! (reST) document's summary section.
 ##!
 ##! .. tip:: You can embed directives and roles within ``##``-stylized comments.
+##!
+##! There's also a custom role to reference any identifier node in
+##! the Bro Sphinx domain that's good for "see alsos", e.g.
+##!
+##! See also: :bro:see:`Example::a_var`, :bro:see:`Example::ONE`,
+##! :bro:see:`SSH::Info`
+##!
+##! And a custom directive does the equivalent references:
+##!
+##! .. bro:see:: Example::a_var Example::ONE SSH::Info
 
 # Comments that use a single pound sign (#) are not significant to
 # a script's auto-generated documentation, but ones that use a
@@ -12,8 +22,8 @@
 # field comments, it's necessary to disambiguate the field with
 # which a comment associates: e.g.
"##<" can be used on the same line # as a field to signify the comment relates to it and not the -# following field. "##<" is not meant for general use, just -# record/enum fields. +# following field. "##<" can also be used more generally in any +# variable declarations to associate with the last-declared identifier. # # Generally, the auto-doc comments (##) are associated with the # next declaration/identifier found in the script, but the doc framework @@ -141,7 +151,7 @@ export { const an_option: set[addr, addr, string] &redef; # default initialization will be self-documenting - const option_with_init = 0.01 secs &redef; + const option_with_init = 0.01 secs &redef; ##< More docs can be added here. ############## state variables ############ # right now, I'm defining this as any global @@ -173,6 +183,7 @@ export { ## Summarize "an_event" here. ## Give more details about "an_event" here. + ## Example::an_event should not be confused as a parameter. ## name: describe the argument here global an_event: event(name: string); diff --git a/doc/scripts/index.rst b/doc/scripts/index.rst new file mode 100644 index 0000000000..bf0fa25f10 --- /dev/null +++ b/doc/scripts/index.rst @@ -0,0 +1,8 @@ +.. This is a stub doc to which broxygen appends during the build process + +Index of All Individual Bro Scripts +=================================== + +.. toctree:: + :maxdepth: 1 + diff --git a/doc/scripts/internal.rst b/doc/scripts/internal.rst new file mode 100644 index 0000000000..a6c10f1cfb --- /dev/null +++ b/doc/scripts/internal.rst @@ -0,0 +1,5 @@ +.. This is a stub doc to which broxygen appends during the build process + +Internal Scripts +================ + diff --git a/doc/scripts/source/packages.rst b/doc/scripts/packages.rst similarity index 76% rename from doc/scripts/source/packages.rst rename to doc/scripts/packages.rst index 35f6633fd4..56909ffbc6 100644 --- a/doc/scripts/source/packages.rst +++ b/doc/scripts/packages.rst @@ -1,7 +1,7 @@ -.. This is a stub doc to which the build process can append. +.. This is a stub doc to which broxygen appends during the build process -Bro Script Packages -=================== +Index of All Bro Script Packages +================================ Bro has the following script packages (e.g. collections of related scripts in a common directory). If the package directory contains a ``__load__.bro`` @@ -10,8 +10,3 @@ script, it supports being loaded in mass as a whole directory for convenience. Packages/scripts in the ``base/`` directory are all loaded by default, while ones in ``policy/`` provide functionality and customization options that are more appropriate for users to decide whether they'd like to load it or not. - -.. 
toctree:: - :maxdepth: 1 - - diff --git a/doc/scripts/source/_static/showhide.js b/doc/scripts/source/_static/showhide.js deleted file mode 100644 index d6a8923143..0000000000 --- a/doc/scripts/source/_static/showhide.js +++ /dev/null @@ -1,64 +0,0 @@ -// make literal blocks corresponding to identifier initial values -// hidden by default -$(document).ready(function() { - - var showText='(Show Value)'; - var hideText='(Hide Value)'; - - var is_visible = false; - - // select field-list tables that come before a literal block - tables = $('.highlight-python').prev('table.docutils.field-list'); - - tables.find('th.field-name').filter(function(index) { - return $(this).html() == "Default :"; - }).next().append(''+showText+''); - - // hide all literal blocks that follow a field-list table - tables.next('.highlight-python').hide(); - - // register handler for clicking a "toggle" link - $('a.toggleLink').click(function() { - is_visible = !is_visible; - - $(this).html( (!is_visible) ? showText : hideText); - - // the link is inside a
    and the next - // literal block after the table is the literal block that we want - // to show/hide - $(this).parent().parent().parent().parent().next('.highlight-python').slideToggle('fast'); - - // override default link behavior - return false; - }); -}); - -// make "Private Interface" sections hidden by default -$(document).ready(function() { - - var showText='Show Private Interface (for internal use)'; - var hideText='Hide Private Interface'; - - var is_visible = false; - - // insert show/hide links - $('#private-interface').children(":first-child").after(''+showText+''); - - // wrap all sub-sections in a new div that can be hidden/shown - $('#private-interface').children(".section").wrapAll('
    '); - - // hide the given class - $('.private').hide(); - - // register handler for clicking a "toggle" link - $('a.privateToggle').click(function() { - is_visible = !is_visible; - - $(this).html( (!is_visible) ? showText : hideText); - - $('.private').slideToggle('fast'); - - // override default link behavior - return false; - }); -}); diff --git a/doc/scripts/source/_templates/layout.html b/doc/scripts/source/_templates/layout.html deleted file mode 100644 index 5685e1dc37..0000000000 --- a/doc/scripts/source/_templates/layout.html +++ /dev/null @@ -1,5 +0,0 @@ -{% extends "!layout.html" %} -{% block extrahead %} - - {{ super() }} -{% endblock %} diff --git a/doc/scripts/source/bifs.rst b/doc/scripts/source/bifs.rst deleted file mode 100644 index 6a42cafafc..0000000000 --- a/doc/scripts/source/bifs.rst +++ /dev/null @@ -1,5 +0,0 @@ -.. This is a stub doc to which the build process can append. - -Built-In Functions (BIFs) -========================= - diff --git a/doc/scripts/source/builtins.rst b/doc/scripts/source/builtins.rst deleted file mode 100644 index e6a9e9b829..0000000000 --- a/doc/scripts/source/builtins.rst +++ /dev/null @@ -1,118 +0,0 @@ -Builtin Types and Attributes -============================ - -Types ------ - -The Bro scripting language supports the following built-in types. - -.. TODO: add documentation - -.. bro:type:: void - -.. bro:type:: bool - -.. bro:type:: int - -.. bro:type:: count - -.. bro:type:: counter - -.. bro:type:: double - -.. bro:type:: time - -.. bro:type:: interval - -.. bro:type:: string - -.. bro:type:: pattern - -.. bro:type:: enum - -.. bro:type:: timer - -.. bro:type:: port - -.. bro:type:: addr - -.. bro:type:: net - -.. bro:type:: subnet - -.. bro:type:: any - -.. bro:type:: table - -.. bro:type:: union - -.. bro:type:: record - -.. bro:type:: types - -.. bro:type:: func - -.. bro:type:: file - -.. bro:type:: vector - -.. TODO: below are kind of "special cases" that bro knows about? - -.. bro:type:: set - -.. bro:type:: function - -.. bro:type:: event - -Attributes ----------- - -The Bro scripting language supports the following built-in attributes. - -.. TODO: add documentation - -.. bro:attr:: &optional - -.. bro:attr:: &default - -.. bro:attr:: &redef - -.. bro:attr:: &rotate_interval - -.. bro:attr:: &rotate_size - -.. bro:attr:: &add_func - -.. bro:attr:: &delete_func - -.. bro:attr:: &expire_func - -.. bro:attr:: &read_expire - -.. bro:attr:: &write_expire - -.. bro:attr:: &create_expire - -.. bro:attr:: &persistent - -.. bro:attr:: &synchronized - -.. bro:attr:: &postprocessor - -.. bro:attr:: &encrypt - -.. bro:attr:: &match - -.. bro:attr:: &disable_print_hook - -.. bro:attr:: &raw_output - -.. bro:attr:: &mergeable - -.. bro:attr:: &priority - -.. bro:attr:: &group - -.. bro:attr:: &log - -.. bro:attr:: (&tracked) diff --git a/doc/scripts/source/common.rst b/doc/scripts/source/common.rst deleted file mode 100644 index 6105585b2c..0000000000 --- a/doc/scripts/source/common.rst +++ /dev/null @@ -1,19 +0,0 @@ -Common Documentation -==================== - -.. _common_port_analysis_doc: - -Port Analysis -------------- - -TODO: add some stuff here - -.. _common_packet_filter_doc: - -Packet Filter -------------- - -TODO: add some stuff here - -.. note:: Filters are only relevant when dynamic protocol detection (DPD) - is explicitly turned off (Bro release 1.6 enabled DPD by default). 
diff --git a/doc/scripts/source/index.rst b/doc/scripts/source/index.rst deleted file mode 100644 index a144644208..0000000000 --- a/doc/scripts/source/index.rst +++ /dev/null @@ -1,23 +0,0 @@ -.. Bro documentation master file - -Welcome to Bro's documentation! -=============================== - -Contents: - -.. toctree:: - :maxdepth: 1 - :glob: - - common - builtins - internal - bifs - packages - scripts/index - -Indices and tables -================== - -* :ref:`genindex` -* :ref:`search` diff --git a/doc/scripts/source/internal.rst b/doc/scripts/source/internal.rst deleted file mode 100644 index 817c76e725..0000000000 --- a/doc/scripts/source/internal.rst +++ /dev/null @@ -1,5 +0,0 @@ -.. This is a stub doc to which the build process can append. - -Internal Scripts -================ - diff --git a/doc/scripts/source/scripts/index.rst b/doc/scripts/source/scripts/index.rst deleted file mode 100644 index fd11a3924e..0000000000 --- a/doc/scripts/source/scripts/index.rst +++ /dev/null @@ -1,6 +0,0 @@ -Index of All Bro Script Documentation -===================================== - -.. toctree:: - :maxdepth: 1 - diff --git a/doc/signatures.rst b/doc/signatures.rst index 25dfc31f5e..59ca819636 100644 --- a/doc/signatures.rst +++ b/doc/signatures.rst @@ -3,7 +3,7 @@ Signatures ========== -.. class:: opening +.. rst-class:: opening Bro relies primarily on its extensive scripting language for defining and analyzing detection policies. In addition, however, @@ -34,7 +34,7 @@ Let's look at an example signature first: This signature asks Bro to match the regular expression ``.*root`` on all TCP connections going to port 80. When the signature triggers, Bro -will raise an event ``signature_match`` of the form: +will raise an event :bro:id:`signature_match` of the form: .. code:: bro @@ -45,19 +45,24 @@ triggered the match, ``msg`` is the string specified by the signature's event statement (``Found root!``), and data is the last piece of payload which triggered the pattern match. -To turn such ``signature_match`` events into actual alarms, you can -load Bro's ``signature.bro`` script. This script contains a default -event handler that raises ``SensitiveSignature`` `Notices -`_ (as well as others; see the beginning of the script). - -As signatures are independent of Bro's policy scripts, they are put -into their own file(s). There are two ways to specify which files -contain signatures: By using the ``-s`` flag when you invoke Bro, or -by extending the Bro variable ``signatures_files`` using the ``+=`` -operator. If a signature file is given without a path, it is searched -along the normal ``BROPATH``. The default extension of the file name -is ``.sig``, and Bro appends that automatically when neccesary. +To turn such :bro:id:`signature_match` events into actual alarms, you can +load Bro's :doc:`/scripts/base/frameworks/signatures/main` script. +This script contains a default event handler that raises +:bro:enum:`Signatures::Sensitive_Signature` :doc:`Notices ` +(as well as others; see the beginning of the script). +As signatures are independent of Bro's policy scripts, they are put into +their own file(s). There are three ways to specify which files contain +signatures: By using the ``-s`` flag when you invoke Bro, or by +extending the Bro variable :bro:id:`signature_files` using the ``+=`` +operator, or by using the ``@load-sigs`` directive inside a Bro script. +If a signature file is given without a full path, it is searched for +along the normal ``BROPATH``. 
Additionally, the ``@load-sigs`` +directive can be used to load signature files in a path relative to the +Bro script in which it's placed, e.g. ``@load-sigs ./mysigs.sig`` will +expect that signature file in the same directory as the Bro script. The +default extension of the file name is ``.sig``, and Bro appends that +automatically when necessary. Signature language ================== @@ -78,9 +83,8 @@ Header Conditions ~~~~~~~~~~~~~~~~~ Header conditions limit the applicability of the signature to a subset -of traffic that contains matching packet headers. For TCP, this match -is performed only for the first packet of a connection. For other -protocols, it is done on each individual packet. +of traffic that contains matching packet headers. This type of matching +is performed only for the first packet of a connection. There are pre-defined header conditions for some of the most used header fields. All of them generally have the format `` @@ -90,14 +94,22 @@ one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``; and against. The following keywords are defined: ``src-ip``/``dst-ip `` - Source and destination address, repectively. Addresses can be - given as IP addresses or CIDR masks. + Source and destination address, respectively. Addresses can be given + as IPv4 or IPv6 addresses or CIDR masks. For IPv6 addresses/masks + the colon-hexadecimal representation of the address must be enclosed + in square brackets (e.g. ``[fe80::1]`` or ``[fe80::0]/16``). -``src-port``/``dst-port`` ```` - Source and destination port, repectively. +``src-port``/``dst-port `` + Source and destination port, respectively. -``ip-proto tcp|udp|icmp`` - IP protocol. +``ip-proto tcp|udp|icmp|icmp6|ip|ip6`` + IPv4 header's Protocol field or the Next Header field of the final + IPv6 header (i.e. either Next Header field in the fixed IPv6 header + if no extension headers are present or that field from the last + extension header in the chain). Note that the IP-in-IP forms of + tunneling are automatically decapsulated by default and signatures + apply to only the inner-most packet, so specifying ``ip`` or ``ip6`` + is a no-op. For lists of multiple values, they are sequentially compared against the corresponding header field. If at least one of the comparisons @@ -111,30 +123,32 @@ condition can be defined either as header [:] [& ] -This compares the value found at the given position of the packet -header with a list of values. ``offset`` defines the position of the -value within the header of the protocol defined by ``proto`` (which -can be ``ip``, ``tcp``, ``udp`` or ``icmp``). ``size`` is either 1, 2, -or 4 and specifies the value to have a size of this many bytes. If the -optional ``& `` is given, the packet's value is first masked -with the integer before it is compared to the value-list. ``cmp`` is -one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``. ``value-list`` is -a list of comma-separated integers similar to those described above. -The integers within the list may be followed by an additional ``/ -mask`` where ``mask`` is a value from 0 to 32. This corresponds to the -CIDR notation for netmasks and is translated into a corresponding -bitmask applied to the packet's value prior to the comparison (similar -to the optional ``& integer``). +This compares the value found at the given position of the packet header +with a list of values. ``offset`` defines the position of the value +within the header of the protocol defined by ``proto`` (which can be +``ip``, ``ip6``, ``tcp``, ``udp``, ``icmp`` or ``icmp6``). 
``size`` is +either 1, 2, or 4 and specifies the value to have a size of this many +bytes. If the optional ``& `` is given, the packet's value is +first masked with the integer before it is compared to the value-list. +``cmp`` is one of ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``. +``value-list`` is a list of comma-separated integers similar to those +described above. The integers within the list may be followed by an +additional ``/ mask`` where ``mask`` is a value from 0 to 32. This +corresponds to the CIDR notation for netmasks and is translated into a +corresponding bitmask applied to the packet's value prior to the +comparison (similar to the optional ``& integer``). IPv6 address values +are not allowed in the value-list, though you can still inspect any 1, +2, or 4 byte section of an IPv6 header using this keyword. -Putting all together, this is an example conditiation that is -equivalent to ``dst- ip == 1.2.3.4/16, 5.6.7.8/24``: +Putting it all together, this is an example condition that is +equivalent to ``dst-ip == 1.2.3.4/16, 5.6.7.8/24``: .. code:: bro-sig header ip[16:4] == 1.2.3.4/16, 5.6.7.8/24 -Internally, the predefined header conditions are in fact just -short-cuts and mappend into a generic condition. +Note that the analogous example for IPv6 isn't currently possible since +4 bytes is the max width of a value that can be compared. Content Conditions ~~~~~~~~~~~~~~~~~~ @@ -143,7 +157,7 @@ Content conditions are defined by regular expressions. We differentiate two kinds of content conditions: first, the expression may be declared with the ``payload`` statement, in which case it is matched against the raw payload of a connection (for reassembled TCP -streams) or of a each packet (for ICMP, UDP, and non-reassembled TCP). +streams) or of each packet (for ICMP, UDP, and non-reassembled TCP). Second, it may be prefixed with an analyzer-specific label, in which case the expression is matched against the data as extracted by the corresponding analyzer. @@ -208,7 +222,7 @@ To define dependencies between signatures, there are two conditions: ``requires-reverse-signature [!] `` Similar to ``requires-signature``, but ``id`` has to match for the - opposite direction of the same connection, compared the current + opposite direction of the same connection, compared to the current signature. This allows to model the notion of requests and replies. @@ -224,20 +238,10 @@ matched. The following context conditions are defined: confirming the match. If false is returned, no signature match is going to be triggered. The function has to be of type ``function cond(state: signature_state, data: string): bool``. Here, - ``content`` may contain the most recent content chunk available at + ``data`` may contain the most recent content chunk available at the time the signature was matched. If no such chunk is available, - ``content`` will be the empty string. ``signature_state`` is - defined as follows: - - .. code:: bro - - type signature_state: record { - id: string; # ID of the signature - conn: connection; # Current connection - is_orig: bool; # True if current endpoint is originator - payload_size: count; # Payload size of the first packet - }; - + ``data`` will be the empty string. See :bro:type:`signature_state` + for its definition. ``payload-size `` Compares the integer to the size of the payload of a packet. For @@ -265,7 +269,7 @@ Actions define what to do if a signature matches. Currently, there are two actions defined: ``event `` - Raises a ``signature_match`` event. 
The event handler has the + Raises a :bro:id:`signature_match` event. The event handler has the following type: .. code:: bro @@ -338,11 +342,11 @@ Things to keep in mind when writing signatures signature engine and can be matched with ``\r`` and ``\n``, respectively. Generally, Bro follows `flex's regular expression syntax - `_. - See the DPD signatures in ``policy/sigs/dpd.bro`` for some examples + `_. + See the DPD signatures in ``base/frameworks/dpd/dpd.sig`` for some examples of fairly complex payload patterns. -* The data argument of the ``signature_match`` handler might not carry +* The data argument of the :bro:id:`signature_match` handler might not carry the full text matched by the regular expression. Bro performs the matching incrementally as packets come in; when the signature eventually fires, it can only pass on the most recent chunk of data. diff --git a/doc/upgrade.rst b/doc/upgrade.rst index 696c64ef7f..9c1537754a 100644 --- a/doc/upgrade.rst +++ b/doc/upgrade.rst @@ -3,7 +3,7 @@ Upgrading From Bro 1.5 to 2.0 ============================= -.. class:: opening +.. rst-class:: opening This guide details differences between Bro versions 1.5 and 2.0 that may be important for users to know as they work on updating @@ -39,7 +39,7 @@ version. The two rules of thumb are: if you need help. Below we summarize changes from 1.x to 2.x in more detail. This list -isn't complete, see the `CHANGES <{{git('bro:CHANGES', 'txt')}}>`_ file in the +isn't complete, see the :download:`CHANGES ` file in the distribution for the full story. Default Scripts @@ -55,13 +55,13 @@ renamed to ``scripts/`` and contains major subdirectories ``base/``, further. The contents of the new ``scripts/`` directory, like the old/flat -``policy/`` still gets installed under under the ``share/bro`` +``policy/`` still gets installed under the ``share/bro`` subdirectory of the installation prefix path just like previous versions. For example, if Bro was compiled like ``./configure --prefix=/usr/local/bro && make && make install``, then the script hierarchy can be found in ``/usr/local/bro/share/bro``. -THe main +The main subdirectories of that hierarchy are as follows: - ``base/`` contains all scripts that are loaded by Bro by default @@ -131,8 +131,8 @@ Logging Framework endpoint. - The new logging framework makes it possible to extend, customize, - and filter logs very easily. See `the logging framework - <{{git('bro:doc/logging.rst')}}>`_ more information on usage. + and filter logs very easily. See the :doc:`logging framework ` + for more information on usage. - A common pattern found in the new scripts is to store logging stream records for protocols inside the ``connection`` records so that @@ -155,8 +155,7 @@ Notice Framework The way users interact with "notices" has changed significantly in order to make it easier to define a site policy and more extensible -for adding customized actions. See the `the notice framework -<{{git('bro:doc/notice.rst')}}>`_. +for adding customized actions. See the :doc:`notice framework `. New Default Settings @@ -169,10 +168,6 @@ New Default Settings are loaded. See ``PacketFilter::all_packets`` for how to revert to old behavior. -- By default, Bro now sets a libpcap snaplen of 65535. Depending on - the OS, this may have performance implications and you can use the - ``--snaplen`` option to change the value. 
- API Changes ----------- @@ -198,7 +193,7 @@ Variable Naming - Identifiers may have been renamed to conform to new `scripting conventions - <{{docroot}}/development/script-conventions.html>`_ + `_ BroControl @@ -214,8 +209,8 @@ live analysis. BroControl now has an extensive plugin interface for adding new commands and options. Note that this is still considered experimental. -We have remove the ``analysis`` command, and BroControl does currently -not not send daily alarm summaries anymore (this may be restored +We have removed the ``analysis`` command, and BroControl currently +does not send daily alarm summaries anymore (this may be restored later). Removed Functionality @@ -238,11 +233,11 @@ Development Infrastructure ========================== Bro development has moved from using SVN to Git for revision control. -Users that like to use the latest Bro developments by checking it out +Users that want to use the latest Bro development snapshot by checking it out from the source repositories should see the `development process -<{{docroot}}/development/process.html>`_. Note that all the various -sub-components now reside on their own repositories. However, the -top-level Bro repository includes them as git submodules so it's easu +`_. Note that all the various +sub-components now reside in their own repositories. However, the +top-level Bro repository includes them as git submodules so it's easy to check them all out simultaneously. Bro now uses `CMake `_ for its build system so diff --git a/pkg/check-cmake b/pkg/check-cmake index 2c3ed765a6..17531af2f7 100755 --- a/pkg/check-cmake +++ b/pkg/check-cmake @@ -5,7 +5,7 @@ # version of CMake is required to obtain consistency, but can be increased # as new versions of CMake come out that also produce working packages. -CMAKE_PACK_REQ="cmake version 2.8.4" +CMAKE_PACK_REQ="cmake version 2.8.6" CMAKE_VER=`cmake -version` if [ "${CMAKE_VER}" != "${CMAKE_PACK_REQ}" ]; then diff --git a/pkg/make-deb-packages b/pkg/make-deb-packages index a9de210e52..432de8336a 100755 --- a/pkg/make-deb-packages +++ b/pkg/make-deb-packages @@ -27,21 +27,21 @@ cd .. # Minimum Bro ./configure --prefix=${prefix} --disable-broccoli --disable-broctl \ - --pkg-name-prefix=Bro --binary-package + --pkg-name-prefix=Bro-minimal --binary-package ( cd build && make package ) # Full Bro package -./configure --prefix=${prefix} --pkg-name-prefix=Bro-all --binary-package +./configure --prefix=${prefix} --pkg-name-prefix=Bro --binary-package ( cd build && make package ) # Broccoli cd aux/broccoli ./configure --prefix=${prefix} --binary-package -( cd build && make package && mv Broccoli*.deb ../../../build/ ) +( cd build && make package && mv *.deb ../../../build/ ) cd ../.. # Broctl cd aux/broctl ./configure --prefix=${prefix} --binary-package -( cd build && make package && mv Broctl*.deb ../../../build/ ) +( cd build && make package && mv *.deb ../../../build/ ) cd ../.. diff --git a/pkg/make-mac-packages b/pkg/make-mac-packages index a8f7f965c8..2930f8f393 100755 --- a/pkg/make-mac-packages +++ b/pkg/make-mac-packages @@ -3,7 +3,13 @@ # This script creates binary packages for Mac OS X. # They can be found in ../build/ after running. -./check-cmake || { exit 1; } +cmake -P /dev/stdin << "EOF" +if ( ${CMAKE_VERSION} VERSION_LESS 2.8.9 ) + message(FATAL_ERROR "CMake >= 2.8.9 required to build package") +endif () +EOF + +[ $? -ne 0 ] && exit 1; type sw_vers > /dev/null 2>&1 || { echo "Unable to get Mac OS X version" >&2; @@ -34,26 +40,26 @@ prefix=/opt/bro cd .. 
# Minimum Bro -CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ - --disable-broccoli --disable-broctl --pkg-name-prefix=Bro \ +CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ + --disable-broccoli --disable-broctl --pkg-name-prefix=Bro-minimal \ --binary-package ( cd build && make package ) # Full Bro package -CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ - --pkg-name-prefix=Bro-all --binary-package +CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ + --pkg-name-prefix=Bro --binary-package ( cd build && make package ) # Broccoli cd aux/broccoli -CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ +CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ --binary-package -( cd build && make package && mv Broccoli*.dmg ../../../build/ ) +( cd build && make package && mv *.dmg ../../../build/ ) cd ../.. # Broctl cd aux/broctl -CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ +CMAKE_PREFIX_PATH=/usr CMAKE_OSX_ARCHITECTURES=${arch} ./configure --prefix=${prefix} \ --binary-package -( cd build && make package && mv Broctl*.dmg ../../../build/ ) +( cd build && make package && mv *.dmg ../../../build/ ) cd ../.. diff --git a/pkg/make-rpm-packages b/pkg/make-rpm-packages index ac8dfa97b4..9560cc80ff 100755 --- a/pkg/make-rpm-packages +++ b/pkg/make-rpm-packages @@ -20,21 +20,21 @@ cd .. # Minimum Bro ./configure --prefix=${prefix} --disable-broccoli --disable-broctl \ - --pkg-name-prefix=Bro --binary-package + --pkg-name-prefix=Bro-minimal --binary-package ( cd build && make package ) # Full Bro package -./configure --prefix=${prefix} --pkg-name-prefix=Bro-all --binary-package +./configure --prefix=${prefix} --pkg-name-prefix=Bro --binary-package ( cd build && make package ) # Broccoli cd aux/broccoli ./configure --prefix=${prefix} --binary-package -( cd build && make package && mv Broccoli*.rpm ../../../build/ ) +( cd build && make package && mv *.rpm ../../../build/ ) cd ../.. # Broctl cd aux/broctl ./configure --prefix=${prefix} --binary-package -( cd build && make package && mv Broctl*.rpm ../../../build/ ) +( cd build && make package && mv *.rpm ../../../build/ ) cd ../.. diff --git a/scripts/base/frameworks/cluster/__load__.bro b/scripts/base/frameworks/cluster/__load__.bro index 3334164866..0f9003514d 100644 --- a/scripts/base/frameworks/cluster/__load__.bro +++ b/scripts/base/frameworks/cluster/__load__.bro @@ -9,10 +9,10 @@ redef peer_description = Cluster::node; # Add a cluster prefix. @prefixes += cluster -## If this script isn't found anywhere, the cluster bombs out. -## Loading the cluster framework requires that a script by this name exists -## somewhere in the BROPATH. The only thing in the file should be the -## cluster definition in the :bro:id:`Cluster::nodes` variable. +# If this script isn't found anywhere, the cluster bombs out. +# Loading the cluster framework requires that a script by this name exists +# somewhere in the BROPATH. The only thing in the file should be the +# cluster definition in the :bro:id:`Cluster::nodes` variable. 
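For reference, a minimal ``cluster-layout.bro`` of the kind described in the comment above might look like the following sketch (all addresses, ports, and the interface name are hypothetical):

.. code:: bro

    # cluster-layout.bro: contains nothing but the Cluster::nodes definition.
    redef Cluster::nodes = {
        ["manager"]  = [$node_type=Cluster::MANAGER, $ip=192.168.1.1, $p=47761/tcp,
                        $workers=set("worker-1")],
        ["proxy-1"]  = [$node_type=Cluster::PROXY,   $ip=192.168.1.1, $p=47762/tcp,
                        $manager="manager", $workers=set("worker-1")],
        ["worker-1"] = [$node_type=Cluster::WORKER,  $ip=192.168.1.2, $p=47763/tcp,
                        $interface="eth0", $manager="manager", $proxy="proxy-1"]
    };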
@load cluster-layout @if ( Cluster::node in Cluster::nodes ) @@ -28,17 +28,14 @@ redef Communication::listen_port = Cluster::nodes[Cluster::node]$p; @if ( Cluster::local_node_type() == Cluster::MANAGER ) @load ./nodes/manager -@load site/local-manager @endif @if ( Cluster::local_node_type() == Cluster::PROXY ) @load ./nodes/proxy -@load site/local-proxy @endif @if ( Cluster::local_node_type() == Cluster::WORKER ) @load ./nodes/worker -@load site/local-worker @endif @endif diff --git a/scripts/base/frameworks/cluster/main.bro b/scripts/base/frameworks/cluster/main.bro index 2e7fc32d8b..766dea912f 100644 --- a/scripts/base/frameworks/cluster/main.bro +++ b/scripts/base/frameworks/cluster/main.bro @@ -1,21 +1,45 @@ +##! A framework for establishing and controlling a cluster of Bro instances. +##! In order to use the cluster framework, a script named +##! ``cluster-layout.bro`` must exist somewhere in Bro's script search path +##! which has a cluster definition of the :bro:id:`Cluster::nodes` variable. +##! The ``CLUSTER_NODE`` environment variable or :bro:id:`Cluster::node` +##! must also be sent and the cluster framework loaded as a package like +##! ``@load base/frameworks/cluster``. + @load base/frameworks/control module Cluster; export { + ## The cluster logging stream identifier. redef enum Log::ID += { LOG }; - + + ## The record type which contains the column fields of the cluster log. type Info: record { + ## The time at which a cluster message was generated. ts: time; + ## A message indicating information about the cluster's operation. message: string; } &log; - + + ## Types of nodes that are allowed to participate in the cluster + ## configuration. type NodeType: enum { + ## A dummy node type indicating the local node is not operating + ## within a cluster. NONE, + ## A node type which is allowed to view/manipulate the configuration + ## of other nodes in the cluster. CONTROL, + ## A node type responsible for log and policy management. MANAGER, + ## A node type for relaying worker node communication and synchronizing + ## worker node state. PROXY, + ## The node type doing all the actual traffic analysis. WORKER, + ## A node acting as a traffic recorder using the + ## `Time Machine `_ software. TIME_MACHINE, }; @@ -49,30 +73,41 @@ export { ## Record type to indicate a node in a cluster. type Node: record { + ## Identifies the type of cluster node in this node's configuration. node_type: NodeType; + ## The IP address of the cluster node. ip: addr; + ## If the *ip* field is a non-global IPv6 address, this field + ## can specify a particular :rfc:`4007` ``zone_id``. + zone_id: string &default=""; + ## The port to which the this local node can connect when + ## establishing communication. p: port; - ## Identifier for the interface a worker is sniffing. interface: string &optional; - - ## Manager node this node uses. For workers and proxies. + ## Name of the manager node this node uses. For workers and proxies. manager: string &optional; - ## Proxy node this node uses. For workers and managers. + ## Name of the proxy node this node uses. For workers and managers. proxy: string &optional; - ## Worker nodes that this node connects with. For managers and proxies. + ## Names of worker nodes that this node connects with. + ## For managers and proxies. workers: set[string] &optional; + ## Name of a time machine node with which this node connects. time_machine: string &optional; }; ## This function can be called at any time to determine if the cluster ## framework is being enabled for this run. 
+ ## + ## Returns: True if :bro:id:`Cluster::node` has been set. global is_enabled: function(): bool; ## This function can be called at any time to determine what type of ## cluster node the current Bro instance is going to be acting as. ## If :bro:id:`Cluster::is_enabled` returns false, then ## :bro:enum:`Cluster::NONE` is returned. + ## + ## Returns: The :bro:type:`Cluster::NodeType` the calling node acts as. global local_node_type: function(): NodeType; ## This gives the value for the number of workers currently connected to, diff --git a/scripts/base/frameworks/cluster/nodes/proxy.bro b/scripts/base/frameworks/cluster/nodes/proxy.bro index 426a5c26fc..e38a5e9109 100644 --- a/scripts/base/frameworks/cluster/nodes/proxy.bro +++ b/scripts/base/frameworks/cluster/nodes/proxy.bro @@ -1,3 +1,7 @@ +##! Redefines the options common to all proxy nodes within a Bro cluster. +##! In particular, proxies are not meant to produce logs locally and they +##! do not forward events anywhere, they mainly synchronize state between +##! worker nodes. @prefixes += cluster-proxy diff --git a/scripts/base/frameworks/cluster/nodes/worker.bro b/scripts/base/frameworks/cluster/nodes/worker.bro index 454cf57c85..61d5228c88 100644 --- a/scripts/base/frameworks/cluster/nodes/worker.bro +++ b/scripts/base/frameworks/cluster/nodes/worker.bro @@ -1,3 +1,7 @@ +##! Redefines some options common to all worker nodes within a Bro cluster. +##! In particular, worker nodes do not produce logs locally, instead they +##! send them off to a manager node for processing. + @prefixes += cluster-worker ## Don't do any local logging. diff --git a/scripts/base/frameworks/cluster/setup-connections.bro b/scripts/base/frameworks/cluster/setup-connections.bro index 059b984d61..4576f5b913 100644 --- a/scripts/base/frameworks/cluster/setup-connections.bro +++ b/scripts/base/frameworks/cluster/setup-connections.bro @@ -1,3 +1,6 @@ +##! This script establishes communication among all nodes in a cluster +##! as defined by :bro:id:`Cluster::nodes`. + @load ./main @load base/frameworks/communication @@ -16,23 +19,26 @@ event bro_init() &priority=9 # Connections from the control node for runtime control and update events. # Every node in a cluster is eligible for control from this host. 
if ( n$node_type == CONTROL ) - Communication::nodes["control"] = [$host=n$ip, $connect=F, - $class="control", $events=control_events]; + Communication::nodes["control"] = [$host=n$ip, $zone_id=n$zone_id, + $connect=F, $class="control", + $events=control_events]; if ( me$node_type == MANAGER ) { if ( n$node_type == WORKER && n$manager == node ) Communication::nodes[i] = - [$host=n$ip, $connect=F, + [$host=n$ip, $zone_id=n$zone_id, $connect=F, $class=i, $events=worker2manager_events, $request_logs=T]; if ( n$node_type == PROXY && n$manager == node ) Communication::nodes[i] = - [$host=n$ip, $connect=F, + [$host=n$ip, $zone_id=n$zone_id, $connect=F, $class=i, $events=proxy2manager_events, $request_logs=T]; if ( n$node_type == TIME_MACHINE && me?$time_machine && me$time_machine == i ) - Communication::nodes["time-machine"] = [$host=nodes[i]$ip, $p=nodes[i]$p, + Communication::nodes["time-machine"] = [$host=nodes[i]$ip, + $zone_id=nodes[i]$zone_id, + $p=nodes[i]$p, $connect=T, $retry=1min, $events=tm2manager_events]; } @@ -41,7 +47,8 @@ event bro_init() &priority=9 { if ( n$node_type == WORKER && n$proxy == node ) Communication::nodes[i] = - [$host=n$ip, $connect=F, $class=i, $events=worker2proxy_events]; + [$host=n$ip, $zone_id=n$zone_id, $connect=F, $class=i, + $sync=T, $auth=T, $events=worker2proxy_events]; # accepts connections from the previous one. # (This is not ideal for setups with many proxies) @@ -50,16 +57,18 @@ event bro_init() &priority=9 { if ( n?$proxy ) Communication::nodes[i] - = [$host=n$ip, $p=n$p, + = [$host=n$ip, $zone_id=n$zone_id, $p=n$p, $connect=T, $auth=F, $sync=T, $retry=1mins]; else if ( me?$proxy && me$proxy == i ) Communication::nodes[me$proxy] - = [$host=nodes[i]$ip, $connect=F, $auth=T, $sync=T]; + = [$host=nodes[i]$ip, $zone_id=nodes[i]$zone_id, + $connect=F, $auth=T, $sync=T]; } # Finally the manager, to send it status updates. if ( n$node_type == MANAGER && me$manager == i ) Communication::nodes["manager"] = [$host=nodes[i]$ip, + $zone_id=nodes[i]$zone_id, $p=nodes[i]$p, $connect=T, $retry=1mins, $class=node, @@ -69,6 +78,7 @@ event bro_init() &priority=9 { if ( n$node_type == MANAGER && me$manager == i ) Communication::nodes["manager"] = [$host=nodes[i]$ip, + $zone_id=nodes[i]$zone_id, $p=nodes[i]$p, $connect=T, $retry=1mins, $class=node, @@ -76,6 +86,7 @@ event bro_init() &priority=9 if ( n$node_type == PROXY && me$proxy == i ) Communication::nodes["proxy"] = [$host=nodes[i]$ip, + $zone_id=nodes[i]$zone_id, $p=nodes[i]$p, $connect=T, $retry=1mins, $sync=T, $class=node, @@ -84,6 +95,7 @@ event bro_init() &priority=9 if ( n$node_type == TIME_MACHINE && me?$time_machine && me$time_machine == i ) Communication::nodes["time-machine"] = [$host=nodes[i]$ip, + $zone_id=nodes[i]$zone_id, $p=nodes[i]$p, $connect=T, $retry=1min, diff --git a/scripts/base/frameworks/communication/main.bro b/scripts/base/frameworks/communication/main.bro index 569ba140a9..7ded67688a 100644 --- a/scripts/base/frameworks/communication/main.bro +++ b/scripts/base/frameworks/communication/main.bro @@ -1,34 +1,62 @@ -##! Connect to remote Bro or Broccoli instances to share state and/or transfer -##! events. +##! Facilitates connecting to remote Bro or Broccoli instances to share state +##! and/or transfer events. @load base/frameworks/packet-filter +@load base/utils/addrs module Communication; export { + + ## The communication logging stream identifier. redef enum Log::ID += { LOG }; - - ## Which interface to listen on (0.0.0.0 for any interface). + + ## Which interface to listen on. 
The addresses ``0.0.0.0`` and ``[::]`` + ## are wildcards. const listen_interface = 0.0.0.0 &redef; - + ## Which port to listen on. const listen_port = 47757/tcp &redef; - + ## This defines if a listening socket should use SSL. const listen_ssl = F &redef; - ## Default compression level. Compression level is 0-9, with 0 = no + ## Defines if a listening socket can bind to IPv6 addresses. + const listen_ipv6 = F &redef; + + ## If :bro:id:`Communication::listen_interface` is a non-global + ## IPv6 address and requires a specific :rfc:`4007` ``zone_id``, + ## it can be specified here. + const listen_ipv6_zone_id = "" &redef; + + ## Defines the interval at which to retry binding to + ## :bro:id:`Communication::listen_interface` on + ## :bro:id:`Communication::listen_port` if it's already in use. + const listen_retry = 30 secs &redef; + + ## Default compression level. Compression level is 0-9, with 0 = no ## compression. global compression_level = 0 &redef; + ## A record type containing the column fields of the communication log. type Info: record { + ## The network time at which a communication event occurred. ts: time &log; + ## The peer name (if any) with which a communication event is concerned. peer: string &log &optional; + ## Where the communication event message originated from, that is, + ## either from the scripting layer or inside the Bro process. src_name: string &log &optional; + ## .. todo:: currently unused. connected_peer_desc: string &log &optional; + ## .. todo:: currently unused. connected_peer_addr: addr &log &optional; + ## .. todo:: currently unused. connected_peer_port: port &log &optional; + ## The severity of the communication event message. level: string &log &optional; + ## A message describing the communication event between Bro or + ## Broccoli instances. message: string &log; }; @@ -38,7 +66,11 @@ export { type Node: record { ## Remote address. host: addr; - + + ## If the *host* field is a non-global IPv6 address, this field + ## can specify a particular :rfc:`4007` ``zone_id``. + zone_id: string &optional; + ## Port of the remote Bro communication endpoint if we are initiating ## the connection based on the :bro:id:`connect` field. p: port &optional; @@ -77,7 +109,7 @@ export { auth: bool &default = F; ## If not set, no capture filter is sent. - ## If set to "", the default cature filter is sent. + ## If set to "", the default capture filter is sent. capture_filter: string &optional; ## Whether to use SSL-based communication. @@ -88,7 +120,7 @@ export { ## The remote peer. peer: event_peer &optional; - + ## Indicates the status of the node. connected: bool &default = F; }; @@ -96,11 +128,25 @@ export { ## The table of Bro or Broccoli nodes that Bro will initiate connections ## to or respond to connections from. global nodes: table[string] of Node &redef; - + + ## A table of peer nodes for which this node issued a + ## :bro:id:`Communication::connect_peer` call but with which a connection + ## has not yet been established or with which a connection has been + ## closed and is currently in the process of retrying to establish. + ## When a connection is successfully established, the peer is removed + ## from the table. global pending_peers: table[peer_id] of Node; + + ## A table of peer nodes for which this node has an established connection. + ## Peers are automatically removed if their connection is closed and + ## automatically added back if a connection is re-established later. 
global connected_peers: table[peer_id] of Node; - ## Connect to nodes[node], independent of its "connect" flag. + ## Connect to a node in :bro:id:`Communication::nodes` independent + ## of its "connect" flag. + ## + ## peer: the string used to index a particular node within the + ## :bro:id:`Communication::nodes` table. global connect_peer: function(peer: string); } @@ -117,7 +163,7 @@ event bro_init() &priority=5 function do_script_log_common(level: count, src: count, msg: string) { - Log::write(Communication::LOG, [$ts = network_time(), + Log::write(Communication::LOG, [$ts = network_time(), $level = (level == REMOTE_LOG_INFO ? "info" : "error"), $src_name = src_names[src], $peer = get_event_peer()$descr, @@ -130,6 +176,13 @@ event remote_log(level: count, src: count, msg: string) do_script_log_common(level, src, msg); } +# This is a core generated event. +event remote_log_peer(p: event_peer, level: count, src: count, msg: string) + { + local rmsg = fmt("[#%d/%s:%d] %s", p$id, addr_to_uri(p$host), p$p, msg); + do_script_log_common(level, src, rmsg); + } + function do_script_log(p: event_peer, msg: string) { do_script_log_common(REMOTE_LOG_INFO, REMOTE_SRC_SCRIPT, msg); @@ -144,10 +197,11 @@ function connect_peer(peer: string) p = node$p; local class = node?$class ? node$class : ""; - local id = connect(node$host, p, class, node$retry, node$ssl); - + local zone_id = node?$zone_id ? node$zone_id : ""; + local id = connect(node$host, zone_id, p, class, node$retry, node$ssl); + if ( id == PEER_ID_NONE ) - Log::write(Communication::LOG, [$ts = network_time(), + Log::write(Communication::LOG, [$ts = network_time(), $peer = get_event_peer()$descr, $message = "can't trigger connect"]); pending_peers[id] = node; @@ -286,7 +340,7 @@ event bro_init() &priority = -10 # let others modify nodes { if ( |nodes| > 0 ) enable_communication(); - + for ( tag in nodes ) { if ( ! nodes[tag]$connect ) diff --git a/scripts/base/frameworks/control/main.bro b/scripts/base/frameworks/control/main.bro index 5aabaa4bac..63e5f639a0 100644 --- a/scripts/base/frameworks/control/main.bro +++ b/scripts/base/frameworks/control/main.bro @@ -1,43 +1,34 @@ -##! This is a utility script that sends the current values of all &redef'able -##! consts to a remote Bro then sends the :bro:id:`configuration_update` event -##! and terminates processing. -##! -##! Intended to be used from the command line like this when starting a controller:: -##! -##! bro frameworks/control/controller Control::host= Control::port= Control::cmd= [Control::arg=] -##! -##! A controllee only needs to load the controllee script in addition -##! to the specific analysis scripts desired. It may also need a node -##! configured as a controller node in the communications nodes configuration:: -##! -##! bro frameworks/control/controllee -##! -##! To use the framework as a controllee, it only needs to be loaded and -##! the controlled node need to accept all events in the "Control::" namespace -##! from the host where the control actions will be performed from along with -##! using the "control" class. +##! The control framework provides the foundation for providing "commands" +##! that can be taken remotely at runtime to modify a running Bro instance +##! or collect information from the running instance. module Control; export { - ## This is the address of the host that will be controlled. + ## The address of the host that will be controlled. const host = 0.0.0.0 &redef; - ## This is the port of the host that will be controlled. 
+ ## The port of the host that will be controlled. const host_port = 0/tcp &redef; - ## This is the command that is being done. It's typically set on the - ## command line and influences whether this instance starts up as a - ## controller or controllee. + ## If :bro:id:`Control::host` is a non-global IPv6 address and + ## requires a specific :rfc:`4007` ``zone_id``, it can be set here. + const zone_id = "" &redef; + + ## The command that is being done. It's typically set on the + ## command line. const cmd = "" &redef; ## This can be used by commands that take an argument. const arg = "" &redef; + ## Events that need to be handled by controllers. const controller_events = /Control::.*_request/ &redef; + + ## Events that need to be handled by controllees. const controllee_events = /Control::.*_response/ &redef; - ## These are the commands that can be given on the command line for + ## The commands that can currently be given on the command line for ## remote control. const commands: set[string] = { "id_value", @@ -45,15 +36,15 @@ export { "net_stats", "configuration_update", "shutdown", - }; + } &redef; ## Variable IDs that are to be ignored by the update process. - const ignore_ids: set[string] = { - }; + const ignore_ids: set[string] = { }; ## Event for requesting the value of an ID (a variable). global id_value_request: event(id: string); - ## Event for returning the value of an ID after an :bro:id:`id_request` event. + ## Event for returning the value of an ID after an + ## :bro:id:`Control::id_value_request` event. global id_value_response: event(id: string, val: string); ## Requests the current communication status. @@ -68,7 +59,8 @@ export { ## Inform the remote Bro instance that it's configuration may have been updated. global configuration_update_request: event(); - ## This event is a wrapper and alias for the :bro:id:`configuration_update_request` event. + ## This event is a wrapper and alias for the + ## :bro:id:`Control::configuration_update_request` event. ## This event is also a primary hooking point for the control framework. global configuration_update: event(); ## Message in response to a configuration update request. diff --git a/scripts/base/frameworks/dpd/dpd.sig b/scripts/base/frameworks/dpd/dpd.sig index 8e07095b41..49e24cefc6 100644 --- a/scripts/base/frameworks/dpd/dpd.sig +++ b/scripts/base/frameworks/dpd/dpd.sig @@ -80,15 +80,15 @@ signature irc_server_reply { tcp-state responder } -signature irc_sig3 { +signature irc_server_to_server1 { ip-proto == tcp - payload /(.*\x0a)*(\x20)*[Ss][Ee][Rr][Vv][Ee][Rr](\x20)+.+\x0a/ + payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/ } -signature irc_sig4 { +signature irc_server_to_server2 { ip-proto == tcp - payload /(.*\x0a)*(\x20)*[Ss][Ee][Rr][Vv][Ee][Rr](\x20)+.+\x0a/ - requires-reverse-signature irc_sig3 + payload /(|.*[\r\n]) *[Ss][Ee][Rr][Vv][Ee][Rr] +[^ ]+ +[0-9]+ +:.+[\r\n]/ + requires-reverse-signature irc_server_to_server1 enable "irc" } @@ -149,3 +149,64 @@ signature dpd_ssl_client { payload /^(\x16\x03[\x00\x01\x02]..\x01...\x03[\x00\x01\x02]|...?\x01[\x00\x01\x02][\x02\x03]).*/ tcp-state originator } + +signature dpd_ayiya { + ip-proto = udp + payload /^..\x11\x29/ + enable "ayiya" +} + +signature dpd_teredo { + ip-proto = udp + payload /^(\x00\x00)|(\x00\x01)|([\x60-\x6f])/ + enable "teredo" +} + +signature dpd_socks4_client { + ip-proto == tcp + # '32' is a rather arbitrary max length for the user name. 
+ payload /^\x04[\x01\x02].{0,32}\x00/ + tcp-state originator +} + +signature dpd_socks4_server { + ip-proto == tcp + requires-reverse-signature dpd_socks4_client + payload /^\x00[\x5a\x5b\x5c\x5d]/ + tcp-state responder + enable "socks" +} + +signature dpd_socks4_reverse_client { + ip-proto == tcp + # '32' is a rather arbitrary max length for the user name. + payload /^\x04[\x01\x02].{0,32}\x00/ + tcp-state responder +} + +signature dpd_socks4_reverse_server { + ip-proto == tcp + requires-reverse-signature dpd_socks4_reverse_client + payload /^\x00[\x5a\x5b\x5c\x5d]/ + tcp-state originator + enable "socks" +} + +signature dpd_socks5_client { + ip-proto == tcp + # Watch for a few authentication methods to reduce false positives. + payload /^\x05.[\x00\x01\x02]/ + tcp-state originator +} + +signature dpd_socks5_server { + ip-proto == tcp + requires-reverse-signature dpd_socks5_client + # Watch for a single authentication method to be chosen by the server or + # the server to indicate the no authentication is required. + payload /^\x05(\x00|\x01[\x00\x01\x02])/ + tcp-state responder + enable "socks" +} + + diff --git a/scripts/base/frameworks/dpd/main.bro b/scripts/base/frameworks/dpd/main.bro index d9288bdd04..a5349b6cfb 100644 --- a/scripts/base/frameworks/dpd/main.bro +++ b/scripts/base/frameworks/dpd/main.bro @@ -3,18 +3,19 @@ module DPD; -## Add the DPD signatures to the signature framework. -redef signature_files += "base/frameworks/dpd/dpd.sig"; +@load-sigs ./dpd.sig export { + ## Add the DPD logging stream identifier. redef enum Log::ID += { LOG }; + ## The record type defining the columns to log in the DPD logging stream. type Info: record { ## Timestamp for when protocol analysis failed. ts: time &log; ## Connection unique ID. uid: string &log; - ## Connection ID. + ## Connection ID containing the 4-tuple which identifies endpoints. id: conn_id &log; ## Transport protocol for the violation. proto: transport_proto &log; @@ -103,5 +104,8 @@ event protocol_violation(c: connection, atype: count, aid: count, reason: string) &priority=-5 { if ( c?$dpd ) + { Log::write(DPD::LOG, c$dpd); + delete c$dpd; + } } diff --git a/scripts/base/frameworks/input/__load__.bro b/scripts/base/frameworks/input/__load__.bro new file mode 100644 index 0000000000..0e7d8ffb73 --- /dev/null +++ b/scripts/base/frameworks/input/__load__.bro @@ -0,0 +1,5 @@ +@load ./main +@load ./readers/ascii +@load ./readers/raw +@load ./readers/benchmark + diff --git a/scripts/base/frameworks/input/main.bro b/scripts/base/frameworks/input/main.bro new file mode 100644 index 0000000000..742dc65568 --- /dev/null +++ b/scripts/base/frameworks/input/main.bro @@ -0,0 +1,158 @@ +##! The input framework provides a way to read previously stored data either +##! as an event stream or into a bro table. + +module Input; + +export { + + ## The default input reader used. Defaults to `READER_ASCII`. + const default_reader = READER_ASCII &redef; + + ## The default reader mode used. Defaults to `MANUAL`. + const default_mode = MANUAL &redef; + + ## Flag that controls if the input framework accepts records + ## that contain types that are not supported (at the moment + ## file and function). If true, the input framework will + ## warn in these cases, but continue. If false, it will + ## abort. Defaults to false (abort) + const accept_unsupported_types = F &redef; + + ## TableFilter description type used for the `table` method. 
+ type TableDescription: record { + ## Common definitions for tables and events + + ## String that allows the reader to find the source. + ## For `READER_ASCII`, this is the filename. + source: string; + + ## Reader to use for this stream + reader: Reader &default=default_reader; + + ## Read mode to use for this stream + mode: Mode &default=default_mode; + + ## Descriptive name. Used to remove a stream at a later time + name: string; + + # Special definitions for tables + + ## Table which will receive the data read by the input framework + destination: any; + + ## Record that defines the values used as the index of the table + idx: any; + + ## Record that defines the values used as the elements of the table + ## If val is undefined, destination has to be a set. + val: any &optional; + + ## Defines if the value of the table is a record (default), or a single value. Val + ## can only contain one element when this is set to false. + want_record: bool &default=T; + + ## The event that is raised each time a value is added to, changed in or removed + ## from the table. The event will receive an Input::Event enum as the first + ## argument, the idx record as the second argument and the value (record) as the + ## third argument. + ev: any &optional; # event containing idx, val as values. + + ## Predicate function that can decide if an insertion, update or removal should + ## really be executed. Parameters are the same as for the event. If true is + ## returned, the update is performed. If false is returned, it is skipped. + pred: function(typ: Input::Event, left: any, right: any): bool &optional; + + ## A key/value table that will be passed on the reader. + ## Interpretation of the values is left to the writer, but + ## usually they will be used for configuration purposes. + config: table[string] of string &default=table(); + }; + + ## EventFilter description type used for the `event` method. + type EventDescription: record { + ## Common definitions for tables and events + + ## String that allows the reader to find the source. + ## For `READER_ASCII`, this is the filename. + source: string; + + ## Reader to use for this steam + reader: Reader &default=default_reader; + + ## Read mode to use for this stream + mode: Mode &default=default_mode; + + ## Descriptive name. Used to remove a stream at a later time + name: string; + + # Special definitions for events + + ## Record describing the fields to be retrieved from the source input. + fields: any; + + ## If want_record if false, the event receives each value in fields as a separate argument. + ## If it is set to true (default), the event receives all fields in a single record value. + want_record: bool &default=T; + + ## The event that is raised each time a new line is received from the reader. + ## The event will receive an Input::Event enum as the first element, and the fields as the following arguments. + ev: any; + + ## A key/value table that will be passed on the reader. + ## Interpretation of the values is left to the writer, but + ## usually they will be used for configuration purposes. + config: table[string] of string &default=table(); + }; + + ## Create a new table input from a given source. Returns true on success. + ## + ## description: `TableDescription` record describing the source. + global add_table: function(description: Input::TableDescription) : bool; + + ## Create a new event input from a given source. Returns true on success. + ## + ## description: `TableDescription` record describing the source. 
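As a usage sketch of the table interface described above (the file name ``blocklist.txt``, the ``Idx`` record, and the destination set are hypothetical, and the source file is expected to be in the ASCII reader's format):

.. code:: bro

    type Idx: record {
        ip: addr;
    };

    global blocked_hosts: set[addr] = set();

    event bro_init()
        {
        # Read the file into the set; with no $val given, the destination
        # must be a set indexed by the Idx record's fields.
        Input::add_table([$source="blocklist.txt", $name="blocklist",
                          $idx=Idx, $destination=blocked_hosts]);
        }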
+ global add_event: function(description: Input::EventDescription) : bool; + + ## Remove a input stream. Returns true on success and false if the named stream was + ## not found. + ## + ## id: string value identifying the stream to be removed + global remove: function(id: string) : bool; + + ## Forces the current input to be checked for changes. + ## Returns true on success and false if the named stream was not found + ## + ## id: string value identifying the stream + global force_update: function(id: string) : bool; + + ## Event that is called, when the end of a data source has been reached, including + ## after an update. + global end_of_data: event(name: string, source:string); +} + +@load base/input.bif + + +module Input; + +function add_table(description: Input::TableDescription) : bool + { + return __create_table_stream(description); + } + +function add_event(description: Input::EventDescription) : bool + { + return __create_event_stream(description); + } + +function remove(id: string) : bool + { + return __remove_stream(id); + } + +function force_update(id: string) : bool + { + return __force_update(id); + } + diff --git a/scripts/base/frameworks/input/readers/ascii.bro b/scripts/base/frameworks/input/readers/ascii.bro new file mode 100644 index 0000000000..7fca1ad795 --- /dev/null +++ b/scripts/base/frameworks/input/readers/ascii.bro @@ -0,0 +1,21 @@ +##! Interface for the ascii input reader. +##! +##! The defaults are set to match Bro's ASCII output. + +module InputAscii; + +export { + ## Separator between fields. + ## Please note that the separator has to be exactly one character long + const separator = "\t" &redef; + + ## Separator between set elements. + ## Please note that the separator has to be exactly one character long + const set_separator = "," &redef; + + ## String to use for empty fields. + const empty_field = "(empty)" &redef; + + ## String to use for an unset &optional field. + const unset_field = "-" &redef; +} diff --git a/scripts/base/frameworks/input/readers/benchmark.bro b/scripts/base/frameworks/input/readers/benchmark.bro new file mode 100644 index 0000000000..fe44914271 --- /dev/null +++ b/scripts/base/frameworks/input/readers/benchmark.bro @@ -0,0 +1,23 @@ +##! Interface for the ascii input reader. + +module InputBenchmark; + +export { + ## multiplication factor for each second + const factor = 1.0 &redef; + + ## spread factor between lines + const spread = 0 &redef; + + ## spreading where usleep = 1000000 / autospread * num_lines + const autospread = 0.0 &redef; + + ## addition factor for each heartbeat + const addfactor = 0 &redef; + + ## stop spreading at x lines per heartbeat + const stopspreadat = 0 &redef; + + ## 1 -> enable timed spreading + const timedspread = 0.0 &redef; +} diff --git a/scripts/base/frameworks/input/readers/raw.bro b/scripts/base/frameworks/input/readers/raw.bro new file mode 100644 index 0000000000..45deed3eda --- /dev/null +++ b/scripts/base/frameworks/input/readers/raw.bro @@ -0,0 +1,9 @@ +##! Interface for the raw input reader. + +module InputRaw; + +export { + ## Separator between input records. 
+ ## Please note that the separator has to be exactly one character long + const record_separator = "\n" &redef; +} diff --git a/scripts/base/frameworks/intel/main.bro b/scripts/base/frameworks/intel/main.bro index 648c32bc57..9ee1c75100 100644 --- a/scripts/base/frameworks/intel/main.bro +++ b/scripts/base/frameworks/intel/main.bro @@ -11,7 +11,7 @@ # user_name # file_name # file_md5 -# x509_cert - DER encoded, not PEM (ascii armored) +# x509_md5 # Example tags: # infrastructure @@ -25,6 +25,7 @@ module Intel; export { + ## The intel logging stream identifier. redef enum Log::ID += { LOG }; redef enum Notice::Type += { @@ -33,72 +34,117 @@ export { Detection, }; + ## Record type used for logging information from the intelligence framework. + ## Primarily for problems or oddities with inserting and querying data. + ## This is important since the content of the intelligence framework can + ## change quite dramatically during runtime and problems may be introduced + ## into the data. type Info: record { + ## The current network time. ts: time &log; + ## Represents the severity of the message. ## This value should be one of: "info", "warn", "error" level: string &log; + ## The message. message: string &log; }; + ## Record to represent metadata associated with a single piece of + ## intelligence. type MetaData: record { + ## A description for the data. desc: string &optional; + ## A URL where more information may be found about the intelligence. url: string &optional; + ## The time at which the data was first declared to be intelligence. first_seen: time &optional; + ## When this data was most recent inserted into the framework. latest_seen: time &optional; + ## Arbitrary text tags for the data. tags: set[string]; }; + ## Record to represent a singular piece of intelligence. type Item: record { + ## If the data is an IP address, this hold the address. ip: addr &optional; + ## If the data is textual, this holds the text. str: string &optional; + ## If the data is numeric, this holds the number. num: int &optional; + ## The subtype of the data for when either the $str or $num fields are + ## given. If one of those fields are given, this field must be present. subtype: string &optional; + ## The next five fields are temporary until a better model for + ## attaching metadata to an intelligence item is created. desc: string &optional; url: string &optional; first_seen: time &optional; latest_seen: time &optional; tags: set[string]; - ## These single string tags are throw away until pybroccoli supports sets + ## These single string tags are throw away until pybroccoli supports sets. tag1: string &optional; tag2: string &optional; tag3: string &optional; }; + ## Record model used for constructing queries against the intelligence + ## framework. type QueryItem: record { - ip: addr &optional; - str: string &optional; - num: int &optional; - subtype: string &optional; + ## If an IP address is being queried for, this field should be given. + ip: addr &optional; + ## If a string is being queried for, this field should be given. + str: string &optional; + ## If numeric data is being queried for, this field should be given. + num: int &optional; + ## If either a string or number is being queried for, this field should + ## indicate the subtype of the data. + subtype: string &optional; - or_tags: set[string] &optional; - and_tags: set[string] &optional; + ## A set of tags where if a single metadata record attached to an item + ## has any one of the tags defined in this field, it will match. 
+ or_tags: set[string] &optional; + ## A set of tags where a single metadata record attached to an item + ## must have all of the tags defined in this field. + and_tags: set[string] &optional; ## The predicate can be given when searching for a match. It will - ## be tested against every :bro:type:`MetaData` item associated with - ## the data being matched on. If it returns T a single time, the - ## matcher will consider that the item has matched. - pred: function(meta: Intel::MetaData): bool &optional; + ## be tested against every :bro:type:`Intel::MetaData` item associated + ## with the data being matched on. If it returns T a single time, the + ## matcher will consider that the item has matched. This field can + ## be used for constructing arbitrarily complex queries that may not + ## be possible with the $or_tags or $and_tags fields. + pred: function(meta: Intel::MetaData): bool &optional; }; - + ## Function to insert data into the intelligence framework. + ## + ## item: The data item. + ## + ## Returns: T if the data was successfully inserted into the framework, + ## otherwise it returns F. global insert: function(item: Item): bool; + + ## A wrapper for the :bro:id:`Intel::insert` function. This is primarily + ## used as the external API for inserting data into the intelligence + ## using Broccoli. global insert_event: event(item: Item); + + ## Function for matching data within the intelligence framework. global matcher: function(item: QueryItem): bool; - - type MetaDataStore: table[count] of MetaData; - type DataStore: record { - ip_data: table[addr] of MetaDataStore; - ## The first string is the actual value and the second string is the subtype. - string_data: table[string, string] of MetaDataStore; - int_data: table[int, string] of MetaDataStore; - }; - global data_store: DataStore; - - } +type MetaDataStore: table[count] of MetaData; +type DataStore: record { + ip_data: table[addr] of MetaDataStore; + # The first string is the actual value and the second string is the subtype. + string_data: table[string, string] of MetaDataStore; + int_data: table[int, string] of MetaDataStore; +}; +global data_store: DataStore; + event bro_init() { Log::create_stream(Intel::LOG, [$columns=Info]); diff --git a/scripts/base/frameworks/logging/__load__.bro b/scripts/base/frameworks/logging/__load__.bro index 42b2d7c564..b65cb1dea3 100644 --- a/scripts/base/frameworks/logging/__load__.bro +++ b/scripts/base/frameworks/logging/__load__.bro @@ -1,3 +1,6 @@ @load ./main @load ./postprocessors @load ./writers/ascii +@load ./writers/dataseries +@load ./writers/elasticsearch +@load ./writers/none diff --git a/scripts/base/frameworks/logging/main.bro b/scripts/base/frameworks/logging/main.bro index 440773233d..bed76a1ae5 100644 --- a/scripts/base/frameworks/logging/main.bro +++ b/scripts/base/frameworks/logging/main.bro @@ -1,16 +1,16 @@ ##! The Bro logging interface. ##! -##! See XXX for a introduction to Bro's logging framework. +##! See :doc:`/logging` for a introduction to Bro's logging framework. module Log; -# Log::ID and Log::Writer are defined in bro.init due to circular dependencies. +# Log::ID and Log::Writer are defined in types.bif due to circular dependencies. export { - ## If true, is local logging is by default enabled for all filters. + ## If true, local logging is by default enabled for all filters. const enable_local_logging = T &redef; - ## If true, is remote logging is by default enabled for all filters. + ## If true, remote logging is by default enabled for all filters. 
const enable_remote_logging = T &redef; ## Default writer to use if a filter does not specify @@ -23,21 +23,24 @@ export { columns: any; ## Event that will be raised once for each log entry. - ## The event receives a single same parameter, an instance of type ``columns``. + ## The event receives a single same parameter, an instance of type + ## ``columns``. ev: any &optional; }; - ## Default function for building the path values for log filters if not - ## speficied otherwise by a filter. The default implementation uses ``id`` + ## Builds the default path values for log filters if not otherwise + ## specified by a filter. The default implementation uses *id* ## to derive a name. ## - ## id: The log stream. + ## id: The ID associated with the log stream. + ## ## path: A suggested path value, which may be either the filter's ## ``path`` if defined, else a previous result from the function. ## If no ``path`` is defined for the filter, then the first call ## to the function will contain an empty string. + ## ## rec: An instance of the streams's ``columns`` type with its - ## fields set to the values to logged. + ## fields set to the values to be logged. ## ## Returns: The path to be used for the filter. global default_path_func: function(id: ID, path: string, rec: any) : string &redef; @@ -46,7 +49,7 @@ export { ## Information passed into rotation callback functions. type RotationInfo: record { - writer: Writer; ##< Writer. + writer: Writer; ##< The :bro:type:`Log::Writer` being used. fname: string; ##< Full name of the rotated file. path: string; ##< Original path value. open: time; ##< Time when opened. @@ -57,25 +60,26 @@ export { ## Default rotation interval. Zero disables rotation. const default_rotation_interval = 0secs &redef; - ## Default naming format for timestamps embedded into filenames. Uses a strftime() style. + ## Default naming format for timestamps embedded into filenames. + ## Uses a ``strftime()`` style. const default_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef; ## Default shell command to run on rotated files. Empty for none. const default_rotation_postprocessor_cmd = "" &redef; - ## Specifies the default postprocessor function per writer type. Entries in this - ## table are initialized by each writer type. + ## Specifies the default postprocessor function per writer type. + ## Entries in this table are initialized by each writer type. const default_rotation_postprocessors: table[Writer] of function(info: RotationInfo) : bool &redef; - ## Filter customizing logging. + ## A filter type describes how to customize logging streams. type Filter: record { ## Descriptive name to reference this filter. name: string; - ## The writer to use. + ## The logging writer implementation to use. writer: Writer &default=default_writer; - ## Predicate indicating whether a log entry should be recorded. + ## Indicates whether a log entry should be recorded. ## If not given, all entries are recorded. ## ## rec: An instance of the streams's ``columns`` type with its @@ -92,6 +96,12 @@ export { ## file name. Generally, filenames are expected to given ## without any extensions; writers will add appropiate ## extensions automatically. + ## + ## If this path is found to conflict with another filter's + ## for the same writer type, it is automatically corrected + ## by appending "-N", where N is the smallest integer greater + ## or equal to 2 that allows the corrected path name to not + ## conflict with another filter's. 
path: string &optional; ## A function returning the output path for recording entries @@ -101,15 +111,20 @@ export { ## easy to flood the disk by returning a new string for each ## connection ... ## - ## id: The log stream. + ## id: The ID associated with the log stream. + ## ## path: A suggested path value, which may be either the filter's ## ``path`` if defined, else a previous result from the function. ## If no ``path`` is defined for the filter, then the first call ## to the function will contain an empty string. - ## rec: An instance of the streams's ``columns`` type with its - ## fields set to the values to logged. ## - ## Returns: The path to be used for the filter. + ## rec: An instance of the streams's ``columns`` type with its + ## fields set to the values to be logged. + ## + ## Returns: The path to be used for the filter, which will be subject + ## to the same automatic correction rules as the *path* + ## field of :bro:type:`Log::Filter` in the case of conflicts + ## with other filters trying to use the same writer/path pair. path_func: function(id: ID, path: string, rec: any): string &optional; ## Subset of column names to record. If not given, all @@ -129,28 +144,194 @@ export { ## Rotation interval. interv: interval &default=default_rotation_interval; - ## Callback function to trigger for rotated files. If not set, - ## the default comes out of default_rotation_postprocessors. + ## Callback function to trigger for rotated files. If not set, the + ## default comes out of :bro:id:`Log::default_rotation_postprocessors`. postprocessor: function(info: RotationInfo) : bool &optional; + + ## A key/value table that will be passed on to the writer. + ## Interpretation of the values is left to the writer, but + ## usually they will be used for configuration purposes. + config: table[string] of string &default=table(); }; ## Sentinel value for indicating that a filter was not found when looked up. - const no_filter: Filter = [$name=""]; # Sentinel. + const no_filter: Filter = [$name=""]; - # TODO: Document. + ## Creates a new logging stream with the default filter. + ## + ## id: The ID enum to be associated with the new logging stream. + ## + ## stream: A record defining the content that the new stream will log. + ## + ## Returns: True if a new logging stream was successfully created and + ## a default filter added to it. + ## + ## .. bro:see:: Log::add_default_filter Log::remove_default_filter global create_stream: function(id: ID, stream: Stream) : bool; + + ## Enables a previously disabled logging stream. Disabled streams + ## will not be written to until they are enabled again. New streams + ## are enabled by default. + ## + ## id: The ID associated with the logging stream to enable. + ## + ## Returns: True if the stream is re-enabled or was not previously disabled. + ## + ## .. bro:see:: Log::disable_stream global enable_stream: function(id: ID) : bool; + + ## Disables a currently enabled logging stream. Disabled streams + ## will not be written to until they are enabled again. New streams + ## are enabled by default. + ## + ## id: The ID associated with the logging stream to disable. + ## + ## Returns: True if the stream is now disabled or was already disabled. + ## + ## .. bro:see:: Log::enable_stream global disable_stream: function(id: ID) : bool; + + ## Adds a custom filter to an existing logging stream. If a filter + ## with a matching ``name`` field already exists for the stream, it + ## is removed when the new filter is successfully added. 
+ ## + ## id: The ID associated with the logging stream to filter. + ## + ## filter: A record describing the desired logging parameters. + ## + ## Returns: True if the filter was sucessfully added, false if + ## the filter was not added or the *filter* argument was not + ## the correct type. + ## + ## .. bro:see:: Log::remove_filter Log::add_default_filter + ## Log::remove_default_filter global add_filter: function(id: ID, filter: Filter) : bool; + + ## Removes a filter from an existing logging stream. + ## + ## id: The ID associated with the logging stream from which to + ## remove a filter. + ## + ## name: A string to match against the ``name`` field of a + ## :bro:type:`Log::Filter` for identification purposes. + ## + ## Returns: True if the logging stream's filter was removed or + ## if no filter associated with *name* was found. + ## + ## .. bro:see:: Log::remove_filter Log::add_default_filter + ## Log::remove_default_filter global remove_filter: function(id: ID, name: string) : bool; - global get_filter: function(id: ID, name: string) : Filter; # Returns no_filter if not found. + + ## Gets a filter associated with an existing logging stream. + ## + ## id: The ID associated with a logging stream from which to + ## obtain one of its filters. + ## + ## name: A string to match against the ``name`` field of a + ## :bro:type:`Log::Filter` for identification purposes. + ## + ## Returns: A filter attached to the logging stream *id* matching + ## *name* or, if no matches are found returns the + ## :bro:id:`Log::no_filter` sentinel value. + ## + ## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter + ## Log::remove_default_filter + global get_filter: function(id: ID, name: string) : Filter; + + ## Writes a new log line/entry to a logging stream. + ## + ## id: The ID associated with a logging stream to be written to. + ## + ## columns: A record value describing the values of each field/column + ## to write to the log stream. + ## + ## Returns: True if the stream was found and no error occurred in writing + ## to it or if the stream was disabled and nothing was written. + ## False if the stream was was not found, or the *columns* + ## argument did not match what the stream was initially defined + ## to handle, or one of the stream's filters has an invalid + ## ``path_func``. + ## + ## .. bro:see: Log::enable_stream Log::disable_stream global write: function(id: ID, columns: any) : bool; + + ## Sets the buffering status for all the writers of a given logging stream. + ## A given writer implementation may or may not support buffering and if it + ## doesn't then toggling buffering with this function has no effect. + ## + ## id: The ID associated with a logging stream for which to + ## enable/disable buffering. + ## + ## buffered: Whether to enable or disable log buffering. + ## + ## Returns: True if buffering status was set, false if the logging stream + ## does not exist. + ## + ## .. bro:see:: Log::flush global set_buf: function(id: ID, buffered: bool): bool; + + ## Flushes any currently buffered output for all the writers of a given + ## logging stream. + ## + ## id: The ID associated with a logging stream for which to flush buffered + ## data. + ## + ## Returns: True if all writers of a log stream were signalled to flush + ## buffered data or if the logging stream is disabled, + ## false if the logging stream does not exist. + ## + ## .. 
bro:see:: Log::set_buf Log::enable_stream Log::disable_stream global flush: function(id: ID): bool; + + ## Adds a default :bro:type:`Log::Filter` record with ``name`` field + ## set as "default" to a given logging stream. + ## + ## id: The ID associated with a logging stream for which to add a default + ## filter. + ## + ## Returns: The status of a call to :bro:id:`Log::add_filter` using a + ## default :bro:type:`Log::Filter` argument with ``name`` field + ## set to "default". + ## + ## .. bro:see:: Log::add_filter Log::remove_filter + ## Log::remove_default_filter global add_default_filter: function(id: ID) : bool; + + ## Removes the :bro:type:`Log::Filter` with ``name`` field equal to + ## "default". + ## + ## id: The ID associated with a logging stream from which to remove the + ## default filter. + ## + ## Returns: The status of a call to :bro:id:`Log::remove_filter` using + ## "default" as the argument. + ## + ## .. bro:see:: Log::add_filter Log::remove_filter Log::add_default_filter global remove_default_filter: function(id: ID) : bool; + ## Runs a command given by :bro:id:`Log::default_rotation_postprocessor_cmd` + ## on a rotated file. Meant to be called from postprocessor functions + ## that are added to :bro:id:`Log::default_rotation_postprocessors`. + ## + ## info: A record holding meta-information about the log being rotated. + ## + ## npath: The new path of the file (after already being rotated/processed + ## by writer-specific postprocessor as defined in + ## :bro:id:`Log::default_rotation_postprocessors`. + ## + ## Returns: True when :bro:id:`Log::default_rotation_postprocessor_cmd` + ## is empty or the system command given by it has been invoked + ## to postprocess a rotated log file. + ## + ## .. bro:see:: Log::default_rotation_date_format + ## Log::default_rotation_postprocessor_cmd + ## Log::default_rotation_postprocessors global run_rotation_postprocessor_cmd: function(info: RotationInfo, npath: string) : bool; + + ## The streams which are currently active and not disabled. + ## This table is not meant to be modified by users! Only use it for + ## examining which streams are active. + global active_streams: table[ID] of Stream = table(); } # We keep a script-level copy of all filters so that we can manipulate them. @@ -165,20 +346,23 @@ function __default_rotation_postprocessor(info: RotationInfo) : bool { if ( info$writer in default_rotation_postprocessors ) return default_rotation_postprocessors[info$writer](info); + else + # Return T by default so that postprocessor-less writers don't shutdown. + return T; } function default_path_func(id: ID, path: string, rec: any) : string { + # The suggested path value is a previous result of this function + # or a filter path explicitly set by the user, so continue using it. + if ( path != "" ) + return path; + local id_str = fmt("%s", id); - + local parts = split1(id_str, /::/); if ( |parts| == 2 ) { - # The suggested path value is a previous result of this function - # or a filter path explicitly set by the user, so continue using it. 
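As a sketch of what a user-supplied path function can look like (the stock Conn::LOG stream and the splitting criterion are assumptions made for the example):

    function split_by_proto(id: Log::ID, path: string, rec: Conn::Info): string
        {
        # On the first call the suggested path may be an empty string.
        local base = (path == "") ? "conn" : path;
        return fmt("%s-%s", base, rec$proto);
        }

    event bro_init()
        {
        Log::add_filter(Conn::LOG, [$name="by-proto", $path_func=split_by_proto]);
        }
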
- if ( path != "" ) - return path; - # Example: Notice::LOG -> "notice" if ( parts[2] == "LOG" ) { @@ -194,11 +378,11 @@ function default_path_func(id: ID, path: string, rec: any) : string output = cat(output, sub_bytes(module_parts[4],1,1), "_", sub_bytes(module_parts[4], 2, |module_parts[4]|)); return to_lower(output); } - + # Example: Notice::POLICY_LOG -> "notice_policy" if ( /_LOG$/ in parts[2] ) parts[2] = sub(parts[2], /_LOG$/, ""); - + return cat(to_lower(parts[1]),"_",to_lower(parts[2])); } else @@ -214,13 +398,16 @@ function run_rotation_postprocessor_cmd(info: RotationInfo, npath: string) : boo if ( pp_cmd == "" ) return T; + # Turn, e.g., Log::WRITER_ASCII into "ascii". + local writer = subst_string(to_lower(fmt("%s", info$writer)), "log::writer_", ""); + # The date format is hard-coded here to provide a standardized # script interface. - system(fmt("%s %s %s %s %s %d", + system(fmt("%s %s %s %s %s %d %s", pp_cmd, npath, info$path, strftime("%y-%m-%d_%H.%M.%S", info$open), strftime("%y-%m-%d_%H.%M.%S", info$close), - info$terminating)); + info$terminating, writer)); return T; } @@ -230,11 +417,15 @@ function create_stream(id: ID, stream: Stream) : bool if ( ! __create_stream(id, stream) ) return F; + active_streams[id] = stream; + return add_default_filter(id); } function disable_stream(id: ID) : bool { + delete active_streams[id]; + return __disable_stream(id); } @@ -245,7 +436,7 @@ function add_filter(id: ID, filter: Filter) : bool # definition. if ( ! filter?$path_func ) filter$path_func = default_path_func; - + filters[id, filter$name] = filter; return __add_filter(id, filter); } diff --git a/scripts/base/frameworks/logging/postprocessors/__load__.bro b/scripts/base/frameworks/logging/postprocessors/__load__.bro index c5d92cfb4b..830a69aa75 100644 --- a/scripts/base/frameworks/logging/postprocessors/__load__.bro +++ b/scripts/base/frameworks/logging/postprocessors/__load__.bro @@ -1 +1,2 @@ @load ./scp +@load ./sftp diff --git a/scripts/base/frameworks/logging/postprocessors/scp.bro b/scripts/base/frameworks/logging/postprocessors/scp.bro index f27e748ae5..3aadc5bbf3 100644 --- a/scripts/base/frameworks/logging/postprocessors/scp.bro +++ b/scripts/base/frameworks/logging/postprocessors/scp.bro @@ -1,30 +1,56 @@ ##! This script defines a postprocessing function that can be applied ##! to a logging filter in order to automatically SCP (secure copy) ##! a log stream (or a subset of it) to a remote host at configurable -##! rotation time intervals. +##! rotation time intervals. Generally, to use this functionality +##! you must handle the :bro:id:`bro_init` event and do the following +##! in your handler: +##! +##! 1) Create a new :bro:type:`Log::Filter` record that defines a name/path, +##! rotation interval, and set the ``postprocessor`` to +##! :bro:id:`Log::scp_postprocessor`. +##! 2) Add the filter to a logging stream using :bro:id:`Log::add_filter`. +##! 3) Add a table entry to :bro:id:`Log::scp_destinations` for the filter's +##! writer/path pair which defines a set of :bro:type:`Log::SCPDestination` +##! records. module Log; export { - ## This postprocessor SCP's the rotated-log to all the remote hosts + ## Secure-copies the rotated-log to all the remote hosts ## defined in :bro:id:`Log::scp_destinations` and then deletes ## the local copy of the rotated-log. It's not active when ## reading from trace files. + ## + ## info: A record holding meta-information about the log file to be + ## postprocessed. 
+ ## + ## Returns: True if secure-copy system command was initiated or + ## if no destination was configured for the log as described + ## by *info*. global scp_postprocessor: function(info: Log::RotationInfo): bool; ## A container that describes the remote destination for the SCP command ## argument as ``user@host:path``. type SCPDestination: record { + ## The remote user to log in as. A trust mechanism should be + ## pre-established. user: string; + ## The remote host to which to transfer logs. host: string; + ## The path/directory on the remote host to send logs. path: string; }; ## A table indexed by a particular log writer and filter path, that yields ## a set remote destinations. The :bro:id:`Log::scp_postprocessor` ## function queries this table upon log rotation and performs a secure - ## copy of the rotated-log to each destination in the set. + ## copy of the rotated-log to each destination in the set. This + ## table can be modified at run-time. global scp_destinations: table[Writer, string] of set[SCPDestination]; + + ## Default naming format for timestamps embedded into log filenames + ## that use the SCP rotator. + const scp_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef; } function scp_postprocessor(info: Log::RotationInfo): bool @@ -34,7 +60,11 @@ function scp_postprocessor(info: Log::RotationInfo): bool local command = ""; for ( d in scp_destinations[info$writer, info$path] ) - command += fmt("scp %s %s@%s:%s;", info$fname, d$user, d$host, d$path); + { + local dst = fmt("%s/%s.%s.log", d$path, info$path, + strftime(Log::scp_rotation_date_format, info$open)); + command += fmt("scp %s %s@%s:%s;", info$fname, d$user, d$host, dst); + } command += fmt("/bin/rm %s", info$fname); system(command); diff --git a/scripts/base/frameworks/logging/postprocessors/sftp.bro b/scripts/base/frameworks/logging/postprocessors/sftp.bro new file mode 100644 index 0000000000..5a31853063 --- /dev/null +++ b/scripts/base/frameworks/logging/postprocessors/sftp.bro @@ -0,0 +1,73 @@ +##! This script defines a postprocessing function that can be applied +##! to a logging filter in order to automatically SFTP +##! a log stream (or a subset of it) to a remote host at configurable +##! rotation time intervals. Generally, to use this functionality +##! you must handle the :bro:id:`bro_init` event and do the following +##! in your handler: +##! +##! 1) Create a new :bro:type:`Log::Filter` record that defines a name/path, +##! rotation interval, and set the ``postprocessor`` to +##! :bro:id:`Log::sftp_postprocessor`. +##! 2) Add the filter to a logging stream using :bro:id:`Log::add_filter`. +##! 3) Add a table entry to :bro:id:`Log::sftp_destinations` for the filter's +##! writer/path pair which defines a set of :bro:type:`Log::SFTPDestination` +##! records. + +module Log; + +export { + ## Securely transfers the rotated-log to all the remote hosts + ## defined in :bro:id:`Log::sftp_destinations` and then deletes + ## the local copy of the rotated-log. It's not active when + ## reading from trace files. + ## + ## info: A record holding meta-information about the log file to be + ## postprocessed. + ## + ## Returns: True if sftp system command was initiated or + ## if no destination was configured for the log as described + ## by *info*. + global sftp_postprocessor: function(info: Log::RotationInfo): bool; + + ## A container that describes the remote destination for the SFTP command, + ## comprised of the username, host, and path at which to upload the file. 
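Following the three steps described above, a minimal SCP setup might look like this (the stream, filter name, remote account, and paths are all placeholders):

    event bro_init()
        {
        Log::add_filter(Conn::LOG, [$name="conn-scp", $path="conn-scp",
                                    $interv=1hr,
                                    $postprocessor=Log::scp_postprocessor]);

        local dest: Log::SCPDestination = [$user="bro", $host="archive.example.com",
                                           $path="/data/logs"];
        Log::scp_destinations[Log::WRITER_ASCII, "conn-scp"] = set(dest);
        }
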
+ type SFTPDestination: record { + ## The remote user to log in as. A trust mechanism should be + ## pre-established. + user: string; + ## The remote host to which to transfer logs. + host: string; + ## The path/directory on the remote host to send logs. + path: string; + }; + + ## A table indexed by a particular log writer and filter path, that yields + ## a set remote destinations. The :bro:id:`Log::sftp_postprocessor` + ## function queries this table upon log rotation and performs a secure + ## transfer of the rotated-log to each destination in the set. This + ## table can be modified at run-time. + global sftp_destinations: table[Writer, string] of set[SFTPDestination]; + + ## Default naming format for timestamps embedded into log filenames + ## that use the SFTP rotator. + const sftp_rotation_date_format = "%Y-%m-%d-%H-%M-%S" &redef; +} + +function sftp_postprocessor(info: Log::RotationInfo): bool + { + if ( reading_traces() || [info$writer, info$path] !in sftp_destinations ) + return T; + + local command = ""; + for ( d in sftp_destinations[info$writer, info$path] ) + { + local dst = fmt("%s/%s.%s.log", d$path, info$path, + strftime(Log::sftp_rotation_date_format, info$open)); + command += fmt("echo put %s %s | sftp -b - %s@%s;", info$fname, dst, + d$user, d$host); + } + + command += fmt("/bin/rm %s", info$fname); + system(command); + return T; + } diff --git a/scripts/base/frameworks/logging/writers/ascii.bro b/scripts/base/frameworks/logging/writers/ascii.bro index 5c04fdd3d9..bacb0996d0 100644 --- a/scripts/base/frameworks/logging/writers/ascii.bro +++ b/scripts/base/frameworks/logging/writers/ascii.bro @@ -1,4 +1,5 @@ -##! Interface for the ascii log writer. +##! Interface for the ASCII log writer. Redefinable options are available +##! to tweak the output format of ASCII logs. module LogAscii; @@ -7,11 +8,13 @@ export { ## into files. This is primarily for debugging purposes. const output_to_stdout = F &redef; - ## If true, include a header line with column names. - const include_header = T &redef; + ## If true, include lines with log meta information such as column names with + ## types, the values of ASCII logging options that in use, and the time when the + ## file was opened and closes (the latter at the end). + const include_meta = T &redef; - ## Prefix for the header line if included. - const header_prefix = "#" &redef; + ## Prefix for lines with meta information. + const meta_prefix = "#" &redef; ## Separator between fields. const separator = "\t" &redef; @@ -19,8 +22,9 @@ export { ## Separator between set elements. const set_separator = "," &redef; - ## String to use for empty fields. - const empty_field = "-" &redef; + ## String to use for empty fields. This should be different from + ## *unset_field* to make the output non-ambigious. + const empty_field = "(empty)" &redef; ## String to use for an unset &optional field. const unset_field = "-" &redef; diff --git a/scripts/base/frameworks/logging/writers/dataseries.bro b/scripts/base/frameworks/logging/writers/dataseries.bro new file mode 100644 index 0000000000..e85d9c8c49 --- /dev/null +++ b/scripts/base/frameworks/logging/writers/dataseries.bro @@ -0,0 +1,60 @@ +##! Interface for the DataSeries log writer. + +module LogDataSeries; + +export { + ## Compression to use with the DS output file. Options are: + ## + ## 'none' -- No compression. + ## 'lzf' -- LZF compression. Very quick, but leads to larger output files. + ## 'lzo' -- LZO compression. Very fast decompression times. + ## 'gz' -- GZIP compression. 
Slower than LZF, but also produces smaller output. + ## 'bz2' -- BZIP2 compression. Slower than GZIP, but also produces smaller output. + const compression = "gz" &redef; + + ## The extent buffer size. + ## Larger values here lead to better compression and more efficient writes, but + ## also increase the lag between the time events are received and the time they + ## are actually written to disk. + const extent_size = 65536 &redef; + + ## Should we dump the XML schema we use for this DS file to disk? + ## If yes, the XML schema shares the name of the logfile, but has + ## an XML ending. + const dump_schema = F &redef; + + ## How many threads should DataSeries spawn to perform compression? + ## Note that this dictates the number of threads per log stream. If + ## you're using a lot of streams, you may want to keep this number + ## relatively small. + ## + ## Default value is 1, which will spawn one thread / stream. + ## + ## Maximum is 128, minimum is 1. + const num_threads = 1 &redef; + + ## Should time be stored as an integer or a double? + ## Storing time as a double leads to possible precision issues and + ## can (significantly) increase the size of the resulting DS log. + ## That said, timestamps stored in double form are consistent + ## with the rest of Bro, including the standard ASCII log. Hence, we + ## use them by default. + const use_integer_for_time = F &redef; +} + +# Default function to postprocess a rotated DataSeries log file. It moves the +# rotated file to a new name that includes a timestamp with the opening time, and +# then runs the writer's default postprocessor command on it. +function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool + { + # Move file to name including both opening and closing time. + local dst = fmt("%s.%s.ds", info$path, + strftime(Log::default_rotation_date_format, info$open)); + + system(fmt("/bin/mv %s %s", info$fname, dst)); + + # Run default postprocessor. + return Log::run_rotation_postprocessor_cmd(info, dst); + } + +redef Log::default_rotation_postprocessors += { [Log::WRITER_DATASERIES] = default_rotation_postprocessor_func }; diff --git a/scripts/base/frameworks/logging/writers/elasticsearch.bro b/scripts/base/frameworks/logging/writers/elasticsearch.bro new file mode 100644 index 0000000000..1901759730 --- /dev/null +++ b/scripts/base/frameworks/logging/writers/elasticsearch.bro @@ -0,0 +1,48 @@ +##! Log writer for sending logs to an ElasticSearch server. +##! +##! Note: This module is in testing and is not yet considered stable! +##! +##! There is one known memory issue. If your elasticsearch server is +##! running slowly and taking too long to return from bulk insert +##! requests, the message queue to the writer thread will continue +##! growing larger and larger giving the appearance of a memory leak. + +module LogElasticSearch; + +export { + ## Name of the ES cluster + const cluster_name = "elasticsearch" &redef; + + ## ES Server + const server_host = "127.0.0.1" &redef; + + ## ES Port + const server_port = 9200 &redef; + + ## Name of the ES index + const index_prefix = "bro" &redef; + + ## The ES type prefix comes before the name of the related log. + ## e.g. prefix = "bro\_" would create types of bro_dns, bro_software, etc. + const type_prefix = "" &redef; + + ## The time before an ElasticSearch transfer will timeout. Note that + ## the fractional part of the timeout will be ignored. In particular, time + ## specifications less than a second result in a timeout value of 0, which + ## means "no timeout." 
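Both of these writers are tuned through redef'able constants; for example (the server name and index prefix are placeholders, and the values are illustrative only):

    redef LogDataSeries::compression = "lzf";
    redef LogDataSeries::num_threads = 2;

    redef LogElasticSearch::server_host = "es.example.com";
    redef LogElasticSearch::server_port = 9200;
    redef LogElasticSearch::index_prefix = "bro";
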
+ const transfer_timeout = 2secs; + + ## The batch size is the number of messages that will be queued up before + ## they are sent to be bulk indexed. + const max_batch_size = 1000 &redef; + + ## The maximum amount of wall-clock time that is allowed to pass without + ## finishing a bulk log send. This represents the maximum delay you + ## would like to have with your logs before they are sent to ElasticSearch. + const max_batch_interval = 1min &redef; + + ## The maximum byte size for a buffered JSON string to send to the bulk + ## insert API. + const max_byte_size = 1024 * 1024 &redef; +} + diff --git a/scripts/base/frameworks/logging/writers/none.bro b/scripts/base/frameworks/logging/writers/none.bro new file mode 100644 index 0000000000..869d7246c7 --- /dev/null +++ b/scripts/base/frameworks/logging/writers/none.bro @@ -0,0 +1,17 @@ +##! Interface for the None log writer. Thiis writer is mainly for debugging. + +module LogNone; + +export { + ## If true, output debugging output that can be useful for unit + ## testing the logging framework. + const debug = F &redef; +} + +function default_rotation_postprocessor_func(info: Log::RotationInfo) : bool + { + return T; + } + +redef Log::default_rotation_postprocessors += { [Log::WRITER_NONE] = default_rotation_postprocessor_func }; + diff --git a/scripts/base/frameworks/metrics/cluster.bro b/scripts/base/frameworks/metrics/cluster.bro index 6835c5bb9b..4804bc5005 100644 --- a/scripts/base/frameworks/metrics/cluster.bro +++ b/scripts/base/frameworks/metrics/cluster.bro @@ -13,11 +13,11 @@ module Metrics; export { - ## This value allows a user to decide how large of result groups the - ## workers should transmit values. + ## Allows a user to decide how large of result groups the + ## workers should transmit values for cluster metric aggregation. const cluster_send_in_groups_of = 50 &redef; - ## This is the percent of the full threshold value that needs to be met + ## The percent of the full threshold value that needs to be met ## on a single worker for that worker to send the value to its manager in ## order for it to request a global view for that value. There is no ## requirement that the manager requests a global view for the index @@ -25,11 +25,11 @@ export { ## recently. const cluster_request_global_view_percent = 0.1 &redef; - ## This event is sent by the manager in a cluster to initiate the + ## Event sent by the manager in a cluster to initiate the ## collection of metrics values for a filter. global cluster_filter_request: event(uid: string, id: ID, filter_name: string); - ## This event is sent by nodes that are collecting metrics after receiving + ## Event sent by nodes that are collecting metrics after receiving ## a request for the metric filter from the manager. global cluster_filter_response: event(uid: string, id: ID, filter_name: string, data: MetricTable, done: bool); @@ -40,12 +40,12 @@ export { global cluster_index_request: event(uid: string, id: ID, filter_name: string, index: Index); ## This event is sent by nodes in response to a - ## :bro:id:`cluster_index_request` event. + ## :bro:id:`Metrics::cluster_index_request` event. 
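A sketch of site-level tuning for these cluster knobs (the numbers are illustrative, not recommendations):

    # Send metric results to the manager in larger chunks.
    redef Metrics::cluster_send_in_groups_of = 100;
    # Require 20% of a threshold on one worker before requesting a global view.
    redef Metrics::cluster_request_global_view_percent = 0.2;
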
global cluster_index_response: event(uid: string, id: ID, filter_name: string, index: Index, val: count); ## This is sent by workers to indicate that they crossed the percent of the ## current threshold by the percentage defined globally in - ## :bro:id:`cluster_request_global_view_percent` + ## :bro:id:`Metrics::cluster_request_global_view_percent` global cluster_index_intermediate_response: event(id: Metrics::ID, filter_name: string, index: Metrics::Index, val: count); ## This event is scheduled internally on workers to send result chunks. diff --git a/scripts/base/frameworks/metrics/main.bro b/scripts/base/frameworks/metrics/main.bro index f53a86a977..d322d128fe 100644 --- a/scripts/base/frameworks/metrics/main.bro +++ b/scripts/base/frameworks/metrics/main.bro @@ -1,13 +1,16 @@ -##! This is the implementation of the metrics framework. +##! The metrics framework provides a way to count and measure data. @load base/frameworks/notice module Metrics; export { + ## The metrics logging stream identifier. redef enum Log::ID += { LOG }; + ## Identifiers for metrics to collect. type ID: enum { + ## Blank placeholder value. NOTHING, }; @@ -15,10 +18,13 @@ export { ## current value to the logging stream. const default_break_interval = 15mins &redef; - ## This is the interval for how often notices will happen after they have - ## already fired. + ## This is the interval for how often threshold based notices will happen + ## after they have already fired. const renotice_interval = 1hr &redef; + ## Represents a thing which is having metrics collected for it. An instance + ## of this record type and a :bro:type:`Metrics::ID` together represent a + ## single measurement. type Index: record { ## Host is the value to which this metric applies. host: addr &optional; @@ -37,17 +43,30 @@ export { network: subnet &optional; } &log; + ## The record type that is used for logging metrics. type Info: record { + ## Timestamp at which the metric was "broken". ts: time &log; + ## What measurement the metric represents. metric_id: ID &log; + ## The name of the filter being logged. :bro:type:`Metrics::ID` values + ## can have multiple filters which represent different perspectives on + ## the data so this is necessary to understand the value. filter_name: string &log; + ## What the metric value applies to. index: Index &log; + ## The simple numeric value of the metric. value: count &log; }; - # TODO: configure a metrics filter logging stream to log the current + # TODO: configure a metrics filter logging stream to log the current # metrics configuration in case someone is looking through # old logs and the configuration has changed since then. + + ## Filters define how the data from a metric is aggregated and handled. + ## Filters can be used to set how often the measurements are cut or "broken" + ## and logged or how the data within them is aggregated. It's also + ## possible to disable logging and use filters for thresholding. type Filter: record { ## The :bro:type:`Metrics::ID` that this filter applies to. id: ID &optional; @@ -62,7 +81,7 @@ export { aggregation_mask: count &optional; ## This is essentially a mapping table between addresses and subnets. aggregation_table: table[subnet] of subnet &optional; - ## The interval at which the metric should be "broken" and written + ## The interval at which this filter should be "broken" and written ## to the logging stream. 
The counters are also reset to zero at ## this time so any threshold based detection needs to be set to a ## number that should be expected to happen within this period. @@ -79,7 +98,7 @@ export { notice_threshold: count &optional; ## A series of thresholds at which to generate notices. notice_thresholds: vector of count &optional; - ## How often this notice should be raised for this metric index. It + ## How often this notice should be raised for this filter. It ## will be generated everytime it crosses a threshold, but if the ## $break_interval is set to 5mins and this is set to 1hr the notice ## only be generated once per hour even if something crosses the @@ -87,15 +106,43 @@ export { notice_freq: interval &optional; }; + ## Function to associate a metric filter with a metric ID. + ## + ## id: The metric ID that the filter should be associated with. + ## + ## filter: The record representing the filter configuration. global add_filter: function(id: ID, filter: Filter); + + ## Add data into a :bro:type:`Metrics::ID`. This should be called when + ## a script has measured some point value and is ready to increment the + ## counters. + ## + ## id: The metric ID that the data represents. + ## + ## index: The metric index that the value is to be added to. + ## + ## increment: How much to increment the counter by. global add_data: function(id: ID, index: Index, increment: count); + + ## Helper function to represent a :bro:type:`Metrics::Index` value as + ## a simple string + ## + ## index: The metric index that is to be converted into a string. + ## + ## Returns: A string reprentation of the metric index. global index2str: function(index: Index): string; - # This is the event that is used to "finish" metrics and adapt the metrics - # framework for clustered or non-clustered usage. + ## Event that is used to "finish" metrics and adapt the metrics + ## framework for clustered or non-clustered usage. + ## + ## ..note: This is primarily intended for internal use. global log_it: event(filter: Filter); + ## Event to access metrics records as they are passed to the logging framework. global log_metrics: event(rec: Info); + + ## Type to store a table of metrics values. Interal use only! + type MetricTable: table[Index] of count &default=0; } redef record Notice::Info += { @@ -105,7 +152,6 @@ redef record Notice::Info += { global metric_filters: table[ID] of vector of Filter = table(); global filter_store: table[ID, string] of Filter = table(); -type MetricTable: table[Index] of count &default=0; # This is indexed by metric ID and stream filter name. global store: table[ID, string] of MetricTable = table() &default=table(); diff --git a/scripts/base/frameworks/notice/actions/add-geodata.bro b/scripts/base/frameworks/notice/actions/add-geodata.bro index bc4021abea..9f6909595c 100644 --- a/scripts/base/frameworks/notice/actions/add-geodata.bro +++ b/scripts/base/frameworks/notice/actions/add-geodata.bro @@ -31,6 +31,7 @@ export { ## Add a helper to the notice policy for looking up GeoIP data. redef Notice::policy += { [$pred(n: Notice::Info) = { return (n$note in Notice::lookup_location_types); }, + $action = ACTION_ADD_GEODATA, $priority = 10], }; } diff --git a/scripts/base/frameworks/notice/actions/email_admin.bro b/scripts/base/frameworks/notice/actions/email_admin.bro index 56c0d5853d..7484a1c606 100644 --- a/scripts/base/frameworks/notice/actions/email_admin.bro +++ b/scripts/base/frameworks/notice/actions/email_admin.bro @@ -1,3 +1,8 @@ +##! 
Adds a new notice action type which can be used to email notices +##! to the administrators of a particular address space as set by +##! :bro:id:`Site::local_admins` if the notice contains a source +##! or destination address that lies within their space. + @load ../main @load base/utils/site @@ -6,8 +11,8 @@ module Notice; export { redef enum Action += { ## Indicate that the generated email should be addressed to the - ## appropriate email addresses as found in the - ## :bro:id:`Site::addr_to_emails` variable based on the relevant + ## appropriate email addresses as found by the + ## :bro:id:`Site::get_emails` function based on the relevant ## address or addresses indicated in the notice. ACTION_EMAIL_ADMIN }; diff --git a/scripts/base/frameworks/notice/actions/page.bro b/scripts/base/frameworks/notice/actions/page.bro index f88064ac47..16a3463126 100644 --- a/scripts/base/frameworks/notice/actions/page.bro +++ b/scripts/base/frameworks/notice/actions/page.bro @@ -1,3 +1,5 @@ +##! Allows configuration of a pager email address to which notices can be sent. + @load ../main module Notice; @@ -5,7 +7,7 @@ module Notice; export { redef enum Action += { ## Indicates that the notice should be sent to the pager email address - ## configured in the :bro:id:`mail_page_dest` variable. + ## configured in the :bro:id:`Notice::mail_page_dest` variable. ACTION_PAGE }; diff --git a/scripts/base/frameworks/notice/actions/pp-alarms.bro b/scripts/base/frameworks/notice/actions/pp-alarms.bro index 1284d7885f..82fda6db6c 100644 --- a/scripts/base/frameworks/notice/actions/pp-alarms.bro +++ b/scripts/base/frameworks/notice/actions/pp-alarms.bro @@ -1,6 +1,6 @@ -#! Notice extension that mails out a pretty-printed version of alarm.log -#! in regular intervals, formatted for better human readability. If activated, -#! that replaces the default summary mail having the raw log output. +##! Notice extension that mails out a pretty-printed version of alarm.log +##! in regular intervals, formatted for better human readability. If activated, +##! that replaces the default summary mail having the raw log output. @load base/frameworks/cluster @load ../main @@ -10,18 +10,21 @@ module Notice; export { ## Activate pretty-printed alarm summaries. const pretty_print_alarms = T &redef; - + ## Address to send the pretty-printed reports to. Default if not set is ## :bro:id:`Notice::mail_dest`. const mail_dest_pretty_printed = "" &redef; - - ## If an address from one of these networks is reported, we mark - ## the entry with an addition quote symbol (i.e., ">"). Many MUAs + ## If an address from one of these networks is reported, we mark + ## the entry with an additional quote symbol (i.e., ">"). Many MUAs ## then highlight such lines differently. global flag_nets: set[subnet] &redef; - + ## Function that renders a single alarm. Can be overidden. global pretty_print_alarm: function(out: file, n: Info) &redef; + + ## Force generating mail file, even if reading from traces or no mail + ## destination is defined. This is mainly for testing. + global force_email_summaries = F &redef; } # We maintain an old-style file recording the pretty-printed alarms. @@ -32,6 +35,9 @@ global pp_alarms_open: bool = F; # Returns True if pretty-printed alarm summaries are activated. function want_pp() : bool { + if ( force_email_summaries ) + return T; + return (pretty_print_alarms && ! 
reading_traces() && (mail_dest != "" || mail_dest_pretty_printed != "")); } @@ -41,38 +47,49 @@ function pp_open() { if ( pp_alarms_open ) return; - + pp_alarms_open = T; pp_alarms = open(pp_alarms_name); - - local dest = mail_dest_pretty_printed != "" ? mail_dest_pretty_printed - : mail_dest; - - local headers = email_headers("Alarm summary", dest); - write_file(pp_alarms, headers + "\n"); } # Closes and mails out the current output file. -function pp_send() +function pp_send(rinfo: Log::RotationInfo) { if ( ! pp_alarms_open ) return; - + write_file(pp_alarms, "\n\n--\n[Automatically generated]\n\n"); close(pp_alarms); - - system(fmt("/bin/cat %s | %s -t -oi && /bin/rm %s", - pp_alarms_name, sendmail, pp_alarms_name)); - pp_alarms_open = F; + + local from = strftime("%H:%M:%S", rinfo$open); + local to = strftime("%H:%M:%S", rinfo$close); + local subject = fmt("Alarm summary from %s-%s", from, to); + local dest = mail_dest_pretty_printed != "" ? mail_dest_pretty_printed + : mail_dest; + + if ( dest == "" ) + # No mail destination configured, just leave the file alone. This is mainly for + # testing. + return; + + local headers = email_headers(subject, dest); + + local header_name = pp_alarms_name + ".tmp"; + local header = open(header_name); + write_file(header, headers + "\n"); + close(header); + + system(fmt("/bin/cat %s %s | %s -t -oi && /bin/rm -f %s %s", + header_name, pp_alarms_name, sendmail, header_name, pp_alarms_name)); } # Postprocessor function that triggers the email. function pp_postprocessor(info: Log::RotationInfo): bool { if ( want_pp() ) - pp_send(); - + pp_send(info); + return T; } @@ -80,7 +97,7 @@ event bro_init() { if ( ! want_pp() ) return; - + # This replaces the standard non-pretty-printing filter. Log::add_filter(Notice::ALARM_LOG, [$name="alarm-mail", $writer=Log::WRITER_NONE, @@ -92,13 +109,13 @@ event notice(n: Notice::Info) &priority=-5 { if ( ! want_pp() ) return; - - if ( ACTION_LOG !in n$actions ) + + if ( ACTION_ALARM !in n$actions ) return; - + if ( ! pp_alarms_open ) pp_open(); - + pretty_print_alarm(pp_alarms, n); } @@ -108,12 +125,12 @@ function do_msg(out: file, n: Info, line1: string, line2: string, line3: string, @ifdef ( Notice::ACTION_ADD_GEODATA ) # Make tests happy, cyclic dependency. if ( n?$remote_location && n$remote_location?$country_code ) country = fmt(" (remote location %s)", n$remote_location$country_code); -@endif - +@endif + line1 = cat(line1, country); - + local resolved = ""; - + if ( host1 != 0.0.0.0 ) resolved = fmt("%s # %s = %s", resolved, host1, name1); @@ -133,64 +150,64 @@ function do_msg(out: file, n: Info, line1: string, line2: string, line3: string, function pretty_print_alarm(out: file, n: Info) { local pdescr = ""; - + @if ( Cluster::is_enabled() ) pdescr = "local"; - + if ( n?$src_peer ) pdescr = n$src_peer?$descr ? 
n$src_peer$descr : fmt("%s", n$src_peer$host); pdescr = fmt("<%s> ", pdescr); @endif - + local msg = fmt( "%s%s", pdescr, n$msg); - + local who = ""; local h1 = 0.0.0.0; local h2 = 0.0.0.0; - + local orig_p = ""; local resp_p = ""; - + if ( n?$id ) { - orig_p = fmt(":%s", n$id$orig_p); - resp_p = fmt(":%s", n$id$resp_p); + h1 = n$id$orig_h; + h2 = n$id$resp_h; + who = fmt("%s:%s -> %s:%s", h1, n$id$orig_p, h2, n$id$resp_p); } - - if ( n?$src && n?$dst ) + else if ( n?$src && n?$dst ) { h1 = n$src; h2 = n$dst; - who = fmt("%s%s -> %s%s", h1, orig_p, h2, resp_p); - - if ( n?$uid ) - who = fmt("%s (uid %s)", who, n$uid ); + who = fmt("%s -> %s", h1, h2); } - else if ( n?$src ) { - local p = ""; - - if ( n?$p ) - p = fmt(":%s", n$p); - h1 = n$src; - who = fmt("%s%s", h1, p); + who = fmt("%s%s", h1, (n?$p ? fmt(":%s", n$p) : "")); } - + + if ( n?$uid ) + who = fmt("%s (uid %s)", who, n$uid ); + local flag = (h1 in flag_nets || h2 in flag_nets); - + local line1 = fmt(">%s %D %s %s", (flag ? ">" : " "), network_time(), n$note, who); local line2 = fmt(" %s", msg); local line3 = n?$sub ? fmt(" %s", n$sub) : ""; - + if ( h1 == 0.0.0.0 ) { do_msg(out, n, line1, line2, line3, h1, "", h2, ""); return; } - + + if ( reading_traces() ) + { + do_msg(out, n, line1, line2, line3, h1, "", h2, ""); + return; + } + when ( local h1name = lookup_addr(h1) ) { if ( h2 == 0.0.0.0 ) diff --git a/scripts/base/frameworks/notice/cluster.bro b/scripts/base/frameworks/notice/cluster.bro index 7270e14933..3ee113acf3 100644 --- a/scripts/base/frameworks/notice/cluster.bro +++ b/scripts/base/frameworks/notice/cluster.bro @@ -1,4 +1,6 @@ -##! Implements notice functionality across clusters. +##! Implements notice functionality across clusters. Worker nodes +##! will disable notice/alarm logging streams and forward notice +##! events to the manager node for logging/processing. @load ./main @load base/frameworks/cluster @@ -7,16 +9,24 @@ module Notice; export { ## This is the event used to transport notices on the cluster. + ## + ## n: The notice information to be sent to the cluster manager for + ## further processing. global cluster_notice: event(n: Notice::Info); } +## Manager can communicate notice suppression to workers. redef Cluster::manager2worker_events += /Notice::begin_suppression/; +## Workers needs need ability to forward notices to manager. redef Cluster::worker2manager_events += /Notice::cluster_notice/; @if ( Cluster::local_node_type() != Cluster::MANAGER ) # The notice policy is completely handled by the manager and shouldn't be # done by workers or proxies to save time for packet processing. -redef policy = {}; +event bro_init() &priority=11 + { + Notice::policy = table(); + } event Notice::begin_suppression(n: Notice::Info) { diff --git a/scripts/base/frameworks/notice/extend-email/hostnames.bro b/scripts/base/frameworks/notice/extend-email/hostnames.bro index a73810c726..2ec6dbb23f 100644 --- a/scripts/base/frameworks/notice/extend-email/hostnames.bro +++ b/scripts/base/frameworks/notice/extend-email/hostnames.bro @@ -1,32 +1,52 @@ +##! Loading this script extends the :bro:enum:`Notice::ACTION_EMAIL` action +##! by appending to the email the hostnames associated with +##! :bro:type:`Notice::Info`'s *src* and *dst* fields as determined by a +##! DNS lookup. + @load ../main module Notice; -# This probably doesn't actually work due to the async lookup_addr. 
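For the pretty-printed alarm summaries above, a typical site-level setup is just a couple of redefs (the address and networks are placeholders):

    redef Notice::mail_dest_pretty_printed = "soc@example.com";
    redef Notice::flag_nets += { 192.168.0.0/16, 10.10.0.0/16 };
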
+# We have to store references to the notices here because the when statement +# clones the frame which doesn't give us access to modify values outside +# of it's execution scope. (we get a clone of the notice instead of a +# reference to the original notice) +global tmp_notice_storage: table[string] of Notice::Info &create_expire=max_email_delay+10secs; + event Notice::notice(n: Notice::Info) &priority=10 { if ( ! n?$src && ! n?$dst ) return; - + # This should only be done for notices that are being sent to email. if ( ACTION_EMAIL !in n$actions ) return; - + + # I'm not recovering gracefully from the when statements because I want + # the notice framework to detect that something has exceeded the maximum + # allowed email delay and tell the user. + local uid = unique_id(""); + tmp_notice_storage[uid] = n; + local output = ""; if ( n?$src ) { + add n$email_delay_tokens["hostnames-src"]; when ( local src_name = lookup_addr(n$src) ) { - output = string_cat("orig_h/src hostname: ", src_name, "\n"); - n$email_body_sections[|n$email_body_sections|] = output; + output = string_cat("orig/src hostname: ", src_name, "\n"); + tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output; + delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-src"]; } } if ( n?$dst ) { + add n$email_delay_tokens["hostnames-dst"]; when ( local dst_name = lookup_addr(n$dst) ) { - output = string_cat("resp_h/dst hostname: ", dst_name, "\n"); - n$email_body_sections[|n$email_body_sections|] = output; + output = string_cat("resp/dst hostname: ", dst_name, "\n"); + tmp_notice_storage[uid]$email_body_sections[|tmp_notice_storage[uid]$email_body_sections|] = output; + delete tmp_notice_storage[uid]$email_delay_tokens["hostnames-dst"]; } } } diff --git a/scripts/base/frameworks/notice/main.bro b/scripts/base/frameworks/notice/main.bro index 7d98c6464c..e9b29e7392 100644 --- a/scripts/base/frameworks/notice/main.bro +++ b/scripts/base/frameworks/notice/main.bro @@ -2,15 +2,14 @@ ##! are odd or potentially bad. Decisions of the meaning of various notices ##! need to be done per site because Bro does not ship with assumptions about ##! what is bad activity for sites. More extensive documetation about using -##! the notice framework can be found in the documentation section of the -##! http://www.bro-ids.org/ website. +##! the notice framework can be found in :doc:`/notice`. module Notice; export { - redef enum Log::ID += { + redef enum Log::ID += { ## This is the primary logging stream for notices. - LOG, + LOG, ## This is the notice policy auditing log. It records what the current ## notice policy is at Bro init time. POLICY_LOG, @@ -18,25 +17,25 @@ export { ALARM_LOG, }; - ## Scripts creating new notices need to redef this enum to add their own + ## Scripts creating new notices need to redef this enum to add their own ## specific notice types which would then get used when they call the ## :bro:id:`NOTICE` function. The convention is to give a general category - ## along with the specific notice separating words with underscores and using - ## leading capitals on each word except for abbreviations which are kept in - ## all capitals. For example, SSH::Login is for heuristically guessed - ## successful SSH logins. + ## along with the specific notice separating words with underscores and + ## using leading capitals on each word except for abbreviations which are + ## kept in all capitals. For example, SSH::Login is for heuristically + ## guessed successful SSH logins. 
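A minimal sketch of a script following that convention (the module name, threshold, and identifier construction are all made up for the example):

    module Example;

    export {
        redef enum Notice::Type += {
            ## A long-lived connection was observed.
            Long_Connection,
        };
    }

    event connection_state_remove(c: connection)
        {
        if ( c$duration > 1hr )
            NOTICE([$note=Long_Connection,
                    $msg=fmt("connection lasted %s", c$duration),
                    $conn=c,
                    $identifier=cat(c$id$orig_h, c$id$resp_h)]);
        }
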
type Type: enum { ## Notice reporting a count of how often a notice occurred. Tally, }; - + ## These are values representing actions that can be taken with notices. type Action: enum { ## Indicates that there is no action to be taken. ACTION_NONE, ## Indicates that the notice should be sent to the notice logging stream. ACTION_LOG, - ## Indicates that the notice should be sent to the email address(es) + ## Indicates that the notice should be sent to the email address(es) ## configured in the :bro:id:`Notice::mail_dest` variable. ACTION_EMAIL, ## Indicates that the notice should be alarmed. A readable ASCII @@ -47,30 +46,45 @@ export { ## duplicate notice suppression that the notice framework does. ACTION_NO_SUPPRESS, }; - - ## The notice framework is able to do automatic notice supression by - ## utilizing the $identifier field in :bro:type:`Info` records. + + ## The notice framework is able to do automatic notice supression by + ## utilizing the $identifier field in :bro:type:`Notice::Info` records. ## Set this to "0secs" to completely disable automated notice suppression. const default_suppression_interval = 1hrs &redef; - + type Info: record { + ## An absolute time indicating when the notice occurred, defaults + ## to the current network time. ts: time &log &optional; + + ## A connection UID which uniquely identifies the endpoints + ## concerned with the notice. uid: string &log &optional; + + ## A connection 4-tuple identifying the endpoints concerned with the + ## notice. id: conn_id &log &optional; - ## These are shorthand ways of giving the uid and id to a notice. The + ## A shorthand way of giving the uid and id to a notice. The ## reference to the actual connection will be deleted after applying ## the notice policy. conn: connection &optional; + ## A shorthand way of giving the uid and id to a notice. The + ## reference to the actual connection will be deleted after applying + ## the notice policy. iconn: icmp_conn &optional; - - ## The :bro:enum:`Notice::Type` of the notice. + + ## The transport protocol. Filled automatically when either conn, iconn + ## or p is specified. + proto: transport_proto &log &optional; + + ## The :bro:type:`Notice::Type` of the notice. note: Type &log; ## The human readable message for the notice. msg: string &log &optional; ## The human readable sub-message. sub: string &log &optional; - + ## Source address, if we don't have a :bro:type:`conn_id`. src: addr &log &optional; ## Destination address. @@ -79,33 +93,39 @@ export { p: port &log &optional; ## Associated count, or perhaps a status code. n: count &log &optional; - + ## Peer that raised this notice. src_peer: event_peer &optional; ## Textual description for the peer that raised this notice. peer_descr: string &log &optional; - + ## The actions which have been applied to this notice. actions: set[Notice::Action] &log &optional; - + ## These are policy items that returned T and applied their action ## to the notice. policy_items: set[count] &log &optional; - + ## By adding chunks of text into this element, other scripts can ## expand on notices that are being emailed. The normal way to add text ## is to extend the vector by handling the :bro:id:`Notice::notice` ## event and modifying the notice in place. 
- email_body_sections: vector of string &default=vector(); - + email_body_sections: vector of string &optional; + + ## Adding a string "token" to this set will cause the notice framework's + ## built-in emailing functionality to delay sending the email until + ## either the token has been removed or the email has been delayed + ## for :bro:id:`Notice::max_email_delay`. + email_delay_tokens: set[string] &optional; + ## This field is to be provided when a notice is generated for the ## purpose of deduplicating notices. The identifier string should - ## be unique for a single instance of the notice. This field should be - ## filled out in almost all cases when generating notices to define + ## be unique for a single instance of the notice. This field should be + ## filled out in almost all cases when generating notices to define ## when a notice is conceptually a duplicate of a previous notice. - ## - ## For example, an SSL certificate that is going to expire soon should - ## always have the same identifier no matter the client IP address + ## + ## For example, an SSL certificate that is going to expire soon should + ## always have the same identifier no matter the client IP address ## that connected and resulted in the certificate being exposed. In ## this case, the resp_h, resp_p, and hash of the certificate would be ## used to create this value. The hash of the cert is included @@ -114,19 +134,19 @@ export { ## Another example might be a host downloading a file which triggered ## a notice because the MD5 sum of the file it downloaded was known ## by some set of intelligence. In that case, the orig_h (client) - ## and MD5 sum would be used in this field to dedup because if the + ## and MD5 sum would be used in this field to dedup because if the ## same file is downloaded over and over again you really only want to ## know about it a single time. This makes it possible to send those ## notices to email without worrying so much about sending thousands ## of emails. identifier: string &optional; - + ## This field indicates the length of time that this - ## unique notice should be suppressed. This field is automatically + ## unique notice should be suppressed. This field is automatically ## filled out and should not be written to by any other script. suppress_for: interval &log &optional; }; - + ## Ignored notice types. const ignored_types: set[Notice::Type] = {} &redef; ## Emailed notice types. @@ -135,27 +155,28 @@ export { const alarmed_types: set[Notice::Type] = {} &redef; ## Types that should be suppressed for the default suppression interval. const not_suppressed_types: set[Notice::Type] = {} &redef; - ## This table can be used as a shorthand way to modify suppression + ## This table can be used as a shorthand way to modify suppression ## intervals for entire notice types. const type_suppression_intervals: table[Notice::Type] of interval = {} &redef; - + ## This is the record that defines the items that make up the notice policy. type PolicyItem: record { - ## This is the exact positional order in which the :bro:type:`PolicyItem` - ## records are checked. This is set internally by the notice framework. + ## This is the exact positional order in which the + ## :bro:type:`Notice::PolicyItem` records are checked. + ## This is set internally by the notice framework. position: count &log &optional; ## Define the priority for this check. Items are checked in ordered ## from highest value (10) to lowest value (0). 
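The shorthand sets and table above can be populated from a site policy; a sketch using the Tally type defined earlier as a stand-in for whatever types a site actually cares about:

    redef Notice::emailed_types += { Notice::Tally };
    redef Notice::type_suppression_intervals += {
        [Notice::Tally] = 1day,
    };
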
priority: count &log &default=5; ## An action given to the notice if the predicate return true. action: Notice::Action &log &default=ACTION_NONE; - ## The pred (predicate) field is a function that returns a boolean T - ## or F value. If the predicate function return true, the action in - ## this record is applied to the notice that is given as an argument - ## to the predicate function. If no predicate is supplied, it's + ## The pred (predicate) field is a function that returns a boolean T + ## or F value. If the predicate function return true, the action in + ## this record is applied to the notice that is given as an argument + ## to the predicate function. If no predicate is supplied, it's ## assumed that the PolicyItem always applies. pred: function(n: Notice::Info): bool &log &optional; - ## Indicates this item should terminate policy processing if the + ## Indicates this item should terminate policy processing if the ## predicate returns T. halt: bool &log &default=F; ## This defines the length of time that this particular notice should @@ -163,8 +184,8 @@ export { suppress_for: interval &log &optional; }; - ## This is the where the :bro:id:`Notice::policy` is defined. All notice - ## processing is done through this variable. + ## Defines a notice policy that is extensible on a per-site basis. + ## All notice processing is done through this variable. const policy: set[PolicyItem] = { [$pred(n: Notice::Info) = { return (n$note in Notice::ignored_types); }, $halt=T, $priority = 9], @@ -177,84 +198,118 @@ export { [$pred(n: Notice::Info) = { return (n$note in Notice::emailed_types); }, $action = ACTION_EMAIL, $priority = 8], - [$pred(n: Notice::Info) = { - if (n$note in Notice::type_suppression_intervals) + [$pred(n: Notice::Info) = { + if (n$note in Notice::type_suppression_intervals) { n$suppress_for=Notice::type_suppression_intervals[n$note]; return T; } - return F; + return F; }, $action = ACTION_NONE, $priority = 8], [$action = ACTION_LOG, $priority = 0], } &redef; - + ## Local system sendmail program. const sendmail = "/usr/sbin/sendmail" &redef; - ## Email address to send notices with the :bro:enum:`ACTION_EMAIL` action - ## or to send bulk alarm logs on rotation with :bro:enum:`ACTION_ALARM`. + ## Email address to send notices with the :bro:enum:`Notice::ACTION_EMAIL` + ## action or to send bulk alarm logs on rotation with + ## :bro:enum:`Notice::ACTION_ALARM`. const mail_dest = "" &redef; - + ## Address that emails will be from. const mail_from = "Big Brother " &redef; ## Reply-to address used in outbound email. const reply_to = "" &redef; ## Text string prefixed to the subject of all emails sent out. const mail_subject_prefix = "[Bro]" &redef; + ## The maximum amount of time a plugin can delay email from being sent. + const max_email_delay = 15secs &redef; ## A log postprocessing function that implements emailing the contents ## of a log upon rotation to any configured :bro:id:`Notice::mail_dest`. ## The rotated log is removed upon being sent. + ## + ## info: A record containing the rotated log file information. + ## + ## Returns: True. global log_mailing_postprocessor: function(info: Log::RotationInfo): bool; - ## This is the event that is called as the entry point to the - ## notice framework by the global :bro:id:`NOTICE` function. By the time + ## This is the event that is called as the entry point to the + ## notice framework by the global :bro:id:`NOTICE` function. 
By the time ## this event is generated, default values have already been filled out in ## the :bro:type:`Notice::Info` record and synchronous functions in the - ## :bro:id:`Notice:sync_functions` have already been called. The notice + ## :bro:id:`Notice::sync_functions` have already been called. The notice ## policy has also been applied. + ## + ## n: The record containing notice data. global notice: event(n: Info); - ## This is a set of functions that provide a synchronous way for scripts + ## This is a set of functions that provide a synchronous way for scripts ## extending the notice framework to run before the normal event based ## notice pathway that most of the notice framework takes. This is helpful ## in cases where an action against a notice needs to happen immediately ## and can't wait the short time for the event to bubble up to the top of - ## the event queue. An example is the IP address dropping script that - ## can block IP addresses that have notices generated because it + ## the event queue. An example is the IP address dropping script that + ## can block IP addresses that have notices generated because it ## needs to operate closer to real time than the event queue allows it to. - ## Normally the event based extension model using the + ## Normally the event based extension model using the ## :bro:id:`Notice::notice` event will work fine if there aren't harder ## real time constraints. const sync_functions: set[function(n: Notice::Info)] = set() &redef; - + ## This event is generated when a notice begins to be suppressed. + ## + ## n: The record containing notice data regarding the notice type + ## about to be suppressed. global begin_suppression: event(n: Notice::Info); + ## This event is generated on each occurence of an event being suppressed. + ## + ## n: The record containing notice data regarding the notice type + ## being suppressed. global suppressed: event(n: Notice::Info); + ## This event is generated when a notice stops being suppressed. + ## + ## n: The record containing notice data regarding the notice type + ## that was being suppressed. global end_suppression: event(n: Notice::Info); - + ## Call this function to send a notice in an email. It is already used - ## by default with the built in :bro:enum:`ACTION_EMAIL` and - ## :bro:enum:`ACTION_PAGE` actions. + ## by default with the built in :bro:enum:`Notice::ACTION_EMAIL` and + ## :bro:enum:`Notice::ACTION_PAGE` actions. + ## + ## n: The record of notice data to email. + ## + ## dest: The intended recipient of the notice email. + ## + ## extend: Whether to extend the email using the ``email_body_sections`` + ## field of *n*. global email_notice_to: function(n: Info, dest: string, extend: bool); ## Constructs mail headers to which an email body can be appended for ## sending with sendmail. + ## ## subject_desc: a subject string to use for the mail + ## ## dest: recipient string to use for the mail + ## ## Returns: a string of mail headers to which an email body can be appended global email_headers: function(subject_desc: string, dest: string): string; - ## This event can be handled to access the :bro:type:`Info` + ## This event can be handled to access the :bro:type:`Notice::Info` ## record as it is sent on to the logging framework. + ## + ## rec: The record containing notice data before it is logged. global log_notice: event(rec: Info); - ## This is an internal wrapper for the global NOTICE function. Please + ## This is an internal wrapper for the global :bro:id:`NOTICE` function; ## disregard. 
+ ## + ## n: The record of notice data. global internal_NOTICE: function(n: Notice::Info); } @@ -264,22 +319,22 @@ function per_notice_suppression_interval(t: table[Notice::Type, string] of Notic local n: Notice::Type; local s: string; [n,s] = idx; - + local suppress_time = t[n,s]$suppress_for - (network_time() - t[n,s]$ts); if ( suppress_time < 0secs ) suppress_time = 0secs; - + # If there is no more suppression time left, the notice needs to be sent # to the end_suppression event. if ( suppress_time == 0secs ) event Notice::end_suppression(t[n,s]); - + return suppress_time; } -# This is the internally maintained notice suppression table. It's +# This is the internally maintained notice suppression table. It's # indexed on the Notice::Type and the $identifier field from the notice. -global suppressing: table[Type, string] of Notice::Info = {} +global suppressing: table[Type, string] of Notice::Info = {} &create_expire=0secs &expire_func=per_notice_suppression_interval; @@ -306,7 +361,7 @@ function log_mailing_postprocessor(info: Log::RotationInfo): bool event bro_init() &priority=5 { Log::create_stream(Notice::LOG, [$columns=Info, $ev=log_notice]); - + Log::create_stream(Notice::ALARM_LOG, [$columns=Notice::Info]); # If Bro is configured for mailing notices, set up mailing for alarms. # Make sure that this alarm log is also output as text so that it can @@ -347,25 +402,49 @@ function email_headers(subject_desc: string, dest: string): string return header_text; } +event delay_sending_email(n: Notice::Info, dest: string, extend: bool) + { + email_notice_to(n, dest, extend); + } + function email_notice_to(n: Notice::Info, dest: string, extend: bool) { if ( reading_traces() || dest == "" ) return; - + + if ( extend ) + { + if ( |n$email_delay_tokens| > 0 ) + { + # If we still are within the max_email_delay, keep delaying. + if ( n$ts + max_email_delay > network_time() ) + { + schedule 1sec { delay_sending_email(n, dest, extend) }; + return; + } + else + { + event reporter_info(network_time(), + fmt("Notice email delay tokens weren't released in time (%s).", n$email_delay_tokens), + ""); + } + } + } + local email_text = email_headers(fmt("%s", n$note), dest); - + # First off, finish the headers and include the human readable messages # then leave a blank line after the message. email_text = string_cat(email_text, "\nMessage: ", n$msg); if ( n?$sub ) email_text = string_cat(email_text, "\nSub-message: ", n$sub); - + email_text = string_cat(email_text, "\n\n"); - + # Next, add information about the connection if it exists. if ( n?$id ) { - email_text = string_cat(email_text, "Connection: ", + email_text = string_cat(email_text, "Connection: ", fmt("%s", n$id$orig_h), ":", fmt("%d", n$id$orig_p), " -> ", fmt("%s", n$id$resp_h), ":", fmt("%d", n$id$resp_p), "\n"); if ( n?$uid ) @@ -373,17 +452,18 @@ function email_notice_to(n: Notice::Info, dest: string, extend: bool) } else if ( n?$src ) email_text = string_cat(email_text, "Address: ", fmt("%s", n$src), "\n"); - + # Add the extended information if it's requested. 
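# Example (illustrative sketch): the new email_delay_tokens/max_email_delay
# machinery lets an extension hold back an ACTION_EMAIL mail until some
# asynchronous work finishes. The token name "origin-hostname" is arbitrary.
event Notice::notice(n: Notice::Info) &priority=10
	{
	if ( Notice::ACTION_EMAIL !in n$actions || ! n?$src )
		return;

	add n$email_delay_tokens["origin-hostname"];
	when ( local hname = lookup_addr(n$src) )
		{
		n$email_body_sections[|n$email_body_sections|] =
			fmt("Originator hostname: %s", hname);
		delete n$email_delay_tokens["origin-hostname"];
		}
	}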
if ( extend ) { + email_text = string_cat(email_text, "\nEmail Extensions\n"); + email_text = string_cat(email_text, "----------------\n"); for ( i in n$email_body_sections ) { - email_text = string_cat(email_text, "******************\n"); email_text = string_cat(email_text, n$email_body_sections[i], "\n"); } } - + email_text = string_cat(email_text, "\n\n--\n[Automatically generated]\n\n"); piped_exec(fmt("%s -t -oi", sendmail), email_text); } @@ -396,10 +476,10 @@ event notice(n: Notice::Info) &priority=-5 Log::write(Notice::LOG, n); if ( ACTION_ALARM in n$actions ) Log::write(Notice::ALARM_LOG, n); - + # Normally suppress further notices like this one unless directed not to. # n$identifier *must* be specified for suppression to function at all. - if ( n?$identifier && + if ( n?$identifier && ACTION_NO_SUPPRESS !in n$actions && [n$note, n$identifier] !in suppressing && n$suppress_for != 0secs ) @@ -410,7 +490,8 @@ event notice(n: Notice::Info) &priority=-5 } ## This determines if a notice is being suppressed. It is only used -## internally as part of the mechanics for the global NOTICE function. +## internally as part of the mechanics for the global :bro:id:`NOTICE` +## function. function is_being_suppressed(n: Notice::Info): bool { if ( n?$identifier && [n$note, n$identifier] in suppressing ) @@ -421,7 +502,7 @@ function is_being_suppressed(n: Notice::Info): bool else return F; } - + # Executes a script with all of the notice fields put into the # new process' environment as "BRO_ARG_" variables. function execute_with_notice(cmd: string, n: Notice::Info) @@ -430,9 +511,9 @@ function execute_with_notice(cmd: string, n: Notice::Info) #local tgs = tags(n); #system_env(cmd, tags); } - -# This is run synchronously as a function before all of the other -# notice related functions and events. It also modifies the + +# This is run synchronously as a function before all of the other +# notice related functions and events. It also modifies the # :bro:type:`Notice::Info` record in place. function apply_policy(n: Notice::Info) { @@ -447,7 +528,7 @@ function apply_policy(n: Notice::Info) if ( ! n?$uid ) n$uid = n$conn$uid; } - + if ( n?$id ) { if ( ! n?$src ) @@ -458,8 +539,12 @@ function apply_policy(n: Notice::Info) n$p = n$id$resp_p; } + if ( n?$p ) + n$proto = get_port_transport_proto(n$p); + if ( n?$iconn ) { + n$proto = icmp; if ( ! n?$src ) n$src = n$iconn$orig_h; if ( ! n?$dst ) @@ -469,15 +554,20 @@ function apply_policy(n: Notice::Info) if ( ! n?$src_peer ) n$src_peer = get_event_peer(); if ( ! n?$peer_descr ) - n$peer_descr = n$src_peer?$descr ? + n$peer_descr = n$src_peer?$descr ? n$src_peer$descr : fmt("%s", n$src_peer$host); - + if ( ! n?$actions ) n$actions = set(); - + + if ( ! n?$email_body_sections ) + n$email_body_sections = vector(); + if ( ! n?$email_delay_tokens ) + n$email_delay_tokens = set(); + if ( ! n?$policy_items ) n$policy_items = set(); - + for ( i in ordered_policy ) { # If there's no predicate or the predicate returns F. @@ -485,51 +575,51 @@ function apply_policy(n: Notice::Info) { add n$actions[ordered_policy[i]$action]; add n$policy_items[int_to_count(i)]; - - # If the predicate matched and there was a suppression interval, + + # If the predicate matched and there was a suppression interval, # apply it to the notice now. if ( ordered_policy[i]?$suppress_for ) n$suppress_for = ordered_policy[i]$suppress_for; - + # If the policy item wants to halt policy processing, do it now! 
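# Example (illustrative sketch): suppression only kicks in when the notice
# carries an $identifier, so a detection script would raise notices roughly
# like this. Weird::Activity is reused here only as a convenient notice type.
event connection_established(c: connection)
	{
	NOTICE([$note=Weird::Activity,
	        $msg="illustrative notice",
	        $conn=c,
	        $identifier=cat(c$id$orig_h, c$id$resp_h),
	        $suppress_for=10min]);
	}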
if ( ordered_policy[i]$halt ) break; } } - + # Apply the suppression time after applying the policy so that policy - # items can give custom suppression intervals. If there is no + # items can give custom suppression intervals. If there is no # suppression interval given yet, the default is applied. if ( ! n?$suppress_for ) n$suppress_for = default_suppression_interval; - + # Delete the connection record if it's there so we aren't sending that - # to remote machines. It can cause problems due to the size of the + # to remote machines. It can cause problems due to the size of the # connection record. if ( n?$conn ) delete n$conn; if ( n?$iconn ) delete n$iconn; } - -# Create the ordered notice policy automatically which will be used at runtime + +# Create the ordered notice policy automatically which will be used at runtime # for prioritized matching of the notice policy. event bro_init() &priority=10 { # Create the policy log here because it's only written to in this handler. Log::create_stream(Notice::POLICY_LOG, [$columns=PolicyItem]); - + local tmp: table[count] of set[PolicyItem] = table(); for ( pi in policy ) { if ( pi$priority < 0 || pi$priority > 10 ) Reporter::fatal("All Notice::PolicyItem priorities must be within 0 and 10"); - + if ( pi$priority !in tmp ) tmp[pi$priority] = set(); add tmp[pi$priority][pi]; } - + local rev_count = vector(10,9,8,7,6,5,4,3,2,1,0); for ( i in rev_count ) { @@ -545,7 +635,7 @@ event bro_init() &priority=10 } } } - + function internal_NOTICE(n: Notice::Info) { # Suppress this notice if necessary. diff --git a/scripts/base/frameworks/notice/weird.bro b/scripts/base/frameworks/notice/weird.bro index 2303c97fbc..f894a42464 100644 --- a/scripts/base/frameworks/notice/weird.bro +++ b/scripts/base/frameworks/notice/weird.bro @@ -1,3 +1,12 @@ +##! This script provides a default set of actions to take for "weird activity" +##! events generated from Bro's event engine. Weird activity is defined as +##! unusual or exceptional activity that can indicate malformed connections, +##! traffic that doesn't conform to a particular protocol, malfunctioning +##! or misconfigured hardware, or even an attacker attempting to avoid/confuse +##! a sensor. Without context, it's hard to judge whether a particular +##! category of weird activity is interesting, but this script provides +##! a starting point for the user. + @load base/utils/conn-ids @load base/utils/site @load ./main @@ -5,6 +14,7 @@ module Weird; export { + ## The weird logging stream identifier. redef enum Log::ID += { LOG }; redef enum Notice::Type += { @@ -12,6 +22,7 @@ export { Activity, }; + ## The record type which contains the column fields of the weird log. type Info: record { ## The time when the weird occurred. ts: time &log; @@ -32,19 +43,32 @@ export { peer: string &log &optional; }; + ## Types of actions that may be taken when handling weird activity events. type Action: enum { + ## A dummy action indicating the user does not care what internal + ## decision is made regarding a given type of weird. ACTION_UNSPECIFIED, + ## No action is to be taken. ACTION_IGNORE, + ## Log the weird event every time it occurs. ACTION_LOG, + ## Log the weird event only once. ACTION_LOG_ONCE, + ## Log the weird event once per connection. ACTION_LOG_PER_CONN, + ## Log the weird event once per originator host. ACTION_LOG_PER_ORIG, + ## Always generate a notice associated with the weird event. ACTION_NOTICE, + ## Generate a notice associated with the weird event only once. 
ACTION_NOTICE_ONCE, + ## Generate a notice for the weird event once per connection. ACTION_NOTICE_PER_CONN, + ## Generate a notice for the weird event once per originator host. ACTION_NOTICE_PER_ORIG, }; + ## A table specifying default/recommended actions per weird type. const actions: table[string] of Action = { ["unsolicited_SYN_response"] = ACTION_IGNORE, ["above_hole_data_without_any_acks"] = ACTION_LOG, @@ -174,7 +198,7 @@ export { ["SYN_after_reset"] = ACTION_LOG, ["SYN_inside_connection"] = ACTION_LOG, ["SYN_seq_jump"] = ACTION_LOG, - ["SYN_with_data"] = ACTION_LOG, + ["SYN_with_data"] = ACTION_LOG_PER_ORIG, ["TCP_christmas"] = ACTION_LOG, ["truncated_ARP"] = ACTION_LOG, ["truncated_NTP"] = ACTION_LOG, @@ -201,7 +225,7 @@ export { ["fragment_overlap"] = ACTION_LOG_PER_ORIG, ["fragment_protocol_inconsistency"] = ACTION_LOG, ["fragment_size_inconsistency"] = ACTION_LOG_PER_ORIG, - ## These do indeed happen! + # These do indeed happen! ["fragment_with_DF"] = ACTION_LOG, ["incompletely_captured_fragment"] = ACTION_LOG, ["bad_IP_checksum"] = ACTION_LOG_PER_ORIG, @@ -215,8 +239,8 @@ export { ## and weird name into this set. const ignore_hosts: set[addr, string] &redef; - # But don't ignore these (for the weird file), it's handy keeping - # track of clustered checksum errors. + ## Don't ignore repeats for weirds in this set. For example, + ## it's handy keeping track of clustered checksum errors. const weird_do_not_ignore_repeats = { "bad_IP_checksum", "bad_TCP_checksum", "bad_UDP_checksum", "bad_ICMP_checksum", @@ -236,7 +260,11 @@ export { ## A state set which tracks unique weirds solely by the name to reduce ## duplicate notices from being raised. global did_notice: set[string, string] &create_expire=1day &redef; - + + ## Handlers of this event are invoked one per write to the weird + ## logging stream before the data is actually written. + ## + ## rec: The weird columns about to be logged to the weird stream. global log_weird: event(rec: Info); } diff --git a/scripts/base/frameworks/packet-filter/main.bro b/scripts/base/frameworks/packet-filter/main.bro index 1097315172..afcb5bd700 100644 --- a/scripts/base/frameworks/packet-filter/main.bro +++ b/scripts/base/frameworks/packet-filter/main.bro @@ -9,30 +9,35 @@ module PacketFilter; export { + ## Add the packet filter logging stream. redef enum Log::ID += { LOG }; - + + ## Add notice types related to packet filter errors. redef enum Notice::Type += { ## This notice is generated if a packet filter is unable to be compiled. Compile_Failure, - - ## This notice is generated if a packet filter is unable to be installed. + + ## This notice is generated if a packet filter is fails to install. Install_Failure, }; - + + ## The record type defining columns to be logged in the packet filter + ## logging stream. type Info: record { + ## The time at which the packet filter installation attempt was made. ts: time &log; - + ## This is a string representation of the node that applied this ## packet filter. It's mostly useful in the context of dynamically ## changing filters on clusters. node: string &log &optional; - + ## The packet filter that is being set. filter: string &log; - + ## Indicate if this is the filter set during initialization. init: bool &log &default=F; - + ## Indicate if the filter was applied successfully. success: bool &log &default=T; }; @@ -40,19 +45,19 @@ export { ## By default, Bro will examine all packets. 
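# Example (illustrative sketch): individual weird names can be re-mapped from
# a local script, assuming the actions table keeps its &redef attribute as in
# the shipped base script.
redef Weird::actions += {
	["truncated_NTP"] = Weird::ACTION_LOG_ONCE,
	["bad_IP_checksum"] = Weird::ACTION_IGNORE,
};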
If this is set to false, ## it will dynamically build a BPF filter that only select protocols ## for which the user has loaded a corresponding analysis script. - ## The latter used to be default for Bro versions < 1.6. That has now + ## The latter used to be default for Bro versions < 2.0. That has now ## changed however to enable port-independent protocol analysis. const all_packets = T &redef; - - ## Filter string which is unconditionally or'ed to the beginning of every + + ## Filter string which is unconditionally or'ed to the beginning of every ## dynamically built filter. const unrestricted_filter = "" &redef; - + ## Call this function to build and install a new dynamically built ## packet filter. global install: function(); - - ## This is where the default packet filter is stored and it should not + + ## This is where the default packet filter is stored and it should not ## normally be modified by users. global default_filter = ""; } @@ -80,35 +85,26 @@ function build_default_filter(): string return cmd_line_bpf_filter; if ( all_packets ) - { # Return an "always true" filter. - if ( bro_has_ipv6() ) - return "ip or not ip"; - else - return "not ip6"; - } + return "ip or not ip"; # Build filter dynamically. - + # First the capture_filter. local cfilter = ""; for ( id in capture_filters ) cfilter = combine_filters(cfilter, capture_filters[id], "or"); - + # Then the restrict_filter. local rfilter = ""; for ( id in restrict_filters ) rfilter = combine_filters(rfilter, restrict_filters[id], "and"); - + # Finally, join them into one filter. local filter = combine_filters(rfilter, cfilter, "and"); if ( unrestricted_filter != "" ) filter = combine_filters(unrestricted_filter, filter, "or"); - - # Exclude IPv6 if we don't support it. - if ( ! bro_has_ipv6() ) - filter = combine_filters(filter, "not ip6", "and"); - + return filter; } @@ -118,32 +114,32 @@ function install() if ( ! precompile_pcap_filter(DefaultPcapFilter, default_filter) ) { - NOTICE([$note=Compile_Failure, + NOTICE([$note=Compile_Failure, $msg=fmt("Compiling packet filter failed"), $sub=default_filter]); Reporter::fatal(fmt("Bad pcap filter '%s'", default_filter)); } - + # Do an audit log for the packet filter. local info: Info; info$ts = network_time(); # If network_time() is 0.0 we're at init time so use the wall clock. - if ( info$ts == 0.0 ) + if ( info$ts == 0.0 ) { info$ts = current_time(); info$init = T; } info$filter = default_filter; - + if ( ! install_pcap_filter(DefaultPcapFilter) ) { # Installing the filter failed for some reason. info$success = F; - NOTICE([$note=Install_Failure, + NOTICE([$note=Install_Failure, $msg=fmt("Installing packet filter failed"), $sub=default_filter]); } - + if ( reading_live_traffic() || reading_traces() ) Log::write(PacketFilter::LOG, info); } diff --git a/scripts/base/frameworks/packet-filter/netstats.bro b/scripts/base/frameworks/packet-filter/netstats.bro index 69b5026515..9fbaa5cd1d 100644 --- a/scripts/base/frameworks/packet-filter/netstats.bro +++ b/scripts/base/frameworks/packet-filter/netstats.bro @@ -1,4 +1,6 @@ ##! This script reports on packet loss from the various packet sources. +##! When Bro is reading input from trace files, this script will not +##! report any packet loss statistics. @load base/frameworks/notice @@ -6,7 +8,7 @@ module PacketFilter; export { redef enum Notice::Type += { - ## Bro reported packets dropped by the packet filter. + ## Indicates packets were dropped by the packet filter. 
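# Example (illustrative sketch): with all_packets turned off, the dynamic BPF
# filter is assembled from the capture_filters and restrict_filters tables
# iterated above. The "custom-dns" and "local-only" labels are arbitrary.
redef PacketFilter::all_packets = F;
redef capture_filters += { ["custom-dns"] = "udp port 53" };
redef restrict_filters += { ["local-only"] = "net 10.0.0.0/8 or net 192.168.0.0/16" };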
Dropped_Packets, }; diff --git a/scripts/base/frameworks/reporter/main.bro b/scripts/base/frameworks/reporter/main.bro index e70106f39a..edc5b1779a 100644 --- a/scripts/base/frameworks/reporter/main.bro +++ b/scripts/base/frameworks/reporter/main.bro @@ -1,44 +1,90 @@ -##! This framework is intended to create an output and filtering path for -##! internal messages/warnings/errors. It should typically be loaded to -##! avoid Bro spewing internal messages to standard error. +##! This framework is intended to create an output and filtering path for +##! internal messages/warnings/errors. It should typically be loaded to +##! avoid Bro spewing internal messages to standard error and instead log +##! them to a file in a standard way. Note that this framework deals with +##! the handling of internally-generated reporter messages, for the +##! interface into actually creating reporter messages from the scripting +##! layer, use the built-in functions in :doc:`/scripts/base/reporter.bif`. module Reporter; export { + ## The reporter logging stream identifier. redef enum Log::ID += { LOG }; - - type Level: enum { - INFO, - WARNING, + + ## An indicator of reporter message severity. + type Level: enum { + ## Informational, not needing specific attention. + INFO, + ## Warning of a potential problem. + WARNING, + ## A non-fatal error that should be addressed, but doesn't + ## terminate program execution. ERROR }; - + + ## The record type which contains the column fields of the reporter log. type Info: record { + ## The network time at which the reporter event was generated. ts: time &log; + ## The severity of the reporter message. level: Level &log; + ## An info/warning/error message that could have either been + ## generated from the internal Bro core or at the scripting-layer. message: string &log; ## This is the location in a Bro script where the message originated. ## Not all reporter messages will have locations in them though. location: string &log &optional; }; + + ## Tunable for sending reporter warning messages to STDERR. The option to + ## turn it off is presented here in case Bro is being run by some + ## external harness and shouldn't output anything to the console. + const warnings_to_stderr = T &redef; + + ## Tunable for sending reporter error messages to STDERR. The option to + ## turn it off is presented here in case Bro is being run by some + ## external harness and shouldn't output anything to the console. 
+ const errors_to_stderr = T &redef; } +global stderr: file; + event bro_init() &priority=5 { Log::create_stream(Reporter::LOG, [$columns=Info]); + + if ( errors_to_stderr || warnings_to_stderr ) + stderr = open("/dev/stderr"); } -event reporter_info(t: time, msg: string, location: string) +event reporter_info(t: time, msg: string, location: string) &priority=-5 { Log::write(Reporter::LOG, [$ts=t, $level=INFO, $message=msg, $location=location]); } - -event reporter_warning(t: time, msg: string, location: string) + +event reporter_warning(t: time, msg: string, location: string) &priority=-5 { + if ( warnings_to_stderr ) + { + if ( t > double_to_time(0.0) ) + print stderr, fmt("WARNING: %.6f %s (%s)", t, msg, location); + else + print stderr, fmt("WARNING: %s (%s)", msg, location); + } + Log::write(Reporter::LOG, [$ts=t, $level=WARNING, $message=msg, $location=location]); } -event reporter_error(t: time, msg: string, location: string) +event reporter_error(t: time, msg: string, location: string) &priority=-5 { + if ( errors_to_stderr ) + { + if ( t > double_to_time(0.0) ) + print stderr, fmt("ERROR: %.6f %s (%s)", t, msg, location); + else + print stderr, fmt("ERROR: %s (%s)", msg, location); + } + Log::write(Reporter::LOG, [$ts=t, $level=ERROR, $message=msg, $location=location]); } diff --git a/scripts/base/frameworks/signatures/main.bro b/scripts/base/frameworks/signatures/main.bro index 4811fdd5a9..26f78a68d1 100644 --- a/scripts/base/frameworks/signatures/main.bro +++ b/scripts/base/frameworks/signatures/main.bro @@ -1,30 +1,36 @@ -##! Script level signature support. +##! Script level signature support. See the +##! :doc:`signature documentation ` for more information about +##! Bro's signature engine. @load base/frameworks/notice module Signatures; export { + ## Add various signature-related notice types. redef enum Notice::Type += { - ## Generic for alarm-worthy + ## Generic notice type for notice-worthy signature matches. Sensitive_Signature, ## Host has triggered many signatures on the same host. The number of - ## signatures is defined by the :bro:id:`vert_scan_thresholds` variable. + ## signatures is defined by the + ## :bro:id:`Signatures::vert_scan_thresholds` variable. Multiple_Signatures, - ## Host has triggered the same signature on multiple hosts as defined by the - ## :bro:id:`horiz_scan_thresholds` variable. + ## Host has triggered the same signature on multiple hosts as defined + ## by the :bro:id:`Signatures::horiz_scan_thresholds` variable. Multiple_Sig_Responders, - ## The same signature has triggered multiple times for a host. The number - ## of times the signature has be trigger is defined by the - ## :bro:id:`count_thresholds` variable. To generate this notice, the - ## :bro:enum:`SIG_COUNT_PER_RESP` action must be set for the signature. + ## The same signature has triggered multiple times for a host. The + ## number of times the signature has been triggered is defined by the + ## :bro:id:`Signatures::count_thresholds` variable. To generate this + ## notice, the :bro:enum:`Signatures::SIG_COUNT_PER_RESP` action must + ## bet set for the signature. Count_Signature, ## Summarize the number of times a host triggered a signature. The - ## interval between summaries is defined by the :bro:id:`summary_interval` - ## variable. + ## interval between summaries is defined by the + ## :bro:id:`Signatures::summary_interval` variable. Signature_Summary, }; + ## The signature logging stream identifier. 
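# Example (illustrative sketch): how the reporter pieces fit together from the
# script layer. Messages created with the reporter built-ins end up in
# reporter.log via the handlers above; the new tunables control the extra
# copy on stderr.
redef Reporter::warnings_to_stderr = F;
redef Reporter::errors_to_stderr = F;

event bro_init()
	{
	# Ends up in reporter.log with level INFO.
	Reporter::info("site scripts initialized");
	}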
redef enum Log::ID += { LOG }; ## These are the default actions you can apply to signature matches. @@ -39,8 +45,8 @@ export { SIG_QUIET, ## Generate a notice. SIG_LOG, - ## The same as :bro:enum:`SIG_FILE`, but ignore for aggregate/scan - ## processing. + ## The same as :bro:enum:`Signatures::SIG_LOG`, but ignore for + ## aggregate/scan processing. SIG_FILE_BUT_NO_SCAN, ## Generate a notice and set it to be alarmed upon. SIG_ALARM, @@ -49,22 +55,33 @@ export { ## Alarm once and then never again. SIG_ALARM_ONCE, ## Count signatures per responder host and alarm with the - ## :bro:enum:`Count_Signature` notice if a threshold defined by - ## :bro:id:`count_thresholds` is reached. + ## :bro:enum:`Signatures::Count_Signature` notice if a threshold + ## defined by :bro:id:`Signatures::count_thresholds` is reached. SIG_COUNT_PER_RESP, ## Don't alarm, but generate per-orig summary. SIG_SUMMARY, }; - + + ## The record type which contains the column fields of the signature log. type Info: record { + ## The network time at which a signature matching type of event to + ## be logged has occurred. ts: time &log; + ## The host which triggered the signature match event. src_addr: addr &log &optional; + ## The host port on which the signature-matching activity occurred. src_port: port &log &optional; + ## The destination host which was sent the payload that triggered the + ## signature match. dst_addr: addr &log &optional; + ## The destination host port which was sent the payload that triggered + ## the signature match. dst_port: port &log &optional; ## Notice associated with signature event note: Notice::Type &log; + ## The name of the signature that matched. sig_id: string &log &optional; + ## A more descriptive message of the signature-matching event. event_msg: string &log &optional; ## Extracted payload data or extra message. sub_msg: string &log &optional; @@ -82,22 +99,26 @@ export { ## Signature IDs that should always be ignored. const ignored_ids = /NO_DEFAULT_MATCHES/ &redef; - ## Alarm if, for a pair [orig, signature], the number of different - ## responders has reached one of the thresholds. + ## Generate a notice if, for a pair [orig, signature], the number of + ## different responders has reached one of the thresholds. const horiz_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef; - ## Alarm if, for a pair [orig, resp], the number of different signature - ## matches has reached one of the thresholds. + ## Generate a notice if, for a pair [orig, resp], the number of different + ## signature matches has reached one of the thresholds. const vert_scan_thresholds = { 5, 10, 50, 100, 500, 1000 } &redef; - ## Alarm if a :bro:enum:`SIG_COUNT_PER_RESP` signature is triggered as - ## often as given by one of these thresholds. + ## Generate a notice if a :bro:enum:`Signatures::SIG_COUNT_PER_RESP` + ## signature is triggered as often as given by one of these thresholds. const count_thresholds = { 5, 10, 50, 100, 500, 1000, 10000, 1000000, } &redef; - ## The interval between when :bro:id:`Signature_Summary` notices are - ## generated. + ## The interval between when :bro:enum:`Signatures::Signature_Summary` + ## notice are generated. const summary_interval = 1 day &redef; - + + ## This event can be handled to access/alter data about to be logged + ## to the signature logging stream. + ## + ## rec: The record of signature data about to be logged. 
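# Example (illustrative sketch): typical local tuning of the knobs documented
# above. The extra signature id added to the pattern is a placeholder.
redef Signatures::ignored_ids = /NO_DEFAULT_MATCHES|my-noisy-sig/;
redef Signatures::summary_interval = 12 hr;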
global log_signature: event(rec: Info); } diff --git a/scripts/base/frameworks/software/main.bro b/scripts/base/frameworks/software/main.bro index 574886288a..7471076335 100644 --- a/scripts/base/frameworks/software/main.bro +++ b/scripts/base/frameworks/software/main.bro @@ -1,5 +1,5 @@ ##! This script provides the framework for software version detection and -##! parsing, but doesn't actually do any detection on it's own. It relys on +##! parsing but doesn't actually do any detection on it's own. It relys on ##! other protocol specific scripts to parse out software from the protocols ##! that they analyze. The entry point for providing new software detections ##! to this framework is through the :bro:id:`Software::found` function. @@ -10,39 +10,46 @@ module Software; export { - + ## The software logging stream identifier. redef enum Log::ID += { LOG }; - + + ## Scripts detecting new types of software need to redef this enum to add + ## their own specific software types which would then be used when they + ## create :bro:type:`Software::Info` records. type Type: enum { + ## A placeholder type for when the type of software is not known. UNKNOWN, - OPERATING_SYSTEM, - DATABASE_SERVER, - # There are a number of ways to detect printers on the - # network, we just need to codify them in a script and move - # this out of here. It isn't currently used for anything. - PRINTER, }; - + + ## A structure to represent the numeric version of software. type Version: record { - major: count &optional; ##< Major version number - minor: count &optional; ##< Minor version number - minor2: count &optional; ##< Minor subversion number - addl: string &optional; ##< Additional version string (e.g. "beta42") + ## Major version number + major: count &optional; + ## Minor version number + minor: count &optional; + ## Minor subversion number + minor2: count &optional; + ## Additional version string (e.g. "beta42") + addl: string &optional; } &log; - + + ## The record type that is used for representing and logging software. type Info: record { - ## The time at which the software was first detected. - ts: time &log; + ## The time at which the software was detected. + ts: time &log &optional; ## The IP address detected running the software. host: addr &log; - ## The type of software detected (e.g. WEB_SERVER) + ## The Port on which the software is running. Only sensible for server software. + host_p: port &log &optional; + ## The type of software detected (e.g. :bro:enum:`HTTP::SERVER`). software_type: Type &log &default=UNKNOWN; - ## Name of the software (e.g. Apache) - name: string &log; - ## Version of the software - version: Version &log; + ## Name of the software (e.g. Apache). + name: string &log &optional; + ## Version of the software. + version: Version &log &optional; ## The full unparsed version string found because the version parsing - ## doesn't work 100% reliably and this acts as a fall back in the logs. + ## doesn't always work reliably in all cases and this acts as a + ## fallback in the logs. unparsed_version: string &log &optional; ## This can indicate that this software being detected should @@ -55,37 +62,29 @@ export { force_log: bool &default=F; }; - ## The hosts whose software should be detected and tracked. + ## Hosts whose software should be detected and tracked. ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS const asset_tracking = LOCAL_HOSTS &redef; - ## Other scripts should call this function when they detect software. 
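# Example (illustrative sketch): after this change, protocol scripts hand
# found() a connection id plus an Info record carrying either a pre-parsed
# name/version or just the raw unparsed_version string. The HTTP header
# matching below is only for illustration.
event http_header(c: connection, is_orig: bool, name: string, value: string)
	{
	# Server banners arrive in responses; the analyzer upper-cases header names.
	if ( is_orig || name != "SERVER" )
		return;

	Software::found(c$id, [$unparsed_version=value,
	                       $host=c$id$resp_h,
	                       $host_p=c$id$resp_p]);
	}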
- ## unparsed_version: This is the full string from which the - ## :bro:type:`Software::Info` was extracted. + ## id: The connection id where the software was discovered. + ## + ## info: A record representing the software discovered. + ## ## Returns: T if the software was logged, F otherwise. - global found: function(id: conn_id, info: Software::Info): bool; + global found: function(id: conn_id, info: Info): bool; - ## This function can take many software version strings and parse them - ## into a sensible :bro:type:`Software::Version` record. There are - ## still many cases where scripts may have to have their own specific - ## version parsing though. - global parse: function(unparsed_version: string, - host: addr, - software_type: Type): Info; - - ## Compare two versions. + ## Compare two version records. + ## ## Returns: -1 for v1 < v2, 0 for v1 == v2, 1 for v1 > v2. ## If the numerical version numbers match, the addl string ## is compared lexicographically. global cmp_versions: function(v1: Version, v2: Version): int; - ## This type represents a set of software. It's used by the - ## :bro:id:`tracked` variable to store all known pieces of software - ## for a particular host. It's indexed with the name of a piece of - ## software such as "Firefox" and it yields a - ## :bro:type:`Software::Info` record with more information about the - ## software. + ## Type to represent a collection of :bro:type:`Software::Info` records. + ## It's indexed with the name of a piece of software such as "Firefox" + ## and it yields a :bro:type:`Software::Info` record with more information + ## about the software. type SoftwareSet: table[string] of Info; ## The set of software associated with an address. Data expires from @@ -101,112 +100,23 @@ export { global log_software: event(rec: Info); } -event bro_init() +event bro_init() &priority=5 { Log::create_stream(Software::LOG, [$columns=Info, $ev=log_software]); } - -function parse_mozilla(unparsed_version: string, - host: addr, - software_type: Type): Info - { - local software_name = ""; - local v: Version; - local parts: table[count] of string; - if ( /Opera [0-9\.]*$/ in unparsed_version ) - { - software_name = "Opera"; - parts = split_all(unparsed_version, /Opera [0-9\.]*$/); - if ( 2 in parts ) - v = parse(parts[2], host, software_type)$version; - } - else if ( / MSIE / in unparsed_version ) - { - software_name = "MSIE"; - if ( /Trident\/4\.0/ in unparsed_version ) - v = [$major=8,$minor=0]; - else if ( /Trident\/5\.0/ in unparsed_version ) - v = [$major=9,$minor=0]; - else if ( /Trident\/6\.0/ in unparsed_version ) - v = [$major=10,$minor=0]; - else - { - parts = split_all(unparsed_version, /MSIE [0-9]{1,2}\.*[0-9]*b?[0-9]*/); - if ( 2 in parts ) - v = parse(parts[2], host, software_type)$version; - } - } - else if ( /Version\/.*Safari\// in unparsed_version ) - { - software_name = "Safari"; - parts = split_all(unparsed_version, /Version\/[0-9\.]*/); - if ( 2 in parts ) - { - v = parse(parts[2], host, software_type)$version; - if ( / Mobile\/?.* Safari/ in unparsed_version ) - v$addl = "Mobile"; - } - } - else if ( /(Firefox|Netscape|Thunderbird)\/[0-9\.]*/ in unparsed_version ) - { - parts = split_all(unparsed_version, /(Firefox|Netscape|Thunderbird)\/[0-9\.]*/); - if ( 2 in parts ) - { - local tmp_s = parse(parts[2], host, software_type); - software_name = tmp_s$name; - v = tmp_s$version; - } - } - else if ( /Chrome\/.*Safari\// in unparsed_version ) - { - software_name = "Chrome"; - parts = split_all(unparsed_version, /Chrome\/[0-9\.]*/); - if ( 2 in 
parts ) - v = parse(parts[2], host, software_type)$version; - } - else if ( /^Opera\// in unparsed_version ) - { - if ( /Opera M(ini|obi)\// in unparsed_version ) - { - parts = split_all(unparsed_version, /Opera M(ini|obi)/); - if ( 2 in parts ) - software_name = parts[2]; - parts = split_all(unparsed_version, /Version\/[0-9\.]*/); - if ( 2 in parts ) - v = parse(parts[2], host, software_type)$version; - else - { - parts = split_all(unparsed_version, /Opera Mini\/[0-9\.]*/); - if ( 2 in parts ) - v = parse(parts[2], host, software_type)$version; - } - } - else - { - software_name = "Opera"; - parts = split_all(unparsed_version, /Version\/[0-9\.]*/); - if ( 2 in parts ) - v = parse(parts[2], host, software_type)$version; - } - } - else if ( /AppleWebKit\/[0-9\.]*/ in unparsed_version ) - { - software_name = "Unspecified WebKit"; - parts = split_all(unparsed_version, /AppleWebKit\/[0-9\.]*/); - if ( 2 in parts ) - v = parse(parts[2], host, software_type)$version; - } +type Description: record { + name: string; + version: Version; + unparsed_version: string; +}; - return [$ts=network_time(), $host=host, $name=software_name, $version=v, - $software_type=software_type, $unparsed_version=unparsed_version]; - } +# Defining this here because of a circular dependency between two functions. +global parse_mozilla: function(unparsed_version: string): Description; # Don't even try to understand this now, just make sure the tests are # working. -function parse(unparsed_version: string, - host: addr, - software_type: Type): Info +function parse(unparsed_version: string): Description { local software_name = ""; local v: Version; @@ -214,7 +124,7 @@ function parse(unparsed_version: string, # Parse browser-alike versions separately if ( /^(Mozilla|Opera)\/[0-9]\./ in unparsed_version ) { - return parse_mozilla(unparsed_version, host, software_type); + return parse_mozilla(unparsed_version); } else { @@ -239,7 +149,7 @@ function parse(unparsed_version: string, if ( 4 in version_numbers && version_numbers[4] != "" ) v$addl = strip(version_numbers[4]); else if ( 3 in version_parts && version_parts[3] != "" && - version_parts[3] != ")" ) + version_parts[3] != ")" ) { if ( /^[[:blank:]]*\([a-zA-Z0-9\-\._[:blank:]]*\)/ in version_parts[3] ) { @@ -276,9 +186,102 @@ function parse(unparsed_version: string, v$major = extract_count(version_numbers[1]); } } - return [$ts=network_time(), $host=host, $name=software_name, - $version=v, $unparsed_version=unparsed_version, - $software_type=software_type]; + + return [$version=v, $unparsed_version=unparsed_version, $name=software_name]; + } + + +function parse_mozilla(unparsed_version: string): Description + { + local software_name = ""; + local v: Version; + local parts: table[count] of string; + + if ( /Opera [0-9\.]*$/ in unparsed_version ) + { + software_name = "Opera"; + parts = split_all(unparsed_version, /Opera [0-9\.]*$/); + if ( 2 in parts ) + v = parse(parts[2])$version; + } + else if ( / MSIE / in unparsed_version ) + { + software_name = "MSIE"; + if ( /Trident\/4\.0/ in unparsed_version ) + v = [$major=8,$minor=0]; + else if ( /Trident\/5\.0/ in unparsed_version ) + v = [$major=9,$minor=0]; + else if ( /Trident\/6\.0/ in unparsed_version ) + v = [$major=10,$minor=0]; + else + { + parts = split_all(unparsed_version, /MSIE [0-9]{1,2}\.*[0-9]*b?[0-9]*/); + if ( 2 in parts ) + v = parse(parts[2])$version; + } + } + else if ( /Version\/.*Safari\// in unparsed_version ) + { + software_name = "Safari"; + parts = split_all(unparsed_version, /Version\/[0-9\.]*/); + if ( 
2 in parts ) + { + v = parse(parts[2])$version; + if ( / Mobile\/?.* Safari/ in unparsed_version ) + v$addl = "Mobile"; + } + } + else if ( /(Firefox|Netscape|Thunderbird)\/[0-9\.]*/ in unparsed_version ) + { + parts = split_all(unparsed_version, /(Firefox|Netscape|Thunderbird)\/[0-9\.]*/); + if ( 2 in parts ) + { + local tmp_s = parse(parts[2]); + software_name = tmp_s$name; + v = tmp_s$version; + } + } + else if ( /Chrome\/.*Safari\// in unparsed_version ) + { + software_name = "Chrome"; + parts = split_all(unparsed_version, /Chrome\/[0-9\.]*/); + if ( 2 in parts ) + v = parse(parts[2])$version; + } + else if ( /^Opera\// in unparsed_version ) + { + if ( /Opera M(ini|obi)\// in unparsed_version ) + { + parts = split_all(unparsed_version, /Opera M(ini|obi)/); + if ( 2 in parts ) + software_name = parts[2]; + parts = split_all(unparsed_version, /Version\/[0-9\.]*/); + if ( 2 in parts ) + v = parse(parts[2])$version; + else + { + parts = split_all(unparsed_version, /Opera Mini\/[0-9\.]*/); + if ( 2 in parts ) + v = parse(parts[2])$version; + } + } + else + { + software_name = "Opera"; + parts = split_all(unparsed_version, /Version\/[0-9\.]*/); + if ( 2 in parts ) + v = parse(parts[2])$version; + } + } + else if ( /AppleWebKit\/[0-9\.]*/ in unparsed_version ) + { + software_name = "Unspecified WebKit"; + parts = split_all(unparsed_version, /AppleWebKit\/[0-9\.]*/); + if ( 2 in parts ) + v = parse(parts[2])$version; + } + + return [$version=v, $unparsed_version=unparsed_version, $name=software_name]; } @@ -363,7 +366,7 @@ function software_fmt(i: Info): string # Insert a mapping into the table # Overides old entries for the same software and generates events if needed. -event software_register(id: conn_id, info: Info) +event register(id: conn_id, info: Info) { # Host already known? if ( info$host !in tracked ) @@ -391,7 +394,31 @@ function found(id: conn_id, info: Info): bool { if ( info$force_log || addr_matches_host(info$host, asset_tracking) ) { - event software_register(id, info); + if ( !info?$ts ) + info$ts=network_time(); + + if ( info?$version ) # we have a version number and don't have to parse. check if the name is also set... + { + if ( ! info?$name ) + { + Reporter::error("Required field name not present in Software::found"); + return F; + } + } + else # no version present, we have to parse... + { + if ( !info?$unparsed_version ) + { + Reporter::error("No unparsed version string present in Info record with version in Software::found"); + return F; + } + local sw = parse(info$unparsed_version); + info$unparsed_version = sw$unparsed_version; + info$name = sw$name; + info$version = sw$version; + } + + event register(id, info); return T; } else diff --git a/scripts/base/frameworks/tunnels/__load__.bro b/scripts/base/frameworks/tunnels/__load__.bro new file mode 100644 index 0000000000..a10fe855df --- /dev/null +++ b/scripts/base/frameworks/tunnels/__load__.bro @@ -0,0 +1 @@ +@load ./main diff --git a/scripts/base/frameworks/tunnels/main.bro b/scripts/base/frameworks/tunnels/main.bro new file mode 100644 index 0000000000..0861559558 --- /dev/null +++ b/scripts/base/frameworks/tunnels/main.bro @@ -0,0 +1,149 @@ +##! This script handles the tracking/logging of tunnels (e.g. Teredo, +##! AYIYA, or IP-in-IP such as 6to4 where "IP" is either IPv4 or IPv6). +##! +##! For any connection that occurs over a tunnel, information about its +##! encapsulating tunnels is also found in the *tunnel* field of +##! :bro:type:`connection`. 
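# Example (illustrative sketch): user scripts can consume the encapsulation
# information tracked here either via the per-connection *tunnel* field or
# the Tunnel::active table.
event connection_established(c: connection)
	{
	if ( ! c?$tunnel )
		return;

	local outer = c$tunnel[0];
	print fmt("%s runs inside a %s tunnel %s -> %s (depth %d)",
	          c$uid, outer$tunnel_type,
	          outer$cid$orig_h, outer$cid$resp_h, |c$tunnel|);
	}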
+ +module Tunnel; + +export { + ## The tunnel logging stream identifier. + redef enum Log::ID += { LOG }; + + ## Types of interesting activity that can occur with a tunnel. + type Action: enum { + ## A new tunnel (encapsulating "connection") has been seen. + DISCOVER, + ## A tunnel connection has closed. + CLOSE, + ## No new connections over a tunnel happened in the amount of + ## time indicated by :bro:see:`Tunnel::expiration_interval`. + EXPIRE, + }; + + ## The record type which contains column fields of the tunnel log. + type Info: record { + ## Time at which some tunnel activity occurred. + ts: time &log; + ## The unique identifier for the tunnel, which may correspond + ## to a :bro:type:`connection`'s *uid* field for non-IP-in-IP tunnels. + ## This is optional because there could be numerous connections + ## for payload proxies like SOCKS but we should treat it as a single + ## tunnel. + uid: string &log &optional; + ## The tunnel "connection" 4-tuple of endpoint addresses/ports. + ## For an IP tunnel, the ports will be 0. + id: conn_id &log; + ## The type of tunnel. + tunnel_type: Tunnel::Type &log; + ## The type of activity that occurred. + action: Action &log; + }; + + ## Logs all tunnels in an encapsulation chain with action + ## :bro:see:`Tunnel::DISCOVER` that aren't already in the + ## :bro:id:`Tunnel::active` table and adds them if not. + global register_all: function(ecv: EncapsulatingConnVector); + + ## Logs a single tunnel "connection" with action + ## :bro:see:`Tunnel::DISCOVER` if it's not already in the + ## :bro:id:`Tunnel::active` table and adds it if not. + global register: function(ec: EncapsulatingConn); + + ## Logs a single tunnel "connection" with action + ## :bro:see:`Tunnel::EXPIRE` and removes it from the + ## :bro:id:`Tunnel::active` table. + ## + ## t: A table of tunnels. + ## + ## idx: The index of the tunnel table corresponding to the tunnel to expire. + ## + ## Returns: 0secs, which when this function is used as an + ## :bro:attr:`&expire_func`, indicates to remove the element at + ## *idx* immediately. + global expire: function(t: table[conn_id] of Info, idx: conn_id): interval; + + ## Removes a single tunnel from the :bro:id:`Tunnel::active` table + ## and logs the closing/expiration of the tunnel. + ## + ## tunnel: The tunnel which has closed or expired. + ## + ## action: The specific reason for the tunnel ending. + global close: function(tunnel: Info, action: Action); + + ## The amount of time a tunnel is not used in establishment of new + ## connections before it is considered inactive/expired. + const expiration_interval = 1hrs &redef; + + ## Currently active tunnels. That is, tunnels for which new, encapsulated + ## connections have been seen in the interval indicated by + ## :bro:see:`Tunnel::expiration_interval`. 
+ global active: table[conn_id] of Info = table() &read_expire=expiration_interval &expire_func=expire; +} + +const ayiya_ports = { 5072/udp }; +redef dpd_config += { [ANALYZER_AYIYA] = [$ports = ayiya_ports] }; + +const teredo_ports = { 3544/udp }; +redef dpd_config += { [ANALYZER_TEREDO] = [$ports = teredo_ports] }; + +redef likely_server_ports += { ayiya_ports, teredo_ports }; + +event bro_init() &priority=5 + { + Log::create_stream(Tunnel::LOG, [$columns=Info]); + } + +function register_all(ecv: EncapsulatingConnVector) + { + for ( i in ecv ) + register(ecv[i]); + } + +function register(ec: EncapsulatingConn) + { + if ( ec$cid !in active ) + { + local tunnel: Info; + tunnel$ts = network_time(); + if ( ec?$uid ) + tunnel$uid = ec$uid; + tunnel$id = ec$cid; + tunnel$action = DISCOVER; + tunnel$tunnel_type = ec$tunnel_type; + active[ec$cid] = tunnel; + Log::write(LOG, tunnel); + } + } + +function close(tunnel: Info, action: Action) + { + tunnel$action = action; + tunnel$ts = network_time(); + Log::write(LOG, tunnel); + delete active[tunnel$id]; + } + +function expire(t: table[conn_id] of Info, idx: conn_id): interval + { + close(t[idx], EXPIRE); + return 0secs; + } + +event new_connection(c: connection) &priority=5 + { + if ( c?$tunnel ) + register_all(c$tunnel); + } + +event tunnel_changed(c: connection, e: EncapsulatingConnVector) &priority=5 + { + register_all(e); + } + +event connection_state_remove(c: connection) &priority=-5 + { + if ( c$id in active ) + close(active[c$id], CLOSE); + } diff --git a/scripts/base/init-bare.bro b/scripts/base/init-bare.bro index 859a69f2dc..70026394e9 100644 --- a/scripts/base/init-bare.bro +++ b/scripts/base/init-bare.bro @@ -2,172 +2,412 @@ @load base/types.bif # Type declarations + +## An ordered array of strings. The entries are indexed by succesive numbers. Note +## that it depends on the usage whether the first index is zero or one. +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type string_array: table[count] of string; + +## A set of strings. +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type string_set: set[string]; + +## A set of addresses. +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type addr_set: set[addr]; + +## A set of counts. +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type count_set: set[count]; + +## A vector of counts, used by some builtin functions to store a list of indices. +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type index_vec: vector of count; + +## A vector of strings. +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type string_vec: vector of string; +## A vector of addresses. +## +## .. 
todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. +type addr_vec: vector of addr; + +## A table of strings indexed by strings. +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type table_string_of_string: table[string] of string; -type transport_proto: enum { unknown_transport, tcp, udp, icmp }; +## A connection's transport-layer protocol. Note that Bro uses the term +## "connection" broadly, using flow semantics for ICMP and UDP. +type transport_proto: enum { + unknown_transport, ##< An unknown transport-layer protocol. + tcp, ##< TCP. + udp, ##< UDP. + icmp ##< ICMP. +}; +## A connection's identifying 4-tuple of endpoints and ports. +## +## .. note:: It's actually a 5-tuple: the transport-layer protocol is stored as +## part of the port values, `orig_p` and `resp_p`, and can be extracted from them +## with :bro:id:`get_port_transport_proto`. type conn_id: record { - orig_h: addr; - orig_p: port; - resp_h: addr; - resp_p: port; + orig_h: addr; ##< The originator's IP address. + orig_p: port; ##< The originator's port number. + resp_h: addr; ##< The responder's IP address. + resp_p: port; ##< The responder's port number. } &log; +## Specifics about an ICMP conversation. ICMP events typically pass this in +## addition to :bro:type:`conn_id`. +## +## .. bro:see:: icmp_echo_reply icmp_echo_request icmp_redirect icmp_sent +## icmp_time_exceeded icmp_unreachable type icmp_conn: record { - orig_h: addr; - resp_h: addr; - itype: count; - icode: count; - len: count; -}; - -type icmp_hdr: record { - icmp_type: count; ##< type of message + orig_h: addr; ##< The originator's IP address. + resp_h: addr; ##< The responder's IP address. + itype: count; ##< The ICMP type of the packet that triggered the instantiation of the record. + icode: count; ##< The ICMP code of the packet that triggered the instantiation of the record. + len: count; ##< The length of the ICMP payload of the packet that triggered the instantiation of the record. + hlim: count; ##< The encapsulating IP header's Hop Limit value. + v6: bool; ##< True if it's an ICMPv6 packet. }; +## Packet context part of an ICMP message. The fields of this record reflect the +## packet that is described by the context. +## +## .. bro:see:: icmp_time_exceeded icmp_unreachable type icmp_context: record { - id: conn_id; - len: count; - proto: count; - frag_offset: count; + id: conn_id; ##< The packet's 4-tuple. + len: count; ##< The length of the IP packet (headers + payload). + proto: count; ##< The packet's transport-layer protocol. + frag_offset: count; ##< The packet's fragmentation offset. + ## True if the packet's IP header is not fully included in the context + ## or if there is not enough of the transport header to determine source + ## and destination ports. If that is the case, the appropriate fields + ## of this record will be set to null values. bad_hdr_len: bool; - bad_checksum: bool; - MF: bool; - DF: bool; + bad_checksum: bool; ##< True if the packet's IP checksum is not correct. + MF: bool; ##< True if the packet's *more fragments* flag is set. + DF: bool; ##< True if the packet's *don't fragment* flag is set. };
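# Example (illustrative sketch): where icmp_context shows up in practice. An
# event handler can inspect the embedded packet that triggered an ICMP
# unreachable, using the fields documented above.
event icmp_unreachable(c: connection, icmp: icmp_conn, code: count, context: icmp_context)
	{
	if ( context$bad_hdr_len )
		return;

	print fmt("unreachable (code %d) for %s -> %s, proto %d",
	          code, context$id$orig_h, context$id$resp_h, context$proto);
	}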
+## Values extracted from a Prefix Information option in an ICMPv6 neighbor +## discovery message as specified by :rfc:`4861`. +## +## .. bro:see:: icmp6_nd_option +type icmp6_nd_prefix_info: record { + ## Number of leading bits of the *prefix* that are valid. + prefix_len: count; + ## Flag indicating the prefix can be used for on-link determination. + L_flag: bool; + ## Autonomous address-configuration flag. + A_flag: bool; + ## Length of time in seconds that the prefix is valid for the purpose of + ## on-link determination (0xffffffff represents infinity). + valid_lifetime: interval; + ## Length of time in seconds that the addresses generated from the prefix + ## via stateless address autoconfiguration remain preferred + ## (0xffffffff represents infinity). + preferred_lifetime: interval; + ## An IP address or prefix of an IP address. Use the *prefix_len* field + ## to convert this into a :bro:type:`subnet`. + prefix: addr; +}; + +## Options extracted from ICMPv6 neighbor discovery messages as specified +## by :rfc:`4861`. +## +## .. bro:see:: icmp_router_solicitation icmp_router_advertisement +## icmp_neighbor_advertisement icmp_neighbor_solicitation icmp_redirect +## icmp6_nd_options +type icmp6_nd_option: record { + ## 8-bit identifier of the type of option. + otype: count; + ## 8-bit integer representing the length of the option (including the type + ## and length fields) in units of 8 octets. + len: count; + ## Source Link-Layer Address (Type 1) or Target Link-Layer Address (Type 2). + ## Byte ordering of this is dependent on the actual link-layer. + link_address: string &optional; + ## Prefix Information (Type 3). + prefix: icmp6_nd_prefix_info &optional; + ## Redirected header (Type 4). This field contains the context of the + ## original, redirected packet. + redirect: icmp_context &optional; + ## Recommended MTU for the link (Type 5). + mtu: count &optional; + ## The raw data of the option (everything after the type & length fields), + ## useful for unknown option types or when the full option payload is + ## truncated in the captured packet. In those cases, option fields + ## won't be pre-extracted into the fields above. + payload: string &optional; +}; + +## A type alias for a vector of ICMPv6 neighbor discovery message options. +type icmp6_nd_options: vector of icmp6_nd_option; + +# A DNS mapping between IP address and hostname resolved by Bro's internal +# resolver. +# +# .. bro:see:: dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +# dns_mapping_unverified dns_mapping_valid type dns_mapping: record { + ## The time when the mapping was created, which corresponds to when the DNS + ## query was sent out. creation_time: time; - + ## If the mapping is the result of a name lookup, the queried host name; otherwise + ## empty. req_host: string; + ## If the mapping is the result of a pointer lookup, the queried address; otherwise + ## null. req_addr: addr; - + ## True if the lookup returned success. Only then, the result fields are valid. valid: bool; + ## If the mapping is the result of a pointer lookup, the resolved hostname; + ## otherwise empty. hostname: string; + ## If the mapping is the result of an address lookup, the resolved address(es); + ## otherwise empty. addrs: addr_set; }; +## A parsed host/port combination describing the server endpoint for an upcoming +## data transfer. +## +## ..
bro:see:: fmt_ftp_port parse_eftp_port parse_ftp_epsv parse_ftp_pasv +## parse_ftp_port type ftp_port: record { - h: addr; - p: port; - valid: bool; ##< true if format was right -}; - -type endpoint: record { - size: count; ##< logical size (for TCP: from seq numbers) - state: count; - - ## Number of packets on the wire - ## Set if :bro:id:`use_conn_size_analyzer` is true. - num_pkts: count &optional; - ## Number of IP-level bytes on the wire - ## Set if :bro:id:`use_conn_size_analyzer` is true. - num_bytes_ip: count &optional; + h: addr; ##< The host's address. + p: port; ##< The host's port. + valid: bool; ##< True if the format was right. Only then, *h* and *p* are valid. }; +## Statistics about what a TCP endpoint sent. +## +## .. bro:see:: conn_stats type endpoint_stats: record { - num_pkts: count; - num_rxmit: count; - num_rxmit_bytes: count; - num_in_order: count; - num_OO: count; - num_repl: count; + num_pkts: count; ##< Number of packets. + num_rxmit: count; ##< Number of retransmissions. + num_rxmit_bytes: count; ##< Number of retransmitted bytes. + num_in_order: count; ##< Number of in-order packets. + num_OO: count; ##< Number of out-of-order packets. + num_repl: count; ##< Number of replicated packets (last packet was sent again). + ## Endian type used by the endpoint, if it could be determined from the sequence + ## numbers used. This is one of :bro:see:`ENDIAN_UNKNOWN`, :bro:see:`ENDIAN_BIG`, + ## :bro:see:`ENDIAN_LITTLE`, and :bro:see:`ENDIAN_CONFUSED`. endian_type: count; }; +## A unique analyzer instance ID. Each time Bro instantiates a protocol analyzer +## for a connection, it assigns it a unique ID that can be used to reference +## that instance. +## +## .. bro:see:: analyzer_name disable_analyzer protocol_confirmation +## protocol_violation +## +## .. todo:: While we declare an alias for the type here, the events/functions still +## use ``count``. That should be changed. type AnalyzerID: count; +module Tunnel; +export { + ## Records the identity of an encapsulating parent of a tunneled connection. + type EncapsulatingConn: record { + ## The 4-tuple of the encapsulating "connection". In case of an IP-in-IP + ## tunnel the ports will be set to 0. The direction (i.e., orig and + ## resp) is set according to the first tunneled packet seen + ## and not according to the side that established the tunnel. + cid: conn_id; + ## The type of tunnel. + tunnel_type: Tunnel::Type; + ## A globally unique identifier that, for non-IP-in-IP tunnels, + ## cross-references the *uid* field of :bro:type:`connection`. + uid: string &optional; + } &log; +} # end export +module GLOBAL; + +## A type alias for a vector of encapsulating "connections", i.e., for when +## there are tunnels within tunnels. +## +## .. todo:: We need this type definition only for declaring builtin functions +## via ``bifcl``. We should extend ``bifcl`` to understand composite types +## directly and then remove this alias. +type EncapsulatingConnVector: vector of Tunnel::EncapsulatingConn; + +## Statistics about a :bro:type:`connection` endpoint. +## +## .. bro:see:: connection +type endpoint: record { + size: count; ##< Logical size of data sent (for TCP: derived from sequence numbers). + ## Endpoint state. For TCP connections, one of the constants: + ## :bro:see:`TCP_INACTIVE` :bro:see:`TCP_SYN_SENT` :bro:see:`TCP_SYN_ACK_SENT` + ## :bro:see:`TCP_PARTIAL` :bro:see:`TCP_ESTABLISHED` :bro:see:`TCP_CLOSED` + ## :bro:see:`TCP_RESET`. For UDP, one of :bro:see:`UDP_ACTIVE` and + ## :bro:see:`UDP_INACTIVE`.
+## Statistics about a :bro:type:`connection` endpoint. +## +## .. bro:see:: connection +type endpoint: record { + size: count; ##< Logical size of data sent (for TCP: derived from sequence numbers). + ## Endpoint state. For a TCP connection, one of the constants: + ## :bro:see:`TCP_INACTIVE` :bro:see:`TCP_SYN_SENT` :bro:see:`TCP_SYN_ACK_SENT` + ## :bro:see:`TCP_PARTIAL` :bro:see:`TCP_ESTABLISHED` :bro:see:`TCP_CLOSED` + ## :bro:see:`TCP_RESET`. For UDP, one of :bro:see:`UDP_ACTIVE` and + ## :bro:see:`UDP_INACTIVE`. + state: count; + ## Number of packets sent. Only set if :bro:id:`use_conn_size_analyzer` is true. + num_pkts: count &optional; + ## Number of IP-level bytes sent. Only set if :bro:id:`use_conn_size_analyzer` is + ## true. + num_bytes_ip: count &optional; + ## The current IPv6 flow label that the connection endpoint is using. + ## Always 0 if the connection is over IPv4. + flow_label: count; +}; + +## A connection. This is Bro's basic connection type describing IP- and +## transport-layer information about the conversation. Note that Bro uses a +## liberal interpretation of "connection" and associates instances of this type +## also with UDP and ICMP flows. type connection: record { - id: conn_id; - orig: endpoint; - resp: endpoint; - start_time: time; + id: conn_id; ##< The connection's identifying 4-tuple. + orig: endpoint; ##< Statistics about the originator side. + resp: endpoint; ##< Statistics about the responder side. + start_time: time; ##< The timestamp of the connection's first packet. + ## The duration of the conversation. Roughly speaking, this is the interval between + ## first and last data packet (low-level TCP details may adjust it somewhat in + ## ambiguous cases). duration: interval; - service: string_set; ##< if empty, service hasn't been determined - addl: string; - hot: count; ##< how hot; 0 = don't know or not hot - history: string; + ## The set of services the connection is using as determined by Bro's dynamic + ## protocol detection. Each entry is the label of an analyzer that confirmed that + ## it could parse the connection payload. While typically there will be at + ## most one entry for each connection, in principle it is possible that more than + ## one protocol analyzer is able to parse the same data. If so, all will + ## be recorded. Also note that the recorded services are independent of any + ## transport-level protocols. + service: set[string]; + addl: string; ##< Deprecated. + hot: count; ##< Deprecated. + history: string; ##< State history of connections. See *history* in :bro:see:`Conn::Info`. + ## A globally unique connection identifier. For each connection, Bro creates an ID + ## that is very likely unique across independent Bro runs. These IDs can thus be + ## used to tag and locate information associated with that connection. uid: string; + ## If the connection is tunneled, this field contains information about + ## the encapsulating "connection(s)" with the outermost one starting + ## at index zero. It's also always the first such encapsulation seen + ## for the connection unless the :bro:id:`tunnel_changed` event is handled + ## and re-assigns this field to the new encapsulation. + tunnel: EncapsulatingConnVector &optional; };
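A minimal usage sketch, not part of the patch: the optional *tunnel* field has to be tested with the ?$ operator before access, for example in a new_connection handler.

    event new_connection(c: connection)
        {
        if ( c?$tunnel )
            # The outermost encapsulation is at index 0.
            print fmt("%s arrived tunneled via %s", c$uid, c$tunnel[0]$tunnel_type);
        }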
+## Fields of a SYN packet. +## +## .. bro:see:: connection_SYN_packet type SYN_packet: record { - is_orig: bool; - DF: bool; - ttl: count; - size: count; - win_size: count; - win_scale: int; - MSS: count; - SACK_OK: bool; + is_orig: bool; ##< True if the packet was sent by the connection's originator. + DF: bool; ##< True if the *don't fragment* flag is set in the IP header. + ttl: count; ##< The IP header's time-to-live. + size: count; ##< The size of the packet's payload as specified in the IP header. + win_size: count; ##< The window size from the TCP header. + win_scale: int; ##< The window scale option if present, or -1 if not. + MSS: count; ##< The maximum segment size if present, or 0 if not. + SACK_OK: bool; ##< True if the *SACK* option is present. }; -## This record is used for grabbing packet capturing information from -## the core with the :bro:id:`net_stats` BiF. All counts are cumulative. +## Packet capture statistics. All counts are cumulative. +## +## .. bro:see:: net_stats type NetStats: record { - pkts_recvd: count &default=0; ##< Packets received by Bro. - pkts_dropped: count &default=0; ##< Packets dropped. - pkts_link: count &default=0; ##< Packets seen on the link (not always available). + pkts_recvd: count &default=0; ##< Packets received by Bro. + pkts_dropped: count &default=0; ##< Packets reported dropped by the system. + ## Packets seen on the link. Note that this may differ + ## from *pkts_recvd* because of a potential capture filter. See + ## :doc:`/scripts/base/frameworks/packet-filter/main`. Depending on the packet + ## capture system, this value may not be available and will then always be set to + ## zero. + pkts_link: count &default=0; }; +## Statistics about Bro's resource consumption. +## +## .. bro:see:: resource_usage +## +## .. note:: All process-level values refer to Bro's main process only, not to +## the child process it spawns for doing communication. type bro_resources: record { - version: string; ##< Bro version string - debug: bool; ##< true if compiled with --enable-debug - start_time: time; ##< start time of process - real_time: interval; ##< elapsed real time since Bro started running - user_time: interval; ##< user CPU seconds - system_time: interval; ##< system CPU seconds - mem: count; ##< maximum memory consumed, in KB - minor_faults: count; ##< page faults not requiring actual I/O - major_faults: count; ##< page faults requiring actual I/O - num_swap: count; ##< times swapped out - blocking_input: count; ##< blocking input operations - blocking_output: count; ##< blocking output operations - num_context: count; ##< number of involuntary context switches + version: string; ##< Bro version string. + debug: bool; ##< True if compiled with --enable-debug. + start_time: time; ##< Start time of process. + real_time: interval; ##< Elapsed real time since Bro started running. + user_time: interval; ##< User CPU seconds. + system_time: interval; ##< System CPU seconds. + mem: count; ##< Maximum memory consumed, in KB. + minor_faults: count; ##< Page faults not requiring actual I/O. + major_faults: count; ##< Page faults requiring actual I/O. + num_swap: count; ##< Times swapped out. + blocking_input: count; ##< Blocking input operations. + blocking_output: count; ##< Blocking output operations. + num_context: count; ##< Number of involuntary context switches. - num_TCP_conns: count; ##< current number of TCP connections - num_UDP_conns: count; - num_ICMP_conns: count; - num_fragments: count; ##< current number of fragments pending reassembly - num_packets: count; ##< total number packets processed to date - num_timers: count; ##< current number of pending timers - num_events_queued: count; ##< total number of events queued so far - num_events_dispatched: count; ##< same for events dispatched + num_TCP_conns: count; ##< Current number of TCP connections in memory. + num_UDP_conns: count; ##< Current number of UDP flows in memory. + num_ICMP_conns: count; ##< Current number of ICMP flows in memory. + num_fragments: count; ##< Current number of fragments pending reassembly. + num_packets: count; ##< Total number of packets processed to date. + num_timers: count; ##< Current number of pending timers. + num_events_queued: count; ##< Total number of events queued so far.
+ num_events_dispatched: count; ##< Total number of events dispatched so far. - max_TCP_conns: count; ##< maximum number of TCP connections, etc. - max_UDP_conns: count; - max_ICMP_conns: count; - max_fragments: count; - max_timers: count; + max_TCP_conns: count; ##< Maximum number of concurrent TCP connections so far. + max_UDP_conns: count; ##< Maximum number of concurrent UDP connections so far. + max_ICMP_conns: count; ##< Maximum number of concurrent ICMP connections so far. + max_fragments: count; ##< Maximum number of concurrently buffered fragements so far. + max_timers: count; ##< Maximum number of concurrent timers pending so far. }; - -## Summary statistics of all DFA_State_Caches. +## Summary statistics of all regular expression matchers. +## +## .. bro:see:: get_matcher_stats type matcher_stats: record { - matchers: count; ##< number of distinct RE matchers - dfa_states: count; ##< number of DFA states across all matchers - computed: count; ##< number of computed DFA state transitions - mem: count; ##< number of bytes used by DFA states - hits: count; ##< number of cache hits - misses: count; ##< number of cache misses - avg_nfa_states: count; ##< average # NFA states across all matchers + matchers: count; ##< Number of distinct RE matchers. + dfa_states: count; ##< Number of DFA states across all matchers. + computed: count; ##< Number of computed DFA state transitions. + mem: count; ##< Number of bytes used by DFA states. + hits: count; ##< Number of cache hits. + misses: count; ##< Number of cache misses. + avg_nfa_states: count; ##< Average number of NFA states across all matchers. }; -## Info provided to gap_report, and also available by get_gap_summary(). +## Statistics about number of gaps in TCP connections. +## +## .. bro:see:: gap_report get_gap_summary type gap_info: record { - ack_events: count; ##< how many ack events *could* have had gaps - ack_bytes: count; ##< how many bytes those covered - gap_events: count; ##< how many *did* have gaps - gap_bytes: count; ##< how many bytes were missing in the gaps: + ack_events: count; ##< How many ack events *could* have had gaps. + ack_bytes: count; ##< How many bytes those covered. + gap_events: count; ##< How many *did* have gaps. + gap_bytes: count; ##< How many bytes were missing in the gaps. }; -# This record should be read-only. +## Deprecated. +## +## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere +## else. type packet: record { conn: connection; is_orig: bool; @@ -175,38 +415,92 @@ type packet: record { timestamp: time; }; -type var_sizes: table[string] of count; ##< indexed by var's name, returns size +## Table type used to map variable names to their memory allocation. +## +## .. bro:see:: global_sizes +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. +type var_sizes: table[string] of count; +## Meta-information about a script-level identifier. +## +## .. bro:see:: global_ids id_table type script_id: record { - type_name: string; - exported: bool; - constant: bool; - enum_constant: bool; - redefinable: bool; - value: any &optional; + type_name: string; ##< The name of the identifier's type. + exported: bool; ##< True if the identifier is exported. + constant: bool; ##< True if the identifier is a constant. + enum_constant: bool; ##< True if the identifier is an enum value. 
+ redefinable: bool; ##< True if the identifier is declared with the :bro:attr:`&redef` attribute. + value: any &optional; ##< The current value of the identifier. }; +## Table type used to map script-level identifiers to meta-information +## describing them. +## +## .. bro:see:: global_ids script_id +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type id_table: table[string] of script_id; +## Meta-information about a record field. +## +## .. bro:see:: record_fields record_field_table type record_field: record { - type_name: string; - log: bool; + type_name: string; ##< The name of the field's type. + log: bool; ##< True if the field is declared with the :bro:attr:`&log` attribute. + ## The current value of the field in the record instance passed into + ## :bro:see:`record_fields` (if it has one). value: any &optional; - default_val: any &optional; + default_val: any &optional; ##< The value of the :bro:attr:`&default` attribute if defined. }; +## Table type used to map record field declarations to meta-information describing +## them. +## +## .. bro:see:: record_fields record_field +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type record_field_table: table[string] of record_field; +# todo::Do we still need these here? Can they move into the packet filter +# framework? +# # The following two variables are defined here until the core is not # dependent on the names remaining as they are now. -## This is the list of capture filters indexed by some user-definable ID. + +## Set of BPF capture filters to use for capturing, indexed by a user-definable +## ID (which must be unique). If Bro is *not* configured to examine +## :bro:id:`PacketFilter::all_packets`, all packets matching at least +## one of the filters in this table (and all in :bro:id:`restrict_filters`) +## will be analyzed. +## +## .. bro:see:: PacketFilter PacketFilter::all_packets +## PacketFilter::unrestricted_filter restrict_filters global capture_filters: table[string] of string &redef; -## This is the list of restriction filters indexed by some user-definable ID. + +## Set of BPF filters to restrict capturing, indexed by a user-definable ID (which +## must be unique). If Bro is *not* configured to examine +## :bro:id:`PacketFilter::all_packets`, only packets matching *all* of the +## filters in this table (and any in :bro:id:`capture_filters`) will be +## analyzed. +## +## .. bro:see:: PacketFilter PacketFilter::all_packets +## PacketFilter::unrestricted_filter capture_filters global restrict_filters: table[string] of string &redef; -# {precompile,install}_pcap_filter identify the filter by IDs +## Enum type identifying dynamic BPF filters. These are used by +## :bro:see:`precompile_pcap_filter` and :bro:see:`install_pcap_filter`. type PcapFilterID: enum { None };
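For illustration only (not part of the patch), both filter tables are normally extended via redef; the index labels below are arbitrary.

    redef capture_filters += { ["dns"] = "port 53", ["http"] = "tcp port 80" };
    redef restrict_filters += { ["not-backup-host"] = "not host 10.0.0.5" };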
+## Deprecated. +## +## .. bro:see:: anonymize_addr type IPAddrAnonymization: enum { KEEP_ORIG_ADDR, SEQUENTIALLY_NUMBERED, @@ -215,34 +509,54 @@ type IPAddrAnonymization: enum { PREFIX_PRESERVING_MD5, }; +## Deprecated. +## +## .. bro:see:: anonymize_addr type IPAddrAnonymizationClass: enum { - ORIG_ADDR, ##< client address - RESP_ADDR, ##< server address + ORIG_ADDR, + RESP_ADDR, OTHER_ADDR, }; - -## Events are generated by event_peer's (which may be either ourselves, or -## some remote process). +## A locally unique ID identifying a communication peer. The ID is returned by +## :bro:id:`connect`. +## +## .. bro:see:: connect Communication type peer_id: count; +## A communication peer. +## +## .. bro:see:: complete_handshake disconnect finished_send_state +## get_event_peer get_local_event_peer remote_capture_filter +## remote_connection_closed remote_connection_error +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_log_peer remote_pong +## request_remote_events request_remote_logs request_remote_sync +## send_capture_filter send_current_packet send_id send_ping send_state +## set_accept_state set_compression_level +## +## .. todo:: The type's name is too narrow these days, should rename. type event_peer: record { - id: peer_id; ##< locally unique ID of peer (returned by connect()) - host: addr; + id: peer_id; ##< Locally unique ID of peer (returned by :bro:id:`connect`). + host: addr; ##< The IP address of the peer. + ## Either the port we connected to at the peer; or our port the peer + ## connected to if the session is remotely initiated. p: port; - is_local: bool; ##< true if this peer describes the current process. - descr: string; ##< source's external_source_description - class: string &optional; # self-assigned class of the peer + is_local: bool; ##< True if this record describes the local process. + descr: string; ##< The peer's :bro:see:`peer_description`. + class: string &optional; ##< The self-assigned *class* of the peer. See :bro:see:`Communication::Node`. }; +## Deprecated. +## +## .. bro:see:: rotate_file rotate_file_by_name rotate_interval type rotate_info: record { - old_name: string; ##< original filename - new_name: string; ##< file name after rotation - open: time; ##< time when opened - close: time; ##< time when closed + old_name: string; ##< Original filename. + new_name: string; ##< File name after rotation. + open: time; ##< Time when opened. + close: time; ##< Time when closed. }; - ### The following aren't presently used, though they should be. # # Structures needed for subsequence computations (str_smith_waterman): # # @@ -251,6 +565,9 @@ type rotate_info: record { # SW_MULTIPLE, # }; +## Parameters for the Smith-Waterman algorithm. +## +## .. bro:see:: str_smith_waterman type sw_params: record { ## Minimum size of a substring, minimum "granularity". min_strlen: count &default = 3; @@ -259,45 +576,73 @@ type sw_params: record { sw_variant: count &default = 0; }; +## Helper type for return value of Smith-Waterman algorithm. +## +## .. bro:see:: str_smith_waterman sw_substring_vec sw_substring sw_align_vec sw_params type sw_align: record { - str: string; ##< string a substring is part of - index: count; ##< at which offset + str: string; ##< String a substring is part of. + index: count; ##< Offset at which the substring is located. }; +## Helper type for return value of Smith-Waterman algorithm. +## +## .. bro:see:: str_smith_waterman sw_substring_vec sw_substring sw_align sw_params type sw_align_vec: vector of sw_align;
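A hypothetical example, not part of the patch: it assumes str_smith_waterman(), referenced above, takes the two strings plus an sw_params record and returns a sw_substring_vec.

    event bro_init()
        {
        local params: sw_params = [$min_strlen = 4, $sw_variant = 0];
        local common = str_smith_waterman("approximate", "appropriate", params);

        for ( i in common )
            if ( common[i]$new )
                print fmt("common substring: %s", common[i]$str);
        }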
+## Helper type for return value of Smith-Waterman algorithm. +## +## .. bro:see:: str_smith_waterman sw_substring_vec sw_align_vec sw_align sw_params +## type sw_substring: record { - str: string; ##< a substring - aligns: sw_align_vec; ##< all strings of which it's a substring - new: bool; ##< true if start of new alignment + str: string; ##< A substring. + aligns: sw_align_vec; ##< All strings of which it's a substring. + new: bool; ##< True if start of new alignment. }; +## Return type for Smith-Waterman algorithm. +## +## .. bro:see:: str_smith_waterman sw_substring sw_align_vec sw_align sw_params +## +## .. todo:: We need this type definition only for declaring builtin functions via +## ``bifcl``. We should extend ``bifcl`` to understand composite types directly and +## then remove this alias. type sw_substring_vec: vector of sw_substring; -## Policy-level handling of pcap packets. +## Policy-level representation of a packet passed on by libpcap. The data includes +## the complete packet as returned by libpcap, including the link-layer header. +## +## .. bro:see:: dump_packet get_current_packet type pcap_packet: record { - ts_sec: count; - ts_usec: count; - caplen: count; - len: count; - data: string; + ts_sec: count; ##< The non-fractional part of the packet's timestamp (i.e., full seconds since the epoch). + ts_usec: count; ##< The fractional part of the packet's timestamp. + caplen: count; ##< The number of bytes captured (<= *len*). + len: count; ##< The length of the packet in bytes, including the link-layer header. + data: string; ##< The payload of the packet, including the link-layer header. +}; + +## Computed entropy values. See `A Pseudorandom Number Sequence Test Program +## <http://www.fourmilab.ch/random>`_ for more information; Bro uses the same +## code. +## +## .. bro:see:: entropy_test_add entropy_test_finish entropy_test_init find_entropy type entropy_test_result: record { - entropy: double; - chi_square: double; - mean: double; - monte_carlo_pi: double; - serial_correlation: double; + entropy: double; ##< Information density. + chi_square: double; ##< Chi-Square value. + mean: double; ##< Arithmetic mean. + monte_carlo_pi: double; ##< Monte Carlo value for pi. + serial_correlation: double; ##< Serial correlation coefficient. }; # Prototypes of Bro built-in functions. @@ -305,13 +650,19 @@ type entropy_test_result: record { @load base/bro.bif @load base/reporter.bif +## Deprecated. This is superseded by the new logging framework. global log_file_name: function(tag: string): string &redef; + +## Deprecated. This is superseded by the new logging framework. global open_log_file: function(tag: string): file &redef; -## Where to store the persistent state. +## Specifies a directory for Bro to store its persistent state. All globals can +## be declared persistent via the :bro:attr:`&persistent` attribute. const state_dir = ".state" &redef; -## Length of the delays added when storing state incrementally. +## Length of the delays inserted when storing state incrementally. To avoid +## dropping packets when serializing larger volumes of persistent state to +## disk, Bro interleaves the operation with continued packet processing. const state_write_delay = 0.01 secs &redef; global done_with_network = F; @@ -328,6 +679,7 @@ function open_log_file(tag: string): file return open(log_file_name(tag)); } +## Internal function. function add_interface(iold: string, inew: string): string { if ( iold == "" ) @@ -335,8 +687,12 @@ function add_interface(iold: string, inew: string): string else return fmt("%s %s", iold, inew); } + +## Network interfaces to listen on. Use ``redef interfaces += "eth0"`` to +## extend. global interfaces = "" &add_func = add_interface; +## Internal function.
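A sketch of how the entropy record above is typically obtained, not part of the patch; it assumes find_entropy(), referenced in the docstring, takes a string and returns an entropy_test_result.

    event bro_init()
        {
        local r = find_entropy("ABABABABABABABABABAB");
        # Entropy close to 8 bits/byte suggests compressed or encrypted content.
        print fmt("entropy=%.3f chi-square=%.1f mean=%.1f", r$entropy, r$chi_square, r$mean);
        }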
function add_signature_file(sold: string, snew: string): string { if ( sold == "" ) @@ -344,11 +700,17 @@ function add_signature_file(sold: string, snew: string): string else return cat(sold, " ", snew); } + +## Signature files to read. Use ``redef signature_files += "foo.sig"`` to +## extend. Signature files added this way will be searched relative to +## ``BROPATH``. Using the ``@load-sigs`` directive instead is preferred +## since that can search paths relative to the current script. global signature_files = "" &add_func = add_signature_file; +## ``p0f`` fingerprint file to use. Will be searched relative to ``BROPATH``. const passive_fingerprint_file = "base/misc/p0f.fp" &redef; -# TODO: testing to see if I can remove these without causing problems. +# todo::testing to see if I can remove these without causing problems. #const ftp = 21/tcp; #const ssh = 22/tcp; #const telnet = 23/tcp; @@ -361,17 +723,24 @@ const passive_fingerprint_file = "base/misc/p0f.fp" &redef; #const bgp = 179/tcp; #const rlogin = 513/tcp; -const TCP_INACTIVE = 0; -const TCP_SYN_SENT = 1; -const TCP_SYN_ACK_SENT = 2; -const TCP_PARTIAL = 3; -const TCP_ESTABLISHED = 4; -const TCP_CLOSED = 5; -const TCP_RESET = 6; +# TCP values for :bro:see:`endpoint` *state* field. +# todo::these should go into an enum to make them autodoc'able. +const TCP_INACTIVE = 0; ##< Endpoint is still inactive. +const TCP_SYN_SENT = 1; ##< Endpoint has sent SYN. +const TCP_SYN_ACK_SENT = 2; ##< Endpoint has sent SYN/ACK. +const TCP_PARTIAL = 3; ##< Endpoint has sent data but no initial SYN. +const TCP_ESTABLISHED = 4; ##< Endpoint has finished initial handshake regularly. +const TCP_CLOSED = 5; ##< Endpoint has closed connection. +const TCP_RESET = 6; ##< Endpoint has sent RST. + +# UDP values for :bro:see:`endpoint` *state* field. +# todo::these should go into an enum to make them autodoc'able. +const UDP_INACTIVE = 0; ##< Endpoint is still inactive. +const UDP_ACTIVE = 1; ##< Endpoint has sent something. ## If true, don't verify checksums. Useful for running on altered trace -## files, and for saving a few cycles, but of course dangerous, too ... -## Note that the -C command-line option overrides the setting of this +## files, and for saving a few cycles, but at the risk of analyzing invalid +## data. Note that the ``-C`` command-line option overrides the setting of this ## variable. const ignore_checksums = F &redef; @@ -379,13 +748,13 @@ const ignore_checksums = F &redef; ## (one missing its initial establishment negotiation) is seen. const partial_connection_ok = T &redef; -## If true, instantiate connection state when a SYN ack is seen -## but not the initial SYN (even if partial_connection_ok is false). +## If true, instantiate connection state when a SYN/ACK is seen but not the initial +## SYN (even if :bro:see:`partial_connection_ok` is false). const tcp_SYN_ack_ok = T &redef; -## If a connection state is removed there may still be some undelivered -## data waiting in the reassembler. If true, pass this to the signature -## engine before flushing the state. +## If true, pass any undelivered to the signature engine before flushing the state. +## If a connection state is removed, there may still be some data waiting in the +## reassembler. const tcp_match_undelivered = T &redef; ## Check up on the result of an initial SYN after this much time. 
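An illustrative handler, not part of the patch, showing how the TCP_* endpoint-state constants are meant to be compared against the numeric *state* field of a connection's endpoints.

    event connection_state_remove(c: connection)
        {
        if ( c$orig$state == TCP_ESTABLISHED && c$resp$state == TCP_ESTABLISHED )
            print fmt("%s was still established when its state was flushed", c$uid);
        }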
@@ -416,33 +785,55 @@ const tcp_reset_delay = 5 secs &redef; const tcp_partial_close_delay = 3 secs &redef; ## If a connection belongs to an application that we don't analyze, -## time it out after this interval. If 0 secs, then don't time it out. +## time it out after this interval. If 0 secs, then don't time it out (but +## :bro:see:`tcp_inactivity_timeout`/:bro:see:`udp_inactivity_timeout`/:bro:see:`icmp_inactivity_timeout` +## still apply). const non_analyzed_lifetime = 0 secs &redef; -## If a connection is inactive, time it out after this interval. -## If 0 secs, then don't time it out. +## If a TCP connection is inactive, time it out after this interval. If 0 secs, +## then don't time it out. +## +## .. bro:see:: udp_inactivity_timeout icmp_inactivity_timeout set_inactivity_timeout const tcp_inactivity_timeout = 5 min &redef; -## See :bro:id:`tcp_inactivity_timeout` + +## If a UDP flow is inactive, time it out after this interval. If 0 secs, then +## don't time it out. +## +## .. bro:see:: tcp_inactivity_timeout icmp_inactivity_timeout set_inactivity_timeout const udp_inactivity_timeout = 1 min &redef; -## See :bro:id:`tcp_inactivity_timeout` + +## If an ICMP flow is inactive, time it out after this interval. If 0 secs, then +## don't time it out. +## +## .. bro:see:: tcp_inactivity_timeout udp_inactivity_timeout set_inactivity_timeout const icmp_inactivity_timeout = 1 min &redef; -## This many FINs/RSTs in a row constitutes a "storm". +## Number of FINs/RSTs in a row that constitute a "storm". Storms are reported +## as ``weird`` via the notice framework, and they must also come within +## intervals of at most :bro:see:`tcp_storm_interarrival_thresh`. +## +## .. bro:see:: tcp_storm_interarrival_thresh const tcp_storm_thresh = 1000 &redef; -## The FINs/RSTs must come with this much time or less between them. +## FINs/RSTs must come with this much time or less between them to be +## considered a "storm". +## +## .. bro:see:: tcp_storm_thresh const tcp_storm_interarrival_thresh = 1 sec &redef; -## Maximum amount of data that might plausibly be sent in an initial -## flight (prior to receiving any acks). Used to determine whether we -## must not be seeing our peer's acks. Set to zero to turn off this -## determination. +## Maximum amount of data that might plausibly be sent in an initial flight (prior +## to receiving any acks). Used to determine whether we must not be seeing our +## peer's ACKs. Set to zero to turn off this determination. +## +## .. bro:see:: tcp_max_above_hole_without_any_acks tcp_excessive_data_without_further_acks const tcp_max_initial_window = 4096; -## If we're not seeing our peer's acks, the maximum volume of data above -## a sequence hole that we'll tolerate before assuming that there's -## been a packet drop and we should give up on tracking a connection. -## If set to zero, then we don't ever give up. +## If we're not seeing our peer's ACKs, the maximum volume of data above a sequence +## hole that we'll tolerate before assuming that there's been a packet drop and we +## should give up on tracking a connection. If set to zero, then we don't ever give +## up. +## +## .. bro:see:: tcp_max_initial_window tcp_excessive_data_without_further_acks const tcp_max_above_hole_without_any_acks = 4096;
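A short sketch, not part of the patch: the timeouts above are ordinary redef'able constants, and set_inactivity_timeout(), referenced in the docstrings, can override them for a single connection; the SSH port check is purely illustrative.

    redef tcp_inactivity_timeout = 10 min;

    event connection_established(c: connection)
        {
        if ( c$id$resp_p == 22/tcp )
            set_inactivity_timeout(c$id, 1 hr);
        }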
## If we've seen this much data without any of it being acked, we give up @@ -450,87 +841,151 @@ const tcp_max_above_hole_without_any_acks = 4096; ## stuff. If set to zero, then we don't ever give up. Ideally, Bro would ## track the current window on a connection and use it to infer that data ## has in fact gone too far, but for now we just make this quite beefy. +## +## .. bro:see:: tcp_max_initial_window tcp_max_above_hole_without_any_acks const tcp_excessive_data_without_further_acks = 10 * 1024 * 1024; -## For services without a handler, these sets define which -## side of a connection is to be reassembled. +## For services without a handler, these sets define originator-side ports that +## still trigger reassembly. +## +## .. bro:see:: tcp_reassembler_ports_resp const tcp_reassembler_ports_orig: set[port] = {} &redef; -## See :bro:id:`tcp_reassembler_ports_orig` + +## For services without a handler, these sets define responder-side ports that +## still trigger reassembly. +## +## .. bro:see:: tcp_reassembler_ports_orig const tcp_reassembler_ports_resp: set[port] = {} &redef; -## These sets define destination ports for which the contents -## of the originator (responder, respectively) stream should -## be delivered via tcp_contents. +## Defines destination TCP ports for which the contents of the originator stream +## should be delivered via :bro:see:`tcp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_resp tcp_content_deliver_all_orig +## tcp_content_deliver_all_resp udp_content_delivery_ports_orig +## udp_content_delivery_ports_resp udp_content_deliver_all_orig +## udp_content_deliver_all_resp tcp_contents const tcp_content_delivery_ports_orig: table[port] of bool = {} &redef; -## See :bro:id:`tcp_content_delivery_ports_orig` + +## Defines destination TCP ports for which the contents of the responder stream should +## be delivered via :bro:see:`tcp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_orig tcp_content_deliver_all_orig +## tcp_content_deliver_all_resp udp_content_delivery_ports_orig +## udp_content_delivery_ports_resp udp_content_deliver_all_orig +## udp_content_deliver_all_resp tcp_contents const tcp_content_delivery_ports_resp: table[port] of bool = {} &redef; -# To have all TCP orig->resp/resp->orig traffic reported via tcp_contents, -# redef these to T. +## If true, all TCP originator-side traffic is reported via +## :bro:see:`tcp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_orig tcp_content_delivery_ports_resp +## tcp_content_deliver_all_resp udp_content_delivery_ports_orig +## udp_content_delivery_ports_resp udp_content_deliver_all_orig +## udp_content_deliver_all_resp tcp_contents const tcp_content_deliver_all_orig = F &redef; -## See :bro:id:`tcp_content_deliver_all_orig` + +## If true, all TCP responder-side traffic is reported via +## :bro:see:`tcp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_orig +## tcp_content_delivery_ports_resp +## tcp_content_deliver_all_orig udp_content_delivery_ports_orig +## udp_content_delivery_ports_resp udp_content_deliver_all_orig +## udp_content_deliver_all_resp tcp_contents const tcp_content_deliver_all_resp = F &redef;
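A usage sketch, not part of the patch, assuming the tcp_contents signature referenced above: request responder-side payload for port 80 and consume it in the event handler.

    redef tcp_content_delivery_ports_resp += { [80/tcp] = T };

    event tcp_contents(c: connection, is_orig: bool, seq: count, contents: string)
        {
        if ( ! is_orig )
            print fmt("%s delivered %d bytes at seq %d", c$uid, |contents|, seq);
        }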
+## Defines UDP destination ports for which the contents of the originator stream +## should be delivered via :bro:see:`udp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_orig +## tcp_content_delivery_ports_resp +## tcp_content_deliver_all_orig tcp_content_deliver_all_resp +## udp_content_delivery_ports_resp udp_content_deliver_all_orig +## udp_content_deliver_all_resp udp_contents const udp_content_delivery_ports_orig: table[port] of bool = {} &redef; -## See :bro:id:`udp_content_delivery_ports_orig` + +## Defines UDP destination ports for which the contents of the responder stream +## should be delivered via :bro:see:`udp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_orig +## tcp_content_delivery_ports_resp tcp_content_deliver_all_orig +## tcp_content_deliver_all_resp udp_content_delivery_ports_orig +## udp_content_deliver_all_orig udp_content_deliver_all_resp udp_contents const udp_content_delivery_ports_resp: table[port] of bool = {} &redef; -## To have all UDP orig->resp/resp->orig traffic reported via udp_contents, -## redef these to T. +## If true, all UDP originator-side traffic is reported via +## :bro:see:`udp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_orig +## tcp_content_delivery_ports_resp tcp_content_deliver_all_resp +## tcp_content_delivery_ports_orig udp_content_delivery_ports_orig +## udp_content_delivery_ports_resp udp_content_deliver_all_resp +## udp_contents const udp_content_deliver_all_orig = F &redef; -## See :bro:id:`udp_content_deliver_all_orig` + +## If true, all UDP responder-side traffic is reported via +## :bro:see:`udp_contents`. +## +## .. bro:see:: tcp_content_delivery_ports_orig +## tcp_content_delivery_ports_resp tcp_content_deliver_all_resp +## tcp_content_delivery_ports_orig udp_content_delivery_ports_orig +## udp_content_delivery_ports_resp udp_content_deliver_all_orig +## udp_contents const udp_content_deliver_all_resp = F &redef; -## Check for expired table entries after this amount of time +## Check for expired table entries after this amount of time. +## +## .. bro:see:: table_incremental_step table_expire_delay const table_expire_interval = 10 secs &redef; -## When expiring/serializing, don't work on more than this many table -## entries at a time. +## When expiring/serializing table entries, don't work on more than this many +## entries at a time. +## +## .. bro:see:: table_expire_interval table_expire_delay const table_incremental_step = 5000 &redef; -## When expiring, wait this amount of time before checking the next chunk -## of entries. +## When expiring table entries, wait this amount of time before checking the next +## chunk of entries. +## +## .. bro:see:: table_expire_interval table_incremental_step const table_expire_delay = 0.01 secs &redef; ## Time to wait before timing out a DNS request. const dns_session_timeout = 10 sec &redef; -## Time to wait before timing out a NTP request. + +## Time to wait before timing out an NTP request. const ntp_session_timeout = 300 sec &redef; -## Time to wait before timing out a RPC request. + +## Time to wait before timing out an RPC request. const rpc_timeout = 24 sec &redef; -## Time window for reordering packets (to deal with timestamp -## discrepency between multiple packet sources). -const packet_sort_window = 0 usecs &redef; - -## How long to hold onto fragments for possible reassembly. A value -## of 0.0 means "forever", which resists evasion, but can lead to -## state accrual. +## How long to hold onto fragments for possible reassembly. A value of 0.0 means +## "forever", which resists evasion, but can lead to state accrual.
const frag_timeout = 0.0 sec &redef; -## If positive, indicates the encapsulation header size that should -## be skipped over for each captured packet .... -const encap_hdr_size = 0 &redef; -## ... or just for the following UDP port. -const tunnel_port = 0/udp &redef; +## Time window for reordering packets. This is used for dealing with timestamp +## discrepency between multiple packet sources. +## +## .. note:: Setting this can have a major performance impact as now packets need +## to be potentially copied and buffered. +const packet_sort_window = 0 usecs &redef; -## Whether to use the ConnSize analyzer to count the number of -## packets and IP-level bytes transfered by each endpoint. If -## true, these values are returned in the connection's endpoint -## record val. +## If positive, indicates the encapsulation header size that should +## be skipped. This applies to all packets. +const encap_hdr_size = 0 &redef; + +## Whether to use the ``ConnSize`` analyzer to count the number of packets and +## IP-level bytes transfered by each endpoint. If true, these values are returned +## in the connection's :bro:see:`endpoint` record value. const use_conn_size_analyzer = T &redef; -const UDP_INACTIVE = 0; -const UDP_ACTIVE = 1; # means we've seen something from this endpoint - -const ENDIAN_UNKNOWN = 0; -const ENDIAN_LITTLE = 1; -const ENDIAN_BIG = 2; -const ENDIAN_CONFUSED = 3; +# todo::these should go into an enum to make them autodoc'able. +const ENDIAN_UNKNOWN = 0; ##< Endian not yet determined. +const ENDIAN_LITTLE = 1; ##< Little endian. +const ENDIAN_BIG = 2; ##< Big endian. +const ENDIAN_CONFUSED = 3; ##< Tried to determine endian, but failed. +## Deprecated. function append_addl(c: connection, addl: string) { if ( c$addl == "" ) @@ -540,6 +995,7 @@ function append_addl(c: connection, addl: string) c$addl = fmt("%s %s", c$addl, addl); } +## Deprecated. function append_addl_marker(c: connection, addl: string, marker: string) { if ( c$addl == "" ) @@ -550,54 +1006,378 @@ function append_addl_marker(c: connection, addl: string, marker: string) } -# Values for set_contents_file's "direction" argument. -# TODO: these should go into an enum to make them autodoc'able -const CONTENTS_NONE = 0; # turn off recording of contents -const CONTENTS_ORIG = 1; # record originator contents -const CONTENTS_RESP = 2; # record responder contents -const CONTENTS_BOTH = 3; # record both originator and responder contents - -const ICMP_UNREACH_NET = 0; -const ICMP_UNREACH_HOST = 1; -const ICMP_UNREACH_PROTOCOL = 2; -const ICMP_UNREACH_PORT = 3; -const ICMP_UNREACH_NEEDFRAG = 4; -const ICMP_UNREACH_ADMIN_PROHIB = 13; -# The above list isn't exhaustive ... +# Values for :bro:see:`set_contents_file` *direction* argument. +# todo::these should go into an enum to make them autodoc'able +const CONTENTS_NONE = 0; ##< Turn off recording of contents. +const CONTENTS_ORIG = 1; ##< Record originator contents. +const CONTENTS_RESP = 2; ##< Record responder contents. +const CONTENTS_BOTH = 3; ##< Record both originator and responder contents. +# Values for code of ICMP *unreachable* messages. The list is not exhaustive. +# todo::these should go into an enum to make them autodoc'able +# +# .. bro:see:: :bro:see:`icmp_unreachable ` +const ICMP_UNREACH_NET = 0; ##< Network unreachable. +const ICMP_UNREACH_HOST = 1; ##< Host unreachable. +const ICMP_UNREACH_PROTOCOL = 2; ##< Protocol unreachable. +const ICMP_UNREACH_PORT = 3; ##< Port unreachable. +const ICMP_UNREACH_NEEDFRAG = 4; ##< Fragement needed. 
+const ICMP_UNREACH_ADMIN_PROHIB = 13; ##< Adminstratively prohibited. # Definitions for access to packet headers. Currently only used for # discarders. -const IPPROTO_IP = 0; # dummy for IP -const IPPROTO_ICMP = 1; # control message protocol -const IPPROTO_IGMP = 2; # group mgmt protocol -const IPPROTO_IPIP = 4; # IP encapsulation in IP -const IPPROTO_TCP = 6; # TCP -const IPPROTO_UDP = 17; # user datagram protocol -const IPPROTO_RAW = 255; # raw IP packet +# todo::these should go into an enum to make them autodoc'able +const IPPROTO_IP = 0; ##< Dummy for IP. +const IPPROTO_ICMP = 1; ##< Control message protocol. +const IPPROTO_IGMP = 2; ##< Group management protocol. +const IPPROTO_IPIP = 4; ##< IP encapsulation in IP. +const IPPROTO_TCP = 6; ##< TCP. +const IPPROTO_UDP = 17; ##< User datagram protocol. +const IPPROTO_IPV6 = 41; ##< IPv6 header. +const IPPROTO_ICMPV6 = 58; ##< ICMP for IPv6. +const IPPROTO_RAW = 255; ##< Raw IP packet. -type ip_hdr: record { - hl: count; ##< header length (in bytes) - tos: count; ##< type of service - len: count; ##< total length - id: count; ##< identification - ttl: count; ##< time to live - p: count; ##< protocol - src: addr; ##< source address - dst: addr; ##< dest address +# Definitions for IPv6 extension headers. +const IPPROTO_HOPOPTS = 0; ##< IPv6 hop-by-hop-options header. +const IPPROTO_ROUTING = 43; ##< IPv6 routing header. +const IPPROTO_FRAGMENT = 44; ##< IPv6 fragment header. +const IPPROTO_ESP = 50; ##< IPv6 encapsulating security payload header. +const IPPROTO_AH = 51; ##< IPv6 authentication header. +const IPPROTO_NONE = 59; ##< IPv6 no next header. +const IPPROTO_DSTOPTS = 60; ##< IPv6 destination options header. +const IPPROTO_MOBILITY = 135; ##< IPv6 mobility header. + +## Values extracted from an IPv6 extension header's (e.g. hop-by-hop or +## destination option headers) option field. +## +## .. bro:see:: ip6_hdr ip6_ext_hdr ip6_hopopts ip6_dstopts +type ip6_option: record { + otype: count; ##< Option type. + len: count; ##< Option data length. + data: string; ##< Option data. }; -## TCP flags. -const TH_FIN = 1; -const TH_SYN = 2; -const TH_RST = 4; -const TH_PUSH = 8; -const TH_ACK = 16; -const TH_URG = 32; -const TH_FLAGS = 63; ##< (TH_FIN|TH_SYN|TH_RST|TH_ACK|TH_URG) +## A type alias for a vector of IPv6 options. +type ip6_options: vector of ip6_option; +## Values extracted from an IPv6 Hop-by-Hop options extension header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hdr ip6_ext_hdr ip6_option +type ip6_hopopts: record { + ## Protocol number of the next header (RFC 1700 et seq., IANA assigned + ## number), e.g. :bro:id:`IPPROTO_ICMP`. + nxt: count; + ## Length of header in 8-octet units, excluding first unit. + len: count; + ## The TLV encoded options; + options: ip6_options; +}; + +## Values extracted from an IPv6 Destination options extension header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hdr ip6_ext_hdr ip6_option +type ip6_dstopts: record { + ## Protocol number of the next header (RFC 1700 et seq., IANA assigned + ## number), e.g. :bro:id:`IPPROTO_ICMP`. + nxt: count; + ## Length of header in 8-octet units, excluding first unit. + len: count; + ## The TLV encoded options; + options: ip6_options; +}; + +## Values extracted from an IPv6 Routing extension header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hdr ip6_ext_hdr +type ip6_routing: record { + ## Protocol number of the next header (RFC 1700 et seq., IANA assigned + ## number), e.g. :bro:id:`IPPROTO_ICMP`. 
+ nxt: count; + ## Length of header in 8-octet units, excluding first unit. + len: count; + ## Routing type. + rtype: count; + ## Segments left. + segleft: count; + ## Type-specific data. + data: string; +}; + +## Values extracted from an IPv6 Fragment extension header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hdr ip6_ext_hdr +type ip6_fragment: record { + ## Protocol number of the next header (RFC 1700 et seq., IANA assigned + ## number), e.g. :bro:id:`IPPROTO_ICMP`. + nxt: count; + ## 8-bit reserved field. + rsv1: count; + ## Fragmentation offset. + offset: count; + ## 2-bit reserved field. + rsv2: count; + ## More fragments. + more: bool; + ## Fragment identification. + id: count; +}; + +## Values extracted from an IPv6 Authentication extension header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hdr ip6_ext_hdr +type ip6_ah: record { + ## Protocol number of the next header (RFC 1700 et seq., IANA assigned + ## number), e.g. :bro:id:`IPPROTO_ICMP`. + nxt: count; + ## Length of header in 4-octet units, excluding first two units. + len: count; + ## Reserved field. + rsv: count; + ## Security Parameter Index. + spi: count; + ## Sequence number, unset in the case that *len* field is zero. + seq: count &optional; + ## Authentication data, unset in the case that *len* field is zero. + data: string &optional; +}; + +## Values extracted from an IPv6 ESP extension header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hdr ip6_ext_hdr +type ip6_esp: record { + ## Security Parameters Index. + spi: count; + ## Sequence number. + seq: count; +}; + +## Values extracted from an IPv6 Mobility Binding Refresh Request message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_brr: record { + ## Reserved. + rsv: count; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility Home Test Init message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_hoti: record { + ## Reserved. + rsv: count; + ## Home Init Cookie. + cookie: count; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility Care-of Test Init message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_coti: record { + ## Reserved. + rsv: count; + ## Care-of Init Cookie. + cookie: count; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility Home Test message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_hot: record { + ## Home Nonce Index. + nonce_idx: count; + ## Home Init Cookie. + cookie: count; + ## Home Keygen Token. + token: count; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility Care-of Test message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_cot: record { + ## Care-of Nonce Index. + nonce_idx: count; + ## Care-of Init Cookie. + cookie: count; + ## Care-of Keygen Token. + token: count; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility Binding Update message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_bu: record { + ## Sequence number. + seq: count; + ## Acknowledge bit. + a: bool; + ## Home Registration bit. + h: bool; + ## Link-Local Address Compatibility bit. + l: bool; + ## Key Management Mobility Capability bit. 
+ k: bool; + ## Lifetime. + life: count; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility Binding Acknowledgement message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_back: record { + ## Status. + status: count; + ## Key Management Mobility Capability. + k: bool; + ## Sequence number. + seq: count; + ## Lifetime. + life: count; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility Binding Error message. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr ip6_mobility_msg +type ip6_mobility_be: record { + ## Status. + status: count; + ## Home Address. + hoa: addr; + ## Mobility Options. + options: vector of ip6_option; +}; + +## Values extracted from an IPv6 Mobility header's message data. +## +## .. bro:see:: ip6_mobility_hdr ip6_hdr ip6_ext_hdr +type ip6_mobility_msg: record { + ## The type of message from the header's MH Type field. + id: count; + ## Binding Refresh Request. + brr: ip6_mobility_brr &optional; + ## Home Test Init. + hoti: ip6_mobility_hoti &optional; + ## Care-of Test Init. + coti: ip6_mobility_coti &optional; + ## Home Test. + hot: ip6_mobility_hot &optional; + ## Care-of Test. + cot: ip6_mobility_cot &optional; + ## Binding Update. + bu: ip6_mobility_bu &optional; + ## Binding Acknowledgement. + back: ip6_mobility_back &optional; + ## Binding Error. + be: ip6_mobility_be &optional; +}; + +## Values extracted from an IPv6 Mobility header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hdr ip6_ext_hdr +type ip6_mobility_hdr: record { + ## Protocol number of the next header (RFC 1700 et seq., IANA assigned + ## number), e.g. :bro:id:`IPPROTO_ICMP`. + nxt: count; + ## Length of header in 8-octet units, excluding first unit. + len: count; + ## Mobility header type used to identify header's the message. + mh_type: count; + ## Reserved field. + rsv: count; + ## Mobility header checksum. + chksum: count; + ## Mobility header message + msg: ip6_mobility_msg; +}; + +## A general container for a more specific IPv6 extension header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_hopopts ip6_dstopts ip6_routing ip6_fragment +## ip6_ah ip6_esp +type ip6_ext_hdr: record { + ## The RFC 1700 et seq. IANA assigned number identifying the type of + ## the extension header. + id: count; + ## Hop-by-hop option extension header. + hopopts: ip6_hopopts &optional; + ## Destination option extension header. + dstopts: ip6_dstopts &optional; + ## Routing extension header. + routing: ip6_routing &optional; + ## Fragment header. + fragment: ip6_fragment &optional; + ## Authentication extension header. + ah: ip6_ah &optional; + ## Encapsulating security payload header. + esp: ip6_esp &optional; + ## Mobility header. + mobility: ip6_mobility_hdr &optional; +}; + +## A type alias for a vector of IPv6 extension headers. +type ip6_ext_hdr_chain: vector of ip6_ext_hdr; + +## Values extracted from an IPv6 header. +## +## .. bro:see:: pkt_hdr ip4_hdr ip6_ext_hdr ip6_hopopts ip6_dstopts +## ip6_routing ip6_fragment ip6_ah ip6_esp +type ip6_hdr: record { + class: count; ##< Traffic class. + flow: count; ##< Flow label. + len: count; ##< Payload length. + nxt: count; ##< Protocol number of the next header + ##< (RFC 1700 et seq., IANA assigned number) + ##< e.g. :bro:id:`IPPROTO_ICMP`. + hlim: count; ##< Hop limit. + src: addr; ##< Source address. + dst: addr; ##< Destination address. + exts: ip6_ext_hdr_chain; ##< Extension header chain. 
+}; + +## Values extracted from an IPv4 header. +## +## .. bro:see:: pkt_hdr ip6_hdr discarder_check_ip +type ip4_hdr: record { + hl: count; ##< Header length in bytes. + tos: count; ##< Type of service. + len: count; ##< Total length. + id: count; ##< Identification. + ttl: count; ##< Time to live. + p: count; ##< Protocol. + src: addr; ##< Source address. + dst: addr; ##< Destination address. +}; + +# TCP flags. +# +# todo::these should go into an enum to make them autodoc'able +const TH_FIN = 1; ##< FIN. +const TH_SYN = 2; ##< SYN. +const TH_RST = 4; ##< RST. +const TH_PUSH = 8; ##< PUSH. +const TH_ACK = 16; ##< ACK. +const TH_URG = 32; ##< URG. +const TH_FLAGS = 63; ##< Mask combining all flags. + +## Values extracted from a TCP header. +## +## .. bro:see:: pkt_hdr discarder_check_tcp type tcp_hdr: record { - sport: port; ##< source port + sport: port; ##< source port. dport: port; ##< destination port seq: count; ##< sequence number ack: count; ##< acknowledgement number @@ -607,36 +1387,150 @@ type tcp_hdr: record { win: count; ##< window }; +## Values extracted from a UDP header. +## +## .. bro:see:: pkt_hdr discarder_check_udp type udp_hdr: record { sport: port; ##< source port dport: port; ##< destination port ulen: count; ##< udp length }; - -## Holds an ip_hdr and one of tcp_hdr, udp_hdr, or icmp_hdr. -type pkt_hdr: record { - ip: ip_hdr; - tcp: tcp_hdr &optional; - udp: udp_hdr &optional; - icmp: icmp_hdr &optional; +## Values extracted from an ICMP header. +## +## .. bro:see:: pkt_hdr discarder_check_icmp +type icmp_hdr: record { + icmp_type: count; ##< type of message }; +## A packet header, consisting of an IP header and transport-layer header. +## +## .. bro:see:: new_packet +type pkt_hdr: record { + ip: ip4_hdr &optional; ##< The IPv4 header if an IPv4 packet. + ip6: ip6_hdr &optional; ##< The IPv6 header if an IPv6 packet. + tcp: tcp_hdr &optional; ##< The TCP header if a TCP packet. + udp: udp_hdr &optional; ##< The UDP header if a UDP packet. + icmp: icmp_hdr &optional; ##< The ICMP header if an ICMP packet. +}; -## If you add elements here, then for a given BPF filter as index, when -## a packet matching that filter is captured, the corresponding event handler -## will be invoked. +## A Teredo authentication header. See :rfc:`4380` for more information +## about the Teredo protocol. +## +## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication +## teredo_hdr +type teredo_auth: record { + id: string; ##< Teredo client identifier. + value: string; ##< HMAC-SHA1 over shared secret key between client and + ##< server, nonce, confirmation byte, origin indication + ##< (if present), and the IPv6 packet. + nonce: count; ##< Nonce chosen by Teredo client to be repeated by + ##< Teredo server. + confirm: count; ##< Confirmation byte to be set to 0 by Teredo client + ##< and non-zero by server if client needs new key. +}; + +## A Teredo origin indication header. See :rfc:`4380` for more information +## about the Teredo protocol. +## +## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication +## teredo_hdr +type teredo_origin: record { + p: port; ##< Unobfuscated UDP port of Teredo client. + a: addr; ##< Unobfuscated IPv4 address of Teredo client. +}; + +## A Teredo packet header. See :rfc:`4380` for more information about the +## Teredo protocol. +## +## .. bro:see:: teredo_bubble teredo_origin_indication teredo_authentication +type teredo_hdr: record { + auth: teredo_auth &optional; ##< Teredo authentication header.
+ origin: teredo_origin &optional; ##< Teredo origin indication header. + hdr: pkt_hdr; ##< IPv6 and transport protocol headers. +}; + +## Definition of "secondary filters". A secondary filter is a BPF filter given as +## index in this table. For each such filter, the corresponding event is raised for +## all matching packets. global secondary_filters: table[string] of event(filter: string, pkt: pkt_hdr) &redef; -global discarder_maxlen = 128 &redef; ##< maximum amount of data passed to fnc +## Maximum length of payload passed to discarder functions. +## +## .. :bro:see:: discarder_check_tcp discarder_check_udp discarder_check_icmp +## discarder_check_ip +global discarder_maxlen = 128 &redef; -global discarder_check_ip: function(i: ip_hdr): bool; -global discarder_check_tcp: function(i: ip_hdr, t: tcp_hdr, d: string): bool; -global discarder_check_udp: function(i: ip_hdr, u: udp_hdr, d: string): bool; -global discarder_check_icmp: function(i: ip_hdr, ih: icmp_hdr): bool; -# End of definition of access to packet headers, discarders. +## Function for skipping packets based on their IP header. If defined, this +## function will be called for all IP packets before Bro performs any further +## analysis. If the function signals to discard a packet, no further processing +## will be performed on it. +## +## p: The IP header of the considered packet. +## +## Returns: True if the packet should not be analyzed any further. +## +## .. :bro:see:: discarder_check_tcp discarder_check_udp discarder_check_icmp +## discarder_maxlen +## +## .. note:: This is very low-level functionality and potentially expensive. +## Avoid using it. +global discarder_check_ip: function(p: pkt_hdr): bool; +## Function for skipping packets based on their TCP header. If defined, this +## function will be called for all TCP packets before Bro performs any further +## analysis. If the function signals to discard a packet, no further processing +## will be performed on it. +## +## p: The IP and TCP headers of the considered packet. +## +## d: Up to :bro:see:`discarder_maxlen` bytes of the TCP payload. +## +## Returns: True if the packet should not be analyzed any further. +## +## .. :bro:see:: discarder_check_ip discarder_check_udp discarder_check_icmp +## discarder_maxlen +## +## .. note:: This is very low-level functionality and potentially expensive. +## Avoid using it. +global discarder_check_tcp: function(p: pkt_hdr, d: string): bool; + +## Function for skipping packets based on their UDP header. If defined, this +## function will be called for all UDP packets before Bro performs any further +## analysis. If the function signals to discard a packet, no further processing +## will be performed on it. +## +## p: The IP and UDP headers of the considered packet. +## +## d: Up to :bro:see:`discarder_maxlen` bytes of the UDP payload. +## +## Returns: True if the packet should not be analyzed any further. +## +## .. :bro:see:: discarder_check_ip discarder_check_tcp discarder_check_icmp +## discarder_maxlen +## +## .. note:: This is very low-level functionality and potentially expensive. +## Avoid using it. +global discarder_check_udp: function(p: pkt_hdr, d: string): bool; + +## Function for skipping packets based on their ICMP header. If defined, this +## function will be called for all ICMP packets before Bro performs any further +## analysis. If the function signals to discard a packet, no further processing +## will be performed on it. +## +## p: The IP and ICMP headers of the considered packet. 
+## +## Returns: True if the packet should not be analyzed any further. +## +## .. :bro:see:: discarder_check_ip discarder_check_tcp discarder_check_udp +## discarder_maxlen +## +## .. note:: This is very low-level functionality and potentially expensive. +## Avoid using it. +global discarder_check_icmp: function(p: pkt_hdr): bool; + +## Bro's watchdog interval. const watchdog_interval = 10 sec &redef; ## The maximum number of timers to expire after processing each new @@ -650,56 +1544,141 @@ const max_timer_expires = 300 &redef; const max_remote_events_processed = 10 &redef; # These need to match the definitions in Login.h. -# TODO: use enum to make them autodoc'able -const LOGIN_STATE_AUTHENTICATE = 0; # trying to authenticate -const LOGIN_STATE_LOGGED_IN = 1; # successful authentication -const LOGIN_STATE_SKIP = 2; # skip any further processing -const LOGIN_STATE_CONFUSED = 3; # we're confused +# +# .. bro:see:: get_login_state +# +# todo::use enum to make them autodoc'able +const LOGIN_STATE_AUTHENTICATE = 0; # Trying to authenticate. +const LOGIN_STATE_LOGGED_IN = 1; # Successful authentication. +const LOGIN_STATE_SKIP = 2; # Skip any further processing. +const LOGIN_STATE_CONFUSED = 3; # We're confused. # It would be nice to replace these function definitions with some # form of parameterized types. + +## Returns minimum of two ``double`` values. +## +## a: First value. +## b: Second value. +## +## Returns: The minimum of *a* and *b*. function min_double(a: double, b: double): double { return a < b ? a : b; } + +## Returns maximum of two ``double`` values. +## +## a: First value. +## b: Second value. +## +## Returns: The maximum of *a* and *b*. function max_double(a: double, b: double): double { return a > b ? a : b; } + +## Returns minimum of two ``interval`` values. +## +## a: First value. +## b: Second value. +## +## Returns: The minimum of *a* and *b*. function min_interval(a: interval, b: interval): interval { return a < b ? a : b; } + +## Returns maximum of two ``interval`` values. +## +## a: First value. +## b: Second value. +## +## Returns: The maximum of *a* and *b*. function max_interval(a: interval, b: interval): interval { return a > b ? a : b; } + +## Returns minimum of two ``count`` values. +## +## a: First value. +## b: Second value. +## +## Returns: The minimum of *a* and *b*. function min_count(a: count, b: count): count { return a < b ? a : b; } + +## Returns maximum of two ``count`` values. +## +## a: First value. +## b: Second value. +## +## Returns: The maximum of *a* and *b*. function max_count(a: count, b: count): count { return a > b ? a : b; } +## TODO. global skip_authentication: set[string] &redef; + +## TODO. global direct_login_prompts: set[string] &redef; + +## TODO. global login_prompts: set[string] &redef; + +## TODO. global login_non_failure_msgs: set[string] &redef; + +## TODO. global login_failure_msgs: set[string] &redef; + +## TODO. global login_success_msgs: set[string] &redef; + +## TODO. global login_timeouts: set[string] &redef; +## A MIME header key/value pair. +## +## .. bro:see:: mime_header_list http_all_headers mime_all_headers mime_one_header type mime_header_rec: record { - name: string; - value: string; + name: string; ##< The header name. + value: string; ##< The header value. }; + +## A list of MIME headers. +## +## .. bro:see:: mime_header_rec http_all_headers mime_all_headers type mime_header_list: table[count] of mime_header_rec; + +## The length of MIME data segments delivered to handlers of +## :bro:see:`mime_segment_data`. 
+##
+## .. bro:see:: mime_segment_data mime_segment_overlap_length
global mime_segment_length = 1024 &redef;
+
+## The number of bytes of overlap between successive segments passed to
+## :bro:see:`mime_segment_data`.
global mime_segment_overlap_length = 0 &redef;

+## An RPC portmapper mapping.
+##
+## .. bro:see:: pm_mappings
type pm_mapping: record {
-	program: count;
-	version: count;
-	p: port;
+	program: count;	##< The RPC program.
+	version: count;	##< The program version.
+	p: port;	##< The port.
};

+## Table of RPC portmapper mappings.
+##
+## .. bro:see:: pm_request_dump
type pm_mappings: table[count] of pm_mapping;

+## An RPC portmapper request.
+##
+## .. bro:see:: pm_attempt_getport pm_request_getport
type pm_port_request: record {
-	program: count;
-	version: count;
-	is_tcp: bool;
+	program: count;	##< The RPC program.
+	version: count;	##< The program version.
+	is_tcp: bool;	##< True if using TCP.
};

+## An RPC portmapper *callit* request.
+##
+## .. bro:see:: pm_attempt_callit pm_request_callit
type pm_callit_request: record {
-	program: count;
-	version: count;
-	proc: count;
-	arg_size: count;
+	program: count;	##< The RPC program.
+	version: count;	##< The program version.
+	proc: count;	##< The procedure being called.
+	arg_size: count;	##< The size of the argument.
};

# See const.bif
@@ -713,6 +1692,10 @@ type pm_callit_request: record {
# const RPC_AUTH_ERROR = 7;
# const RPC_UNKNOWN_ERROR = 8;

+## Mapping of numerical RPC status codes to readable messages.
+##
+## .. bro:see:: pm_attempt_callit pm_attempt_dump pm_attempt_getport
+##    pm_attempt_null pm_attempt_set pm_attempt_unset rpc_dialogue rpc_reply
const RPC_status = {
	[RPC_SUCCESS] = "ok",
	[RPC_PROG_UNAVAIL] = "prog unavail",
@@ -728,247 +1711,315 @@ const RPC_status = {
module NFS3;

export {
-	## Should the read and write events return the file data that has been
-	## read/written?
+	## If true, :bro:see:`nfs_proc_read` and :bro:see:`nfs_proc_write` events return
+	## the file data that has been read/written.
+	##
+	## .. bro:see:: return_data_max return_data_first_only
	const return_data = F &redef;

-	## If bro:id:`nfs_return_data` is true, how much data should be returned at most.
+	## If :bro:id:`NFS3::return_data` is true, how much data should be returned at
+	## most.
	const return_data_max = 512 &redef;

-	## If nfs_return_data is true, whether to *only* return data if the read or write
-	## offset is 0, i.e., only return data for the beginning of the file.
+	## If :bro:id:`NFS3::return_data` is true, whether to *only* return data if the read
+	## or write offset is 0, i.e., only return data for the beginning of the file.
	const return_data_first_only = T &redef;

-	## This record summarizes the general results and status of NFSv3 request/reply
-	## pairs. It's part of every NFSv3 event.
+	## Record summarizing the general results and status of NFSv3 request/reply pairs.
+	##
+	## Note that when *rpc_stat* or *nfs_stat* indicates that the call was not
+	## successful, the reply record passed to the corresponding event will be empty
+	## and contain uninitialized fields, so don't use it. Also note that time and
+	## duration values might not be fully accurate. For TCP, we record times when
+	## the corresponding chunk of data is delivered to the analyzer. Depending on
+	## the reassembler, this might be well after the first packet of the request
+	## was received.
+	##
+	## ..
bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup + ## nfs_proc_mkdir nfs_proc_not_implemented nfs_proc_null + ## nfs_proc_read nfs_proc_readdir nfs_proc_readlink nfs_proc_remove + ## nfs_proc_rmdir nfs_proc_write nfs_reply_status type info_t: record { - ## If this indicates not successful, the reply record in the - ## events will be empty and contain uninitialized fields, so - ## don't use it. - rpc_stat: rpc_status; + ## The RPC status. + rpc_stat: rpc_status; + ## The NFS status. nfs_stat: status_t; - - ## The start time, duration, and length in bytes of the request (call). Note that - ## the start and end time might not be accurate. For TCP, we record the - ## time when a chunk of data is delivered to the analyzer. Depending on the - ## Reassembler, this might be well after the first packet of the request - ## was received. + ## The start time of the request. req_start: time; - ## See :bro:id:`req_start` + ## The duration of the request. req_dur: interval; - ## See :bro:id:`req_start` + ## The length in bytes of the request. req_len: count; - - ## Like :bro:id:`req_start` but for reply. + ## The start time of the reply. rep_start: time; - ## Like :bro:id:`req_dur` but for reply. + ## The duration of the reply. rep_dur: interval; - ## Like :bro:id:`req_len` but for reply. + ## The length in bytes of the reply. rep_len: count; }; - # NFSv3 types. Type names are based on RFC 1813. + ## NFS file attributes. Field names are based on RFC 1813. + ## + ## .. bro:see:: nfs_proc_getattr type fattr_t: record { - ftype: file_type_t; - mode: count; - nlink: count; - uid: count; - gid: count; - size: count; - used: count; - rdev1: count; - rdev2: count; - fsid: count; - fileid: count; - atime: time; - mtime: time; - ctime: time; + ftype: file_type_t; ##< File type. + mode: count; ##< Mode + nlink: count; ##< Number of links. + uid: count; ##< User ID. + gid: count; ##< Group ID. + size: count; ##< Size. + used: count; ##< TODO. + rdev1: count; ##< TODO. + rdev2: count; ##< TODO. + fsid: count; ##< TODO. + fileid: count; ##< TODO. + atime: time; ##< Time of last access. + mtime: time; ##< Time of last modification. + ctime: time; ##< Time of creation. }; + ## NFS *readdir* arguments. + ## + ## .. bro:see:: nfs_proc_readdir type diropargs_t : record { - dirfh: string; ##< the file handle of the directory - fname: string; ##< the name of the file we are interested in + dirfh: string; ##< The file handle of the directory. + fname: string; ##< The name of the file we are interested in. }; - # Note, we don't need a "post_op_attr" type. We use an "fattr_t &optional" - # instead. - - ## If the lookup failed, dir_attr may be set. - ## If the lookup succeeded, fh is always set and obj_attr and dir_attr may be set. + ## NFS lookup reply. If the lookup failed, *dir_attr* may be set. If the lookup + ## succeeded, *fh* is always set and *obj_attr* and *dir_attr* may be set. + ## + ## .. bro:see:: nfs_proc_lookup type lookup_reply_t: record { - fh: string &optional; ##< file handle of object looked up - obj_attr: fattr_t &optional; ##< optional attributes associated w/ file - dir_attr: fattr_t &optional; ##< optional attributes associated w/ dir. + fh: string &optional; ##< File handle of object looked up. + obj_attr: fattr_t &optional; ##< Optional attributes associated w/ file + dir_attr: fattr_t &optional; ##< Optional attributes associated w/ dir. }; + ## NFS *read* arguments. + ## + ## .. 
bro:see:: nfs_proc_read
	type readargs_t: record {
-		fh: string;	##< file handle to read from
-		offset: count;	##< offset in file
-		size: count;	##< number of bytes to read
+		fh: string;	##< File handle to read from.
+		offset: count;	##< Offset in file.
+		size: count;	##< Number of bytes to read.
	};

-	## If the lookup fails, attr may be set. If the lookup succeeds, attr may be set
-	## and all other fields are set.
+	## NFS *read* reply. If the request fails, *attr* may be set. If the request
+	## succeeds, *attr* may be set and all other fields are set.
	type read_reply_t: record {
-		attr: fattr_t &optional;	##< attributes
-		size: count &optional;	##< number of bytes read
-		eof: bool &optional;	##< did the read end at EOF
-		data: string &optional;	##< the actual data; not yet implemented.
+		attr: fattr_t &optional;	##< Attributes.
+		size: count &optional;	##< Number of bytes read.
+		eof: bool &optional;	##< Did the read end at EOF?
+		data: string &optional;	##< The actual data; not yet implemented.
	};

-	## If the request fails, attr may be set. If the request succeeds, attr may be
-	## set and all other fields are set.
+	## NFS *readlink* reply. If the request fails, *attr* may be set. If the request
+	## succeeds, *attr* may be set and all other fields are set.
+	##
+	## .. bro:see:: nfs_proc_readlink
	type readlink_reply_t: record {
-		attr: fattr_t &optional;	##< attributes
-		nfspath: string &optional;	##< the contents of the symlink; in general a pathname as text
+		attr: fattr_t &optional;	##< Attributes.
+		nfspath: string &optional;	##< Contents of the symlink; in general a pathname as text.
	};

+	## NFS *write* arguments.
+	##
+	## .. bro:see:: nfs_proc_write
	type writeargs_t: record {
-		fh: string;	##< file handle to write to
-		offset: count;	##< offset in file
-		size: count;	##< number of bytes to write
-		stable: stable_how_t;	##< how and when data is commited
-		data: string &optional;	##< the actual data; not implemented yet
+		fh: string;	##< File handle to write to.
+		offset: count;	##< Offset in file.
+		size: count;	##< Number of bytes to write.
+		stable: stable_how_t;	##< How and when data is committed.
+		data: string &optional;	##< The actual data; not implemented yet.
	};

+	## NFS *wcc* attributes.
+	##
+	## .. bro:see:: NFS3::write_reply_t
	type wcc_attr_t: record {
-		size: count;
-		atime: time;
-		mtime: time;
+		size: count;	##< The size.
+		atime: time;	##< Access time.
+		mtime: time;	##< Modification time.
	};

-	## If the request fails, pre|post attr may be set. If the request succeeds,
-	## pre|post attr may be set and all other fields are set.
+	## NFS *write* reply. If the request fails, *pre|post* attr may be set. If the
+	## request succeeds, *pre|post* attr may be set and all other fields are set.
+	##
+	## .. bro:see:: nfs_proc_write
	type write_reply_t: record {
-		preattr: wcc_attr_t &optional;	##< pre operation attributes
-		postattr: fattr_t &optional;	##< post operation attributes
-		size: count &optional;
-		commited: stable_how_t &optional;
-		verf: count &optional;	##< write verifier cookue
+		preattr: wcc_attr_t &optional;	##< Pre operation attributes.
+		postattr: fattr_t &optional;	##< Post operation attributes.
+		size: count &optional;	##< Size.
+		commited: stable_how_t &optional;	##< TODO.
+		verf: count &optional;	##< Write verifier cookie.
	};
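To illustrate how these request/reply records reach script land (a sketch only, not part of this patch): a handler for the nfs_proc_read event receives the info_t, readargs_t, and read_reply_t values documented here. The handler body below is made up, and the event signature is quoted from memory of the 2.x event interface, so treat it as an assumption.

    event nfs_proc_read(c: connection, info: NFS3::info_t,
                        req: NFS3::readargs_t, rep: NFS3::read_reply_t)
        {
        # Real code should first check info$rpc_stat / info$nfs_stat as described
        # above before trusting the reply record.
        print fmt("NFS read: %d bytes requested at offset %d from %s",
                  req$size, req$offset, c$id$resp_h);
        }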
-	## reply for create, mkdir, symlink
-	## If the proc failed, dir_*_attr may be set. If the proc succeeded, fh and
-	## the attr's may be set. Note: no guarantee that fh is set after
-	## success.
+	## NFS reply for *create*, *mkdir*, and *symlink*. If the proc
+	## failed, *dir_\*_attr* may be set. If the proc succeeded, *fh* and the *attr*'s
+	## may be set. Note: no guarantee that *fh* is set after success.
+	##
+	## .. bro:see:: nfs_proc_create nfs_proc_mkdir
	type newobj_reply_t: record {
-		fh: string &optional;	##< file handle of object created
-		obj_attr: fattr_t &optional;	##< optional attributes associated w/ new object
-		dir_pre_attr: wcc_attr_t &optional;	##< optional attributes associated w/ dir
-		dir_post_attr: fattr_t &optional;	##< optional attributes associated w/ dir
+		fh: string &optional;	##< File handle of object created.
+		obj_attr: fattr_t &optional;	##< Optional attributes associated w/ new object.
+		dir_pre_attr: wcc_attr_t &optional;	##< Optional attributes associated w/ dir.
+		dir_post_attr: fattr_t &optional;	##< Optional attributes associated w/ dir.
	};

-	## reply for remove, rmdir
-	## Corresponds to "wcc_data" in the spec.
+	## NFS reply for *remove*, *rmdir*. Corresponds to *wcc_data* in the spec.
+	##
+	## .. bro:see:: nfs_proc_remove nfs_proc_rmdir
	type delobj_reply_t: record {
-		dir_pre_attr: wcc_attr_t &optional;	##< optional attributes associated w/ dir
-		dir_post_attr: fattr_t &optional;	##< optional attributes associated w/ dir
+		dir_pre_attr: wcc_attr_t &optional;	##< Optional attributes associated w/ dir.
+		dir_post_attr: fattr_t &optional;	##< Optional attributes associated w/ dir.
	};

-	## This record is used for both readdir and readdirplus.
+	## NFS *readdir* arguments. Used for both *readdir* and *readdirplus*.
+	##
+	## .. bro:see:: nfs_proc_readdir
	type readdirargs_t: record {
-		isplus: bool;	##< is this a readdirplus request?
-		dirfh: string;	##< the directory filehandle
-		cookie: count;	##< cookie / pos in dir; 0 for first call
-		cookieverf: count;	##< the cookie verifier
-		dircount: count;	##< "count" field for readdir; maxcount otherwise (in bytes)
-		maxcount: count &optional;	##< only used for readdirplus. in bytes
+		isplus: bool;	##< Is this a readdirplus request?
+		dirfh: string;	##< The directory filehandle.
+		cookie: count;	##< Cookie / pos in dir; 0 for first call.
+		cookieverf: count;	##< The cookie verifier.
+		dircount: count;	##< "count" field for readdir; maxcount otherwise (in bytes).
+		maxcount: count &optional;	##< Only used for *readdirplus*, in bytes.
	};

-	## fh and attr are used for readdirplus. However, even for readdirplus they may
-	## not be filled out.
+	## NFS *direntry*. *fh* and *attr* are used for *readdirplus*. However, even
+	## for *readdirplus* they may not be filled out.
+	##
+	## .. bro:see:: NFS3::direntry_vec_t NFS3::readdir_reply_t
	type direntry_t: record {
-		fileid: count;	##< e.g., inode number
-		fname: string;	##< filename
-		cookie: count;
-		attr: fattr_t &optional;	##< readdirplus: the FH attributes for the entry
-		fh: string &optional;	##< readdirplus: the FH for the entry
+		fileid: count;	##< E.g., inode number.
+		fname: string;	##< Filename.
+		cookie: count;	##< Cookie value.
+		attr: fattr_t &optional;	##< *readdirplus*: the *fh* attributes for the entry.
+		fh: string &optional;	##< *readdirplus*: the *fh* for the entry.
	};

+	## Vector of NFS *direntry*.
+	##
+	## .. bro:see:: NFS3::readdir_reply_t
	type direntry_vec_t: vector of direntry_t;

-	## Used for readdir and readdirplus.
-	## If error: dir_attr might be set. If success: dir_attr may be set, all others
+	## NFS *readdir* reply. Used for *readdir* and *readdirplus*. If an error is
+	## returned, *dir_attr* might be set. On success, *dir_attr* may be set, all others
	## must be set.
	type readdir_reply_t: record {
-		isplus: bool;	##< is the reply for a readdirplus request
-		dir_attr: fattr_t &optional;
-		cookieverf: count &optional;
-		entries: direntry_vec_t &optional;
-		eof: bool;	##< if true, no more entries in dir.
+		isplus: bool;	##< True if the reply is for a *readdirplus* request.
+		dir_attr: fattr_t &optional;	##< Directory attributes.
+		cookieverf: count &optional;	##< TODO.
+		entries: direntry_vec_t &optional;	##< Returned directory entries.
+		eof: bool;	##< If true, no more entries in directory.
	};

+	## NFS *fsstat*.
	type fsstat_t: record {
-		attrs: fattr_t &optional;
-		tbytes: double;
-		fbytes: double;
-		abytes: double;
-		tfiles: double;
-		ffiles: double;
-		afiles: double;
-		invarsec: interval;
+		attrs: fattr_t &optional;	##< Attributes.
+		tbytes: double;	##< TODO.
+		fbytes: double;	##< TODO.
+		abytes: double;	##< TODO.
+		tfiles: double;	##< TODO.
+		ffiles: double;	##< TODO.
+		afiles: double;	##< TODO.
+		invarsec: interval;	##< TODO.
	};
} # end export

+module Threading;
+
+export {
+	## The heartbeat interval used by the threading framework.
+	## Changing this should usually not be necessary and will break several tests.
+	const heartbeat_interval = 1.0 secs &redef;
+}
+
module GLOBAL;

+## An NTP message.
+##
+## .. bro:see:: ntp_message
type ntp_msg: record {
-	id: count;
-	code: count;
-	stratum: count;
-	poll: count;
-	precision: int;
-	distance: interval;
-	dispersion: interval;
-	ref_t: time;
-	originate_t: time;
-	receive_t: time;
-	xmit_t: time;
+	id: count;	##< Message ID.
+	code: count;	##< Message code.
+	stratum: count;	##< Stratum.
+	poll: count;	##< Poll.
+	precision: int;	##< Precision.
+	distance: interval;	##< Distance.
+	dispersion: interval;	##< Dispersion.
+	ref_t: time;	##< Reference time.
+	originate_t: time;	##< Originating time.
+	receive_t: time;	##< Receive time.
+	xmit_t: time;	##< Send time.
};

-## Maps Samba command numbers to descriptive names.
+## Maps SMB command numbers to descriptive names.
global samba_cmds: table[count] of string &redef &default = function(c: count): string { return fmt("samba-unknown-%d", c); };

+## An SMB command header.
+##
+## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx
+##    smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx
+##    smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot
+##    smb_com_trans_pipe smb_com_trans_rap smb_com_transaction
+##    smb_com_transaction2 smb_com_tree_connect_andx smb_com_tree_disconnect
+##    smb_com_write_andx smb_error smb_get_dfs_referral smb_message
type smb_hdr : record {
-	command: count;
-	status: count;
-	flags: count;
-	flags2: count;
-	tid: count;
-	pid: count;
-	uid: count;
-	mid: count;
+	command: count;	##< The command number (see :bro:see:`samba_cmds`).
+	status: count;	##< The status code.
+	flags: count;	##< Flag set 1.
+	flags2: count;	##< Flag set 2.
+	tid: count;	##< TODO.
+	pid: count;	##< Process ID.
+	uid: count;	##< User ID.
+	mid: count;	##< TODO.
};

+## An SMB transaction.
+##
+## .. bro:see:: smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap
+##    smb_com_transaction smb_com_transaction2
type smb_trans : record {
-	word_count: count;
-	total_param_count: count;
-	total_data_count: count;
-	max_param_count: count;
-	max_data_count: count;
-	max_setup_count: count;
+	word_count: count;	##< TODO.
+	total_param_count: count;	##< TODO.
+	total_data_count: count;	##< TODO.
+	max_param_count: count;	##< TODO.
+	max_data_count: count;	##< TODO.
+ max_setup_count: count; ##< TODO. # flags: count; # timeout: count; - param_count: count; - param_offset: count; - data_count: count; - data_offset: count; - setup_count: count; - setup0: count; - setup1: count; - setup2: count; - setup3: count; - byte_count: count; - parameters: string; + param_count: count; ##< TODO. + param_offset: count; ##< TODO. + data_count: count; ##< TODO. + data_offset: count; ##< TODO. + setup_count: count; ##< TODO. + setup0: count; ##< TODO. + setup1: count; ##< TODO. + setup2: count; ##< TODO. + setup3: count; ##< TODO. + byte_count: count; ##< TODO. + parameters: string; ##< TODO. }; + +## SMB transaction data. +## +## .. bro:see:: smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 +## +## .. todo:: Should this really be a record type? type smb_trans_data : record { - data : string; + data : string; ##< The transaction's data. }; +## Deprecated. +## +## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere +## else. type smb_tree_connect : record { flags: count; password: string; @@ -976,177 +2027,257 @@ type smb_tree_connect : record { service: string; }; +## Deprecated. +## +## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere +## else. type smb_negotiate : table[count] of string; -## A list of router addresses offered by the server. +## A list of router addresses offered by a DHCP server. +## +## .. bro:see:: dhcp_ack dhcp_offer type dhcp_router_list: table[count] of addr; +## A DHCP message. +## +## .. bro:see:: dhcp_ack dhcp_decline dhcp_discover dhcp_inform dhcp_nak +## dhcp_offer dhcp_release dhcp_request type dhcp_msg: record { - op: count; ##< message OP code. 1 = BOOTREQUEST, 2 = BOOTREPLY - m_type: count; ##< the type of DHCP message - xid: count; ##< transaction ID of a DHCP session - h_addr: string; ##< hardware address of the client - ciaddr: addr; ##< original IP address of the client - yiaddr: addr; ##< IP address assigned to the client + op: count; ##< Message OP code. 1 = BOOTREQUEST, 2 = BOOTREPLY + m_type: count; ##< The type of DHCP message. + xid: count; ##< Transaction ID of a DHCP session. + h_addr: string; ##< Hardware address of the client. + ciaddr: addr; ##< Original IP address of the client. + yiaddr: addr; ##< IP address assigned to the client. }; +## A DNS message. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end dns_message +## dns_query_reply dns_rejected dns_request type dns_msg: record { - id: count; + id: count; ##< Transaction ID. - opcode: count; - rcode: count; + opcode: count; ##< Operation code. + rcode: count; ##< Return code. - QR: bool; - AA: bool; - TC: bool; - RD: bool; - RA: bool; - Z: count; + QR: bool; ##< Query response flag. + AA: bool; ##< Authoritative answer flag. + TC: bool; ##< Truncated packet flag. + RD: bool; ##< Recursion desired flag. + RA: bool; ##< Recursion available flag. + Z: count; ##< TODO. - num_queries: count; - num_answers: count; - num_auth: count; - num_addl: count; + num_queries: count; ##< Number of query records. + num_answers: count; ##< Number of answer records. + num_auth: count; ##< Number of authoritative records. + num_addl: count; ##< Number of additional records. }; +## A DNS SOA record. +## +## .. 
bro:see:: dns_SOA_reply
type dns_soa: record {
-	mname: string;	##< primary source of data for zone
-	rname: string;	##< mailbox for responsible person
-	serial: count;	##< version number of zone
-	refresh: interval;	##< seconds before refreshing
-	retry: interval;	##< how long before retrying failed refresh
-	expire: interval;	##< when zone no longer authoritative
-	minimum: interval;	##< minimum TTL to use when exporting
+	mname: string;	##< Primary source of data for zone.
+	rname: string;	##< Mailbox for responsible person.
+	serial: count;	##< Version number of zone.
+	refresh: interval;	##< Seconds before refreshing.
+	retry: interval;	##< How long before retrying failed refresh.
+	expire: interval;	##< When zone no longer authoritative.
+	minimum: interval;	##< Minimum TTL to use when exporting.
};

+## An additional DNS EDNS record.
+##
+## .. bro:see:: dns_EDNS_addl
type dns_edns_additional: record {
-	query: string;
-	qtype: count;
-	t: count;
-	payload_size: count;
-	extended_rcode: count;
-	version: count;
-	z_field: count;
-	TTL: interval;
-	is_query: count;
+	query: string;	##< Query.
+	qtype: count;	##< Query type.
+	t: count;	##< TODO.
+	payload_size: count;	##< TODO.
+	extended_rcode: count;	##< Extended return code.
+	version: count;	##< Version.
+	z_field: count;	##< TODO.
+	TTL: interval;	##< Time-to-live.
+	is_query: count;	##< TODO.
};

+## An additional DNS TSIG record.
+##
+## .. bro:see:: dns_TSIG_addl
type dns_tsig_additional: record {
-	query: string;
-	qtype: count;
-	alg_name: string;
-	sig: string;
-	time_signed: time;
-	fudge: time;
-	orig_id: count;
-	rr_error: count;
-	is_query: count;
+	query: string;	##< Query.
+	qtype: count;	##< Query type.
+	alg_name: string;	##< Algorithm name.
+	sig: string;	##< Signature.
+	time_signed: time;	##< Time when signed.
+	fudge: time;	##< TODO.
+	orig_id: count;	##< TODO.
+	rr_error: count;	##< TODO.
+	is_query: count;	##< TODO.
};

-# Different values for "answer_type" in the following. DNS_QUERY
-# shouldn't occur, it's just for completeness.
-# TODO: use enums to help autodoc
-const DNS_QUERY = 0;
-const DNS_ANS = 1;
-const DNS_AUTH = 2;
-const DNS_ADDL = 3;
+# DNS answer types.
+#
+# .. bro:see:: dns_answer
+#
+# todo:: use enum to make them autodoc'able
+const DNS_QUERY = 0;	##< A query. This shouldn't occur, just for completeness.
+const DNS_ANS = 1;	##< An answer record.
+const DNS_AUTH = 2;	##< An authoritative record.
+const DNS_ADDL = 3;	##< An additional record.

+## The general part of a DNS reply.
+##
+## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_HINFO_reply
+##    dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply
+##    dns_TXT_reply dns_WKS_reply
type dns_answer: record {
+	## Answer type. One of :bro:see:`DNS_QUERY`, :bro:see:`DNS_ANS`,
+	## :bro:see:`DNS_AUTH` and :bro:see:`DNS_ADDL`.
	answer_type: count;
-	query: string;
-	qtype: count;
-	qclass: count;
-	TTL: interval;
+	query: string;	##< Query.
+	qtype: count;	##< Query type.
+	qclass: count;	##< Query class.
+	TTL: interval;	##< Time-to-live.
};

-## For servers in these sets, omit processing the AUTH records
-## they include in their replies.
+## For DNS servers in these sets, omit processing the AUTH records they include in
+## their replies.
+##
+## .. bro:see:: dns_skip_all_auth dns_skip_addl
global dns_skip_auth: set[addr] &redef;

-## For servers in these sets, omit processing the ADDL records
-## they include in their replies.
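To show how the dns_msg/dns_answer records, the DNS_* constants, and the dns_skip_* knobs fit together, here is an illustrative sketch (not part of this patch); the TTL threshold and resolver address are invented, and dns_A_reply is the standard Bro event pairing a dns_msg with a dns_answer.

    event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr)
        {
        # Only consider real answer records, not AUTH/ADDL sections.
        if ( ans$answer_type == DNS_ANS && ans$TTL < 30 secs )
            print fmt("short-lived A record: %s -> %s (TTL %s)", ans$query, a, ans$TTL);
        }

    # Process AUTH records in general, but still skip them for one busy resolver.
    redef dns_skip_all_auth = F;
    redef dns_skip_auth += { 192.0.2.53 };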
+ +## For DNS servers in these sets, omit processing the ADDL records they include in +## their replies. +## +## .. bro:see:: dns_skip_all_addl dns_skip_auth global dns_skip_addl: set[addr] &redef; -## If the following are true, then all AUTH records are skipped. +## If true, all DNS AUTH records are skipped. +## +## .. bro:see:: dns_skip_all_addl dns_skip_auth global dns_skip_all_auth = T &redef; -## If the following are true, then all ADDL records are skipped. + +## If true, all DNS ADDL records are skipped. +## +## .. bro:see:: dns_skip_all_auth dns_skip_addl global dns_skip_all_addl = T &redef; -## If a DNS request includes more than this many queries, assume it's -## non-DNS traffic and do not process it. Set to 0 to turn off this -## functionality. +## If a DNS request includes more than this many queries, assume it's non-DNS +## traffic and do not process it. Set to 0 to turn off this functionality. global dns_max_queries = 5; -## The maxiumum size in bytes for an SSL cipherspec. If we see a packet that -## has bigger cipherspecs, we won't do a comparisons of cipherspecs. -const ssl_max_cipherspec_size = 68 &redef; - -type X509_extensions: table[count] of string; - +## An X509 certificate. +## +## .. bro:see:: x509_certificate type X509: record { - version: count; - serial: string; - subject: string; - issuer: string; - not_valid_before: time; - not_valid_after: time; + version: count; ##< Version number. + serial: string; ##< Serial number. + subject: string; ##< Subject. + issuer: string; ##< Issuer. + not_valid_before: time; ##< Timestamp before when certificate is not valid. + not_valid_after: time; ##< Timestamp after when certificate is not valid. }; -## This is indexed with the CA's name and yields a DER (binary) encoded certificate. -const root_ca_certs: table[string] of string = {} &redef; - +## HTTP session statistics. +## +## .. bro:see:: http_stats type http_stats_rec: record { - num_requests: count; - num_replies: count; - request_version: double; - reply_version: double; + num_requests: count; ##< Number of requests. + num_replies: count; ##< Number of replies. + request_version: double; ##< HTTP version of the requests. + reply_version: double; ##< HTTP Version of the replies. }; +## HTTP message statistics. +## +## .. bro:see:: http_message_done type http_message_stat: record { - ## when the request/reply line was complete + ## When the request/reply line was complete. start: time; - ## whether the message is interrupted - interrupted: bool; - ## reason phrase if interrupted - finish_msg: string; - ## length of body processed (before finished/interrupted) - body_length: count; - ## total len of gaps within body_length - content_gap_length: count; - ## length of headers (including the req/reply line, but not CR/LF's) - header_length: count; + ## Whether the message was interrupted. + interrupted: bool; + ## Reason phrase if interrupted. + finish_msg: string; + ## Length of body processed (before finished/interrupted). + body_length: count; + ## Total length of gaps within body_length. + content_gap_length: count; + ## Length of headers (including the req/reply line, but not CR/LF's). + header_length: count; }; +## Maximum number of HTTP entity data delivered to events. The amount of data +## can be limited for better performance, zero disables truncation. +## +## .. 
bro:see:: http_entity_data skip_http_entity_data skip_http_data global http_entity_data_delivery_size = 1500 &redef; -## Truncate URIs longer than this to prevent over-long URIs (usually sent -## by worms) from slowing down event processing. A value of -1 means "do -## not truncate". +## Skip HTTP data for performance considerations. The skipped +## portion will not go through TCP reassembly. +## +## .. bro:see:: http_entity_data skip_http_entity_data http_entity_data_delivery_size +const skip_http_data = F &redef; + +## Maximum length of HTTP URIs passed to events. Longer ones will be truncated +## to prevent over-long URIs (usually sent by worms) from slowing down event +## processing. A value of -1 means "do not truncate". +## +## .. bro:see:: http_request const truncate_http_URI = -1 &redef; -## IRC-related globals to which the event engine is sensitive. +## IRC join information. +## +## .. bro:see:: irc_join_list type irc_join_info: record { nick: string; channel: string; password: string; usermode: string; }; + +## Set of IRC join information. +## +## .. bro:see:: irc_join_message type irc_join_list: set[irc_join_info]; + +## Deprecated. +## +## .. todo:: Remove. It's still declared internally but doesn't seem used anywhere +## else. global irc_servers : set[addr] &redef; -## Stepping-stone globals. +## Internal to the stepping stone detector. const stp_delta: interval &redef; + +## Internal to the stepping stone detector. const stp_idle_min: interval &redef; -## Don't do analysis on these sources. Used to avoid overload from scanners. +## Internal to the stepping stone detector. global stp_skip_src: set[addr] &redef; +## Deprecated. const interconn_min_interarrival: interval &redef; + +## Deprecated. const interconn_max_interarrival: interval &redef; + +## Deprecated. const interconn_max_keystroke_pkt_size: count &redef; + +## Deprecated. const interconn_default_pkt_size: count &redef; + +## Deprecated. const interconn_stat_period: interval &redef; + +## Deprecated. const interconn_stat_backoff: double &redef; +## Deprecated. type interconn_endp_stats: record { num_pkts: count; num_keystrokes_two_in_row: count; @@ -1160,9 +2291,13 @@ type interconn_endp_stats: record { num_normal_lines: count; }; +## Deprecated. const backdoor_stat_period: interval &redef; + +## Deprecated. const backdoor_stat_backoff: double &redef; +## Deprecated. type backdoor_endp_stats: record { is_partial: bool; num_pkts: count; @@ -1174,295 +2309,418 @@ type backdoor_endp_stats: record { num_7bit_ascii: count; }; +## Description of a signature match. +## +## .. bro:see:: signature_match type signature_state: record { - sig_id: string; ##< ID of the signature - conn: connection; ##< Current connection - is_orig: bool; ##< True if current endpoint is originator - payload_size: count; ##< Payload size of the first pkt of curr. endpoint - + sig_id: string; ##< ID of the matching signature. + conn: connection; ##< Matching connection. + is_orig: bool; ##< True if matching endpoint is originator. + payload_size: count; ##< Payload size of the first matching packet of current endpoint. }; -# This type is no longer used -# TODO: remove any use of this from the core. +# Deprecated. +# +# .. todo:: This type is no longer used. Remove any reference of this from the +# core. type software_version: record { - major: int; # Major version number - minor: int; # Minor version number - minor2: int; # Minor subversion number - addl: string; # Additional version string (e.g. 
"beta42") + major: int; + minor: int; + minor2: int; + addl: string; }; -# This type is no longer used -# TODO: remove any use of this from the core. +# Deprecated. +# +# .. todo:: This type is no longer used. Remove any reference of this from the +# core. type software: record { - name: string; # Unique name of a software, e.g., "OS" + name: string; version: software_version; }; -# The following describe the quality of signature matches used -# for passive fingerprinting. +## Quality of passive fingerprinting matches. +## +## .. .. bro:see:: OS_version type OS_version_inference: enum { - direct_inference, generic_inference, fuzzy_inference, + direct_inference, ##< TODO. + generic_inference, ##< TODO. + fuzzy_inference, ##< TODO. }; +## Passive fingerprinting match. +## +## .. bro:see:: OS_version_found type OS_version: record { - genre: string; # Linux, Windows, AIX, ... - detail: string; # kernel version or such - dist: count; # how far is the host away from the sensor (TTL)? - match_type: OS_version_inference; + genre: string; ##< Linux, Windows, AIX, ... + detail: string; ##< Lernel version or such. + dist: count; ##< How far is the host away from the sensor (TTL)?. + match_type: OS_version_inference; ##< Quality of the match. }; -# Defines for which subnets we should do passive fingerprinting. +## Defines for which subnets we should do passive fingerprinting. +## +## .. bro:see:: OS_version_found global generate_OS_version_event: set[subnet] &redef; -# Type used to report load samples via load_sample(). For now, -# it's a set of names (event names, source file names, and perhaps -# 's, which were seen during the sample. +# Type used to report load samples via :bro:see:`load_sample`. For now, it's a +# set of names (event names, source file names, and perhaps ````, which were seen during the sample. type load_sample_info: set[string]; -# NetFlow-related data structures. - -## The following provides a mean to sort together NetFlow headers and flow -## records at the script level. rcvr_id equals the name of the file -## (e.g., netflow.dat) or the socket address (e.g., 127.0.0.1:5555), -## or an explicit name if specified to -y or -Y; pdu_id is just a serial -## number, ignoring any overflows. +## ID for NetFlow header. This is primarily a means to sort together NetFlow +## headers and flow records at the script level. type nfheader_id: record { + ## Name of the NetFlow file (e.g., ``netflow.dat``) or the receiving socket address + ## (e.g., ``127.0.0.1:5555``), or an explicit name if specified to + ## ``-y`` or ``-Y``. rcvr_id: string; + ## A serial number, ignoring any overflows. pdu_id: count; }; +## A NetFlow v5 header. +## +## .. bro:see:: netflow_v5_header type nf_v5_header: record { - h_id: nfheader_id; ##< ID for sorting, per the above - cnt: count; - sysuptime: interval; ##< router's uptime - exporttime: time; ##< when the data was exported - flow_seq: count; - eng_type: count; - eng_id: count; - sample_int: count; - exporter: addr; + h_id: nfheader_id; ##< ID for sorting. + cnt: count; ##< TODO. + sysuptime: interval; ##< Router's uptime. + exporttime: time; ##< When the data was exported. + flow_seq: count; ##< Sequence number. + eng_type: count; ##< Engine type. + eng_id: count; ##< Engine ID. + sample_int: count; ##< Sampling interval. + exporter: addr; ##< Exporter address. }; +## A NetFlow v5 record. +## +## .. 
bro:see:: netflow_v5_record type nf_v5_record: record { - h_id: nfheader_id; - id: conn_id; - nexthop: addr; - input: count; - output: count; - pkts: count; - octets: count; - first: time; - last: time; - tcpflag_fin: bool; ##< Taken from tcpflags in NF V5; or directly. - tcpflag_syn: bool; - tcpflag_rst: bool; - tcpflag_psh: bool; - tcpflag_ack: bool; - tcpflag_urg: bool; - proto: count; - tos: count; - src_as: count; - dst_as: count; - src_mask: count; - dst_mask: count; + h_id: nfheader_id; ##< ID for sorting. + id: conn_id; ##< Connection ID. + nexthop: addr; ##< Address of next hop. + input: count; ##< Input interface. + output: count; ##< Output interface. + pkts: count; ##< Number of packets. + octets: count; ##< Number of bytes. + first: time; ##< Timestamp of first packet. + last: time; ##< Timestamp of last packet. + tcpflag_fin: bool; ##< FIN flag for TCP flows. + tcpflag_syn: bool; ##< SYN flag for TCP flows. + tcpflag_rst: bool; ##< RST flag for TCP flows. + tcpflag_psh: bool; ##< PSH flag for TCP flows. + tcpflag_ack: bool; ##< ACK flag for TCP flows. + tcpflag_urg: bool; ##< URG flag for TCP flows. + proto: count; ##< IP protocol. + tos: count; ##< Type of service. + src_as: count; ##< Source AS. + dst_as: count; ##< Destination AS. + src_mask: count; ##< Source mask. + dst_mask: count; ##< Destination mask. }; -## The peer record and the corresponding set type used by the -## BitTorrent analyzer. +## A BitTorrent peer. +## +## .. bro:see:: bittorrent_peer_set type bittorrent_peer: record { - h: addr; - p: port; + h: addr; ##< The peer's address. + p: port; ##< The peer's port. }; + +## A set of BitTorrent peers. +## +## .. bro:see:: bt_tracker_response type bittorrent_peer_set: set[bittorrent_peer]; -## The benc value record and the corresponding table type used by the -## BitTorrenttracker analyzer. Note that "benc" = Bencode ("Bee-Encode"), -## per http://en.wikipedia.org/wiki/Bencode. +## BitTorrent "benc" value. Note that "benc" = Bencode ("Bee-Encode"), per +## http://en.wikipedia.org/wiki/Bencode. +## +## .. bro:see:: bittorrent_benc_dir type bittorrent_benc_value: record { - i: int &optional; - s: string &optional; - d: string &optional; - l: string &optional; + i: int &optional; ##< TODO. + s: string &optional; ##< TODO. + d: string &optional; ##< TODO. + l: string &optional; ##< TODO. }; + +## A table of BitTorrent "benc" values. +## +## .. bro:see:: bt_tracker_response type bittorrent_benc_dir: table[string] of bittorrent_benc_value; -## The header table type used by the bittorrenttracker analyzer. +## Header table type used by BitTorrent analyzer. +## +## .. bro:see:: bt_tracker_request bt_tracker_response +## bt_tracker_response_not_ok type bt_tracker_headers: table[string] of string; +module SOCKS; +export { + ## This record is for a SOCKS client or server to provide either a + ## name or an address to represent a desired or established connection. + type Address: record { + host: addr &optional; + name: string &optional; + } &log; +} +module GLOBAL; + @load base/event.bif -# The filter the user has set via the -f command line options, or -# empty if none. +## BPF filter the user has set via the -f command line options. Empty if none. const cmd_line_bpf_filter = "" &redef; -## Rotate logs every x interval. +## The maximum number of open files to keep cached at a given time. +## If set to zero, this is automatically determined by inspecting +## the current/maximum limit on open files for the process. +const max_files_in_cache = 0 &redef; + +## Deprecated. 
const log_rotate_interval = 0 sec &redef; -## If set, rotate logs at given time + i * log_rotate_interval. -## (string is time in 24h format, e.g., "18:00"). +## Deprecated. const log_rotate_base_time = "0:00" &redef; -## Rotate logs when they reach this size (in bytes). Note, the -## parameter is a double rather than a count to enable easy expression -## of large values such as 1e7 or exceeding 2^32. +## Deprecated. const log_max_size = 0.0 &redef; -## Default public key for encrypting log files. +## Deprecated. const log_encryption_key = "" &redef; -## Write profiling info into this file. +## Write profiling info into this file in regular intervals. The easiest way to +## activate profiling is loading :doc:`/scripts/policy/misc/profiling`. +## +## .. bro:see:: profiling_interval expensive_profiling_multiple segment_profiling global profiling_file: file &redef; -## Update interval for profiling (0 disables). +## Update interval for profiling (0 disables). The easiest way to activate +## profiling is loading :doc:`/scripts/policy/misc/profiling`. +## +## .. bro:see:: profiling_file expensive_profiling_multiple segment_profiling const profiling_interval = 0 secs &redef; -## Multiples of profiling_interval at which (expensive) memory -## profiling is done (0 disables). +## Multiples of profiling_interval at which (more expensive) memory profiling is +## done (0 disables). +## +## .. bro:see:: profiling_interval profiling_file segment_profiling const expensive_profiling_multiple = 0 &redef; ## If true, then write segment profiling information (very high volume!) -## in addition to statistics. +## in addition to profiling statistics. +## +## .. bro:see:: profiling_interval expensive_profiling_multiple profiling_file const segment_profiling = F &redef; -## Output packet profiling information every secs (mode 1), -## every packets (mode 2), or every bytes (mode 3). -## Mode 0 disables. +## Output modes for packet profiling information. +## +## .. bro:see:: pkt_profile_mode pkt_profile_freq pkt_profile_mode pkt_profile_file type pkt_profile_modes: enum { - PKT_PROFILE_MODE_NONE, - PKT_PROFILE_MODE_SECS, - PKT_PROFILE_MODE_PKTS, - PKT_PROFILE_MODE_BYTES, + PKT_PROFILE_MODE_NONE, ##< No output. + PKT_PROFILE_MODE_SECS, ##< Output every :bro:see:`pkt_profile_freq` seconds. + PKT_PROFILE_MODE_PKTS, ##< Output every :bro:see:`pkt_profile_freq` packets. + PKT_PROFILE_MODE_BYTES, ##< Output every :bro:see:`pkt_profile_freq` bytes. }; + +## Output modes for packet profiling information. +## +## .. bro:see:: pkt_profile_modes pkt_profile_freq pkt_profile_mode pkt_profile_file const pkt_profile_mode = PKT_PROFILE_MODE_NONE &redef; ## Frequency associated with packet profiling. +## +## .. bro:see:: pkt_profile_modes pkt_profile_mode pkt_profile_mode pkt_profile_file const pkt_profile_freq = 0.0 &redef; ## File where packet profiles are logged. +## +## .. bro:see:: pkt_profile_modes pkt_profile_freq pkt_profile_mode pkt_profile_mode global pkt_profile_file: file &redef; -## Rate at which to generate load_sample events, *if* you've also -## defined a load_sample handler. Units are inverse number of packets; -## e.g., a value of 20 means "roughly one in every 20 packets". +## Rate at which to generate :bro:see:`load_sample` events. As all +## events, the event is only generated if you've also defined a +## :bro:see:`load_sample` handler. Units are inverse number of packets; e.g., a +## value of 20 means "roughly one in every 20 packets". +## +## .. 
bro:see:: load_sample global load_sample_freq = 20 &redef; -## Rate at which to generate gap_report events assessing to what -## degree the measurement process appears to exhibit loss. +## Rate at which to generate :bro:see:`gap_report` events assessing to what degree +## the measurement process appears to exhibit loss. +## +## .. bro:see:: gap_report const gap_report_freq = 1.0 sec &redef; -## Whether we want content_gap and drop reports for partial connections -## (a connection is partial if it is missing a full handshake). Note that -## gap reports for partial connections might not be reliable. +## Whether we want :bro:see:`content_gap` and :bro:see:`gap_report` for partial +## connections. A connection is partial if it is missing a full handshake. Note +## that gap reports for partial connections might not be reliable. +## +## .. bro:see:: content_gap gap_report partial_connection const report_gaps_for_partial = F &redef; -## Globals associated with entire-run statistics on gaps (useful -## for final summaries). - -## The CA certificate file to authorize remote Bros. +## The CA certificate file to authorize remote Bros/Broccolis. +## +## .. bro:see:: ssl_private_key ssl_passphrase const ssl_ca_certificate = "" &redef; ## File containing our private key and our certificate. +## +## .. bro:see:: ssl_ca_certificate ssl_passphrase const ssl_private_key = "" &redef; ## The passphrase for our private key. Keeping this undefined ## causes Bro to prompt for the passphrase. +## +## .. bro:see:: ssl_private_key ssl_ca_certificate const ssl_passphrase = "" &redef; -## Whether the Bro-level packet filter drops packets per default or not. +## Default mode for Bro's user-space dynamic packet filter. If true, packets that +## aren't explicitly allowed through, are dropped from any further processing. +## +## .. note:: This is not the BPF packet filter but an additional dynamic filter +## that Bro optionally applies just before normal processing starts. +## +## .. bro:see:: install_dst_addr_filter install_dst_net_filter +## install_src_addr_filter install_src_net_filter uninstall_dst_addr_filter +## uninstall_dst_net_filter uninstall_src_addr_filter uninstall_src_net_filter const packet_filter_default = F &redef; ## Maximum size of regular expression groups for signature matching. const sig_max_group_size = 50 &redef; -## If true, send logger messages to syslog. +## Deprecated. No longer functional. const enable_syslog = F &redef; -## This is transmitted to peers receiving our events. +## Description transmitted to remote communication peers for identification. const peer_description = "bro" &redef; -## If true, broadcast events/state received from one peer to other peers. +## If true, broadcast events received from one peer to all other peers. ## -## .. note:: These options are only temporary. They will disappear when we get -## a more sophisticated script-level communication framework. +## .. bro:see:: forward_remote_state_changes +## +## .. note:: This option is only temporary and will disappear once we get a more +## sophisticated script-level communication framework. const forward_remote_events = F &redef; -## See :bro:id:`forward_remote_events` + +## If true, broadcast state updates received from one peer to all other peers. +## +## .. bro:see:: forward_remote_events +## +## .. note:: This option is only temporary and will disappear once we get a more +## sophisticated script-level communication framework. 
const forward_remote_state_changes = F &redef; +## Place-holder constant indicating "no peer". const PEER_ID_NONE = 0; -## Whether to use the connection tracker. -const use_connection_compressor = F &redef; +# Signature payload pattern types. +# todo::use enum to help autodoc +# todo::Still used? +#const SIG_PATTERN_PAYLOAD = 0; +#const SIG_PATTERN_HTTP = 1; +#const SIG_PATTERN_FTP = 2; +#const SIG_PATTERN_FINGER = 3; -## Whether compressor should handle refused connections itself. -const cc_handle_resets = F &redef; +# Deprecated. +# todo::Should use the new logging framework directly. +const REMOTE_LOG_INFO = 1; ##< Deprecated. +const REMOTE_LOG_ERROR = 2; ##< Deprecated. -## Whether compressor should only take care of initial SYNs. -## (By default on, this is basically "connection compressor lite".) -const cc_handle_only_syns = T &redef; - -## Whether compressor instantiates full state when originator sends a -## non-control packet. -const cc_instantiate_on_data = F &redef; - -# Signature payload pattern types -# TODO: use enum to help autodoc -const SIG_PATTERN_PAYLOAD = 0; -const SIG_PATTERN_HTTP = 1; -const SIG_PATTERN_FTP = 2; -const SIG_PATTERN_FINGER = 3; - -# Log-levels for remote_log. -# Eventually we should create a general logging framework and merge these in. -# TODO: use enum to help autodoc -const REMOTE_LOG_INFO = 1; -const REMOTE_LOG_ERROR = 2; - -# Sources for remote_log. -# TODO: use enum to help autodoc -const REMOTE_SRC_CHILD = 1; -const REMOTE_SRC_PARENT = 2; -const REMOTE_SRC_SCRIPT = 3; +# Source of logging messages from the communication framework. +# todo::these should go into an enum to make them autodoc'able. +const REMOTE_SRC_CHILD = 1; ##< Message from the child process. +const REMOTE_SRC_PARENT = 2; ##< Message from the parent process. +const REMOTE_SRC_SCRIPT = 3; ##< Message from a policy script. ## Synchronize trace processing at a regular basis in pseudo-realtime mode. +## +## .. bro:see:: remote_trace_sync_peers const remote_trace_sync_interval = 0 secs &redef; -## Number of peers across which to synchronize trace processing. +## Number of peers across which to synchronize trace processing in +## pseudo-realtime mode. +## +## .. bro:see:: remote_trace_sync_interval const remote_trace_sync_peers = 0 &redef; -## Whether for &synchronized state to send the old value as a consistency check. +## Whether for :bro:attr:`&synchronized` state to send the old value as a +## consistency check. const remote_check_sync_consistency = F &redef; ## Analyzer tags. The core automatically defines constants -## ANALYZER_*, e.g., ANALYZER_HTTP. +## ``ANALYZER_*``, e.g., ``ANALYZER_HTTP``. +## +## .. bro:see:: dpd_config +## +## .. todo::We should autodoc these automaticallty generated constants. type AnalyzerTag: count; -# DPD configuration. - +## Set of ports activating a particular protocol analysis. +## +## .. bro:see:: dpd_config type dpd_protocol_config: record { - ports: set[port] &optional; + ports: set[port] &optional; ##< Set of ports. }; +## Port configuration for Bro's "dynamic protocol detection". Protocol +## analyzers can be activated via either well-known ports or content analysis. +## This table defines the ports. +## +## .. bro:see:: dpd_reassemble_first_packets dpd_buffer_size +## dpd_match_only_beginning dpd_ignore_ports const dpd_config: table[AnalyzerTag] of dpd_protocol_config = {} &redef; ## Reassemble the beginning of all TCP connections before doing -## signature-matching for protocol detection. +## signature-matching. 
Enabling this provides more accurate matching at the
+## expense of CPU cycles.
+##
+## .. bro:see:: dpd_config dpd_buffer_size
+##    dpd_match_only_beginning dpd_ignore_ports
+##
+## .. note:: Despite the name, this option affects *all* signature matching, not
+##    only signatures used for dynamic protocol detection.
const dpd_reassemble_first_packets = T &redef;

-## Size of per-connection buffer in bytes. If the buffer is full, data is
-## deleted and lost to analyzers that are activated afterwards.
+## Size of per-connection buffer used for dynamic protocol detection. For each
+## connection, Bro buffers this initial amount of payload in memory so that
+## complete protocol analysis can start even after the initial packets have
+## already passed through (i.e., when a DPD signature matches only later).
+## However, once the buffer is full, data is deleted and lost to analyzers that are
+## activated afterwards. Then only analyzers that can deal with partial
+## connections will be able to analyze the session.
+##
+## .. bro:see:: dpd_reassemble_first_packets dpd_config dpd_match_only_beginning
+##    dpd_ignore_ports
const dpd_buffer_size = 1024 &redef;

## If true, stops signature matching if dpd_buffer_size has been reached.
+##
+## .. bro:see:: dpd_reassemble_first_packets dpd_buffer_size
+##    dpd_config dpd_ignore_ports
+##
+## .. note:: Despite the name, this option affects *all* signature matching, not
+##    only signatures used for dynamic protocol detection.
const dpd_match_only_beginning = T &redef;

-## If true, don't consider any ports for deciding which analyzer to use.
+## If true, don't consider any ports for deciding which protocol analyzer to
+## use. If so, the value of :bro:see:`dpd_config` is ignored.
+##
+## .. bro:see:: dpd_reassemble_first_packets dpd_buffer_size
+##    dpd_match_only_beginning dpd_config
const dpd_ignore_ports = F &redef;

-## Ports which the core considers being likely used by servers.
+## Ports which the core considers being likely used by servers. For ports in
+## this set, it may heuristically decide to flip the direction of the
+## connection if it misses the initial handshake.
const likely_server_ports: set[port] &redef;

-## Set of all ports for which we know an analyzer.
+## Deprecated. Set of all ports for which we know an analyzer, built by
+## :doc:`/scripts/base/frameworks/dpd/main`.
+##
+## .. todo:: This should be defined by :doc:`/scripts/base/frameworks/dpd/main`
+##    itself if we still need it.
global dpd_analyzer_ports: table[port] of set[AnalyzerTag];
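As a usage sketch for the DPD knobs above (illustration only, not part of this patch; the analyzer tag and the extra ports are arbitrary examples), a user script could register additional well-known ports for an analyzer and mark them as likely server ports:

    # Also treat 8080/tcp and 8888/tcp as well-known HTTP ports ...
    redef dpd_config += { [ANALYZER_HTTP] = [$ports = set(8080/tcp, 8888/tcp)] };

    # ... and tell the core they are likely server-side ports, so it can guess the
    # connection's direction if it misses the initial handshake.
    redef likely_server_ports += { 8080/tcp, 8888/tcp };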
## Per-incident timer managers are drained after this amount of inactivity.
@@ -1474,37 +2732,77 @@ const time_machine_profiling = F &redef;

## If true, warns about unused event handlers at startup.
const check_for_unused_event_handlers = F &redef;

-## If true, dumps all invoked event handlers at startup.
-const dump_used_event_handlers = F &redef;
+# If true, dumps all invoked event handlers at startup.
+# todo:: Still used?
+# const dump_used_event_handlers = F &redef;

-## If true, we suppress prints to local files if we have a receiver for
-## print_hook events. Ignored for files with a &disable_print_hook attribute.
+## Deprecated.
const suppress_local_output = F &redef;

## Holds the filename of the trace file given with -w (empty if none).
+##
+## .. bro:see:: record_all_packets
const trace_output_file = "";

-## If a trace file is given, dump *all* packets seen by Bro into it.
-## By default, Bro applies (very few) heuristics to reduce the volume.
-## A side effect of setting this to true is that we can write the
-## packets out before we actually process them, which can be helpful
-## for debugging in case the analysis triggers a crash.
+## If a trace file is given with ``-w``, dump *all* packets seen by Bro into it. By
+## default, Bro applies (very few) heuristics to reduce the volume. A side effect
+## of setting this to true is that we can write the packets out before we actually
+## process them, which can be helpful for debugging in case the analysis triggers a
+## crash.
+##
+## .. bro:see:: trace_output_file
const record_all_packets = F &redef;

-## Some connections (e.g., SSH) retransmit the acknowledged last
-## byte to keep the connection alive. If ignore_keep_alive_rexmit
-## is set to T, such retransmissions will be excluded in the rexmit
-## counter in conn_stats.
+## Ignore certain TCP retransmissions for :bro:see:`conn_stats`. Some connections
+## (e.g., SSH) retransmit the acknowledged last byte to keep the connection alive.
+## If *ignore_keep_alive_rexmit* is set to true, such retransmissions will be
+## excluded in the rexmit counter in :bro:see:`conn_stats`.
+##
+## .. bro:see:: conn_stats
const ignore_keep_alive_rexmit = F &redef;

-## Skip HTTP data portions for performance considerations (the skipped
-## portion will not go through TCP reassembly).
-const skip_http_data = F &redef;
+module Tunnel;
+export {
+	## The maximum depth of a tunnel to decapsulate until giving up.
+	## Setting this to zero will disable all types of tunnel decapsulation.
+	const max_depth: count = 2 &redef;

-## Whether the analysis engine parses IP packets encapsulated in
-## UDP tunnels. See also: udp_tunnel_port, policy/udp-tunnel.bro.
-const parse_udp_tunnels = F &redef;
+	## Toggle whether to do IPv{4,6}-in-IPv{4,6} decapsulation.
+	const enable_ip = T &redef;

-# Load the logging framework here because it uses fairly deep integration with
+	## Toggle whether to do IPv{4,6}-in-AYIYA decapsulation.
+	const enable_ayiya = T &redef;
+
+	## Toggle whether to do IPv6-in-Teredo decapsulation.
+	const enable_teredo = T &redef;
+
+	## With this option set, the Teredo analysis will first check to see if
+	## other protocol analyzers have confirmed that they think they're
+	## parsing the right protocol and only continue with Teredo tunnel
+	## decapsulation if nothing else has yet confirmed. This can help
+	## reduce false positives of UDP traffic (e.g. DNS) that also happens
+	## to have a valid Teredo encapsulation.
+	const yielding_teredo_decapsulation = T &redef;
+
+	## With this set, the Teredo analyzer waits until it sees both sides
+	## of a connection using a valid Teredo encapsulation before issuing
+	## a :bro:see:`protocol_confirmation`. If it's false, the first
+	## occurrence of a packet with valid Teredo encapsulation causes a
+	## confirmation. Both cases are still subject to effects of
+	## :bro:see:`Tunnel::yielding_teredo_decapsulation`.
+	const delay_teredo_confirmation = T &redef;
+
+	## How often to clean up internal state for inactive IP tunnels.
+	const ip_tunnel_timeout = 24hrs &redef;
+} # end export
+module GLOBAL;
+
+## Number of bytes per packet to capture from live interfaces.
+const snaplen = 8192 &redef;
+
+# Load the logging framework here because it uses fairly deep integration with
@load base/frameworks/logging + +@load base/frameworks/input + diff --git a/scripts/base/init-default.bro b/scripts/base/init-default.bro index 1cf125c3ab..91011738d1 100644 --- a/scripts/base/init-default.bro +++ b/scripts/base/init-default.bro @@ -29,6 +29,7 @@ @load base/frameworks/metrics @load base/frameworks/intel @load base/frameworks/reporter +@load base/frameworks/tunnels @load base/protocols/conn @load base/protocols/dns @@ -36,6 +37,7 @@ @load base/protocols/http @load base/protocols/irc @load base/protocols/smtp +@load base/protocols/socks @load base/protocols/ssh @load base/protocols/ssl @load base/protocols/syslog diff --git a/scripts/base/protocols/conn/__load__.bro b/scripts/base/protocols/conn/__load__.bro index 8c673eca85..719486d885 100644 --- a/scripts/base/protocols/conn/__load__.bro +++ b/scripts/base/protocols/conn/__load__.bro @@ -1,3 +1,4 @@ @load ./main @load ./contents @load ./inactivity +@load ./polling diff --git a/scripts/base/protocols/conn/contents.bro b/scripts/base/protocols/conn/contents.bro index feabb1303c..2e6b547ab1 100644 --- a/scripts/base/protocols/conn/contents.bro +++ b/scripts/base/protocols/conn/contents.bro @@ -1,23 +1,27 @@ ##! This script can be used to extract either the originator's data or the ##! responders data or both. By default nothing is extracted, and in order ##! to actually extract data the ``c$extract_orig`` and/or the -##! ``c$extract_resp`` variable must be set to T. One way to achieve this -##! would be to handle the connection_established event elsewhere and set the -##! extract_orig and extract_resp options there. However, there may be trouble -##! with the timing due the event queue delay. -##! This script does not work well in a cluster context unless it has a -##! remotely mounted disk to write the content files to. +##! ``c$extract_resp`` variable must be set to ``T``. One way to achieve this +##! would be to handle the :bro:id:`connection_established` event elsewhere +##! and set the ``extract_orig`` and ``extract_resp`` options there. +##! However, there may be trouble with the timing due to event queue delay. +##! +##! .. note:: +##! +##! This script does not work well in a cluster context unless it has a +##! remotely mounted disk to write the content files to. @load base/utils/files module Conn; export { - ## The prefix given to files as they are opened on disk. + ## The prefix given to files containing extracted connections as they are + ## opened on disk. const extraction_prefix = "contents" &redef; - ## If this variable is set to T, then all contents of all files will be - ## extracted. + ## If this variable is set to ``T``, then all contents of all connections + ## will be extracted. const default_extract = F &redef; } diff --git a/scripts/base/protocols/conn/inactivity.bro b/scripts/base/protocols/conn/inactivity.bro index 04dab62470..28df192de3 100644 --- a/scripts/base/protocols/conn/inactivity.bro +++ b/scripts/base/protocols/conn/inactivity.bro @@ -4,7 +4,7 @@ module Conn; export { - ## Define inactivty timeouts by the service detected being used over + ## Define inactivity timeouts by the service detected being used over ## the connection. const analyzer_inactivity_timeouts: table[AnalyzerTag] of interval = { # For interactive services, allow longer periods of inactivity. diff --git a/scripts/base/protocols/conn/main.bro b/scripts/base/protocols/conn/main.bro index 751fe8f6cf..05e6170dc8 100644 --- a/scripts/base/protocols/conn/main.bro +++ b/scripts/base/protocols/conn/main.bro @@ -1,20 +1,36 @@ +##! 
This script manages the tracking/logging of general information regarding +##! TCP, UDP, and ICMP traffic. For UDP and ICMP, "connections" are to +##! be interpreted using flow semantics (sequence of packets from a source +##! host/port to a destination host/port). Further, ICMP "ports" are to +##! be interpreted with the source port being the ICMP message type and +##! the destination port being the ICMP message code. + @load base/utils/site module Conn; export { + ## The connection logging stream identifier. redef enum Log::ID += { LOG }; + ## The record type which contains the column fields of the connection log. type Info: record { ## This is the time of the first packet. ts: time &log; + ## A unique identifier of the connection. uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. id: conn_id &log; + ## The transport layer protocol of the connection. proto: transport_proto &log; + ## An identification of an application protocol being sent over + ## the connection. service: string &log &optional; + ## How long the connection lasted. For 3-way or 4-way connection + ## tear-downs, this will not include the final ACK. duration: interval &log &optional; ## The number of payload bytes the originator sent. For TCP - ## this is taken from sequence numbers and might be inaccurate + ## this is taken from sequence numbers and might be inaccurate ## (e.g., due to large connections) orig_bytes: count &log &optional; ## The number of payload bytes the responder sent. See ``orig_bytes``. @@ -38,21 +54,21 @@ export { ## OTH No SYN seen, just midstream traffic (a "partial connection" that was not later closed). ## ========== =============================================== conn_state: string &log &optional; - + ## If the connection is originated locally, this value will be T. If ## it was originated remotely it will be F. In the case that the - ## :bro:id:`Site::local_nets` variable is undefined, this field will + ## :bro:id:`Site::local_nets` variable is undefined, this field will ## be left empty at all times. local_orig: bool &log &optional; - - ## Indicates the number of bytes missed in content gaps which is - ## representative of packet loss. A value other than zero will - ## normally cause protocol analysis to fail but some analysis may + + ## Indicates the number of bytes missed in content gaps, which is + ## representative of packet loss. A value other than zero will + ## normally cause protocol analysis to fail but some analysis may ## have been completed prior to the packet loss. missed_bytes: count &log &default=0; - ## Records the state history of (TCP) connections as - ## a string of letters. + ## Records the state history of connections as a string of letters. + ## The meaning of those letters is: ## ## ====== ==================================================== ## Letter Meaning @@ -67,25 +83,33 @@ export { ## i inconsistent packet (e.g. SYN+RST bits both set) ## ====== ==================================================== ## - ## If the letter is in upper case it means the event comes from the - ## originator and lower case then means the responder. - ## Also, there is compression. We only record one "d" in each direction, - ## for instance. I.e., we just record that data went in that direction. - ## This history is not meant to encode how much data that happened to be. + ## If the event comes from the originator, the letter is in upper-case; if it comes + ## from the responder, it's in lower-case.
Multiple packets of the same type will + ## only be noted once (e.g. we only record one "d" in each direction, regardless of + ## how many data packets were seen). history: string &log &optional; - ## Number of packets the originator sent. + ## Number of packets that the originator sent. ## Only set if :bro:id:`use_conn_size_analyzer` = T orig_pkts: count &log &optional; - ## Number IP level bytes the originator sent (as seen on the wire, + ## Number of IP level bytes that the originator sent (as seen on the wire, ## taken from IP total_length header field). ## Only set if :bro:id:`use_conn_size_analyzer` = T orig_ip_bytes: count &log &optional; - ## Number of packets the responder sent. See ``orig_pkts``. + ## Number of packets that the responder sent. + ## Only set if :bro:id:`use_conn_size_analyzer` = T resp_pkts: count &log &optional; - ## Number IP level bytes the responder sent. See ``orig_pkts``. + ## Number of IP level bytes that the responder sent (as seen on the wire, + ## taken from IP total_length header field). + ## Only set if :bro:id:`use_conn_size_analyzer` = T resp_ip_bytes: count &log &optional; + ## If this connection was over a tunnel, indicate the + ## *uid* values for any encapsulating parent connections + ## used over the lifetime of this inner connection. + tunnel_parents: set[string] &log; }; - + + ## Event that can be handled to access the :bro:type:`Conn::Info` + ## record as it is sent on to the logging framework. global log_conn: event(rec: Info); } @@ -171,13 +195,15 @@ function set_conn(c: connection, eoc: bool) c$conn$ts=c$start_time; c$conn$uid=c$uid; c$conn$id=c$id; + if ( c?$tunnel && |c$tunnel| > 0 ) + add c$conn$tunnel_parents[c$tunnel[|c$tunnel|-1]$uid]; c$conn$proto=get_port_transport_proto(c$id$resp_p); if( |Site::local_nets| > 0 ) c$conn$local_orig=Site::is_local_addr(c$id$orig_h); - + if ( eoc ) { - if ( c$duration > 0secs ) + if ( c$duration > 0secs ) { c$conn$duration=c$duration; c$conn$orig_bytes=c$orig$size; @@ -193,7 +219,7 @@ function set_conn(c: connection, eoc: bool) c$conn$resp_ip_bytes = c$resp$num_bytes_ip; } local service = determine_service(c); - if ( service != "" ) + if ( service != "" ) c$conn$service=service; c$conn$conn_state=conn_state(c, get_port_transport_proto(c$id$resp_p)); @@ -205,10 +231,18 @@ function set_conn(c: connection, eoc: bool) event content_gap(c: connection, is_orig: bool, seq: count, length: count) &priority=5 { set_conn(c, F); - + c$conn$missed_bytes = c$conn$missed_bytes + length; } - + +event tunnel_changed(c: connection, e: EncapsulatingConnVector) &priority=5 + { + set_conn(c, F); + if ( |e| > 0 ) + add c$conn$tunnel_parents[e[|e|-1]$uid]; + c$tunnel = e; + } + event connection_state_remove(c: connection) &priority=5 { set_conn(c, T); diff --git a/scripts/base/protocols/conn/polling.bro b/scripts/base/protocols/conn/polling.bro new file mode 100644 index 0000000000..45c09c8465 --- /dev/null +++ b/scripts/base/protocols/conn/polling.bro @@ -0,0 +1,49 @@ +##! Implements a generic way to poll connections looking for certain features +##! (e.g. monitor bytes transferred). The specific feature of a connection +##! to look for, the polling interval, and the code to execute if the feature +##! is found are all controlled by user-defined callback functions. + +module ConnPolling; + +export { + ## Starts monitoring a given connection. + ## + ## c: The connection to watch.
+ ## + ## callback: A callback function that takes as arguments the monitored + ## *connection*, and counter *cnt* that increments each time the + ## callback is called. It returns an interval indicating how long + ## in the future to schedule an event which will call the + ## callback. A negative return interval causes polling to stop. + ## + ## cnt: The initial value of a counter which gets passed to *callback*. + ## + ## i: The initial interval at which to schedule the next callback. + ## May be ``0secs`` to poll right away. + global watch: function(c: connection, + callback: function(c: connection, cnt: count): interval, + cnt: count, i: interval); +} + +event ConnPolling::check(c: connection, + callback: function(c: connection, cnt: count): interval, + cnt: count) + { + if ( ! connection_exists(c$id) ) + return; + + lookup_connection(c$id); # updates the conn val + + local next_interval = callback(c, cnt); + if ( next_interval < 0secs ) + return; + + watch(c, callback, cnt + 1, next_interval); + } + +function watch(c: connection, + callback: function(c: connection, cnt: count): interval, + cnt: count, i: interval) + { + schedule i { ConnPolling::check(c, callback, cnt) }; + } diff --git a/scripts/base/protocols/dns/consts.bro b/scripts/base/protocols/dns/consts.bro index b57170dded..fbf4aba008 100644 --- a/scripts/base/protocols/dns/consts.bro +++ b/scripts/base/protocols/dns/consts.bro @@ -4,9 +4,9 @@ module DNS; export { - const PTR = 12; - const EDNS = 41; - const ANY = 255; + const PTR = 12; ##< RR TYPE value for a domain name pointer. + const EDNS = 41; ##< An OPT RR TYPE value described by EDNS. + const ANY = 255; ##< A QTYPE value describing a request for all records. ## Mapping of DNS query type codes to human readable string representation. const query_types = { @@ -29,50 +29,43 @@ export { [ANY] = "*", } &default = function(n: count): string { return fmt("query-%d", n); }; - const code_types = { - [0] = "X0", - [1] = "Xfmt", - [2] = "Xsrv", - [3] = "Xnam", - [4] = "Ximp", - [5] = "X[", - } &default="?"; - ## Errors used for non-TSIG/EDNS types. 
const base_errors = { - [0] = "NOERROR", ##< No Error - [1] = "FORMERR", ##< Format Error - [2] = "SERVFAIL", ##< Server Failure - [3] = "NXDOMAIN", ##< Non-Existent Domain - [4] = "NOTIMP", ##< Not Implemented - [5] = "REFUSED", ##< Query Refused - [6] = "YXDOMAIN", ##< Name Exists when it should not - [7] = "YXRRSET", ##< RR Set Exists when it should not - [8] = "NXRRSet", ##< RR Set that should exist does not - [9] = "NOTAUTH", ##< Server Not Authoritative for zone - [10] = "NOTZONE", ##< Name not contained in zone - [11] = "unassigned-11", ##< available for assignment - [12] = "unassigned-12", ##< available for assignment - [13] = "unassigned-13", ##< available for assignment - [14] = "unassigned-14", ##< available for assignment - [15] = "unassigned-15", ##< available for assignment - [16] = "BADVERS", ##< for EDNS, collision w/ TSIG - [17] = "BADKEY", ##< Key not recognized - [18] = "BADTIME", ##< Signature out of time window - [19] = "BADMODE", ##< Bad TKEY Mode - [20] = "BADNAME", ##< Duplicate key name - [21] = "BADALG", ##< Algorithm not supported - [22] = "BADTRUNC", ##< draft-ietf-dnsext-tsig-sha-05.txt - [3842] = "BADSIG", ##< 16 <= number collision with EDNS(16); - ##< this is a translation from TSIG(16) + [0] = "NOERROR", # No Error + [1] = "FORMERR", # Format Error + [2] = "SERVFAIL", # Server Failure + [3] = "NXDOMAIN", # Non-Existent Domain + [4] = "NOTIMP", # Not Implemented + [5] = "REFUSED", # Query Refused + [6] = "YXDOMAIN", # Name Exists when it should not + [7] = "YXRRSET", # RR Set Exists when it should not + [8] = "NXRRSet", # RR Set that should exist does not + [9] = "NOTAUTH", # Server Not Authoritative for zone + [10] = "NOTZONE", # Name not contained in zone + [11] = "unassigned-11", # available for assignment + [12] = "unassigned-12", # available for assignment + [13] = "unassigned-13", # available for assignment + [14] = "unassigned-14", # available for assignment + [15] = "unassigned-15", # available for assignment + [16] = "BADVERS", # for EDNS, collision w/ TSIG + [17] = "BADKEY", # Key not recognized + [18] = "BADTIME", # Signature out of time window + [19] = "BADMODE", # Bad TKEY Mode + [20] = "BADNAME", # Duplicate key name + [21] = "BADALG", # Algorithm not supported + [22] = "BADTRUNC", # draft-ietf-dnsext-tsig-sha-05.txt + [3842] = "BADSIG", # 16 <= number collision with EDNS(16); + # this is a translation from TSIG(16) } &default = function(n: count): string { return fmt("rcode-%d", n); }; - # This deciphers EDNS Z field values. + ## This deciphers EDNS Z field values. const edns_zfield = { [0] = "NOVALUE", # regular entry [32768] = "DNS_SEC_OK", # accepts DNS Sec RRs } &default="?"; + ## Possible values of the CLASS field in resource records or QCLASS field + ## in query messages. const classes = { [1] = "C_INTERNET", [2] = "C_CSNET", @@ -81,4 +74,4 @@ export { [254] = "C_NONE", [255] = "C_ANY", } &default = function(n: count): string { return fmt("qclass-%d", n); }; -} \ No newline at end of file +} diff --git a/scripts/base/protocols/dns/main.bro b/scripts/base/protocols/dns/main.bro index 2580b003dd..8ae3806ab6 100644 --- a/scripts/base/protocols/dns/main.bro +++ b/scripts/base/protocols/dns/main.bro @@ -1,54 +1,106 @@ +##! Base DNS analysis script which tracks and logs DNS queries along with +##! their responses. + @load ./consts module DNS; export { + ## The DNS logging stream identifier. redef enum Log::ID += { LOG }; - + + ## The record type which contains the column fields of the DNS log. 
type Info: record { - ts: time &log; - uid: string &log; - id: conn_id &log; - proto: transport_proto &log; - trans_id: count &log &optional; - query: string &log &optional; - qclass: count &log &optional; - qclass_name: string &log &optional; - qtype: count &log &optional; - qtype_name: string &log &optional; - rcode: count &log &optional; - rcode_name: string &log &optional; - QR: bool &log &default=F; - AA: bool &log &default=F; - TC: bool &log &default=F; - RD: bool &log &default=F; - RA: bool &log &default=F; - Z: count &log &default=0; - TTL: interval &log &optional; - answers: set[string] &log &optional; - - ## This value indicates if this request/response pair is ready to be logged. + ## The earliest time at which a DNS protocol message over the + ## associated connection is observed. + ts: time &log; + ## A unique identifier of the connection over which DNS messages + ## are being transferred. + uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. + id: conn_id &log; + ## The transport layer protocol of the connection. + proto: transport_proto &log; + ## A 16 bit identifier assigned by the program that generated the + ## DNS query. Also used in responses to match up replies to + ## outstanding queries. + trans_id: count &log &optional; + ## The domain name that is the subject of the DNS query. + query: string &log &optional; + ## The QCLASS value specifying the class of the query. + qclass: count &log &optional; + ## A descriptive name for the class of the query. + qclass_name: string &log &optional; + ## A QTYPE value specifying the type of the query. + qtype: count &log &optional; + ## A descriptive name for the type of the query. + qtype_name: string &log &optional; + ## The response code value in DNS response messages. + rcode: count &log &optional; + ## A descriptive name for the response code value. + rcode_name: string &log &optional; + ## The Authoritative Answer bit for response messages specifies that + ## the responding name server is an authority for the domain name + ## in the question section. + AA: bool &log &default=F; + ## The Truncation bit specifies that the message was truncated. + TC: bool &log &default=F; + ## The Recursion Desired bit in a request message indicates that + ## the client wants recursive service for this query. + RD: bool &log &default=F; + ## The Recursion Available bit in a response message indicates that + ## the name server supports recursive queries. + RA: bool &log &default=F; + ## A reserved field that is currently supposed to be zero in all + ## queries and responses. + Z: count &log &default=0; + ## The set of resource descriptions in the query answer. + answers: vector of string &log &optional; + ## The caching intervals of the associated RRs described by the + ## ``answers`` field. + TTLs: vector of interval &log &optional; + ## The DNS query was rejected by the server. + rejected: bool &log &default=F; + + ## This value indicates if this request/response pair is ready to be + ## logged. ready: bool &default=F; - total_answers: count &optional; + ## The total number of resource records in a reply message's answer + ## section. + total_answers: count &default=0; + ## The total number of resource records in a reply message's answer, + ## authority, and additional sections. total_replies: count &optional; }; - + + ## A record type which tracks the status of DNS queries for a given + ## :bro:type:`connection`. 
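# A rough usage sketch (not part of the patch): the fields documented above can
# be consumed through the log_dns event declared further below, for instance to
# report rejected lookups from a site script. The printed format is illustrative.
@load base/protocols/dns

event DNS::log_dns(rec: DNS::Info)
	{
	if ( rec$rejected && rec?$query )
		print fmt("DNS query for %s was rejected (%s)", rec$query,
		          rec?$rcode_name ? rec$rcode_name : "-");
	}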
type State: record { ## Indexed by query id, returns Info record corresponding to ## query/response which haven't completed yet. pending: table[count] of Info &optional; - + ## This is the list of DNS responses that have completed based on the ## number of responses declared and the number received. The contents ## of the set are transaction IDs. finished_answers: set[count] &optional; }; - + + ## An event that can be handled to access the :bro:type:`DNS::Info` + ## record as it is sent to the logging framework. global log_dns: event(rec: Info); - + ## This is called by the specific dns_*_reply events with a "reply" which - ## may not represent the full data available from the resource record, but + ## may not represent the full data available from the resource record, but ## it's generally considered a summarization of the response(s). + ## + ## c: The connection record for which to fill in DNS reply data. + ## + ## msg: The DNS message header information for the response. + ## + ## ans: The general information of a RR response. + ## + ## reply: The specific response information according to RR type/class. global do_reply: event(c: connection, msg: dns_msg, ans: dns_answer, reply: string); } @@ -58,18 +110,18 @@ redef record connection += { }; # DPD configuration. -redef capture_filters += { +redef capture_filters += { ["dns"] = "port 53", ["mdns"] = "udp and port 5353", ["llmns"] = "udp and port 5355", - ["netbios-ns"] = "udp port 137", + ["netbios-ns"] = "udp port 137", }; -global dns_ports = { 53/udp, 53/tcp, 137/udp, 5353/udp, 5355/udp } &redef; +const dns_ports = { 53/udp, 53/tcp, 137/udp, 5353/udp, 5355/udp }; redef dpd_config += { [ANALYZER_DNS] = [$ports = dns_ports] }; -global dns_udp_ports = { 53/udp, 137/udp, 5353/udp, 5355/udp } &redef; -global dns_tcp_ports = { 53/tcp } &redef; +const dns_udp_ports = { 53/udp, 137/udp, 5353/udp, 5355/udp }; +const dns_tcp_ports = { 53/tcp }; redef dpd_config += { [ANALYZER_DNS_UDP_BINPAC] = [$ports = dns_udp_ports] }; redef dpd_config += { [ANALYZER_DNS_TCP_BINPAC] = [$ports = dns_tcp_ports] }; @@ -89,7 +141,7 @@ function new_session(c: connection, trans_id: count): Info state$finished_answers=set(); c$dns_state = state; } - + local info: Info; info$ts = network_time(); info$id = c$id; @@ -102,23 +154,29 @@ function new_session(c: connection, trans_id: count): Info function set_session(c: connection, msg: dns_msg, is_query: bool) { if ( ! c?$dns_state || msg$id !in c$dns_state$pending ) + { c$dns_state$pending[msg$id] = new_session(c, msg$id); - + # Try deleting this transaction id from the set of finished answers. + # Sometimes hosts will reuse ports and transaction ids and this should + # be considered to be a legit scenario (although bad practice). + delete c$dns_state$finished_answers[msg$id]; + } + c$dns = c$dns_state$pending[msg$id]; - c$dns$rcode = msg$rcode; - c$dns$rcode_name = base_errors[msg$rcode]; - if ( ! is_query ) { + c$dns$rcode = msg$rcode; + c$dns$rcode_name = base_errors[msg$rcode]; + if ( ! 
c$dns?$total_answers ) c$dns$total_answers = msg$num_answers; - - if ( c$dns?$total_replies && + + if ( c$dns?$total_replies && c$dns$total_replies != msg$num_answers + msg$num_addl + msg$num_auth ) { - event conn_weird("dns_changed_number_of_responses", c, - fmt("The declared number of responses changed from %d to %d", + event conn_weird("dns_changed_number_of_responses", c, + fmt("The declared number of responses changed from %d to %d", c$dns$total_replies, msg$num_answers + msg$num_addl + msg$num_auth)); } @@ -129,28 +187,35 @@ function set_session(c: connection, msg: dns_msg, is_query: bool) } } } - + +event dns_message(c: connection, is_orig: bool, msg: dns_msg, len: count) &priority=5 + { + set_session(c, msg, is_orig); + } + event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=5 { - set_session(c, msg, F); - - c$dns$AA = msg$AA; - c$dns$RA = msg$RA; - c$dns$TTL = ans$TTL; - if ( ans$answer_type == DNS_ANS ) { + c$dns$AA = msg$AA; + c$dns$RA = msg$RA; + if ( msg$id in c$dns_state$finished_answers ) event conn_weird("dns_reply_seen_after_done", c, ""); - + if ( reply != "" ) { if ( ! c$dns?$answers ) - c$dns$answers = set(); - add c$dns$answers[reply]; + c$dns$answers = vector(); + c$dns$answers[|c$dns$answers|] = reply; + + if ( ! c$dns?$TTLs ) + c$dns$TTLs = vector(); + c$dns$TTLs[|c$dns$TTLs|] = ans$TTL; } - - if ( c$dns?$answers && |c$dns$answers| == c$dns$total_answers ) + + if ( c$dns?$answers && c$dns?$total_answers && + |c$dns$answers| == c$dns$total_answers ) { add c$dns_state$finished_answers[c$dns$trans_id]; # Indicate this request/reply pair is ready to be logged. @@ -158,13 +223,12 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) } } } - + event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) &priority=-5 { if ( c$dns$ready ) { Log::write(DNS::LOG, c$dns); - add c$dns_state$finished_answers[c$dns$trans_id]; # This record is logged and no longer pending. delete c$dns_state$pending[c$dns$trans_id]; } @@ -172,42 +236,43 @@ event DNS::do_reply(c: connection, msg: dns_msg, ans: dns_answer, reply: string) event dns_request(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5 { - set_session(c, msg, T); - c$dns$RD = msg$RD; c$dns$TC = msg$TC; c$dns$qclass = qclass; c$dns$qclass_name = classes[qclass]; c$dns$qtype = qtype; c$dns$qtype_name = query_types[qtype]; - + # Decode netbios name queries - # Note: I'm ignoring the name type for now. Not sure if this should be + # Note: I'm ignoring the name type for now. Not sure if this should be # worked into the query/response in some fashion. if ( c$id$resp_p == 137/udp ) query = decode_netbios_name(query); c$dns$query = query; - + c$dns$Z = msg$Z; } - + event dns_A_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5 { event DNS::do_reply(c, msg, ans, fmt("%s", a)); } - + event dns_TXT_reply(c: connection, msg: dns_msg, ans: dns_answer, str: string) &priority=5 { event DNS::do_reply(c, msg, ans, str); } - -event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr, - astr: string) &priority=5 + +event dns_AAAA_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5 { - # TODO: What should we do with astr? 
event DNS::do_reply(c, msg, ans, fmt("%s", a)); } - + +event dns_A6_reply(c: connection, msg: dns_msg, ans: dns_answer, a: addr) &priority=5 + { + event DNS::do_reply(c, msg, ans, fmt("%s", a)); + } + event dns_NS_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5 { event DNS::do_reply(c, msg, ans, name); @@ -223,12 +288,12 @@ event dns_MX_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string, { event DNS::do_reply(c, msg, ans, name); } - + event dns_PTR_reply(c: connection, msg: dns_msg, ans: dns_answer, name: string) &priority=5 { event DNS::do_reply(c, msg, ans, name); } - + event dns_SOA_reply(c: connection, msg: dns_msg, ans: dns_answer, soa: dns_soa) &priority=5 { event DNS::do_reply(c, msg, ans, soa$mname); @@ -238,7 +303,7 @@ event dns_WKS_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5 { event DNS::do_reply(c, msg, ans, ""); } - + event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5 { event DNS::do_reply(c, msg, ans, ""); @@ -247,34 +312,32 @@ event dns_SRV_reply(c: connection, msg: dns_msg, ans: dns_answer) &priority=5 # TODO: figure out how to handle these #event dns_EDNS(c: connection, msg: dns_msg, ans: dns_answer) # { -# +# # } # #event dns_EDNS_addl(c: connection, msg: dns_msg, ans: dns_edns_additional) # { -# +# # } # #event dns_TSIG_addl(c: connection, msg: dns_msg, ans: dns_tsig_additional) # { -# +# # } - -event dns_rejected(c: connection, msg: dns_msg, - query: string, qtype: count, qclass: count) &priority=5 +event dns_rejected(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count) &priority=5 { - set_session(c, msg, F); + c$dns$rejected = T; } event connection_state_remove(c: connection) &priority=-5 { if ( ! c?$dns_state ) return; - - # If Bro is expiring state, we should go ahead and log all unlogged + + # If Bro is expiring state, we should go ahead and log all unlogged # request/response pairs now. for ( trans_id in c$dns_state$pending ) Log::write(DNS::LOG, c$dns_state$pending[trans_id]); } - + diff --git a/scripts/base/protocols/ftp/__load__.bro b/scripts/base/protocols/ftp/__load__.bro index 0a399aef36..15c61be614 100644 --- a/scripts/base/protocols/ftp/__load__.bro +++ b/scripts/base/protocols/ftp/__load__.bro @@ -1,3 +1,4 @@ @load ./utils-commands @load ./main -@load ./file-extract \ No newline at end of file +@load ./file-extract +@load ./gridftp diff --git a/scripts/base/protocols/ftp/file-extract.bro b/scripts/base/protocols/ftp/file-extract.bro index db5c8a0afa..7cee4995ba 100644 --- a/scripts/base/protocols/ftp/file-extract.bro +++ b/scripts/base/protocols/ftp/file-extract.bro @@ -1,4 +1,4 @@ -##! File extraction for FTP. +##! File extraction support for FTP. @load ./main @load base/utils/files @@ -6,7 +6,7 @@ module FTP; export { - ## Pattern of file mime types to extract from FTP entity bodies. + ## Pattern of file mime types to extract from FTP transfers. const extract_file_types = /NO_DEFAULT/ &redef; ## The on-disk prefix for files to be extracted from FTP-data transfers. @@ -14,10 +14,15 @@ export { } redef record Info += { - ## The file handle for the file to be extracted + ## On disk file where it was extracted to. extraction_file: file &log &optional; + ## Indicates if the current command/response pair should attempt to + ## extract the file if a file was transferred. extract_file: bool &default=F; + + ## Internal tracking of the total number of files extracted during this + ## session. 
num_extracted_files: count &default=0; }; @@ -33,7 +38,6 @@ event file_transferred(c: connection, prefix: string, descr: string, if ( extract_file_types in s$mime_type ) { s$extract_file = T; - add s$tags["extracted_file"]; ++s$num_extracted_files; } } diff --git a/scripts/base/protocols/ftp/gridftp.bro b/scripts/base/protocols/ftp/gridftp.bro new file mode 100644 index 0000000000..57752b1cbd --- /dev/null +++ b/scripts/base/protocols/ftp/gridftp.bro @@ -0,0 +1,121 @@ +##! A detection script for GridFTP data and control channels. +##! +##! GridFTP control channels are identified by FTP control channels +##! that successfully negotiate the GSSAPI method of an AUTH request +##! and for which the exchange involved an encoded TLS/SSL handshake, +##! indicating the GSI mechanism for GSSAPI was used. This analysis +##! is all supported internally; this script simply adds the "gridftp" +##! label to the *service* field of the control channel's +##! :bro:type:`connection` record. +##! +##! GridFTP data channels are identified by a heuristic that relies on +##! the fact that default settings for GridFTP clients typically +##! mutually authenticate the data channel with TLS/SSL and negotiate a +##! NULL bulk cipher (no encryption). Connections with those +##! attributes are then polled for two minutes with decreasing frequency +##! to check if the transfer sizes are large enough to indicate a +##! GridFTP data channel that would be undesirable to analyze further +##! (e.g. stop TCP reassembly). A side effect is that true connection +##! sizes are not logged, but with the benefit of saving CPU cycles that +##! otherwise go to analyzing the large (and likely benign) connections. + +@load ./main +@load base/protocols/conn +@load base/protocols/ssl +@load base/frameworks/notice + +module GridFTP; + +export { + ## Number of bytes transferred before guessing a connection is a + ## GridFTP data channel. + const size_threshold = 1073741824 &redef; + + ## Max number of times to check whether a connection's size exceeds the + ## :bro:see:`GridFTP::size_threshold`. + const max_poll_count = 15 &redef; + + ## Whether to skip further processing of the GridFTP data channel once + ## detected, which may help performance. + const skip_data = T &redef; + + ## Base amount of time between checking whether a GridFTP data connection + ## has transferred more than :bro:see:`GridFTP::size_threshold` bytes. + const poll_interval = 1sec &redef; + + ## The amount of time the base :bro:see:`GridFTP::poll_interval` is + ## increased by each poll interval. Can be used to make more frequent + ## checks at the start of a connection and gradually slow down. + const poll_interval_increase = 1sec &redef; + + ## Raised when a GridFTP data channel is detected. + ## + ## c: The connection pertaining to the GridFTP data channel. + global data_channel_detected: event(c: connection); + + ## The initial criteria used to determine whether to start polling + ## the connection for the :bro:see:`GridFTP::size_threshold` to have + ## been exceeded. This is called in a :bro:see:`ssl_established` event + ## handler and by default looks for both a client and server certificate + ## and for a NULL bulk cipher. One way in which this function could be + ## redefined is to make it also consider client/server certificate issuer + ## subjects. + ## + ## c: The connection which may possibly be a GridFTP data channel. + ## + ## Returns: true if the connection should be further polled for an + ## exceeded :bro:see:`GridFTP::size_threshold`, else false.
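# A hedged sketch of the redef possibility described above (not part of the
# patch): a site script could replace the criteria function declared just below
# to additionally require a particular certificate issuer. The issuer string is
# hypothetical, and the SSL::Info field names are assumed from this release.
@load base/protocols/ftp

redef GridFTP::data_channel_initial_criteria = function(c: connection): bool
	{
	return ( c?$ssl && c$ssl?$client_issuer_subject && c$ssl?$issuer_subject &&
	         /Example Grid CA/ in c$ssl$issuer_subject &&
	         c$ssl?$cipher && /WITH_NULL/ in c$ssl$cipher );
	};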
+ const data_channel_initial_criteria: function(c: connection): bool &redef; +} + +redef record FTP::Info += { + last_auth_requested: string &optional; +}; + +event ftp_request(c: connection, command: string, arg: string) &priority=4 + { + if ( command == "AUTH" && c?$ftp ) + c$ftp$last_auth_requested = arg; + } + +function size_callback(c: connection, cnt: count): interval + { + if ( c$orig$size > size_threshold || c$resp$size > size_threshold ) + { + add c$service["gridftp-data"]; + event GridFTP::data_channel_detected(c); + + if ( skip_data ) + skip_further_processing(c$id); + + return -1sec; + } + + if ( cnt >= max_poll_count ) + return -1sec; + + return poll_interval + poll_interval_increase * cnt; + } + +event ssl_established(c: connection) &priority=5 + { + # If an FTP client requests AUTH GSSAPI and later an SSL handshake + # finishes, it's likely a GridFTP control channel, so add service label. + if ( c?$ftp && c$ftp?$last_auth_requested && + /GSSAPI/ in c$ftp$last_auth_requested ) + add c$service["gridftp"]; + } + +function data_channel_initial_criteria(c: connection): bool + { + return ( c?$ssl && c$ssl?$client_subject && c$ssl?$subject && + c$ssl?$cipher && /WITH_NULL/ in c$ssl$cipher ); + } + +event ssl_established(c: connection) &priority=-3 + { + # By default GridFTP data channels do mutual authentication and + # negotiate a cipher suite with a NULL bulk cipher. + if ( data_channel_initial_criteria(c) ) + ConnPolling::watch(c, size_callback, 0, 0secs); + } diff --git a/scripts/base/protocols/ftp/main.bro b/scripts/base/protocols/ftp/main.bro index e8eb96d3ee..3d7b1fe61a 100644 --- a/scripts/base/protocols/ftp/main.bro +++ b/scripts/base/protocols/ftp/main.bro @@ -1,51 +1,76 @@ ##! The logging this script does is primarily focused on logging FTP commands ##! along with metadata. For example, if files are transferred, the argument ##! will take on the full path that the client is at along with the requested -##! file name. -##! -##! TODO: -##! -##! * Handle encrypted sessions correctly (get an example?) +##! file name. @load ./utils-commands @load base/utils/paths @load base/utils/numbers +@load base/utils/addrs module FTP; export { + ## The FTP protocol logging stream identifier. redef enum Log::ID += { LOG }; - + + ## List of commands that should have their command/response pairs logged. + const logged_commands = { + "APPE", "DELE", "RETR", "STOR", "STOU", "ACCT" + } &redef; + ## This setting changes if passwords used in FTP sessions are captured or not. const default_capture_password = F &redef; + ## User IDs that can be considered "anonymous". + const guest_ids = { "anonymous", "ftp", "ftpuser", "guest" } &redef; + type Info: record { + ## Time when the command was sent. ts: time &log; + ## Unique ID for the connection. uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. id: conn_id &log; + ## User name for the current FTP session. user: string &log &default=""; + ## Password for the current FTP session if captured. password: string &log &optional; + ## Command given by the client. command: string &log &optional; + ## Argument for the command if one is given. arg: string &log &optional; - + + ## Libmagic "sniffed" file type if the command indicates a file transfer. mime_type: string &log &optional; + ## Libmagic "sniffed" file description if the command indicates a file transfer. mime_desc: string &log &optional; + ## Size of the file if the command indicates a file transfer. 
file_size: count &log &optional; + + ## Reply code from the server in response to the command. reply_code: count &log &optional; + ## Reply message from the server in response to the command. reply_msg: string &log &optional; + ## Arbitrary tags that may indicate a particular attribute of this command. tags: set[string] &log &default=set(); - ## By setting the CWD to '/.', we can indicate that unless something + ## Current working directory that this session is in. By making + ## the default value '/.', we can indicate that unless something ## more concrete is discovered that the existing but unknown ## directory is ok to use. cwd: string &default="/."; + + ## Command that is currently waiting for a response. cmdarg: CmdArg &optional; + ## Queue for commands that have been sent but not yet responded to + ## are tracked here. pending_commands: PendingCmds; - ## This indicates if the session is in active or passive mode. + ## Indicates if the session is in active or passive mode. passive: bool &default=F; - ## This determines if the password will be captured for this request. + ## Determines if the password will be captured for this request. capture_password: bool &default=default_capture_password; }; @@ -56,22 +81,12 @@ export { y: count; z: count; }; - - # TODO: add this back in some form. raise a notice again? - #const excessive_filename_len = 250 &redef; - #const excessive_filename_trunc_len = 32 &redef; - - ## These are user IDs that can be considered "anonymous". - const guest_ids = { "anonymous", "ftp", "guest" } &redef; - ## The list of commands that should have their command/response pairs logged. - const logged_commands = { - "APPE", "DELE", "RETR", "STOR", "STOU", "ACCT" - } &redef; - - ## This function splits FTP reply codes into the three constituent + ## Parse FTP reply codes into the three constituent single digit values. global parse_ftp_reply_code: function(code: count): ReplyCode; - + + ## Event that can be handled to access the :bro:type:`FTP::Info` + ## record as it is sent on to the logging framework. global log_ftp: event(rec: Info); } @@ -81,11 +96,11 @@ redef record connection += { }; # Configure DPD -const ports = { 21/tcp } &redef; -redef capture_filters += { ["ftp"] = "port 21" }; +const ports = { 21/tcp, 2811/tcp } &redef; # 2811/tcp is GridFTP. +redef capture_filters += { ["ftp"] = "port 21 and port 2811" }; redef dpd_config += { [ANALYZER_FTP] = [$ports = ports] }; -redef likely_server_ports += { 21/tcp }; +redef likely_server_ports += { 21/tcp, 2811/tcp }; # Establish the variable for tracking expected connections. global ftp_data_expected: table[addr, port] of Info &create_expire=5mins; @@ -148,12 +163,16 @@ function ftp_message(s: Info) # or it's a deliberately logged command. if ( |s$tags| > 0 || (s?$cmdarg && s$cmdarg$cmd in logged_commands) ) { - if ( s?$password && to_lower(s$user) !in guest_ids ) + if ( s?$password && + ! 
s$capture_password && + to_lower(s$user) !in guest_ids ) + { s$password = ""; + } local arg = s$cmdarg$arg; if ( s$cmdarg$cmd in file_cmds ) - arg = fmt("ftp://%s%s", s$id$resp_h, build_path_compressed(s$cwd, arg)); + arg = fmt("ftp://%s%s", addr_to_uri(s$id$resp_h), build_path_compressed(s$cwd, arg)); s$ts=s$cmdarg$ts; s$command=s$cmdarg$cmd; @@ -258,7 +277,7 @@ event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) &prior { c$ftp$passive=T; - if ( code == 229 && data$h == 0.0.0.0 ) + if ( code == 229 && data$h == [::] ) data$h = id$resp_h; ftp_data_expected[data$h, data$p] = c$ftp; diff --git a/scripts/base/protocols/ftp/utils-commands.bro b/scripts/base/protocols/ftp/utils-commands.bro index 40dacf9b66..ddfad3e08d 100644 --- a/scripts/base/protocols/ftp/utils-commands.bro +++ b/scripts/base/protocols/ftp/utils-commands.bro @@ -2,14 +2,22 @@ module FTP; export { type CmdArg: record { + ## Time when the command was sent. ts: time; + ## Command. cmd: string &default=""; + ## Argument for the command if one was given. arg: string &default=""; + ## Counter to track how many commands have been executed. seq: count &default=0; }; - + + ## Structure for tracking pending commands in the event that the client + ## sends a large number of commands before the server has a chance to + ## reply. type PendingCmds: table[count] of CmdArg; - + + ## Possible response codes for a wide variety of FTP commands. const cmd_reply_code: set[string, count] = { # According to RFC 959 ["", [120, 220, 421]], diff --git a/scripts/base/protocols/http/file-extract.bro b/scripts/base/protocols/http/file-extract.bro index d36d95e475..466d18c3b4 100644 --- a/scripts/base/protocols/http/file-extract.bro +++ b/scripts/base/protocols/http/file-extract.bro @@ -8,40 +8,40 @@ module HTTP; export { - ## Pattern of file mime types to extract from HTTP entity bodies. + ## Pattern of file mime types to extract from HTTP response entity bodies. const extract_file_types = /NO_DEFAULT/ &redef; ## The on-disk prefix for files to be extracted from HTTP entity bodies. const extraction_prefix = "http-item" &redef; redef record Info += { - ## This field can be set per-connection to determine if the entity body - ## will be extracted. It must be set to T on or before the first - ## entity_body_data event. - extracting_file: bool &default=F; - - ## This is the holder for the file handle as the file is being written - ## to disk. + ## On-disk file where the response body was extracted to. extraction_file: file &log &optional; - }; - - redef record State += { - entity_bodies: count &default=0; + + ## Indicates if the response body is to be extracted or not. Must be + ## set before or by the first :bro:id:`http_entity_data` event for the + ## content. + extract_file: bool &default=F; }; } -event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=5 +event http_entity_data(c: connection, is_orig: bool, length: count, data: string) &priority=-5 { # Client body extraction is not currently supported in this script. - if ( is_orig || ! c$http$first_chunk ) return; + if ( is_orig ) + return; if ( c$http$first_chunk ) { if ( c$http?$mime_type && extract_file_types in c$http$mime_type ) { - c$http$extracting_file = T; - local suffix = fmt("%s_%d.dat", is_orig ? "orig" : "resp", ++c$http_state$entity_bodies); + c$http$extract_file = T; + } + + if ( c$http$extract_file ) + { + local suffix = fmt("%s_%d.dat", is_orig ? 
"orig" : "resp", c$http_state$current_response); local fname = generate_extraction_filename(extraction_prefix, c, suffix); c$http$extraction_file = open(fname); @@ -49,12 +49,12 @@ event http_entity_data(c: connection, is_orig: bool, length: count, data: string } } - if ( c$http$extracting_file ) + if ( c$http?$extraction_file ) print c$http$extraction_file, data; } event http_end_entity(c: connection, is_orig: bool) { - if ( c$http$extracting_file ) + if ( c$http?$extraction_file ) close(c$http$extraction_file); } diff --git a/scripts/base/protocols/http/file-hash.bro b/scripts/base/protocols/http/file-hash.bro index 094a905eeb..7e8e5cceaf 100644 --- a/scripts/base/protocols/http/file-hash.bro +++ b/scripts/base/protocols/http/file-hash.bro @@ -11,7 +11,8 @@ export { }; redef record Info += { - ## The MD5 sum for a file transferred over HTTP will be stored here. + ## MD5 sum for a file transferred over HTTP calculated from the + ## response body. md5: string &log &optional; ## This value can be set per-transfer to determine per request @@ -19,8 +20,8 @@ export { ## set to T at the time of or before the first chunk of body data. calc_md5: bool &default=F; - ## This boolean value indicates if an MD5 sum is currently being - ## calculated for the current file transfer. + ## Indicates if an MD5 sum is being calculated for the current + ## request/response pair. calculating_md5: bool &default=F; }; diff --git a/scripts/base/protocols/http/file-ident.bro b/scripts/base/protocols/http/file-ident.bro index c2d858852b..b493f02bf0 100644 --- a/scripts/base/protocols/http/file-ident.bro +++ b/scripts/base/protocols/http/file-ident.bro @@ -1,5 +1,4 @@ -##! This script is involved in the identification of file types in HTTP -##! response bodies. +##! Identification of file types in HTTP response bodies with file content sniffing. @load base/frameworks/signatures @load base/frameworks/notice @@ -7,7 +6,8 @@ @load ./utils # Add the magic number signatures to the core signature set. -redef signature_files += "base/protocols/http/file-ident.sig"; +@load-sigs ./file-ident.sig + # Ignore the signatures used to match files redef Signatures::ignored_ids += /^matchfile-/; @@ -15,30 +15,32 @@ module HTTP; export { redef enum Notice::Type += { - # This notice is thrown when the file extension doesn't - # seem to match the file contents. + ## Indicates when the file extension doesn't seem to match the file contents. Incorrect_File_Type, }; redef record Info += { - ## This will record the mime_type identified. + ## Mime type of response body identified by content sniffing. mime_type: string &log &optional; - ## This indicates that no data of the current file transfer has been + ## Indicates that no data of the current file transfer has been ## seen yet. After the first :bro:id:`http_entity_data` event, it - ## will be set to T. + ## will be set to F. first_chunk: bool &default=T; }; - - redef enum Tags += { - IDENTIFIED_FILE - }; - # Create regexes that *should* in be in the urls for specifics mime types. - # Notices are thrown if the pattern doesn't match the url for the file type. + ## Mapping between mime types and regular expressions for URLs + ## The :bro:enum:`HTTP::Incorrect_File_Type` notice is generated if the pattern + ## doesn't match the mime type that was discovered. 
const mime_types_extensions: table[string] of pattern = { ["application/x-dosexec"] = /\.([eE][xX][eE]|[dD][lL][lL])/, } &redef; + + ## A pattern for filtering out :bro:enum:`HTTP::Incorrect_File_Type` urls + ## that are not noteworthy before a notice is created. Each + ## pattern added should match the complete URL (the matched URLs include + ## "http://" at the beginning). + const ignored_incorrect_file_type_urls = /^$/ &redef; } event signature_match(state: signature_state, msg: string, data: string) &priority=5 @@ -59,6 +61,10 @@ event signature_match(state: signature_state, msg: string, data: string) &priori c$http?$uri && mime_types_extensions[msg] !in c$http$uri ) { local url = build_url_http(c$http); + + if ( url == ignored_incorrect_file_type_urls ) + return; + local message = fmt("%s %s %s", msg, c$http$method, url); NOTICE([$note=Incorrect_File_Type, $msg=message, diff --git a/scripts/base/protocols/http/main.bro b/scripts/base/protocols/http/main.bro index 59107bb4c7..21b4fb6113 100644 --- a/scripts/base/protocols/http/main.bro +++ b/scripts/base/protocols/http/main.bro @@ -1,3 +1,7 @@ +##! Implements base functionality for HTTP analysis. The logging model is +##! to log request/response pairs and all relevant metadata together in +##! a single record. + @load base/utils/numbers @load base/utils/files @@ -8,6 +12,7 @@ export { ## Indicate a type of attack or compromise in the record to be logged. type Tags: enum { + ## Placeholder. EMPTY }; @@ -15,64 +20,71 @@ export { const default_capture_password = F &redef; type Info: record { - ts: time &log; - uid: string &log; - id: conn_id &log; - ## This represents the pipelined depth into the connection of this + ## Timestamp for when the request happened. + ts: time &log; + ## Unique ID for the connection. + uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. + id: conn_id &log; + ## Represents the pipelined depth into the connection of this ## request/response transaction. - trans_depth: count &log; - ## The verb used in the HTTP request (GET, POST, HEAD, etc.). - method: string &log &optional; - ## The value of the HOST header. - host: string &log &optional; - ## The URI used in the request. - uri: string &log &optional; - ## The value of the "referer" header. The comment is deliberately + trans_depth: count &log; + ## Verb used in the HTTP request (GET, POST, HEAD, etc.). + method: string &log &optional; + ## Value of the HOST header. + host: string &log &optional; + ## URI used in the request. + uri: string &log &optional; + ## Value of the "referer" header. The comment is deliberately ## misspelled like the standard declares, but the name used here is ## "referrer" spelled correctly. - referrer: string &log &optional; - ## The value of the User-Agent header from the client. - user_agent: string &log &optional; - ## The actual uncompressed content size of the data transferred from + referrer: string &log &optional; + ## Value of the User-Agent header from the client. + user_agent: string &log &optional; + ## Actual uncompressed content size of the data transferred from ## the client. - request_body_len: count &log &default=0; - ## The actual uncompressed content size of the data transferred from + request_body_len: count &log &default=0; + ## Actual uncompressed content size of the data transferred from ## the server. response_body_len: count &log &default=0; - ## The status code returned by the server. + ## Status code returned by the server. 
status_code: count &log &optional; - ## The status message returned by the server. + ## Status message returned by the server. status_msg: string &log &optional; - ## The last 1xx informational reply code returned by the server. + ## Last seen 1xx informational reply code returned by the server. info_code: count &log &optional; - ## The last 1xx informational reply message returned by the server. + ## Last seen 1xx informational reply message returned by the server. info_msg: string &log &optional; - ## The filename given in the Content-Disposition header - ## sent by the server. + ## Filename given in the Content-Disposition header sent by the server. filename: string &log &optional; - ## This is a set of indicators of various attributes discovered and + ## A set of indicators of various attributes discovered and ## related to a particular request/response pair. tags: set[Tags] &log; - ## The username if basic-auth is performed for the request. + ## Username if basic-auth is performed for the request. username: string &log &optional; - ## The password if basic-auth is performed for the request. + ## Password if basic-auth is performed for the request. password: string &log &optional; - ## This determines if the password will be captured for this request. + ## Determines if the password will be captured for this request. capture_password: bool &default=default_capture_password; ## All of the headers that may indicate if the request was proxied. proxied: set[string] &log &optional; }; + ## Structure to maintain state for an HTTP connection with multiple + ## requests and responses. type State: record { + ## Pending requests. pending: table[count] of Info; - current_response: count &default=0; + ## Current request in the pending queue. current_request: count &default=0; + ## Current response in the pending queue. + current_response: count &default=0; }; - ## The list of HTTP headers typically used to indicate a proxied request. + ## A list of HTTP headers typically used to indicate proxied requests. const proxy_headers: set[string] = { "FORWARDED", "X-FORWARDED-FOR", @@ -83,6 +95,8 @@ export { "PROXY-CONNECTION", } &redef; + ## Event that can be handled to access the HTTP record as it is sent on + ## to the logging framework. global log_http: event(rec: Info); } @@ -100,7 +114,7 @@ event bro_init() &priority=5 # DPD configuration. const ports = { - 80/tcp, 81/tcp, 631/tcp, 1080/tcp, 3138/tcp, + 80/tcp, 81/tcp, 631/tcp, 1080/tcp, 3128/tcp, 8000/tcp, 8080/tcp, 8888/tcp, }; redef dpd_config += { diff --git a/scripts/base/protocols/http/utils.bro b/scripts/base/protocols/http/utils.bro index 6e2583bc75..a74a2fe696 100644 --- a/scripts/base/protocols/http/utils.bro +++ b/scripts/base/protocols/http/utils.bro @@ -1,12 +1,36 @@ ##! Utilities specific for HTTP processing. @load ./main +@load base/utils/addrs module HTTP; export { + ## Given a string containing a series of key-value pairs separated by "=", + ## this function can be used to parse out all of the key names. + ## + ## data: The raw data, such as a URL or cookie value. + ## + ## kv_splitter: A regular expression representing the separator between + ## key-value pairs. + ## + ## Returns: A vector of strings containing the keys. global extract_keys: function(data: string, kv_splitter: pattern): string_vec; + + ## Creates a URL from an :bro:type:`HTTP::Info` record. This should handle + ## edge cases such as proxied requests appropriately. + ## + ## rec: An :bro:type:`HTTP::Info` record. + ## + ## Returns: A URL, not prefixed by "http://". 
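# A quick usage sketch of extract_keys() as documented above (input string and
# printed output are illustrative only):
@load base/protocols/http

event bro_init()
	{
	# Splitting a hypothetical cookie-style string on "; " should yield just
	# the key names, i.e. something like [lang, theme].
	print HTTP::extract_keys("lang=en; theme=dark", /; /);
	}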
global build_url: function(rec: Info): string; + + ## Creates a URL from an :bro:type:`HTTP::Info` record. This should handle + ## edge cases such as proxied requests appropriately. + ## + ## rec: An :bro:type:`HTTP::Info` record. + ## + ## Returns: A URL prefixed with "http://". global build_url_http: function(rec: Info): string; } @@ -28,7 +52,7 @@ function extract_keys(data: string, kv_splitter: pattern): string_vec function build_url(rec: Info): string { local uri = rec?$uri ? rec$uri : "/"; - local host = rec?$host ? rec$host : fmt("%s", rec$id$resp_h); + local host = rec?$host ? rec$host : addr_to_uri(rec$id$resp_h); if ( rec$id$resp_p != 80/tcp ) host = fmt("%s:%s", host, rec$id$resp_p); return fmt("%s%s", host, uri); diff --git a/scripts/base/protocols/irc/dcc-send.bro b/scripts/base/protocols/irc/dcc-send.bro index b2a48a472a..d07a0edf5a 100644 --- a/scripts/base/protocols/irc/dcc-send.bro +++ b/scripts/base/protocols/irc/dcc-send.bro @@ -5,8 +5,9 @@ ##! but that connection will actually be between B and C which could be ##! analyzed on a different worker. ##! -##! Example line from IRC server indicating that the DCC SEND is about to start: -##! PRIVMSG my_nick :^ADCC SEND whateverfile.zip 3640061780 1026 41709^A + +# Example line from IRC server indicating that the DCC SEND is about to start: +# PRIVMSG my_nick :^ADCC SEND whateverfile.zip 3640061780 1026 41709^A @load ./main @load base/utils/files @@ -14,24 +15,25 @@ module IRC; export { - redef enum Tag += { EXTRACTED_FILE }; - ## Pattern of file mime types to extract from IRC DCC file transfers. const extract_file_types = /NO_DEFAULT/ &redef; - ## The on-disk prefix for files to be extracted from IRC DCC file transfers. + ## On-disk prefix for files to be extracted from IRC DCC file transfers. const extraction_prefix = "irc-dcc-item" &redef; redef record Info += { - dcc_file_name: string &log &optional; - dcc_file_size: count &log &optional; - dcc_mime_type: string &log &optional; + ## DCC filename requested. + dcc_file_name: string &log &optional; + ## Size of the DCC transfer as indicated by the sender. + dcc_file_size: count &log &optional; + ## Sniffed mime type of the file. + dcc_mime_type: string &log &optional; ## The file handle for the file to be extracted - extraction_file: file &log &optional; + extraction_file: file &log &optional; - ## A boolean to indicate if the current file transfer should be extraced. - extract_file: bool &default=F; + ## A boolean to indicate if the current file transfer should be extracted. + extract_file: bool &default=F; ## The count of the number of file that have been extracted during the session. 
num_extracted_files: count &default=0; @@ -54,8 +56,10 @@ event file_transferred(c: connection, prefix: string, descr: string, if ( extract_file_types == irc$dcc_mime_type ) { irc$extract_file = T; - add irc$tags[EXTRACTED_FILE]; + } + if ( irc$extract_file ) + { local suffix = fmt("%d.dat", ++irc$num_extracted_files); local fname = generate_extraction_filename(extraction_prefix, c, suffix); irc$extraction_file = open(fname); @@ -76,7 +80,7 @@ event file_transferred(c: connection, prefix: string, descr: string, Log::write(IRC::LOG, irc); irc$command = tmp; - if ( irc$extract_file && irc?$extraction_file ) + if ( irc?$extraction_file ) set_contents_file(id, CONTENTS_RESP, irc$extraction_file); # Delete these values in case another DCC transfer @@ -99,7 +103,7 @@ event irc_dcc_message(c: connection, is_orig: bool, return; c$irc$dcc_file_name = argument; c$irc$dcc_file_size = size; - local p = to_port(dest_port, tcp); + local p = count_to_port(dest_port, tcp); expect_connection(to_addr("0.0.0.0"), address, p, ANALYZER_FILE, 5 min); dcc_expected_transfers[address, p] = c$irc; } diff --git a/scripts/base/protocols/irc/main.bro b/scripts/base/protocols/irc/main.bro index 731a943819..1cf542b8ea 100644 --- a/scripts/base/protocols/irc/main.bro +++ b/scripts/base/protocols/irc/main.bro @@ -1,36 +1,40 @@ -##! This is the script that implements the core IRC analysis support. It only -##! logs a very limited subset of the IRC protocol by default. The points -##! that it logs at are NICK commands, USER commands, and JOIN commands. It -##! log various bits of meta data as indicated in the :bro:type:`Info` record -##! along with the command at the command arguments. +##! Implements the core IRC analysis support. The logging model is to log +##! IRC commands along with the associated response and some additional +##! metadata about the connection if it's available. module IRC; export { + redef enum Log::ID += { LOG }; - type Tag: enum { - EMPTY - }; - type Info: record { + ## Timestamp when the command was seen. ts: time &log; + ## Unique ID for the connection. uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. id: conn_id &log; + ## Nick name given for the connection. nick: string &log &optional; + ## User name given for the connection. user: string &log &optional; - channels: set[string] &log &optional; - + + ## Command given by the client. command: string &log &optional; + ## Value for the command given by the client. value: string &log &optional; + ## Any additional data for the command. addl: string &log &optional; - tags: set[Tag] &log; }; + ## Event that can be handled to access the IRC record as it is sent on + ## to the logging framework. global irc_log: event(rec: Info); } redef record connection += { + ## IRC session information. irc: Info &optional; }; @@ -41,7 +45,7 @@ redef capture_filters += { ["irc-6668"] = "port 6668" }; redef capture_filters += { ["irc-6669"] = "port 6669" }; # DPD configuration. 
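# Illustrative only (the mime pattern and prefix are examples, not shipped
# defaults): a site policy could opt in to DCC file extraction through the
# dcc-send options shown earlier.
@load base/protocols/irc

redef IRC::extract_file_types = /application\/x-dosexec/;
redef IRC::extraction_prefix = "irc-dcc-extracted";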
-global irc_ports = { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp } &redef; +const irc_ports = { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp }; redef dpd_config += { [ANALYZER_IRC] = [$ports = irc_ports] }; redef likely_server_ports += { 6666/tcp, 6667/tcp, 6668/tcp, 6669/tcp }; diff --git a/scripts/base/protocols/smtp/main.bro b/scripts/base/protocols/smtp/main.bro index 513b85e342..03b3d36a24 100644 --- a/scripts/base/protocols/smtp/main.bro +++ b/scripts/base/protocols/smtp/main.bro @@ -8,33 +8,51 @@ export { redef enum Log::ID += { LOG }; type Info: record { + ## Time when the message was first seen. ts: time &log; + ## Unique ID for the connection. uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. id: conn_id &log; - ## This is a number that indicates the number of messages deep into - ## this connection where this particular message was transferred. + ## A count to represent the depth of this message transaction in a single + ## connection where multiple messages were transferred. trans_depth: count &log; + ## Contents of the HELO/EHLO command. helo: string &log &optional; + ## Contents of the MAIL FROM command. mailfrom: string &log &optional; + ## Contents of the RCPT TO command(s). rcptto: set[string] &log &optional; + ## Contents of the Date header. date: string &log &optional; + ## Contents of the From header. from: string &log &optional; + ## Contents of the To header. to: set[string] &log &optional; + ## Contents of the Reply-To header. reply_to: string &log &optional; + ## Contents of the Message-ID header. msg_id: string &log &optional; + ## Contents of the In-Reply-To header. in_reply_to: string &log &optional; + ## Contents of the Subject header. subject: string &log &optional; + ## Contents of the X-Originating-IP header. x_originating_ip: addr &log &optional; + ## Contents of the first Received header. first_received: string &log &optional; + ## Contents of the second Received header. second_received: string &log &optional; - ## The last message the server sent to the client. + ## The last message that the server sent to the client. last_reply: string &log &optional; + ## The message transmission path, as extracted from the headers. path: vector of addr &log &optional; + ## Value of the User-Agent header from the client. user_agent: string &log &optional; - ## Indicate if the "Received: from" headers should still be processed. + ## Indicates if the "Received: from" headers should still be processed. process_received_from: bool &default=T; - ## Indicates if client activity has been seen, but not yet logged + ## Indicates if client activity has been seen, but not yet logged.
has_client_activity: bool &default=F; }; diff --git a/scripts/base/protocols/socks/__load__.bro b/scripts/base/protocols/socks/__load__.bro new file mode 100644 index 0000000000..0098b81a7a --- /dev/null +++ b/scripts/base/protocols/socks/__load__.bro @@ -0,0 +1,2 @@ +@load ./consts +@load ./main \ No newline at end of file diff --git a/scripts/base/protocols/socks/consts.bro b/scripts/base/protocols/socks/consts.bro new file mode 100644 index 0000000000..7b7cd58df5 --- /dev/null +++ b/scripts/base/protocols/socks/consts.bro @@ -0,0 +1,40 @@ +module SOCKS; + +export { + type RequestType: enum { + CONNECTION = 1, + PORT = 2, + UDP_ASSOCIATE = 3, + }; + + const v5_authentication_methods: table[count] of string = { + [0] = "No Authentication Required", + [1] = "GSSAPI", + [2] = "Username/Password", + [3] = "Challenge-Handshake Authentication Protocol", + [5] = "Challenge-Response Authentication Method", + [6] = "Secure Sockets Layer", + [7] = "NDS Authentication", + [8] = "Multi-Authentication Framework", + [255] = "No Acceptable Methods", + } &default=function(i: count):string { return fmt("unknown-%d", i); }; + + const v4_status: table[count] of string = { + [0x5a] = "succeeded", + [0x5b] = "general SOCKS server failure", + [0x5c] = "request failed because client is not running identd", + [0x5d] = "request failed because client's identd could not confirm the user ID string in the request", + } &default=function(i: count):string { return fmt("unknown-%d", i); }; + + const v5_status: table[count] of string = { + [0] = "succeeded", + [1] = "general SOCKS server failure", + [2] = "connection not allowed by ruleset", + [3] = "Network unreachable", + [4] = "Host unreachable", + [5] = "Connection refused", + [6] = "TTL expired", + [7] = "Command not supported", + [8] = "Address type not supported", + } &default=function(i: count):string { return fmt("unknown-%d", i); }; +} diff --git a/scripts/base/protocols/socks/main.bro b/scripts/base/protocols/socks/main.bro new file mode 100644 index 0000000000..79ae4baa19 --- /dev/null +++ b/scripts/base/protocols/socks/main.bro @@ -0,0 +1,92 @@ +@load base/frameworks/tunnels +@load ./consts + +module SOCKS; + +export { + redef enum Log::ID += { LOG }; + + type Info: record { + ## Time when the proxy connection was first detected. + ts: time &log; + ## Unique ID for the tunnel, which may correspond to a connection uid or be non-existent. + uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. + id: conn_id &log; + ## Protocol version of SOCKS. + version: count &log; + ## Username for the proxy if extracted from the network. + user: string &log &optional; + ## Server status for the attempt at using the proxy. + status: string &log &optional; + ## Client requested SOCKS address. Could be an address, a name or both. + request: SOCKS::Address &log &optional; + ## Client requested port. + request_p: port &log &optional; + ## Server bound address. Could be an address, a name or both. + bound: SOCKS::Address &log &optional; + ## Server bound port. + bound_p: port &log &optional; + }; + + ## Event that can be handled to access the SOCKS + ## record as it is sent on to the logging framework.
+ global log_socks: event(rec: Info); +} + +event bro_init() &priority=5 + { + Log::create_stream(SOCKS::LOG, [$columns=Info, $ev=log_socks]); + } + +redef record connection += { + socks: SOCKS::Info &optional; +}; + +# Configure DPD +redef capture_filters += { ["socks"] = "tcp port 1080" }; +redef dpd_config += { [ANALYZER_SOCKS] = [$ports = set(1080/tcp)] }; +redef likely_server_ports += { 1080/tcp }; + +function set_session(c: connection, version: count) + { + if ( ! c?$socks ) + c$socks = [$ts=network_time(), $id=c$id, $uid=c$uid, $version=version]; + } + +event socks_request(c: connection, version: count, request_type: count, + sa: SOCKS::Address, p: port, user: string) &priority=5 + { + set_session(c, version); + + c$socks$request = sa; + c$socks$request_p = p; + + # Copy this conn_id and set the orig_p to zero because in the case of SOCKS proxies there will + # be potentially many source ports since a new proxy connection is established for each + # proxied connection. We treat this as a singular "tunnel". + local cid = copy(c$id); + cid$orig_p = 0/tcp; + Tunnel::register([$cid=cid, $tunnel_type=Tunnel::SOCKS, $payload_proxy=T]); + } + +event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=5 + { + set_session(c, version); + + if ( version == 5 ) + c$socks$status = v5_status[reply]; + else if ( version == 4 ) + c$socks$status = v4_status[reply]; + + c$socks$bound = sa; + c$socks$bound_p = p; + } + +event socks_reply(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port) &priority=-5 + { + # This will handle the case where the analyzer failed in some way and was removed. We probably + # don't want to log these connections. + if ( "SOCKS" in c$service ) + Log::write(SOCKS::LOG, c$socks); + } diff --git a/scripts/base/protocols/ssh/main.bro b/scripts/base/protocols/ssh/main.bro index 3a60244184..cd20f4e913 100644 --- a/scripts/base/protocols/ssh/main.bro +++ b/scripts/base/protocols/ssh/main.bro @@ -14,31 +14,35 @@ module SSH; export { + ## The SSH protocol logging stream identifier. redef enum Log::ID += { LOG }; redef enum Notice::Type += { - ## This indicates that a heuristically detected "successful" SSH + ## Indicates that a heuristically detected "successful" SSH ## authentication occurred. Login }; type Info: record { + ## Time when the SSH connection began. ts: time &log; + ## Unique ID for the connection. uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. id: conn_id &log; ## Indicates if the login was heuristically guessed to be "success" ## or "failure". status: string &log &optional; ## Direction of the connection. If the client was a local host - ## logging into an external host, this would be OUTBOUD. INBOUND + ## logging into an external host, this would be OUTBOUND. INBOUND ## would be set for the opposite situation. # TODO: handle local-local and remote-remote better. direction: Direction &log &optional; - ## The software string given by the client. + ## Software string from the client. client: string &log &optional; - ## The software string given by the server. + ## Software string from the server. server: string &log &optional; - ## The amount of data returned from the server. This is currently + ## Amount of data returned from the server. This is currently ## the only measure of the success heuristic and it is logged to ## assist analysts looking at the logs to make their own determination ## about the success on a case-by-case basis. 
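As a usage sketch only (it is not part of the patch itself), the SSH::Info records documented above are handed to the SSH::log_ssh logging event, which a local site script can handle; the "success" status string and the INBOUND direction follow the field comments in the record, while the SiteSSH module name and the successful_inbound table are illustrative assumptions.

@load base/protocols/ssh

module SiteSSH;

# Count heuristically "successful" inbound SSH logins per server address.
global successful_inbound: table[addr] of count;

event SSH::log_ssh(rec: SSH::Info)
	{
	if ( rec?$status && rec$status == "success" &&
	     rec?$direction && rec$direction == INBOUND )
		{
		if ( rec$id$resp_h !in successful_inbound )
			successful_inbound[rec$id$resp_h] = 0;
		++successful_inbound[rec$id$resp_h];
		}
	}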
@@ -48,8 +52,8 @@ export { done: bool &default=F; }; - ## The size in bytes at which the SSH connection is presumed to be - ## successful. + ## The size in bytes of data sent by the server at which the SSH + ## connection is presumed to be successful. const authentication_data_size = 5500 &redef; ## If true, we tell the event engine to not look at further data @@ -58,14 +62,16 @@ export { ## kinds of analyses (e.g., tracking connection size). const skip_processing_after_detection = F &redef; - ## This event is generated when the heuristic thinks that a login + ## Event that is generated when the heuristic thinks that a login ## was successful. global heuristic_successful_login: event(c: connection); - ## This event is generated when the heuristic thinks that a login + ## Event that is generated when the heuristic thinks that a login ## failed. global heuristic_failed_login: event(c: connection); + ## Event that can be handled to access the :bro:type:`SSH::Info` + ## record as it is sent on to the logging framework. global log_ssh: event(rec: Info); } diff --git a/scripts/base/protocols/ssl/consts.bro b/scripts/base/protocols/ssl/consts.bro index 2026f9bfa2..42989a4cb9 100644 --- a/scripts/base/protocols/ssl/consts.bro +++ b/scripts/base/protocols/ssl/consts.bro @@ -1,18 +1,65 @@ module SSL; export { - const SSLv2 = 0x0002; const SSLv3 = 0x0300; const TLSv10 = 0x0301; const TLSv11 = 0x0302; + const TLSv12 = 0x0303; + ## Mapping between the constants and string values for SSL/TLS versions. const version_strings: table[count] of string = { [SSLv2] = "SSLv2", [SSLv3] = "SSLv3", [TLSv10] = "TLSv10", [TLSv11] = "TLSv11", - } &default="UNKNOWN"; - + [TLSv12] = "TLSv12", + } &default=function(i: count):string { return fmt("unknown-%d", i); }; + + ## Mapping between numeric codes and human readable strings for alert + ## levels. + const alert_levels: table[count] of string = { + [1] = "warning", + [2] = "fatal", + } &default=function(i: count):string { return fmt("unknown-%d", i); }; + + ## Mapping between numeric codes and human readable strings for alert + ## descriptions.. + const alert_descriptions: table[count] of string = { + [0] = "close_notify", + [10] = "unexpected_message", + [20] = "bad_record_mac", + [21] = "decryption_failed", + [22] = "record_overflow", + [30] = "decompression_failure", + [40] = "handshake_failure", + [41] = "no_certificate", + [42] = "bad_certificate", + [43] = "unsupported_certificate", + [44] = "certificate_revoked", + [45] = "certificate_expired", + [46] = "certificate_unknown", + [47] = "illegal_parameter", + [48] = "unknown_ca", + [49] = "access_denied", + [50] = "decode_error", + [51] = "decrypt_error", + [60] = "export_restriction", + [70] = "protocol_version", + [71] = "insufficient_security", + [80] = "internal_error", + [90] = "user_canceled", + [100] = "no_renegotiation", + [110] = "unsupported_extension", + [111] = "certificate_unobtainable", + [112] = "unrecognized_name", + [113] = "bad_certificate_status_response", + [114] = "bad_certificate_hash_value", + [115] = "unknown_psk_identity", + } &default=function(i: count):string { return fmt("unknown-%d", i); }; + + ## Mapping between numeric codes and human readable strings for SSL/TLS + ## extensions. 
+ # More information can be found here: # http://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xml const extensions: table[count] of string = { [0] = "server_name", @@ -30,11 +77,16 @@ export { [12] = "srp", [13] = "signature_algorithms", [14] = "use_srtp", + [15] = "heartbeat", [35] = "SessionTicket TLS", + [40] = "extended_random", + [13172] = "next_protocol_negotiation", + [13175] = "origin_bound_certificates", + [13180] = "encrypted_client_certificates", [65281] = "renegotiation_info" } &default=function(i: count):string { return fmt("unknown-%d", i); }; - ## SSLv2 + # SSLv2 const SSLv20_CK_RC4_128_WITH_MD5 = 0x010080; const SSLv20_CK_RC4_128_EXPORT40_WITH_MD5 = 0x020080; const SSLv20_CK_RC2_128_CBC_WITH_MD5 = 0x030080; @@ -43,7 +95,7 @@ export { const SSLv20_CK_DES_64_CBC_WITH_MD5 = 0x060040; const SSLv20_CK_DES_192_EDE3_CBC_WITH_MD5 = 0x0700C0; - ## TLS + # TLS const TLS_NULL_WITH_NULL_NULL = 0x0000; const TLS_RSA_WITH_NULL_MD5 = 0x0001; const TLS_RSA_WITH_NULL_SHA = 0x0002; @@ -260,13 +312,11 @@ export { const SSL_RSA_WITH_DES_CBC_MD5 = 0xFF82; const SSL_RSA_WITH_3DES_EDE_CBC_MD5 = 0xFF83; const TLS_EMPTY_RENEGOTIATION_INFO_SCSV = 0x00FF; - - # --- This is a table of all known cipher specs. - # --- It can be used for detecting unknown ciphers and for - # --- converting the cipher spec constants into a human readable format. - + + ## This is a table of all known cipher specs. It can be used for + ## detecting unknown ciphers and for converting the cipher spec constants + ## into a human readable format. const cipher_desc: table[count] of string = { - # --- sslv20 --- [SSLv20_CK_RC4_128_EXPORT40_WITH_MD5] = "SSLv20_CK_RC4_128_EXPORT40_WITH_MD5", [SSLv20_CK_RC4_128_WITH_MD5] = "SSLv20_CK_RC4_128_WITH_MD5", @@ -278,7 +328,6 @@ export { "SSLv20_CK_DES_192_EDE3_CBC_WITH_MD5", [SSLv20_CK_DES_64_CBC_WITH_MD5] = "SSLv20_CK_DES_64_CBC_WITH_MD5", - # --- TLS --- [TLS_NULL_WITH_NULL_NULL] = "TLS_NULL_WITH_NULL_NULL", [TLS_RSA_WITH_NULL_MD5] = "TLS_RSA_WITH_NULL_MD5", [TLS_RSA_WITH_NULL_SHA] = "TLS_RSA_WITH_NULL_SHA", @@ -490,8 +539,9 @@ export { [SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA] = "SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA", [SSL_RSA_FIPS_WITH_DES_CBC_SHA_2] = "SSL_RSA_FIPS_WITH_DES_CBC_SHA_2", [SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA_2] = "SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA_2", - } &default="UNKNOWN"; - + } &default=function(i: count):string { return fmt("unknown-%d", i); }; + + ## Mapping between the constants and string values for SSL/TLS errors. const x509_errors: table[count] of string = { [0] = "ok", [1] = "unable to get issuer cert", @@ -526,8 +576,7 @@ export { [30] = "akid issuer serial mismatch", [31] = "keyusage no certsign", [32] = "unable to get crl issuer", - [33] = "unhandled critical extension" - - }; + [33] = "unhandled critical extension", + } &default=function(i: count):string { return fmt("unknown-%d", i); }; } diff --git a/scripts/base/protocols/ssl/main.bro b/scripts/base/protocols/ssl/main.bro index c3c04d3c93..6b434ae09d 100644 --- a/scripts/base/protocols/ssl/main.bro +++ b/scripts/base/protocols/ssl/main.bro @@ -1,3 +1,6 @@ +##! Base SSL analysis script. This script logs information about the SSL/TLS +##! handshaking and encryption establishment process. + @load ./consts module SSL; @@ -6,46 +9,72 @@ export { redef enum Log::ID += { LOG }; type Info: record { + ## Time when the SSL connection was first detected. ts: time &log; - uid: string &log; + ## Unique ID for the connection. 
+ uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. id: conn_id &log; + ## SSL/TLS version that the server offered. version: string &log &optional; + ## SSL/TLS cipher suite that the server chose. cipher: string &log &optional; + ## Value of the Server Name Indicator SSL/TLS extension. It + ## indicates the server name that the client was requesting. server_name: string &log &optional; + ## Session ID offered by the client for session resumption. session_id: string &log &optional; + ## Subject of the X.509 certificate offered by the server. subject: string &log &optional; + ## Subject of the signer of the X.509 certificate offered by the server. + issuer_subject: string &log &optional; + ## NotValidBefore field value from the server certificate. not_valid_before: time &log &optional; + ## NotValidAfter field value from the server certificate. not_valid_after: time &log &optional; - + ## Last alert that was seen during the connection. + last_alert: string &log &optional; + + ## Subject of the X.509 certificate offered by the client. + client_subject: string &log &optional; + ## Subject of the signer of the X.509 certificate offered by the client. + client_issuer_subject: string &log &optional; + + ## Full binary server certificate stored in DER format. cert: string &optional; + ## Chain of certificates offered by the server to validate its + ## complete signing chain. cert_chain: vector of string &optional; - - ## This stores the analyzer id used for the analyzer instance attached - ## to each connection. It is not used for logging since it's a + + ## Full binary client certificate stored in DER format. + client_cert: string &optional; + ## Chain of certificates offered by the client to validate its + ## complete signing chain. + client_cert_chain: vector of string &optional; + + ## The analyzer ID used for the analyzer instance attached + ## to each connection. It is not used for logging since it's a ## meaningless arbitrary number. analyzer_id: count &optional; }; - - ## This is where the default root CA bundle is defined. By loading the + + ## The default root CA bundle. By loading the ## mozilla-ca-list.bro script it will be set to Mozilla's root CA list. const root_certs: table[string] of string = {} &redef; - - ## If true, detach the SSL analyzer from the connection to prevent + + ## If true, detach the SSL analyzer from the connection to prevent ## continuing to process encrypted traffic. Helps with performance ## (especially with large file transfers). const disable_analyzer_after_detection = T &redef; - + ## The openssl command line utility. If it's in the path the default ## value will work, otherwise a full path string can be supplied for the ## utility. const openssl_util = "openssl" &redef; - + + ## Event that can be handled to access the SSL + ## record as it is sent on to the logging framework. global log_ssl: event(rec: Info); - - const ports = { - 443/tcp, 563/tcp, 585/tcp, 614/tcp, 636/tcp, - 989/tcp, 990/tcp, 992/tcp, 993/tcp, 995/tcp, 5223/tcp - } &redef; } redef record connection += { @@ -72,6 +101,11 @@ redef capture_filters += { ["xmpps"] = "tcp port 5223", }; +const ports = { + 443/tcp, 563/tcp, 585/tcp, 614/tcp, 636/tcp, + 989/tcp, 990/tcp, 992/tcp, 993/tcp, 995/tcp, 5223/tcp +}; + redef dpd_config += { [[ANALYZER_SSL]] = [$ports = ports] }; @@ -84,9 +118,10 @@ redef likely_server_ports += { function set_session(c: connection) { if ( ! 
c?$ssl ) - c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector()]; + c$ssl = [$ts=network_time(), $uid=c$uid, $id=c$id, $cert_chain=vector(), + $client_cert_chain=vector()]; } - + function finish(c: connection) { Log::write(SSL::LOG, c$ssl); @@ -98,54 +133,83 @@ function finish(c: connection) event ssl_client_hello(c: connection, version: count, possible_ts: time, session_id: string, ciphers: count_set) &priority=5 { set_session(c); - + # Save the session_id if there is one set. if ( session_id != /^\x00{32}$/ ) c$ssl$session_id = bytestring_to_hexstr(session_id); } - + event ssl_server_hello(c: connection, version: count, possible_ts: time, session_id: string, cipher: count, comp_method: count) &priority=5 { set_session(c); - + c$ssl$version = version_strings[version]; c$ssl$cipher = cipher_desc[cipher]; } -event x509_certificate(c: connection, cert: X509, is_server: bool, chain_idx: count, chain_len: count, der_cert: string) &priority=5 +event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=5 { set_session(c); - - if ( chain_idx == 0 ) + + # We aren't doing anything with client certificates yet. + if ( is_orig ) { - # Save the primary cert. - c$ssl$cert = der_cert; - - # Also save other certificate information about the primary cert. - c$ssl$subject = cert$subject; - c$ssl$not_valid_before = cert$not_valid_before; - c$ssl$not_valid_after = cert$not_valid_after; + if ( chain_idx == 0 ) + { + # Save the primary cert. + c$ssl$client_cert = der_cert; + + # Also save other certificate information about the primary cert. + c$ssl$client_subject = cert$subject; + c$ssl$client_issuer_subject = cert$issuer; + } + else + { + # Otherwise, add it to the cert validation chain. + c$ssl$client_cert_chain[|c$ssl$client_cert_chain|] = der_cert; + } } else { - # Otherwise, add it to the cert validation chain. - c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert; + if ( chain_idx == 0 ) + { + # Save the primary cert. + c$ssl$cert = der_cert; + + # Also save other certificate information about the primary cert. + c$ssl$subject = cert$subject; + c$ssl$issuer_subject = cert$issuer; + c$ssl$not_valid_before = cert$not_valid_before; + c$ssl$not_valid_after = cert$not_valid_after; + } + else + { + # Otherwise, add it to the cert validation chain. + c$ssl$cert_chain[|c$ssl$cert_chain|] = der_cert; + } } } - -event ssl_extension(c: connection, code: count, val: string) &priority=5 + +event ssl_extension(c: connection, is_orig: bool, code: count, val: string) &priority=5 { set_session(c); - - if ( extensions[code] == "server_name" ) + + if ( is_orig && extensions[code] == "server_name" ) c$ssl$server_name = sub_bytes(val, 6, |val|); } - + +event ssl_alert(c: connection, is_orig: bool, level: count, desc: count) &priority=5 + { + set_session(c); + + c$ssl$last_alert = alert_descriptions[desc]; + } + event ssl_established(c: connection) &priority=5 { set_session(c); } - + event ssl_established(c: connection) &priority=-5 { finish(c); @@ -163,4 +227,4 @@ event protocol_violation(c: connection, atype: count, aid: count, { if ( c?$ssl ) finish(c); - } \ No newline at end of file + } diff --git a/scripts/base/protocols/ssl/mozilla-ca-list.bro b/scripts/base/protocols/ssl/mozilla-ca-list.bro index 4c4dccb755..ad8e445912 100644 --- a/scripts/base/protocols/ssl/mozilla-ca-list.bro +++ b/scripts/base/protocols/ssl/mozilla-ca-list.bro @@ -1,5 +1,5 @@ # Don't edit! This file is automatically generated. 
-# Generated at: 2011-10-25 11:03:20 -0500 +# Generated at: Fri Jul 13 22:22:40 -0400 2012 @load base/protocols/ssl module SSL; redef root_certs += { @@ -11,7 +11,6 @@ redef root_certs += { ["OU=DSTCA E2,O=Digital Signature Trust Co.,C=US"] = "\x30\x82\x03\x29\x30\x82\x02\x92\xA0\x03\x02\x01\x02\x02\x04\x36\x6E\xD3\xCE\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x46\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x24\x30\x22\x06\x03\x55\x04\x0A\x13\x1B\x44\x69\x67\x69\x74\x61\x6C\x20\x53\x69\x67\x6E\x61\x74\x75\x72\x65\x20\x54\x72\x75\x73\x74\x20\x43\x6F\x2E\x31\x11\x30\x0F\x06\x03\x55\x04\x0B\x13\x08\x44\x53\x54\x43\x41\x20\x45\x32\x30\x1E\x17\x0D\x39\x38\x31\x32\x30\x39\x31\x39\x31\x37\x32\x36\x5A\x17\x0D\x31\x38\x31\x32\x30\x39\x31\x39\x34\x37\x32\x36\x5A\x30\x46\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x24\x30\x22\x06\x03\x55\x04\x0A\x13\x1B\x44\x69\x67\x69\x74\x61\x6C\x20\x53\x69\x67\x6E\x61\x74\x75\x72\x65\x20\x54\x72\x75\x73\x74\x20\x43\x6F\x2E\x31\x11\x30\x0F\x06\x03\x55\x04\x0B\x13\x08\x44\x53\x54\x43\x41\x20\x45\x32\x30\x81\x9D\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x81\x8B\x00\x30\x81\x87\x02\x81\x81\x00\xBF\x93\x8F\x17\x92\xEF\x33\x13\x18\xEB\x10\x7F\x4E\x16\xBF\xFF\x06\x8F\x2A\x85\xBC\x5E\xF9\x24\xA6\x24\x88\xB6\x03\xB7\xC1\xC3\x5F\x03\x5B\xD1\x6F\xAE\x7E\x42\xEA\x66\x23\xB8\x63\x83\x56\xFB\x28\x2D\xE1\x38\x8B\xB4\xEE\xA8\x01\xE1\xCE\x1C\xB6\x88\x2A\x22\x46\x85\xFB\x9F\xA7\x70\xA9\x47\x14\x3F\xCE\xDE\x65\xF0\xA8\x71\xF7\x4F\x26\x6C\x8C\xBC\xC6\xB5\xEF\xDE\x49\x27\xFF\x48\x2A\x7D\xE8\x4D\x03\xCC\xC7\xB2\x52\xC6\x17\x31\x13\x3B\xB5\x4D\xDB\xC8\xC4\xF6\xC3\x0F\x24\x2A\xDA\x0C\x9D\xE7\x91\x5B\x80\xCD\x94\x9D\x02\x01\x03\xA3\x82\x01\x24\x30\x82\x01\x20\x30\x11\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x01\x04\x04\x03\x02\x00\x07\x30\x68\x06\x03\x55\x1D\x1F\x04\x61\x30\x5F\x30\x5D\xA0\x5B\xA0\x59\xA4\x57\x30\x55\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x24\x30\x22\x06\x03\x55\x04\x0A\x13\x1B\x44\x69\x67\x69\x74\x61\x6C\x20\x53\x69\x67\x6E\x61\x74\x75\x72\x65\x20\x54\x72\x75\x73\x74\x20\x43\x6F\x2E\x31\x11\x30\x0F\x06\x03\x55\x04\x0B\x13\x08\x44\x53\x54\x43\x41\x20\x45\x32\x31\x0D\x30\x0B\x06\x03\x55\x04\x03\x13\x04\x43\x52\x4C\x31\x30\x2B\x06\x03\x55\x1D\x10\x04\x24\x30\x22\x80\x0F\x31\x39\x39\x38\x31\x32\x30\x39\x31\x39\x31\x37\x32\x36\x5A\x81\x0F\x32\x30\x31\x38\x31\x32\x30\x39\x31\x39\x31\x37\x32\x36\x5A\x30\x0B\x06\x03\x55\x1D\x0F\x04\x04\x03\x02\x01\x06\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\x1E\x82\x4D\x28\x65\x80\x3C\xC9\x41\x6E\xAC\x35\x2E\x5A\xCB\xDE\xEE\xF8\x39\x5B\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x1E\x82\x4D\x28\x65\x80\x3C\xC9\x41\x6E\xAC\x35\x2E\x5A\xCB\xDE\xEE\xF8\x39\x5B\x30\x0C\x06\x03\x55\x1D\x13\x04\x05\x30\x03\x01\x01\xFF\x30\x19\x06\x09\x2A\x86\x48\x86\xF6\x7D\x07\x41\x00\x04\x0C\x30\x0A\x1B\x04\x56\x34\x2E\x30\x03\x02\x04\x90\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x81\x81\x00\x47\x8D\x83\xAD\x62\xF2\xDB\xB0\x9E\x45\x22\x05\xB9\xA2\xD6\x03\x0E\x38\x72\xE7\x9E\xFC\x7B\xE6\x93\xB6\x9A\xA5\xA2\x94\xC8\x34\x1D\x91\xD1\xC5\xD7\xF4\x0A\x25\x0F\x3D\x78\x81\x9E\x0F\xB1\x67\xC4\x90\x4C\x63\xDD\x5E\xA7\xE2\xBA\x9F\xF5\xF7\x4D\xA5\x31\x7B\x9C\x29\x2D\x4C\xFE\x64\x3E\xEC\xB6\x53\xFE\xEA\x9B\xED\x82\xDB\x74\x75\x4B\x07\x79\x6E\x1E\xD8\x19\x83\x73\xDE\xF5\x3E\xD0\xB5\xDE\xE7\x4B\x68\x7D\x43\x2E\x2A\x20\xE1\x7E\xA0\x78\x44\x9E\x08\xF5\x98\xF9\xC7\x7F\x1B\x1B\xD6\x06\x20\x02\x58\xA1\xC3\xA2\x03", ["OU=Class 3 Public Primary Certification 
Authority,O=VeriSign\, Inc.,C=US"] = "\x30\x82\x02\x3C\x30\x82\x01\xA5\x02\x10\x70\xBA\xE4\x1D\x10\xD9\x29\x34\xB6\x38\xCA\x7B\x03\xCC\xBA\xBF\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x02\x05\x00\x30\x5F\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x31\x37\x30\x35\x06\x03\x55\x04\x0B\x13\x2E\x43\x6C\x61\x73\x73\x20\x33\x20\x50\x75\x62\x6C\x69\x63\x20\x50\x72\x69\x6D\x61\x72\x79\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x30\x1E\x17\x0D\x39\x36\x30\x31\x32\x39\x30\x30\x30\x30\x30\x30\x5A\x17\x0D\x32\x38\x30\x38\x30\x31\x32\x33\x35\x39\x35\x39\x5A\x30\x5F\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x31\x37\x30\x35\x06\x03\x55\x04\x0B\x13\x2E\x43\x6C\x61\x73\x73\x20\x33\x20\x50\x75\x62\x6C\x69\x63\x20\x50\x72\x69\x6D\x61\x72\x79\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x30\x81\x9F\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x81\x8D\x00\x30\x81\x89\x02\x81\x81\x00\xC9\x5C\x59\x9E\xF2\x1B\x8A\x01\x14\xB4\x10\xDF\x04\x40\xDB\xE3\x57\xAF\x6A\x45\x40\x8F\x84\x0C\x0B\xD1\x33\xD9\xD9\x11\xCF\xEE\x02\x58\x1F\x25\xF7\x2A\xA8\x44\x05\xAA\xEC\x03\x1F\x78\x7F\x9E\x93\xB9\x9A\x00\xAA\x23\x7D\xD6\xAC\x85\xA2\x63\x45\xC7\x72\x27\xCC\xF4\x4C\xC6\x75\x71\xD2\x39\xEF\x4F\x42\xF0\x75\xDF\x0A\x90\xC6\x8E\x20\x6F\x98\x0F\xF8\xAC\x23\x5F\x70\x29\x36\xA4\xC9\x86\xE7\xB1\x9A\x20\xCB\x53\xA5\x85\xE7\x3D\xBE\x7D\x9A\xFE\x24\x45\x33\xDC\x76\x15\xED\x0F\xA2\x71\x64\x4C\x65\x2E\x81\x68\x45\xA7\x02\x03\x01\x00\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x02\x05\x00\x03\x81\x81\x00\xBB\x4C\x12\x2B\xCF\x2C\x26\x00\x4F\x14\x13\xDD\xA6\xFB\xFC\x0A\x11\x84\x8C\xF3\x28\x1C\x67\x92\x2F\x7C\xB6\xC5\xFA\xDF\xF0\xE8\x95\xBC\x1D\x8F\x6C\x2C\xA8\x51\xCC\x73\xD8\xA4\xC0\x53\xF0\x4E\xD6\x26\xC0\x76\x01\x57\x81\x92\x5E\x21\xF1\xD1\xB1\xFF\xE7\xD0\x21\x58\xCD\x69\x17\xE3\x44\x1C\x9C\x19\x44\x39\x89\x5C\xDC\x9C\x00\x0F\x56\x8D\x02\x99\xED\xA2\x90\x45\x4C\xE4\xBB\x10\xA4\x3D\xF0\x32\x03\x0E\xF1\xCE\xF8\xE8\xC9\x51\x8C\xE6\x62\x9F\xE6\x9F\xC0\x7D\xB7\x72\x9C\xC9\x36\x3A\x6B\x9F\x4E\xA8\xFF\x64\x0D\x64", ["OU=VeriSign Trust Network,OU=(c) 1998 VeriSign\, Inc. 
- For authorized use only,OU=Class 3 Public Primary Certification Authority - G2,O=VeriSign\, Inc.,C=US"] = "\x30\x82\x03\x02\x30\x82\x02\x6B\x02\x10\x7D\xD9\xFE\x07\xCF\xA8\x1E\xB7\x10\x79\x67\xFB\xA7\x89\x34\xC6\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\xC1\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x31\x3C\x30\x3A\x06\x03\x55\x04\x0B\x13\x33\x43\x6C\x61\x73\x73\x20\x33\x20\x50\x75\x62\x6C\x69\x63\x20\x50\x72\x69\x6D\x61\x72\x79\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x2D\x20\x47\x32\x31\x3A\x30\x38\x06\x03\x55\x04\x0B\x13\x31\x28\x63\x29\x20\x31\x39\x39\x38\x20\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x20\x2D\x20\x46\x6F\x72\x20\x61\x75\x74\x68\x6F\x72\x69\x7A\x65\x64\x20\x75\x73\x65\x20\x6F\x6E\x6C\x79\x31\x1F\x30\x1D\x06\x03\x55\x04\x0B\x13\x16\x56\x65\x72\x69\x53\x69\x67\x6E\x20\x54\x72\x75\x73\x74\x20\x4E\x65\x74\x77\x6F\x72\x6B\x30\x1E\x17\x0D\x39\x38\x30\x35\x31\x38\x30\x30\x30\x30\x30\x30\x5A\x17\x0D\x32\x38\x30\x38\x30\x31\x32\x33\x35\x39\x35\x39\x5A\x30\x81\xC1\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x31\x3C\x30\x3A\x06\x03\x55\x04\x0B\x13\x33\x43\x6C\x61\x73\x73\x20\x33\x20\x50\x75\x62\x6C\x69\x63\x20\x50\x72\x69\x6D\x61\x72\x79\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x2D\x20\x47\x32\x31\x3A\x30\x38\x06\x03\x55\x04\x0B\x13\x31\x28\x63\x29\x20\x31\x39\x39\x38\x20\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x20\x2D\x20\x46\x6F\x72\x20\x61\x75\x74\x68\x6F\x72\x69\x7A\x65\x64\x20\x75\x73\x65\x20\x6F\x6E\x6C\x79\x31\x1F\x30\x1D\x06\x03\x55\x04\x0B\x13\x16\x56\x65\x72\x69\x53\x69\x67\x6E\x20\x54\x72\x75\x73\x74\x20\x4E\x65\x74\x77\x6F\x72\x6B\x30\x81\x9F\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x81\x8D\x00\x30\x81\x89\x02\x81\x81\x00\xCC\x5E\xD1\x11\x5D\x5C\x69\xD0\xAB\xD3\xB9\x6A\x4C\x99\x1F\x59\x98\x30\x8E\x16\x85\x20\x46\x6D\x47\x3F\xD4\x85\x20\x84\xE1\x6D\xB3\xF8\xA4\xED\x0C\xF1\x17\x0F\x3B\xF9\xA7\xF9\x25\xD7\xC1\xCF\x84\x63\xF2\x7C\x63\xCF\xA2\x47\xF2\xC6\x5B\x33\x8E\x64\x40\x04\x68\xC1\x80\xB9\x64\x1C\x45\x77\xC7\xD8\x6E\xF5\x95\x29\x3C\x50\xE8\x34\xD7\x78\x1F\xA8\xBA\x6D\x43\x91\x95\x8F\x45\x57\x5E\x7E\xC5\xFB\xCA\xA4\x04\xEB\xEA\x97\x37\x54\x30\x6F\xBB\x01\x47\x32\x33\xCD\xDC\x57\x9B\x64\x69\x61\xF8\x9B\x1D\x1C\x89\x4F\x5C\x67\x02\x03\x01\x00\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x81\x81\x00\x51\x4D\xCD\xBE\x5C\xCB\x98\x19\x9C\x15\xB2\x01\x39\x78\x2E\x4D\x0F\x67\x70\x70\x99\xC6\x10\x5A\x94\xA4\x53\x4D\x54\x6D\x2B\xAF\x0D\x5D\x40\x8B\x64\xD3\xD7\xEE\xDE\x56\x61\x92\x5F\xA6\xC4\x1D\x10\x61\x36\xD3\x2C\x27\x3C\xE8\x29\x09\xB9\x11\x64\x74\xCC\xB5\x73\x9F\x1C\x48\xA9\xBC\x61\x01\xEE\xE2\x17\xA6\x0C\xE3\x40\x08\x3B\x0E\xE7\xEB\x44\x73\x2A\x9A\xF1\x69\x92\xEF\x71\x14\xC3\x39\xAC\x71\xA7\x91\x09\x6F\xE4\x71\x06\xB3\xBA\x59\x57\x26\x79\x00\xF6\xF8\x0D\xA2\x33\x30\x28\xD4\xAA\x58\xA0\x9D\x9D\x69\x91\xFD", - ["OU=VeriSign Trust Network,OU=(c) 1998 VeriSign\, Inc. 
- For authorized use only,OU=Class 4 Public Primary Certification Authority - G2,O=VeriSign\, Inc.,C=US"] = "\x30\x82\x03\x02\x30\x82\x02\x6B\x02\x10\x32\x88\x8E\x9A\xD2\xF5\xEB\x13\x47\xF8\x7F\xC4\x20\x37\x25\xF8\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\xC1\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x31\x3C\x30\x3A\x06\x03\x55\x04\x0B\x13\x33\x43\x6C\x61\x73\x73\x20\x34\x20\x50\x75\x62\x6C\x69\x63\x20\x50\x72\x69\x6D\x61\x72\x79\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x2D\x20\x47\x32\x31\x3A\x30\x38\x06\x03\x55\x04\x0B\x13\x31\x28\x63\x29\x20\x31\x39\x39\x38\x20\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x20\x2D\x20\x46\x6F\x72\x20\x61\x75\x74\x68\x6F\x72\x69\x7A\x65\x64\x20\x75\x73\x65\x20\x6F\x6E\x6C\x79\x31\x1F\x30\x1D\x06\x03\x55\x04\x0B\x13\x16\x56\x65\x72\x69\x53\x69\x67\x6E\x20\x54\x72\x75\x73\x74\x20\x4E\x65\x74\x77\x6F\x72\x6B\x30\x1E\x17\x0D\x39\x38\x30\x35\x31\x38\x30\x30\x30\x30\x30\x30\x5A\x17\x0D\x32\x38\x30\x38\x30\x31\x32\x33\x35\x39\x35\x39\x5A\x30\x81\xC1\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x31\x3C\x30\x3A\x06\x03\x55\x04\x0B\x13\x33\x43\x6C\x61\x73\x73\x20\x34\x20\x50\x75\x62\x6C\x69\x63\x20\x50\x72\x69\x6D\x61\x72\x79\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x2D\x20\x47\x32\x31\x3A\x30\x38\x06\x03\x55\x04\x0B\x13\x31\x28\x63\x29\x20\x31\x39\x39\x38\x20\x56\x65\x72\x69\x53\x69\x67\x6E\x2C\x20\x49\x6E\x63\x2E\x20\x2D\x20\x46\x6F\x72\x20\x61\x75\x74\x68\x6F\x72\x69\x7A\x65\x64\x20\x75\x73\x65\x20\x6F\x6E\x6C\x79\x31\x1F\x30\x1D\x06\x03\x55\x04\x0B\x13\x16\x56\x65\x72\x69\x53\x69\x67\x6E\x20\x54\x72\x75\x73\x74\x20\x4E\x65\x74\x77\x6F\x72\x6B\x30\x81\x9F\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x81\x8D\x00\x30\x81\x89\x02\x81\x81\x00\xBA\xF0\xE4\xCF\xF9\xC4\xAE\x85\x54\xB9\x07\x57\xF9\x8F\xC5\x7F\x68\x11\xF8\xC4\x17\xB0\x44\xDC\xE3\x30\x73\xD5\x2A\x62\x2A\xB8\xD0\xCC\x1C\xED\x28\x5B\x7E\xBD\x6A\xDC\xB3\x91\x24\xCA\x41\x62\x3C\xFC\x02\x01\xBF\x1C\x16\x31\x94\x05\x97\x76\x6E\xA2\xAD\xBD\x61\x17\x6C\x4E\x30\x86\xF0\x51\x37\x2A\x50\xC7\xA8\x62\x81\xDC\x5B\x4A\xAA\xC1\xA0\xB4\x6E\xEB\x2F\xE5\x57\xC5\xB1\x2B\x40\x70\xDB\x5A\x4D\xA1\x8E\x1F\xBD\x03\x1F\xD8\x03\xD4\x8F\x4C\x99\x71\xBC\xE2\x82\xCC\x58\xE8\x98\x3A\x86\xD3\x86\x38\xF3\x00\x29\x1F\x02\x03\x01\x00\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x81\x81\x00\x85\x8C\x12\xC1\xA7\xB9\x50\x15\x7A\xCB\x3E\xAC\xB8\x43\x8A\xDC\xAA\xDD\x14\xBA\x89\x81\x7E\x01\x3C\x23\x71\x21\x88\x2F\x82\xDC\x63\xFA\x02\x45\xAC\x45\x59\xD7\x2A\x58\x44\x5B\xB7\x9F\x81\x3B\x92\x68\x3D\xE2\x37\x24\xF5\x7B\x6C\x8F\x76\x35\x96\x09\xA8\x59\x9D\xB9\xCE\x23\xAB\x74\xD6\x83\xFD\x32\x73\x27\xD8\x69\x3E\x43\x74\xF6\xAE\xC5\x89\x9A\xE7\x53\x7C\xE9\x7B\xF6\x4B\xF3\xC1\x65\x83\xDE\x8D\x8A\x9C\x3C\x88\x8D\x39\x59\xFC\xAA\x3F\x22\x8D\xA1\xC1\x66\x50\x81\x72\x4C\xED\x22\x64\x4F\x4F\xCA\x80\x91\xB6\x29", ["CN=GlobalSign Root CA,OU=Root CA,O=GlobalSign nv-sa,C=BE"] = 
"\x30\x82\x03\x75\x30\x82\x02\x5D\xA0\x03\x02\x01\x02\x02\x0B\x04\x00\x00\x00\x00\x01\x15\x4B\x5A\xC3\x94\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x57\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x42\x45\x31\x19\x30\x17\x06\x03\x55\x04\x0A\x13\x10\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x20\x6E\x76\x2D\x73\x61\x31\x10\x30\x0E\x06\x03\x55\x04\x0B\x13\x07\x52\x6F\x6F\x74\x20\x43\x41\x31\x1B\x30\x19\x06\x03\x55\x04\x03\x13\x12\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x1E\x17\x0D\x39\x38\x30\x39\x30\x31\x31\x32\x30\x30\x30\x30\x5A\x17\x0D\x32\x38\x30\x31\x32\x38\x31\x32\x30\x30\x30\x30\x5A\x30\x57\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x42\x45\x31\x19\x30\x17\x06\x03\x55\x04\x0A\x13\x10\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x20\x6E\x76\x2D\x73\x61\x31\x10\x30\x0E\x06\x03\x55\x04\x0B\x13\x07\x52\x6F\x6F\x74\x20\x43\x41\x31\x1B\x30\x19\x06\x03\x55\x04\x03\x13\x12\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xDA\x0E\xE6\x99\x8D\xCE\xA3\xE3\x4F\x8A\x7E\xFB\xF1\x8B\x83\x25\x6B\xEA\x48\x1F\xF1\x2A\xB0\xB9\x95\x11\x04\xBD\xF0\x63\xD1\xE2\x67\x66\xCF\x1C\xDD\xCF\x1B\x48\x2B\xEE\x8D\x89\x8E\x9A\xAF\x29\x80\x65\xAB\xE9\xC7\x2D\x12\xCB\xAB\x1C\x4C\x70\x07\xA1\x3D\x0A\x30\xCD\x15\x8D\x4F\xF8\xDD\xD4\x8C\x50\x15\x1C\xEF\x50\xEE\xC4\x2E\xF7\xFC\xE9\x52\xF2\x91\x7D\xE0\x6D\xD5\x35\x30\x8E\x5E\x43\x73\xF2\x41\xE9\xD5\x6A\xE3\xB2\x89\x3A\x56\x39\x38\x6F\x06\x3C\x88\x69\x5B\x2A\x4D\xC5\xA7\x54\xB8\x6C\x89\xCC\x9B\xF9\x3C\xCA\xE5\xFD\x89\xF5\x12\x3C\x92\x78\x96\xD6\xDC\x74\x6E\x93\x44\x61\xD1\x8D\xC7\x46\xB2\x75\x0E\x86\xE8\x19\x8A\xD5\x6D\x6C\xD5\x78\x16\x95\xA2\xE9\xC8\x0A\x38\xEB\xF2\x24\x13\x4F\x73\x54\x93\x13\x85\x3A\x1B\xBC\x1E\x34\xB5\x8B\x05\x8C\xB9\x77\x8B\xB1\xDB\x1F\x20\x91\xAB\x09\x53\x6E\x90\xCE\x7B\x37\x74\xB9\x70\x47\x91\x22\x51\x63\x16\x79\xAE\xB1\xAE\x41\x26\x08\xC8\x19\x2B\xD1\x46\xAA\x48\xD6\x64\x2A\xD7\x83\x34\xFF\x2C\x2A\xC1\x6C\x19\x43\x4A\x07\x85\xE7\xD3\x7C\xF6\x21\x68\xEF\xEA\xF2\x52\x9F\x7F\x93\x90\xCF\x02\x03\x01\x00\x01\xA3\x42\x30\x40\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x60\x7B\x66\x1A\x45\x0D\x97\xCA\x89\x50\x2F\x7D\x04\xCD\x34\xA8\xFF\xFC\xFD\x4B\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\xD6\x73\xE7\x7C\x4F\x76\xD0\x8D\xBF\xEC\xBA\xA2\xBE\x34\xC5\x28\x32\xB5\x7C\xFC\x6C\x9C\x2C\x2B\xBD\x09\x9E\x53\xBF\x6B\x5E\xAA\x11\x48\xB6\xE5\x08\xA3\xB3\xCA\x3D\x61\x4D\xD3\x46\x09\xB3\x3E\xC3\xA0\xE3\x63\x55\x1B\xF2\xBA\xEF\xAD\x39\xE1\x43\xB9\x38\xA3\xE6\x2F\x8A\x26\x3B\xEF\xA0\x50\x56\xF9\xC6\x0A\xFD\x38\xCD\xC4\x0B\x70\x51\x94\x97\x98\x04\xDF\xC3\x5F\x94\xD5\x15\xC9\x14\x41\x9C\xC4\x5D\x75\x64\x15\x0D\xFF\x55\x30\xEC\x86\x8F\xFF\x0D\xEF\x2C\xB9\x63\x46\xF6\xAA\xFC\xDF\xBC\x69\xFD\x2E\x12\x48\x64\x9A\xE0\x95\xF0\xA6\xEF\x29\x8F\x01\xB1\x15\xB5\x0C\x1D\xA5\xFE\x69\x2C\x69\x24\x78\x1E\xB3\xA7\x1C\x71\x62\xEE\xCA\xC8\x97\xAC\x17\x5D\x8A\xC2\xF8\x47\x86\x6E\x2A\xC4\x56\x31\x95\xD0\x67\x89\x85\x2B\xF9\x6C\xA6\x5D\x46\x9D\x0C\xAA\x82\xE4\x99\x51\xDD\x70\xB7\xDB\x56\x3D\x61\xE4\x6A\xE1\x5C\xD6\xF6\xFE\x3D\xDE\x41\xCC\x07\xAE\x63\x52\xBF\x53\x53\xF4\x2B\xE9\xC7\xFD\xB6\xF7\x82\x5F\x85\xD2\x41\x18\xDB\x81\xB3\x04\x1C\xC5\x1F\xA4\x80\x6F\x15\x20\xC9\xDE\x0C\x88\x0A\x1D\xD6\x66\x55\xE2\xFC\x48\xC9\x29\x26\x69\x
E0", ["CN=GlobalSign,O=GlobalSign,OU=GlobalSign Root CA - R2"] = "\x30\x82\x03\xBA\x30\x82\x02\xA2\xA0\x03\x02\x01\x02\x02\x0B\x04\x00\x00\x00\x00\x01\x0F\x86\x26\xE6\x0D\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x4C\x31\x20\x30\x1E\x06\x03\x55\x04\x0B\x13\x17\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x20\x2D\x20\x52\x32\x31\x13\x30\x11\x06\x03\x55\x04\x0A\x13\x0A\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x31\x13\x30\x11\x06\x03\x55\x04\x03\x13\x0A\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x30\x1E\x17\x0D\x30\x36\x31\x32\x31\x35\x30\x38\x30\x30\x30\x30\x5A\x17\x0D\x32\x31\x31\x32\x31\x35\x30\x38\x30\x30\x30\x30\x5A\x30\x4C\x31\x20\x30\x1E\x06\x03\x55\x04\x0B\x13\x17\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x20\x2D\x20\x52\x32\x31\x13\x30\x11\x06\x03\x55\x04\x0A\x13\x0A\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x31\x13\x30\x11\x06\x03\x55\x04\x03\x13\x0A\x47\x6C\x6F\x62\x61\x6C\x53\x69\x67\x6E\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xA6\xCF\x24\x0E\xBE\x2E\x6F\x28\x99\x45\x42\xC4\xAB\x3E\x21\x54\x9B\x0B\xD3\x7F\x84\x70\xFA\x12\xB3\xCB\xBF\x87\x5F\xC6\x7F\x86\xD3\xB2\x30\x5C\xD6\xFD\xAD\xF1\x7B\xDC\xE5\xF8\x60\x96\x09\x92\x10\xF5\xD0\x53\xDE\xFB\x7B\x7E\x73\x88\xAC\x52\x88\x7B\x4A\xA6\xCA\x49\xA6\x5E\xA8\xA7\x8C\x5A\x11\xBC\x7A\x82\xEB\xBE\x8C\xE9\xB3\xAC\x96\x25\x07\x97\x4A\x99\x2A\x07\x2F\xB4\x1E\x77\xBF\x8A\x0F\xB5\x02\x7C\x1B\x96\xB8\xC5\xB9\x3A\x2C\xBC\xD6\x12\xB9\xEB\x59\x7D\xE2\xD0\x06\x86\x5F\x5E\x49\x6A\xB5\x39\x5E\x88\x34\xEC\xBC\x78\x0C\x08\x98\x84\x6C\xA8\xCD\x4B\xB4\xA0\x7D\x0C\x79\x4D\xF0\xB8\x2D\xCB\x21\xCA\xD5\x6C\x5B\x7D\xE1\xA0\x29\x84\xA1\xF9\xD3\x94\x49\xCB\x24\x62\x91\x20\xBC\xDD\x0B\xD5\xD9\xCC\xF9\xEA\x27\x0A\x2B\x73\x91\xC6\x9D\x1B\xAC\xC8\xCB\xE8\xE0\xA0\xF4\x2F\x90\x8B\x4D\xFB\xB0\x36\x1B\xF6\x19\x7A\x85\xE0\x6D\xF2\x61\x13\x88\x5C\x9F\xE0\x93\x0A\x51\x97\x8A\x5A\xCE\xAF\xAB\xD5\xF7\xAA\x09\xAA\x60\xBD\xDC\xD9\x5F\xDF\x72\xA9\x60\x13\x5E\x00\x01\xC9\x4A\xFA\x3F\xA4\xEA\x07\x03\x21\x02\x8E\x82\xCA\x03\xC2\x9B\x8F\x02\x03\x01\x00\x01\xA3\x81\x9C\x30\x81\x99\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x9B\xE2\x07\x57\x67\x1C\x1E\xC0\x6A\x06\xDE\x59\xB4\x9A\x2D\xDF\xDC\x19\x86\x2E\x30\x36\x06\x03\x55\x1D\x1F\x04\x2F\x30\x2D\x30\x2B\xA0\x29\xA0\x27\x86\x25\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x67\x6C\x6F\x62\x61\x6C\x73\x69\x67\x6E\x2E\x6E\x65\x74\x2F\x72\x6F\x6F\x74\x2D\x72\x32\x2E\x63\x72\x6C\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\x9B\xE2\x07\x57\x67\x1C\x1E\xC0\x6A\x06\xDE\x59\xB4\x9A\x2D\xDF\xDC\x19\x86\x2E\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x99\x81\x53\x87\x1C\x68\x97\x86\x91\xEC\xE0\x4A\xB8\x44\x0B\xAB\x81\xAC\x27\x4F\xD6\xC1\xB8\x1C\x43\x78\xB3\x0C\x9A\xFC\xEA\x2C\x3C\x6E\x61\x1B\x4D\x4B\x29\xF5\x9F\x05\x1D\x26\xC1\xB8\xE9\x83\x00\x62\x45\xB6\xA9\x08\x93\xB9\xA9\x33\x4B\x18\x9A\xC2\xF8\x87\x88\x4E\xDB\xDD\x71\x34\x1A\xC1\x54\xDA\x46\x3F\xE0\xD3\x2A\xAB\x6D\x54\x22\xF5\x3A\x62\xCD\x20\x6F\xBA\x29\x89\xD7\xDD\x91\xEE\xD3\x5C\xA2\x3E\xA1\x5B\x41\xF5\xDF\xE5\x64\x43\x2D\xE9\xD5\x39\xAB\xD2\xA2\xDF\xB7\x8B\xD0\xC0\x80\x19\x1C\x45\xC0\x2D\x8C\xE8\xF8\x2D\xA4\x74\x56\x49\xC5\x05\xB5\x4F\x15\xDE\x6E\x44\x78\x39\x87\xA8\x7E\xBB\xF3\x79\x18\x91\xBB\xF4\x6F\x9D\xC1\xF0\x8C\x35\x8C\x5D\x01\xFB\xC3\x6D\xB9\xEF\x44\x6D\
x79\x46\x31\x7E\x0A\xFE\xA9\x82\xC1\xFF\xEF\xAB\x6E\x20\xC4\x50\xC9\x5F\x9D\x4D\x9B\x17\x8C\x0C\xE5\x01\xC9\xA0\x41\x6A\x73\x53\xFA\xA5\x50\xB4\x6E\x25\x0F\xFB\x4C\x18\xF4\xFD\x52\xD9\x8E\x69\xB1\xE8\x11\x0F\xDE\x88\xD8\xFB\x1D\x49\xF7\xAA\xDE\x95\xCF\x20\x78\xC2\x60\x12\xDB\x25\x40\x8C\x6A\xFC\x7E\x42\x38\x40\x64\x12\xF7\x9E\x81\xE1\x93\x2E", ["emailAddress=info@valicert.com,CN=http://www.valicert.com/,OU=ValiCert Class 1 Policy Validation Authority,O=ValiCert\, Inc.,L=ValiCert Validation Network"] = "\x30\x82\x02\xE7\x30\x82\x02\x50\x02\x01\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\xBB\x31\x24\x30\x22\x06\x03\x55\x04\x07\x13\x1B\x56\x61\x6C\x69\x43\x65\x72\x74\x20\x56\x61\x6C\x69\x64\x61\x74\x69\x6F\x6E\x20\x4E\x65\x74\x77\x6F\x72\x6B\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x61\x6C\x69\x43\x65\x72\x74\x2C\x20\x49\x6E\x63\x2E\x31\x35\x30\x33\x06\x03\x55\x04\x0B\x13\x2C\x56\x61\x6C\x69\x43\x65\x72\x74\x20\x43\x6C\x61\x73\x73\x20\x31\x20\x50\x6F\x6C\x69\x63\x79\x20\x56\x61\x6C\x69\x64\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x31\x21\x30\x1F\x06\x03\x55\x04\x03\x13\x18\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x76\x61\x6C\x69\x63\x65\x72\x74\x2E\x63\x6F\x6D\x2F\x31\x20\x30\x1E\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x09\x01\x16\x11\x69\x6E\x66\x6F\x40\x76\x61\x6C\x69\x63\x65\x72\x74\x2E\x63\x6F\x6D\x30\x1E\x17\x0D\x39\x39\x30\x36\x32\x35\x32\x32\x32\x33\x34\x38\x5A\x17\x0D\x31\x39\x30\x36\x32\x35\x32\x32\x32\x33\x34\x38\x5A\x30\x81\xBB\x31\x24\x30\x22\x06\x03\x55\x04\x07\x13\x1B\x56\x61\x6C\x69\x43\x65\x72\x74\x20\x56\x61\x6C\x69\x64\x61\x74\x69\x6F\x6E\x20\x4E\x65\x74\x77\x6F\x72\x6B\x31\x17\x30\x15\x06\x03\x55\x04\x0A\x13\x0E\x56\x61\x6C\x69\x43\x65\x72\x74\x2C\x20\x49\x6E\x63\x2E\x31\x35\x30\x33\x06\x03\x55\x04\x0B\x13\x2C\x56\x61\x6C\x69\x43\x65\x72\x74\x20\x43\x6C\x61\x73\x73\x20\x31\x20\x50\x6F\x6C\x69\x63\x79\x20\x56\x61\x6C\x69\x64\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x31\x21\x30\x1F\x06\x03\x55\x04\x03\x13\x18\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x76\x61\x6C\x69\x63\x65\x72\x74\x2E\x63\x6F\x6D\x2F\x31\x20\x30\x1E\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x09\x01\x16\x11\x69\x6E\x66\x6F\x40\x76\x61\x6C\x69\x63\x65\x72\x74\x2E\x63\x6F\x6D\x30\x81\x9F\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x81\x8D\x00\x30\x81\x89\x02\x81\x81\x00\xD8\x59\x82\x7A\x89\xB8\x96\xBA\xA6\x2F\x68\x6F\x58\x2E\xA7\x54\x1C\x06\x6E\xF4\xEA\x8D\x48\xBC\x31\x94\x17\xF0\xF3\x4E\xBC\xB2\xB8\x35\x92\x76\xB0\xD0\xA5\xA5\x01\xD7\x00\x03\x12\x22\x19\x08\xF8\xFF\x11\x23\x9B\xCE\x07\xF5\xBF\x69\x1A\x26\xFE\x4E\xE9\xD1\x7F\x9D\x2C\x40\x1D\x59\x68\x6E\xA6\xF8\x58\xB0\x9D\x1A\x8F\xD3\x3F\xF1\xDC\x19\x06\x81\xA8\x0E\xE0\x3A\xDD\xC8\x53\x45\x09\x06\xE6\x0F\x70\xC3\xFA\x40\xA6\x0E\xE2\x56\x05\x0F\x18\x4D\xFC\x20\x82\xD1\x73\x55\x74\x8D\x76\x72\xA0\x1D\x9D\x1D\xC0\xDD\x3F\x71\x02\x03\x01\x00\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x81\x81\x00\x50\x68\x3D\x49\xF4\x2C\x1C\x06\x94\xDF\x95\x60\x7F\x96\x7B\x17\xFE\x4F\x71\xAD\x64\xC8\xDD\x77\xD2\xEF\x59\x55\xE8\x3F\xE8\x8E\x05\x2A\x21\xF2\x07\xD2\xB5\xA7\x52\xFE\x9C\xB1\xB6\xE2\x5B\x77\x17\x40\xEA\x72\xD6\x23\xCB\x28\x81\x32\xC3\x00\x79\x18\xEC\x59\x17\x89\xC9\xC6\x6A\x1E\x71\xC9\xFD\xB7\x74\xA5\x25\x45\x69\xC5\x48\xAB\x19\xE1\x45\x8A\x25\x6B\x19\xEE\xE5\xBB\x12\xF5\x7F\xF7\xA6\x8D\x51\xC3\xF0\x9D\x74\xB7\xA9\x3E\xA0\xA5\xFF\xB6\x49\x03\x13\xDA\x22\xCC\xED\x71\x82\x2B\x99\xCF\x3A\xB7\xF5\x2D\x72\xC8", @@ -38,8 +37,6 @@ redef root_certs += { ["CN=America 
Online Root Certification Authority 1,O=America Online Inc.,C=US"] = "\x30\x82\x03\xA4\x30\x82\x02\x8C\xA0\x03\x02\x01\x02\x02\x01\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x63\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x1C\x30\x1A\x06\x03\x55\x04\x0A\x13\x13\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x49\x6E\x63\x2E\x31\x36\x30\x34\x06\x03\x55\x04\x03\x13\x2D\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x52\x6F\x6F\x74\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x31\x30\x1E\x17\x0D\x30\x32\x30\x35\x32\x38\x30\x36\x30\x30\x30\x30\x5A\x17\x0D\x33\x37\x31\x31\x31\x39\x32\x30\x34\x33\x30\x30\x5A\x30\x63\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x1C\x30\x1A\x06\x03\x55\x04\x0A\x13\x13\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x49\x6E\x63\x2E\x31\x36\x30\x34\x06\x03\x55\x04\x03\x13\x2D\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x52\x6F\x6F\x74\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x31\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xA8\x2F\xE8\xA4\x69\x06\x03\x47\xC3\xE9\x2A\x98\xFF\x19\xA2\x70\x9A\xC6\x50\xB2\x7E\xA5\xDF\x68\x4D\x1B\x7C\x0F\xB6\x97\x68\x7D\x2D\xA6\x8B\x97\xE9\x64\x86\xC9\xA3\xEF\xA0\x86\xBF\x60\x65\x9C\x4B\x54\x88\xC2\x48\xC5\x4A\x39\xBF\x14\xE3\x59\x55\xE5\x19\xB4\x74\xC8\xB4\x05\x39\x5C\x16\xA5\xE2\x95\x05\xE0\x12\xAE\x59\x8B\xA2\x33\x68\x58\x1C\xA6\xD4\x15\xB7\xD8\x9F\xD7\xDC\x71\xAB\x7E\x9A\xBF\x9B\x8E\x33\x0F\x22\xFD\x1F\x2E\xE7\x07\x36\xEF\x62\x39\xC5\xDD\xCB\xBA\x25\x14\x23\xDE\x0C\xC6\x3D\x3C\xCE\x82\x08\xE6\x66\x3E\xDA\x51\x3B\x16\x3A\xA3\x05\x7F\xA0\xDC\x87\xD5\x9C\xFC\x72\xA9\xA0\x7D\x78\xE4\xB7\x31\x55\x1E\x65\xBB\xD4\x61\xB0\x21\x60\xED\x10\x32\x72\xC5\x92\x25\x1E\xF8\x90\x4A\x18\x78\x47\xDF\x7E\x30\x37\x3E\x50\x1B\xDB\x1C\xD3\x6B\x9A\x86\x53\x07\xB0\xEF\xAC\x06\x78\xF8\x84\x99\xFE\x21\x8D\x4C\x80\xB6\x0C\x82\xF6\x66\x70\x79\x1A\xD3\x4F\xA3\xCF\xF1\xCF\x46\xB0\x4B\x0F\x3E\xDD\x88\x62\xB8\x8C\xA9\x09\x28\x3B\x7A\xC7\x97\xE1\x1E\xE5\xF4\x9F\xC0\xC0\xAE\x24\xA0\xC8\xA1\xD9\x0F\xD6\x7B\x26\x82\x69\x32\x3D\xA7\x02\x03\x01\x00\x01\xA3\x63\x30\x61\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x00\xAD\xD9\xA3\xF6\x79\xF6\x6E\x74\xA9\x7F\x33\x3D\x81\x17\xD7\x4C\xCF\x33\xDE\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\x00\xAD\xD9\xA3\xF6\x79\xF6\x6E\x74\xA9\x7F\x33\x3D\x81\x17\xD7\x4C\xCF\x33\xDE\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x86\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x7C\x8A\xD1\x1F\x18\x37\x82\xE0\xB8\xB0\xA3\xED\x56\x95\xC8\x62\x61\x9C\x05\xA2\xCD\xC2\x62\x26\x61\xCD\x10\x16\xD7\xCC\xB4\x65\x34\xD0\x11\x8A\xAD\xA8\xA9\x05\x66\xEF\x74\xF3\x6D\x5F\x9D\x99\xAF\xF6\x8B\xFB\xEB\x52\xB2\x05\x98\xA2\x6F\x2A\xC5\x54\xBD\x25\xBD\x5F\xAE\xC8\x86\xEA\x46\x2C\xC1\xB3\xBD\xC1\xE9\x49\x70\x18\x16\x97\x08\x13\x8C\x20\xE0\x1B\x2E\x3A\x47\xCB\x1E\xE4\x00\x30\x95\x5B\xF4\x45\xA3\xC0\x1A\xB0\x01\x4E\xAB\xBD\xC0\x23\x6E\x63\x3F\x80\x4A\xC5\x07\xED\xDC\xE2\x6F\xC7\xC1\x62\xF1\xE3\x72\xD6\x04\xC8\x74\x67\x0B\xFA\x88\xAB\xA1\x01\xC8\x6F\xF0\x14\xAF\xD2\x99\xCD\x51\x93\x7E\xED\x2E\x38\xC7\xBD\xCE\x46\x50\x3D\x72\xE3\x79\x25\x9D\x9B\x88\x2B\x10\x20\xDD\xA5\xB8\x32\x9F\x8D\xE0\x29\xDF\x21\x74\x86\x82\xDB\x2F\x82\x30\xC6\xC7\x35\x86\xB3\xF9\
x96\x5F\x46\xDB\x0C\x45\xFD\xF3\x50\xC3\x6F\xC6\xC3\x48\xAD\x46\xA6\xE1\x27\x47\x0A\x1D\x0E\x9B\xB6\xC2\x77\x7F\x63\xF2\xE0\x7D\x1A\xBE\xFC\xE0\xDF\xD7\xC7\xA7\x6C\xB0\xF9\xAE\xBA\x3C\xFD\x74\xB4\x11\xE8\x58\x0D\x80\xBC\xD3\xA8\x80\x3A\x99\xED\x75\xCC\x46\x7B", ["CN=America Online Root Certification Authority 2,O=America Online Inc.,C=US"] = "\x30\x82\x05\xA4\x30\x82\x03\x8C\xA0\x03\x02\x01\x02\x02\x01\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x63\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x1C\x30\x1A\x06\x03\x55\x04\x0A\x13\x13\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x49\x6E\x63\x2E\x31\x36\x30\x34\x06\x03\x55\x04\x03\x13\x2D\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x52\x6F\x6F\x74\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x32\x30\x1E\x17\x0D\x30\x32\x30\x35\x32\x38\x30\x36\x30\x30\x30\x30\x5A\x17\x0D\x33\x37\x30\x39\x32\x39\x31\x34\x30\x38\x30\x30\x5A\x30\x63\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x1C\x30\x1A\x06\x03\x55\x04\x0A\x13\x13\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x49\x6E\x63\x2E\x31\x36\x30\x34\x06\x03\x55\x04\x03\x13\x2D\x41\x6D\x65\x72\x69\x63\x61\x20\x4F\x6E\x6C\x69\x6E\x65\x20\x52\x6F\x6F\x74\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x32\x30\x82\x02\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x02\x0F\x00\x30\x82\x02\x0A\x02\x82\x02\x01\x00\xCC\x41\x45\x1D\xE9\x3D\x4D\x10\xF6\x8C\xB1\x41\xC9\xE0\x5E\xCB\x0D\xB7\xBF\x47\x73\xD3\xF0\x55\x4D\xDD\xC6\x0C\xFA\xB1\x66\x05\x6A\xCD\x78\xB4\xDC\x02\xDB\x4E\x81\xF3\xD7\xA7\x7C\x71\xBC\x75\x63\xA0\x5D\xE3\x07\x0C\x48\xEC\x25\xC4\x03\x20\xF4\xFF\x0E\x3B\x12\xFF\x9B\x8D\xE1\xC6\xD5\x1B\xB4\x6D\x22\xE3\xB1\xDB\x7F\x21\x64\xAF\x86\xBC\x57\x22\x2A\xD6\x47\x81\x57\x44\x82\x56\x53\xBD\x86\x14\x01\x0B\xFC\x7F\x74\xA4\x5A\xAE\xF1\xBA\x11\xB5\x9B\x58\x5A\x80\xB4\x37\x78\x09\x33\x7C\x32\x47\x03\x5C\xC4\xA5\x83\x48\xF4\x57\x56\x6E\x81\x36\x27\x18\x4F\xEC\x9B\x28\xC2\xD4\xB4\xD7\x7C\x0C\x3E\x0C\x2B\xDF\xCA\x04\xD7\xC6\x8E\xEA\x58\x4E\xA8\xA4\xA5\x18\x1C\x6C\x45\x98\xA3\x41\xD1\x2D\xD2\xC7\x6D\x8D\x19\xF1\xAD\x79\xB7\x81\x3F\xBD\x06\x82\x27\x2D\x10\x58\x05\xB5\x78\x05\xB9\x2F\xDB\x0C\x6B\x90\x90\x7E\x14\x59\x38\xBB\x94\x24\x13\xE5\xD1\x9D\x14\xDF\xD3\x82\x4D\x46\xF0\x80\x39\x52\x32\x0F\xE3\x84\xB2\x7A\x43\xF2\x5E\xDE\x5F\x3F\x1D\xDD\xE3\xB2\x1B\xA0\xA1\x2A\x23\x03\x6E\x2E\x01\x15\x87\x5C\xA6\x75\x75\xC7\x97\x61\xBE\xDE\x86\xDC\xD4\x48\xDB\xBD\x2A\xBF\x4A\x55\xDA\xE8\x7D\x50\xFB\xB4\x80\x17\xB8\x94\xBF\x01\x3D\xEA\xDA\xBA\x7C\xE0\x58\x67\x17\xB9\x58\xE0\x88\x86\x46\x67\x6C\x9D\x10\x47\x58\x32\xD0\x35\x7C\x79\x2A\x90\xA2\x5A\x10\x11\x23\x35\xAD\x2F\xCC\xE4\x4A\x5B\xA7\xC8\x27\xF2\x83\xDE\x5E\xBB\x5E\x77\xE7\xE8\xA5\x6E\x63\xC2\x0D\x5D\x61\xD0\x8C\xD2\x6C\x5A\x21\x0E\xCA\x28\xA3\xCE\x2A\xE9\x95\xC7\x48\xCF\x96\x6F\x1D\x92\x25\xC8\xC6\xC6\xC1\xC1\x0C\x05\xAC\x26\xC4\xD2\x75\xD2\xE1\x2A\x67\xC0\x3D\x5B\xA5\x9A\xEB\xCF\x7B\x1A\xA8\x9D\x14\x45\xE5\x0F\xA0\x9A\x65\xDE\x2F\x28\xBD\xCE\x6F\x94\x66\x83\x48\x29\xD8\xEA\x65\x8C\xAF\x93\xD9\x64\x9F\x55\x57\x26\xBF\x6F\xCB\x37\x31\x99\xA3\x60\xBB\x1C\xAD\x89\x34\x32\x62\xB8\x43\x21\x06\x72\x0C\xA1\x5C\x6D\x46\xC5\xFA\x29\xCF\x30\xDE\x89\xDC\x71\x5B\xDD\xB6\x37\x3E\xDF\x50\xF5\xB8\x07\x25\x26\xE5\xBC\xB5\xFE\x3C\x02\xB3\xB7\xF8\xBE\x43\xC1\x87\x11\x94\x9E\x23\x6C\x17\x8A\xB8\x8A\x27\x0C\x54\x47\xF0\xA9\xB3\xC0\x80\x8C\xA0\x27\xEB\x1D\x19\xE3\x07\x8E\x77\x70\x
CA\x2B\xF4\x7D\x76\xE0\x78\x67\x02\x03\x01\x00\x01\xA3\x63\x30\x61\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x4D\x45\xC1\x68\x38\xBB\x73\xA9\x69\xA1\x20\xE7\xED\xF5\x22\xA1\x23\x14\xD7\x9E\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\x4D\x45\xC1\x68\x38\xBB\x73\xA9\x69\xA1\x20\xE7\xED\xF5\x22\xA1\x23\x14\xD7\x9E\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x86\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x02\x01\x00\x67\x6B\x06\xB9\x5F\x45\x3B\x2A\x4B\x33\xB3\xE6\x1B\x6B\x59\x4E\x22\xCC\xB9\xB7\xA4\x25\xC9\xA7\xC4\xF0\x54\x96\x0B\x64\xF3\xB1\x58\x4F\x5E\x51\xFC\xB2\x97\x7B\x27\x65\xC2\xE5\xCA\xE7\x0D\x0C\x25\x7B\x62\xE3\xFA\x9F\xB4\x87\xB7\x45\x46\xAF\x83\xA5\x97\x48\x8C\xA5\xBD\xF1\x16\x2B\x9B\x76\x2C\x7A\x35\x60\x6C\x11\x80\x97\xCC\xA9\x92\x52\xE6\x2B\xE6\x69\xED\xA9\xF8\x36\x2D\x2C\x77\xBF\x61\x48\xD1\x63\x0B\xB9\x5B\x52\xED\x18\xB0\x43\x42\x22\xA6\xB1\x77\xAE\xDE\x69\xC5\xCD\xC7\x1C\xA1\xB1\xA5\x1C\x10\xFB\x18\xBE\x1A\x70\xDD\xC1\x92\x4B\xBE\x29\x5A\x9D\x3F\x35\xBE\xE5\x7D\x51\xF8\x55\xE0\x25\x75\x23\x87\x1E\x5C\xDC\xBA\x9D\xB0\xAC\xB3\x69\xDB\x17\x83\xC9\xF7\xDE\x0C\xBC\x08\xDC\x91\x9E\xA8\xD0\xD7\x15\x37\x73\xA5\x35\xB8\xFC\x7E\xC5\x44\x40\x06\xC3\xEB\xF8\x22\x80\x5C\x47\xCE\x02\xE3\x11\x9F\x44\xFF\xFD\x9A\x32\xCC\x7D\x64\x51\x0E\xEB\x57\x26\x76\x3A\xE3\x1E\x22\x3C\xC2\xA6\x36\xDD\x19\xEF\xA7\xFC\x12\xF3\x26\xC0\x59\x31\x85\x4C\x9C\xD8\xCF\xDF\xA4\xCC\xCC\x29\x93\xFF\x94\x6D\x76\x5C\x13\x08\x97\xF2\xED\xA5\x0B\x4D\xDD\xE8\xC9\x68\x0E\x66\xD3\x00\x0E\x33\x12\x5B\xBC\x95\xE5\x32\x90\xA8\xB3\xC6\x6C\x83\xAD\x77\xEE\x8B\x7E\x7E\xB1\xA9\xAB\xD3\xE1\xF1\xB6\xC0\xB1\xEA\x88\xC0\xE7\xD3\x90\xE9\x28\x92\x94\x7B\x68\x7B\x97\x2A\x0A\x67\x2D\x85\x02\x38\x10\xE4\x03\x61\xD4\xDA\x25\x36\xC7\x08\x58\x2D\xA1\xA7\x51\xAF\x30\x0A\x49\xF5\xA6\x69\x87\x07\x2D\x44\x46\x76\x8E\x2A\xE5\x9A\x3B\xD7\x18\xA2\xFC\x9C\x38\x10\xCC\xC6\x3B\xD2\xB5\x17\x3A\x6F\xFD\xAE\x25\xBD\xF5\x72\x59\x64\xB1\x74\x2A\x38\x5F\x18\x4C\xDF\xCF\x71\x04\x5A\x36\xD4\xBF\x2F\x99\x9C\xE8\xD9\xBA\xB1\x95\xE6\x02\x4B\x21\xA1\x5B\xD5\xC1\x4F\x8F\xAE\x69\x6D\x53\xDB\x01\x93\xB5\x5C\x1E\x18\xDD\x64\x5A\xCA\x18\x28\x3E\x63\x04\x11\xFD\x1C\x8D\x00\x0F\xB8\x37\xDF\x67\x8A\x9D\x66\xA9\x02\x6A\x91\xFF\x13\xCA\x2F\x5D\x83\xBC\x87\x93\x6C\xDC\x24\x51\x16\x04\x25\x66\xFA\xB3\xD9\xC2\xBA\x29\xBE\x9A\x48\x38\x82\x99\xF4\xBF\x3B\x4A\x31\x19\xF9\xBF\x8E\x21\x33\x14\xCA\x4F\x54\x5F\xFB\xCE\xFB\x8F\x71\x7F\xFD\x5E\x19\xA0\x0F\x4B\x91\xB8\xC4\x54\xBC\x06\xB0\x45\x8F\x26\x91\xA2\x8E\xFE\xA9", ["CN=Visa eCommerce Root,OU=Visa International Service Association,O=VISA,C=US"] = 
"\x30\x82\x03\xA2\x30\x82\x02\x8A\xA0\x03\x02\x01\x02\x02\x10\x13\x86\x35\x4D\x1D\x3F\x06\xF2\xC1\xF9\x65\x05\xD5\x90\x1C\x62\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x6B\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0D\x30\x0B\x06\x03\x55\x04\x0A\x13\x04\x56\x49\x53\x41\x31\x2F\x30\x2D\x06\x03\x55\x04\x0B\x13\x26\x56\x69\x73\x61\x20\x49\x6E\x74\x65\x72\x6E\x61\x74\x69\x6F\x6E\x61\x6C\x20\x53\x65\x72\x76\x69\x63\x65\x20\x41\x73\x73\x6F\x63\x69\x61\x74\x69\x6F\x6E\x31\x1C\x30\x1A\x06\x03\x55\x04\x03\x13\x13\x56\x69\x73\x61\x20\x65\x43\x6F\x6D\x6D\x65\x72\x63\x65\x20\x52\x6F\x6F\x74\x30\x1E\x17\x0D\x30\x32\x30\x36\x32\x36\x30\x32\x31\x38\x33\x36\x5A\x17\x0D\x32\x32\x30\x36\x32\x34\x30\x30\x31\x36\x31\x32\x5A\x30\x6B\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0D\x30\x0B\x06\x03\x55\x04\x0A\x13\x04\x56\x49\x53\x41\x31\x2F\x30\x2D\x06\x03\x55\x04\x0B\x13\x26\x56\x69\x73\x61\x20\x49\x6E\x74\x65\x72\x6E\x61\x74\x69\x6F\x6E\x61\x6C\x20\x53\x65\x72\x76\x69\x63\x65\x20\x41\x73\x73\x6F\x63\x69\x61\x74\x69\x6F\x6E\x31\x1C\x30\x1A\x06\x03\x55\x04\x03\x13\x13\x56\x69\x73\x61\x20\x65\x43\x6F\x6D\x6D\x65\x72\x63\x65\x20\x52\x6F\x6F\x74\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xAF\x57\xDE\x56\x1E\x6E\xA1\xDA\x60\xB1\x94\x27\xCB\x17\xDB\x07\x3F\x80\x85\x4F\xC8\x9C\xB6\xD0\xF4\x6F\x4F\xCF\x99\xD8\xE1\xDB\xC2\x48\x5C\x3A\xAC\x39\x33\xC7\x1F\x6A\x8B\x26\x3D\x2B\x35\xF5\x48\xB1\x91\xC1\x02\x4E\x04\x96\x91\x7B\xB0\x33\xF0\xB1\x14\x4E\x11\x6F\xB5\x40\xAF\x1B\x45\xA5\x4A\xEF\x7E\xB6\xAC\xF2\xA0\x1F\x58\x3F\x12\x46\x60\x3C\x8D\xA1\xE0\x7D\xCF\x57\x3E\x33\x1E\xFB\x47\xF1\xAA\x15\x97\x07\x55\x66\xA5\xB5\x2D\x2E\xD8\x80\x59\xB2\xA7\x0D\xB7\x46\xEC\x21\x63\xFF\x35\xAB\xA5\x02\xCF\x2A\xF4\x4C\xFE\x7B\xF5\x94\x5D\x84\x4D\xA8\xF2\x60\x8F\xDB\x0E\x25\x3C\x9F\x73\x71\xCF\x94\xDF\x4A\xEA\xDB\xDF\x72\x38\x8C\xF3\x96\xBD\xF1\x17\xBC\xD2\xBA\x3B\x45\x5A\xC6\xA7\xF6\xC6\x17\x8B\x01\x9D\xFC\x19\xA8\x2A\x83\x16\xB8\x3A\x48\xFE\x4E\x3E\xA0\xAB\x06\x19\xE9\x53\xF3\x80\x13\x07\xED\x2D\xBF\x3F\x0A\x3C\x55\x20\x39\x2C\x2C\x00\x69\x74\x95\x4A\xBC\x20\xB2\xA9\x79\xE5\x18\x89\x91\xA8\xDC\x1C\x4D\xEF\xBB\x7E\x37\x0B\x5D\xFE\x39\xA5\x88\x52\x8C\x00\x6C\xEC\x18\x7C\x41\xBD\xF6\x8B\x75\x77\xBA\x60\x9D\x84\xE7\xFE\x2D\x02\x03\x01\x00\x01\xA3\x42\x30\x40\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x15\x38\x83\x0F\x3F\x2C\x3F\x70\x33\x1E\xCD\x46\xFE\x07\x8C\x20\xE0\xD7\xC3\xB7\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x5F\xF1\x41\x7D\x7C\x5C\x08\xB9\x2B\xE0\xD5\x92\x47\xFA\x67\x5C\xA5\x13\xC3\x03\x21\x9B\x2B\x4C\x89\x46\xCF\x59\x4D\xC9\xFE\xA5\x40\xB6\x63\xCD\xDD\x71\x28\x95\x67\x11\xCC\x24\xAC\xD3\x44\x6C\x71\xAE\x01\x20\x6B\x03\xA2\x8F\x18\xB7\x29\x3A\x7D\xE5\x16\x60\x53\x78\x3C\xC0\xAF\x15\x83\xF7\x8F\x52\x33\x24\xBD\x64\x93\x97\xEE\x8B\xF7\xDB\x18\xA8\x6D\x71\xB3\xF7\x2C\x17\xD0\x74\x25\x69\xF7\xFE\x6B\x3C\x94\xBE\x4D\x4B\x41\x8C\x4E\xE2\x73\xD0\xE3\x90\x22\x73\x43\xCD\xF3\xEF\xEA\x73\xCE\x45\x8A\xB0\xA6\x49\xFF\x4C\x7D\x9D\x71\x88\xC4\x76\x1D\x90\x5B\x1D\xEE\xFD\xCC\xF7\xEE\xFD\x60\xA5\xB1\x7A\x16\x71\xD1\x16\xD0\x7C\x12\x3C\x6C\x69\x97\xDB\xAE\x5F\x39\x9A\x70\x2F\x05\x3C\x19\x46\x04\x99\x20\x36\xD0\x60\x6E\x61\x06\xBB\x16\x42\x8C\x70\xF7\x30\xFB\xE0\xDB\x66\xA3\x00\x01\xBD\xE6\x2C\xDA\x91\x5F\xA0\x46\x8B\x4D\x6A\x9C\x3D\x3D\xDD\x05\x46\x
FE\x76\xBF\xA0\x0A\x3C\xE4\x00\xE6\x27\xB7\xFF\x84\x2D\xDE\xBA\x22\x27\x96\x10\x71\xEB\x22\xED\xDF\xDF\x33\x9C\xCF\xE3\xAD\xAE\x8E\xD4\x8E\xE6\x4F\x51\xAF\x16\x92\xE0\x5C\xF6\x07\x0F", - ["emailAddress=certificate@trustcenter.de,OU=TC TrustCenter Class 2 CA,O=TC TrustCenter for Security in Data Networks GmbH,L=Hamburg,ST=Hamburg,C=DE"] = "\x30\x82\x03\x5C\x30\x82\x02\xC5\xA0\x03\x02\x01\x02\x02\x02\x03\xEA\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x04\x05\x00\x30\x81\xBC\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x45\x31\x10\x30\x0E\x06\x03\x55\x04\x08\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x3A\x30\x38\x06\x03\x55\x04\x0A\x13\x31\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x66\x6F\x72\x20\x53\x65\x63\x75\x72\x69\x74\x79\x20\x69\x6E\x20\x44\x61\x74\x61\x20\x4E\x65\x74\x77\x6F\x72\x6B\x73\x20\x47\x6D\x62\x48\x31\x22\x30\x20\x06\x03\x55\x04\x0B\x13\x19\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x43\x6C\x61\x73\x73\x20\x32\x20\x43\x41\x31\x29\x30\x27\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x09\x01\x16\x1A\x63\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x40\x74\x72\x75\x73\x74\x63\x65\x6E\x74\x65\x72\x2E\x64\x65\x30\x1E\x17\x0D\x39\x38\x30\x33\x30\x39\x31\x31\x35\x39\x35\x39\x5A\x17\x0D\x31\x31\x30\x31\x30\x31\x31\x31\x35\x39\x35\x39\x5A\x30\x81\xBC\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x45\x31\x10\x30\x0E\x06\x03\x55\x04\x08\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x3A\x30\x38\x06\x03\x55\x04\x0A\x13\x31\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x66\x6F\x72\x20\x53\x65\x63\x75\x72\x69\x74\x79\x20\x69\x6E\x20\x44\x61\x74\x61\x20\x4E\x65\x74\x77\x6F\x72\x6B\x73\x20\x47\x6D\x62\x48\x31\x22\x30\x20\x06\x03\x55\x04\x0B\x13\x19\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x43\x6C\x61\x73\x73\x20\x32\x20\x43\x41\x31\x29\x30\x27\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x09\x01\x16\x1A\x63\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x40\x74\x72\x75\x73\x74\x63\x65\x6E\x74\x65\x72\x2E\x64\x65\x30\x81\x9F\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x81\x8D\x00\x30\x81\x89\x02\x81\x81\x00\xDA\x38\xE8\xED\x32\x00\x29\x71\x83\x01\x0D\xBF\x8C\x01\xDC\xDA\xC6\xAD\x39\xA4\xA9\x8A\x2F\xD5\x8B\x5C\x68\x5F\x50\xC6\x62\xF5\x66\xBD\xCA\x91\x22\xEC\xAA\x1D\x51\xD7\x3D\xB3\x51\xB2\x83\x4E\x5D\xCB\x49\xB0\xF0\x4C\x55\xE5\x6B\x2D\xC7\x85\x0B\x30\x1C\x92\x4E\x82\xD4\xCA\x02\xED\xF7\x6F\xBE\xDC\xE0\xE3\x14\xB8\x05\x53\xF2\x9A\xF4\x56\x8B\x5A\x9E\x85\x93\xD1\xB4\x82\x56\xAE\x4D\xBB\xA8\x4B\x57\x16\xBC\xFE\xF8\x58\x9E\xF8\x29\x8D\xB0\x7B\xCD\x78\xC9\x4F\xAC\x8B\x67\x0C\xF1\x9C\xFB\xFC\x57\x9B\x57\x5C\x4F\x0D\x02\x03\x01\x00\x01\xA3\x6B\x30\x69\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x86\x30\x33\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x08\x04\x26\x16\x24\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x74\x72\x75\x73\x74\x63\x65\x6E\x74\x65\x72\x2E\x64\x65\x2F\x67\x75\x69\x64\x65\x6C\x69\x6E\x65\x73\x30\x11\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x01\x04\x04\x03\x02\x00\x07\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x04\x05\x00\x03\x81\x81\x00\x84\x52\xFB\x28\xDF\xFF\x1F\x75\x01\xBC\x01\xBE\x04\x56\x97\x6A\x74\x42\x24\x31\x83\xF9\x46\xB1\x06\x8A\x89\xCF\x96\x2C\x33\xBF\x8C\xB5\x5F\x7A\x72\xA1\x85\x06\xCE\x86\xF8\x05\x8E\xE8\xF9\x25\xCA\xDA\x83\x8C\x06\xAC\xEB\x36\x6D\x85\x91\x34\x04\x36\xF4\x42\xF0\xF8\x79\x
2E\x0A\x48\x5C\xAB\xCC\x51\x4F\x78\x76\xA0\xD9\xAC\x19\xBD\x2A\xD1\x69\x04\x28\x91\xCA\x36\x10\x27\x80\x57\x5B\xD2\x5C\xF5\xC2\x5B\xAB\x64\x81\x63\x74\x51\xF4\x97\xBF\xCD\x12\x28\xF7\x4D\x66\x7F\xA7\xF0\x1C\x01\x26\x78\xB2\x66\x47\x70\x51\x64", - ["emailAddress=certificate@trustcenter.de,OU=TC TrustCenter Class 3 CA,O=TC TrustCenter for Security in Data Networks GmbH,L=Hamburg,ST=Hamburg,C=DE"] = "\x30\x82\x03\x5C\x30\x82\x02\xC5\xA0\x03\x02\x01\x02\x02\x02\x03\xEB\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x04\x05\x00\x30\x81\xBC\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x45\x31\x10\x30\x0E\x06\x03\x55\x04\x08\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x3A\x30\x38\x06\x03\x55\x04\x0A\x13\x31\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x66\x6F\x72\x20\x53\x65\x63\x75\x72\x69\x74\x79\x20\x69\x6E\x20\x44\x61\x74\x61\x20\x4E\x65\x74\x77\x6F\x72\x6B\x73\x20\x47\x6D\x62\x48\x31\x22\x30\x20\x06\x03\x55\x04\x0B\x13\x19\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x43\x6C\x61\x73\x73\x20\x33\x20\x43\x41\x31\x29\x30\x27\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x09\x01\x16\x1A\x63\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x40\x74\x72\x75\x73\x74\x63\x65\x6E\x74\x65\x72\x2E\x64\x65\x30\x1E\x17\x0D\x39\x38\x30\x33\x30\x39\x31\x31\x35\x39\x35\x39\x5A\x17\x0D\x31\x31\x30\x31\x30\x31\x31\x31\x35\x39\x35\x39\x5A\x30\x81\xBC\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x45\x31\x10\x30\x0E\x06\x03\x55\x04\x08\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x13\x07\x48\x61\x6D\x62\x75\x72\x67\x31\x3A\x30\x38\x06\x03\x55\x04\x0A\x13\x31\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x66\x6F\x72\x20\x53\x65\x63\x75\x72\x69\x74\x79\x20\x69\x6E\x20\x44\x61\x74\x61\x20\x4E\x65\x74\x77\x6F\x72\x6B\x73\x20\x47\x6D\x62\x48\x31\x22\x30\x20\x06\x03\x55\x04\x0B\x13\x19\x54\x43\x20\x54\x72\x75\x73\x74\x43\x65\x6E\x74\x65\x72\x20\x43\x6C\x61\x73\x73\x20\x33\x20\x43\x41\x31\x29\x30\x27\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x09\x01\x16\x1A\x63\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x40\x74\x72\x75\x73\x74\x63\x65\x6E\x74\x65\x72\x2E\x64\x65\x30\x81\x9F\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x81\x8D\x00\x30\x81\x89\x02\x81\x81\x00\xB6\xB4\xC1\x35\x05\x2E\x0D\x8D\xEC\xA0\x40\x6A\x1C\x0E\x27\xA6\x50\x92\x6B\x50\x1B\x07\xDE\x2E\xE7\x76\xCC\xE0\xDA\xFC\x84\xA8\x5E\x8C\x63\x6A\x2B\x4D\xD9\x4E\x02\x76\x11\xC1\x0B\xF2\x8D\x79\xCA\x00\xB6\xF1\xB0\x0E\xD7\xFB\xA4\x17\x3D\xAF\xAB\x69\x7A\x96\x27\xBF\xAF\x33\xA1\x9A\x2A\x59\xAA\xC4\xB5\x37\x08\xF2\x12\xA5\x31\xB6\x43\xF5\x32\x96\x71\x28\x28\xAB\x8D\x28\x86\xDF\xBB\xEE\xE3\x0C\x7D\x30\xD6\xC3\x52\xAB\x8F\x5D\x27\x9C\x6B\xC0\xA3\xE7\x05\x6B\x57\x49\x44\xB3\x6E\xEA\x64\xCF\xD2\x8E\x7A\x50\x77\x77\x02\x03\x01\x00\x01\xA3\x6B\x30\x69\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x86\x30\x33\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x08\x04\x26\x16\x24\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x74\x72\x75\x73\x74\x63\x65\x6E\x74\x65\x72\x2E\x64\x65\x2F\x67\x75\x69\x64\x65\x6C\x69\x6E\x65\x73\x30\x11\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x01\x04\x04\x03\x02\x00\x07\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x04\x05\x00\x03\x81\x81\x00\x16\x3D\xC6\xCD\xC1\xBB\x85\x71\x85\x46\x9F\x3E\x20\x8F\x51\x28\x99\xEC\x2D\x45\x21\x63\x23\x5B\x04\xBB\x4C\x90\xB8\x88\x92\x04\x4D\xBD\x7D\x01\xA3\x3F\xF6\xEC\xCE\xF1\xDE\xFE\x7D\xE5\xE1\x3E\xBB\xC6\xAB\x5E\x
0B\xDD\x3D\x96\xC4\xCB\xA9\xD4\xF9\x26\xE6\x06\x4E\x9E\x0C\xA5\x7A\xBA\x6E\xC3\x7C\x82\x19\xD1\xC7\xB1\xB1\xC3\xDB\x0D\x8E\x9B\x40\x7C\x37\x0B\xF1\x5D\xE8\xFD\x1F\x90\x88\xA5\x0E\x4E\x37\x64\x21\xA8\x4E\x8D\xB4\x9F\xF1\xDE\x48\xAD\xD5\x56\x18\x52\x29\x8B\x47\x34\x12\x09\xD4\xBB\x92\x35\xEF\x0F\xDB\x34", ["CN=Certum CA,O=Unizeto Sp. z o.o.,C=PL"] = "\x30\x82\x03\x0C\x30\x82\x01\xF4\xA0\x03\x02\x01\x02\x02\x03\x01\x00\x20\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x3E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x50\x4C\x31\x1B\x30\x19\x06\x03\x55\x04\x0A\x13\x12\x55\x6E\x69\x7A\x65\x74\x6F\x20\x53\x70\x2E\x20\x7A\x20\x6F\x2E\x6F\x2E\x31\x12\x30\x10\x06\x03\x55\x04\x03\x13\x09\x43\x65\x72\x74\x75\x6D\x20\x43\x41\x30\x1E\x17\x0D\x30\x32\x30\x36\x31\x31\x31\x30\x34\x36\x33\x39\x5A\x17\x0D\x32\x37\x30\x36\x31\x31\x31\x30\x34\x36\x33\x39\x5A\x30\x3E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x50\x4C\x31\x1B\x30\x19\x06\x03\x55\x04\x0A\x13\x12\x55\x6E\x69\x7A\x65\x74\x6F\x20\x53\x70\x2E\x20\x7A\x20\x6F\x2E\x6F\x2E\x31\x12\x30\x10\x06\x03\x55\x04\x03\x13\x09\x43\x65\x72\x74\x75\x6D\x20\x43\x41\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xCE\xB1\xC1\x2E\xD3\x4F\x7C\xCD\x25\xCE\x18\x3E\x4F\xC4\x8C\x6F\x80\x6A\x73\xC8\x5B\x51\xF8\x9B\xD2\xDC\xBB\x00\x5C\xB1\xA0\xFC\x75\x03\xEE\x81\xF0\x88\xEE\x23\x52\xE9\xE6\x15\x33\x8D\xAC\x2D\x09\xC5\x76\xF9\x2B\x39\x80\x89\xE4\x97\x4B\x90\xA5\xA8\x78\xF8\x73\x43\x7B\xA4\x61\xB0\xD8\x58\xCC\xE1\x6C\x66\x7E\x9C\xF3\x09\x5E\x55\x63\x84\xD5\xA8\xEF\xF3\xB1\x2E\x30\x68\xB3\xC4\x3C\xD8\xAC\x6E\x8D\x99\x5A\x90\x4E\x34\xDC\x36\x9A\x8F\x81\x88\x50\xB7\x6D\x96\x42\x09\xF3\xD7\x95\x83\x0D\x41\x4B\xB0\x6A\x6B\xF8\xFC\x0F\x7E\x62\x9F\x67\xC4\xED\x26\x5F\x10\x26\x0F\x08\x4F\xF0\xA4\x57\x28\xCE\x8F\xB8\xED\x45\xF6\x6E\xEE\x25\x5D\xAA\x6E\x39\xBE\xE4\x93\x2F\xD9\x47\xA0\x72\xEB\xFA\xA6\x5B\xAF\xCA\x53\x3F\xE2\x0E\xC6\x96\x56\x11\x6E\xF7\xE9\x66\xA9\x26\xD8\x7F\x95\x53\xED\x0A\x85\x88\xBA\x4F\x29\xA5\x42\x8C\x5E\xB6\xFC\x85\x20\x00\xAA\x68\x0B\xA1\x1A\x85\x01\x9C\xC4\x46\x63\x82\x88\xB6\x22\xB1\xEE\xFE\xAA\x46\x59\x7E\xCF\x35\x2C\xD5\xB6\xDA\x5D\xF7\x48\x33\x14\x54\xB6\xEB\xD9\x6F\xCE\xCD\x88\xD6\xAB\x1B\xDA\x96\x3B\x1D\x59\x02\x03\x01\x00\x01\xA3\x13\x30\x11\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\xB8\x8D\xCE\xEF\xE7\x14\xBA\xCF\xEE\xB0\x44\x92\x6C\xB4\x39\x3E\xA2\x84\x6E\xAD\xB8\x21\x77\xD2\xD4\x77\x82\x87\xE6\x20\x41\x81\xEE\xE2\xF8\x11\xB7\x63\xD1\x17\x37\xBE\x19\x76\x24\x1C\x04\x1A\x4C\xEB\x3D\xAA\x67\x6F\x2D\xD4\xCD\xFE\x65\x31\x70\xC5\x1B\xA6\x02\x0A\xBA\x60\x7B\x6D\x58\xC2\x9A\x49\xFE\x63\x32\x0B\x6B\xE3\x3A\xC0\xAC\xAB\x3B\xB0\xE8\xD3\x09\x51\x8C\x10\x83\xC6\x34\xE0\xC5\x2B\xE0\x1A\xB6\x60\x14\x27\x6C\x32\x77\x8C\xBC\xB2\x72\x98\xCF\xCD\xCC\x3F\xB9\xC8\x24\x42\x14\xD6\x57\xFC\xE6\x26\x43\xA9\x1D\xE5\x80\x90\xCE\x03\x54\x28\x3E\xF7\x3F\xD3\xF8\x4D\xED\x6A\x0A\x3A\x93\x13\x9B\x3B\x14\x23\x13\x63\x9C\x3F\xD1\x87\x27\x79\xE5\x4C\x51\xE3\x01\xAD\x85\x5D\x1A\x3B\xB1\xD5\x73\x10\xA4\xD3\xF2\xBC\x6E\x64\xF5\x5A\x56\x90\xA8\xC7\x0E\x4C\x74\x0F\x2E\x71\x3B\xF7\xC8\x47\xF4\x69\x6F\x15\xF2\x11\x5E\x83\x1E\x9C\x7C\x52\xAE\xFD\x02\xDA\x12\xA8\x59\x67\x18\xDB\xBC\x70\xDD\x9B\xB1\x69\xED\x80\xCE\x89\x40\x48\x6A\x0E\x35\xCA\x29\x66\x15\x21\x94\x2C\xE8\x60\x2A\x9B\x85\x4A\x40\xF3\x6B\x8A\x24\xEC\x06\x16\x2C\x73", ["CN=AAA Certificate Services,O=Comodo CA 
Limited,L=Salford,ST=Greater Manchester,C=GB"] = "\x30\x82\x04\x32\x30\x82\x03\x1A\xA0\x03\x02\x01\x02\x02\x01\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x7B\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x42\x31\x1B\x30\x19\x06\x03\x55\x04\x08\x0C\x12\x47\x72\x65\x61\x74\x65\x72\x20\x4D\x61\x6E\x63\x68\x65\x73\x74\x65\x72\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x0C\x07\x53\x61\x6C\x66\x6F\x72\x64\x31\x1A\x30\x18\x06\x03\x55\x04\x0A\x0C\x11\x43\x6F\x6D\x6F\x64\x6F\x20\x43\x41\x20\x4C\x69\x6D\x69\x74\x65\x64\x31\x21\x30\x1F\x06\x03\x55\x04\x03\x0C\x18\x41\x41\x41\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x53\x65\x72\x76\x69\x63\x65\x73\x30\x1E\x17\x0D\x30\x34\x30\x31\x30\x31\x30\x30\x30\x30\x30\x30\x5A\x17\x0D\x32\x38\x31\x32\x33\x31\x32\x33\x35\x39\x35\x39\x5A\x30\x7B\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x42\x31\x1B\x30\x19\x06\x03\x55\x04\x08\x0C\x12\x47\x72\x65\x61\x74\x65\x72\x20\x4D\x61\x6E\x63\x68\x65\x73\x74\x65\x72\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x0C\x07\x53\x61\x6C\x66\x6F\x72\x64\x31\x1A\x30\x18\x06\x03\x55\x04\x0A\x0C\x11\x43\x6F\x6D\x6F\x64\x6F\x20\x43\x41\x20\x4C\x69\x6D\x69\x74\x65\x64\x31\x21\x30\x1F\x06\x03\x55\x04\x03\x0C\x18\x41\x41\x41\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x53\x65\x72\x76\x69\x63\x65\x73\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xBE\x40\x9D\xF4\x6E\xE1\xEA\x76\x87\x1C\x4D\x45\x44\x8E\xBE\x46\xC8\x83\x06\x9D\xC1\x2A\xFE\x18\x1F\x8E\xE4\x02\xFA\xF3\xAB\x5D\x50\x8A\x16\x31\x0B\x9A\x06\xD0\xC5\x70\x22\xCD\x49\x2D\x54\x63\xCC\xB6\x6E\x68\x46\x0B\x53\xEA\xCB\x4C\x24\xC0\xBC\x72\x4E\xEA\xF1\x15\xAE\xF4\x54\x9A\x12\x0A\xC3\x7A\xB2\x33\x60\xE2\xDA\x89\x55\xF3\x22\x58\xF3\xDE\xDC\xCF\xEF\x83\x86\xA2\x8C\x94\x4F\x9F\x68\xF2\x98\x90\x46\x84\x27\xC7\x76\xBF\xE3\xCC\x35\x2C\x8B\x5E\x07\x64\x65\x82\xC0\x48\xB0\xA8\x91\xF9\x61\x9F\x76\x20\x50\xA8\x91\xC7\x66\xB5\xEB\x78\x62\x03\x56\xF0\x8A\x1A\x13\xEA\x31\xA3\x1E\xA0\x99\xFD\x38\xF6\xF6\x27\x32\x58\x6F\x07\xF5\x6B\xB8\xFB\x14\x2B\xAF\xB7\xAA\xCC\xD6\x63\x5F\x73\x8C\xDA\x05\x99\xA8\x38\xA8\xCB\x17\x78\x36\x51\xAC\xE9\x9E\xF4\x78\x3A\x8D\xCF\x0F\xD9\x42\xE2\x98\x0C\xAB\x2F\x9F\x0E\x01\xDE\xEF\x9F\x99\x49\xF1\x2D\xDF\xAC\x74\x4D\x1B\x98\xB5\x47\xC5\xE5\x29\xD1\xF9\x90\x18\xC7\x62\x9C\xBE\x83\xC7\x26\x7B\x3E\x8A\x25\xC7\xC0\xDD\x9D\xE6\x35\x68\x10\x20\x9D\x8F\xD8\xDE\xD2\xC3\x84\x9C\x0D\x5E\xE8\x2F\xC9\x02\x03\x01\x00\x01\xA3\x81\xC0\x30\x81\xBD\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xA0\x11\x0A\x23\x3E\x96\xF1\x07\xEC\xE2\xAF\x29\xEF\x82\xA5\x7F\xD0\x30\xA4\xB4\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x7B\x06\x03\x55\x1D\x1F\x04\x74\x30\x72\x30\x38\xA0\x36\xA0\x34\x86\x32\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x63\x6F\x6D\x6F\x64\x6F\x63\x61\x2E\x63\x6F\x6D\x2F\x41\x41\x41\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x53\x65\x72\x76\x69\x63\x65\x73\x2E\x63\x72\x6C\x30\x36\xA0\x34\xA0\x32\x86\x30\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x63\x6F\x6D\x6F\x64\x6F\x2E\x6E\x65\x74\x2F\x41\x41\x41\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x53\x65\x72\x76\x69\x63\x65\x73\x2E\x63\x72\x6C\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x08\x56\xFC\x02\xF0\x9B\xE8\xFF\xA4\xFA\xD6\x7B\xC6\x44\x80\xCE\x4F\xC4\xC5\xF6\x00\x58\xCC\xA6\xB6\xBC\x14\x49\x68\x04\x76\xE8\xE6\xEE\x5D\xEC\x02\x0F\x60\xD6\x8D\x50\x18\x4F\x26\x4E\x01\xE3\xE6\xB0\xA5\xEE\xBF\xBC\
x74\x54\x41\xBF\xFD\xFC\x12\xB8\xC7\x4F\x5A\xF4\x89\x60\x05\x7F\x60\xB7\x05\x4A\xF3\xF6\xF1\xC2\xBF\xC4\xB9\x74\x86\xB6\x2D\x7D\x6B\xCC\xD2\xF3\x46\xDD\x2F\xC6\xE0\x6A\xC3\xC3\x34\x03\x2C\x7D\x96\xDD\x5A\xC2\x0E\xA7\x0A\x99\xC1\x05\x8B\xAB\x0C\x2F\xF3\x5C\x3A\xCF\x6C\x37\x55\x09\x87\xDE\x53\x40\x6C\x58\xEF\xFC\xB6\xAB\x65\x6E\x04\xF6\x1B\xDC\x3C\xE0\x5A\x15\xC6\x9E\xD9\xF1\x59\x48\x30\x21\x65\x03\x6C\xEC\xE9\x21\x73\xEC\x9B\x03\xA1\xE0\x37\xAD\xA0\x15\x18\x8F\xFA\xBA\x02\xCE\xA7\x2C\xA9\x10\x13\x2C\xD4\xE5\x08\x26\xAB\x22\x97\x60\xF8\x90\x5E\x74\xD4\xA2\x9A\x53\xBD\xF2\xA9\x68\xE0\xA2\x6E\xC2\xD7\x6C\xB1\xA3\x0F\x9E\xBF\xEB\x68\xE7\x56\xF2\xAE\xF2\xE3\x2B\x38\x3A\x09\x81\xB5\x6B\x85\xD7\xBE\x2D\xED\x3F\x1A\xB7\xB2\x63\xE2\xF5\x62\x2C\x82\xD4\x6A\x00\x41\x50\xF1\x39\x83\x9F\x95\xE9\x36\x96\x98\x6E", ["CN=Secure Certificate Services,O=Comodo CA Limited,L=Salford,ST=Greater Manchester,C=GB"] = "\x30\x82\x04\x3F\x30\x82\x03\x27\xA0\x03\x02\x01\x02\x02\x01\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x7E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x42\x31\x1B\x30\x19\x06\x03\x55\x04\x08\x0C\x12\x47\x72\x65\x61\x74\x65\x72\x20\x4D\x61\x6E\x63\x68\x65\x73\x74\x65\x72\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x0C\x07\x53\x61\x6C\x66\x6F\x72\x64\x31\x1A\x30\x18\x06\x03\x55\x04\x0A\x0C\x11\x43\x6F\x6D\x6F\x64\x6F\x20\x43\x41\x20\x4C\x69\x6D\x69\x74\x65\x64\x31\x24\x30\x22\x06\x03\x55\x04\x03\x0C\x1B\x53\x65\x63\x75\x72\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x53\x65\x72\x76\x69\x63\x65\x73\x30\x1E\x17\x0D\x30\x34\x30\x31\x30\x31\x30\x30\x30\x30\x30\x30\x5A\x17\x0D\x32\x38\x31\x32\x33\x31\x32\x33\x35\x39\x35\x39\x5A\x30\x7E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x42\x31\x1B\x30\x19\x06\x03\x55\x04\x08\x0C\x12\x47\x72\x65\x61\x74\x65\x72\x20\x4D\x61\x6E\x63\x68\x65\x73\x74\x65\x72\x31\x10\x30\x0E\x06\x03\x55\x04\x07\x0C\x07\x53\x61\x6C\x66\x6F\x72\x64\x31\x1A\x30\x18\x06\x03\x55\x04\x0A\x0C\x11\x43\x6F\x6D\x6F\x64\x6F\x20\x43\x41\x20\x4C\x69\x6D\x69\x74\x65\x64\x31\x24\x30\x22\x06\x03\x55\x04\x03\x0C\x1B\x53\x65\x63\x75\x72\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x53\x65\x72\x76\x69\x63\x65\x73\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xC0\x71\x33\x82\x8A\xD0\x70\xEB\x73\x87\x82\x40\xD5\x1D\xE4\xCB\xC9\x0E\x42\x90\xF9\xDE\x34\xB9\xA1\xBA\x11\xF4\x25\x85\xF3\xCC\x72\x6D\xF2\x7B\x97\x6B\xB3\x07\xF1\x77\x24\x91\x5F\x25\x8F\xF6\x74\x3D\xE4\x80\xC2\xF8\x3C\x0D\xF3\xBF\x40\xEA\xF7\xC8\x52\xD1\x72\x6F\xEF\xC8\xAB\x41\xB8\x6E\x2E\x17\x2A\x95\x69\x0C\xCD\xD2\x1E\x94\x7B\x2D\x94\x1D\xAA\x75\xD7\xB3\x98\xCB\xAC\xBC\x64\x53\x40\xBC\x8F\xAC\xAC\x36\xCB\x5C\xAD\xBB\xDD\xE0\x94\x17\xEC\xD1\x5C\xD0\xBF\xEF\xA5\x95\xC9\x90\xC5\xB0\xAC\xFB\x1B\x43\xDF\x7A\x08\x5D\xB7\xB8\xF2\x40\x1B\x2B\x27\x9E\x50\xCE\x5E\x65\x82\x88\x8C\x5E\xD3\x4E\x0C\x7A\xEA\x08\x91\xB6\x36\xAA\x2B\x42\xFB\xEA\xC2\xA3\x39\xE5\xDB\x26\x38\xAD\x8B\x0A\xEE\x19\x63\xC7\x1C\x24\xDF\x03\x78\xDA\xE6\xEA\xC1\x47\x1A\x0B\x0B\x46\x09\xDD\x02\xFC\xDE\xCB\x87\x5F\xD7\x30\x63\x68\xA1\xAE\xDC\x32\xA1\xBA\xBE\xFE\x44\xAB\x68\xB6\xA5\x17\x15\xFD\xBD\xD5\xA7\xA7\x9A\xE4\x44\x33\xE9\x88\x8E\xFC\xED\x51\xEB\x93\x71\x4E\xAD\x01\xE7\x44\x8E\xAB\x2D\xCB\xA8\xFE\x01\x49\x48\xF0\xC0\xDD\xC7\x68\xD8\x92\xFE\x3D\x02\x03\x01\x00\x01\xA3\x81\xC7\x30\x81\xC4\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x3C\xD8\x93\x88\xC2\xC0\x82\x09\xCC\x01\x99\x06\x93\x20\xE9\x9E\x70\x09\x63\x4F\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x
03\x02\x01\x06\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x81\x81\x06\x03\x55\x1D\x1F\x04\x7A\x30\x78\x30\x3B\xA0\x39\xA0\x37\x86\x35\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x63\x6F\x6D\x6F\x64\x6F\x63\x61\x2E\x63\x6F\x6D\x2F\x53\x65\x63\x75\x72\x65\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x53\x65\x72\x76\x69\x63\x65\x73\x2E\x63\x72\x6C\x30\x39\xA0\x37\xA0\x35\x86\x33\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x63\x6F\x6D\x6F\x64\x6F\x2E\x6E\x65\x74\x2F\x53\x65\x63\x75\x72\x65\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x53\x65\x72\x76\x69\x63\x65\x73\x2E\x63\x72\x6C\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x87\x01\x6D\x23\x1D\x7E\x5B\x17\x7D\xC1\x61\x32\xCF\x8F\xE7\xF3\x8A\x94\x59\x66\xE0\x9E\x28\xA8\x5E\xD3\xB7\xF4\x34\xE6\xAA\x39\xB2\x97\x16\xC5\x82\x6F\x32\xA4\xE9\x8C\xE7\xAF\xFD\xEF\xC2\xE8\xB9\x4B\xAA\xA3\xF4\xE6\xDA\x8D\x65\x21\xFB\xBA\x80\xEB\x26\x28\x85\x1A\xFE\x39\x8C\xDE\x5B\x04\x04\xB4\x54\xF9\xA3\x67\x9E\x41\xFA\x09\x52\xCC\x05\x48\xA8\xC9\x3F\x21\x04\x1E\xCE\x48\x6B\xFC\x85\xE8\xC2\x7B\xAF\x7F\xB7\xCC\xF8\x5F\x3A\xFD\x35\xC6\x0D\xEF\x97\xDC\x4C\xAB\x11\xE1\x6B\xCB\x31\xD1\x6C\xFB\x48\x80\xAB\xDC\x9C\x37\xB8\x21\x14\x4B\x0D\x71\x3D\xEC\x83\x33\x6E\xD1\x6E\x32\x16\xEC\x98\xC7\x16\x8B\x59\xA6\x34\xAB\x05\x57\x2D\x93\xF7\xAA\x13\xCB\xD2\x13\xE2\xB7\x2E\x3B\xCD\x6B\x50\x17\x09\x68\x3E\xB5\x26\x57\xEE\xB6\xE0\xB6\xDD\xB9\x29\x80\x79\x7D\x8F\xA3\xF0\xA4\x28\xA4\x15\xC4\x85\xF4\x27\xD4\x6B\xBF\xE5\x5C\xE4\x65\x02\x76\x54\xB4\xE3\x37\x66\x24\xD3\x19\x61\xC8\x52\x10\xE5\x8B\x37\x9A\xB9\xA9\xF9\x1D\xBF\xEA\x99\x92\x61\x96\xFF\x01\xCD\xA1\x5F\x0D\xBC\x71\xBC\x0E\xAC\x0B\x1D\x47\x45\x1D\xC1\xEC\x7C\xEC\xFD\x29", @@ -51,7 +48,6 @@ redef root_certs += { ["CN=Sonera Class2 CA,O=Sonera,C=FI"] = 
"\x30\x82\x03\x20\x30\x82\x02\x08\xA0\x03\x02\x01\x02\x02\x01\x1D\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x39\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x46\x49\x31\x0F\x30\x0D\x06\x03\x55\x04\x0A\x13\x06\x53\x6F\x6E\x65\x72\x61\x31\x19\x30\x17\x06\x03\x55\x04\x03\x13\x10\x53\x6F\x6E\x65\x72\x61\x20\x43\x6C\x61\x73\x73\x32\x20\x43\x41\x30\x1E\x17\x0D\x30\x31\x30\x34\x30\x36\x30\x37\x32\x39\x34\x30\x5A\x17\x0D\x32\x31\x30\x34\x30\x36\x30\x37\x32\x39\x34\x30\x5A\x30\x39\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x46\x49\x31\x0F\x30\x0D\x06\x03\x55\x04\x0A\x13\x06\x53\x6F\x6E\x65\x72\x61\x31\x19\x30\x17\x06\x03\x55\x04\x03\x13\x10\x53\x6F\x6E\x65\x72\x61\x20\x43\x6C\x61\x73\x73\x32\x20\x43\x41\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\x90\x17\x4A\x35\x9D\xCA\xF0\x0D\x96\xC7\x44\xFA\x16\x37\xFC\x48\xBD\xBD\x7F\x80\x2D\x35\x3B\xE1\x6F\xA8\x67\xA9\xBF\x03\x1C\x4D\x8C\x6F\x32\x47\xD5\x41\x68\xA4\x13\x04\xC1\x35\x0C\x9A\x84\x43\xFC\x5C\x1D\xFF\x89\xB3\xE8\x17\x18\xCD\x91\x5F\xFB\x89\xE3\xEA\xBF\x4E\x5D\x7C\x1B\x26\xD3\x75\x79\xED\xE6\x84\xE3\x57\xE5\xAD\x29\xC4\xF4\x3A\x28\xE7\xA5\x7B\x84\x36\x69\xB3\xFD\x5E\x76\xBD\xA3\x2D\x99\xD3\x90\x4E\x23\x28\x7D\x18\x63\xF1\x54\x3B\x26\x9D\x76\x5B\x97\x42\xB2\xFF\xAE\xF0\x4E\xEC\xDD\x39\x95\x4E\x83\x06\x7F\xE7\x49\x40\xC8\xC5\x01\xB2\x54\x5A\x66\x1D\x3D\xFC\xF9\xE9\x3C\x0A\x9E\x81\xB8\x70\xF0\x01\x8B\xE4\x23\x54\x7C\xC8\xAE\xF8\x90\x1E\x00\x96\x72\xD4\x54\xCF\x61\x23\xBC\xEA\xFB\x9D\x02\x95\xD1\xB6\xB9\x71\x3A\x69\x08\x3F\x0F\xB4\xE1\x42\xC7\x88\xF5\x3F\x98\xA8\xA7\xBA\x1C\xE0\x71\x71\xEF\x58\x57\x81\x50\x7A\x5C\x6B\x74\x46\x0E\x83\x03\x98\xC3\x8E\xA8\x6E\xF2\x76\x32\x6E\x27\x83\xC2\x73\xF3\xDC\x18\xE8\xB4\x93\xEA\x75\x44\x6B\x04\x60\x20\x71\x57\x87\x9D\xF3\xBE\xA0\x90\x23\x3D\x8A\x24\xE1\xDA\x21\xDB\xC3\x02\x03\x01\x00\x01\xA3\x33\x30\x31\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x11\x06\x03\x55\x1D\x0E\x04\x0A\x04\x08\x4A\xA0\xAA\x58\x84\xD3\x5E\x3C\x30\x0B\x06\x03\x55\x1D\x0F\x04\x04\x03\x02\x01\x06\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x5A\xCE\x87\xF9\x16\x72\x15\x57\x4B\x1D\xD9\x9B\xE7\xA2\x26\x30\xEC\x93\x67\xDF\xD6\x2D\xD2\x34\xAF\xF7\x38\xA5\xCE\xAB\x16\xB9\xAB\x2F\x7C\x35\xCB\xAC\xD0\x0F\xB4\x4C\x2B\xFC\x80\xEF\x6B\x8C\x91\x5F\x36\x76\xF7\xDB\xB3\x1B\x19\xEA\xF4\xB2\x11\xFD\x61\x71\x44\xBF\x28\xB3\x3A\x1D\xBF\xB3\x43\xE8\x9F\xBF\xDC\x31\x08\x71\xB0\x9D\x8D\xD6\x34\x47\x32\x90\xC6\x65\x24\xF7\xA0\x4A\x7C\x04\x73\x8F\x39\x6F\x17\x8C\x72\xB5\xBD\x4B\xC8\x7A\xF8\x7B\x83\xC3\x28\x4E\x9C\x09\xEA\x67\x3F\xB2\x67\x04\x1B\xC3\x14\xDA\xF8\xE7\x49\x24\x91\xD0\x1D\x6A\xFA\x61\x39\xEF\x6B\xE7\x21\x75\x06\x07\xD8\x12\xB4\x21\x20\x70\x42\x71\x81\xDA\x3C\x9A\x36\xBE\xA6\x5B\x0D\x6A\x6C\x9A\x1F\x91\x7B\xF9\xF9\xEF\x42\xBA\x4E\x4E\x9E\xCC\x0C\x8D\x94\xDC\xD9\x45\x9C\x5E\xEC\x42\x50\x63\xAE\xF4\x5D\xC4\xB1\x12\xDC\xCA\x3B\xA8\x2E\x9D\x14\x5A\x05\x75\xB7\xEC\xD7\x63\xE2\xBA\x35\xB6\x04\x08\x91\xE8\xDA\x9D\x9C\xF6\x66\xB5\x18\xAC\x0A\xA6\x54\x26\x34\x33\xD2\x1B\xC1\xD4\x7F\x1A\x3A\x8E\x0B\xAA\x32\x6E\xDB\xFC\x4F\x25\x9F\xD9\x32\xC7\x96\x5A\x70\xAC\xDF\x4C", ["CN=Staat der Nederlanden Root CA,O=Staat der Nederlanden,C=NL"] = 
"\x30\x82\x03\xBA\x30\x82\x02\xA2\xA0\x03\x02\x01\x02\x02\x04\x00\x98\x96\x8A\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x55\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4E\x4C\x31\x1E\x30\x1C\x06\x03\x55\x04\x0A\x13\x15\x53\x74\x61\x61\x74\x20\x64\x65\x72\x20\x4E\x65\x64\x65\x72\x6C\x61\x6E\x64\x65\x6E\x31\x26\x30\x24\x06\x03\x55\x04\x03\x13\x1D\x53\x74\x61\x61\x74\x20\x64\x65\x72\x20\x4E\x65\x64\x65\x72\x6C\x61\x6E\x64\x65\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x1E\x17\x0D\x30\x32\x31\x32\x31\x37\x30\x39\x32\x33\x34\x39\x5A\x17\x0D\x31\x35\x31\x32\x31\x36\x30\x39\x31\x35\x33\x38\x5A\x30\x55\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4E\x4C\x31\x1E\x30\x1C\x06\x03\x55\x04\x0A\x13\x15\x53\x74\x61\x61\x74\x20\x64\x65\x72\x20\x4E\x65\x64\x65\x72\x6C\x61\x6E\x64\x65\x6E\x31\x26\x30\x24\x06\x03\x55\x04\x03\x13\x1D\x53\x74\x61\x61\x74\x20\x64\x65\x72\x20\x4E\x65\x64\x65\x72\x6C\x61\x6E\x64\x65\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\x98\xD2\xB5\x51\x11\x7A\x81\xA6\x14\x98\x71\x6D\xBE\xCC\xE7\x13\x1B\xD6\x27\x0E\x7A\xB3\x6A\x18\x1C\xB6\x61\x5A\xD5\x61\x09\xBF\xDE\x90\x13\xC7\x67\xEE\xDD\xF3\xDA\xC5\x0C\x12\x9E\x35\x55\x3E\x2C\x27\x88\x40\x6B\xF7\xDC\xDD\x22\x61\xF5\xC2\xC7\x0E\xF5\xF6\xD5\x76\x53\x4D\x8F\x8C\xBC\x18\x76\x37\x85\x9D\xE8\xCA\x49\xC7\xD2\x4F\x98\x13\x09\xA2\x3E\x22\x88\x9C\x7F\xD6\xF2\x10\x65\xB4\xEE\x5F\x18\xD5\x17\xE3\xF8\xC5\xFD\xE2\x9D\xA2\xEF\x53\x0E\x85\x77\xA2\x0F\xE1\x30\x47\xEE\x00\xE7\x33\x7D\x44\x67\x1A\x0B\x51\xE8\x8B\xA0\x9E\x50\x98\x68\x34\x52\x1F\x2E\x6D\x01\xF2\x60\x45\xF2\x31\xEB\xA9\x31\x68\x29\xBB\x7A\x41\x9E\xC6\x19\x7F\x94\xB4\x51\x39\x03\x7F\xB2\xDE\xA7\x32\x9B\xB4\x47\x8E\x6F\xB4\x4A\xAE\xE5\xAF\xB1\xDC\xB0\x1B\x61\xBC\x99\x72\xDE\xE4\x89\xB7\x7A\x26\x5D\xDA\x33\x49\x5B\x52\x9C\x0E\xF5\x8A\xAD\xC3\xB8\x3D\xE8\x06\x6A\xC2\xD5\x2A\x0B\x6C\x7B\x84\xBD\x56\x05\xCB\x86\x65\x92\xEC\x44\x2B\xB0\x8E\xB9\xDC\x70\x0B\x46\xDA\xAD\xBC\x63\x88\x39\xFA\xDB\x6A\xFE\x23\xFA\xBC\xE4\x48\xF4\x67\x2B\x6A\x11\x10\x21\x49\x02\x03\x01\x00\x01\xA3\x81\x91\x30\x81\x8E\x30\x0C\x06\x03\x55\x1D\x13\x04\x05\x30\x03\x01\x01\xFF\x30\x4F\x06\x03\x55\x1D\x20\x04\x48\x30\x46\x30\x44\x06\x04\x55\x1D\x20\x00\x30\x3C\x30\x3A\x06\x08\x2B\x06\x01\x05\x05\x07\x02\x01\x16\x2E\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x70\x6B\x69\x6F\x76\x65\x72\x68\x65\x69\x64\x2E\x6E\x6C\x2F\x70\x6F\x6C\x69\x63\x69\x65\x73\x2F\x72\x6F\x6F\x74\x2D\x70\x6F\x6C\x69\x63\x79\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xA8\x7D\xEB\xBC\x63\xA4\x74\x13\x74\x00\xEC\x96\xE0\xD3\x34\xC1\x2C\xBF\x6C\xF8\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x05\x84\x87\x55\x74\x36\x61\xC1\xBB\xD1\xD4\xC6\x15\xA8\x13\xB4\x9F\xA4\xFE\xBB\xEE\x15\xB4\x2F\x06\x0C\x29\xF2\xA8\x92\xA4\x61\x0D\xFC\xAB\x5C\x08\x5B\x51\x13\x2B\x4D\xC2\x2A\x61\xC8\xF8\x09\x58\xFC\x2D\x02\xB2\x39\x7D\x99\x66\x81\xBF\x6E\x5C\x95\x45\x20\x6C\xE6\x79\xA7\xD1\xD8\x1C\x29\xFC\xC2\x20\x27\x51\xC8\xF1\x7C\x5D\x34\x67\x69\x85\x11\x30\xC6\x00\xD2\xD7\xF3\xD3\x7C\xB6\xF0\x31\x57\x28\x12\x82\x73\xE9\x33\x2F\xA6\x55\xB4\x0B\x91\x94\x47\x9C\xFA\xBB\x7A\x42\x32\xE8\xAE\x7E\x2D\xC8\xBC\xAC\x14\xBF\xD9\x0F\xD9\x5B\xFC\xC1\xF9\x7A\x95\xE1\x7D\x7E\x96\xFC\x71\xB0\xC2\x4C\xC8\xDF\x45\x34\xC9\xCE\x0D\xF2\x9C\x64\x08\xD0\x3B\xC3\x29\xC5\xB2\xED\x90\x04\xC1\xB1\x29\x91\xC5\x30\x6F\xC1\xA9\x72\x33\xCC\xFE\x5D\x16\x17\x2C\x11\x69\xE7\x7E\x
FE\xC5\x83\x08\xDF\xBC\xDC\x22\x3A\x2E\x20\x69\x23\x39\x56\x60\x67\x90\x8B\x2E\x76\x39\xFB\x11\x88\x97\xF6\x7C\xBD\x4B\xB8\x20\x16\x67\x05\x8D\xE2\x3B\xC1\x72\x3F\x94\x95\x37\xC7\x5D\xB9\x9E\xD8\x93\xA1\x17\x8F\xFF\x0C\x66\x15\xC1\x24\x7C\x32\x7C\x03\x1D\x3B\xA1\x58\x45\x32\x93", ["OU=TDC Internet Root CA,O=TDC Internet,C=DK"] = "\x30\x82\x04\x2B\x30\x82\x03\x13\xA0\x03\x02\x01\x02\x02\x04\x3A\xCC\xA5\x4C\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x43\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x4B\x31\x15\x30\x13\x06\x03\x55\x04\x0A\x13\x0C\x54\x44\x43\x20\x49\x6E\x74\x65\x72\x6E\x65\x74\x31\x1D\x30\x1B\x06\x03\x55\x04\x0B\x13\x14\x54\x44\x43\x20\x49\x6E\x74\x65\x72\x6E\x65\x74\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x1E\x17\x0D\x30\x31\x30\x34\x30\x35\x31\x36\x33\x33\x31\x37\x5A\x17\x0D\x32\x31\x30\x34\x30\x35\x31\x37\x30\x33\x31\x37\x5A\x30\x43\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x4B\x31\x15\x30\x13\x06\x03\x55\x04\x0A\x13\x0C\x54\x44\x43\x20\x49\x6E\x74\x65\x72\x6E\x65\x74\x31\x1D\x30\x1B\x06\x03\x55\x04\x0B\x13\x14\x54\x44\x43\x20\x49\x6E\x74\x65\x72\x6E\x65\x74\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xC4\xB8\x40\xBC\x91\xD5\x63\x1F\xD7\x99\xA0\x8B\x0C\x40\x1E\x74\xB7\x48\x9D\x46\x8C\x02\xB2\xE0\x24\x5F\xF0\x19\x13\xA7\x37\x83\x6B\x5D\xC7\x8E\xF9\x84\x30\xCE\x1A\x3B\xFA\xFB\xCE\x8B\x6D\x23\xC6\xC3\x6E\x66\x9F\x89\xA5\xDF\xE0\x42\x50\x67\xFA\x1F\x6C\x1E\xF4\xD0\x05\xD6\xBF\xCA\xD6\x4E\xE4\x68\x60\x6C\x46\xAA\x1C\x5D\x63\xE1\x07\x86\x0E\x65\x00\xA7\x2E\xA6\x71\xC6\xBC\xB9\x81\xA8\x3A\x7D\x1A\xD2\xF9\xD1\xAC\x4B\xCB\xCE\x75\xAF\xDC\x7B\xFA\x81\x73\xD4\xFC\xBA\xBD\x41\x88\xD4\x74\xB3\xF9\x5E\x38\x3A\x3C\x43\xA8\xD2\x95\x4E\x77\x6D\x13\x0C\x9D\x8F\x78\x01\xB7\x5A\x20\x1F\x03\x37\x35\xE2\x2C\xDB\x4B\x2B\x2C\x78\xB9\x49\xDB\xC4\xD0\xC7\x9C\x9C\xE4\x8A\x20\x09\x21\x16\x56\x66\xFF\x05\xEC\x5B\xE3\xF0\xCF\xAB\x24\x24\x5E\xC3\x7F\x70\x7A\x12\xC4\xD2\xB5\x10\xA0\xB6\x21\xE1\x8D\x78\x69\x55\x44\x69\xF5\xCA\x96\x1C\x34\x85\x17\x25\x77\xE2\xF6\x2F\x27\x98\x78\xFD\x79\x06\x3A\xA2\xD6\x5A\x43\xC1\xFF\xEC\x04\x3B\xEE\x13\xEF\xD3\x58\x5A\xFF\x92\xEB\xEC\xAE\xDA\xF2\x37\x03\x47\x41\xB6\x97\xC9\x2D\x0A\x41\x22\xBB\xBB\xE6\xA7\x02\x03\x01\x00\x01\xA3\x82\x01\x25\x30\x82\x01\x21\x30\x11\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x01\x04\x04\x03\x02\x00\x07\x30\x65\x06\x03\x55\x1D\x1F\x04\x5E\x30\x5C\x30\x5A\xA0\x58\xA0\x56\xA4\x54\x30\x52\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x4B\x31\x15\x30\x13\x06\x03\x55\x04\x0A\x13\x0C\x54\x44\x43\x20\x49\x6E\x74\x65\x72\x6E\x65\x74\x31\x1D\x30\x1B\x06\x03\x55\x04\x0B\x13\x14\x54\x44\x43\x20\x49\x6E\x74\x65\x72\x6E\x65\x74\x20\x52\x6F\x6F\x74\x20\x43\x41\x31\x0D\x30\x0B\x06\x03\x55\x04\x03\x13\x04\x43\x52\x4C\x31\x30\x2B\x06\x03\x55\x1D\x10\x04\x24\x30\x22\x80\x0F\x32\x30\x30\x31\x30\x34\x30\x35\x31\x36\x33\x33\x31\x37\x5A\x81\x0F\x32\x30\x32\x31\x30\x34\x30\x35\x31\x37\x30\x33\x31\x37\x5A\x30\x0B\x06\x03\x55\x1D\x0F\x04\x04\x03\x02\x01\x06\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\x6C\x64\x01\xC7\xFD\x85\x6D\xAC\xC8\xDA\x9E\x50\x08\x85\x08\xB5\x3C\x56\xA8\x50\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x6C\x64\x01\xC7\xFD\x85\x6D\xAC\xC8\xDA\x9E\x50\x08\x85\x08\xB5\x3C\x56\xA8\x50\x30\x0C\x06\x03\x55\x1D\x13\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x09\x2A\x86\x48\x86\xF6\x7D\x07\x41\x00\x04\x10\x30\x0E\x1B\x08\x56\x35\x2E\x30\x3A\x34\x2E\x30\x03\x02\x04\x90\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x0
1\x01\x05\x05\x00\x03\x82\x01\x01\x00\x4E\x43\xCC\xD1\xDD\x1D\x10\x1B\x06\x7F\xB7\xA4\xFA\xD3\xD9\x4D\xFB\x23\x9F\x23\x54\x5B\xE6\x8B\x2F\x04\x28\x8B\xB5\x27\x6D\x89\xA1\xEC\x98\x69\xDC\xE7\x8D\x26\x83\x05\x79\x74\xEC\xB4\xB9\xA3\x97\xC1\x35\x00\xFD\x15\xDA\x39\x81\x3A\x95\x31\x90\xDE\x97\xE9\x86\xA8\x99\x77\x0C\xE5\x5A\xA0\x84\xFF\x12\x16\xAC\x6E\xB8\x8D\xC3\x7B\x92\xC2\xAC\x2E\xD0\x7D\x28\xEC\xB6\xF3\x60\x38\x69\x6F\x3E\xD8\x04\x55\x3E\x9E\xCC\x55\xD2\xBA\xFE\xBB\x47\x04\xD7\x0A\xD9\x16\x0A\x34\x29\xF5\x58\x13\xD5\x4F\xCF\x8F\x56\x4B\xB3\x1E\xEE\xD3\x98\x79\xDA\x08\x1E\x0C\x6F\xB8\xF8\x16\x27\xEF\xC2\x6F\x3D\xF6\xA3\x4B\x3E\x0E\xE4\x6D\x6C\xDB\x3B\x41\x12\x9B\xBD\x0D\x47\x23\x7F\x3C\x4A\xD0\xAF\xC0\xAF\xF6\xEF\x1B\xB5\x15\xC4\xEB\x83\xC4\x09\x5F\x74\x8B\xD9\x11\xFB\xC2\x56\xB1\x3C\xF8\x70\xCA\x34\x8D\x43\x40\x13\x8C\xFD\x99\x03\x54\x79\xC6\x2E\xEA\x86\xA1\xF6\x3A\xD4\x09\xBC\xF4\xBC\x66\xCC\x3D\x58\xD0\x57\x49\x0A\xEE\x25\xE2\x41\xEE\x13\xF9\x9B\x38\x34\xD1\x00\xF5\x7E\xE7\x94\x1D\xFC\x69\x03\x62\xB8\x99\x05\x05\x3D\x6B\x78\x12\xBD\xB0\x6F\x65", - ["CN=TDC OCES CA,O=TDC,C=DK"] = "\x30\x82\x05\x19\x30\x82\x04\x01\xA0\x03\x02\x01\x02\x02\x04\x3E\x48\xBD\xC4\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x31\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x4B\x31\x0C\x30\x0A\x06\x03\x55\x04\x0A\x13\x03\x54\x44\x43\x31\x14\x30\x12\x06\x03\x55\x04\x03\x13\x0B\x54\x44\x43\x20\x4F\x43\x45\x53\x20\x43\x41\x30\x1E\x17\x0D\x30\x33\x30\x32\x31\x31\x30\x38\x33\x39\x33\x30\x5A\x17\x0D\x33\x37\x30\x32\x31\x31\x30\x39\x30\x39\x33\x30\x5A\x30\x31\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x4B\x31\x0C\x30\x0A\x06\x03\x55\x04\x0A\x13\x03\x54\x44\x43\x31\x14\x30\x12\x06\x03\x55\x04\x03\x13\x0B\x54\x44\x43\x20\x4F\x43\x45\x53\x20\x43\x41\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xAC\x62\xF6\x61\x20\xB2\xCF\xC0\xC6\x85\xD7\xE3\x79\xE6\xCC\xED\xF2\x39\x92\xA4\x97\x2E\x64\xA3\x84\x5B\x87\x9C\x4C\xFD\xA4\xF3\xC4\x5F\x21\xBD\x56\x10\xEB\xDB\x2E\x61\xEC\x93\x69\xE3\xA3\xCC\xBD\x99\xC3\x05\xFC\x06\xB8\xCA\x36\x1C\xFE\x90\x8E\x49\x4C\xC4\x56\x9A\x2F\x56\xBC\xCF\x7B\x0C\xF1\x6F\x47\xA6\x0D\x43\x4D\xE2\xE9\x1D\x39\x34\xCD\x8D\x2C\xD9\x12\x98\xF9\xE3\xE1\xC1\x4A\x7C\x86\x38\xC4\xA9\xC4\x61\x88\xD2\x5E\xAF\x1A\x26\x4D\xD5\xE4\xA0\x22\x47\x84\xD9\x64\xB7\x19\x96\xFC\xEC\x19\xE4\xB2\x97\x26\x4E\x4A\x4C\xCB\x8F\x24\x8B\x54\x18\x1C\x48\x61\x7B\xD5\x88\x68\xDA\x5D\xB5\xEA\xCD\x1A\x30\xC1\x80\x83\x76\x50\xAA\x4F\xD1\xD4\xDD\x38\xF0\xEF\x16\xF4\xE1\x0C\x50\x06\xBF\xEA\xFB\x7A\x49\xA1\x28\x2B\x1C\xF6\xFC\x15\x32\xA3\x74\x6A\x8F\xA9\xC3\x62\x29\x71\x31\xE5\x3B\xA4\x60\x17\x5E\x74\xE6\xDA\x13\xED\xE9\x1F\x1F\x1B\xD1\xB2\x68\x73\xC6\x10\x34\x75\x46\x10\x10\xE3\x90\x00\x76\x40\xCB\x8B\xB7\x43\x09\x21\xFF\xAB\x4E\x93\xC6\x58\xE9\xA5\x82\xDB\x77\xC4\x3A\x99\xB1\x72\x95\x49\x04\xF0\xB7\x2B\xFA\x7B\x59\x8E\xDD\x02\x03\x01\x00\x01\xA3\x82\x02\x37\x30\x82\x02\x33\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x81\xEC\x06\x03\x55\x1D\x20\x04\x81\xE4\x30\x81\xE1\x30\x81\xDE\x06\x08\x2A\x81\x50\x81\x29\x01\x01\x01\x30\x81\xD1\x30\x2F\x06\x08\x2B\x06\x01\x05\x05\x07\x02\x01\x16\x23\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x63\x65\x72\x74\x69\x66\x69\x6B\x61\x74\x2E\x64\x6B\x2F\x72\x65\x70\x6F\x73\x69\x74\x6F\x72\x79\x30\x81\x9D\x06\x08\x2B\x06\x01\x05\x05\x07\x02\x02\x30\x81\x90\x30\x0A\x16\x03\x54\x44\x43\x30\x03\x02\x01\x01\x1A\x81\x81\x43
\x65\x72\x74\x69\x66\x69\x6B\x61\x74\x65\x72\x20\x66\x72\x61\x20\x64\x65\x6E\x6E\x65\x20\x43\x41\x20\x75\x64\x73\x74\x65\x64\x65\x73\x20\x75\x6E\x64\x65\x72\x20\x4F\x49\x44\x20\x31\x2E\x32\x2E\x32\x30\x38\x2E\x31\x36\x39\x2E\x31\x2E\x31\x2E\x31\x2E\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x73\x20\x66\x72\x6F\x6D\x20\x74\x68\x69\x73\x20\x43\x41\x20\x61\x72\x65\x20\x69\x73\x73\x75\x65\x64\x20\x75\x6E\x64\x65\x72\x20\x4F\x49\x44\x20\x31\x2E\x32\x2E\x32\x30\x38\x2E\x31\x36\x39\x2E\x31\x2E\x31\x2E\x31\x2E\x30\x11\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x01\x04\x04\x03\x02\x00\x07\x30\x81\x81\x06\x03\x55\x1D\x1F\x04\x7A\x30\x78\x30\x48\xA0\x46\xA0\x44\xA4\x42\x30\x40\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x44\x4B\x31\x0C\x30\x0A\x06\x03\x55\x04\x0A\x13\x03\x54\x44\x43\x31\x14\x30\x12\x06\x03\x55\x04\x03\x13\x0B\x54\x44\x43\x20\x4F\x43\x45\x53\x20\x43\x41\x31\x0D\x30\x0B\x06\x03\x55\x04\x03\x13\x04\x43\x52\x4C\x31\x30\x2C\xA0\x2A\xA0\x28\x86\x26\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x6F\x63\x65\x73\x2E\x63\x65\x72\x74\x69\x66\x69\x6B\x61\x74\x2E\x64\x6B\x2F\x6F\x63\x65\x73\x2E\x63\x72\x6C\x30\x2B\x06\x03\x55\x1D\x10\x04\x24\x30\x22\x80\x0F\x32\x30\x30\x33\x30\x32\x31\x31\x30\x38\x33\x39\x33\x30\x5A\x81\x0F\x32\x30\x33\x37\x30\x32\x31\x31\x30\x39\x30\x39\x33\x30\x5A\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\x60\xB5\x85\xEC\x56\x64\x7E\x12\x19\x27\x67\x1D\x50\x15\x4B\x73\xAE\x3B\xF9\x12\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x60\xB5\x85\xEC\x56\x64\x7E\x12\x19\x27\x67\x1D\x50\x15\x4B\x73\xAE\x3B\xF9\x12\x30\x1D\x06\x09\x2A\x86\x48\x86\xF6\x7D\x07\x41\x00\x04\x10\x30\x0E\x1B\x08\x56\x36\x2E\x30\x3A\x34\x2E\x30\x03\x02\x04\x90\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x0A\xBA\x26\x26\x46\xD3\x73\xA8\x09\xF3\x6B\x0B\x30\x99\xFD\x8A\xE1\x57\x7A\x11\xD3\xB8\x94\xD7\x09\x10\x6E\xA3\xB1\x38\x03\xD1\xB6\xF2\x43\x41\x29\x62\xA7\x72\xD8\xFB\x7C\x05\xE6\x31\x70\x27\x54\x18\x4E\x8A\x7C\x4E\xE5\xD1\xCA\x8C\x78\x88\xCF\x1B\xD3\x90\x8B\xE6\x23\xF8\x0B\x0E\x33\x43\x7D\x9C\xE2\x0A\x19\x8F\xC9\x01\x3E\x74\x5D\x74\xC9\x8B\x1C\x03\xE5\x18\xC8\x01\x4C\x3F\xCB\x97\x05\x5D\x98\x71\xA6\x98\x6F\xB6\x7C\xBD\x37\x7F\xBE\xE1\x93\x25\x6D\x6F\xF0\x0A\xAD\x17\x18\xE1\x03\xBC\x07\x29\xC8\xAD\x26\xE8\xF8\x61\xF0\xFD\x21\x09\x7E\x9A\x8E\xA9\x68\x7D\x48\x62\x72\xBD\x00\xEA\x01\x99\xB8\x06\x82\x51\x81\x4E\xF1\xF5\xB4\x91\x54\xB9\x23\x7A\x00\x9A\x9F\x5D\x8D\xE0\x3C\x64\xB9\x1A\x12\x92\x2A\xC7\x82\x44\x72\x39\xDC\xE2\x3C\xC6\xD8\x55\xF5\x15\x4E\xC8\x05\x0E\xDB\xC6\xD0\x62\xA6\xEC\x15\xB4\xB5\x02\x82\xDB\xAC\x8C\xA2\x81\xF0\x9B\x99\x31\xF5\x20\x20\xA8\x88\x61\x0A\x07\x9F\x94\xFC\xD0\xD7\x1B\xCC\x2E\x17\xF3\x04\x27\x76\x67\xEB\x54\x83\xFD\xA4\x90\x7E\x06\x3D\x04\xA3\x43\x2D\xDA\xFC\x0B\x62\xEA\x2F\x5F\x62\x53", ["CN=UTN - DATACorp SGC,OU=http://www.usertrust.com,O=The USERTRUST Network,L=Salt Lake City,ST=UT,C=US"] = 
"\x30\x82\x04\x5E\x30\x82\x03\x46\xA0\x03\x02\x01\x02\x02\x10\x44\xBE\x0C\x8B\x50\x00\x21\xB4\x11\xD3\x2A\x68\x06\xA9\xAD\x69\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\x93\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0B\x30\x09\x06\x03\x55\x04\x08\x13\x02\x55\x54\x31\x17\x30\x15\x06\x03\x55\x04\x07\x13\x0E\x53\x61\x6C\x74\x20\x4C\x61\x6B\x65\x20\x43\x69\x74\x79\x31\x1E\x30\x1C\x06\x03\x55\x04\x0A\x13\x15\x54\x68\x65\x20\x55\x53\x45\x52\x54\x52\x55\x53\x54\x20\x4E\x65\x74\x77\x6F\x72\x6B\x31\x21\x30\x1F\x06\x03\x55\x04\x0B\x13\x18\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x75\x73\x65\x72\x74\x72\x75\x73\x74\x2E\x63\x6F\x6D\x31\x1B\x30\x19\x06\x03\x55\x04\x03\x13\x12\x55\x54\x4E\x20\x2D\x20\x44\x41\x54\x41\x43\x6F\x72\x70\x20\x53\x47\x43\x30\x1E\x17\x0D\x39\x39\x30\x36\x32\x34\x31\x38\x35\x37\x32\x31\x5A\x17\x0D\x31\x39\x30\x36\x32\x34\x31\x39\x30\x36\x33\x30\x5A\x30\x81\x93\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0B\x30\x09\x06\x03\x55\x04\x08\x13\x02\x55\x54\x31\x17\x30\x15\x06\x03\x55\x04\x07\x13\x0E\x53\x61\x6C\x74\x20\x4C\x61\x6B\x65\x20\x43\x69\x74\x79\x31\x1E\x30\x1C\x06\x03\x55\x04\x0A\x13\x15\x54\x68\x65\x20\x55\x53\x45\x52\x54\x52\x55\x53\x54\x20\x4E\x65\x74\x77\x6F\x72\x6B\x31\x21\x30\x1F\x06\x03\x55\x04\x0B\x13\x18\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x75\x73\x65\x72\x74\x72\x75\x73\x74\x2E\x63\x6F\x6D\x31\x1B\x30\x19\x06\x03\x55\x04\x03\x13\x12\x55\x54\x4E\x20\x2D\x20\x44\x41\x54\x41\x43\x6F\x72\x70\x20\x53\x47\x43\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xDF\xEE\x58\x10\xA2\x2B\x6E\x55\xC4\x8E\xBF\x2E\x46\x09\xE7\xE0\x08\x0F\x2E\x2B\x7A\x13\x94\x1B\xBD\xF6\xB6\x80\x8E\x65\x05\x93\x00\x1E\xBC\xAF\xE2\x0F\x8E\x19\x0D\x12\x47\xEC\xAC\xAD\xA3\xFA\x2E\x70\xF8\xDE\x6E\xFB\x56\x42\x15\x9E\x2E\x5C\xEF\x23\xDE\x21\xB9\x05\x76\x27\x19\x0F\x4F\xD6\xC3\x9C\xB4\xBE\x94\x19\x63\xF2\xA6\x11\x0A\xEB\x53\x48\x9C\xBE\xF2\x29\x3B\x16\xE8\x1A\xA0\x4C\xA6\xC9\xF4\x18\x59\x68\xC0\x70\xF2\x53\x00\xC0\x5E\x50\x82\xA5\x56\x6F\x36\xF9\x4A\xE0\x44\x86\xA0\x4D\x4E\xD6\x47\x6E\x49\x4A\xCB\x67\xD7\xA6\xC4\x05\xB9\x8E\x1E\xF4\xFC\xFF\xCD\xE7\x36\xE0\x9C\x05\x6C\xB2\x33\x22\x15\xD0\xB4\xE0\xCC\x17\xC0\xB2\xC0\xF4\xFE\x32\x3F\x29\x2A\x95\x7B\xD8\xF2\xA7\x4E\x0F\x54\x7C\xA1\x0D\x80\xB3\x09\x03\xC1\xFF\x5C\xDD\x5E\x9A\x3E\xBC\xAE\xBC\x47\x8A\x6A\xAE\x71\xCA\x1F\xB1\x2A\xB8\x5F\x42\x05\x0B\xEC\x46\x30\xD1\x72\x0B\xCA\xE9\x56\x6D\xF5\xEF\xDF\x78\xBE\x61\xBA\xB2\xA5\xAE\x04\x4C\xBC\xA8\xAC\x69\x15\x97\xBD\xEF\xEB\xB4\x8C\xBF\x35\xF8\xD4\xC3\xD1\x28\x0E\x5C\x3A\x9F\x70\x18\x33\x20\x77\xC4\xA2\xAF\x02\x03\x01\x00\x01\xA3\x81\xAB\x30\x81\xA8\x30\x0B\x06\x03\x55\x1D\x0F\x04\x04\x03\x02\x01\xC6\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x53\x32\xD1\xB3\xCF\x7F\xFA\xE0\xF1\xA0\x5D\x85\x4E\x92\xD2\x9E\x45\x1D\xB4\x4F\x30\x3D\x06\x03\x55\x1D\x1F\x04\x36\x30\x34\x30\x32\xA0\x30\xA0\x2E\x86\x2C\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x75\x73\x65\x72\x74\x72\x75\x73\x74\x2E\x63\x6F\x6D\x2F\x55\x54\x4E\x2D\x44\x41\x54\x41\x43\x6F\x72\x70\x53\x47\x43\x2E\x63\x72\x6C\x30\x2A\x06\x03\x55\x1D\x25\x04\x23\x30\x21\x06\x08\x2B\x06\x01\x05\x05\x07\x03\x01\x06\x0A\x2B\x06\x01\x04\x01\x82\x37\x0A\x03\x03\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x04\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x27\x35\x97\x00\x8A\x8B\x28\xBD\xC6\x33\x30\x1E\x29\xFC\xE2\xF7\xD5\x98\xD4\x40\xBB\x60\x
CA\xBF\xAB\x17\x2C\x09\x36\x7F\x50\xFA\x41\xDC\xAE\x96\x3A\x0A\x23\x3E\x89\x59\xC9\xA3\x07\xED\x1B\x37\xAD\xFC\x7C\xBE\x51\x49\x5A\xDE\x3A\x0A\x54\x08\x16\x45\xC2\x99\xB1\x87\xCD\x8C\x68\xE0\x69\x03\xE9\xC4\x4E\x98\xB2\x3B\x8C\x16\xB3\x0E\xA0\x0C\x98\x50\x9B\x93\xA9\x70\x09\xC8\x2C\xA3\x8F\xDF\x02\xE4\xE0\x71\x3A\xF1\xB4\x23\x72\xA0\xAA\x01\xDF\xDF\x98\x3E\x14\x50\xA0\x31\x26\xBD\x28\xE9\x5A\x30\x26\x75\xF9\x7B\x60\x1C\x8D\xF3\xCD\x50\x26\x6D\x04\x27\x9A\xDF\xD5\x0D\x45\x47\x29\x6B\x2C\xE6\x76\xD9\xA9\x29\x7D\x32\xDD\xC9\x36\x3C\xBD\xAE\x35\xF1\x11\x9E\x1D\xBB\x90\x3F\x12\x47\x4E\x8E\xD7\x7E\x0F\x62\x73\x1D\x52\x26\x38\x1C\x18\x49\xFD\x30\x74\x9A\xC4\xE5\x22\x2F\xD8\xC0\x8D\xED\x91\x7A\x4C\x00\x8F\x72\x7F\x5D\xDA\xDD\x1B\x8B\x45\x6B\xE7\xDD\x69\x97\xA8\xC5\x56\x4C\x0F\x0C\xF6\x9F\x7A\x91\x37\xF6\x97\x82\xE0\xDD\x71\x69\xFF\x76\x3F\x60\x4D\x3C\xCF\xF7\x99\xF9\xC6\x57\xF4\xC9\x55\x39\x78\xBA\x2C\x79\xC9\xA6\x88\x2B\xF4\x08", ["CN=UTN-USERFirst-Hardware,OU=http://www.usertrust.com,O=The USERTRUST Network,L=Salt Lake City,ST=UT,C=US"] = "\x30\x82\x04\x74\x30\x82\x03\x5C\xA0\x03\x02\x01\x02\x02\x10\x44\xBE\x0C\x8B\x50\x00\x24\xB4\x11\xD3\x36\x2A\xFE\x65\x0A\xFD\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\x97\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0B\x30\x09\x06\x03\x55\x04\x08\x13\x02\x55\x54\x31\x17\x30\x15\x06\x03\x55\x04\x07\x13\x0E\x53\x61\x6C\x74\x20\x4C\x61\x6B\x65\x20\x43\x69\x74\x79\x31\x1E\x30\x1C\x06\x03\x55\x04\x0A\x13\x15\x54\x68\x65\x20\x55\x53\x45\x52\x54\x52\x55\x53\x54\x20\x4E\x65\x74\x77\x6F\x72\x6B\x31\x21\x30\x1F\x06\x03\x55\x04\x0B\x13\x18\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x75\x73\x65\x72\x74\x72\x75\x73\x74\x2E\x63\x6F\x6D\x31\x1F\x30\x1D\x06\x03\x55\x04\x03\x13\x16\x55\x54\x4E\x2D\x55\x53\x45\x52\x46\x69\x72\x73\x74\x2D\x48\x61\x72\x64\x77\x61\x72\x65\x30\x1E\x17\x0D\x39\x39\x30\x37\x30\x39\x31\x38\x31\x30\x34\x32\x5A\x17\x0D\x31\x39\x30\x37\x30\x39\x31\x38\x31\x39\x32\x32\x5A\x30\x81\x97\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x55\x53\x31\x0B\x30\x09\x06\x03\x55\x04\x08\x13\x02\x55\x54\x31\x17\x30\x15\x06\x03\x55\x04\x07\x13\x0E\x53\x61\x6C\x74\x20\x4C\x61\x6B\x65\x20\x43\x69\x74\x79\x31\x1E\x30\x1C\x06\x03\x55\x04\x0A\x13\x15\x54\x68\x65\x20\x55\x53\x45\x52\x54\x52\x55\x53\x54\x20\x4E\x65\x74\x77\x6F\x72\x6B\x31\x21\x30\x1F\x06\x03\x55\x04\x0B\x13\x18\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x75\x73\x65\x72\x74\x72\x75\x73\x74\x2E\x63\x6F\x6D\x31\x1F\x30\x1D\x06\x03\x55\x04\x03\x13\x16\x55\x54\x4E\x2D\x55\x53\x45\x52\x46\x69\x72\x73\x74\x2D\x48\x61\x72\x64\x77\x61\x72\x65\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xB1\xF7\xC3\x38\x3F\xB4\xA8\x7F\xCF\x39\x82\x51\x67\xD0\x6D\x9F\xD2\xFF\x58\xF3\xE7\x9F\x2B\xEC\x0D\x89\x54\x99\xB9\x38\x99\x16\xF7\xE0\x21\x79\x48\xC2\xBB\x61\x74\x12\x96\x1D\x3C\x6A\x72\xD5\x3C\x10\x67\x3A\x39\xED\x2B\x13\xCD\x66\xEB\x95\x09\x33\xA4\x6C\x97\xB1\xE8\xC6\xEC\xC1\x75\x79\x9C\x46\x5E\x8D\xAB\xD0\x6A\xFD\xB9\x2A\x55\x17\x10\x54\xB3\x19\xF0\x9A\xF6\xF1\xB1\x5D\xB6\xA7\x6D\xFB\xE0\x71\x17\x6B\xA2\x88\xFB\x00\xDF\xFE\x1A\x31\x77\x0C\x9A\x01\x7A\xB1\x32\xE3\x2B\x01\x07\x38\x6E\xC3\xA5\x5E\x23\xBC\x45\x9B\x7B\x50\xC1\xC9\x30\x8F\xDB\xE5\x2B\x7A\xD3\x5B\xFB\x33\x40\x1E\xA0\xD5\x98\x17\xBC\x8B\x87\xC3\x89\xD3\x5D\xA0\x8E\xB2\xAA\xAA\xF6\x8E\x69\x88\x06\xC5\xFA\x89\x21\xF3\x08\x9D\x69\x2E\x09\x33\x9B\x29\x0D\x46\x0F\x8C\xCC\x49\x34\xB0\x69\x51\xBD\xF9\x06\xCD\x68\xAD\x66\x4C\xBC\x3E\xAC\x61\xBD\x0A\x88\x0E\xC8\
xDF\x3D\xEE\x7C\x04\x4C\x9D\x0A\x5E\x6B\x91\xD6\xEE\xC7\xED\x28\x8D\xAB\x4D\x87\x89\x73\xD0\x6E\xA4\xD0\x1E\x16\x8B\x14\xE1\x76\x44\x03\x7F\x63\xAC\xE4\xCD\x49\x9C\xC5\x92\xF4\xAB\x32\xA1\x48\x5B\x02\x03\x01\x00\x01\xA3\x81\xB9\x30\x81\xB6\x30\x0B\x06\x03\x55\x1D\x0F\x04\x04\x03\x02\x01\xC6\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xA1\x72\x5F\x26\x1B\x28\x98\x43\x95\x5D\x07\x37\xD5\x85\x96\x9D\x4B\xD2\xC3\x45\x30\x44\x06\x03\x55\x1D\x1F\x04\x3D\x30\x3B\x30\x39\xA0\x37\xA0\x35\x86\x33\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x75\x73\x65\x72\x74\x72\x75\x73\x74\x2E\x63\x6F\x6D\x2F\x55\x54\x4E\x2D\x55\x53\x45\x52\x46\x69\x72\x73\x74\x2D\x48\x61\x72\x64\x77\x61\x72\x65\x2E\x63\x72\x6C\x30\x31\x06\x03\x55\x1D\x25\x04\x2A\x30\x28\x06\x08\x2B\x06\x01\x05\x05\x07\x03\x01\x06\x08\x2B\x06\x01\x05\x05\x07\x03\x05\x06\x08\x2B\x06\x01\x05\x05\x07\x03\x06\x06\x08\x2B\x06\x01\x05\x05\x07\x03\x07\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x47\x19\x0F\xDE\x74\xC6\x99\x97\xAF\xFC\xAD\x28\x5E\x75\x8E\xEB\x2D\x67\xEE\x4E\x7B\x2B\xD7\x0C\xFF\xF6\xDE\xCB\x55\xA2\x0A\xE1\x4C\x54\x65\x93\x60\x6B\x9F\x12\x9C\xAD\x5E\x83\x2C\xEB\x5A\xAE\xC0\xE4\x2D\xF4\x00\x63\x1D\xB8\xC0\x6C\xF2\xCF\x49\xBB\x4D\x93\x6F\x06\xA6\x0A\x22\xB2\x49\x62\x08\x4E\xFF\xC8\xC8\x14\xB2\x88\x16\x5D\xE7\x01\xE4\x12\x95\xE5\x45\x34\xB3\x8B\x69\xBD\xCF\xB4\x85\x8F\x75\x51\x9E\x7D\x3A\x38\x3A\x14\x48\x12\xC6\xFB\xA7\x3B\x1A\x8D\x0D\x82\x40\x07\xE8\x04\x08\x90\xA1\x89\xCB\x19\x50\xDF\xCA\x1C\x01\xBC\x1D\x04\x19\x7B\x10\x76\x97\x3B\xEE\x90\x90\xCA\xC4\x0E\x1F\x16\x6E\x75\xEF\x33\xF8\xD3\x6F\x5B\x1E\x96\xE3\xE0\x74\x77\x74\x7B\x8A\xA2\x6E\x2D\xDD\x76\xD6\x39\x30\x82\xF0\xAB\x9C\x52\xF2\x2A\xC7\xAF\x49\x5E\x7E\xC7\x68\xE5\x82\x81\xC8\x6A\x27\xF9\x27\x88\x2A\xD5\x58\x50\x95\x1F\xF0\x3B\x1C\x57\xBB\x7D\x14\x39\x62\x2B\x9A\xC9\x94\x92\x2A\xA3\x22\x0C\xFF\x89\x26\x7D\x5F\x23\x2B\x47\xD7\x15\x1D\xA9\x6A\x9E\x51\x0D\x2A\x51\x9E\x81\xF9\xD4\x3B\x5E\x70\x12\x7F\x10\x32\x9C\x1E\xBB\x9D\xF8\x66\xA8", ["CN=Chambers of Commerce Root,OU=http://www.chambersign.org,O=AC Camerfirma SA CIF A82743287,C=EU"] = 
"\x30\x82\x04\xBD\x30\x82\x03\xA5\xA0\x03\x02\x01\x02\x02\x01\x00\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x7F\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x45\x55\x31\x27\x30\x25\x06\x03\x55\x04\x0A\x13\x1E\x41\x43\x20\x43\x61\x6D\x65\x72\x66\x69\x72\x6D\x61\x20\x53\x41\x20\x43\x49\x46\x20\x41\x38\x32\x37\x34\x33\x32\x38\x37\x31\x23\x30\x21\x06\x03\x55\x04\x0B\x13\x1A\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x63\x68\x61\x6D\x62\x65\x72\x73\x69\x67\x6E\x2E\x6F\x72\x67\x31\x22\x30\x20\x06\x03\x55\x04\x03\x13\x19\x43\x68\x61\x6D\x62\x65\x72\x73\x20\x6F\x66\x20\x43\x6F\x6D\x6D\x65\x72\x63\x65\x20\x52\x6F\x6F\x74\x30\x1E\x17\x0D\x30\x33\x30\x39\x33\x30\x31\x36\x31\x33\x34\x33\x5A\x17\x0D\x33\x37\x30\x39\x33\x30\x31\x36\x31\x33\x34\x34\x5A\x30\x7F\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x45\x55\x31\x27\x30\x25\x06\x03\x55\x04\x0A\x13\x1E\x41\x43\x20\x43\x61\x6D\x65\x72\x66\x69\x72\x6D\x61\x20\x53\x41\x20\x43\x49\x46\x20\x41\x38\x32\x37\x34\x33\x32\x38\x37\x31\x23\x30\x21\x06\x03\x55\x04\x0B\x13\x1A\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x63\x68\x61\x6D\x62\x65\x72\x73\x69\x67\x6E\x2E\x6F\x72\x67\x31\x22\x30\x20\x06\x03\x55\x04\x03\x13\x19\x43\x68\x61\x6D\x62\x65\x72\x73\x20\x6F\x66\x20\x43\x6F\x6D\x6D\x65\x72\x63\x65\x20\x52\x6F\x6F\x74\x30\x82\x01\x20\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0D\x00\x30\x82\x01\x08\x02\x82\x01\x01\x00\xB7\x36\x55\xE5\xA5\x5D\x18\x30\xE0\xDA\x89\x54\x91\xFC\xC8\xC7\x52\xF8\x2F\x50\xD9\xEF\xB1\x75\x73\x65\x47\x7D\x1B\x5B\xBA\x75\xC5\xFC\xA1\x88\x24\xFA\x2F\xED\xCA\x08\x4A\x39\x54\xC4\x51\x7A\xB5\xDA\x60\xEA\x38\x3C\x81\xB2\xCB\xF1\xBB\xD9\x91\x23\x3F\x48\x01\x70\x75\xA9\x05\x2A\xAD\x1F\x71\xF3\xC9\x54\x3D\x1D\x06\x6A\x40\x3E\xB3\x0C\x85\xEE\x5C\x1B\x79\xC2\x62\xC4\xB8\x36\x8E\x35\x5D\x01\x0C\x23\x04\x47\x35\xAA\x9B\x60\x4E\xA0\x66\x3D\xCB\x26\x0A\x9C\x40\xA1\xF4\x5D\x98\xBF\x71\xAB\xA5\x00\x68\x2A\xED\x83\x7A\x0F\xA2\x14\xB5\xD4\x22\xB3\x80\xB0\x3C\x0C\x5A\x51\x69\x2D\x58\x18\x8F\xED\x99\x9E\xF1\xAE\xE2\x95\xE6\xF6\x47\xA8\xD6\x0C\x0F\xB0\x58\x58\xDB\xC3\x66\x37\x9E\x9B\x91\x54\x33\x37\xD2\x94\x1C\x6A\x48\xC9\xC9\xF2\xA5\xDA\xA5\x0C\x23\xF7\x23\x0E\x9C\x32\x55\x5E\x71\x9C\x84\x05\x51\x9A\x2D\xFD\xE6\x4E\x2A\x34\x5A\xDE\xCA\x40\x37\x67\x0C\x54\x21\x55\x77\xDA\x0A\x0C\xCC\x97\xAE\x80\xDC\x94\x36\x4A\xF4\x3E\xCE\x36\x13\x1E\x53\xE4\xAC\x4E\x3A\x05\xEC\xDB\xAE\x72\x9C\x38\x8B\xD0\x39\x3B\x89\x0A\x3E\x77\xFE\x75\x02\x01\x03\xA3\x82\x01\x44\x30\x82\x01\x40\x30\x12\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x08\x30\x06\x01\x01\xFF\x02\x01\x0C\x30\x3C\x06\x03\x55\x1D\x1F\x04\x35\x30\x33\x30\x31\xA0\x2F\xA0\x2D\x86\x2B\x68\x74\x74\x70\x3A\x2F\x2F\x63\x72\x6C\x2E\x63\x68\x61\x6D\x62\x65\x72\x73\x69\x67\x6E\x2E\x6F\x72\x67\x2F\x63\x68\x61\x6D\x62\x65\x72\x73\x72\x6F\x6F\x74\x2E\x63\x72\x6C\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xE3\x94\xF5\xB1\x4D\xE9\xDB\xA1\x29\x5B\x57\x8B\x4D\x76\x06\x76\xE1\xD1\xA2\x8A\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x11\x06\x09\x60\x86\x48\x01\x86\xF8\x42\x01\x01\x04\x04\x03\x02\x00\x07\x30\x27\x06\x03\x55\x1D\x11\x04\x20\x30\x1E\x81\x1C\x63\x68\x61\x6D\x62\x65\x72\x73\x72\x6F\x6F\x74\x40\x63\x68\x61\x6D\x62\x65\x72\x73\x69\x67\x6E\x2E\x6F\x72\x67\x30\x27\x06\x03\x55\x1D\x12\x04\x20\x30\x1E\x81\x1C\x63\x68\x61\x6D\x62\x65\x72\x73\x72\x6F\x6F\x74\x40\x63\x68\x61\x6D\x62\x65\x72\x73\x69\x67\x6E\x2E\x6F\x72\x67\x30\x58\x06\x03\x55\x1D\x20\x04\x51\x30\x4F\x30\x4D\x06\x0B\x2B\x06\x01\x04\x01\x81\x87\x2E\x0A\x03\x01\x30\x3E\x30\x3C\x06\x08\x2B\x06\x01\x05\x05\x
07\x02\x01\x16\x30\x68\x74\x74\x70\x3A\x2F\x2F\x63\x70\x73\x2E\x63\x68\x61\x6D\x62\x65\x72\x73\x69\x67\x6E\x2E\x6F\x72\x67\x2F\x63\x70\x73\x2F\x63\x68\x61\x6D\x62\x65\x72\x73\x72\x6F\x6F\x74\x2E\x68\x74\x6D\x6C\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x0C\x41\x97\xC2\x1A\x86\xC0\x22\x7C\x9F\xFB\x90\xF3\x1A\xD1\x03\xB1\xEF\x13\xF9\x21\x5F\x04\x9C\xDA\xC9\xA5\x8D\x27\x6C\x96\x87\x91\xBE\x41\x90\x01\x72\x93\xE7\x1E\x7D\x5F\xF6\x89\xC6\x5D\xA7\x40\x09\x3D\xAC\x49\x45\x45\xDC\x2E\x8D\x30\x68\xB2\x09\xBA\xFB\xC3\x2F\xCC\xBA\x0B\xDF\x3F\x77\x7B\x46\x7D\x3A\x12\x24\x8E\x96\x8F\x3C\x05\x0A\x6F\xD2\x94\x28\x1D\x6D\x0C\xC0\x2E\x88\x22\xD5\xD8\xCF\x1D\x13\xC7\xF0\x48\xD7\xD7\x05\xA7\xCF\xC7\x47\x9E\x3B\x3C\x34\xC8\x80\x4F\xD4\x14\xBB\xFC\x0D\x50\xF7\xFA\xB3\xEC\x42\x5F\xA9\xDD\x6D\xC8\xF4\x75\xCF\x7B\xC1\x72\x26\xB1\x01\x1C\x5C\x2C\xFD\x7A\x4E\xB4\x01\xC5\x05\x57\xB9\xE7\x3C\xAA\x05\xD9\x88\xE9\x07\x46\x41\xCE\xEF\x41\x81\xAE\x58\xDF\x83\xA2\xAE\xCA\xD7\x77\x1F\xE7\x00\x3C\x9D\x6F\x8E\xE4\x32\x09\x1D\x4D\x78\x34\x78\x34\x3C\x94\x9B\x26\xED\x4F\x71\xC6\x19\x7A\xBD\x20\x22\x48\x5A\xFE\x4B\x7D\x03\xB7\xE7\x58\xBE\xC6\x32\x4E\x74\x1E\x68\xDD\xA8\x68\x5B\xB3\x3E\xEE\x62\x7D\xD9\x80\xE8\x0A\x75\x7A\xB7\xEE\xB4\x65\x9A\x21\x90\xE0\xAA\xD0\x98\xBC\x38\xB5\x73\x3C\x8B\xF8\xDC", @@ -139,4 +135,12 @@ redef root_certs += { ["CN=Root CA Generalitat Valenciana,OU=PKIGVA,O=Generalitat Valenciana,C=ES"] = "\x30\x82\x06\x8B\x30\x82\x05\x73\xA0\x03\x02\x01\x02\x02\x04\x3B\x45\xE5\x68\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x68\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x45\x53\x31\x1F\x30\x1D\x06\x03\x55\x04\x0A\x13\x16\x47\x65\x6E\x65\x72\x61\x6C\x69\x74\x61\x74\x20\x56\x61\x6C\x65\x6E\x63\x69\x61\x6E\x61\x31\x0F\x30\x0D\x06\x03\x55\x04\x0B\x13\x06\x50\x4B\x49\x47\x56\x41\x31\x27\x30\x25\x06\x03\x55\x04\x03\x13\x1E\x52\x6F\x6F\x74\x20\x43\x41\x20\x47\x65\x6E\x65\x72\x61\x6C\x69\x74\x61\x74\x20\x56\x61\x6C\x65\x6E\x63\x69\x61\x6E\x61\x30\x1E\x17\x0D\x30\x31\x30\x37\x30\x36\x31\x36\x32\x32\x34\x37\x5A\x17\x0D\x32\x31\x30\x37\x30\x31\x31\x35\x32\x32\x34\x37\x5A\x30\x68\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x45\x53\x31\x1F\x30\x1D\x06\x03\x55\x04\x0A\x13\x16\x47\x65\x6E\x65\x72\x61\x6C\x69\x74\x61\x74\x20\x56\x61\x6C\x65\x6E\x63\x69\x61\x6E\x61\x31\x0F\x30\x0D\x06\x03\x55\x04\x0B\x13\x06\x50\x4B\x49\x47\x56\x41\x31\x27\x30\x25\x06\x03\x55\x04\x03\x13\x1E\x52\x6F\x6F\x74\x20\x43\x41\x20\x47\x65\x6E\x65\x72\x61\x6C\x69\x74\x61\x74\x20\x56\x61\x6C\x65\x6E\x63\x69\x61\x6E\x61\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xC6\x2A\xAB\x57\x11\x37\x2F\x22\x8A\xCA\x03\x74\x1D\xCA\xED\x2D\xA2\x0B\xBC\x33\x52\x40\x26\x47\xBE\x5A\x69\xA6\x3B\x72\x36\x17\x4C\xE8\xDF\xB8\xBB\x2F\x76\xE1\x40\x46\x74\x65\x02\x90\x52\x08\xB4\xFF\xA8\x8C\xC1\xE0\xC7\x89\x56\x10\x39\x33\xEF\x68\xB4\x5F\x5F\xDA\x6D\x23\xA1\x89\x5E\x22\xA3\x4A\x06\xF0\x27\xF0\x57\xB9\xF8\xE9\x4E\x32\x77\x0A\x3F\x41\x64\xF3\xEB\x65\xEE\x76\xFE\x54\xAA\x7D\x1D\x20\xAE\xF3\xD7\x74\xC2\x0A\x5F\xF5\x08\x28\x52\x08\xCC\x55\x5D\xD2\x0F\xDB\x9A\x81\xA5\xBB\xA1\xB3\xC1\x94\xCD\x54\xE0\x32\x75\x31\x91\x1A\x62\xB2\xDE\x75\xE2\xCF\x4F\x89\xD9\x91\x90\x0F\x41\x1B\xB4\x5A\x4A\x77\xBD\x67\x83\xE0\x93\xE7\x5E\xA7\x0C\xE7\x81\xD3\xF4\x52\xAC\x53\xB2\x03\xC7\x44\x26\xFB\x79\xE5\xCB\x34\x60\x50\x10\x7B\x1B\xDB\x6B\xD7\x47\xAB\x5F\x7C\x68\xCA\x6E\x9D\x41\x03\x10\xEE\x6B\x99\x7B\x5E\x25\xA8\xC2\xAB\xE4\xC0\xF3\x5C\x9C\xE3\xBE\xCE\x31\x4C\x64\x
1E\x5E\x80\xA2\xF5\x83\x7E\x0C\xD6\xCA\x8C\x55\x8E\xBE\xE0\xBE\x49\x07\x0F\xA3\x24\x41\x7A\x58\x1D\x84\xEA\x58\x12\xC8\xE1\xB7\xED\xEF\x93\xDE\x94\x08\x31\x02\x03\x01\x00\x01\xA3\x82\x03\x3B\x30\x82\x03\x37\x30\x32\x06\x08\x2B\x06\x01\x05\x05\x07\x01\x01\x04\x26\x30\x24\x30\x22\x06\x08\x2B\x06\x01\x05\x05\x07\x30\x01\x86\x16\x68\x74\x74\x70\x3A\x2F\x2F\x6F\x63\x73\x70\x2E\x70\x6B\x69\x2E\x67\x76\x61\x2E\x65\x73\x30\x12\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x08\x30\x06\x01\x01\xFF\x02\x01\x02\x30\x82\x02\x34\x06\x03\x55\x1D\x20\x04\x82\x02\x2B\x30\x82\x02\x27\x30\x82\x02\x23\x06\x0A\x2B\x06\x01\x04\x01\xBF\x55\x02\x01\x00\x30\x82\x02\x13\x30\x82\x01\xE8\x06\x08\x2B\x06\x01\x05\x05\x07\x02\x02\x30\x82\x01\xDA\x1E\x82\x01\xD6\x00\x41\x00\x75\x00\x74\x00\x6F\x00\x72\x00\x69\x00\x64\x00\x61\x00\x64\x00\x20\x00\x64\x00\x65\x00\x20\x00\x43\x00\x65\x00\x72\x00\x74\x00\x69\x00\x66\x00\x69\x00\x63\x00\x61\x00\x63\x00\x69\x00\xF3\x00\x6E\x00\x20\x00\x52\x00\x61\x00\xED\x00\x7A\x00\x20\x00\x64\x00\x65\x00\x20\x00\x6C\x00\x61\x00\x20\x00\x47\x00\x65\x00\x6E\x00\x65\x00\x72\x00\x61\x00\x6C\x00\x69\x00\x74\x00\x61\x00\x74\x00\x20\x00\x56\x00\x61\x00\x6C\x00\x65\x00\x6E\x00\x63\x00\x69\x00\x61\x00\x6E\x00\x61\x00\x2E\x00\x0D\x00\x0A\x00\x4C\x00\x61\x00\x20\x00\x44\x00\x65\x00\x63\x00\x6C\x00\x61\x00\x72\x00\x61\x00\x63\x00\x69\x00\xF3\x00\x6E\x00\x20\x00\x64\x00\x65\x00\x20\x00\x50\x00\x72\x00\xE1\x00\x63\x00\x74\x00\x69\x00\x63\x00\x61\x00\x73\x00\x20\x00\x64\x00\x65\x00\x20\x00\x43\x00\x65\x00\x72\x00\x74\x00\x69\x00\x66\x00\x69\x00\x63\x00\x61\x00\x63\x00\x69\x00\xF3\x00\x6E\x00\x20\x00\x71\x00\x75\x00\x65\x00\x20\x00\x72\x00\x69\x00\x67\x00\x65\x00\x20\x00\x65\x00\x6C\x00\x20\x00\x66\x00\x75\x00\x6E\x00\x63\x00\x69\x00\x6F\x00\x6E\x00\x61\x00\x6D\x00\x69\x00\x65\x00\x6E\x00\x74\x00\x6F\x00\x20\x00\x64\x00\x65\x00\x20\x00\x6C\x00\x61\x00\x20\x00\x70\x00\x72\x00\x65\x00\x73\x00\x65\x00\x6E\x00\x74\x00\x65\x00\x20\x00\x41\x00\x75\x00\x74\x00\x6F\x00\x72\x00\x69\x00\x64\x00\x61\x00\x64\x00\x20\x00\x64\x00\x65\x00\x20\x00\x43\x00\x65\x00\x72\x00\x74\x00\x69\x00\x66\x00\x69\x00\x63\x00\x61\x00\x63\x00\x69\x00\xF3\x00\x6E\x00\x20\x00\x73\x00\x65\x00\x20\x00\x65\x00\x6E\x00\x63\x00\x75\x00\x65\x00\x6E\x00\x74\x00\x72\x00\x61\x00\x20\x00\x65\x00\x6E\x00\x20\x00\x6C\x00\x61\x00\x20\x00\x64\x00\x69\x00\x72\x00\x65\x00\x63\x00\x63\x00\x69\x00\xF3\x00\x6E\x00\x20\x00\x77\x00\x65\x00\x62\x00\x20\x00\x68\x00\x74\x00\x74\x00\x70\x00\x3A\x00\x2F\x00\x2F\x00\x77\x00\x77\x00\x77\x00\x2E\x00\x70\x00\x6B\x00\x69\x00\x2E\x00\x67\x00\x76\x00\x61\x00\x2E\x00\x65\x00\x73\x00\x2F\x00\x63\x00\x70\x00\x73\x30\x25\x06\x08\x2B\x06\x01\x05\x05\x07\x02\x01\x16\x19\x68\x74\x74\x70\x3A\x2F\x2F\x77\x77\x77\x2E\x70\x6B\x69\x2E\x67\x76\x61\x2E\x65\x73\x2F\x63\x70\x73\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x7B\x35\xD3\x40\xD2\x1C\x78\x19\x66\xEF\x74\x10\x28\xDC\x3E\x4F\xB2\x78\x04\xFC\x30\x81\x95\x06\x03\x55\x1D\x23\x04\x81\x8D\x30\x81\x8A\x80\x14\x7B\x35\xD3\x40\xD2\x1C\x78\x19\x66\xEF\x74\x10\x28\xDC\x3E\x4F\xB2\x78\x04\xFC\xA1\x6C\xA4\x6A\x30\x68\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x45\x53\x31\x1F\x30\x1D\x06\x03\x55\x04\x0A\x13\x16\x47\x65\x6E\x65\x72\x61\x6C\x69\x74\x61\x74\x20\x56\x61\x6C\x65\x6E\x63\x69\x61\x6E\x61\x31\x0F\x30\x0D\x06\x03\x55\x04\x0B\x13\x06\x50\x4B\x49\x47\x56\x41\x31\x27\x30\x25\x06\x03\x55\x04\x03\x13\x1E\x52\x6F\x6F\x74\x20\x43\x41\x20\x47\x65\x6E\x65\x72\x61\x6C\x69\x74\x61\x74\x20\x56\x61\x6C\x65\x6E\x63\x69\x61\x6E\x61\x82\x04\x3B\x45\xE5\x68\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\
x00\x03\x82\x01\x01\x00\x24\x61\x4E\xF5\xB5\xC8\x42\x02\x2A\xB3\x5C\x75\xAD\xC5\x6D\xCA\xE7\x94\x3F\xA5\x68\x95\x88\xC1\x54\xC0\x10\x69\xA2\x12\x2F\x18\x3F\x25\x50\xA8\x7C\x4A\xEA\xC6\x09\xD9\xF4\x75\xC6\x40\xDA\xAF\x50\x9D\x3D\xA5\x16\xBB\x6D\x31\xC6\xC7\x73\x0A\x48\xFE\x20\x72\xED\x6F\xCC\xE8\x83\x61\x16\x46\x90\x01\x95\x4B\x7D\x8E\x9A\x52\x09\x2F\xF6\x6F\x1C\xE4\xA1\x71\xCF\x8C\x2A\x5A\x17\x73\x83\x47\x4D\x0F\x36\xFB\x04\x4D\x49\x51\xE2\x14\xC9\x64\x61\xFB\xD4\x14\xE0\xF4\x9E\xB7\x34\x8F\x0A\x26\xBD\x97\x5C\xF4\x79\x3A\x4A\x30\x19\xCC\xAD\x4F\xA0\x98\x8A\xB4\x31\x97\x2A\xE2\x73\x6D\x7E\x78\xB8\xF8\x88\x89\x4F\xB1\x22\x91\x64\x4B\xF5\x50\xDE\x03\xDB\xE5\xC5\x76\xE7\x13\x66\x75\x7E\x65\xFB\x01\x9F\x93\x87\x88\x9D\xF9\x46\x57\x7C\x4D\x60\xAF\x98\x73\x13\x23\xA4\x20\x91\x81\xFA\xD0\x61\x66\xB8\x7D\xD1\xAF\xD6\x6F\x1E\x6C\x3D\xE9\x11\xFD\xA9\xF9\x82\x22\x86\x99\x33\x71\x5A\xEA\x19\x57\x3D\x91\xCD\xA9\xC0\xA3\x6E\x07\x13\xA6\xC9\xED\xF8\x68\xA3\x9E\xC3\x5A\x72\x09\x87\x28\xD1\xC4\x73\xC4\x73\x18\x5F\x50\x75\x16\x31\x9F\xB7\xE8\x7C\xC3", ["CN=A-Trust-nQual-03,OU=A-Trust-nQual-03,O=A-Trust Ges. f. Sicherheitssysteme im elektr. Datenverkehr GmbH,C=AT"] = "\x30\x82\x03\xCF\x30\x82\x02\xB7\xA0\x03\x02\x01\x02\x02\x03\x01\x6C\x1E\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\x8D\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x41\x54\x31\x48\x30\x46\x06\x03\x55\x04\x0A\x0C\x3F\x41\x2D\x54\x72\x75\x73\x74\x20\x47\x65\x73\x2E\x20\x66\x2E\x20\x53\x69\x63\x68\x65\x72\x68\x65\x69\x74\x73\x73\x79\x73\x74\x65\x6D\x65\x20\x69\x6D\x20\x65\x6C\x65\x6B\x74\x72\x2E\x20\x44\x61\x74\x65\x6E\x76\x65\x72\x6B\x65\x68\x72\x20\x47\x6D\x62\x48\x31\x19\x30\x17\x06\x03\x55\x04\x0B\x0C\x10\x41\x2D\x54\x72\x75\x73\x74\x2D\x6E\x51\x75\x61\x6C\x2D\x30\x33\x31\x19\x30\x17\x06\x03\x55\x04\x03\x0C\x10\x41\x2D\x54\x72\x75\x73\x74\x2D\x6E\x51\x75\x61\x6C\x2D\x30\x33\x30\x1E\x17\x0D\x30\x35\x30\x38\x31\x37\x32\x32\x30\x30\x30\x30\x5A\x17\x0D\x31\x35\x30\x38\x31\x37\x32\x32\x30\x30\x30\x30\x5A\x30\x81\x8D\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x41\x54\x31\x48\x30\x46\x06\x03\x55\x04\x0A\x0C\x3F\x41\x2D\x54\x72\x75\x73\x74\x20\x47\x65\x73\x2E\x20\x66\x2E\x20\x53\x69\x63\x68\x65\x72\x68\x65\x69\x74\x73\x73\x79\x73\x74\x65\x6D\x65\x20\x69\x6D\x20\x65\x6C\x65\x6B\x74\x72\x2E\x20\x44\x61\x74\x65\x6E\x76\x65\x72\x6B\x65\x68\x72\x20\x47\x6D\x62\x48\x31\x19\x30\x17\x06\x03\x55\x04\x0B\x0C\x10\x41\x2D\x54\x72\x75\x73\x74\x2D\x6E\x51\x75\x61\x6C\x2D\x30\x33\x31\x19\x30\x17\x06\x03\x55\x04\x03\x0C\x10\x41\x2D\x54\x72\x75\x73\x74\x2D\x6E\x51\x75\x61\x6C\x2D\x30\x33\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xAD\x3D\x61\x6E\x03\xF3\x90\x3B\xC0\x41\x0B\x84\x80\xCD\xEC\x2A\xA3\x9D\x6B\xBB\x6E\xC2\x42\x84\xF7\x51\x14\xE1\xA0\xA8\x2D\x51\xA3\x51\xF2\xDE\x23\xF0\x34\x44\xFF\x94\xEB\xCC\x05\x23\x95\x40\xB9\x07\x78\xA5\x25\xF6\x0A\xBD\x45\x86\xE8\xD9\xBD\xC0\x04\x8E\x85\x44\x61\xEF\x7F\xA7\xC9\xFA\xC1\x25\xCC\x85\x2C\x63\x3F\x05\x60\x73\x49\x05\xE0\x60\x78\x95\x10\x4B\xDC\xF9\x11\x59\xCE\x71\x7F\x40\x9B\x8A\xAA\x24\xDF\x0B\x42\xE2\xDB\x56\xBC\x4A\xD2\xA5\x0C\x9B\xB7\x43\x3E\xDD\x83\xD3\x26\x10\x02\xCF\xEA\x23\xC4\x49\x4E\xE5\xD3\xE9\xB4\x88\xAB\x0C\xAE\x62\x92\xD4\x65\x87\xD9\x6A\xD7\xF4\x85\x9F\xE4\x33\x22\x25\xA5\xE5\xC8\x33\xBA\xC3\xC7\x41\xDC\x5F\xC6\x6A\xCC\x00\x0E\x6D\x32\xA8\xB6\x87\x36\x00\x62\x77\x9B\x1E\x1F\x34\xCB\x90\x3C\x78\x88\x74\x05\xEB\x79\xF5\x93\x71\x65\xCA\x9D\xC7\x6B\x18\x2D\x3D\x5C\x4E\xE7\xD5\xF8\x3F\x31\x7D\x8F\x87\xE
C\x0A\x22\x2F\x23\xE9\xFE\xBB\x7D\xC9\xE0\xF4\xEC\xEB\x7C\xC4\xB0\xC3\x2D\x62\xB5\x9A\x71\xD6\xB1\x6A\xE8\xEC\xD9\xED\xD5\x72\xEC\xBE\x57\x01\xCE\x05\x55\x9F\xDE\xD1\x60\x88\x10\xB3\x02\x03\x01\x00\x01\xA3\x36\x30\x34\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x11\x06\x03\x55\x1D\x0E\x04\x0A\x04\x08\x44\x6A\x95\x67\x55\x79\x11\x4F\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x55\xD4\x54\xD1\x59\x48\x5C\xB3\x93\x85\xAA\xBF\x63\x2F\xE4\x80\xCE\x34\xA3\x34\x62\x3E\xF6\xD8\xEE\x67\x88\x31\x04\x03\x6F\x0B\xD4\x07\xFB\x4E\x75\x0F\xD3\x2E\xD3\xC0\x17\xC7\xC6\x28\xEC\x06\x0D\x11\x24\x0E\x0E\xA5\x5D\xBF\x8C\xB2\x13\x96\x71\xDC\xD4\xCE\x0E\x0D\x0A\x68\x32\x6C\xB9\x41\x31\x19\xAB\xB1\x07\x7B\x4D\x98\xD3\x5C\xB0\xD1\xF0\xA7\x42\xA0\xB5\xC4\x8E\xAF\xFE\xF1\x3F\xF4\xEF\x4F\x46\x00\x76\xEB\x02\xFB\xF9\x9D\xD2\x40\x96\xC7\x88\x3A\xB8\x9F\x11\x79\xF3\x80\x65\xA8\xBD\x1F\xD3\x78\x81\xA0\x51\x4C\x37\xB4\xA6\x5D\x25\x70\xD1\x66\xC9\x68\xF9\x2E\x11\x14\x68\xF1\x54\x98\x08\xAC\x26\x92\x0F\xDE\x89\x9E\xD4\xFA\xB3\x79\x2B\xD2\xA3\x79\xD4\xEC\x8B\xAC\x87\x53\x68\x42\x4C\x51\x51\x74\x1E\x1B\x27\x2E\xE3\xF5\x1F\x29\x74\x4D\xED\xAF\xF7\xE1\x92\x99\x81\xE8\xBE\x3A\xC7\x17\x50\xF6\xB7\xC6\xFC\x9B\xB0\x8A\x6B\xD6\x88\x03\x91\x8F\x06\x77\x3A\x85\x02\xDD\x98\xD5\x43\x78\x3F\xC6\x30\x15\xAC\x9B\x6B\xCB\x57\xB7\x89\x51\x8B\x3A\xE8\xC9\x84\x0C\xDB\xB1\x50\x20\x0A\x1A\x4A\xBA\x6A\x1A\xBD\xEC\x1B\xC8\xC5\x84\x9A\xCD", ["CN=TWCA Root Certification Authority,OU=Root CA,O=TAIWAN-CA,C=TW"] = "\x30\x82\x03\x7B\x30\x82\x02\x63\xA0\x03\x02\x01\x02\x02\x01\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x5F\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x54\x57\x31\x12\x30\x10\x06\x03\x55\x04\x0A\x0C\x09\x54\x41\x49\x57\x41\x4E\x2D\x43\x41\x31\x10\x30\x0E\x06\x03\x55\x04\x0B\x0C\x07\x52\x6F\x6F\x74\x20\x43\x41\x31\x2A\x30\x28\x06\x03\x55\x04\x03\x0C\x21\x54\x57\x43\x41\x20\x52\x6F\x6F\x74\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x30\x1E\x17\x0D\x30\x38\x30\x38\x32\x38\x30\x37\x32\x34\x33\x33\x5A\x17\x0D\x33\x30\x31\x32\x33\x31\x31\x35\x35\x39\x35\x39\x5A\x30\x5F\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x54\x57\x31\x12\x30\x10\x06\x03\x55\x04\x0A\x0C\x09\x54\x41\x49\x57\x41\x4E\x2D\x43\x41\x31\x10\x30\x0E\x06\x03\x55\x04\x0B\x0C\x07\x52\x6F\x6F\x74\x20\x43\x41\x31\x2A\x30\x28\x06\x03\x55\x04\x03\x0C\x21\x54\x57\x43\x41\x20\x52\x6F\x6F\x74\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xB0\x7E\x72\xB8\xA4\x03\x94\xE6\xA7\xDE\x09\x38\x91\x4A\x11\x40\x87\xA7\x7C\x59\x64\x14\x7B\xB5\x11\x10\xDD\xFE\xBF\xD5\xC0\xBB\x56\xE2\x85\x25\xF4\x35\x72\x0F\xF8\x53\xD0\x41\xE1\x44\x01\xC2\xB4\x1C\xC3\x31\x42\x16\x47\x85\x33\x22\x76\xB2\x0A\x6F\x0F\xE5\x25\x50\x4F\x85\x86\xBE\xBF\x98\x2E\x10\x67\x1E\xBE\x11\x05\x86\x05\x90\xC4\x59\xD0\x7C\x78\x10\xB0\x80\x5C\xB7\xE1\xC7\x2B\x75\xCB\x7C\x9F\xAE\xB5\xD1\x9D\x23\x37\x63\xA7\xDC\x42\xA2\x2D\x92\x04\x1B\x50\xC1\x7B\xB8\x3E\x1B\xC9\x56\x04\x8B\x2F\x52\x9B\xAD\xA9\x56\xE9\xC1\xFF\xAD\xA9\x58\x87\x30\xB6\x81\xF7\x97\x45\xFC\x19\x57\x3B\x2B\x6F\xE4\x47\xF4\x99\x45\xFE\x1D\xF1\xF8\x97\xA3\x88\x1D\x37\x1C\x5C\x8F\xE0\x76\x25\x9A\x50\xF8\xA0\x54\xFF\x44\x90\x76\x23\xD2\x32\xC6\xC3\xAB\x06\xBF\xFC\xFB\xBF\xF3\xAD\x7D\x92\x62\x02\x5B\x29\xD
3\x35\xA3\x93\x9A\x43\x64\x60\x5D\xB2\xFA\x32\xFF\x3B\x04\xAF\x4D\x40\x6A\xF9\xC7\xE3\xEF\x23\xFD\x6B\xCB\xE5\x0F\x8B\x38\x0D\xEE\x0A\xFC\xFE\x0F\x98\x9F\x30\x31\xDD\x6C\x52\x65\xF9\x8B\x81\xBE\x22\xE1\x1C\x58\x03\xBA\x91\x1B\x89\x07\x02\x03\x01\x00\x01\xA3\x42\x30\x40\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x6A\x38\x5B\x26\x8D\xDE\x8B\x5A\xF2\x4F\x7A\x54\x83\x19\x18\xE3\x08\x35\xA6\xBA\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x3C\xD5\x77\x3D\xDA\xDF\x89\xBA\x87\x0C\x08\x54\x6A\x20\x50\x92\xBE\xB0\x41\x3D\xB9\x26\x64\x83\x0A\x2F\xE8\x40\xC0\x97\x28\x27\x82\x30\x4A\xC9\x93\xFF\x6A\xE7\xA6\x00\x7F\x89\x42\x9A\xD6\x11\xE5\x53\xCE\x2F\xCC\xF2\xDA\x05\xC4\xFE\xE2\x50\xC4\x3A\x86\x7D\xCC\xDA\x7E\x10\x09\x3B\x92\x35\x2A\x53\xB2\xFE\xEB\x2B\x05\xD9\x6C\x5D\xE6\xD0\xEF\xD3\x6A\x66\x9E\x15\x28\x85\x7A\xE8\x82\x00\xAC\x1E\xA7\x09\x69\x56\x42\xD3\x68\x51\x18\xBE\x54\x9A\xBF\x44\x41\xBA\x49\xBE\x20\xBA\x69\x5C\xEE\xB8\x77\xCD\xCE\x6C\x1F\xAD\x83\x96\x18\x7D\x0E\xB5\x14\x39\x84\xF1\x28\xE9\x2D\xA3\x9E\x7B\x1E\x7A\x72\x5A\x83\xB3\x79\x6F\xEF\xB4\xFC\xD0\x0A\xA5\x58\x4F\x46\xDF\xFB\x6D\x79\x59\xF2\x84\x22\x52\xAE\x0F\xCC\xFB\x7C\x3B\xE7\x6A\xCA\x47\x61\xC3\x7A\xF8\xD3\x92\x04\x1F\xB8\x20\x84\xE1\x36\x54\x16\xC7\x40\xDE\x3B\x8A\x73\xDC\xDF\xC6\x09\x4C\xDF\xEC\xDA\xFF\xD4\x53\x42\xA1\xC9\xF2\x62\x1D\x22\x83\x3C\x97\xC5\xF9\x19\x62\x27\xAC\x65\x22\xD7\xD3\x3C\xC6\xE5\x8E\xB2\x53\xCC\x49\xCE\xBC\x30\xFE\x7B\x0E\x33\x90\xFB\xED\xD2\x14\x91\x1F\x07\xAF", + ["OU=Security Communication RootCA2,O=SECOM Trust Systems CO.\,LTD.,C=JP"] = "\x30\x82\x03\x77\x30\x82\x02\x5F\xA0\x03\x02\x01\x02\x02\x01\x00\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x30\x5D\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4A\x50\x31\x25\x30\x23\x06\x03\x55\x04\x0A\x13\x1C\x53\x45\x43\x4F\x4D\x20\x54\x72\x75\x73\x74\x20\x53\x79\x73\x74\x65\x6D\x73\x20\x43\x4F\x2E\x2C\x4C\x54\x44\x2E\x31\x27\x30\x25\x06\x03\x55\x04\x0B\x13\x1E\x53\x65\x63\x75\x72\x69\x74\x79\x20\x43\x6F\x6D\x6D\x75\x6E\x69\x63\x61\x74\x69\x6F\x6E\x20\x52\x6F\x6F\x74\x43\x41\x32\x30\x1E\x17\x0D\x30\x39\x30\x35\x32\x39\x30\x35\x30\x30\x33\x39\x5A\x17\x0D\x32\x39\x30\x35\x32\x39\x30\x35\x30\x30\x33\x39\x5A\x30\x5D\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4A\x50\x31\x25\x30\x23\x06\x03\x55\x04\x0A\x13\x1C\x53\x45\x43\x4F\x4D\x20\x54\x72\x75\x73\x74\x20\x53\x79\x73\x74\x65\x6D\x73\x20\x43\x4F\x2E\x2C\x4C\x54\x44\x2E\x31\x27\x30\x25\x06\x03\x55\x04\x0B\x13\x1E\x53\x65\x63\x75\x72\x69\x74\x79\x20\x43\x6F\x6D\x6D\x75\x6E\x69\x63\x61\x74\x69\x6F\x6E\x20\x52\x6F\x6F\x74\x43\x41\x32\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xD0\x15\x39\x52\xB1\x52\xB3\xBA\xC5\x59\x82\xC4\x5D\x52\xAE\x3A\x43\x65\x80\x4B\xC7\xF2\x96\xBC\xDB\x36\x97\xD6\xA6\x64\x8C\xA8\x5E\xF0\xE3\x0A\x1C\xF7\xDF\x97\x3D\x4B\xAE\xF6\x5D\xEC\x21\xB5\x41\xAB\xCD\xB9\x7E\x76\x9F\xBE\xF9\x3E\x36\x34\xA0\x3B\xC1\xF6\x31\x11\x45\x74\x93\x3D\x57\x80\xC5\xF9\x89\x99\xCA\xE5\xAB\x6A\xD4\xB5\xDA\x41\x90\x10\xC1\xD6\xD6\x42\x89\xC2\xBF\xF4\x38\x12\x95\x4C\x54\x05\xF7\x36\xE4\x45\x83\x7B\x14\x65\xD6\xDC\x0C\x4D\xD1\xDE\x7E\x0C\xAB\x3B\xC4\x15\xBE\x3A\x56\xA6\x5A\x6F\x76\x69\x52\xA9\x7A\xB9\xC8\xEB\x6A\x9A\x5D\x52\xD0\x2D\x0A\x6B\x35\x16\x09\x10\x84\xD0\x6A\xCA\x3A\x06\x00\x37\x47\xE4\x7E\x57\x4F\x3F\x8B\xEB\x67\xB8\x88\xAA\xC5\xBE\x53\x55\xB2\x91\xC4\x7D\xB
9\xB0\x85\x19\x06\x78\x2E\xDB\x61\x1A\xFA\x85\xF5\x4A\x91\xA1\xE7\x16\xD5\x8E\xA2\x39\xDF\x94\xB8\x70\x1F\x28\x3F\x8B\xFC\x40\x5E\x63\x83\x3C\x83\x2A\x1A\x99\x6B\xCF\xDE\x59\x6A\x3B\xFC\x6F\x16\xD7\x1F\xFD\x4A\x10\xEB\x4E\x82\x16\x3A\xAC\x27\x0C\x53\xF1\xAD\xD5\x24\xB0\x6B\x03\x50\xC1\x2D\x3C\x16\xDD\x44\x34\x27\x1A\x75\xFB\x02\x03\x01\x00\x01\xA3\x42\x30\x40\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x0A\x85\xA9\x77\x65\x05\x98\x7C\x40\x81\xF8\x0F\x97\x2C\x38\xF1\x0A\xEC\x3C\xCF\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x03\x82\x01\x01\x00\x4C\x3A\xA3\x44\xAC\xB9\x45\xB1\xC7\x93\x7E\xC8\x0B\x0A\x42\xDF\x64\xEA\x1C\xEE\x59\x6C\x08\xBA\x89\x5F\x6A\xCA\x4A\x95\x9E\x7A\x8F\x07\xC5\xDA\x45\x72\x82\x71\x0E\x3A\xD2\xCC\x6F\xA7\xB4\xA1\x23\xBB\xF6\x24\x9F\xCB\x17\xFE\x8C\xA6\xCE\xC2\xD2\xDB\xCC\x8D\xFC\x71\xFC\x03\x29\xC1\x6C\x5D\x33\x5F\x64\xB6\x65\x3B\x89\x6F\x18\x76\x78\xF5\xDC\xA2\x48\x1F\x19\x3F\x8E\x93\xEB\xF1\xFA\x17\xEE\xCD\x4E\xE3\x04\x12\x55\xD6\xE5\xE4\xDD\xFB\x3E\x05\x7C\xE2\x1D\x5E\xC6\xA7\xBC\x97\x4F\x68\x3A\xF5\xE9\x2E\x0A\x43\xB6\xAF\x57\x5C\x62\x68\x7C\xB7\xFD\xA3\x8A\x84\xA0\xAC\x62\xBE\x2B\x09\x87\x34\xF0\x6A\x01\xBB\x9B\x29\x56\x3C\xFE\x00\x37\xCF\x23\x6C\xF1\x4E\xAA\xB6\x74\x46\x12\x6C\x91\xEE\x34\xD5\xEC\x9A\x91\xE7\x44\xBE\x90\x31\x72\xD5\x49\x02\xF6\x02\xE5\xF4\x1F\xEB\x7C\xD9\x96\x55\xA9\xFF\xEC\x8A\xF9\x99\x47\xFF\x35\x5A\x02\xAA\x04\xCB\x8A\x5B\x87\x71\x29\x91\xBD\xA4\xB4\x7A\x0D\xBD\x9A\xF5\x57\x23\x00\x07\x21\x17\x3F\x4A\x39\xD1\x05\x49\x0B\xA7\xB6\x37\x81\xA5\x5D\x8C\xAA\x33\x5E\x81\x28\x7C\xA7\x7D\x27\xEB\x00\xAE\x8D\x37", + ["CN=EC-ACC,OU=Jerarquia Entitats de Certificacio Catalanes,OU=Vegeu https://www.catcert.net/verarrel (c)03,OU=Serveis Publics de Certificacio,O=Agencia Catalana de Certificacio (NIF Q-0801176-I),C=ES"] = 
"\x30\x82\x05\x56\x30\x82\x04\x3E\xA0\x03\x02\x01\x02\x02\x10\xEE\x2B\x3D\xEB\xD4\x21\xDE\x14\xA8\x62\xAC\x04\xF3\xDD\xC4\x01\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\xF3\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x45\x53\x31\x3B\x30\x39\x06\x03\x55\x04\x0A\x13\x32\x41\x67\x65\x6E\x63\x69\x61\x20\x43\x61\x74\x61\x6C\x61\x6E\x61\x20\x64\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x63\x69\x6F\x20\x28\x4E\x49\x46\x20\x51\x2D\x30\x38\x30\x31\x31\x37\x36\x2D\x49\x29\x31\x28\x30\x26\x06\x03\x55\x04\x0B\x13\x1F\x53\x65\x72\x76\x65\x69\x73\x20\x50\x75\x62\x6C\x69\x63\x73\x20\x64\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x63\x69\x6F\x31\x35\x30\x33\x06\x03\x55\x04\x0B\x13\x2C\x56\x65\x67\x65\x75\x20\x68\x74\x74\x70\x73\x3A\x2F\x2F\x77\x77\x77\x2E\x63\x61\x74\x63\x65\x72\x74\x2E\x6E\x65\x74\x2F\x76\x65\x72\x61\x72\x72\x65\x6C\x20\x28\x63\x29\x30\x33\x31\x35\x30\x33\x06\x03\x55\x04\x0B\x13\x2C\x4A\x65\x72\x61\x72\x71\x75\x69\x61\x20\x45\x6E\x74\x69\x74\x61\x74\x73\x20\x64\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x63\x69\x6F\x20\x43\x61\x74\x61\x6C\x61\x6E\x65\x73\x31\x0F\x30\x0D\x06\x03\x55\x04\x03\x13\x06\x45\x43\x2D\x41\x43\x43\x30\x1E\x17\x0D\x30\x33\x30\x31\x30\x37\x32\x33\x30\x30\x30\x30\x5A\x17\x0D\x33\x31\x30\x31\x30\x37\x32\x32\x35\x39\x35\x39\x5A\x30\x81\xF3\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x45\x53\x31\x3B\x30\x39\x06\x03\x55\x04\x0A\x13\x32\x41\x67\x65\x6E\x63\x69\x61\x20\x43\x61\x74\x61\x6C\x61\x6E\x61\x20\x64\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x63\x69\x6F\x20\x28\x4E\x49\x46\x20\x51\x2D\x30\x38\x30\x31\x31\x37\x36\x2D\x49\x29\x31\x28\x30\x26\x06\x03\x55\x04\x0B\x13\x1F\x53\x65\x72\x76\x65\x69\x73\x20\x50\x75\x62\x6C\x69\x63\x73\x20\x64\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x63\x69\x6F\x31\x35\x30\x33\x06\x03\x55\x04\x0B\x13\x2C\x56\x65\x67\x65\x75\x20\x68\x74\x74\x70\x73\x3A\x2F\x2F\x77\x77\x77\x2E\x63\x61\x74\x63\x65\x72\x74\x2E\x6E\x65\x74\x2F\x76\x65\x72\x61\x72\x72\x65\x6C\x20\x28\x63\x29\x30\x33\x31\x35\x30\x33\x06\x03\x55\x04\x0B\x13\x2C\x4A\x65\x72\x61\x72\x71\x75\x69\x61\x20\x45\x6E\x74\x69\x74\x61\x74\x73\x20\x64\x65\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x63\x69\x6F\x20\x43\x61\x74\x61\x6C\x61\x6E\x65\x73\x31\x0F\x30\x0D\x06\x03\x55\x04\x03\x13\x06\x45\x43\x2D\x41\x43\x43\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xB3\x22\xC7\x4F\xE2\x97\x42\x95\x88\x47\x83\x40\xF6\x1D\x17\xF3\x83\x73\x24\x1E\x51\xF3\x98\x8A\xC3\x92\xB8\xFF\x40\x90\x05\x70\x87\x60\xC9\x00\xA9\xB5\x94\x65\x19\x22\x15\x17\xC2\x43\x6C\x66\x44\x9A\x0D\x04\x3E\x39\x6F\xA5\x4B\x7A\xAA\x63\xB7\x8A\x44\x9D\xD9\x63\x91\x84\x66\xE0\x28\x0F\xBA\x42\xE3\x6E\x8E\xF7\x14\x27\x93\x69\xEE\x91\x0E\xA3\x5F\x0E\xB1\xEB\x66\xA2\x72\x4F\x12\x13\x86\x65\x7A\x3E\xDB\x4F\x07\xF4\xA7\x09\x60\xDA\x3A\x42\x99\xC7\xB2\x7F\xB3\x16\x95\x1C\xC7\xF9\x34\xB5\x94\x85\xD5\x99\x5E\xA0\x48\xA0\x7E\xE7\x17\x65\xB8\xA2\x75\xB8\x1E\xF3\xE5\x42\x7D\xAF\xED\xF3\x8A\x48\x64\x5D\x82\x14\x93\xD8\xC0\xE4\xFF\xB3\x50\x72\xF2\x76\xF6\xB3\x5D\x42\x50\x79\xD0\x94\x3E\x6B\x0C\x00\xBE\xD8\x6B\x0E\x4E\x2A\xEC\x3E\xD2\xCC\x82\xA2\x18\x65\x33\x13\x77\x9E\x9A\x5D\x1A\x13\xD8\xC3\xDB\x3D\xC8\x97\x7A\xEE\x70\xED\xA7\xE6\x7C\xDB\x71\xCF\x2D\x94\x62\xDF\x6D\xD6\xF5\x38\xBE\x3F\xA5\x85\x0A\x19\xB8\xA8\xD8\x09\x75\x42\x70\xC4\xEA\xEF\xCB\x0E\xC8\x34\xA8\x12\x22\x98\x0C\xB8\x13\x94\xB6\x4B\xEC\xF0\xD0\x90\xE7\x27\x02\x03\x01\x00\x01\xA3\x81\xE3\x30\x81\xE0\x30\x1D\x06\x03\x55\x1D\x11\x04\x16\x30\x14\x81\x12\x65\x63\x5F\x61\x63\x
63\x40\x63\x61\x74\x63\x65\x72\x74\x2E\x6E\x65\x74\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xA0\xC3\x8B\x44\xAA\x37\xA5\x45\xBF\x97\x80\x5A\xD1\xF1\x78\xA2\x9B\xE9\x5D\x8D\x30\x7F\x06\x03\x55\x1D\x20\x04\x78\x30\x76\x30\x74\x06\x0B\x2B\x06\x01\x04\x01\xF5\x78\x01\x03\x01\x0A\x30\x65\x30\x2C\x06\x08\x2B\x06\x01\x05\x05\x07\x02\x01\x16\x20\x68\x74\x74\x70\x73\x3A\x2F\x2F\x77\x77\x77\x2E\x63\x61\x74\x63\x65\x72\x74\x2E\x6E\x65\x74\x2F\x76\x65\x72\x61\x72\x72\x65\x6C\x30\x35\x06\x08\x2B\x06\x01\x05\x05\x07\x02\x02\x30\x29\x1A\x27\x56\x65\x67\x65\x75\x20\x68\x74\x74\x70\x73\x3A\x2F\x2F\x77\x77\x77\x2E\x63\x61\x74\x63\x65\x72\x74\x2E\x6E\x65\x74\x2F\x76\x65\x72\x61\x72\x72\x65\x6C\x20\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\xA0\x48\x5B\x82\x01\xF6\x4D\x48\xB8\x39\x55\x35\x9C\x80\x7A\x53\x99\xD5\x5A\xFF\xB1\x71\x3B\xCC\x39\x09\x94\x5E\xD6\xDA\xEF\xBE\x01\x5B\x5D\xD3\x1E\xD8\xFD\x7D\x4F\xCD\xA0\x41\xE0\x34\x93\xBF\xCB\xE2\x86\x9C\x37\x92\x90\x56\x1C\xDC\xEB\x29\x05\xE5\xC4\x9E\xC7\x35\xDF\x8A\x0C\xCD\xC5\x21\x43\xE9\xAA\x88\xE5\x35\xC0\x19\x42\x63\x5A\x02\x5E\xA4\x48\x18\x3A\x85\x6F\xDC\x9D\xBC\x3F\x9D\x9C\xC1\x87\xB8\x7A\x61\x08\xE9\x77\x0B\x7F\x70\xAB\x7A\xDD\xD9\x97\x2C\x64\x1E\x85\xBF\xBC\x74\x96\xA1\xC3\x7A\x12\xEC\x0C\x1A\x6E\x83\x0C\x3C\xE8\x72\x46\x9F\xFB\x48\xD5\x5E\x97\xE6\xB1\xA1\xF8\xE4\xEF\x46\x25\x94\x9C\x89\xDB\x69\x38\xBE\xEC\x5C\x0E\x56\xC7\x65\x51\xE5\x50\x88\x88\xBF\x42\xD5\x2B\x3D\xE5\xF9\xBA\x9E\x2E\xB3\xCA\xF4\x73\x92\x02\x0B\xBE\x4C\x66\xEB\x20\xFE\xB9\xCB\xB5\x99\x7F\xE6\xB6\x13\xFA\xCA\x4B\x4D\xD9\xEE\x53\x46\x06\x3B\xC6\x4E\xAD\x93\x5A\x81\x7E\x6C\x2A\x4B\x6A\x05\x45\x8C\xF2\x21\xA4\x31\x90\x87\x6C\x65\x9C\x9D\xA5\x60\x95\x3A\x52\x7F\xF5\xD1\xAB\x08\x6E\xF3\xEE\x5B\xF9\x88\x3D\x7E\xB8\x6F\x6E\x03\xE4\x42", + ["CN=Hellenic Academic and Research Institutions RootCA 2011,O=Hellenic Academic and Research Institutions Cert. 
Authority,C=GR"] = "\x30\x82\x04\x31\x30\x82\x03\x19\xA0\x03\x02\x01\x02\x02\x01\x00\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x81\x95\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x52\x31\x44\x30\x42\x06\x03\x55\x04\x0A\x13\x3B\x48\x65\x6C\x6C\x65\x6E\x69\x63\x20\x41\x63\x61\x64\x65\x6D\x69\x63\x20\x61\x6E\x64\x20\x52\x65\x73\x65\x61\x72\x63\x68\x20\x49\x6E\x73\x74\x69\x74\x75\x74\x69\x6F\x6E\x73\x20\x43\x65\x72\x74\x2E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x31\x40\x30\x3E\x06\x03\x55\x04\x03\x13\x37\x48\x65\x6C\x6C\x65\x6E\x69\x63\x20\x41\x63\x61\x64\x65\x6D\x69\x63\x20\x61\x6E\x64\x20\x52\x65\x73\x65\x61\x72\x63\x68\x20\x49\x6E\x73\x74\x69\x74\x75\x74\x69\x6F\x6E\x73\x20\x52\x6F\x6F\x74\x43\x41\x20\x32\x30\x31\x31\x30\x1E\x17\x0D\x31\x31\x31\x32\x30\x36\x31\x33\x34\x39\x35\x32\x5A\x17\x0D\x33\x31\x31\x32\x30\x31\x31\x33\x34\x39\x35\x32\x5A\x30\x81\x95\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x52\x31\x44\x30\x42\x06\x03\x55\x04\x0A\x13\x3B\x48\x65\x6C\x6C\x65\x6E\x69\x63\x20\x41\x63\x61\x64\x65\x6D\x69\x63\x20\x61\x6E\x64\x20\x52\x65\x73\x65\x61\x72\x63\x68\x20\x49\x6E\x73\x74\x69\x74\x75\x74\x69\x6F\x6E\x73\x20\x43\x65\x72\x74\x2E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x31\x40\x30\x3E\x06\x03\x55\x04\x03\x13\x37\x48\x65\x6C\x6C\x65\x6E\x69\x63\x20\x41\x63\x61\x64\x65\x6D\x69\x63\x20\x61\x6E\x64\x20\x52\x65\x73\x65\x61\x72\x63\x68\x20\x49\x6E\x73\x74\x69\x74\x75\x74\x69\x6F\x6E\x73\x20\x52\x6F\x6F\x74\x43\x41\x20\x32\x30\x31\x31\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xA9\x53\x00\xE3\x2E\xA6\xF6\x8E\xFA\x60\xD8\x2D\x95\x3E\xF8\x2C\x2A\x54\x4E\xCD\xB9\x84\x61\x94\x58\x4F\x8F\x3D\x8B\xE4\x43\xF3\x75\x89\x8D\x51\xE4\xC3\x37\xD2\x8A\x88\x4D\x79\x1E\xB7\x12\xDD\x43\x78\x4A\x8A\x92\xE6\xD7\x48\xD5\x0F\xA4\x3A\x29\x44\x35\xB8\x07\xF6\x68\x1D\x55\xCD\x38\x51\xF0\x8C\x24\x31\x85\xAF\x83\xC9\x7D\xE9\x77\xAF\xED\x1A\x7B\x9D\x17\xF9\xB3\x9D\x38\x50\x0F\xA6\x5A\x79\x91\x80\xAF\x37\xAE\xA6\xD3\x31\xFB\xB5\x26\x09\x9D\x3C\x5A\xEF\x51\xC5\x2B\xDF\x96\x5D\xEB\x32\x1E\x02\xDA\x70\x49\xEC\x6E\x0C\xC8\x9A\x37\x8D\xF7\xF1\x36\x60\x4B\x26\x2C\x82\x9E\xD0\x78\xF3\x0D\x0F\x63\xA4\x51\x30\xE1\xF9\x2B\x27\x12\x07\xD8\xEA\xBD\x18\x62\x98\xB0\x59\x37\x7D\xBE\xEE\xF3\x20\x51\x42\x5A\x83\xEF\x93\xBA\x69\x15\xF1\x62\x9D\x9F\x99\x39\x82\xA1\xB7\x74\x2E\x8B\xD4\xC5\x0B\x7B\x2F\xF0\xC8\x0A\xDA\x3D\x79\x0A\x9A\x93\x1C\xA5\x28\x72\x73\x91\x43\x9A\xA7\xD1\x4D\x85\x84\xB9\xA9\x74\x8F\x14\x40\xC7\xDC\xDE\xAC\x41\x64\x6C\xB4\x19\x9B\x02\x63\x6D\x24\x64\x8F\x44\xB2\x25\xEA\xCE\x5D\x74\x0C\x63\x32\x5C\x8D\x87\xE5\x02\x03\x01\x00\x01\xA3\x81\x89\x30\x81\x86\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0B\x06\x03\x55\x1D\x0F\x04\x04\x03\x02\x01\x06\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xA6\x91\x42\xFD\x13\x61\x4A\x23\x9E\x08\xA4\x29\xE5\xD8\x13\x04\x23\xEE\x41\x25\x30\x47\x06\x03\x55\x1D\x1E\x04\x40\x30\x3E\xA0\x3C\x30\x05\x82\x03\x2E\x67\x72\x30\x05\x82\x03\x2E\x65\x75\x30\x06\x82\x04\x2E\x65\x64\x75\x30\x06\x82\x04\x2E\x6F\x72\x67\x30\x05\x81\x03\x2E\x67\x72\x30\x05\x81\x03\x2E\x65\x75\x30\x06\x81\x04\x2E\x65\x64\x75\x30\x06\x81\x04\x2E\x6F\x72\x67\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x1F\xEF\x79\x41\xE1\x7B\x6E\x3F\xB2\x8C\x86\x37\x42\x4A\x4E\x1C\x37\x1E\x8D\x66\xBA\x24\x81\xC9\x4F\x12\x0F\x21\xC0\x03\x97\x86\x25\x6D\x5D\xD3\x22\x29\xA8\x6C\xA2\x0D\xA9\xEB\x3D\x06\x5B\x99\x3A\xC7\xCC\xC3\x9A\x34\x7F\xAB\x0E\xC8\x4E\x1C\xE1\xFA\xE
4\xDC\xCD\x0D\xBE\xBF\x24\xFE\x6C\xE7\x6B\xC2\x0D\xC8\x06\x9E\x4E\x8D\x61\x28\xA6\x6A\xFD\xE5\xF6\x62\xEA\x18\x3C\x4E\xA0\x53\x9D\xB2\x3A\x9C\xEB\xA5\x9C\x91\x16\xB6\x4D\x82\xE0\x0C\x05\x48\xA9\x6C\xF5\xCC\xF8\xCB\x9D\x49\xB4\xF0\x02\xA5\xFD\x70\x03\xED\x8A\x21\xA5\xAE\x13\x86\x49\xC3\x33\x73\xBE\x87\x3B\x74\x8B\x17\x45\x26\x4C\x16\x91\x83\xFE\x67\x7D\xCD\x4D\x63\x67\xFA\xF3\x03\x12\x96\x78\x06\x8D\xB1\x67\xED\x8E\x3F\xBE\x9F\x4F\x02\xF5\xB3\x09\x2F\xF3\x4C\x87\xDF\x2A\xCB\x95\x7C\x01\xCC\xAC\x36\x7A\xBF\xA2\x73\x7A\xF7\x8F\xC1\xB5\x9A\xA1\x14\xB2\x8F\x33\x9F\x0D\xEF\x22\xDC\x66\x7B\x84\xBD\x45\x17\x06\x3D\x3C\xCA\xB9\x77\x34\x8F\xCA\xEA\xCF\x3F\x31\x3E\xE3\x88\xE3\x80\x49\x25\xC8\x97\xB5\x9D\x9A\x99\x4D\xB0\x3C\xF8\x4A\x00\x9B\x64\xDD\x9F\x39\x4B\xD1\x27\xD7\xB8", + ["CN=Actalis Authentication Root CA,O=Actalis S.p.A./03358520967,L=Milan,C=IT"] = "\x30\x82\x05\xBB\x30\x82\x03\xA3\xA0\x03\x02\x01\x02\x02\x08\x57\x0A\x11\x97\x42\xC4\xE3\xCC\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x30\x6B\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x49\x54\x31\x0E\x30\x0C\x06\x03\x55\x04\x07\x0C\x05\x4D\x69\x6C\x61\x6E\x31\x23\x30\x21\x06\x03\x55\x04\x0A\x0C\x1A\x41\x63\x74\x61\x6C\x69\x73\x20\x53\x2E\x70\x2E\x41\x2E\x2F\x30\x33\x33\x35\x38\x35\x32\x30\x39\x36\x37\x31\x27\x30\x25\x06\x03\x55\x04\x03\x0C\x1E\x41\x63\x74\x61\x6C\x69\x73\x20\x41\x75\x74\x68\x65\x6E\x74\x69\x63\x61\x74\x69\x6F\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x1E\x17\x0D\x31\x31\x30\x39\x32\x32\x31\x31\x32\x32\x30\x32\x5A\x17\x0D\x33\x30\x30\x39\x32\x32\x31\x31\x32\x32\x30\x32\x5A\x30\x6B\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x49\x54\x31\x0E\x30\x0C\x06\x03\x55\x04\x07\x0C\x05\x4D\x69\x6C\x61\x6E\x31\x23\x30\x21\x06\x03\x55\x04\x0A\x0C\x1A\x41\x63\x74\x61\x6C\x69\x73\x20\x53\x2E\x70\x2E\x41\x2E\x2F\x30\x33\x33\x35\x38\x35\x32\x30\x39\x36\x37\x31\x27\x30\x25\x06\x03\x55\x04\x03\x0C\x1E\x41\x63\x74\x61\x6C\x69\x73\x20\x41\x75\x74\x68\x65\x6E\x74\x69\x63\x61\x74\x69\x6F\x6E\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x82\x02\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x02\x0F\x00\x30\x82\x02\x0A\x02\x82\x02\x01\x00\xA7\xC6\xC4\xA5\x29\xA4\x2C\xEF\xE5\x18\xC5\xB0\x50\xA3\x6F\x51\x3B\x9F\x0A\x5A\xC9\xC2\x48\x38\x0A\xC2\x1C\xA0\x18\x7F\x91\xB5\x87\xB9\x40\x3F\xDD\x1D\x68\x1F\x08\x83\xD5\x2D\x1E\x88\xA0\xF8\x8F\x56\x8F\x6D\x99\x02\x92\x90\x16\xD5\x5F\x08\x6C\x89\xD7\xE1\xAC\xBC\x20\xC2\xB1\xE0\x83\x51\x8A\x69\x4D\x00\x96\x5A\x6F\x2F\xC0\x44\x7E\xA3\x0E\xE4\x91\xCD\x58\xEE\xDC\xFB\xC7\x1E\x45\x47\xDD\x27\xB9\x08\x01\x9F\xA6\x21\x1D\xF5\x41\x2D\x2F\x4C\xFD\x28\xAD\xE0\x8A\xAD\x22\xB4\x56\x65\x8E\x86\x54\x8F\x93\x43\x29\xDE\x39\x46\x78\xA3\x30\x23\xBA\xCD\xF0\x7D\x13\x57\xC0\x5D\xD2\x83\x6B\x48\x4C\xC4\xAB\x9F\x80\x5A\x5B\x3A\xBD\xC9\xA7\x22\x3F\x80\x27\x33\x5B\x0E\xB7\x8A\x0C\x5D\x07\x37\x08\xCB\x6C\xD2\x7A\x47\x22\x44\x35\xC5\xCC\xCC\x2E\x8E\xDD\x2A\xED\xB7\x7D\x66\x0D\x5F\x61\x51\x22\x55\x1B\xE3\x46\xE3\xE3\x3D\xD0\x35\x62\x9A\xDB\xAF\x14\xC8\x5B\xA1\xCC\x89\x1B\xE1\x30\x26\xFC\xA0\x9B\x1F\x81\xA7\x47\x1F\x04\xEB\xA3\x39\x92\x06\x9F\x99\xD3\xBF\xD3\xEA\x4F\x50\x9C\x19\xFE\x96\x87\x1E\x3C\x65\xF6\xA3\x18\x24\x83\x86\x10\xE7\x54\x3E\xA8\x3A\x76\x24\x4F\x81\x21\xC5\xE3\x0F\x02\xF8\x93\x94\x47\x20\xBB\xFE\xD4\x0E\xD3\x68\xB9\xDD\xC4\x7A\x84\x82\xE3\x53\x54\x79\xDD\xDB\x9C\xD2\xF2\x07\x9B\x2E\xB6\xBC\x3E\xED\x85\x6D\xEF\x25\x11\xF2\x97\x1A\x42\x61\xF7\x4A\x97\xE8\x8B\xB1\x10\x07\xFA\x65\x81\xB2\xA2\x39\xCF\xF7\x3C\xFF\x18\xFB\xC6\xF1\x5A\x8B\x59\xE2\x02\xAC\x7B\x92\xD0\x4E\x14\x4F\x59\x45\xF6\x0C\x5E\x28\x
5F\xB0\xE8\x3F\x45\xCF\xCF\xAF\x9B\x6F\xFB\x84\xD3\x77\x5A\x95\x6F\xAC\x94\x84\x9E\xEE\xBC\xC0\x4A\x8F\x4A\x93\xF8\x44\x21\xE2\x31\x45\x61\x50\x4E\x10\xD8\xE3\x35\x7C\x4C\x19\xB4\xDE\x05\xBF\xA3\x06\x9F\xC8\xB5\xCD\xE4\x1F\xD7\x17\x06\x0D\x7A\x95\x74\x55\x0D\x68\x1A\xFC\x10\x1B\x62\x64\x9D\x6D\xE0\x95\xA0\xC3\x94\x07\x57\x0D\x14\xE6\xBD\x05\xFB\xB8\x9F\xE6\xDF\x8B\xE2\xC6\xE7\x7E\x96\xF6\x53\xC5\x80\x34\x50\x28\x58\xF0\x12\x50\x71\x17\x30\xBA\xE6\x78\x63\xBC\xF4\xB2\xAD\x9B\x2B\xB2\xFE\xE1\x39\x8C\x5E\xBA\x0B\x20\x94\xDE\x7B\x83\xB8\xFF\xE3\x56\x8D\xB7\x11\xE9\x3B\x8C\xF2\xB1\xC1\x5D\x9D\xA4\x0B\x4C\x2B\xD9\xB2\x18\xF5\xB5\x9F\x4B\x02\x03\x01\x00\x01\xA3\x63\x30\x61\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x52\xD8\x88\x3A\xC8\x9F\x78\x66\xED\x89\xF3\x7B\x38\x70\x94\xC9\x02\x02\x36\xD0\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\x52\xD8\x88\x3A\xC8\x9F\x78\x66\xED\x89\xF3\x7B\x38\x70\x94\xC9\x02\x02\x36\xD0\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x03\x82\x02\x01\x00\x0B\x7B\x72\x87\xC0\x60\xA6\x49\x4C\x88\x58\xE6\x1D\x88\xF7\x14\x64\x48\xA6\xD8\x58\x0A\x0E\x4F\x13\x35\xDF\x35\x1D\xD4\xED\x06\x31\xC8\x81\x3E\x6A\xD5\xDD\x3B\x1A\x32\xEE\x90\x3D\x11\xD2\x2E\xF4\x8E\xC3\x63\x2E\x23\x66\xB0\x67\xBE\x6F\xB6\xC0\x13\x39\x60\xAA\xA2\x34\x25\x93\x75\x52\xDE\xA7\x9D\xAD\x0E\x87\x89\x52\x71\x6A\x16\x3C\x19\x1D\x83\xF8\x9A\x29\x65\xBE\xF4\x3F\x9A\xD9\xF0\xF3\x5A\x87\x21\x71\x80\x4D\xCB\xE0\x38\x9B\x3F\xBB\xFA\xE0\x30\x4D\xCF\x86\xD3\x65\x10\x19\x18\xD1\x97\x02\xB1\x2B\x72\x42\x68\xAC\xA0\xBD\x4E\x5A\xDA\x18\xBF\x6B\x98\x81\xD0\xFD\x9A\xBE\x5E\x15\x48\xCD\x11\x15\xB9\xC0\x29\x5C\xB4\xE8\x88\xF7\x3E\x36\xAE\xB7\x62\xFD\x1E\x62\xDE\x70\x78\x10\x1C\x48\x5B\xDA\xBC\xA4\x38\xBA\x67\xED\x55\x3E\x5E\x57\xDF\xD4\x03\x40\x4C\x81\xA4\xD2\x4F\x63\xA7\x09\x42\x09\x14\xFC\x00\xA9\xC2\x80\x73\x4F\x2E\xC0\x40\xD9\x11\x7B\x48\xEA\x7A\x02\xC0\xD3\xEB\x28\x01\x26\x58\x74\xC1\xC0\x73\x22\x6D\x93\x95\xFD\x39\x7D\xBB\x2A\xE3\xF6\x82\xE3\x2C\x97\x5F\x4E\x1F\x91\x94\xFA\xFE\x2C\xA3\xD8\x76\x1A\xB8\x4D\xB2\x38\x4F\x9B\xFA\x1D\x48\x60\x79\x26\xE2\xF3\xFD\xA9\xD0\x9A\xE8\x70\x8F\x49\x7A\xD6\xE5\xBD\x0A\x0E\xDB\x2D\xF3\x8D\xBF\xEB\xE3\xA4\x7D\xCB\xC7\x95\x71\xE8\xDA\xA3\x7C\xC5\xC2\xF8\x74\x92\x04\x1B\x86\xAC\xA4\x22\x53\x40\xB6\xAC\xFE\x4C\x76\xCF\xFB\x94\x32\xC0\x35\x9F\x76\x3F\x6E\xE5\x90\x6E\xA0\xA6\x26\xA2\xB8\x2C\xBE\xD1\x2B\x85\xFD\xA7\x68\xC8\xBA\x01\x2B\xB1\x6C\x74\x1D\xB8\x73\x95\xE7\xEE\xB7\xC7\x25\xF0\x00\x4C\x00\xB2\x7E\xB6\x0B\x8B\x1C\xF3\xC0\x50\x9E\x25\xB9\xE0\x08\xDE\x36\x66\xFF\x37\xA5\xD1\xBB\x54\x64\x2C\xC9\x27\xB5\x4B\x92\x7E\x65\xFF\xD3\x2D\xE1\xB9\x4E\xBC\x7F\xA4\x41\x21\x90\x41\x77\xA6\x39\x1F\xEA\x9E\xE3\x9F\xD0\x66\x6F\x05\xEC\xAA\x76\x7E\xBF\x6B\x16\xA0\xEB\xB5\xC7\xFC\x92\x54\x2F\x2B\x11\x27\x25\x37\x78\x4C\x51\x6A\xB0\xF3\xCC\x58\x5D\x14\xF1\x6A\x48\x15\xFF\xC2\x07\xB6\xB1\x8D\x0F\x8E\x5C\x50\x46\xB3\x3D\xBF\x01\x98\x4F\xB2\x59\x54\x47\x3E\x34\x7B\x78\x6D\x56\x93\x2E\x73\xEA\x66\x28\x78\xCD\x1D\x14\xBF\xA0\x8F\x2F\x2E\xB8\x2E\x8E\xF2\x14\x8A\xCC\xE9\xB5\x7C\xFB\x6C\x9D\x0C\xA5\xE1\x96", + ["OU=Trustis FPS Root CA,O=Trustis Limited,C=GB"] = 
"\x30\x82\x03\x67\x30\x82\x02\x4F\xA0\x03\x02\x01\x02\x02\x10\x1B\x1F\xAD\xB6\x20\xF9\x24\xD3\x36\x6B\xF7\xC7\xF1\x8C\xA0\x59\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x30\x45\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x42\x31\x18\x30\x16\x06\x03\x55\x04\x0A\x13\x0F\x54\x72\x75\x73\x74\x69\x73\x20\x4C\x69\x6D\x69\x74\x65\x64\x31\x1C\x30\x1A\x06\x03\x55\x04\x0B\x13\x13\x54\x72\x75\x73\x74\x69\x73\x20\x46\x50\x53\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x1E\x17\x0D\x30\x33\x31\x32\x32\x33\x31\x32\x31\x34\x30\x36\x5A\x17\x0D\x32\x34\x30\x31\x32\x31\x31\x31\x33\x36\x35\x34\x5A\x30\x45\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x47\x42\x31\x18\x30\x16\x06\x03\x55\x04\x0A\x13\x0F\x54\x72\x75\x73\x74\x69\x73\x20\x4C\x69\x6D\x69\x74\x65\x64\x31\x1C\x30\x1A\x06\x03\x55\x04\x0B\x13\x13\x54\x72\x75\x73\x74\x69\x73\x20\x46\x50\x53\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x82\x01\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x01\x0F\x00\x30\x82\x01\x0A\x02\x82\x01\x01\x00\xC5\x50\x7B\x9E\x3B\x35\xD0\xDF\xC4\x8C\xCD\x8E\x9B\xED\xA3\xC0\x36\x99\xF4\x42\xEA\xA7\x3E\x80\x83\x0F\xA6\xA7\x59\x87\xC9\x90\x45\x43\x7E\x00\xEA\x86\x79\x2A\x03\xBD\x3D\x37\x99\x89\x66\xB7\xE5\x8A\x56\x86\x93\x9C\x68\x4B\x68\x04\x8C\x93\x93\x02\x3E\x30\xD2\x37\x3A\x22\x61\x89\x1C\x85\x4E\x7D\x8F\xD5\xAF\x7B\x35\xF6\x7E\x28\x47\x89\x31\xDC\x0E\x79\x64\x1F\x99\xD2\x5B\xBA\xFE\x7F\x60\xBF\xAD\xEB\xE7\x3C\x38\x29\x6A\x2F\xE5\x91\x0B\x55\xFF\xEC\x6F\x58\xD5\x2D\xC9\xDE\x4C\x66\x71\x8F\x0C\xD7\x04\xDA\x07\xE6\x1E\x18\xE3\xBD\x29\x02\xA8\xFA\x1C\xE1\x5B\xB9\x83\xA8\x41\x48\xBC\x1A\x71\x8D\xE7\x62\xE5\x2D\xB2\xEB\xDF\x7C\xCF\xDB\xAB\x5A\xCA\x31\xF1\x4C\x22\xF3\x05\x13\xF7\x82\xF9\x73\x79\x0C\xBE\xD7\x4B\x1C\xC0\xD1\x15\x3C\x93\x41\x64\xD1\xE6\xBE\x23\x17\x22\x00\x89\x5E\x1F\x6B\xA5\xAC\x6E\xA7\x4B\x8C\xED\xA3\x72\xE6\xAF\x63\x4D\x2F\x85\xD2\x14\x35\x9A\x2E\x4E\x8C\xEA\x32\x98\x28\x86\xA1\x91\x09\x41\x3A\xB4\xE1\xE3\xF2\xFA\xF0\xC9\x0A\xA2\x41\xDD\xA9\xE3\x03\xC7\x88\x15\x3B\x1C\xD4\x1A\x94\xD7\x9F\x64\x59\x12\x6D\x02\x03\x01\x00\x01\xA3\x53\x30\x51\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1F\x06\x03\x55\x1D\x23\x04\x18\x30\x16\x80\x14\xBA\xFA\x71\x25\x79\x8B\x57\x41\x25\x21\x86\x0B\x71\xEB\xB2\x64\x0E\x8B\x21\x67\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xBA\xFA\x71\x25\x79\x8B\x57\x41\x25\x21\x86\x0B\x71\xEB\xB2\x64\x0E\x8B\x21\x67\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x05\x05\x00\x03\x82\x01\x01\x00\x7E\x58\xFF\xFD\x35\x19\x7D\x9C\x18\x4F\x9E\xB0\x2B\xBC\x8E\x8C\x14\xFF\x2C\xA0\xDA\x47\x5B\xC3\xEF\x81\x2D\xAF\x05\xEA\x74\x48\x5B\xF3\x3E\x4E\x07\xC7\x6D\xC5\xB3\x93\xCF\x22\x35\x5C\xB6\x3F\x75\x27\x5F\x09\x96\xCD\xA0\xFE\xBE\x40\x0C\x5C\x12\x55\xF8\x93\x82\xCA\x29\xE9\x5E\x3F\x56\x57\x8B\x38\x36\xF7\x45\x1A\x4C\x28\xCD\x9E\x41\xB8\xED\x56\x4C\x84\xA4\x40\xC8\xB8\xB0\xA5\x2B\x69\x70\x04\x6A\xC3\xF8\xD4\x12\x32\xF9\x0E\xC3\xB1\xDC\x32\x84\x44\x2C\x6F\xCB\x46\x0F\xEA\x66\x41\x0F\x4F\xF1\x58\xA5\xA6\x0D\x0D\x0F\x61\xDE\xA5\x9E\x5D\x7D\x65\xA1\x3C\x17\xE7\xA8\x55\x4E\xEF\xA0\xC7\xED\xC6\x44\x7F\x54\xF5\xA3\xE0\x8F\xF0\x7C\x55\x22\x8F\x29\xB6\x81\xA3\xE1\x6D\x4E\x2C\x1B\x80\x67\xEC\xAD\x20\x9F\x0C\x62\x61\xD5\x97\xFF\x43\xED\x2D\xC1\xDA\x5D\x29\x2A\x85\x3F\xAC\x65\xEE\x86\x0F\x05\x8D\x90\x5F\xDF\xEE\x9F\xF4\xBF\xEE\x1D\xFB\x98\xE4\x7F\x90\x2B\x84\x78\x10\x0E\x6C\x49\x53\xEF\x15\x5B\x65\x46\x4A\x5D\xAF\xBA\xFB\x3A\x72\x1D\xCD\xF6\x25\x88\x1E\x97\xCC\x21\x9C\x29\x01\x0D\x65\xEB\x57\xD9\xF3\x57\x96\xBB\x48\xCD\x81", + ["CN=StartCom Certification Authority 
G2,O=StartCom Ltd.,C=IL"] = "\x30\x82\x05\x63\x30\x82\x03\x4B\xA0\x03\x02\x01\x02\x02\x01\x3B\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x30\x53\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x49\x4C\x31\x16\x30\x14\x06\x03\x55\x04\x0A\x13\x0D\x53\x74\x61\x72\x74\x43\x6F\x6D\x20\x4C\x74\x64\x2E\x31\x2C\x30\x2A\x06\x03\x55\x04\x03\x13\x23\x53\x74\x61\x72\x74\x43\x6F\x6D\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x47\x32\x30\x1E\x17\x0D\x31\x30\x30\x31\x30\x31\x30\x31\x30\x30\x30\x31\x5A\x17\x0D\x33\x39\x31\x32\x33\x31\x32\x33\x35\x39\x30\x31\x5A\x30\x53\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x49\x4C\x31\x16\x30\x14\x06\x03\x55\x04\x0A\x13\x0D\x53\x74\x61\x72\x74\x43\x6F\x6D\x20\x4C\x74\x64\x2E\x31\x2C\x30\x2A\x06\x03\x55\x04\x03\x13\x23\x53\x74\x61\x72\x74\x43\x6F\x6D\x20\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x69\x6F\x6E\x20\x41\x75\x74\x68\x6F\x72\x69\x74\x79\x20\x47\x32\x30\x82\x02\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x02\x0F\x00\x30\x82\x02\x0A\x02\x82\x02\x01\x00\xB6\x89\x36\x5B\x07\xB7\x20\x36\xBD\x82\xBB\xE1\x16\x20\x03\x95\x7A\xAF\x0E\xA3\x55\xC9\x25\x99\x4A\xC5\xD0\x56\x41\x87\x90\x4D\x21\x60\xA4\x14\x87\x3B\xCD\xFD\xB2\x3E\xB4\x67\x03\x6A\xED\xE1\x0F\x4B\xC0\x91\x85\x70\x45\xE0\x42\x9E\xDE\x29\x23\xD4\x01\x0D\xA0\x10\x79\xB8\xDB\x03\xBD\xF3\xA9\x2F\xD1\xC6\xE0\x0F\xCB\x9E\x8A\x14\x0A\xB8\xBD\xF6\x56\x62\xF1\xC5\x72\xB6\x32\x25\xD9\xB2\xF3\xBD\x65\xC5\x0D\x2C\x6E\xD5\x92\x6F\x18\x8B\x00\x41\x14\x82\x6F\x40\x20\x26\x7A\x28\x0F\xF5\x1E\x7F\x27\xF7\x94\xB1\x37\x3D\xB7\xC7\x91\xF7\xE2\x01\xEC\xFD\x94\x89\xE1\xCC\x6E\xD3\x36\xD6\x0A\x19\x79\xAE\xD7\x34\x82\x65\xFF\x7C\x42\xBB\xB6\xDD\x0B\xA6\x34\xAF\x4B\x60\xFE\x7F\x43\x49\x06\x8B\x8C\x43\xB8\x56\xF2\xD9\x7F\x21\x43\x17\xEA\xA7\x48\x95\x01\x75\x75\xEA\x2B\xA5\x43\x95\xEA\x15\x84\x9D\x08\x8D\x26\x6E\x55\x9B\xAB\xDC\xD2\x39\xD2\x31\x1D\x60\xE2\xAC\xCC\x56\x45\x24\xF5\x1C\x54\xAB\xEE\x86\xDD\x96\x32\x85\xF8\x4C\x4F\xE8\x95\x76\xB6\x05\xDD\x36\x23\x67\xBC\xFF\x15\xE2\xCA\x3B\xE6\xA6\xEC\x3B\xEC\x26\x11\x34\x48\x8D\xF6\x80\x2B\x1A\x23\x02\xEB\x8A\x1C\x3A\x76\x2A\x7B\x56\x16\x1C\x72\x2A\xB3\xAA\xE3\x60\xA5\x00\x9F\x04\x9B\xE2\x6F\x1E\x14\x58\x5B\xA5\x6C\x8B\x58\x3C\xC3\xBA\x4E\x3A\x5C\xF7\xE1\x96\x2B\x3E\xEF\x07\xBC\xA4\xE5\x5D\xCC\x4D\x9F\x0D\xE1\xDC\xAA\xBB\xE1\x6E\x1A\xEC\x8F\xE1\xB6\x4C\x4D\x79\x72\x5D\x17\x35\x0B\x1D\xD7\xC1\x47\xDA\x96\x24\xE0\xD0\x72\xA8\x5A\x5F\x66\x2D\x10\xDC\x2F\x2A\x13\xAE\x26\xFE\x0A\x1C\x19\xCC\xD0\x3E\x0B\x9C\xC8\x09\x2E\xF9\x5B\x96\x7A\x47\x9C\xE9\x7A\xF3\x05\x50\x74\x95\x73\x9E\x30\x09\xF3\x97\x82\x5E\xE6\x8F\x39\x08\x1E\x59\xE5\x35\x14\x42\x13\xFF\x00\x9C\xF7\xBE\xAA\x50\xCF\xE2\x51\x48\xD7\xB8\x6F\xAF\xF8\x4E\x7E\x33\x98\x92\x14\x62\x3A\x75\x63\xCF\x7B\xFA\xDE\x82\x3B\xA9\xBB\x39\xE2\xC4\xBD\x2C\x00\x0E\xC8\x17\xAC\x13\xEF\x4D\x25\x8E\xD8\xB3\x90\x2F\xA9\xDA\x29\x7D\x1D\xAF\x74\x3A\xB2\x27\xC0\xC1\x1E\x3E\x75\xA3\x16\xA9\xAF\x7A\x22\x5D\x9F\x13\x1A\xCF\xA7\xA0\xEB\xE3\x86\x0A\xD3\xFD\xE6\x96\x95\xD7\x23\xC8\x37\xDD\xC4\x7C\xAA\x36\xAC\x98\x1A\x12\xB1\xE0\x4E\xE8\xB1\x3B\xF5\xD6\x6F\xF1\x30\xD7\x02\x03\x01\x00\x01\xA3\x42\x30\x40\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x4B\xC5\xB4\x40\x6B\xAD\x1C\xB3\xA5\x1C\x65\x6E\x46\x36\x89\x87\x05\x0C\x0E\xB6\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x03\x82\x02\x01\x00\x73\x57\x3F\x2C\xD5\x95\x32\x7E\x37\xDB\x
96\x92\xEB\x19\x5E\x7E\x53\xE7\x41\xEC\x11\xB6\x47\xEF\xB5\xDE\xED\x74\x5C\xC5\xF1\x8E\x49\xE0\xFC\x6E\x99\x13\xCD\x9F\x8A\xDA\xCD\x3A\x0A\xD8\x3A\x5A\x09\x3F\x5F\x34\xD0\x2F\x03\xD2\x66\x1D\x1A\xBD\x9C\x90\x37\xC8\x0C\x8E\x07\x5A\x94\x45\x46\x2A\xE6\xBE\x7A\xDA\xA1\xA9\xA4\x69\x12\x92\xB0\x7D\x36\xD4\x44\x87\xD7\x51\xF1\x29\x63\xD6\x75\xCD\x16\xE4\x27\x89\x1D\xF8\xC2\x32\x48\xFD\xDB\x99\xD0\x8F\x5F\x54\x74\xCC\xAC\x67\x34\x11\x62\xD9\x0C\x0A\x37\x87\xD1\xA3\x17\x48\x8E\xD2\x17\x1D\xF6\xD7\xFD\xDB\x65\xEB\xFD\xA8\xD4\xF5\xD6\x4F\xA4\x5B\x75\xE8\xC5\xD2\x60\xB2\xDB\x09\x7E\x25\x8B\x7B\xBA\x52\x92\x9E\x3E\xE8\xC5\x77\xA1\x3C\xE0\x4A\x73\x6B\x61\xCF\x86\xDC\x43\xFF\xFF\x21\xFE\x23\x5D\x24\x4A\xF5\xD3\x6D\x0F\x62\x04\x05\x57\x82\xDA\x6E\xA4\x33\x25\x79\x4B\x2E\x54\x19\x8B\xCC\x2C\x3D\x30\xE9\xD1\x06\xFF\xE8\x32\x46\xBE\xB5\x33\x76\x77\xA8\x01\x5D\x96\xC1\xC1\xD5\xBE\xAE\x25\xC0\xC9\x1E\x0A\x09\x20\x88\xA1\x0E\xC9\xF3\x6F\x4D\x82\x54\x00\x20\xA7\xD2\x8F\xE4\x39\x54\x17\x2E\x8D\x1E\xB8\x1B\xBB\x1B\xBD\x9A\x4E\x3B\x10\x34\xDC\x9C\x88\x53\xEF\xA2\x31\x5B\x58\x4F\x91\x62\xC8\xC2\x9A\x9A\xCD\x15\x5D\x38\xA9\xD6\xBE\xF8\x13\xB5\x9F\x12\x69\xF2\x50\x62\xAC\xFB\x17\x37\xF4\xEE\xB8\x75\x67\x60\x10\xFB\x83\x50\xF9\x44\xB5\x75\x9C\x40\x17\xB2\xFE\xFD\x79\x5D\x6E\x58\x58\x5F\x30\xFC\x00\xAE\xAF\x33\xC1\x0E\x4E\x6C\xBA\xA7\xA6\xA1\x7F\x32\xDB\x38\xE0\xB1\x72\x17\x0A\x2B\x91\xEC\x6A\x63\x26\xED\x89\xD4\x78\xCC\x74\x1E\x05\xF8\x6B\xFE\x8C\x6A\x76\x39\x29\xAE\x65\x23\x12\x95\x08\x22\x1C\x97\xCE\x5B\x06\xEE\x0C\xE2\xBB\xBC\x1F\x44\x93\xF6\xD8\x38\x45\x05\x21\xED\xE4\xAD\xAB\x12\xB6\x03\xA4\x42\x2E\x2D\xC4\x09\x3A\x03\x67\x69\x84\x9A\xE1\x59\x90\x8A\x28\x85\xD5\x5D\x74\xB1\xD1\x0E\x20\x58\x9B\x13\xA5\xB0\x63\xA6\xED\x7B\x47\xFD\x45\x55\x30\xA4\xEE\x9A\xD4\xE6\xE2\x87\xEF\x98\xC9\x32\x82\x11\x29\x22\xBC\x00\x0A\x31\x5E\x2D\x0F\xC0\x8E\xE9\x6B\xB2\x8F\x2E\x06\xD8\xD1\x91\xC7\xC6\x12\xF4\x4C\xFD\x30\x17\xC3\xC1\xDA\x38\x5B\xE3\xA9\xEA\xE6\xA1\xBA\x79\xEF\x73\xD8\xB6\x53\x57\x2D\xF6\xD0\xE1\xD7\x48", + ["CN=Buypass Class 2 Root CA,O=Buypass AS-983163327,C=NO"] = 
"\x30\x82\x05\x59\x30\x82\x03\x41\xA0\x03\x02\x01\x02\x02\x01\x02\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x30\x4E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4E\x4F\x31\x1D\x30\x1B\x06\x03\x55\x04\x0A\x0C\x14\x42\x75\x79\x70\x61\x73\x73\x20\x41\x53\x2D\x39\x38\x33\x31\x36\x33\x33\x32\x37\x31\x20\x30\x1E\x06\x03\x55\x04\x03\x0C\x17\x42\x75\x79\x70\x61\x73\x73\x20\x43\x6C\x61\x73\x73\x20\x32\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x1E\x17\x0D\x31\x30\x31\x30\x32\x36\x30\x38\x33\x38\x30\x33\x5A\x17\x0D\x34\x30\x31\x30\x32\x36\x30\x38\x33\x38\x30\x33\x5A\x30\x4E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4E\x4F\x31\x1D\x30\x1B\x06\x03\x55\x04\x0A\x0C\x14\x42\x75\x79\x70\x61\x73\x73\x20\x41\x53\x2D\x39\x38\x33\x31\x36\x33\x33\x32\x37\x31\x20\x30\x1E\x06\x03\x55\x04\x03\x0C\x17\x42\x75\x79\x70\x61\x73\x73\x20\x43\x6C\x61\x73\x73\x20\x32\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x82\x02\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x02\x0F\x00\x30\x82\x02\x0A\x02\x82\x02\x01\x00\xD7\xC7\x5E\xF7\xC1\x07\xD4\x77\xFB\x43\x21\xF4\xF4\xF5\x69\xE4\xEE\x32\x01\xDB\xA3\x86\x1F\xE4\x59\x0D\xBA\xE7\x75\x83\x52\xEB\xEA\x1C\x61\x15\x48\xBB\x1D\x07\xCA\x8C\xAE\xB0\xDC\x96\x9D\xEA\xC3\x60\x92\x86\x82\x28\x73\x9C\x56\x06\xFF\x4B\x64\xF0\x0C\x2A\x37\x49\xB5\xE5\xCF\x0C\x7C\xEE\xF1\x4A\xBB\x73\x30\x65\xF3\xD5\x2F\x83\xB6\x7E\xE3\xE7\xF5\x9E\xAB\x60\xF9\xD3\xF1\x9D\x92\x74\x8A\xE4\x1C\x96\xAC\x5B\x80\xE9\xB5\xF4\x31\x87\xA3\x51\xFC\xC7\x7E\xA1\x6F\x8E\x53\x77\xD4\x97\xC1\x55\x33\x92\x3E\x18\x2F\x75\xD4\xAD\x86\x49\xCB\x95\xAF\x54\x06\x6C\xD8\x06\x13\x8D\x5B\xFF\xE1\x26\x19\x59\xC0\x24\xBA\x81\x71\x79\x90\x44\x50\x68\x24\x94\x5F\xB8\xB3\x11\xF1\x29\x41\x61\xA3\x41\xCB\x23\x36\xD5\xC1\xF1\x32\x50\x10\x4E\x7F\xF4\x86\x93\xEC\x84\xD3\x8E\xBC\x4B\xBF\x5C\x01\x4E\x07\x3D\xDC\x14\x8A\x94\x0A\xA4\xEA\x73\xFB\x0B\x51\xE8\x13\x07\x18\xFA\x0E\xF1\x2B\xD1\x54\x15\x7D\x3C\xE1\xF7\xB4\x19\x42\x67\x62\x5E\x77\xE0\xA2\x55\xEC\xB6\xD9\x69\x17\xD5\x3A\xAF\x44\xED\x4A\xC5\x9E\xE4\x7A\x27\x7C\xE5\x75\xD7\xAA\xCB\x25\xE7\xDF\x6B\x0A\xDB\x0F\x4D\x93\x4E\xA8\xA0\xCD\x7B\x2E\xF2\x59\x01\x6A\xB7\x0D\xB8\x07\x81\x7E\x8B\x38\x1B\x38\xE6\x0A\x57\x99\x3D\xEE\x21\xE8\xA3\xF5\x0C\x16\xDD\x8B\xEC\x34\x8E\x9C\x2A\x1C\x00\x15\x17\x8D\x68\x83\xD2\x70\x9F\x18\x08\xCD\x11\x68\xD5\xC9\x6B\x52\xCD\xC4\x46\x8F\xDC\xB5\xF3\xD8\x57\x73\x1E\xE9\x94\x39\x04\xBF\xD3\xDE\x38\xDE\xB4\x53\xEC\x69\x1C\xA2\x7E\xC4\x8F\xE4\x1B\x70\xAD\xF2\xA2\xF9\xFB\xF7\x16\x64\x66\x69\x9F\x49\x51\xA2\xE2\x15\x18\x67\x06\x4A\x7F\xD5\x6C\xB5\x4D\xB3\x33\xE0\x61\xEB\x5D\xBE\xE9\x98\x0F\x32\xD7\x1D\x4B\x3C\x2E\x5A\x01\x52\x91\x09\xF2\xDF\xEA\x8D\xD8\x06\x40\x63\xAA\x11\xE4\xFE\xC3\x37\x9E\x14\x52\x3F\xF4\xE2\xCC\xF2\x61\x93\xD1\xFD\x67\x6B\xD7\x52\xAE\xBF\x68\xAB\x40\x43\xA0\x57\x35\x53\x78\xF0\x53\xF8\x61\x42\x07\x64\xC6\xD7\x6F\x9B\x4C\x38\x0D\x63\xAC\x62\xAF\x36\x8B\xA2\x73\x0A\x0D\xF5\x21\xBD\x74\xAA\x4D\xEA\x72\x03\x49\xDB\xC7\x5F\x1D\x62\x63\xC7\xFD\xDD\x91\xEC\x33\xEE\xF5\x6D\xB4\x6E\x30\x68\xDE\xC8\xD6\x26\xB0\x75\x5E\x7B\xB4\x07\x20\x98\xA1\x76\x32\xB8\x4D\x6C\x4F\x02\x03\x01\x00\x01\xA3\x42\x30\x40\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\xC9\x80\x77\xE0\x62\x92\x82\xF5\x46\x9C\xF3\xBA\xF7\x4C\xC3\xDE\xB8\xA3\xAD\x39\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x03\x82\x02\x01\x00\x53\x5F\x21\xF5\xBA\xB0\x3A\x52\x39\x2C\x92\xB0\x6C\x00\xC9\xEF\xCE\x20\xEF\x06\xF2\x96\x9E\xE9\xA4\x74\x7F\x
7A\x16\xFC\xB7\xF5\xB6\xFB\x15\x1B\x3F\xAB\xA6\xC0\x72\x5D\x10\xB1\x71\xEE\xBC\x4F\xE3\xAD\xAC\x03\x6D\x2E\x71\x2E\xAF\xC4\xE3\xAD\xA3\xBD\x0C\x11\xA7\xB4\xFF\x4A\xB2\x7B\x10\x10\x1F\xA7\x57\x41\xB2\xC0\xAE\xF4\x2C\x59\xD6\x47\x10\x88\xF3\x21\x51\x29\x30\xCA\x60\x86\xAF\x46\xAB\x1D\xED\x3A\x5B\xB0\x94\xDE\x44\xE3\x41\x08\xA2\xC1\xEC\x1D\xD6\xFD\x4F\xB6\xD6\x47\xD0\x14\x0B\xCA\xE6\xCA\xB5\x7B\x77\x7E\x41\x1F\x5E\x83\xC7\xB6\x8C\x39\x96\xB0\x3F\x96\x81\x41\x6F\x60\x90\xE2\xE8\xF9\xFB\x22\x71\xD9\x7D\xB3\x3D\x46\xBF\xB4\x84\xAF\x90\x1C\x0F\x8F\x12\x6A\xAF\xEF\xEE\x1E\x7A\xAE\x02\x4A\x8A\x17\x2B\x76\xFE\xAC\x54\x89\x24\x2C\x4F\x3F\xB6\xB2\xA7\x4E\x8C\xA8\x91\x97\xFB\x29\xC6\x7B\x5C\x2D\xB9\xCB\x66\xB6\xB7\xA8\x5B\x12\x51\x85\xB5\x09\x7E\x62\x78\x70\xFE\xA9\x6A\x60\xB6\x1D\x0E\x79\x0C\xFD\xCA\xEA\x24\x80\x72\xC3\x97\x3F\xF2\x77\xAB\x43\x22\x0A\xC7\xEB\xB6\x0C\x84\x82\x2C\x80\x6B\x41\x8A\x08\xC0\xEB\xA5\x6B\xDF\x99\x12\xCB\x8A\xD5\x5E\x80\x0C\x91\xE0\x26\x08\x36\x48\xC5\xFA\x38\x11\x35\xFF\x25\x83\x2D\xF2\x7A\xBF\xDA\xFD\x8E\xFE\xA5\xCB\x45\x2C\x1F\xC4\x88\x53\xAE\x77\x0E\xD9\x9A\x76\xC5\x8E\x2C\x1D\xA3\xBA\xD5\xEC\x32\xAE\xC0\xAA\xAC\xF7\xD1\x7A\x4D\xEB\xD4\x07\xE2\x48\xF7\x22\x8E\xB0\xA4\x9F\x6A\xCE\x8E\xB2\xB2\x60\xF4\xA3\x22\xD0\x23\xEB\x94\x5A\x7A\x69\xDD\x0F\xBF\x40\x57\xAC\x6B\x59\x50\xD9\xA3\x99\xE1\x6E\xFE\x8D\x01\x79\x27\x23\x15\xDE\x92\x9D\x7B\x09\x4D\x5A\xE7\x4B\x48\x30\x5A\x18\xE6\x0A\x6D\xE6\x8F\xE0\xD2\xBB\xE6\xDF\x7C\x6E\x21\x82\xC1\x68\x39\x4D\xB4\x98\x58\x66\x62\xCC\x4A\x90\x5E\xC3\xFA\x27\x04\xB1\x79\x15\x74\x99\xCC\xBE\xAD\x20\xDE\x26\x60\x1C\xEB\x56\x51\xA6\xA3\xEA\xE4\xA3\x3F\xA7\xFF\x61\xDC\xF1\x5A\x4D\x6C\x32\x23\x43\xEE\xAC\xA8\xEE\xEE\x4A\x12\x09\x3C\x5D\x71\xC2\xBE\x79\xFA\xC2\x87\x68\x1D\x0B\xFD\x5C\x69\xCC\x06\xD0\x9A\x7D\x54\x99\x2A\xC9\x39\x1A\x19\xAF\x4B\x2A\x43\xF3\x63\x5D\x5A\x58\xE2\x2F\xE3\x1D\xE4\xA9\xD6\xD0\x0A\xD0\x9E\xBF\xD7\x81\x09\xF1\xC9\xC7\x26\x0D\xAC\x98\x16\x56\xA0", + ["CN=Buypass Class 3 Root CA,O=Buypass AS-983163327,C=NO"] = 
"\x30\x82\x05\x59\x30\x82\x03\x41\xA0\x03\x02\x01\x02\x02\x01\x02\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x30\x4E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4E\x4F\x31\x1D\x30\x1B\x06\x03\x55\x04\x0A\x0C\x14\x42\x75\x79\x70\x61\x73\x73\x20\x41\x53\x2D\x39\x38\x33\x31\x36\x33\x33\x32\x37\x31\x20\x30\x1E\x06\x03\x55\x04\x03\x0C\x17\x42\x75\x79\x70\x61\x73\x73\x20\x43\x6C\x61\x73\x73\x20\x33\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x1E\x17\x0D\x31\x30\x31\x30\x32\x36\x30\x38\x32\x38\x35\x38\x5A\x17\x0D\x34\x30\x31\x30\x32\x36\x30\x38\x32\x38\x35\x38\x5A\x30\x4E\x31\x0B\x30\x09\x06\x03\x55\x04\x06\x13\x02\x4E\x4F\x31\x1D\x30\x1B\x06\x03\x55\x04\x0A\x0C\x14\x42\x75\x79\x70\x61\x73\x73\x20\x41\x53\x2D\x39\x38\x33\x31\x36\x33\x33\x32\x37\x31\x20\x30\x1E\x06\x03\x55\x04\x03\x0C\x17\x42\x75\x79\x70\x61\x73\x73\x20\x43\x6C\x61\x73\x73\x20\x33\x20\x52\x6F\x6F\x74\x20\x43\x41\x30\x82\x02\x22\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x01\x05\x00\x03\x82\x02\x0F\x00\x30\x82\x02\x0A\x02\x82\x02\x01\x00\xA5\xDA\x0A\x95\x16\x50\xE3\x95\xF2\x5E\x9D\x76\x31\x06\x32\x7A\x9B\xF1\x10\x76\xB8\x00\x9A\xB5\x52\x36\xCD\x24\x47\xB0\x9F\x18\x64\xBC\x9A\xF6\xFA\xD5\x79\xD8\x90\x62\x4C\x22\x2F\xDE\x38\x3D\xD6\xE0\xA8\xE9\x1C\x2C\xDB\x78\x11\xE9\x8E\x68\x51\x15\x72\xC7\xF3\x33\x87\xE4\xA0\x5D\x0B\x5C\xE0\x57\x07\x2A\x30\xF5\xCD\xC4\x37\x77\x28\x4D\x18\x91\xE6\xBF\xD5\x52\xFD\x71\x2D\x70\x3E\xE7\xC6\xC4\x8A\xE3\xF0\x28\x0B\xF4\x76\x98\xA1\x8B\x87\x55\xB2\x3A\x13\xFC\xB7\x3E\x27\x37\x8E\x22\xE3\xA8\x4F\x2A\xEF\x60\xBB\x3D\xB7\x39\xC3\x0E\x01\x47\x99\x5D\x12\x4F\xDB\x43\xFA\x57\xA1\xED\xF9\x9D\xBE\x11\x47\x26\x5B\x13\x98\xAB\x5D\x16\x8A\xB0\x37\x1C\x57\x9D\x45\xFF\x88\x96\x36\xBF\xBB\xCA\x07\x7B\x6F\x87\x63\xD7\xD0\x32\x6A\xD6\x5D\x6C\x0C\xF1\xB3\x6E\x39\xE2\x6B\x31\x2E\x39\x00\x27\x14\xDE\x38\xC0\xEC\x19\x66\x86\x12\xE8\x9D\x72\x16\x13\x64\x52\xC7\xA9\x37\x1C\xFD\x82\x30\xED\x84\x18\x1D\xF4\xAE\x5C\xFF\x70\x13\x00\xEB\xB1\xF5\x33\x7A\x4B\xD6\x55\xF8\x05\x8D\x4B\x69\xB0\xF5\xB3\x28\x36\x5C\x14\xC4\x51\x73\x4D\x6B\x0B\xF1\x34\x07\xDB\x17\x39\xD7\xDC\x28\x7B\x6B\xF5\x9F\xF3\x2E\xC1\x4F\x17\x2A\x10\xF3\xCC\xCA\xE8\xEB\xFD\x6B\xAB\x2E\x9A\x9F\x2D\x82\x6E\x04\xD4\x52\x01\x93\x2D\x3D\x86\xFC\x7E\xFC\xDF\xEF\x42\x1D\xA6\x6B\xEF\xB9\x20\xC6\xF7\xBD\xA0\xA7\x95\xFD\xA7\xE6\x89\x24\xD8\xCC\x8C\x34\x6C\xE2\x23\x2F\xD9\x12\x1A\x21\xB9\x55\x91\x6F\x0B\x91\x79\x19\x0C\xAD\x40\x88\x0B\x70\xE2\x7A\xD2\x0E\xD8\x68\x48\xBB\x82\x13\x39\x10\x58\xE9\xD8\x2A\x07\xC6\x12\xDB\x58\xDB\xD2\x3B\x55\x10\x47\x05\x15\x67\x62\x7E\x18\x63\xA6\x46\x3F\x09\x0E\x54\x32\x5E\xBF\x0D\x62\x7A\x27\xEF\x80\xE8\xDB\xD9\x4B\x06\x5A\x37\x5A\x25\xD0\x08\x12\x77\xD4\x6F\x09\x50\x97\x3D\xC8\x1D\xC3\xDF\x8C\x45\x30\x56\xC6\xD3\x64\xAB\x66\xF3\xC0\x5E\x96\x9C\xC3\xC4\xEF\xC3\x7C\x6B\x8B\x3A\x79\x7F\xB3\x49\xCF\x3D\xE2\x89\x9F\xA0\x30\x4B\x85\xB9\x9C\x94\x24\x79\x8F\x7D\x6B\xA9\x45\x68\x0F\x2B\xD0\xF1\xDA\x1C\xCB\x69\xB8\xCA\x49\x62\x6D\xC8\xD0\x63\x62\xDD\x60\x0F\x58\xAA\x8F\xA1\xBC\x05\xA5\x66\xA2\xCF\x1B\x76\xB2\x84\x64\xB1\x4C\x39\x52\xC0\x30\xBA\xF0\x8C\x4B\x02\xB0\xB6\xB7\x02\x03\x01\x00\x01\xA3\x42\x30\x40\x30\x0F\x06\x03\x55\x1D\x13\x01\x01\xFF\x04\x05\x30\x03\x01\x01\xFF\x30\x1D\x06\x03\x55\x1D\x0E\x04\x16\x04\x14\x47\xB8\xCD\xFF\xE5\x6F\xEE\xF8\xB2\xEC\x2F\x4E\x0E\xF9\x25\xB0\x8E\x3C\x6B\xC3\x30\x0E\x06\x03\x55\x1D\x0F\x01\x01\xFF\x04\x04\x03\x02\x01\x06\x30\x0D\x06\x09\x2A\x86\x48\x86\xF7\x0D\x01\x01\x0B\x05\x00\x03\x82\x02\x01\x00\x00\x20\x23\x41\x35\x04\x90\xC2\x40\x62\x60\xEF\xE2\x35\x4C\xD7\x3F\xAC\xE2\x34\x90\xB8\xA1\x6F\x76\xFA\x16\x
16\xA4\x48\x37\x2C\xE9\x90\xC2\xF2\x3C\xF8\x0A\x9F\xD8\x81\xE5\xBB\x5B\xDA\x25\x2C\xA4\xA7\x55\x71\x24\x32\xF6\xC8\x0B\xF2\xBC\x6A\xF8\x93\xAC\xB2\x07\xC2\x5F\x9F\xDB\xCC\xC8\x8A\xAA\xBE\x6A\x6F\xE1\x49\x10\xCC\x31\xD7\x80\xBB\xBB\xC8\xD8\xA2\x0E\x64\x57\xEA\xA2\xF5\xC2\xA9\x31\x15\xD2\x20\x6A\xEC\xFC\x22\x01\x28\xCF\x86\xB8\x80\x1E\xA9\xCC\x11\xA5\x3C\xF2\x16\xB3\x47\x9D\xFC\xD2\x80\x21\xC4\xCB\xD0\x47\x70\x41\xA1\xCA\x83\x19\x08\x2C\x6D\xF2\x5D\x77\x9C\x8A\x14\x13\xD4\x36\x1C\x92\xF0\xE5\x06\x37\xDC\xA6\xE6\x90\x9B\x38\x8F\x5C\x6B\x1B\x46\x86\x43\x42\x5F\x3E\x01\x07\x53\x54\x5D\x65\x7D\xF7\x8A\x73\xA1\x9A\x54\x5A\x1F\x29\x43\x14\x27\xC2\x85\x0F\xB5\x88\x7B\x1A\x3B\x94\xB7\x1D\x60\xA7\xB5\x9C\xE7\x29\x69\x57\x5A\x9B\x93\x7A\x43\x30\x1B\x03\xD7\x62\xC8\x40\xA6\xAA\xFC\x64\xE4\x4A\xD7\x91\x53\x01\xA8\x20\x88\x6E\x9C\x5F\x44\xB9\xCB\x60\x81\x34\xEC\x6F\xD3\x7D\xDA\x48\x5F\xEB\xB4\x90\xBC\x2D\xA9\x1C\x0B\xAC\x1C\xD5\xA2\x68\x20\x80\x04\xD6\xFC\xB1\x8F\x2F\xBB\x4A\x31\x0D\x4A\x86\x1C\xEB\xE2\x36\x29\x26\xF5\xDA\xD8\xC4\xF2\x75\x61\xCF\x7E\xAE\x76\x63\x4A\x7A\x40\x65\x93\x87\xF8\x1E\x80\x8C\x86\xE5\x86\xD6\x8F\x0E\xFC\x53\x2C\x60\xE8\x16\x61\x1A\xA2\x3E\x43\x7B\xCD\x39\x60\x54\x6A\xF5\xF2\x89\x26\x01\x68\x83\x48\xA2\x33\xE8\xC9\x04\x91\xB2\x11\x34\x11\x3E\xEA\xD0\x43\x19\x1F\x03\x93\x90\x0C\xFF\x51\x3D\x57\xF4\x41\x6E\xE1\xCB\xA0\xBE\xEB\xC9\x63\xCD\x6D\xCC\xE4\xF8\x36\xAA\x68\x9D\xED\xBD\x5D\x97\x70\x44\x0D\xB6\x0E\x35\xDC\xE1\x0C\x5D\xBB\xA0\x51\x94\xCB\x7E\x16\xEB\x11\x2F\xA3\x92\x45\xC8\x4C\x71\xD9\xBC\xC9\x99\x52\x57\x46\x2F\x50\xCF\xBD\x35\x69\xF4\x3D\x15\xCE\x06\xA5\x2C\x0F\x3E\xF6\x81\xBA\x94\xBB\xC3\xBB\xBF\x65\x78\xD2\x86\x79\xFF\x49\x3B\x1A\x83\x0C\xF0\xDE\x78\xEC\xC8\xF2\x4D\x4C\x1A\xDE\x82\x29\xF8\xC1\x5A\xDA\xED\xEE\xE6\x27\x5E\xE8\x45\xD0\x9D\x1C\x51\xA8\x68\xAB\x44\xE3\xD0\x8B\x6A\xE3\xF8\x3B\xBB\xDC\x4D\xD7\x64\xF2\x51\xBE\xE6\xAA\xAB\x5A\xE9\x31\xEE\x06\xBC\x73\xBF\x13\x62\x0A\x9F\xC7\xB9\x97", }; diff --git a/scripts/base/protocols/syslog/consts.bro b/scripts/base/protocols/syslog/consts.bro index f08e7f71d7..dce1877ecf 100644 --- a/scripts/base/protocols/syslog/consts.bro +++ b/scripts/base/protocols/syslog/consts.bro @@ -1,6 +1,9 @@ +##! Constants definitions for syslog. + module Syslog; export { + ## Mapping between the constants and string values for syslog facilities. const facility_codes: table[count] of string = { [0] = "KERN", [1] = "USER", @@ -27,7 +30,8 @@ export { [22] = "LOCAL6", [23] = "LOCAL7", } &default=function(c: count): string { return fmt("?-%d", c); }; - + + ## Mapping between the constants and string values for syslog severities. const severity_codes: table[count] of string = { [0] = "EMERG", [1] = "ALERT", diff --git a/scripts/base/protocols/syslog/main.bro b/scripts/base/protocols/syslog/main.bro index 2acc843ea8..61334e3f2b 100644 --- a/scripts/base/protocols/syslog/main.bro +++ b/scripts/base/protocols/syslog/main.bro @@ -1,4 +1,5 @@ -##! Core script support for logging syslog messages. +##! Core script support for logging syslog messages. This script represents +##! one syslog message as one logged record. @load ./consts @@ -8,19 +9,25 @@ export { redef enum Log::ID += { LOG }; type Info: record { + ## Timestamp when the syslog message was seen. ts: time &log; + ## Unique ID for the connection. uid: string &log; + ## The connection's 4-tuple of endpoint addresses/ports. id: conn_id &log; + ## Protocol over which the message was seen. proto: transport_proto &log; + ## Syslog facility for the message. 
facility: string &log; + ## Syslog severity for the message. severity: string &log; + ## The plain text message. message: string &log; }; - - const ports = { 514/udp } &redef; } redef capture_filters += { ["syslog"] = "port 514" }; +const ports = { 514/udp } &redef; redef dpd_config += { [ANALYZER_SYSLOG_BINPAC] = [$ports = ports] }; redef likely_server_ports += { 514/udp }; diff --git a/scripts/base/utils/addrs.bro b/scripts/base/utils/addrs.bro index 415b9adfa9..08efd5281a 100644 --- a/scripts/base/utils/addrs.bro +++ b/scripts/base/utils/addrs.bro @@ -98,3 +98,18 @@ function find_ip_addresses(input: string): string_array } return output; } + +## Returns the string representation of an IP address suitable for inclusion +## in a URI. For IPv4, this does no special formatting, but for IPv6, the +## address is included in square brackets. +## +## a: the address to make suitable for URI inclusion. +## +## Returns: the string representation of *a* suitable for URI inclusion. +function addr_to_uri(a: addr): string + { + if ( is_v4_addr(a) ) + return fmt("%s", a); + else + return fmt("[%s]", a); + } diff --git a/scripts/base/utils/files.bro b/scripts/base/utils/files.bro index 8111245c24..ccd03df0e6 100644 --- a/scripts/base/utils/files.bro +++ b/scripts/base/utils/files.bro @@ -1,10 +1,11 @@ +@load ./addrs ## This function can be used to generate a consistent filename for when ## contents of a file, stream, or connection are being extracted to disk. function generate_extraction_filename(prefix: string, c: connection, suffix: string): string { - local conn_info = fmt("%s:%d-%s:%d", - c$id$orig_h, c$id$orig_p, c$id$resp_h, c$id$resp_p); + local conn_info = fmt("%s:%d-%s:%d", addr_to_uri(c$id$orig_h), c$id$orig_p, + addr_to_uri(c$id$resp_h), c$id$resp_p); if ( prefix != "" ) conn_info = fmt("%s_%s", prefix, conn_info); diff --git a/scripts/base/utils/site.bro b/scripts/base/utils/site.bro index 536c891572..55ee0e5ed1 100644 --- a/scripts/base/utils/site.bro +++ b/scripts/base/utils/site.bro @@ -8,27 +8,31 @@ export { ## Address space that is considered private and unrouted. ## By default it has RFC defined non-routable IPv4 address space. const private_address_space: set[subnet] = { - 10.0.0.0/8, - 192.168.0.0/16, - 127.0.0.0/8, - 172.16.0.0/12 + 10.0.0.0/8, + 192.168.0.0/16, + 172.16.0.0/12, + 100.64.0.0/10, # RFC6598 Carrier Grade NAT + 127.0.0.0/8, + [fe80::]/10, + [::1]/128, } &redef; ## Networks that are considered "local". const local_nets: set[subnet] &redef; - - ## This is used for retrieving the subnet when you multiple - ## :bro:id:`local_nets`. A membership query can be done with an - ## :bro:type:`addr` and the table will yield the subnet it was found + + ## This is used for retrieving the subnet when using multiple entries in + ## :bro:id:`Site::local_nets`. It's populated automatically from there. + ## A membership query can be done with an + ## :bro:type:`addr` and the table will yield the subnet it was found ## within. global local_nets_table: table[subnet] of subnet = {}; ## Networks that are considered "neighbors". const neighbor_nets: set[subnet] &redef; - + ## If local network administrators are known and they have responsibility ## for defined address space, then a mapping can be defined here between - ## networks for which they have responsibility and a set of email + ## networks for which they have responsibility and a set of email ## addresses. 
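As a usage illustration for the addr_to_uri() helper added to scripts/base/utils/addrs.bro above, a minimal sketch (not part of the patch; the values in the comments are what the calls are expected to produce, and the ports mentioned are hypothetical)::

    @load base/utils/addrs

    event bro_init()
        {
        print addr_to_uri(192.168.0.1);   # prints 192.168.0.1
        print addr_to_uri([fe80::1]);     # prints [fe80::1]
        # generate_extraction_filename() now formats endpoints the same way, so
        # IPv6 connections yield names such as "[fe80::1]:49152-[fe80::2]:514".
        print fmt("http://%s:8080/", addr_to_uri([2001:db8::1]));
        }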
const local_admins: table[subnet] of set[string] = {} &redef; @@ -40,27 +44,33 @@ export { ## Function that returns true if an address corresponds to one of ## the local networks, false if not. + ## The function inspects :bro:id:`Site::local_nets`. global is_local_addr: function(a: addr): bool; - + ## Function that returns true if an address corresponds to one of ## the neighbor networks, false if not. + ## The function inspects :bro:id:`Site::neighbor_nets`. global is_neighbor_addr: function(a: addr): bool; - + ## Function that returns true if an address corresponds to one of ## the private/unrouted networks, false if not. + ## The function inspects :bro:id:`Site::private_address_space`. global is_private_addr: function(a: addr): bool; - ## Function that returns true if a host name is within a local + ## Function that returns true if a host name is within a local ## DNS zone. + ## The function inspects :bro:id:`Site::local_zones`. global is_local_name: function(name: string): bool; - - ## Function that returns true if a host name is within a neighbor + + ## Function that returns true if a host name is within a neighbor ## DNS zone. + ## The function inspects :bro:id:`Site::neighbor_zones`. global is_neighbor_name: function(name: string): bool; - + ## Function that returns a common separated list of email addresses ## that are considered administrators for the IP address provided as ## an argument. + ## The function inspects :bro:id:`Site::local_admins`. global get_emails: function(a: addr): string; } @@ -73,22 +83,22 @@ function is_local_addr(a: addr): bool { return a in local_nets; } - + function is_neighbor_addr(a: addr): bool { return a in neighbor_nets; } - + function is_private_addr(a: addr): bool { return a in private_address_space; } - + function is_local_name(name: string): bool { return local_dns_suffix_regex in name; } - + function is_neighbor_name(name: string): bool { return local_dns_neighbor_suffix_regex in name; @@ -96,7 +106,7 @@ function is_neighbor_name(name: string): bool # This is a hack for doing a for loop. const one_to_32: vector of count = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32}; - + # TODO: make this work with IPv6 function find_all_emails(ip: addr): set[string] { diff --git a/scripts/policy/frameworks/communication/listen.bro b/scripts/policy/frameworks/communication/listen.bro index e366e5b4ff..111bc64a23 100644 --- a/scripts/policy/frameworks/communication/listen.bro +++ b/scripts/policy/frameworks/communication/listen.bro @@ -8,5 +8,6 @@ module Communication; event bro_init() &priority=-10 { enable_communication(); - listen(listen_interface, listen_port, listen_ssl); + listen(listen_interface, listen_port, listen_ssl, listen_ipv6, + listen_ipv6_zone_id, listen_retry); } diff --git a/scripts/policy/frameworks/control/controllee.bro b/scripts/policy/frameworks/control/controllee.bro index 798ab8814a..b4769764f4 100644 --- a/scripts/policy/frameworks/control/controllee.bro +++ b/scripts/policy/frameworks/control/controllee.bro @@ -1,3 +1,12 @@ +##! The controllee portion of the control framework. Load this script if remote +##! runtime control of the Bro process is desired. +##! +##! A controllee only needs to load the controllee script in addition +##! to the specific analysis scripts desired. It may also need a node +##! configured as a controller node in the communications nodes configuration:: +##! +##! 
bro frameworks/control/controllee + @load base/frameworks/control # If an instance is a controllee, it implicitly needs to listen for remote # connections. diff --git a/scripts/policy/frameworks/control/controller.bro b/scripts/policy/frameworks/control/controller.bro index cb76a8b322..22b19bf973 100644 --- a/scripts/policy/frameworks/control/controller.bro +++ b/scripts/policy/frameworks/control/controller.bro @@ -1,3 +1,10 @@ +##! This is a utility script that implements the controller interface for the +##! control framework. It's intended to be run to control a remote Bro +##! and then shutdown. +##! +##! It's intended to be used from the command line like this:: +##! bro frameworks/control/controller Control::host= Control::port= Control::cmd= [Control::arg=] + @load base/frameworks/control @load base/frameworks/communication @@ -18,8 +25,8 @@ event bro_init() &priority=5 # Establish the communication configuration and only request response # messages. - Communication::nodes["control"] = [$host=host, $p=host_port, - $sync=F, $connect=T, + Communication::nodes["control"] = [$host=host, $zone_id=zone_id, + $p=host_port, $sync=F, $connect=T, $class="control", $events=Control::controllee_events]; } diff --git a/scripts/policy/frameworks/dpd/detect-protocols.bro b/scripts/policy/frameworks/dpd/detect-protocols.bro index 8e1ea1267f..8f4e892ce4 100644 --- a/scripts/policy/frameworks/dpd/detect-protocols.bro +++ b/scripts/policy/frameworks/dpd/detect-protocols.bro @@ -8,7 +8,6 @@ module ProtocolDetector; export { redef enum Notice::Type += { - Off_Port_Protocol_Found, # raised for each connection found Protocol_Found, Server_Found, }; @@ -155,13 +154,10 @@ function report_protocols(c: connection) { if ( [a, c$id$resp_h, c$id$resp_p] in valids ) do_notice(c, a, valids[a, c$id$resp_h, c$id$resp_p]); - else if ( [a, 0.0.0.0, c$id$resp_p] in valids ) do_notice(c, a, valids[a, 0.0.0.0, c$id$resp_p]); else do_notice(c, a, NONE); - - append_addl(c, analyzer_name(a)); } delete conns[c$id]; @@ -218,20 +214,6 @@ event protocol_confirmation(c: connection, atype: count, aid: count) } } -# event connection_analyzer_disabled(c: connection, analyzer: count) -# { -# if ( c$id !in conns ) -# return; -# -# delete conns[c$id][analyzer]; -# } - -function append_proto_addl(c: connection) - { - for ( a in conns[c$id] ) - append_addl(c, fmt_protocol(get_protocol(c, a))); - } - function found_protocol(c: connection, analyzer: count, protocol: string) { # Don't report anything running on a well-known port. diff --git a/scripts/policy/frameworks/metrics/conn-example.bro b/scripts/policy/frameworks/metrics/conn-example.bro index b3800c3ed3..974012963b 100644 --- a/scripts/policy/frameworks/metrics/conn-example.bro +++ b/scripts/policy/frameworks/metrics/conn-example.bro @@ -1,3 +1,6 @@ +##! An example of using the metrics framework to collect connection metrics +##! aggregated into /24 CIDR ranges. + @load base/frameworks/metrics @load base/utils/site diff --git a/scripts/policy/frameworks/metrics/http-example.bro b/scripts/policy/frameworks/metrics/http-example.bro index 50b18b2a27..58ca4e6614 100644 --- a/scripts/policy/frameworks/metrics/http-example.bro +++ b/scripts/policy/frameworks/metrics/http-example.bro @@ -1,9 +1,17 @@ +##! Provides an example of aggregating and limiting collection down to +##! only local networks. Additionally, the status code for the response from +##! the request is added into the metric. 
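To make the usage of scripts/policy/frameworks/control/controller.bro above concrete, a hypothetical invocation against a controllee at 127.0.0.1:47760 (host and port are placeholders, not values taken from the patch) could look like::

    # Query the remote Bro's packet counters and exit.
    bro frameworks/control/controller Control::host=127.0.0.1 \
        Control::port=47760/tcp Control::cmd=net_stats

    # Remotely shut the controllee down.
    bro frameworks/control/controller Control::host=127.0.0.1 \
        Control::port=47760/tcp Control::cmd=shutdown

Control::cmd accepts the commands defined by the control framework, e.g. net_stats, peer_status, and shutdown.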
+ @load base/frameworks/metrics @load base/protocols/http @load base/utils/site redef enum Metrics::ID += { + ## Measures HTTP requests indexed on both the request host and the response + ## code from the server. HTTP_REQUESTS_BY_STATUS_CODE, + + ## Currently unfinished and not working. HTTP_REQUESTS_BY_HOST_HEADER, }; @@ -11,13 +19,13 @@ event bro_init() { # TODO: these are waiting on a fix with table vals + records before they will work. #Metrics::add_filter(HTTP_REQUESTS_BY_HOST_HEADER, - # [$pred(index: Index) = { return Site:is_local_addr(index$host) }, + # [$pred(index: Metrics::Index) = { return Site::is_local_addr(index$host); }, # $aggregation_mask=24, - # $break_interval=5mins]); - # - ## Site::local_nets must be defined in order for this to actually do anything. - #Metrics::add_filter(HTTP_REQUESTS_BY_STATUS_CODE, [$aggregation_table=Site::local_nets_table, - # $break_interval=5mins]); + # $break_interval=1min]); + + # Site::local_nets must be defined in order for this to actually do anything. + Metrics::add_filter(HTTP_REQUESTS_BY_STATUS_CODE, [$aggregation_table=Site::local_nets_table, + $break_interval=1min]); } event HTTP::log_http(rec: HTTP::Info) diff --git a/scripts/policy/frameworks/metrics/ssl-example.bro b/scripts/policy/frameworks/metrics/ssl-example.bro index 46dd0e4741..5ec675779a 100644 --- a/scripts/policy/frameworks/metrics/ssl-example.bro +++ b/scripts/policy/frameworks/metrics/ssl-example.bro @@ -1,3 +1,8 @@ +##! Provides an example of using the metrics framework to collect the number +##! of times a specific server name indicator value is seen in SSL session +##! establishments. Names ending in google.com are being filtered out as an +##! example of the predicate based filtering in metrics filters. + @load base/frameworks/metrics @load base/protocols/ssl diff --git a/scripts/policy/frameworks/software/version-changes.bro b/scripts/policy/frameworks/software/version-changes.bro index 6d46151f0f..974a23dc76 100644 --- a/scripts/policy/frameworks/software/version-changes.bro +++ b/scripts/policy/frameworks/software/version-changes.bro @@ -1,3 +1,7 @@ +##! Provides the possibly to define software names that are interesting to +##! watch for changes. A notice is generated if software versions change on a +##! host. + @load base/frameworks/notice @load base/frameworks/software @@ -5,24 +9,17 @@ module Software; export { redef enum Notice::Type += { - ## For certain softwares, a version changing may matter. In that case, + ## For certain software, a version changing may matter. In that case, ## this notice will be generated. Software that matters if the version ## changes can be configured with the ## :bro:id:`Software::interesting_version_changes` variable. Software_Version_Change, }; - ## Some software is more interesting when the version changes and this + ## Some software is more interesting when the version changes and this is ## a set of all software that should raise a notice when a different ## version is seen on a host. - const interesting_version_changes: set[string] = { - "SSH" - } &redef; - - ## Some software is more interesting when the version changes and this - ## a set of all software that should raise a notice when a different - ## version is seen on a host. 
- const interesting_type_changes: set[string] = {}; + const interesting_version_changes: set[string] = { } &redef; } event log_software(rec: Info) diff --git a/scripts/policy/frameworks/software/vulnerable.bro b/scripts/policy/frameworks/software/vulnerable.bro index 0ce949b83d..c2c2ba5b32 100644 --- a/scripts/policy/frameworks/software/vulnerable.bro +++ b/scripts/policy/frameworks/software/vulnerable.bro @@ -1,3 +1,7 @@ +##! Provides a variable to define vulnerable versions of software and if a +##! a version of that software as old or older than the defined version a +##! notice will be generated. + @load base/frameworks/notice @load base/frameworks/software @@ -5,6 +9,7 @@ module Software; export { redef enum Notice::Type += { + ## Indicates that a vulnerable version of software was detected. Vulnerable_Version, }; @@ -18,6 +23,7 @@ event log_software(rec: Info) if ( rec$name in vulnerable_versions && cmp_versions(rec$version, vulnerable_versions[rec$name]) <= 0 ) { - NOTICE([$note=Vulnerable_Version, $src=rec$host, $msg=software_fmt(rec)]); + NOTICE([$note=Vulnerable_Version, $src=rec$host, + $msg=fmt("A vulnerable version of software was detected: %s", software_fmt(rec))]); } } diff --git a/scripts/policy/integration/barnyard2/main.bro b/scripts/policy/integration/barnyard2/main.bro index c2f1c790d3..1d38d80809 100644 --- a/scripts/policy/integration/barnyard2/main.bro +++ b/scripts/policy/integration/barnyard2/main.bro @@ -15,7 +15,7 @@ export { alert: AlertData &log; }; - ## This can convert a Barnyard :bro:type:`PacketID` value to a + ## This can convert a Barnyard :bro:type:`Barnyard2::PacketID` value to a ## :bro:type:`conn_id` value in the case that you might need to index ## into an existing data structure elsewhere within Bro. global pid2cid: function(p: PacketID): conn_id; diff --git a/scripts/policy/misc/capture-loss.bro b/scripts/policy/misc/capture-loss.bro index d966708762..b2d23020f8 100644 --- a/scripts/policy/misc/capture-loss.bro +++ b/scripts/policy/misc/capture-loss.bro @@ -17,7 +17,7 @@ export { redef enum Notice::Type += { ## Report if the detected capture loss exceeds the percentage - ## threshold + ## threshold. Too_Much_Loss }; @@ -42,9 +42,9 @@ export { const watch_interval = 15mins &redef; ## The percentage of missed data that is considered "too much" - ## when the :bro:enum:`Too_Much_Loss` notice should be generated. - ## The value is expressed as a double between 0 and 1 with 1 being - ## 100% + ## when the :bro:enum:`CaptureLoss::Too_Much_Loss` notice should be + ## generated. The value is expressed as a double between 0 and 1 with 1 + ## being 100% const too_much_loss: double = 0.1 &redef; } diff --git a/scripts/policy/misc/loaded-scripts.bro b/scripts/policy/misc/loaded-scripts.bro index 27275156ea..468478e682 100644 --- a/scripts/policy/misc/loaded-scripts.bro +++ b/scripts/policy/misc/loaded-scripts.bro @@ -1,4 +1,4 @@ -##! +##! Log the loaded scripts. module LoadedScripts; diff --git a/scripts/policy/misc/profiling.bro b/scripts/policy/misc/profiling.bro index 457675b1d6..31451f1a55 100644 --- a/scripts/policy/misc/profiling.bro +++ b/scripts/policy/misc/profiling.bro @@ -2,14 +2,13 @@ module Profiling; +## Set the profiling output file. redef profiling_file = open_log_file("prof"); -export { - ## Cheap profiling every 15 seconds. - redef profiling_interval = 15 secs &redef; -} +## Set the cheap profiling interval. +redef profiling_interval = 15 secs; -# Expensive profiling every 5 minutes. +## Set the expensive profiling interval. 
redef expensive_profiling_multiple = 20; event bro_init() diff --git a/scripts/policy/misc/stats.bro b/scripts/policy/misc/stats.bro new file mode 100644 index 0000000000..d7866fd136 --- /dev/null +++ b/scripts/policy/misc/stats.bro @@ -0,0 +1,83 @@ +##! Log memory/packet/lag statistics. Differs from profiling.bro in that this +##! is lighter-weight (much less info, and less load to generate). + +@load base/frameworks/notice + +module Stats; + +export { + redef enum Log::ID += { LOG }; + + ## How often stats are reported. + const stats_report_interval = 1min &redef; + + type Info: record { + ## Timestamp for the measurement. + ts: time &log; + ## Peer that generated this log. Mostly for clusters. + peer: string &log; + ## Amount of memory currently in use in MB. + mem: count &log; + ## Number of packets processed since the last stats interval. + pkts_proc: count &log; + ## Number of events that been processed since the last stats interval. + events_proc: count &log; + ## Number of events that have been queued since the last stats interval. + events_queued: count &log; + + ## Lag between the wall clock and packet timestamps if reading live traffic. + lag: interval &log &optional; + ## Number of packets received since the last stats interval if reading + ## live traffic. + pkts_recv: count &log &optional; + ## Number of packets dropped since the last stats interval if reading + ## live traffic. + pkts_dropped: count &log &optional; + ## Number of packets seen on the link since the last stats interval + ## if reading live traffic. + pkts_link: count &log &optional; + }; + + ## Event to catch stats as they are written to the logging stream. + global log_stats: event(rec: Info); +} + +event bro_init() &priority=5 + { + Log::create_stream(Stats::LOG, [$columns=Info, $ev=log_stats]); + } + +event check_stats(last_ts: time, last_ns: NetStats, last_res: bro_resources) + { + local now = current_time(); + local ns = net_stats(); + local res = resource_usage(); + + if ( bro_is_terminating() ) + # No more stats will be written or scheduled when Bro is + # shutting down. + return; + + local info: Info = [$ts=now, $peer=peer_description, $mem=res$mem/1000000, + $pkts_proc=res$num_packets - last_res$num_packets, + $events_proc=res$num_events_dispatched - last_res$num_events_dispatched, + $events_queued=res$num_events_queued - last_res$num_events_queued]; + + if ( reading_live_traffic() ) + { + info$lag = now - network_time(); + # Someone's going to have to explain what this is and add a field to the Info record. 
+ # info$util = 100.0*((res$user_time + res$system_time) - (last_res$user_time + last_res$system_time))/(now-last_ts); + info$pkts_recv = ns$pkts_recvd - last_ns$pkts_recvd; + info$pkts_dropped = ns$pkts_dropped - last_ns$pkts_dropped; + info$pkts_link = ns$pkts_link - last_ns$pkts_link; + } + + Log::write(Stats::LOG, info); + schedule stats_report_interval { check_stats(now, ns, res) }; + } + +event bro_init() + { + schedule stats_report_interval { check_stats(current_time(), net_stats(), resource_usage()) }; + } diff --git a/scripts/policy/misc/trim-trace-file.bro b/scripts/policy/misc/trim-trace-file.bro index 3caa41c06b..8a7781b628 100644 --- a/scripts/policy/misc/trim-trace-file.bro +++ b/scripts/policy/misc/trim-trace-file.bro @@ -10,7 +10,8 @@ export { ## This event can be generated externally to this script if on-demand ## tracefile rotation is required with the caveat that the script doesn't ## currently attempt to get back on schedule automatically and the next - ## trim will likely won't happen on the :bro:id:`trim_interval`. + ## trim will likely won't happen on the + ## :bro:id:`TrimTraceFile::trim_interval`. global go: event(first_trim: bool); } diff --git a/scripts/policy/protocols/conn/known-hosts.bro b/scripts/policy/protocols/conn/known-hosts.bro index 017b6c8a25..8914a5a22a 100644 --- a/scripts/policy/protocols/conn/known-hosts.bro +++ b/scripts/policy/protocols/conn/known-hosts.bro @@ -8,8 +8,10 @@ module Known; export { + ## The known-hosts logging stream identifier. redef enum Log::ID += { HOSTS_LOG }; - + + ## The record type which contains the column fields of the known-hosts log. type HostsInfo: record { ## The timestamp at which the host was detected. ts: time &log; @@ -19,7 +21,7 @@ export { }; ## The hosts whose existence should be logged and tracked. - ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS + ## See :bro:type:`Host` for possible choices. const host_tracking = LOCAL_HOSTS &redef; ## The set of all known addresses to store for preventing duplicate @@ -28,7 +30,9 @@ export { ## Maintain the list of known hosts for 24 hours so that the existence ## of each individual address is logged each day. global known_hosts: set[addr] &create_expire=1day &synchronized &redef; - + + ## An event that can be handled to access the :bro:type:`Known::HostsInfo` + ## record as it is sent on to the logging framework. global log_known_hosts: event(rec: HostsInfo); } diff --git a/scripts/policy/protocols/conn/known-services.bro b/scripts/policy/protocols/conn/known-services.bro index 9d58f3a9fb..f494a30f82 100644 --- a/scripts/policy/protocols/conn/known-services.bro +++ b/scripts/policy/protocols/conn/known-services.bro @@ -8,29 +8,41 @@ module Known; export { + ## The known-services logging stream identifier. redef enum Log::ID += { SERVICES_LOG }; - + + ## The record type which contains the column fields of the known-services + ## log. type ServicesInfo: record { + ## The time at which the service was detected. ts: time &log; + ## The host address on which the service is running. host: addr &log; + ## The port number on which the service is running. port_num: port &log; + ## The transport-layer protocol which the service uses. port_proto: transport_proto &log; + ## A set of protocols that match the service's connection payloads. service: set[string] &log; - - done: bool &default=F; }; ## The hosts whose services should be tracked and logged. + ## See :bro:type:`Host` for possible choices. 
const service_tracking = LOCAL_HOSTS &redef; - + + ## Tracks the set of daily-detected services for preventing the logging + ## of duplicates, but can also be inspected by other scripts for + ## different purposes. global known_services: set[addr, port] &create_expire=1day &synchronized; - + + ## Event that can be handled to access the :bro:type:`Known::ServicesInfo` + ## record as it is sent on to the logging framework. global log_known_services: event(rec: ServicesInfo); } redef record connection += { - ## This field is to indicate whether or not the processing for detecting - ## and logging the service for this connection is complete. + # This field is to indicate whether or not the processing for detecting + # and logging the service for this connection is complete. known_services_done: bool &default=F; }; diff --git a/scripts/policy/protocols/ftp/detect.bro b/scripts/policy/protocols/ftp/detect.bro index abb62e08fc..e1bd627921 100644 --- a/scripts/policy/protocols/ftp/detect.bro +++ b/scripts/policy/protocols/ftp/detect.bro @@ -7,7 +7,7 @@ module FTP; export { redef enum Notice::Type += { - ## This indicates that a successful response to a "SITE EXEC" + ## Indicates that a successful response to a "SITE EXEC" ## command/arg pair was seen. Site_Exec_Success, }; diff --git a/scripts/policy/protocols/ftp/software.bro b/scripts/policy/protocols/ftp/software.bro index 622357a608..0ae963d552 100644 --- a/scripts/policy/protocols/ftp/software.bro +++ b/scripts/policy/protocols/ftp/software.bro @@ -12,8 +12,10 @@ module FTP; export { redef enum Software::Type += { - FTP_CLIENT, - FTP_SERVER, + ## Identifier for FTP clients in the software framework. + CLIENT, + ## Not currently implemented. + SERVER, }; } @@ -21,7 +23,6 @@ event ftp_request(c: connection, command: string, arg: string) &priority=4 { if ( command == "CLNT" ) { - local si = Software::parse(arg, c$id$orig_h, FTP_CLIENT); - Software::found(c$id, si); + Software::found(c$id, [$unparsed_version=arg, $host=c$id$orig_h, $software_type=CLIENT]); } } diff --git a/scripts/policy/protocols/http/detect-MHR.bro b/scripts/policy/protocols/http/detect-MHR.bro index 3b2e8bf968..a0e3cb50fb 100644 --- a/scripts/policy/protocols/http/detect-MHR.bro +++ b/scripts/policy/protocols/http/detect-MHR.bro @@ -1,15 +1,18 @@ -##! This script takes MD5 sums of files transferred over HTTP and checks them with -##! Team Cymru's Malware Hash Registry (http://www.team-cymru.org/Services/MHR/). +##! Detect file downloads over HTTP that have MD5 sums matching files in Team +##! Cymru's Malware Hash Registry (http://www.team-cymru.org/Services/MHR/). ##! By default, not all file transfers will have MD5 sums calculated. Read the -##! documentation for the :doc:base/protocols/http/file-hash.bro script to see how to -##! configure which transfers will have hashes calculated. +##! documentation for the :doc:base/protocols/http/file-hash.bro script to see +##! how to configure which transfers will have hashes calculated. @load base/frameworks/notice @load base/protocols/http +module HTTP; + export { redef enum Notice::Type += { - ## If the MD5 sum of a file transferred over HTTP + ## The MD5 sum of a file transferred over HTTP matched in the + ## malware hash registry. Malware_Hash_Registry_Match }; } diff --git a/scripts/policy/protocols/http/detect-intel.bro b/scripts/policy/protocols/http/detect-intel.bro index 6da4d8d1e1..281d705c13 100644 --- a/scripts/policy/protocols/http/detect-intel.bro +++ b/scripts/policy/protocols/http/detect-intel.bro @@ -1,4 +1,4 @@ -##! 
Intelligence based HTTP detections. +##! Intelligence based HTTP detections. Not yet working! @load base/protocols/http/main @load base/protocols/http/utils diff --git a/scripts/policy/protocols/http/detect-sqli.bro b/scripts/policy/protocols/http/detect-sqli.bro index c4ba7ee74e..a92565c63a 100644 --- a/scripts/policy/protocols/http/detect-sqli.bro +++ b/scripts/policy/protocols/http/detect-sqli.bro @@ -12,12 +12,14 @@ export { SQL_Injection_Attacker, ## Indicates that a host was seen to have SQL injection attacks against ## it. This is tracked by IP address as opposed to hostname. - SQL_Injection_Attack_Against, + SQL_Injection_Victim, }; redef enum Metrics::ID += { - SQL_ATTACKER, - SQL_ATTACKS_AGAINST, + ## Metric to track SQL injection attackers. + SQLI_ATTACKER, + ## Metrics to track SQL injection victims. + SQLI_VICTIM, }; redef enum Tags += { @@ -30,17 +32,17 @@ export { COOKIE_SQLI, }; - ## This defines the threshold that determines if an SQL injection attack + ## Defines the threshold that determines if an SQL injection attack ## is ongoing based on the number of requests that appear to be SQL ## injection attacks. const sqli_requests_threshold = 50 &redef; - ## Interval at which to watch for the :bro:id:`sqli_requests_threshold` - ## variable to be crossed. At the end of each interval the counter is - ## reset. + ## Interval at which to watch for the + ## :bro:id:`HTTP::sqli_requests_threshold` variable to be crossed. + ## At the end of each interval the counter is reset. const sqli_requests_interval = 5min &redef; - ## This regular expression is used to match URI based SQL injections + ## Regular expression is used to match URI based SQL injections. const match_sql_injection_uri = /[\?&][^[:blank:]\x00-\x37\|]+?=[\-[:alnum:]%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+.*?([hH][aA][vV][iI][nN][gG]|[uU][nN][iI][oO][nN]|[eE][xX][eE][cC]|[sS][eE][lL][eE][cC][tT]|[dD][eE][lL][eE][tT][eE]|[dD][rR][oO][pP]|[dD][eE][cC][lL][aA][rR][eE]|[cC][rR][eE][aA][tT][eE]|[iI][nN][sS][eE][rR][tT])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+/ | /[\?&][^[:blank:]\x00-\x37\|]+?=[\-0-9%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+([xX]?[oO][rR]|[nN]?[aA][nN][dD])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+['"]?(([^a-zA-Z&]+)?=|[eE][xX][iI][sS][tT][sS])/ @@ -56,14 +58,14 @@ event bro_init() &priority=3 # determine when it looks like an actual attack and how to respond when # thresholds are crossed. 
- Metrics::add_filter(SQL_ATTACKER, [$log=F, + Metrics::add_filter(SQLI_ATTACKER, [$log=F, $notice_threshold=sqli_requests_threshold, $break_interval=sqli_requests_interval, $note=SQL_Injection_Attacker]); - Metrics::add_filter(SQL_ATTACKS_AGAINST, [$log=F, - $notice_threshold=sqli_requests_threshold, - $break_interval=sqli_requests_interval, - $note=SQL_Injection_Attack_Against]); + Metrics::add_filter(SQLI_VICTIM, [$log=F, + $notice_threshold=sqli_requests_threshold, + $break_interval=sqli_requests_interval, + $note=SQL_Injection_Victim]); } event http_request(c: connection, method: string, original_URI: string, @@ -73,7 +75,7 @@ event http_request(c: connection, method: string, original_URI: string, { add c$http$tags[URI_SQLI]; - Metrics::add_data(SQL_ATTACKER, [$host=c$id$orig_h], 1); - Metrics::add_data(SQL_ATTACKS_AGAINST, [$host=c$id$resp_h], 1); + Metrics::add_data(SQLI_ATTACKER, [$host=c$id$orig_h], 1); + Metrics::add_data(SQLI_VICTIM, [$host=c$id$resp_h], 1); } } diff --git a/scripts/policy/protocols/http/detect-webapps.bro b/scripts/policy/protocols/http/detect-webapps.bro index 4a94d1adbd..fb805bfd33 100644 --- a/scripts/policy/protocols/http/detect-webapps.bro +++ b/scripts/policy/protocols/http/detect-webapps.bro @@ -1,19 +1,24 @@ +##! Detect and log web applications through the software framework. + @load base/frameworks/signatures @load base/frameworks/software @load base/protocols/http +@load-sigs ./detect-webapps.sig + module HTTP; -redef signature_files += "protocols/http/detect-webapps.sig"; # Ignore the signatures used to match webapps redef Signatures::ignored_ids += /^webapp-/; export { redef enum Software::Type += { + ## Identifier for web applications in the software framework. WEB_APPLICATION, }; redef record Software::Info += { + ## Most root URL where the software was discovered. url: string &optional &log; }; } @@ -23,7 +28,8 @@ event signature_match(state: signature_state, msg: string, data: string) &priori if ( /^webapp-/ !in state$sig_id ) return; local c = state$conn; - local si = Software::parse(msg, c$id$resp_h, WEB_APPLICATION); + local si = Software::Info; + si = [$name=msg, $unparsed_version=msg, $host=c$id$resp_h, $host_p=c$id$resp_p, $software_type=WEB_APPLICATION]; si$url = build_url_http(c$http); if ( c$id$resp_h in Software::tracked && si$name in Software::tracked[c$id$resp_h] ) @@ -32,7 +38,8 @@ event signature_match(state: signature_state, msg: string, data: string) &priori # use that as the new url for the software. # PROBLEM: different version of the same software on the same server with a shared root path local is_substring = 0; - if ( Software::tracked[c$id$resp_h][si$name]?$url ) + if ( Software::tracked[c$id$resp_h][si$name]?$url && + |si$url| <= |Software::tracked[c$id$resp_h][si$name]$url| ) is_substring = strstr(Software::tracked[c$id$resp_h][si$name]$url, si$url); if ( is_substring == 1 ) diff --git a/scripts/policy/protocols/http/software-browser-plugins.bro b/scripts/policy/protocols/http/software-browser-plugins.bro index db9eafd1a7..b466a9da40 100644 --- a/scripts/policy/protocols/http/software-browser-plugins.bro +++ b/scripts/policy/protocols/http/software-browser-plugins.bro @@ -1,5 +1,5 @@ -##! This script take advantage of a few ways that installed plugin information -##! leaks from web browsers. +##! Detect browser plugins as they leak through requests to Omniture +##! advertising servers. 
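# The hunks in this area replace Software::parse() calls with a record handed
# directly to Software::found(). A minimal sketch of that calling convention,
# under stated assumptions: the CustomSoftware module and its TOOL type exist
# only for illustration, and the record fields are the ones used elsewhere in
# this patch ($unparsed_version, $host, $software_type).

@load base/frameworks/software
@load base/protocols/http

module CustomSoftware;

export {
    redef enum Software::Type += {
        ## Hypothetical software type used only for this sketch.
        TOOL,
    };
}

event http_header(c: connection, is_orig: bool, name: string, value: string)
    {
    if ( is_orig && name == "USER-AGENT" )
        # The version string is passed unparsed; the software framework parses
        # it and handles tracking once the record reaches found().
        Software::found(c$id, [$unparsed_version=value,
                               $host=c$id$orig_h,
                               $software_type=TOOL]);
    }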
@load base/protocols/http @load base/frameworks/software @@ -13,6 +13,7 @@ export { }; redef enum Software::Type += { + ## Identifier for browser plugins in the software framework. BROWSER_PLUGIN }; } @@ -26,8 +27,7 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr # Flash doesn't include it's name so we'll add it here since it # simplifies the version parsing. value = cat("Flash/", value); - local flash_version = Software::parse(value, c$id$orig_h, BROWSER_PLUGIN); - Software::found(c$id, flash_version); + Software::found(c$id, [$unparsed_version=value, $host=c$id$orig_h, $software_type=BROWSER_PLUGIN]); } } else @@ -54,7 +54,7 @@ event log_http(rec: Info) local plugins = split(sw, /[[:blank:]]*;[[:blank:]]*/); for ( i in plugins ) - Software::found(rec$id, Software::parse(plugins[i], rec$id$orig_h, BROWSER_PLUGIN)); + Software::found(rec$id, [$unparsed_version=plugins[i], $host=rec$id$orig_h, $software_type=BROWSER_PLUGIN]); } } - } \ No newline at end of file + } diff --git a/scripts/policy/protocols/http/software.bro b/scripts/policy/protocols/http/software.bro index 8732634359..2ad4246a56 100644 --- a/scripts/policy/protocols/http/software.bro +++ b/scripts/policy/protocols/http/software.bro @@ -6,8 +6,11 @@ module HTTP; export { redef enum Software::Type += { + ## Identifier for web servers in the software framework. SERVER, + ## Identifier for app servers in the software framework. APPSERVER, + ## Identifier for web browsers in the software framework. BROWSER, }; @@ -20,18 +23,18 @@ event http_header(c: connection, is_orig: bool, name: string, value: string) &pr if ( is_orig ) { if ( name == "USER-AGENT" && ignored_user_agents !in value ) - Software::found(c$id, Software::parse(value, c$id$orig_h, BROWSER)); + Software::found(c$id, [$unparsed_version=value, $host=c$id$orig_h, $software_type=BROWSER]); } else { if ( name == "SERVER" ) - Software::found(c$id, Software::parse(value, c$id$resp_h, SERVER)); + Software::found(c$id, [$unparsed_version=value, $host=c$id$resp_h, $host_p=c$id$resp_p, $software_type=SERVER]); else if ( name == "X-POWERED-BY" ) - Software::found(c$id, Software::parse(value, c$id$resp_h, APPSERVER)); + Software::found(c$id, [$unparsed_version=value, $host=c$id$resp_h, $host_p=c$id$resp_p, $software_type=APPSERVER]); else if ( name == "MICROSOFTSHAREPOINTTEAMSERVICES" ) { value = cat("SharePoint/", value); - Software::found(c$id, Software::parse(value, c$id$resp_h, APPSERVER)); + Software::found(c$id, [$unparsed_version=value, $host=c$id$resp_h, $host_p=c$id$resp_p, $software_type=APPSERVER]); } } } diff --git a/scripts/policy/protocols/http/var-extraction-cookies.bro b/scripts/policy/protocols/http/var-extraction-cookies.bro index 2b3f282b03..610c6e1381 100644 --- a/scripts/policy/protocols/http/var-extraction-cookies.bro +++ b/scripts/policy/protocols/http/var-extraction-cookies.bro @@ -1,4 +1,4 @@ -##! This script extracts and logs variables from cookies sent by clients +##! Extracts and logs variables names from cookies sent by clients. @load base/protocols/http/main @load base/protocols/http/utils @@ -6,6 +6,7 @@ module HTTP; redef record Info += { + ## Variable names extracted from all cookies. 
cookie_vars: vector of string &optional &log; }; diff --git a/scripts/policy/protocols/http/var-extraction-uri.bro b/scripts/policy/protocols/http/var-extraction-uri.bro index b03474bb94..27ee89d6f2 100644 --- a/scripts/policy/protocols/http/var-extraction-uri.bro +++ b/scripts/policy/protocols/http/var-extraction-uri.bro @@ -1,10 +1,12 @@ -##! This script extracts and logs variables from the requested URI +##! Extracts and log variables from the requested URI in the default HTTP +##! logging stream. @load base/protocols/http module HTTP; redef record Info += { + ## Variable names from the URI. uri_vars: vector of string &optional &log; }; diff --git a/scripts/policy/protocols/smtp/blocklists.bro b/scripts/policy/protocols/smtp/blocklists.bro index a3e75318bb..b1fb0e498d 100644 --- a/scripts/policy/protocols/smtp/blocklists.bro +++ b/scripts/policy/protocols/smtp/blocklists.bro @@ -1,3 +1,4 @@ +##! Watch for various SPAM blocklist URLs in SMTP error messages. @load base/protocols/smtp @@ -5,9 +6,11 @@ module SMTP; export { redef enum Notice::Type += { - ## Indicates that the server sent a reply mentioning an SMTP block list. + ## An SMTP server sent a reply mentioning an SMTP block list. Blocklist_Error_Message, - ## Indicates the client's address is seen in the block list error message. + ## The originator's address is seen in the block list error message. + ## This is useful to detect local hosts sending SPAM with a high + ## positive rate. Blocklist_Blocked_Host, }; @@ -52,7 +55,8 @@ event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string, message = fmt("%s is on an SMTP block list", c$id$orig_h); } - NOTICE([$note=note, $conn=c, $msg=message, $sub=msg]); + NOTICE([$note=note, $conn=c, $msg=message, $sub=msg, + $identifier=cat(c$id$orig_h)]); } } } diff --git a/scripts/policy/protocols/smtp/software.bro b/scripts/policy/protocols/smtp/software.bro index 3c4c870885..f520485338 100644 --- a/scripts/policy/protocols/smtp/software.bro +++ b/scripts/policy/protocols/smtp/software.bro @@ -75,8 +75,7 @@ event log_smtp(rec: Info) if ( addr_matches_host(rec$id$orig_h, detect_clients_in_messages_from) ) { - local s = Software::parse(rec$user_agent, client_ip, s_type); - Software::found(rec$id, s); + Software::found(rec$id, [$unparsed_version=rec$user_agent, $host=client_ip, $software_type=s_type]); } } } diff --git a/scripts/policy/protocols/ssh/detect-bruteforcing.bro b/scripts/policy/protocols/ssh/detect-bruteforcing.bro index 3abe185d58..aa6e920c12 100644 --- a/scripts/policy/protocols/ssh/detect-bruteforcing.bro +++ b/scripts/policy/protocols/ssh/detect-bruteforcing.bro @@ -1,3 +1,5 @@ +##! Detect hosts which are doing password guessing attacks and/or password +##! bruteforcing over SSH. @load base/protocols/ssh @load base/frameworks/metrics @@ -9,17 +11,17 @@ module SSH; export { redef enum Notice::Type += { ## Indicates that a host has been identified as crossing the - ## :bro:id:`password_guesses_limit` threshold with heuristically + ## :bro:id:`SSH::password_guesses_limit` threshold with heuristically ## determined failed logins. Password_Guessing, ## Indicates that a host previously identified as a "password guesser" - ## has now had a heuristically successful login attempt. + ## has now had a heuristically successful login attempt. This is not + ## currently implemented. Login_By_Password_Guesser, }; redef enum Metrics::ID += { - ## This metric is to measure failed logins with the hope of detecting - ## bruteforcing hosts. + ## Metric is to measure failed logins. 
FAILED_LOGIN, }; @@ -37,7 +39,7 @@ export { ## client subnets and the yield value represents server subnets. const ignore_guessers: table[subnet] of subnet &redef; - ## Keeps track of hosts identified as guessing passwords. + ## Tracks hosts identified as guessing passwords. global password_guessers: set[addr] &read_expire=guessing_timeout+1hr &synchronized &redef; } diff --git a/scripts/policy/protocols/ssh/geo-data.bro b/scripts/policy/protocols/ssh/geo-data.bro index daa05f4ebc..0f8bb932fe 100644 --- a/scripts/policy/protocols/ssh/geo-data.bro +++ b/scripts/policy/protocols/ssh/geo-data.bro @@ -1,5 +1,4 @@ -##! This implements all of the additional information and geodata detections -##! for SSH analysis. +##! Geodata based detections for SSH analysis. @load base/frameworks/notice @load base/protocols/ssh @@ -19,8 +18,8 @@ export { remote_location: geo_location &log &optional; }; - ## The set of countries for which you'd like to throw notices upon - ## successful login + ## The set of countries for which you'd like to generate notices upon + ## successful login. const watched_countries: set[string] = {"RO"} &redef; } diff --git a/scripts/policy/protocols/ssh/interesting-hostnames.bro b/scripts/policy/protocols/ssh/interesting-hostnames.bro index 29886d0eb0..f79c67ede9 100644 --- a/scripts/policy/protocols/ssh/interesting-hostnames.bro +++ b/scripts/policy/protocols/ssh/interesting-hostnames.bro @@ -10,9 +10,9 @@ module SSH; export { redef enum Notice::Type += { - ## Generated if a login originates or responds with a host and the + ## Generated if a login originates or responds with a host where the ## reverse hostname lookup resolves to a name matched by the - ## :bro:id:`interesting_hostnames` regular expression. + ## :bro:id:`SSH::interesting_hostnames` regular expression. Interesting_Hostname_Login, }; @@ -36,7 +36,9 @@ event SSH::heuristic_successful_login(c: connection) if ( interesting_hostnames in hostname ) { NOTICE([$note=Interesting_Hostname_Login, - $msg=fmt("Interesting login from hostname: %s", hostname), + $msg=fmt("Possible SSH login involving a %s %s with an interesting hostname.", + Site::is_local_addr(host) ? "local" : "remote", + host == c$id$orig_h ? "client" : "server"), $sub=hostname, $conn=c]); } } diff --git a/scripts/policy/protocols/ssh/software.bro b/scripts/policy/protocols/ssh/software.bro index a239655270..ba03bed284 100644 --- a/scripts/policy/protocols/ssh/software.bro +++ b/scripts/policy/protocols/ssh/software.bro @@ -1,4 +1,4 @@ -##! This script extracts SSH client and server information from SSH +##! Extracts SSH client and server information from SSH ##! connections and forwards it to the software framework. @load base/frameworks/software @@ -7,7 +7,9 @@ module SSH; export { redef enum Software::Type += { + ## Identifier for SSH clients in the software framework. SERVER, + ## Identifier for SSH servers in the software framework. CLIENT, }; } @@ -16,14 +18,12 @@ event ssh_client_version(c: connection, version: string) &priority=4 { # Get rid of the protocol information when passing to the software framework. local cleaned_version = sub(version, /^SSH[0-9\.\-]+/, ""); - local si = Software::parse(cleaned_version, c$id$orig_h, CLIENT); - Software::found(c$id, si); + Software::found(c$id, [$unparsed_version=cleaned_version, $host=c$id$orig_h, $software_type=CLIENT]); } event ssh_server_version(c: connection, version: string) &priority=4 { # Get rid of the protocol information when passing to the software framework. 
local cleaned_version = sub(version, /SSH[0-9\.\-]{2,}/, ""); - local si = Software::parse(cleaned_version, c$id$resp_h, SERVER); - Software::found(c$id, si); + Software::found(c$id, [$unparsed_version=cleaned_version, $host=c$id$resp_h, $host_p=c$id$resp_p, $software_type=SERVER]); } diff --git a/scripts/policy/protocols/ssl/cert-hash.bro b/scripts/policy/protocols/ssl/cert-hash.bro index 80a937f670..32a165a946 100644 --- a/scripts/policy/protocols/ssl/cert-hash.bro +++ b/scripts/policy/protocols/ssl/cert-hash.bro @@ -1,4 +1,4 @@ -##! This script calculates MD5 sums for server DER formatted certificates. +##! Calculate MD5 sums for server DER formatted certificates. @load base/protocols/ssl @@ -6,15 +6,16 @@ module SSL; export { redef record Info += { + ## MD5 sum of the raw server certificate. cert_hash: string &log &optional; }; } -event x509_certificate(c: connection, cert: X509, is_server: bool, chain_idx: count, chain_len: count, der_cert: string) &priority=4 +event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=4 { # We aren't tracking client certificates yet and we are also only tracking # the primary cert. Watch that this came from an SSL analyzed session too. - if ( ! is_server || chain_idx != 0 || ! c?$ssl ) + if ( is_orig || chain_idx != 0 || ! c?$ssl ) return; c$ssl$cert_hash = md5_hash(der_cert); diff --git a/scripts/policy/protocols/ssl/expiring-certs.bro b/scripts/policy/protocols/ssl/expiring-certs.bro index 50480b3a09..80616e6a99 100644 --- a/scripts/policy/protocols/ssl/expiring-certs.bro +++ b/scripts/policy/protocols/ssl/expiring-certs.bro @@ -1,6 +1,6 @@ -##! This script can be used to generate notices when X.509 certificates over -##! SSL/TLS are expired or going to expire based on the date and time values -##! stored within the certificate. +##! Generate notices when X.509 certificates over SSL/TLS are expired or +##! going to expire soon based on the date and time values stored within the +##! certificate. @load base/protocols/ssl @load base/frameworks/notice @@ -24,19 +24,21 @@ export { ## The category of hosts you would like to be notified about which have ## certificates that are going to be expiring soon. By default, these - ## notices will be suppressed by the notice framework for 1 day. + ## notices will be suppressed by the notice framework for 1 day after + ## a particular certificate has had a notice generated. ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS const notify_certs_expiration = LOCAL_HOSTS &redef; ## The time before a certificate is going to expire that you would like to - ## start receiving :bro:enum:`Certificate_Expires_Soon` notices. + ## start receiving :bro:enum:`SSL::Certificate_Expires_Soon` notices. const notify_when_cert_expiring_in = 30days &redef; } -event x509_certificate(c: connection, cert: X509, is_server: bool, chain_idx: count, chain_len: count, der_cert: string) &priority=3 +event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3 { # If this isn't the host cert or we aren't interested in the server, just return. - if ( chain_idx != 0 || + if ( is_orig || + chain_idx != 0 || ! c$ssl?$cert_hash || ! 
addr_matches_host(c$id$resp_h, notify_certs_expiration) ) return; diff --git a/scripts/policy/protocols/ssl/extract-certs-pem.bro b/scripts/policy/protocols/ssl/extract-certs-pem.bro index e6a740c215..420c60a4fd 100644 --- a/scripts/policy/protocols/ssl/extract-certs-pem.bro +++ b/scripts/policy/protocols/ssl/extract-certs-pem.bro @@ -2,7 +2,7 @@ ##! after being converted to PEM files. The certificates will be stored in ##! a single file, one for local certificates and one for remote certificates. ##! -##! A couple of things to think about with this script:: +##! ..note:: ##! ##! - It doesn't work well on a cluster because each worker will write its ##! own certificate files and no duplicate checking is done across @@ -20,15 +20,15 @@ module SSL; export { - ## Setting to control if host certificates offered by the defined hosts + ## Control if host certificates offered by the defined hosts ## will be written to the PEM certificates file. ## Choices are: LOCAL_HOSTS, REMOTE_HOSTS, ALL_HOSTS, NO_HOSTS const extract_certs_pem = LOCAL_HOSTS &redef; } -## This is an internally maintained variable to prevent relogging of -## certificates that have already been seen. It is indexed on an md5 sum of -## the certificate. +# This is an internally maintained variable to prevent relogging of +# certificates that have already been seen. It is indexed on an md5 sum of +# the certificate. global extracted_certs: set[string] = set() &read_expire=1hr &redef; event ssl_established(c: connection) &priority=5 diff --git a/scripts/policy/protocols/ssl/known-certs.bro b/scripts/policy/protocols/ssl/known-certs.bro index 90f6ee6186..3986a9aa1e 100644 --- a/scripts/policy/protocols/ssl/known-certs.bro +++ b/scripts/policy/protocols/ssl/known-certs.bro @@ -1,5 +1,4 @@ -##! This script can be used to log information about certificates while -##! attempting to avoid duplicate logging. +##! Log information about certificates while attempting to avoid duplicate logging. @load base/utils/directions-and-hosts @load base/protocols/ssl @@ -36,6 +35,8 @@ export { ## in the set is for storing the DER formatted certificate's MD5 hash. global certs: set[addr, string] &create_expire=1day &synchronized &redef; + ## Event that can be handled to access the loggable record as it is sent + ## on to the logging framework. global log_known_certs: event(rec: CertsInfo); } @@ -44,10 +45,10 @@ event bro_init() &priority=5 Log::create_stream(Known::CERTS_LOG, [$columns=CertsInfo, $ev=log_known_certs]); } -event x509_certificate(c: connection, cert: X509, is_server: bool, chain_idx: count, chain_len: count, der_cert: string) &priority=3 +event x509_certificate(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string) &priority=3 { # Make sure this is the server cert and we have a hash for it. - if ( chain_idx != 0 || ! c$ssl?$cert_hash ) + if ( is_orig || chain_idx != 0 || ! c$ssl?$cert_hash ) return; local host = c$id$resp_h; diff --git a/scripts/policy/protocols/ssl/validate-certs.bro b/scripts/policy/protocols/ssl/validate-certs.bro index 5a663864d2..03624eac84 100644 --- a/scripts/policy/protocols/ssl/validate-certs.bro +++ b/scripts/policy/protocols/ssl/validate-certs.bro @@ -14,8 +14,7 @@ export { }; redef record Info += { - ## This stores and logs the result of certificate validation for - ## this connection. + ## Result of certificate validation for this connection. 
validation_status: string &log &optional; }; diff --git a/scripts/policy/tuning/logs-to-elasticsearch.bro b/scripts/policy/tuning/logs-to-elasticsearch.bro new file mode 100644 index 0000000000..2a4b70362a --- /dev/null +++ b/scripts/policy/tuning/logs-to-elasticsearch.bro @@ -0,0 +1,36 @@ +##! Load this script to enable global log output to an ElasticSearch database. + +module LogElasticSearch; + +export { + ## An elasticsearch specific rotation interval. + const rotation_interval = 3hr &redef; + + ## Optionally ignore any :bro:type:`Log::ID` from being sent to + ## ElasticSearch with this script. + const excluded_log_ids: set[Log::ID] &redef; + + ## If you want to explicitly only send certain :bro:type:`Log::ID` + ## streams, add them to this set. If the set remains empty, all will + ## be sent. The :bro:id:`LogElasticSearch::excluded_log_ids` option will remain in + ## effect as well. + const send_logs: set[Log::ID] &redef; +} + +event bro_init() &priority=-5 + { + if ( server_host == "" ) + return; + + for ( stream_id in Log::active_streams ) + { + if ( stream_id in excluded_log_ids || + (|send_logs| > 0 && stream_id !in send_logs) ) + next; + + local filter: Log::Filter = [$name = "default-es", + $writer = Log::WRITER_ELASTICSEARCH, + $interv = LogElasticSearch::rotation_interval]; + Log::add_filter(stream_id, filter); + } + } diff --git a/scripts/site/local-manager.bro b/scripts/site/local-manager.bro index c933207603..5e6005f21e 100644 --- a/scripts/site/local-manager.bro +++ b/scripts/site/local-manager.bro @@ -1,9 +1 @@ -##! Local site policy loaded only by the manager in a cluster. - -@load base/frameworks/notice - -# If you are running a cluster you should define your Notice::policy here -# so that notice processing occurs on the manager. -redef Notice::policy += { - -}; +##! Local site policy loaded only by the manager if Bro is running as a cluster. diff --git a/scripts/site/local-proxy.bro b/scripts/site/local-proxy.bro index 1b71cc1870..478ba6d048 100644 --- a/scripts/site/local-proxy.bro +++ b/scripts/site/local-proxy.bro @@ -1,2 +1 @@ ##! Local site policy loaded only by the proxies if Bro is running as a cluster. - diff --git a/scripts/site/local.bro b/scripts/site/local.bro index 597b92ba3d..db1a786839 100644 --- a/scripts/site/local.bro +++ b/scripts/site/local.bro @@ -1,37 +1,46 @@ -##! Local site policy. Customize as appropriate. This file will not be -##! overwritten when upgrading or reinstalling. +##! Local site policy. Customize as appropriate. +##! +##! This file will not be overwritten when upgrading or reinstalling! -# Load the script to log which script were loaded during each run +# This script logs which scripts were loaded during each run. @load misc/loaded-scripts # Apply the default tuning scripts for common tuning settings. @load tuning/defaults -# Vulnerable versions of software to generate notices for when discovered. +# Generate notices when vulnerable versions of software are discovered. # The default is to only monitor software found in the address space defined # as "local". Refer to the software framework's documentation for more # information. @load frameworks/software/vulnerable + +# Example vulnerable software. This needs to be updated and maintained over +# time as new vulnerabilities are discovered. redef Software::vulnerable_versions += { ["Flash"] = [$major=10,$minor=2,$minor2=153,$addl="1"], ["Java"] = [$major=1,$minor=6,$minor2=0,$addl="22"], }; +# Detect software changing (e.g. attacker installing hacked SSHD). 
+@load frameworks/software/version-changes + # This adds signatures to detect cleartext forward and reverse windows shells. -redef signature_files += "frameworks/signatures/detect-windows-shells.sig"; +@load-sigs frameworks/signatures/detect-windows-shells # Uncomment the following line to begin receiving (by default hourly) emails # containing all of your notices. # redef Notice::policy += { [$action = Notice::ACTION_ALARM, $priority = 0] }; # Load all of the scripts that detect software in various protocols. -@load protocols/http/software -#@load protocols/http/detect-webapps @load protocols/ftp/software @load protocols/smtp/software @load protocols/ssh/software +@load protocols/http/software +# The detect-webapps script could possibly cause performance trouble when +# running on live traffic. Enable it cautiously. +#@load protocols/http/detect-webapps -# Load the script to detect DNS results pointing toward your Site::local_nets +# This script detects DNS results pointing toward your Site::local_nets # where the name is not part of your local DNS zone and is being hosted # externally. Requires that the Site::local_zones variable is defined. @load protocols/dns/detect-external-names @@ -39,15 +48,12 @@ redef signature_files += "frameworks/signatures/detect-windows-shells.sig"; # Script to detect various activity in FTP sessions. @load protocols/ftp/detect -# Detect software changing (e.g. attacker installing hacked SSHD). -@load frameworks/software/version-changes - # Scripts that do asset tracking. @load protocols/conn/known-hosts @load protocols/conn/known-services @load protocols/ssl/known-certs -# Load the script to enable SSL/TLS certificate validation. +# This script enables SSL/TLS certificate validation. @load protocols/ssl/validate-certs # If you have libGeoIP support built in, do some geographic detections and @@ -60,13 +66,5 @@ redef signature_files += "frameworks/signatures/detect-windows-shells.sig"; # Detect MD5 sums in Team Cymru's Malware Hash Registry. @load protocols/http/detect-MHR -# Detect SQL injection attacks +# Detect SQL injection attacks. @load protocols/http/detect-sqli - -# Uncomment this redef if you want to extract SMTP MIME entities for -# some file types. The numbers given indicate how many bytes to extract for -# the various mime types. 
-redef SMTP::entity_excerpt_len += { -# ["text/plain"] = 1024, -# ["text/html"] = 1024, -}; diff --git a/scripts/test-all-policy.bro b/scripts/test-all-policy.bro index f653220cbc..a7c43b14b3 100644 --- a/scripts/test-all-policy.bro +++ b/scripts/test-all-policy.bro @@ -26,6 +26,7 @@ @load misc/capture-loss.bro @load misc/loaded-scripts.bro @load misc/profiling.bro +@load misc/stats.bro @load misc/trim-trace-file.bro @load protocols/conn/known-hosts.bro @load protocols/conn/known-services.bro @@ -59,4 +60,5 @@ @load tuning/defaults/__load__.bro @load tuning/defaults/packet-fragments.bro @load tuning/defaults/warnings.bro +@load tuning/logs-to-elasticsearch.bro @load tuning/track-all-assets.bro diff --git a/src/ARP.cc b/src/ARP.cc index 3606ed66d5..7ffd82764c 100644 --- a/src/ARP.cc +++ b/src/ARP.cc @@ -17,7 +17,7 @@ ARP_Analyzer::~ARP_Analyzer() { } -bool ARP_Analyzer::IsARP(const u_char* pkt, int hdr_size) const +bool ARP_Analyzer::IsARP(const u_char* pkt, int hdr_size) { unsigned short network_protocol = *(unsigned short*) (pkt + hdr_size - 2); diff --git a/src/ARP.h b/src/ARP.h index 6b84dbd587..f4b623c513 100644 --- a/src/ARP.h +++ b/src/ARP.h @@ -15,6 +15,8 @@ #include #elif defined(HAVE_NETINET_IF_ETHER_H) #include +#elif defined(HAVE_NET_ETHERTYPES_H) +#include #endif #ifndef arp_pkthdr @@ -29,9 +31,6 @@ public: ARP_Analyzer(); virtual ~ARP_Analyzer(); - // Whether a packet is of interest for ARP analysis. - bool IsARP(const u_char* pkt, int hdr_size) const; - void NextPacket(double t, const struct pcap_pkthdr* hdr, const u_char* const pkt, int hdr_size); @@ -39,6 +38,10 @@ public: void RREvent(EventHandlerPtr e, const u_char* src, const u_char* dst, const char* spa, const char* sha, const char* tpa, const char* tha); + + // Whether a packet is of interest for ARP analysis. 
+ static bool IsARP(const u_char* pkt, int hdr_size); + protected: AddrVal* ConstructAddrVal(const void* addr); StringVal* EthAddrToStr(const u_char* addr); diff --git a/src/AYIYA.cc b/src/AYIYA.cc new file mode 100644 index 0000000000..c525a73b6c --- /dev/null +++ b/src/AYIYA.cc @@ -0,0 +1,24 @@ +#include "AYIYA.h" + +AYIYA_Analyzer::AYIYA_Analyzer(Connection* conn) +: Analyzer(AnalyzerTag::AYIYA, conn) + { + interp = new binpac::AYIYA::AYIYA_Conn(this); + } + +AYIYA_Analyzer::~AYIYA_Analyzer() + { + delete interp; + } + +void AYIYA_Analyzer::Done() + { + Analyzer::Done(); + Event(udp_session_done); + } + +void AYIYA_Analyzer::DeliverPacket(int len, const u_char* data, bool orig, int seq, const IP_Hdr* ip, int caplen) + { + Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen); + interp->NewData(orig, data, data + len); + } diff --git a/src/AYIYA.h b/src/AYIYA.h new file mode 100644 index 0000000000..79b41553c7 --- /dev/null +++ b/src/AYIYA.h @@ -0,0 +1,29 @@ +#ifndef AYIYA_h +#define AYIYA_h + +#include "ayiya_pac.h" + +class AYIYA_Analyzer : public Analyzer { +public: + AYIYA_Analyzer(Connection* conn); + virtual ~AYIYA_Analyzer(); + + virtual void Done(); + virtual void DeliverPacket(int len, const u_char* data, bool orig, + int seq, const IP_Hdr* ip, int caplen); + + static Analyzer* InstantiateAnalyzer(Connection* conn) + { return new AYIYA_Analyzer(conn); } + + static bool Available() + { return BifConst::Tunnel::enable_ayiya && + BifConst::Tunnel::max_depth > 0; } + +protected: + friend class AnalyzerTimer; + void ExpireTimer(double t); + + binpac::AYIYA::AYIYA_Conn* interp; +}; + +#endif diff --git a/src/Analyzer.cc b/src/Analyzer.cc index 83f1f7f821..8c5573f96b 100644 --- a/src/Analyzer.cc +++ b/src/Analyzer.cc @@ -4,6 +4,7 @@ #include "PIA.h" #include "Event.h" +#include "AYIYA.h" #include "BackDoor.h" #include "BitTorrent.h" #include "BitTorrentTracker.h" @@ -33,9 +34,11 @@ #include "NFS.h" #include "Portmap.h" #include "POP3.h" +#include "SOCKS.h" #include "SSH.h" -#include "SSL-binpac.h" +#include "SSL.h" #include "Syslog-binpac.h" +#include "Teredo.h" #include "ConnSizeAnalyzer.h" // Keep same order here as in AnalyzerTag definition! 
@@ -49,18 +52,6 @@ const Analyzer::Config Analyzer::analyzer_configs[] = { { AnalyzerTag::ICMP, "ICMP", ICMP_Analyzer::InstantiateAnalyzer, ICMP_Analyzer::Available, 0, false }, - { AnalyzerTag::ICMP_TimeExceeded, "ICMP_TIMEEXCEEDED", - ICMP_TimeExceeded_Analyzer::InstantiateAnalyzer, - ICMP_TimeExceeded_Analyzer::Available, 0, false }, - { AnalyzerTag::ICMP_Unreachable, "ICMP_UNREACHABLE", - ICMP_Unreachable_Analyzer::InstantiateAnalyzer, - ICMP_Unreachable_Analyzer::Available, 0, false }, - { AnalyzerTag::ICMP_Echo, "ICMP_ECHO", - ICMP_Echo_Analyzer::InstantiateAnalyzer, - ICMP_Echo_Analyzer::Available, 0, false }, - { AnalyzerTag::ICMP_Redir, "ICMP_REDIR", - ICMP_Redir_Analyzer::InstantiateAnalyzer, - ICMP_Redir_Analyzer::Available, 0, false }, { AnalyzerTag::TCP, "TCP", TCP_Analyzer::InstantiateAnalyzer, TCP_Analyzer::Available, 0, false }, @@ -133,12 +124,22 @@ const Analyzer::Config Analyzer::analyzer_configs[] = { HTTP_Analyzer_binpac::InstantiateAnalyzer, HTTP_Analyzer_binpac::Available, 0, false }, { AnalyzerTag::SSL, "SSL", - SSL_Analyzer_binpac::InstantiateAnalyzer, - SSL_Analyzer_binpac::Available, 0, false }, + SSL_Analyzer::InstantiateAnalyzer, + SSL_Analyzer::Available, 0, false }, { AnalyzerTag::SYSLOG_BINPAC, "SYSLOG_BINPAC", Syslog_Analyzer_binpac::InstantiateAnalyzer, Syslog_Analyzer_binpac::Available, 0, false }, + { AnalyzerTag::AYIYA, "AYIYA", + AYIYA_Analyzer::InstantiateAnalyzer, + AYIYA_Analyzer::Available, 0, false }, + { AnalyzerTag::SOCKS, "SOCKS", + SOCKS_Analyzer::InstantiateAnalyzer, + SOCKS_Analyzer::Available, 0, false }, + { AnalyzerTag::Teredo, "TEREDO", + Teredo_Analyzer::InstantiateAnalyzer, + Teredo_Analyzer::Available, 0, false }, + { AnalyzerTag::File, "FILE", File_Analyzer::InstantiateAnalyzer, File_Analyzer::Available, 0, false }, { AnalyzerTag::Backdoor, "BACKDOOR", @@ -170,6 +171,7 @@ const Analyzer::Config Analyzer::analyzer_configs[] = { { AnalyzerTag::Contents_SMB, "CONTENTS_SMB", 0, 0, 0, false }, { AnalyzerTag::Contents_RPC, "CONTENTS_RPC", 0, 0, 0, false }, { AnalyzerTag::Contents_NFS, "CONTENTS_NFS", 0, 0, 0, false }, + { AnalyzerTag::FTP_ADAT, "FTP_ADAT", 0, 0, 0, false }, }; AnalyzerTimer::~AnalyzerTimer() diff --git a/src/Analyzer.h b/src/Analyzer.h index 7797e215fe..6ccd7648d3 100644 --- a/src/Analyzer.h +++ b/src/Analyzer.h @@ -215,6 +215,11 @@ public: // analyzer, even if the method is called multiple times. virtual void ProtocolConfirmation(); + // Return whether the analyzer previously called ProtocolConfirmation() + // at least once before. + bool ProtocolConfirmed() const + { return protocol_confirmed; } + // Report that we found a significant protocol violation which might // indicate that the analyzed data is in fact not the expected // protocol. The protocol_violation event is raised once per call to @@ -338,6 +343,10 @@ private: for ( analyzer_list::iterator var = the_kids.begin(); \ var != the_kids.end(); var++ ) +#define LOOP_OVER_GIVEN_CONST_CHILDREN(var, the_kids) \ + for ( analyzer_list::const_iterator var = the_kids.begin(); \ + var != the_kids.end(); var++ ) + class SupportAnalyzer : public Analyzer { public: SupportAnalyzer(AnalyzerTag::Tag tag, Connection* conn, bool arg_orig) diff --git a/src/AnalyzerTags.h b/src/AnalyzerTags.h index fd31773120..4301de8f71 100644 --- a/src/AnalyzerTags.h +++ b/src/AnalyzerTags.h @@ -20,9 +20,7 @@ namespace AnalyzerTag { PIA_TCP, PIA_UDP, // Transport-layer analyzers. 
- ICMP, - ICMP_TimeExceeded, ICMP_Unreachable, ICMP_Echo, ICMP_Redir, - TCP, UDP, + ICMP, TCP, UDP, // Application-layer analyzers (hand-written). BitTorrent, BitTorrentTracker, @@ -35,15 +33,20 @@ namespace AnalyzerTag { DHCP_BINPAC, DNS_TCP_BINPAC, DNS_UDP_BINPAC, HTTP_BINPAC, SSL, SYSLOG_BINPAC, + // Decapsulation analyzers. + AYIYA, + SOCKS, + Teredo, + // Other File, Backdoor, InterConn, SteppingStone, TCPStats, ConnSize, - // Support-analyzers Contents, ContentLine, NVT, Zip, Contents_DNS, Contents_NCP, Contents_NetbiosSSN, Contents_Rlogin, Contents_Rsh, Contents_DCE_RPC, Contents_SMB, Contents_RPC, Contents_NFS, + FTP_ADAT, // End-marker. LastAnalyzer }; diff --git a/src/Anon.cc b/src/Anon.cc index 440f8600d5..f58057b2fc 100644 --- a/src/Anon.cc +++ b/src/Anon.cc @@ -5,7 +5,6 @@ #include "util.h" #include "net_util.h" -#include "md5.h" #include "Anon.h" #include "Val.h" #include "NetVar.h" @@ -153,7 +152,9 @@ void AnonymizeIPAddr_A50::init() int AnonymizeIPAddr_A50::PreservePrefix(ipaddr32_t input, int num_bits) { - DEBUG_MSG("%s/%d\n", dotted_addr(input), num_bits); + DEBUG_MSG("%s/%d\n", + IPAddr(IPv4, &input, IPAddr::Network).AsString().c_str(), + num_bits); if ( ! before_anonymization ) { diff --git a/src/Attr.cc b/src/Attr.cc index a5a350f452..bdf247b4f5 100644 --- a/src/Attr.cc +++ b/src/Attr.cc @@ -5,7 +5,7 @@ #include "Attr.h" #include "Expr.h" #include "Serializer.h" -#include "LogMgr.h" +#include "threading/SerialTypes.h" const char* attr_name(attr_tag t) { @@ -15,9 +15,10 @@ const char* attr_name(attr_tag t) "&add_func", "&delete_func", "&expire_func", "&read_expire", "&write_expire", "&create_expire", "&persistent", "&synchronized", "&postprocessor", - "&encrypt", "&match", "&disable_print_hook", + "&encrypt", "&match", "&raw_output", "&mergeable", "&priority", - "&group", "&log", "&error_handler", "(&tracked)", + "&group", "&log", "&error_handler", "&type_column", + "(&tracked)", }; return attr_names[int(t)]; @@ -60,16 +61,19 @@ void Attr::DescribeReST(ODesc* d) const d->Add("="); d->SP(); - if ( expr->Type()->Tag() == TYPE_FUNC ) - d->Add(":bro:type:`func`"); - else if ( expr->Type()->Tag() == TYPE_ENUM ) + if ( expr->Tag() == EXPR_NAME ) { - d->Add(":bro:enum:`"); + d->Add(":bro:see:`"); expr->Describe(d); d->Add("`"); } + else if ( expr->Type()->Tag() == TYPE_FUNC ) + { + d->Add(":bro:type:`func`"); + } + else { d->Add("``"); @@ -381,11 +385,6 @@ void Attributes::CheckAttr(Attr* a) // FIXME: Check here for global ID? break; - case ATTR_DISABLE_PRINT_HOOK: - if ( type->Tag() != TYPE_FILE ) - Error("&disable_print_hook only applicable to files"); - break; - case ATTR_RAW_OUTPUT: if ( type->Tag() != TYPE_FILE ) Error("&raw_output only applicable to files"); @@ -413,10 +412,29 @@ void Attributes::CheckAttr(Attr* a) break; case ATTR_LOG: - if ( ! LogVal::IsCompatibleType(type) ) + if ( ! threading::Value::IsCompatibleType(type) ) Error("&log applied to a type that cannot be logged"); break; + case ATTR_TYPE_COLUMN: + { + if ( type->Tag() != TYPE_PORT ) + { + Error("type_column tag only applicable to ports"); + break; + } + + BroType* atype = a->AttrExpr()->Type(); + + if ( atype->Tag() != TYPE_STRING ) { + Error("type column needs to have a string argument"); + break; + } + + break; + } + + default: BadTag("Attributes::CheckAttr", attr_name(a->Tag())); } @@ -481,7 +499,11 @@ bool Attributes::DoSerialize(SerialInfo* info) const loop_over_list((*attrs), i) { Attr* a = (*attrs)[i]; - SERIALIZE_OPTIONAL(a->AttrExpr()) + + // Broccoli doesn't support expressions. 
+ Expr* e = (! info->broccoli_peer) ? a->AttrExpr() : 0; + SERIALIZE_OPTIONAL(e); + if ( ! SERIALIZE(char(a->Tag())) ) return false; } diff --git a/src/Attr.h b/src/Attr.h index 6c835dc61c..c9a0dedb33 100644 --- a/src/Attr.h +++ b/src/Attr.h @@ -28,13 +28,13 @@ typedef enum { ATTR_POSTPROCESSOR, ATTR_ENCRYPT, ATTR_MATCH, - ATTR_DISABLE_PRINT_HOOK, ATTR_RAW_OUTPUT, ATTR_MERGEABLE, ATTR_PRIORITY, ATTR_GROUP, ATTR_LOG, ATTR_ERROR_HANDLER, + ATTR_TYPE_COLUMN, // for input framework ATTR_TRACKED, // hidden attribute, tracked by NotifierRegistry #define NUM_ATTRS (int(ATTR_TRACKED) + 1) } attr_tag; diff --git a/src/Base64.cc b/src/Base64.cc index 9008837f35..b0da8ea74c 100644 --- a/src/Base64.cc +++ b/src/Base64.cc @@ -1,14 +1,27 @@ #include "config.h" #include "Base64.h" -static int base64_table[256]; +int Base64Decoder::default_base64_table[256]; +const string Base64Decoder::default_alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; -static void init_base64_table() +int* Base64Decoder::InitBase64Table(const string& alphabet) { - static int table_initialized = 0; + assert(alphabet.size() == 64); - if ( ++table_initialized > 1 ) - return; + static bool default_table_initialized = false; + + if ( alphabet == default_alphabet && default_table_initialized ) + return default_base64_table; + + int* base64_table = 0; + + if ( alphabet == default_alphabet ) + { + base64_table = default_base64_table; + default_table_initialized = true; + } + else + base64_table = new int[256]; int i; for ( i = 0; i < 256; ++i ) @@ -16,28 +29,36 @@ static void init_base64_table() for ( i = 0; i < 26; ++i ) { - base64_table['A' + i] = i; - base64_table['a' + i] = i + 26; + base64_table[int(alphabet[0 + i])] = i; + base64_table[int(alphabet[26 + i])] = i + 26; } for ( i = 0; i < 10; ++i ) - base64_table['0' + i] = i + 52; + base64_table[int(alphabet[52 + i])] = i + 52; // Casts to avoid compiler warnings. - base64_table[int('+')] = 62; - base64_table[int('/')] = 63; + base64_table[int(alphabet[62])] = 62; + base64_table[int(alphabet[63])] = 63; base64_table[int('=')] = 0; + + return base64_table; } -Base64Decoder::Base64Decoder(Analyzer* arg_analyzer) +Base64Decoder::Base64Decoder(Analyzer* arg_analyzer, const string& alphabet) { - init_base64_table(); + base64_table = InitBase64Table(alphabet.size() ? alphabet : default_alphabet); base64_group_next = 0; base64_padding = base64_after_padding = 0; errored = 0; analyzer = arg_analyzer; } +Base64Decoder::~Base64Decoder() + { + if ( base64_table != default_base64_table ) + delete base64_table; + } + int Base64Decoder::Decode(int len, const char* data, int* pblen, char** pbuf) { int blen; @@ -142,13 +163,21 @@ int Base64Decoder::Done(int* pblen, char** pbuf) return 0; } -BroString* decode_base64(const BroString* s) + +BroString* decode_base64(const BroString* s, const BroString* a) { + if ( a && a->Len() != 64 ) + { + reporter->Error("base64 decoding alphabet is not 64 characters: %s", + a->CheckString()); + return 0; + } + int buf_len = int((s->Len() + 3) / 4) * 3 + 1; int rlen2, rlen = buf_len; char* rbuf2, *rbuf = new char[rlen]; - Base64Decoder dec(0); + Base64Decoder dec(0, a ? 
a->CheckString() : ""); if ( dec.Decode(s->Len(), (const char*) s->Bytes(), &rlen, &rbuf) == -1 ) goto err; diff --git a/src/Base64.h b/src/Base64.h index 7f351a2c2b..0e02c94cf0 100644 --- a/src/Base64.h +++ b/src/Base64.h @@ -13,11 +13,11 @@ class Base64Decoder { public: - // is used for error reporting, and it should be zero - // when the decoder is called by the built-in function - // decode_base64(). - Base64Decoder(Analyzer* analyzer); - ~Base64Decoder() { } + // is used for error reporting, and it should be zero when + // the decoder is called by the built-in function decode_base64(). + // Empty alphabet indicates the default base64 alphabet. + Base64Decoder(Analyzer* analyzer, const string& alphabet = ""); + ~Base64Decoder(); // A note on Decode(): // @@ -57,8 +57,13 @@ protected: int base64_after_padding; int errored; // if true, we encountered an error - skip further processing Analyzer* analyzer; + int* base64_table; + + static int* InitBase64Table(const string& alphabet); + static int default_base64_table[256]; + static const string default_alphabet; }; -BroString* decode_base64(const BroString* s); +BroString* decode_base64(const BroString* s, const BroString* a = 0); #endif /* base64_h */ diff --git a/src/BitTorrent.cc b/src/BitTorrent.cc index c58eb4cf65..fa8fb09e43 100644 --- a/src/BitTorrent.cc +++ b/src/BitTorrent.cc @@ -66,45 +66,50 @@ void BitTorrent_Analyzer::DeliverStream(int len, const u_char* data, bool orig) void BitTorrent_Analyzer::Undelivered(int seq, int len, bool orig) { - uint64 entry_offset = orig ? - *interp->upflow()->next_message_offset() : - *interp->downflow()->next_message_offset(); - uint64& this_stream_len = orig ? stream_len_orig : stream_len_resp; - bool& this_stop = orig ? stop_orig : stop_resp; - TCP_ApplicationAnalyzer::Undelivered(seq, len, orig); - this_stream_len += len; + // TODO: Code commented out for now. I think that shoving data that + // is definitely wrong into the parser seems like a really bad idea. + // The way it's currently tracking the next message offset isn't + // compatible with new 64bit int support in binpac either. - if ( entry_offset < this_stream_len ) - { // entry point is somewhere in the gap - DeliverWeird("Stopping BitTorrent analysis: cannot recover from content gap", orig); - this_stop = true; - if ( stop_orig && stop_resp ) - ProtocolViolation("BitTorrent: content gap and/or protocol violation"); - } - else - { // fill the gap - try - { - u_char gap[len]; - memset(gap, 0, len); - interp->NewData(orig, gap, gap + len); - } - catch ( binpac::Exception const &e ) - { - DeliverWeird("Stopping BitTorrent analysis: filling content gap failed", orig); - this_stop = true; - if ( stop_orig && stop_resp ) - ProtocolViolation("BitTorrent: content gap and/or protocol violation"); - } - } + //uint64 entry_offset = orig ? + // *interp->upflow()->next_message_offset() : + // *interp->downflow()->next_message_offset(); + //uint64& this_stream_len = orig ? stream_len_orig : stream_len_resp; + //bool& this_stop = orig ? 
stop_orig : stop_resp; + // + //this_stream_len += len; + // + //if ( entry_offset < this_stream_len ) + // { // entry point is somewhere in the gap + // DeliverWeird("Stopping BitTorrent analysis: cannot recover from content gap", orig); + // this_stop = true; + // if ( stop_orig && stop_resp ) + // ProtocolViolation("BitTorrent: content gap and/or protocol violation"); + // } + //else + // { // fill the gap + // try + // { + // u_char gap[len]; + // memset(gap, 0, len); + // interp->NewData(orig, gap, gap + len); + // } + // catch ( binpac::Exception const &e ) + // { + // DeliverWeird("Stopping BitTorrent analysis: filling content gap failed", orig); + // this_stop = true; + // if ( stop_orig && stop_resp ) + // ProtocolViolation("BitTorrent: content gap and/or protocol violation"); + // } + // } } -void BitTorrent_Analyzer::EndpointEOF(TCP_Reassembler* endp) +void BitTorrent_Analyzer::EndpointEOF(bool is_orig) { - TCP_ApplicationAnalyzer::EndpointEOF(endp); - interp->FlowEOF(endp->IsOrig()); + TCP_ApplicationAnalyzer::EndpointEOF(is_orig); + interp->FlowEOF(is_orig); } void BitTorrent_Analyzer::DeliverWeird(const char* msg, bool orig) diff --git a/src/BitTorrent.h b/src/BitTorrent.h index 191b4c50d7..f083cf4fc7 100644 --- a/src/BitTorrent.h +++ b/src/BitTorrent.h @@ -15,7 +15,7 @@ public: virtual void Done(); virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void Undelivered(int seq, int len, bool orig); - virtual void EndpointEOF(TCP_Reassembler* endp); + virtual void EndpointEOF(bool is_orig); static Analyzer* InstantiateAnalyzer(Connection* conn) { return new BitTorrent_Analyzer(conn); } diff --git a/src/BitTorrentTracker.cc b/src/BitTorrentTracker.cc index 995a01dd63..12c5a199de 100644 --- a/src/BitTorrentTracker.cc +++ b/src/BitTorrentTracker.cc @@ -215,9 +215,9 @@ void BitTorrentTracker_Analyzer::Undelivered(int seq, int len, bool orig) stop_resp = true; } -void BitTorrentTracker_Analyzer::EndpointEOF(TCP_Reassembler* endp) +void BitTorrentTracker_Analyzer::EndpointEOF(bool is_orig) { - TCP_ApplicationAnalyzer::EndpointEOF(endp); + TCP_ApplicationAnalyzer::EndpointEOF(is_orig); } void BitTorrentTracker_Analyzer::InitBencParser(void) diff --git a/src/BitTorrentTracker.h b/src/BitTorrentTracker.h index d57665d104..3b9efe0430 100644 --- a/src/BitTorrentTracker.h +++ b/src/BitTorrentTracker.h @@ -48,7 +48,7 @@ public: virtual void Done(); virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void Undelivered(int seq, int len, bool orig); - virtual void EndpointEOF(TCP_Reassembler* endp); + virtual void EndpointEOF(bool is_orig); static Analyzer* InstantiateAnalyzer(Connection* conn) { return new BitTorrentTracker_Analyzer(conn); } diff --git a/src/BroDoc.cc b/src/BroDoc.cc index b84b9d023d..1e2d7d52ea 100644 --- a/src/BroDoc.cc +++ b/src/BroDoc.cc @@ -85,12 +85,13 @@ void BroDoc::AddImport(const std::string& s) if ( ext_pos != std::string::npos ) lname = lname.substr(0, ext_pos); - const char* full_filename = ""; - const char* subpath = ""; + const char* full_filename = NULL; + const char* subpath = NULL; + FILE* f = search_for_file(lname.c_str(), "bro", &full_filename, true, &subpath); - if ( f ) + if ( f && full_filename && subpath ) { fclose(f); @@ -126,12 +127,14 @@ void BroDoc::AddImport(const std::string& s) } delete [] tmp; - delete [] full_filename; - delete [] subpath; } + else fprintf(stderr, "Failed to document '@load %s' in file: %s\n", s.c_str(), reST_filename.c_str()); + + delete [] full_filename; + delete [] subpath; } 
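// Illustrative sketch (not part of this patch): the AddImport() fix above
// initializes the two out-parameters to NULL and releases them on both the
// success and the failure path. The same pattern in self-contained form;
// lookup_file() and its strings are hypothetical stand-ins for
// search_for_file() and are not Bro APIs.

#include <cstdio>
#include <cstring>

static bool lookup_file(const char* name, char** full_path, char** subdir)
    {
    // On success the caller owns both heap-allocated strings.
    if ( strcmp(name, "known.bro") != 0 )
        return false;

    *full_path = new char[32];
    *subdir = new char[32];
    strcpy(*full_path, "/scripts/base/known.bro");
    strcpy(*subdir, "base");
    return true;
    }

static void document_import(const char* name)
    {
    char* full_path = 0; // NULL so delete[] below is safe on failure
    char* subdir = 0;

    if ( lookup_file(name, &full_path, &subdir) )
        printf("documenting %s (%s)\n", full_path, subdir);
    else
        fprintf(stderr, "Failed to document '@load %s'\n", name);

    // Freed on every path, not only when the lookup succeeded.
    delete [] full_path;
    delete [] subdir;
    }

int main()
    {
    document_import("known.bro");
    document_import("missing.bro");
    return 0;
    }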
void BroDoc::SetPacketFilter(const std::string& s) @@ -170,13 +173,26 @@ void BroDoc::WriteDocFile() const { WriteToDoc(".. Automatically generated. Do not edit.\n\n"); + WriteToDoc(":tocdepth: 3\n\n"); + WriteSectionHeading(doc_title.c_str(), '='); - WriteToDoc("\n:download:`Original Source File <%s>`\n\n", - downloadable_filename.c_str()); + WriteStringList(".. bro:namespace:: %s\n", modules); - WriteSectionHeading("Overview", '-'); - WriteStringList("%s\n", "%s\n\n", summary); + WriteToDoc("\n"); + + // WriteSectionHeading("Overview", '-'); + WriteStringList("%s\n", summary); + + WriteToDoc("\n"); + + if ( ! modules.empty() ) + { + WriteToDoc(":Namespace%s: ", (modules.size() > 1 ? "s" : "")); + // WriteStringList(":bro:namespace:`%s`", modules); + WriteStringList("``%s``, ", "``%s``", modules); + WriteToDoc("\n"); + } if ( ! imports.empty() ) { @@ -196,37 +212,38 @@ void BroDoc::WriteDocFile() const WriteToDoc("\n"); } + WriteToDoc(":Source File: :download:`%s`\n", + downloadable_filename.c_str()); + WriteToDoc("\n"); WriteInterface("Summary", '~', '#', true, true); - if ( ! modules.empty() ) - { - WriteSectionHeading("Namespaces", '~'); - WriteStringList(".. bro:namespace:: %s\n", modules); - WriteToDoc("\n"); - } - if ( ! notices.empty() ) - WriteBroDocObjList(notices, "Notices", '~'); + WriteBroDocObjList(notices, "Notices", '#'); - WriteInterface("Public Interface", '-', '~', true, false); + if ( port_analysis.size() || packet_filter.size() ) + WriteSectionHeading("Configuration Changes", '#'); if ( ! port_analysis.empty() ) { - WriteSectionHeading("Port Analysis", '-'); - WriteToDoc(":ref:`More Information `\n\n"); - WriteStringList("%s", port_analysis); + WriteSectionHeading("Port Analysis", '^'); + WriteToDoc("Loading this script makes the following changes to " + ":bro:see:`dpd_config`.\n\n"); + WriteStringList("%s, ", "%s", port_analysis); } if ( ! packet_filter.empty() ) { - WriteSectionHeading("Packet Filter", '-'); - WriteToDoc(":ref:`More Information `\n\n"); + WriteSectionHeading("Packet Filter", '^'); + WriteToDoc("Loading this script makes the following changes to " + ":bro:see:`capture_filters`.\n\n"); WriteToDoc("Filters added::\n\n"); WriteToDoc("%s\n", packet_filter.c_str()); } + WriteInterface("Detailed Interface", '~', '#', true, false); + #if 0 // Disabled for now. BroDocObjList::const_iterator it; bool hasPrivateIdentifiers = false; @@ -241,7 +258,7 @@ void BroDoc::WriteDocFile() const } if ( hasPrivateIdentifiers ) - WriteInterface("Private Interface", '-', '~', false, false); + WriteInterface("Private Interface", '~', '#', false, false); #endif } diff --git a/src/BroDocObj.cc b/src/BroDocObj.cc index d9fe16632b..12753ea15d 100644 --- a/src/BroDocObj.cc +++ b/src/BroDocObj.cc @@ -4,9 +4,12 @@ #include "ID.h" #include "BroDocObj.h" +BroDocObj* BroDocObj::last = 0; + BroDocObj::BroDocObj(const ID* id, std::list*& reST, bool is_fake) { + last = this; broID = id; reST_doc_strings = reST; reST = 0; diff --git a/src/BroDocObj.h b/src/BroDocObj.h index 0ad96afa86..cb512f8cda 100644 --- a/src/BroDocObj.h +++ b/src/BroDocObj.h @@ -103,6 +103,20 @@ public: */ int LongestShortDescLen() const; + /** + * Adds a reST documentation string to this BroDocObj's list. + * @param s the documentation string to append. + */ + void AddDocString(const std::string& s) + { + if ( ! 
reST_doc_strings ) + reST_doc_strings = new std::list(); + reST_doc_strings->push_back(s); + FormulateShortDesc(); + } + + static BroDocObj* last; + protected: std::list* reST_doc_strings; std::list short_desc; diff --git a/src/Brofiler.cc b/src/Brofiler.cc new file mode 100644 index 0000000000..c9a3505069 --- /dev/null +++ b/src/Brofiler.cc @@ -0,0 +1,102 @@ +#include +#include +#include +#include +#include "Brofiler.h" +#include "util.h" + +Brofiler::Brofiler() + : ignoring(0), delim('\t') + { + } + +Brofiler::~Brofiler() + { + } + +bool Brofiler::ReadStats() + { + char* bf = getenv("BRO_PROFILER_FILE"); + if ( ! bf ) + return false; + + FILE* f = fopen(bf, "r"); + if ( ! f ) + return false; + + char line[16384]; + string delimiter; + delimiter = delim; + + while( fgets(line, sizeof(line), f) ) + { + line[strlen(line) - 1] = 0; //remove newline + string cnt(strtok(line, delimiter.c_str())); + string location(strtok(0, delimiter.c_str())); + string desc(strtok(0, delimiter.c_str())); + pair location_desc(location, desc); + uint64 count; + atoi_n(cnt.size(), cnt.c_str(), 0, 10, count); + usage_map[location_desc] = count; + } + + fclose(f); + return true; + } + +bool Brofiler::WriteStats() + { + char* bf = getenv("BRO_PROFILER_FILE"); + if ( ! bf ) return false; + + FILE* f; + const char* p = strstr(bf, ".XXXXXX"); + + if ( p && ! p[7] ) + { + int fd = mkstemp(bf); + if ( fd == -1 ) + { + reporter->Error("Failed to generate unique file name from BRO_PROFILER_FILE: %s", bf); + return false; + } + f = fdopen(fd, "w"); + } + else + { + f = fopen(bf, "w"); + } + + if ( ! f ) + { + reporter->Error("Failed to open BRO_PROFILER_FILE destination '%s' for writing", bf); + return false; + } + + for ( list::const_iterator it = stmts.begin(); + it != stmts.end(); ++it ) + { + ODesc location_info; + (*it)->GetLocationInfo()->Describe(&location_info); + ODesc desc_info; + (*it)->Describe(&desc_info); + string desc(desc_info.Description()); + for_each(desc.begin(), desc.end(), canonicalize_desc()); + pair location_desc(location_info.Description(), desc); + if ( usage_map.find(location_desc) != usage_map.end() ) + usage_map[location_desc] += (*it)->GetAccessCount(); + else + usage_map[location_desc] = (*it)->GetAccessCount(); + } + + map, uint64 >::const_iterator it; + for ( it = usage_map.begin(); it != usage_map.end(); ++it ) + { + fprintf(f, "%"PRIu64"%c%s%c%s\n", it->second, delim, + it->first.first.c_str(), delim, it->first.second.c_str()); + } + + fclose(f); + return true; + } + diff --git a/src/Brofiler.h b/src/Brofiler.h new file mode 100644 index 0000000000..22e5808bf6 --- /dev/null +++ b/src/Brofiler.h @@ -0,0 +1,81 @@ +#ifndef BROFILER_H_ +#define BROFILER_H_ + +#include +#include +#include +#include + + +/** + * A simple class for managing stats of Bro script coverage across Bro runs. + */ +class Brofiler { +public: + Brofiler(); + virtual ~Brofiler(); + + /** + * Imports Bro script Stmt usage information from file pointed to by + * environment variable BRO_PROFILER_FILE. + * + * @return: true if usage info was read, otherwise false. + */ + bool ReadStats(); + + /** + * Combines usage stats from current run with any read from ReadStats(), + * then writes information to file pointed to by environment variable + * BRO_PROFILER_FILE. If the value of that env. variable ends with + * ".XXXXXX" (exactly 6 X's), then it is first passed through mkstemp + * to get a unique file. + * + * @return: true when usage info is written, otherwise false. 
+ */ + bool WriteStats(); + + void SetDelim(char d) { delim = d; } + + void IncIgnoreDepth() { ignoring++; } + void DecIgnoreDepth() { ignoring--; } + + void AddStmt(const Stmt* s) { if ( ignoring == 0 ) stmts.push_back(s); } + +private: + /** + * The current, global Brofiler instance creates this list at parse-time. + */ + list stmts; + + /** + * Indicates whether new statments will not be considered as part of + * coverage statistics because it was marked with the @no-test tag. + */ + unsigned int ignoring; + + /** + * This maps Stmt location-desc pairs to the total number of times that + * Stmt has been executed. The map can be initialized from a file at + * startup time and modified at shutdown time before writing back + * to a file. + */ + map, uint64> usage_map; + + /** + * The character to use to delimit Brofiler output files. Default is '\t'. + */ + char delim; + + /** + * A canonicalization routine for Stmt descriptions containing characters + * that don't agree with the output format of Brofiler. + */ + struct canonicalize_desc { + void operator() (char& c) + { + if ( c == '\n' ) c = ' '; + } + }; +}; + +#endif /* BROFILER_H_ */ diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt index b4779e1557..b77863d107 100644 --- a/src/CMakeLists.txt +++ b/src/CMakeLists.txt @@ -1,8 +1,10 @@ -include_directories(${CMAKE_CURRENT_SOURCE_DIR} +include_directories(BEFORE + ${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR} ) configure_file(version.c.in ${CMAKE_CURRENT_BINARY_DIR}/version.c) +configure_file(util-config.h.in ${CMAKE_CURRENT_BINARY_DIR}/util-config.h) # This creates a custom command to transform a bison output file (inFile) # into outFile in order to avoid symbol conflicts: @@ -141,6 +143,7 @@ endmacro(GET_BIF_OUTPUT_FILES) set(BIF_SRCS bro.bif logging.bif + input.bif event.bif const.bif types.bif @@ -185,6 +188,9 @@ endmacro(BINPAC_TARGET) binpac_target(binpac-lib.pac) binpac_target(binpac_bro-lib.pac) + +binpac_target(ayiya.pac + ayiya-protocol.pac ayiya-analyzer.pac) binpac_target(bittorrent.pac bittorrent-protocol.pac bittorrent-analyzer.pac) binpac_target(dce_rpc.pac @@ -204,6 +210,8 @@ binpac_target(netflow.pac netflow-protocol.pac netflow-analyzer.pac) binpac_target(smb.pac smb-protocol.pac smb-pipe.pac smb-mailslot.pac) +binpac_target(socks.pac + socks-protocol.pac socks-analyzer.pac) binpac_target(ssl.pac ssl-defs.pac ssl-protocol.pac ssl-analyzer.pac) binpac_target(syslog.pac @@ -212,6 +220,8 @@ binpac_target(syslog.pac ######################################################################## ## bro target +find_package (Threads) + # This macro stores associated headers for any C/C++ source files given # as arguments (past _var) as a list in the CMake variable named "_var". 
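// Illustrative sketch (not part of this patch): Brofiler::WriteStats() above
// treats a BRO_PROFILER_FILE value ending in ".XXXXXX" (exactly six X's) as
// an mkstemp() template and otherwise opens the file directly. The same
// decision in standalone form; only the environment variable name and the
// suffix rule are taken from the patch.

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>
#include <unistd.h>

static FILE* open_stats_file(const char* dst)
    {
    const char* p = strstr(dst, ".XXXXXX");

    if ( p && ! p[7] ) // suffix sits at the very end of the name
        {
        // mkstemp() rewrites the X's in place, so give it a writable copy.
        std::string templ(dst);
        int fd = mkstemp(&templ[0]);

        if ( fd == -1 )
            return 0;

        return fdopen(fd, "w");
        }

    return fopen(dst, "w");
    }

int main()
    {
    const char* dst = getenv("BRO_PROFILER_FILE");
    FILE* f = dst ? open_stats_file(dst) : 0;

    if ( f )
        {
        // count <delim> location <delim> description, as in WriteStats().
        fprintf(f, "0\texample\tno data\n");
        fclose(f);
        }

    return 0;
    }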
macro(COLLECT_HEADERS _var) @@ -244,7 +254,6 @@ add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/DebugCmdConstants.h WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR} ) -set(dns_SRCS nb_dns.c) set_source_files_properties(nb_dns.c PROPERTIES COMPILE_FLAGS -fno-strict-aliasing) @@ -274,6 +283,7 @@ set(bro_SRCS Anon.cc ARP.cc Attr.cc + AYIYA.cc BackDoor.cc Base64.cc BitTorrent.cc @@ -281,12 +291,12 @@ set(bro_SRCS BPF_Program.cc BroDoc.cc BroDocObj.cc + Brofiler.cc BroString.cc CCL.cc ChunkedIO.cc CompHash.cc Conn.cc - ConnCompressor.cc ConnSizeAnalyzer.cc ContentLine.cc DCE_RPC.cc @@ -329,13 +339,11 @@ set(bro_SRCS IntSet.cc InterConn.cc IOSource.cc + IP.cc + IPAddr.cc IRC.cc List.cc Reporter.cc - LogMgr.cc - LogWriter.cc - LogWriterAscii.cc - LogWriterNone.cc Login.cc MIME.cc NCP.cc @@ -374,8 +382,9 @@ set(bro_SRCS SmithWaterman.cc SMB.cc SMTP.cc + SOCKS.cc SSH.cc - SSL-binpac.cc + SSL.cc Scope.cc SerializationFormat.cc SerialObj.cc @@ -390,9 +399,11 @@ set(bro_SRCS TCP_Endpoint.cc TCP_Reassembler.cc Telnet.cc + Teredo.cc Timer.cc Traverse.cc Trigger.cc + TunnelEncapsulation.cc Type.cc UDP.cc Val.cc @@ -400,34 +411,43 @@ set(bro_SRCS XDR.cc ZIP.cc bsd-getopt-long.c + bro_inet_ntop.c cq.c - md5.c patricia.c setsignal.c PacketDumper.cc strsep.c modp_numtoa.c - ${dns_SRCS} - ${openssl_SRCS} + + threading/BasicThread.cc + threading/Manager.cc + threading/MsgThread.cc + threading/SerialTypes.cc + + logging/Manager.cc + logging/WriterBackend.cc + logging/WriterFrontend.cc + logging/writers/Ascii.cc + logging/writers/DataSeries.cc + logging/writers/ElasticSearch.cc + logging/writers/None.cc + + input/Manager.cc + input/ReaderBackend.cc + input/ReaderFrontend.cc + input/readers/Ascii.cc + input/readers/Raw.cc + input/readers/Benchmark.cc + + nb_dns.c + digest.h ) collect_headers(bro_HEADERS ${bro_SRCS}) -add_definitions(-DBRO_SCRIPT_INSTALL_PATH="${BRO_SCRIPT_INSTALL_PATH}") -add_definitions(-DBRO_SCRIPT_SOURCE_PATH="${BRO_SCRIPT_SOURCE_PATH}") -add_definitions(-DBRO_BUILD_PATH="${CMAKE_CURRENT_BINARY_DIR}") - add_executable(bro ${bro_SRCS} ${bro_HEADERS}) -set(brolibs - ${BinPAC_LIBRARY} - ${PCAP_LIBRARY} - ${OpenSSL_LIBRARIES} - ${BIND_LIBRARY} - ${OPTLIBS} -) - -target_link_libraries(bro ${brolibs}) +target_link_libraries(bro ${brodeps} ${CMAKE_THREAD_LIBS_INIT}) install(TARGETS bro DESTINATION bin) install(FILES ${INSTALL_BIF_OUTPUTS} DESTINATION ${BRO_SCRIPT_INSTALL_PATH}/base) diff --git a/src/ChunkedIO.cc b/src/ChunkedIO.cc index ff84a343c7..2c766c7eb1 100644 --- a/src/ChunkedIO.cc +++ b/src/ChunkedIO.cc @@ -76,7 +76,7 @@ void ChunkedIO::DumpDebugData(const char* basefnname, bool want_reads) ChunkedIOFd io(fd, "dump-file"); io.Write(*i); io.Flush(); - close(fd); + safe_close(fd); } l->clear(); @@ -127,7 +127,7 @@ ChunkedIOFd::~ChunkedIOFd() delete [] read_buffer; delete [] write_buffer; - close(fd); + safe_close(fd); if ( partial ) { @@ -686,7 +686,7 @@ ChunkedIOSSL::~ChunkedIOSSL() ssl = 0; } - close(socket); + safe_close(socket); } @@ -1170,8 +1170,6 @@ void ChunkedIOSSL::Stats(char* buffer, int length) ChunkedIO::Stats(buffer + i, length - i); } -#ifdef HAVE_LIBZ - bool CompressedChunkedIO::Init() { zin.zalloc = 0; @@ -1348,5 +1346,3 @@ void CompressedChunkedIO::Stats(char* buffer, int length) io->Stats(buffer + i, length - i); buffer[length-1] = '\0'; } - -#endif /* HAVE_LIBZ */ diff --git a/src/ChunkedIO.h b/src/ChunkedIO.h index ca95f4b40b..56b5656945 100644 --- a/src/ChunkedIO.h +++ b/src/ChunkedIO.h @@ -287,8 +287,6 @@ private: static SSL_CTX* ctx; }; -#ifdef HAVE_LIBZ - #include // 
Wrapper class around a another ChunkedIO which the (un-)compresses data. @@ -335,6 +333,4 @@ protected: unsigned long uncompressed_bytes_written; }; -#endif /* HAVE_LIBZ */ - #endif diff --git a/src/CompHash.cc b/src/CompHash.cc index 148c8384ee..86677f9719 100644 --- a/src/CompHash.cc +++ b/src/CompHash.cc @@ -107,40 +107,18 @@ char* CompositeHash::SingleValHash(int type_check, char* kp0, case TYPE_INTERNAL_ADDR: { - // Use uint32 instead of int, because 'int' is not - // guaranteed to be 32-bit. uint32* kp = AlignAndPadType(kp0); -#ifdef BROv6 - const addr_type av = v->AsAddr(); - kp[0] = av[0]; - kp[1] = av[1]; - kp[2] = av[2]; - kp[3] = av[3]; + v->AsAddr().CopyIPv6(kp); kp1 = reinterpret_cast(kp+4); -#else - *kp = v->AsAddr(); - kp1 = reinterpret_cast(kp+1); -#endif } break; case TYPE_INTERNAL_SUBNET: { uint32* kp = AlignAndPadType(kp0); -#ifdef BROv6 - const subnet_type* sv = v->AsSubNet(); - kp[0] = sv->net[0]; - kp[1] = sv->net[1]; - kp[2] = sv->net[2]; - kp[3] = sv->net[3]; - kp[4] = sv->width; + v->AsSubNet().Prefix().CopyIPv6(kp); + kp[4] = v->AsSubNet().Length(); kp1 = reinterpret_cast(kp+5); -#else - const subnet_type* sv = v->AsSubNet(); - kp[0] = sv->net; - kp[1] = sv->width; - kp1 = reinterpret_cast(kp+2); -#endif } break; @@ -155,14 +133,16 @@ char* CompositeHash::SingleValHash(int type_check, char* kp0, case TYPE_INTERNAL_VOID: case TYPE_INTERNAL_OTHER: { - if ( v->Type()->Tag() == TYPE_FUNC ) + switch ( v->Type()->Tag() ) { + case TYPE_FUNC: { uint32* kp = AlignAndPadType(kp0); *kp = v->AsFunc()->GetUniqueFuncID(); kp1 = reinterpret_cast(kp+1); + break; } - else if ( v->Type()->Tag() == TYPE_RECORD ) + case TYPE_RECORD: { char* kp = kp0; RecordVal* rv = v->AsRecordVal(); @@ -186,14 +166,87 @@ char* CompositeHash::SingleValHash(int type_check, char* kp0, } kp1 = kp; + break; } - else + + case TYPE_TABLE: + { + int* kp = AlignAndPadType(kp0); + TableVal* tv = v->AsTableVal(); + ListVal* lv = tv->ConvertToList(); + *kp = tv->Size(); + kp1 = reinterpret_cast(kp+1); + for ( int i = 0; i < tv->Size(); ++i ) + { + Val* key = lv->Index(i); + if ( ! (kp1 = SingleValHash(type_check, kp1, key->Type(), key, + false)) ) + return 0; + + if ( ! v->Type()->IsSet() ) + { + Val* val = tv->Lookup(key); + if ( ! (kp1 = SingleValHash(type_check, kp1, val->Type(), + val, false)) ) + return 0; + } + } + } + break; + + case TYPE_VECTOR: + { + unsigned int* kp = AlignAndPadType(kp0); + VectorVal* vv = v->AsVectorVal(); + VectorType* vt = v->Type()->AsVectorType(); + vector* indices = v->AsVector(); + *kp = vv->Size(); + kp1 = reinterpret_cast(kp+1); + for ( unsigned int i = 0; i < vv->Size(); ++i ) + { + Val* val = vv->Lookup(i); + unsigned int* kp = AlignAndPadType(kp1); + *kp = i; + kp1 = reinterpret_cast(kp+1); + kp = AlignAndPadType(kp1); + *kp = val ? 1 : 0; + kp1 = reinterpret_cast(kp+1); + + if ( val ) + { + if ( ! (kp1 = SingleValHash(type_check, kp1, + vt->YieldType(), val, false)) ) + return 0; + } + } + } + break; + + case TYPE_LIST: + { + int* kp = AlignAndPadType(kp0); + ListVal* lv = v->AsListVal(); + *kp = lv->Length(); + kp1 = reinterpret_cast(kp+1); + for ( int i = 0; i < lv->Length(); ++i ) + { + Val* v = lv->Index(i); + if ( ! 
(kp1 = SingleValHash(type_check, kp1, v->Type(), v, + false)) ) + return 0; + } + } + break; + + default: { reporter->InternalError("bad index type in CompositeHash::SingleValHash"); return 0; } } - break; + + break; // case TYPE_INTERNAL_VOID/OTHER + } case TYPE_INTERNAL_STRING: { @@ -283,26 +336,16 @@ HashKey* CompositeHash::ComputeSingletonHash(const Val* v, int type_check) const if ( type_check && v->Type()->InternalType() != singleton_tag ) return 0; - uint32 tmp_addr; switch ( singleton_tag ) { case TYPE_INTERNAL_INT: case TYPE_INTERNAL_UNSIGNED: return new HashKey(v->ForceAsInt()); case TYPE_INTERNAL_ADDR: -#ifdef BROv6 - return new HashKey(v->AsAddr(), 4); -#else - return new HashKey(v->AsAddr()); -#endif + return v->AsAddr().GetHashKey(); case TYPE_INTERNAL_SUBNET: -#ifdef BROv6 - return new HashKey((const uint32*) v->AsSubNet(), 5); -#else - return new HashKey((const uint32*) v->AsSubNet(), 2); - -#endif + return v->AsSubNet().GetHashKey(); case TYPE_INTERNAL_DOUBLE: return new HashKey(v->InternalDouble()); @@ -350,22 +393,13 @@ int CompositeHash::SingleTypeKeySize(BroType* bt, const Val* v, break; case TYPE_INTERNAL_ADDR: -#ifdef BROv6 sz = SizeAlign(sz, sizeof(uint32)); sz += sizeof(uint32) * 3; // to make a total of 4 words -#else - sz = SizeAlign(sz, sizeof(uint32)); -#endif break; case TYPE_INTERNAL_SUBNET: -#ifdef BROv6 sz = SizeAlign(sz, sizeof(uint32)); sz += sizeof(uint32) * 4; // to make a total of 5 words -#else - sz = SizeAlign(sz, sizeof(uint32)); - sz += sizeof(uint32); // make room for width -#endif break; case TYPE_INTERNAL_DOUBLE: @@ -375,10 +409,14 @@ int CompositeHash::SingleTypeKeySize(BroType* bt, const Val* v, case TYPE_INTERNAL_VOID: case TYPE_INTERNAL_OTHER: { - if ( bt->Tag() == TYPE_FUNC ) + switch ( bt->Tag() ) { + case TYPE_FUNC: + { sz = SizeAlign(sz, sizeof(uint32)); + break; + } - else if ( bt->Tag() == TYPE_RECORD ) + case TYPE_RECORD: { const RecordVal* rv = v ? v->AsRecordVal() : 0; RecordType* rt = bt->AsRecordType(); @@ -396,14 +434,81 @@ int CompositeHash::SingleTypeKeySize(BroType* bt, const Val* v, if ( ! sz ) return 0; } + + break; } - else + + case TYPE_TABLE: + { + if ( ! v ) + return (optional && ! calc_static_size) ? sz : 0; + + sz = SizeAlign(sz, sizeof(int)); + TableVal* tv = const_cast(v->AsTableVal()); + ListVal* lv = tv->ConvertToList(); + for ( int i = 0; i < tv->Size(); ++i ) + { + Val* key = lv->Index(i); + sz = SingleTypeKeySize(key->Type(), key, type_check, sz, false, + calc_static_size); + if ( ! sz ) return 0; + if ( ! bt->IsSet() ) + { + Val* val = tv->Lookup(key); + sz = SingleTypeKeySize(val->Type(), val, type_check, sz, + false, calc_static_size); + if ( ! sz ) return 0; + } + } + + break; + } + + case TYPE_VECTOR: + { + if ( ! v ) + return (optional && ! calc_static_size) ? sz : 0; + + sz = SizeAlign(sz, sizeof(unsigned int)); + VectorVal* vv = const_cast(v->AsVectorVal()); + for ( unsigned int i = 0; i < vv->Size(); ++i ) + { + Val* val = vv->Lookup(i); + sz = SizeAlign(sz, sizeof(unsigned int)); + sz = SizeAlign(sz, sizeof(unsigned int)); + if ( val ) + sz = SingleTypeKeySize(bt->AsVectorType()->YieldType(), + val, type_check, sz, false, + calc_static_size); + if ( ! sz ) return 0; + } + + break; + } + + case TYPE_LIST: + { + sz = SizeAlign(sz, sizeof(int)); + ListVal* lv = const_cast(v->AsListVal()); + for ( int i = 0; i < lv->Length(); ++i ) + { + sz = SingleTypeKeySize(lv->Index(i)->Type(), lv->Index(i), + type_check, sz, false, calc_static_size); + if ( ! 
sz) return 0; + } + + break; + } + + default: { reporter->InternalError("bad index type in CompositeHash::CompositeHash"); return 0; } } - break; + + break; // case TYPE_INTERNAL_VOID/OTHER + } case TYPE_INTERNAL_STRING: if ( ! v ) @@ -602,16 +707,13 @@ const char* CompositeHash::RecoverOneVal(const HashKey* k, const char* kp0, case TYPE_INTERNAL_ADDR: { const uint32* const kp = AlignType(kp0); -#ifdef BROv6 - const_addr_type addr_val = kp; kp1 = reinterpret_cast(kp+4); -#else - const_addr_type addr_val = *kp; - kp1 = reinterpret_cast(kp+1); -#endif + + IPAddr addr(IPv6, kp, IPAddr::Network); + switch ( tag ) { case TYPE_ADDR: - pval = new AddrVal(addr_val); + pval = new AddrVal(addr); break; default: @@ -624,19 +726,17 @@ const char* CompositeHash::RecoverOneVal(const HashKey* k, const char* kp0, case TYPE_INTERNAL_SUBNET: { - const subnet_type* const kp = - reinterpret_cast( - AlignType(kp0)); - kp1 = reinterpret_cast(kp+1); - - pval = new SubNetVal(kp->net, kp->width); + const uint32* const kp = AlignType(kp0); + kp1 = reinterpret_cast(kp+5); + pval = new SubNetVal(kp, kp[4]); } break; case TYPE_INTERNAL_VOID: case TYPE_INTERNAL_OTHER: { - if ( t->Tag() == TYPE_FUNC ) + switch ( t->Tag() ) { + case TYPE_FUNC: { const uint32* const kp = AlignType(kp0); kp1 = reinterpret_cast(kp+1); @@ -662,8 +762,9 @@ const char* CompositeHash::RecoverOneVal(const HashKey* k, const char* kp0, pval->Type()->Tag() != TYPE_FUNC ) reporter->InternalError("inconsistent aggregate Val in CompositeHash::RecoverOneVal()"); } + break; - else if ( t->Tag() == TYPE_RECORD ) + case TYPE_RECORD: { const char* kp = kp0; RecordType* rt = t->AsRecordType(); @@ -700,11 +801,91 @@ const char* CompositeHash::RecoverOneVal(const HashKey* k, const char* kp0, pval = rv; kp1 = kp; } - else + break; + + case TYPE_TABLE: + { + int n; + const int* const kp = AlignType(kp0); + n = *kp; + kp1 = reinterpret_cast(kp+1); + TableType* tt = t->AsTableType(); + TableVal* tv = new TableVal(tt); + vector keys, values; + for ( int i = 0; i < n; ++i ) + { + Val* key; + kp1 = RecoverOneVal(k, kp1, k_end, tt->Indices(), key, false); + keys.push_back(key); + if ( ! t->IsSet() ) + { + Val* value; + kp1 = RecoverOneVal(k, kp1, k_end, tt->YieldType(), value, + false); + values.push_back(value); + } + } + + for ( int i = 0; i < n; ++i ) + tv->Assign(keys[i], t->IsSet() ? 
0 : values[i]); + + pval = tv; + } + break; + + case TYPE_VECTOR: + { + unsigned int n; + const unsigned int* kp = AlignType(kp0); + n = *kp; + kp1 = reinterpret_cast(kp+1); + VectorType* vt = t->AsVectorType(); + VectorVal* vv = new VectorVal(vt); + for ( unsigned int i = 0; i < n; ++i ) + { + kp = AlignType(kp1); + unsigned int index = *kp; + kp1 = reinterpret_cast(kp+1); + kp = AlignType(kp1); + unsigned int have_val = *kp; + kp1 = reinterpret_cast(kp+1); + Val* value = 0; + if ( have_val ) + kp1 = RecoverOneVal(k, kp1, k_end, vt->YieldType(), value, + false); + vv->Assign(index, value, 0); + } + + pval = vv; + } + break; + + case TYPE_LIST: + { + int n; + const int* const kp = AlignType(kp0); + n = *kp; + kp1 = reinterpret_cast(kp+1); + TypeList* tl = t->AsTypeList(); + ListVal* lv = new ListVal(TYPE_ANY); + for ( int i = 0; i < n; ++i ) + { + Val* v; + BroType* it = (*tl->Types())[i]; + kp1 = RecoverOneVal(k, kp1, k_end, it, v, false); + lv->Append(v); + } + + pval = lv; + } + break; + + default: { reporter->InternalError("bad index type in CompositeHash::DescribeKey"); } } + } break; case TYPE_INTERNAL_STRING: diff --git a/src/Conn.cc b/src/Conn.cc index df59b1037a..bc2e7fb5cf 100644 --- a/src/Conn.cc +++ b/src/Conn.cc @@ -13,32 +13,7 @@ #include "Timer.h" #include "PIA.h" #include "binpac.h" - -HashKey* ConnID::BuildConnKey() const - { - Key key; - - // Lookup up connection based on canonical ordering, which is - // the smaller of and - // followed by the other. - if ( is_one_way || - addr_port_canon_lt(src_addr, src_port, dst_addr, dst_port) ) - { - copy_addr(src_addr, key.ip1); - copy_addr(dst_addr, key.ip2); - key.port1 = src_port; - key.port2 = dst_port; - } - else - { - copy_addr(dst_addr, key.ip1); - copy_addr(src_addr, key.ip2); - key.port1 = dst_port; - key.port2 = src_port; - } - - return new HashKey(&key, sizeof(key)); - } +#include "TunnelEncapsulation.h" void ConnectionTimer::Init(Connection* arg_conn, timer_func arg_timer, int arg_do_expire) @@ -137,17 +112,22 @@ unsigned int Connection::external_connections = 0; IMPLEMENT_SERIAL(Connection, SER_CONNECTION); -Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id) +Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id, + uint32 flow, const EncapsulationStack* arg_encap) { sessions = s; key = k; start_time = last_time = t; - copy_addr(id->src_addr, orig_addr); - copy_addr(id->dst_addr, resp_addr); + orig_addr = id->src_addr; + resp_addr = id->dst_addr; orig_port = id->src_port; resp_port = id->dst_port; proto = TRANSPORT_UNKNOWN; + orig_flow_label = flow; + resp_flow_label = 0; + saw_first_orig_packet = 1; + saw_first_resp_packet = 0; conn_val = 0; login_conn = 0; @@ -181,6 +161,11 @@ Connection::Connection(NetSessions* s, HashKey* k, double t, const ConnID* id) uid = 0; // Will set later. 
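// Illustrative sketch (not part of this patch): the removed
// ConnID::BuildConnKey() above (replaced by BuildConnIDHashKey()) hashes a
// connection under a canonical endpoint ordering so that both directions of
// a flow map to the same key. A minimal standalone version of that rule for
// an IPv4 tuple; std::tuple stands in for Bro's HashKey and the names here
// are invented for the example.

#include <cassert>
#include <cstdint>
#include <tuple>

struct ConnTuple
    {
    uint32_t src_addr, dst_addr; // network order in the real code
    uint16_t src_port, dst_port;
    };

// Put the smaller (addr, port) endpoint first, unless the caller knows the
// connection is one-way and wants to keep the original order.
static std::tuple<uint32_t, uint32_t, uint16_t, uint16_t>
canonical_key(const ConnTuple& t, bool is_one_way = false)
    {
    bool src_first = is_one_way ||
                     t.src_addr < t.dst_addr ||
                     (t.src_addr == t.dst_addr && t.src_port < t.dst_port);

    if ( src_first )
        return std::make_tuple(t.src_addr, t.dst_addr,
                               t.src_port, t.dst_port);

    return std::make_tuple(t.dst_addr, t.src_addr,
                           t.dst_port, t.src_port);
    }

int main()
    {
    ConnTuple a = { 0x0a000001, 0x0a000002, 50000, 80 };
    ConnTuple b = { 0x0a000002, 0x0a000001, 80, 50000 };

    // Both directions of the same connection produce an identical key.
    assert(canonical_key(a) == canonical_key(b));
    return 0;
    }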
+ if ( arg_encap ) + encapsulation = new EncapsulationStack(*arg_encap); + else + encapsulation = 0; + if ( conn_timer_mgr ) { ++external_connections; @@ -208,12 +193,40 @@ Connection::~Connection() delete key; delete root_analyzer; delete conn_timer_mgr; + delete encapsulation; --current_connections; if ( conn_timer_mgr ) --external_connections; } +void Connection::CheckEncapsulation(const EncapsulationStack* arg_encap) + { + if ( encapsulation && arg_encap ) + { + if ( *encapsulation != *arg_encap ) + { + Event(tunnel_changed, 0, arg_encap->GetVectorVal()); + delete encapsulation; + encapsulation = new EncapsulationStack(*arg_encap); + } + } + + else if ( encapsulation ) + { + EncapsulationStack empty; + Event(tunnel_changed, 0, empty.GetVectorVal()); + delete encapsulation; + encapsulation = 0; + } + + else if ( arg_encap ) + { + Event(tunnel_changed, 0, arg_encap->GetVectorVal()); + encapsulation = new EncapsulationStack(*arg_encap); + } + } + void Connection::Done() { finished = 1; @@ -349,10 +362,12 @@ RecordVal* Connection::BuildConnVal() RecordVal *orig_endp = new RecordVal(endpoint); orig_endp->Assign(0, new Val(0, TYPE_COUNT)); orig_endp->Assign(1, new Val(0, TYPE_COUNT)); + orig_endp->Assign(4, new Val(orig_flow_label, TYPE_COUNT)); RecordVal *resp_endp = new RecordVal(endpoint); resp_endp->Assign(0, new Val(0, TYPE_COUNT)); resp_endp->Assign(1, new Val(0, TYPE_COUNT)); + resp_endp->Assign(4, new Val(resp_flow_label, TYPE_COUNT)); conn_val->Assign(0, id_val); conn_val->Assign(1, orig_endp); @@ -368,6 +383,9 @@ RecordVal* Connection::BuildConnVal() char tmp[20]; conn_val->Assign(9, new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62))); + + if ( encapsulation && encapsulation->Depth() > 0 ) + conn_val->Assign(10, encapsulation->GetVectorVal()); } if ( root_analyzer ) @@ -521,7 +539,7 @@ Val* Connection::BuildVersionVal(const char* s, int len) return sw; } -int Connection::VersionFoundEvent(const uint32* addr, const char* s, int len, +int Connection::VersionFoundEvent(const IPAddr& addr, const char* s, int len, Analyzer* analyzer) { if ( ! software_version_found && ! software_parse_error ) @@ -559,7 +577,7 @@ int Connection::VersionFoundEvent(const uint32* addr, const char* s, int len, return 1; } -int Connection::UnparsedVersionFoundEvent(const uint32* addr, +int Connection::UnparsedVersionFoundEvent(const IPAddr& addr, const char* full, int len, Analyzer* analyzer) { // Skip leading white space. @@ -693,15 +711,22 @@ TimerMgr* Connection::GetTimerMgr() const void Connection::FlipRoles() { - uint32 tmp_addr[NUM_ADDR_WORDS]; - copy_addr(resp_addr, tmp_addr); - copy_addr(orig_addr, resp_addr); - copy_addr(tmp_addr, orig_addr); + IPAddr tmp_addr = resp_addr; + orig_addr = resp_addr; + resp_addr = tmp_addr; uint32 tmp_port = resp_port; resp_port = orig_port; orig_port = tmp_port; + bool tmp_bool = saw_first_resp_packet; + saw_first_resp_packet = saw_first_orig_packet; + saw_first_orig_packet = tmp_bool; + + uint32 tmp_flow = resp_flow_label; + resp_flow_label = orig_flow_label; + orig_flow_label = tmp_flow; + Unref(conn_val); conn_val = 0; @@ -752,14 +777,14 @@ void Connection::Describe(ODesc* d) const } d->SP(); - d->Add(dotted_addr(orig_addr)); + d->Add(orig_addr); d->Add(":"); d->Add(ntohs(orig_port)); d->SP(); d->AddSP("->"); - d->Add(dotted_addr(resp_addr)); + d->Add(resp_addr); d->Add(":"); d->Add(ntohs(resp_port)); @@ -782,9 +807,8 @@ bool Connection::DoSerialize(SerialInfo* info) const // First we write the members which are needed to // create the HashKey. 
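// Illustrative sketch (not part of this patch): BuildConnVal() above renders
// the 64-bit connection UID with uitoa_n(uid, tmp, sizeof(tmp), 62), i.e. as
// a base-62 string for conn.log's uid field. A standalone equivalent; the
// digit alphabet and its ordering are an assumption, uitoa_n() may choose a
// different one.

#include <cstdint>
#include <cstdio>
#include <string>

static std::string encode_base62(uint64_t v)
    {
    static const char digits[] =
        "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

    if ( v == 0 )
        return "0";

    std::string out;

    while ( v )
        {
        out.insert(out.begin(), digits[v % 62]);
        v /= 62;
        }

    return out;
    }

int main()
    {
    // A 64-bit UID never needs more than 11 base-62 digits.
    printf("%s\n", encode_base62(1234567890123456789ULL).c_str());
    return 0;
    }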
- for ( int j = 0; j < NUM_ADDR_WORDS; ++j ) - if ( ! SERIALIZE(orig_addr[j]) || ! SERIALIZE(resp_addr[j]) ) - return false; + if ( ! SERIALIZE(orig_addr) || ! SERIALIZE(resp_addr) ) + return false; if ( ! SERIALIZE(orig_port) || ! SERIALIZE(resp_port) ) return false; @@ -830,21 +854,21 @@ bool Connection::DoUnserialize(UnserialInfo* info) // Build the hash key first. Some of the recursive *::Unserialize() // functions may need it. - for ( int i = 0; i < NUM_ADDR_WORDS; ++i ) - if ( ! UNSERIALIZE(&orig_addr[i]) || ! UNSERIALIZE(&resp_addr[i]) ) - goto error; + ConnID id; + + if ( ! UNSERIALIZE(&orig_addr) || ! UNSERIALIZE(&resp_addr) ) + goto error; if ( ! UNSERIALIZE(&orig_port) || ! UNSERIALIZE(&resp_port) ) goto error; - ConnID id; id.src_addr = orig_addr; id.dst_addr = resp_addr; // This doesn't work for ICMP. But I guess this is not really important. id.src_port = orig_port; id.dst_port = resp_port; id.is_one_way = 0; // ### incorrect for ICMP - key = id.BuildConnKey(); + key = BuildConnIDHashKey(id); int len; if ( ! UNSERIALIZE(&len) ) @@ -910,3 +934,35 @@ void Connection::SetRootAnalyzer(TransportLayerAnalyzer* analyzer, PIA* pia) root_analyzer = analyzer; primary_PIA = pia; } + +void Connection::CheckFlowLabel(bool is_orig, uint32 flow_label) + { + uint32& my_flow_label = is_orig ? orig_flow_label : resp_flow_label; + + if ( my_flow_label != flow_label ) + { + if ( conn_val ) + { + RecordVal *endp = conn_val->Lookup(is_orig ? 1 : 2)->AsRecordVal(); + endp->Assign(4, new Val(flow_label, TYPE_COUNT)); + } + + if ( connection_flow_label_changed && + (is_orig ? saw_first_orig_packet : saw_first_resp_packet) ) + { + val_list* vl = new val_list(4); + vl->append(BuildConnVal()); + vl->append(new Val(is_orig, TYPE_BOOL)); + vl->append(new Val(my_flow_label, TYPE_COUNT)); + vl->append(new Val(flow_label, TYPE_COUNT)); + ConnectionEvent(connection_flow_label_changed, 0, vl); + } + + my_flow_label = flow_label; + } + + if ( is_orig ) + saw_first_orig_packet = 1; + else + saw_first_resp_packet = 1; + } diff --git a/src/Conn.h b/src/Conn.h index 8e90d6a9c3..782d41a801 100644 --- a/src/Conn.h +++ b/src/Conn.h @@ -12,6 +12,8 @@ #include "PersistenceSerializer.h" #include "RuleMatcher.h" #include "AnalyzerTags.h" +#include "IPAddr.h" +#include "TunnelEncapsulation.h" class Connection; class ConnectionTimer; @@ -32,61 +34,34 @@ typedef enum { typedef void (Connection::*timer_func)(double t); struct ConnID { - const uint32* src_addr; - const uint32* dst_addr; + IPAddr src_addr; + IPAddr dst_addr; uint32 src_port; uint32 dst_port; - bool is_one_way; // if true, don't canonicalize - - // Returns a ListVal suitable for looking up a connection in - // a hash table. addr/ports are expected to be in network order. - // Unless is_one_way is true, the lookup sorts src and dst, - // so src_addr/src_port and dst_addr/dst_port just have to - // reflect the two different sides of the connection, - // neither has to be the particular source/destination - // or originator/responder. - HashKey* BuildConnKey() const; - - // The structure used internally for hashing. 
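// Illustrative sketch (not part of this patch): Connection::CheckFlowLabel()
// above remembers the most recent IPv6 flow label per direction and raises
// connection_flow_label_changed when it changes after that direction's first
// packet. The same bookkeeping in standalone form, with a printf standing in
// for the event.

#include <cstdint>
#include <cstdio>

class FlowLabelTracker
    {
public:
    FlowLabelTracker()
        : orig_label(0), resp_label(0), saw_orig(false), saw_resp(false)
        { }

    void Check(bool is_orig, uint32_t label)
        {
        uint32_t& current = is_orig ? orig_label : resp_label;
        bool& saw_first = is_orig ? saw_orig : saw_resp;

        if ( current != label && saw_first )
            printf("flow label changed (%s): %u -> %u\n",
                   is_orig ? "orig" : "resp",
                   (unsigned) current, (unsigned) label);

        current = label;
        saw_first = true;
        }

private:
    uint32_t orig_label, resp_label;
    bool saw_orig, saw_resp;
    };

int main()
    {
    FlowLabelTracker t;
    t.Check(true, 0x12345); // first originator packet: just recorded
    t.Check(true, 0x12345); // unchanged: silent
    t.Check(true, 0x54321); // changed after the first packet: reported
    t.Check(false, 0);      // first responder packet: just recorded
    return 0;
    }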
- struct Key { - uint32 ip1[NUM_ADDR_WORDS]; - uint32 ip2[NUM_ADDR_WORDS]; - uint16 port1; - uint16 port2; - }; + bool is_one_way; // if true, don't canonicalize order }; -static inline int addr_port_canon_lt(const uint32* a1, uint32 p1, - const uint32* a2, uint32 p2) +static inline int addr_port_canon_lt(const IPAddr& addr1, uint32 p1, + const IPAddr& addr2, uint32 p2) { -#ifdef BROv6 - // Because it's a canonical ordering, not a strict ordering, - // we can choose to give more weight to the least significant - // word than to the most significant word. This matters - // because for the common case of IPv4 addresses embedded in - // a IPv6 address, the top three words are identical, so we can - // save a few cycles by first testing the bottom word. - return a1[3] < a2[3] || - (a1[3] == a2[3] && - (a1[2] < a2[2] || - (a1[2] == a2[2] && - (a1[1] < a2[1] || - (a1[1] == a2[1] && - (a1[0] < a2[0] || - (a1[0] == a2[0] && - p1 < p2))))))); -#else - return *a1 < *a2 || (*a1 == *a2 && p1 < p2); -#endif + return addr1 < addr2 || (addr1 == addr2 && p1 < p2); } class Analyzer; class Connection : public BroObj { public: - Connection(NetSessions* s, HashKey* k, double t, const ConnID* id); + Connection(NetSessions* s, HashKey* k, double t, const ConnID* id, + uint32 flow, const EncapsulationStack* arg_encap); virtual ~Connection(); + // Invoked when an encapsulation is discovered. It records the + // encapsulation with the connection and raises a "tunnel_changed" + // event if it's different from the previous encapsulation (or the + // first encountered). encap can be null to indicate no + // encapsulation. + void CheckEncapsulation(const EncapsulationStack* encap); + // Invoked when connection is about to be removed. Use Ref(this) // inside Done to keep the connection object around (though it'll // no longer be accessible from the dictionary of active @@ -119,8 +94,8 @@ public: double LastTime() const { return last_time; } void SetLastTime(double t) { last_time = t; } - const uint32* OrigAddr() const { return orig_addr; } - const uint32* RespAddr() const { return resp_addr; } + const IPAddr& OrigAddr() const { return orig_addr; } + const IPAddr& RespAddr() const { return resp_addr; } uint32 OrigPort() const { return orig_port; } uint32 RespPort() const { return resp_port; } @@ -185,11 +160,11 @@ public: // Raises a software_version_found event based on the // given string (returns false if it's not parseable). - int VersionFoundEvent(const uint32* addr, const char* s, int len, + int VersionFoundEvent(const IPAddr& addr, const char* s, int len, Analyzer* analyzer = 0); // Raises a software_unparsed_version_found event. - int UnparsedVersionFoundEvent(const uint32* addr, + int UnparsedVersionFoundEvent(const IPAddr& addr, const char* full_descr, int len, Analyzer* analyzer); void Event(EventHandlerPtr f, Analyzer* analyzer, const char* name = 0); @@ -273,32 +248,15 @@ public: // Sets the transport protocol in use. void SetTransport(TransportProto arg_proto) { proto = arg_proto; } - // If the connection compressor is activated, we need a special memory - // layout for connections. (See ConnCompressor.h) - void* operator new(size_t size) - { - if ( ! use_connection_compressor ) - return ::operator new(size); - - void* c = ::operator new(size + 4); - - // We have to turn off the is_pending bit. By setting the - // first four bytes to zero, we'll achieve this. - *((uint32*) c) = 0; - - return ((char *) c) + 4; - } - - void operator delete(void* ptr) - { - if ( ! 
use_connection_compressor ) - ::operator delete(ptr); - else - ::operator delete(((char*) ptr) - 4); - } - void SetUID(uint64 arg_uid) { uid = arg_uid; } + uint64 GetUID() const { return uid; } + + const EncapsulationStack* GetEncapsulation() const + { return encapsulation; } + + void CheckFlowLabel(bool is_orig, uint32 flow_label); + protected: Connection() { persistent = 0; } @@ -325,14 +283,16 @@ protected: TimerMgr::Tag* conn_timer_mgr; timer_list timers; - uint32 orig_addr[NUM_ADDR_WORDS]; // in network order - uint32 resp_addr[NUM_ADDR_WORDS]; // in network order + IPAddr orig_addr; + IPAddr resp_addr; uint32 orig_port, resp_port; // in network order TransportProto proto; + uint32 orig_flow_label, resp_flow_label; // most recent IPv6 flow labels double start_time, last_time; double inactivity_timeout; RecordVal* conn_val; LoginConn* login_conn; // either nil, or this + const EncapsulationStack* encapsulation; // tunnels int suppress_event; // suppress certain events to once per conn. unsigned int installed_status_timer:1; @@ -344,6 +304,7 @@ protected: unsigned int record_packets:1, record_contents:1; unsigned int persistent:1; unsigned int record_current_packet:1, record_current_content:1; + unsigned int saw_first_orig_packet:1, saw_first_resp_packet:1; // Count number of connections. static unsigned int total_connections; diff --git a/src/ConnCompressor.cc b/src/ConnCompressor.cc deleted file mode 100644 index 7a04cb4f0b..0000000000 --- a/src/ConnCompressor.cc +++ /dev/null @@ -1,1041 +0,0 @@ -#include - -#include "ConnCompressor.h" -#include "Event.h" -#include "ConnSizeAnalyzer.h" -#include "net_util.h" - -// The basic model of the compressor is to wait for an answer before -// instantiating full connection state. Until we see a reply, only a minimal -// amount of state is stored. This has some consequences: -// -// - We try to mimic TCP.cc as close as possible, but this works only to a -// certain degree; e.g., we don't consider any of the wait-a-bit-after- -// the-connection-has-been-closed timers. That means we will get differences -// in connection semantics if the compressor is turned on. On the other -// hand, these differences will occur only for not well-established -// sessions, and experience shows that for these kinds of connections -// semantics are ill-defined in any case. -// -// - If an originator sends multiple different packets before we see a reply, -// we lose the information about additional packets (more precisely, we -// merge the packet headers into one). In particular, we lose any payload. -// This is a major problem if we see only one direction of a connection. -// When analyzing only SYN/FIN/RSTs this leads to differences if we miss -// the SYN/ACK. -// -// To avoid losing payload, there is the option cc_instantiate_on_data: -// if enabled and the originator sends a non-control packet after the -// initial packet, we instantiate full connection state. -// -// - We lose some of the information contained in initial packets (e.g., most -// IP/TCP options and any payload). If you depend on them, you don't -// want to use the compressor. -// -// Optionally, the compressor can take care only of initial SYNs and -// instantiate full connection state for all other connection setups. -// To enable, set cc_handle_only_syns to true. -// -// - The compressor may handle refused connections (i.e., initial packets -// followed by RST from responder) itself. Again, this leads to differences -// from default TCP processing and is therefore turned off by default. 
-// To enable, set cc_handle_resets to true. -// -// - We don't match signatures on connections which are completely handled -// by the compressor. Matching would require significant additional state -// w/o being very helpful. -// -// - If use_conn_size_analyzer is True, the reported counts for bytes and -// packets may not account for some packets/data that is part of those -// packets which the connection compressor handles. The error, if any, will -// however be small. - - -#ifdef DEBUG -static inline const char* fmt_conn_id(const ConnCompressor::PendingConn* c) - { - if ( c->ip1_is_src ) - return fmt_conn_id(c->key.ip1, c->key.port1, - c->key.ip2, c->key.port2); - else - return fmt_conn_id(c->key.ip2, c->key.port2, - c->key.ip1, c->key.port1); - } - -static inline const char* fmt_conn_id(const Connection* c) - { - return fmt_conn_id(c->OrigAddr(), c->OrigPort(), - c->RespAddr(), c->RespPort()); - } - -static inline const char* fmt_conn_id(const IP_Hdr* ip) - { - const struct tcphdr* tp = (const struct tcphdr*) ip->Payload(); - return fmt_conn_id(ip->SrcAddr(), tp->th_sport, - ip->DstAddr(), tp->th_dport); - } -#endif - -ConnCompressor::ConnCompressor() - { - first_block = last_block = 0; - first_non_expired = 0; - conn_val = 0; - - sizes.connections = sizes.connections_total = 0; - sizes.pending_valid = sizes.pending_total = sizes.pending_in_mem = 0; - sizes.hash_table_size = 0; - sizes.memory = 0; - } - -ConnCompressor::~ConnCompressor() - { - Block* next; - for ( Block* b = first_block; b; b = next ) - { - next = b->next; - delete b; - } - } - -Connection* ConnCompressor::NextPacket(double t, HashKey* key, const IP_Hdr* ip, - const struct pcap_pkthdr* hdr, const u_char* const pkt) - { - // Expire old stuff. - DoExpire(t); - - // Most sanity checks on header sizes are already done ... - const struct tcphdr* tp = (const struct tcphdr*) ip->Payload(); - - // ... except this one. - uint32 tcp_hdr_len = tp->th_off * 4; - if ( tcp_hdr_len > uint32(ip->TotalLen() - ip->HdrLen()) ) - { - sessions->Weird("truncated_header", hdr, pkt); - delete key; - return 0; - } - - bool external = current_iosrc->GetCurrentTag(); - ConnData* c = conns.Lookup(key); - - Unref(conn_val); - conn_val = 0; - - // Do we already have a Connection object? - if ( c && IsConnPtr(c) ) - { - Connection* conn = MakeConnPtr(c); - int consistent = 1; - - if ( external ) - { - // External, and we already have a full connection. - // That means we use the same logic as in NetSessions - // to compare the tags. - consistent = sessions->CheckConnectionTag(conn); - if ( consistent < 0 ) - { - delete key; - return 0; - } - } - - if ( ! consistent || conn->IsReuse(t, ip->Payload()) ) - { - if ( consistent ) - { - DBG_LOG(DBG_COMPRESSOR, "%s reuse", fmt_conn_id(conn)); - conn->Event(connection_reused, 0); - } - - sessions->Remove(conn); - --sizes.connections; - - return Instantiate(t, key, ip); - } - - DBG_LOG(DBG_COMPRESSOR, "%s pass through", fmt_conn_id(conn)); - delete key; - return conn; - } - - PendingConn* pending = c ? MakePendingConnPtr(c) : 0; - - if ( c && external ) - { - // External, but previous packets were not, i.e., they used - // the global timer queue. We finish the old connection - // and instantiate a full one now. 
- DBG_LOG(DBG_TM, "got packet with tag %s for already" - "known cc connection, instantiating full conn", - current_iosrc->GetCurrentTag()->c_str()); - - Event(pending, 0, connection_attempt, - TCP_ENDPOINT_INACTIVE, 0, TCP_ENDPOINT_INACTIVE); - Event(pending, 0, connection_state_remove, - TCP_ENDPOINT_INACTIVE, 0, TCP_ENDPOINT_INACTIVE); - Remove(key); - - return Instantiate(t, key, ip); - } - - if ( c && pending->invalid && - network_time < pending->time + tcp_session_timer ) - { - // The old connection has terminated sooner than - // tcp_session_timer. We assume this packet to be - // a latecomer, and ignore it. - DBG_LOG(DBG_COMPRESSOR, "%s ignored", fmt_conn_id(pending)); - sessions->DumpPacket(hdr, pkt); - delete key; - return 0; - } - - // Simulate tcp_{reset,close}_delay for initial FINs/RSTs - if ( c && ! pending->invalid && - ((pending->FIN && pending->time + tcp_close_delay < t ) || - (pending->RST && pending->time + tcp_reset_delay < t )) ) - { - DBG_LOG(DBG_COMPRESSOR, "%s closed", fmt_conn_id(pending)); - int orig_state = - pending->FIN ? TCP_ENDPOINT_CLOSED : TCP_ENDPOINT_RESET; - - Event(pending, 0, connection_partial_close, orig_state, - ip->PayloadLen() - (tp->th_off * 4), - TCP_ENDPOINT_INACTIVE); - Event(pending, 0, connection_state_remove, orig_state, - ip->PayloadLen() - (tp->th_off * 4), - TCP_ENDPOINT_INACTIVE); - - Remove(key); - - Connection* tc = FirstFromOrig(t, key, ip, tp); - if ( ! tc ) - { - delete key; - sessions->DumpPacket(hdr, pkt); - } - - return tc; - } - - Connection* tc; - - if ( ! c || pending->invalid ) - { - // First packet of a connection. - if ( c ) - Remove(key); - - if ( external ) - // External, we directly instantiate a full connection. - tc = Instantiate(t, key, ip); - else - tc = FirstFromOrig(t, key, ip, tp); - } - - else if ( addr_eq(ip->SrcAddr(), SrcAddr(pending)) && - tp->th_sport == SrcPort(pending) ) - // Another packet from originator. - tc = NextFromOrig(pending, t, key, ip, tp); - - else - // A reply. - tc = Response(pending, t, key, ip, tp); - - if ( ! tc ) - { - delete key; - sessions->DumpPacket(hdr, pkt); - } - - return tc; - } - -static int parse_tcp_options(unsigned int opt, unsigned int optlen, - const u_char* option, TCP_Analyzer* analyzer, - bool is_orig, void* cookie) - { - ConnCompressor::PendingConn* c = (ConnCompressor::PendingConn*) cookie; - - // We're only interested in window_scale. - if ( opt == 3 ) - c->window_scale = option[2]; - - return 0; - } - -Connection* ConnCompressor::FirstFromOrig(double t, HashKey* key, - const IP_Hdr* ip, const tcphdr* tp) - { - if ( cc_handle_only_syns && ! (tp->th_flags & TH_SYN) ) - return Instantiate(t, key, ip); - - // The first packet of a connection. - PendingConn* pending = MakeNewState(t); - PktHdrToPendingConn(t, key, ip, tp, pending); - - DBG_LOG(DBG_COMPRESSOR, "%s our", fmt_conn_id(pending)); - - // The created DictEntry will point directly into our PendingConn. - // So, we have to be careful when we delete it. - conns.Dictionary::Insert(&pending->key, sizeof(pending->key), - pending->hash, MakeMapPtr(pending), 0); - - // Mimic some of TCP_Analyzer's weirds for SYNs. - // To be completely precise, we'd need to check this at a few - // more locations in NextFromOrig() and Reply(). However, that - // does not really seem worth it, as this is the standard case. 
- if ( tp->th_flags & TH_SYN ) - { - if ( tp->th_flags & TH_RST ) - Weird(pending, t, "TCP_christmas"); - - if ( tp->th_flags & TH_URG ) - Weird(pending, t, "baroque_SYN"); - - int len = ip->TotalLen() - ip->HdrLen() - tp->th_off * 4; - - if ( len > 0 ) - // T/TCP definitely complicates this. - Weird(pending, t, "SYN_with_data"); - } - - if ( tp->th_flags & TH_FIN ) - { - if ( ! (tp->th_flags & TH_SYN) ) - Weird(pending, t, "spontaneous_FIN"); - } - - if ( tp->th_flags & TH_RST ) - { - if ( ! (tp->th_flags & TH_SYN) ) - Weird(pending, t, "spontaneous_RST"); - } - - ++sizes.pending_valid; - ++sizes.pending_total; - ++sizes.pending_in_mem; - - Event(pending, 0, new_connection, - TCP_ENDPOINT_INACTIVE, 0, TCP_ENDPOINT_INACTIVE); - - if ( current_iosrc->GetCurrentTag() ) - { - Val* tag = - new StringVal(current_iosrc->GetCurrentTag()->c_str()); - Event(pending, 0, connection_external, - TCP_ENDPOINT_INACTIVE, 0, TCP_ENDPOINT_INACTIVE, tag); - } - - return 0; - } - -Connection* ConnCompressor::NextFromOrig(PendingConn* pending, double t, - HashKey* key, const IP_Hdr* ip, - const tcphdr* tp) - { - // Another packet from the same host without seeing an answer so far. - DBG_LOG(DBG_COMPRESSOR, "%s same again", fmt_conn_id(pending)); - - ++pending->num_pkts; - ++pending->num_bytes_ip += ip->PayloadLen(); - - // New window scale overrides old - not great, this is a (subtle) - // evasion opportunity. - if ( TCP_Analyzer::ParseTCPOptions(tp, parse_tcp_options, 0, 0, - pending) < 0 ) - Weird(pending, t, "corrupt_tcp_options"); - - if ( tp->th_flags & TH_SYN ) - // New seq overrides old. - pending->seq = tp->th_seq; - - // Mimic TCP_Endpoint::Size() - int size = ntohl(tp->th_seq) - ntohl(pending->seq); - if ( size != 0 ) - --size; - - if ( size != 0 && (pending->FIN || (tp->th_flags & TH_FIN)) ) - --size; - - if ( size < 0 ) - // We only care about the size for broken connections. - // Surely for those it's more likely that the sequence - // numbers are confused than that they really transferred - // > 2 GB of data. Plus, for 64-bit ints these sign-extend - // up to truly huge, non-sensical unsigned values. - size = 0; - - if ( pending->SYN ) - { - // We're in state SYN_SENT or SYN_ACK_SENT. - if ( tp->th_flags & TH_RST) - { - Event(pending, t, connection_reset, - TCP_ENDPOINT_RESET, size, TCP_ENDPOINT_INACTIVE); - Event(pending, t, connection_state_remove, - TCP_ENDPOINT_RESET, size, TCP_ENDPOINT_INACTIVE); - - Invalidate(key); - return 0; - } - - else if ( tp->th_flags & TH_FIN) - { - Event(pending, t, connection_partial_close, - TCP_ENDPOINT_CLOSED, size, TCP_ENDPOINT_INACTIVE); - Event(pending, t, connection_state_remove, - TCP_ENDPOINT_CLOSED, size, TCP_ENDPOINT_INACTIVE); - Invalidate(key); - return 0; - } - - else if ( tp->th_flags & TH_SYN ) - { - if ( (tp->th_flags & TH_ACK) && ! pending->ACK ) - Weird(pending, t, "repeated_SYN_with_ack"); - } - - else - { - // A data packet without seeing a SYN/ACK first. As - // long as we stick with the principle of instantiating - // state only when we see a reply, we have to throw - // this data away. Optionally we may instantiate a - // real connection now. - - if ( cc_instantiate_on_data ) - return Instantiate(key, pending); - // else - // Weird(pending, t, "data_without_SYN_ACK"); - } - } - - else - { // We're in state INACTIVE. 
- if ( tp->th_flags & TH_RST) - { - Event(pending, t, connection_reset, - TCP_ENDPOINT_RESET, size, TCP_ENDPOINT_INACTIVE); - Event(pending, t, connection_state_remove, - TCP_ENDPOINT_RESET, size, TCP_ENDPOINT_INACTIVE); - - Invalidate(key); - return 0; - } - - else if ( tp->th_flags & TH_FIN) - { - Event(pending, t, connection_half_finished, - TCP_ENDPOINT_CLOSED, size, TCP_ENDPOINT_INACTIVE); - Event(pending, t, connection_state_remove, - TCP_ENDPOINT_CLOSED, size, TCP_ENDPOINT_INACTIVE); - - Invalidate(key); - return 0; - } - - else if ( tp->th_flags & TH_SYN ) - { - if ( ! tp->th_flags & TH_ACK ) - { - Weird(pending, t, "SYN_after_partial"); - pending->SYN = 1; - } - } - - else - // Another data packet. See discussion above. - if ( cc_instantiate_on_data ) - return Instantiate(key, pending); - - // else - // Weird(pending, t, "data_without_SYN_ACK"); - } - - return 0; - } - -Connection* ConnCompressor::Response(PendingConn* pending, double t, - HashKey* key, const IP_Hdr* ip, - const tcphdr* tp) - { - // The packet comes from the former responder. That means we are - // seeing a reply, so we are going to create a "real" connection now. - DBG_LOG(DBG_COMPRESSOR, "%s response", fmt_conn_id(pending)); - - // Optional: if it's a RST after SYN, we directly generate a - // connection_rejected and throw the state away. - if ( cc_handle_resets && (tp->th_flags & TH_RST) && pending->SYN ) - { - // See discussion of size in DoExpire(). - DBG_LOG(DBG_COMPRESSOR, "%s reset", fmt_conn_id(pending)); - - Event(pending, t, connection_reset, - TCP_ENDPOINT_SYN_SENT, 0, TCP_ENDPOINT_RESET); - Event(pending, t, connection_state_remove, - TCP_ENDPOINT_SYN_SENT, 0, TCP_ENDPOINT_RESET); - - Invalidate(key); - return 0; - } - - // If a connection's initial packet is a RST, Bro's standard TCP - // processing considers the connection done right away. We simulate - // this by instantiating a second connection in this case. The - // first one will time out eventually. - if ( pending->RST && ! pending->SYN ) - { - int orig_state = - pending->RST ? TCP_ENDPOINT_RESET : TCP_ENDPOINT_CLOSED; - Event(pending, 0, connection_attempt, - orig_state, 0, TCP_ENDPOINT_INACTIVE); - Event(pending, 0, connection_state_remove, - orig_state, 0, TCP_ENDPOINT_INACTIVE); - - // Override with current packet. - PktHdrToPendingConn(t, key, ip, tp, pending); - return 0; - } - - return Instantiate(key, pending); - } - -Connection* ConnCompressor::Instantiate(HashKey* key, PendingConn* pending) - { - // Instantantiate a Connection. - ConnID conn_id; - conn_id.src_addr = SrcAddr(pending); - conn_id.dst_addr = DstAddr(pending); - conn_id.src_port = SrcPort(pending); - conn_id.dst_port = DstPort(pending); - - pending->invalid = 1; - --sizes.pending_valid; - --sizes.pending_total; - - // Fake the first packet. - const IP_Hdr* faked_pkt = PendingConnToPacket(pending); - Connection* new_conn = sessions->NewConn(key, pending->time, &conn_id, - faked_pkt->Payload(), IPPROTO_TCP); - - if ( ! new_conn ) - { - // This connection is not to be analyzed (e.g., it may be - // a partial one). - DBG_LOG(DBG_COMPRESSOR, "%s nop", fmt_conn_id(pending)); - return 0; - } - - new_conn->SetUID(pending->uid); - - DBG_LOG(DBG_COMPRESSOR, "%s instantiated", fmt_conn_id(pending)); - - ++sizes.connections; - ++sizes.connections_total; - - if ( new_packet ) - new_conn->Event(new_packet, 0, - sessions->BuildHeader(faked_pkt->IP4_Hdr())); - - // NewConn() may have swapped originator and responder. 
- int is_orig = addr_eq(conn_id.src_addr, new_conn->OrigAddr()) && - conn_id.src_port == new_conn->OrigPort(); - - // Pass the faked packet to the connection. - const u_char* payload = faked_pkt->Payload(); - - int dummy_record_packet, dummy_record_content; - new_conn->NextPacket(pending->time, is_orig, - faked_pkt, faked_pkt->PayloadLen(), - faked_pkt->PayloadLen(), payload, - dummy_record_packet, dummy_record_content, 0, 0, 0); - - // Removing necessary because the key will be destroyed at some point. - conns.Remove(&pending->key, sizeof(pending->key), pending->hash, true); - conns.Insert(key, MakeMapPtr(new_conn)); - - return new_conn; - } - -Connection* ConnCompressor::Instantiate(double t, HashKey* key, - const IP_Hdr* ip) - { - const struct tcphdr* tp = (const struct tcphdr*) ip->Payload(); - - ConnID conn_id; - conn_id.src_addr = ip->SrcAddr(); - conn_id.dst_addr = ip->DstAddr(); - conn_id.src_port = tp->th_sport; - conn_id.dst_port = tp->th_dport; - - Connection* new_conn = - sessions->NewConn(key, t, &conn_id, ip->Payload(), IPPROTO_TCP); - - if ( ! new_conn ) - { - // This connection is not to be analyzed (e.g., it may be - // a partial one). - DBG_LOG(DBG_COMPRESSOR, "%s nop", fmt_conn_id(ip)); - return 0; - } - - DBG_LOG(DBG_COMPRESSOR, "%s instantiated", fmt_conn_id(ip)); - - conns.Insert(key, MakeMapPtr(new_conn)); - ++sizes.connections; - ++sizes.connections_total; - - if ( new_connection ) - new_conn->Event(new_connection, 0); - - if ( current_iosrc->GetCurrentTag() ) - { - Val* tag = - new StringVal(current_iosrc->GetCurrentTag()->c_str()); - new_conn->Event(connection_external, 0, tag); - } - - return new_conn; - } - -void ConnCompressor::PktHdrToPendingConn(double time, const HashKey* key, - const IP_Hdr* ip, const struct tcphdr* tp, PendingConn* c) - { - memcpy(&c->key, key->Key(), key->Size()); - - c->hash = key->Hash(); - c->ip1_is_src = addr_eq(c->key.ip1, ip->SrcAddr()) && - c->key.port1 == tp->th_sport; - c->time = time; - c->window = tp->th_win; - c->seq = tp->th_seq; - c->ack = tp->th_ack; - c->window_scale = 0; - c->SYN = (tp->th_flags & TH_SYN) != 0; - c->FIN = (tp->th_flags & TH_FIN) != 0; - c->RST = (tp->th_flags & TH_RST) != 0; - c->ACK = (tp->th_flags & TH_ACK) != 0; - c->uid = calculate_unique_id(); - c->num_bytes_ip = ip->TotalLen(); - c->num_pkts = 1; - c->invalid = 0; - - if ( TCP_Analyzer::ParseTCPOptions(tp, parse_tcp_options, 0, 0, c) < 0 ) - sessions->Weird("corrupt_tcp_options", ip); - } - -// Fakes an empty TCP packet based on the information in PendingConn. -const IP_Hdr* ConnCompressor::PendingConnToPacket(const PendingConn* c) - { - static ip* ip = 0; - static tcphdr* tp = 0; - static IP_Hdr* ip_hdr = 0; - - if ( ! ip ) - { // Initialize. ### Note, only handles IPv4 for now. - int packet_length = sizeof(*ip) + sizeof(*tp); - ip = (struct ip*) new char[packet_length]; - tp = (struct tcphdr*) (((char*) ip) + sizeof(*ip)); - ip_hdr = new IP_Hdr(ip); - - // Constant fields. - ip->ip_v = 4; - ip->ip_hl = sizeof(*ip) / 4; // no options - ip->ip_tos = 0; - ip->ip_len = htons(packet_length); - ip->ip_id = 0; - ip->ip_off = 0; - ip->ip_ttl = 255; - ip->ip_p = IPPROTO_TCP; - ip->ip_sum = 0; // is not going to be checked - - tp->th_off = sizeof(*tp) / 4; // no options for now - tp->th_urp = 0; - } - - // Note, do *not* use copy_addr() here. This is because we're - // copying to an IPv4 header, which has room for exactly and - // only an IPv4 address. -#ifdef BROv6 - if ( ! is_v4_addr(c->key.ip1) || ! 
is_v4_addr(c->key.ip2) ) - reporter->InternalError("IPv6 snuck into connection compressor"); -#endif - *(uint32*) &ip->ip_src = - to_v4_addr(c->ip1_is_src ? c->key.ip1 : c->key.ip2); - *(uint32*) &ip->ip_dst = - to_v4_addr(c->ip1_is_src ? c->key.ip2 : c->key.ip1); - - if ( c->ip1_is_src ) - { - tp->th_sport = c->key.port1; - tp->th_dport = c->key.port2; - } - else - { - tp->th_sport = c->key.port2; - tp->th_dport = c->key.port1; - } - - tp->th_win = c->window; - tp->th_seq = c->seq; - tp->th_ack = c->ack; - tp->th_flags = MakeFlags(c); - tp->th_sum = 0; - tp->th_sum = 0xffff - tcp_checksum(ip, tp, 0); - - // FIXME: Add TCP options. - return ip_hdr; - } - -uint8 ConnCompressor::MakeFlags(const PendingConn* c) const - { - uint8 tcp_flags = 0; - if ( c->SYN ) - tcp_flags |= TH_SYN; - if ( c->FIN ) - tcp_flags |= TH_FIN; - if ( c->RST ) - tcp_flags |= TH_RST; - if ( c->ACK ) - tcp_flags |= TH_ACK; - - return tcp_flags; - } - -ConnCompressor::PendingConn* ConnCompressor::MakeNewState(double t) - { - // See if there is enough space in the current block. - if ( last_block && - int(sizeof(PendingConn)) <= BLOCK_SIZE - last_block->bytes_used ) - { - PendingConn* c = (PendingConn*) &last_block->data[last_block->bytes_used]; - last_block->bytes_used += sizeof(PendingConn); - c->is_pending = true; - return c; - } - - // Get new block. - Block* b = new Block; - b->time = t; - b->bytes_used = sizeof(PendingConn); - b->next = 0; - b->prev = last_block; - - if ( last_block ) - last_block->next = b; - else - first_block = b; - - last_block = b; - - sizes.memory += padded_sizeof(*b); - PendingConn* c = (PendingConn*) &b->data; - c->is_pending = true; - return c; - } - -void ConnCompressor::DoExpire(double t) - { - while ( first_block ) - { - Block* b = first_block; - - unsigned char* p = - first_non_expired ? first_non_expired : b->data; - - while ( p < b->data + b->bytes_used ) - { - Unref(conn_val); - conn_val = 0; - - PendingConn* c = (PendingConn*) p; - if ( t && (c->time + tcp_SYN_timeout > t) ) - { - // All following entries are still - // recent enough. - first_non_expired = p; - return; - } - - if ( ! c->invalid ) - { - // Expired. - DBG_LOG(DBG_COMPRESSOR, "%s expire", fmt_conn_id(c)); - - HashKey key(&c->key, sizeof(c->key), c->hash, true); - - ConnData* cd = conns.Lookup(&key); - if ( cd && ! IsConnPtr(cd) ) - conns.Remove(&c->key, sizeof(c->key), - c->hash, true); - - int orig_state = TCP_ENDPOINT_INACTIVE; - - if ( c->FIN ) - orig_state = TCP_ENDPOINT_CLOSED; - if ( c->RST ) - orig_state = TCP_ENDPOINT_RESET; - if ( c->SYN ) - orig_state = TCP_ENDPOINT_SYN_SENT; - - // We're not able to get the correct size - // here (with "correct" meaning value that - // standard connection processing reports). - // We could if would also store last_seq, but - // doesn't seem worth it. - - Event(c, 0, connection_attempt, - orig_state, 0, TCP_ENDPOINT_INACTIVE); - Event(c, 0, connection_state_remove, - orig_state, 0, TCP_ENDPOINT_INACTIVE); - - c->invalid = 1; - --sizes.pending_valid; - } - - p += sizeof(PendingConn); - --sizes.pending_in_mem; - } - - // Full block expired, so delete it. - first_block = b->next; - - if ( b->next ) - b->next->prev = 0; - else - last_block = 0; - - delete b; - - first_non_expired = 0; - sizes.memory -= padded_sizeof(*b); - } - } - -void ConnCompressor::Event(const PendingConn* pending, double t, - const EventHandlerPtr& event, int orig_state, - int orig_size, int resp_state, Val* arg) - { - if ( ! conn_val ) - { - if ( ! 
event ) - return; - - // We only raise events if NewConn() would have actually - // instantiated the Connection. - bool flip_roles; - if ( ! sessions->WantConnection(ntohs(SrcPort(pending)), - ntohs(DstPort(pending)), - TRANSPORT_TCP, - MakeFlags(pending), - flip_roles) ) - return; - - conn_val = new RecordVal(connection_type); - RecordVal* id_val = new RecordVal(conn_id); - RecordVal* orig_endp = new RecordVal(endpoint); - RecordVal* resp_endp = new RecordVal(endpoint); - - if ( orig_state == TCP_ENDPOINT_INACTIVE ) - { - if ( pending->SYN ) - orig_state = pending->ACK ? - TCP_ENDPOINT_SYN_ACK_SENT : - TCP_ENDPOINT_SYN_SENT; - else - orig_state = TCP_ENDPOINT_PARTIAL; - } - - int tcp_state = TCP_ENDPOINT_INACTIVE; - - if ( ! flip_roles ) - { - id_val->Assign(0, new AddrVal(SrcAddr(pending))); - id_val->Assign(1, new PortVal(ntohs(SrcPort(pending)), - TRANSPORT_TCP)); - id_val->Assign(2, new AddrVal(DstAddr(pending))); - id_val->Assign(3, new PortVal(ntohs(DstPort(pending)), - TRANSPORT_TCP)); - orig_endp->Assign(0, new Val(orig_size, TYPE_COUNT)); - orig_endp->Assign(1, new Val(orig_state, TYPE_COUNT)); - - if ( ConnSize_Analyzer::Available() ) - { - // Fill in optional fields if ConnSize_Analyzer is on. - orig_endp->Assign(2, new Val(pending->num_pkts, TYPE_COUNT)); - orig_endp->Assign(3, new Val(pending->num_bytes_ip, TYPE_COUNT)); - } - - resp_endp->Assign(0, new Val(0, TYPE_COUNT)); - resp_endp->Assign(1, new Val(resp_state, TYPE_COUNT)); - resp_endp->Assign(2, new Val(0, TYPE_COUNT)); - resp_endp->Assign(3, new Val(0, TYPE_COUNT)); - } - else - { - id_val->Assign(0, new AddrVal(DstAddr(pending))); - id_val->Assign(1, new PortVal(ntohs(DstPort(pending)), - TRANSPORT_TCP)); - id_val->Assign(2, new AddrVal(SrcAddr(pending))); - id_val->Assign(3, new PortVal(ntohs(SrcPort(pending)), - TRANSPORT_TCP)); - - orig_endp->Assign(0, new Val(0, TYPE_COUNT)); - orig_endp->Assign(1, new Val(resp_state, TYPE_COUNT)); - orig_endp->Assign(2, new Val(0, TYPE_COUNT)); - orig_endp->Assign(3, new Val(0, TYPE_COUNT)); - - resp_endp->Assign(0, new Val(orig_size, TYPE_COUNT)); - resp_endp->Assign(1, new Val(orig_state, TYPE_COUNT)); - - if ( ConnSize_Analyzer::Available() ) - { - // Fill in optional fields if ConnSize_Analyzer is on - resp_endp->Assign(2, new Val(pending->num_pkts, TYPE_COUNT)); - resp_endp->Assign(3, new Val(pending->num_bytes_ip, TYPE_COUNT)); - } - - DBG_LOG(DBG_COMPRESSOR, "%s swapped direction", fmt_conn_id(pending)); - } - - conn_val->Assign(0, id_val); - conn_val->Assign(1, orig_endp); - conn_val->Assign(2, resp_endp); - conn_val->Assign(3, new Val(pending->time, TYPE_TIME)); - conn_val->Assign(4, new Val(t > 0 ? t - pending->time : 0, - TYPE_INTERVAL)); // duration - conn_val->Assign(5, new TableVal(string_set)); // service - conn_val->Assign(6, new StringVal("cc=1")); // addl - conn_val->Assign(7, new Val(0, TYPE_COUNT)); // hot - conn_val->Assign(8, new StringVal("")); // history - - char tmp[20]; // uid. - conn_val->Assign(9, new StringVal(uitoa_n(pending->uid, tmp, sizeof(tmp), 62))); - - conn_val->SetOrigin(0); - } - - if ( event == conn_weird ) - { - // Special case to go through the logger. 
- const char* msg = arg->AsString()->CheckString(); - reporter->Weird(conn_val->Ref(), msg); - return; - } - - val_list* vl = new val_list; - if ( arg ) - vl->append(arg); - vl->append(conn_val->Ref()); - - mgr.QueueEvent(event, vl, SOURCE_LOCAL); - } - -void ConnCompressor::Drain() - { - IterCookie* cookie = conns.InitForIteration(); - ConnData* c; - - DoExpire(0); - - while ( (c = conns.NextEntry(cookie)) ) - { - Unref(conn_val); - conn_val = 0; - - if ( IsConnPtr(c) ) - { - Connection* tc = MakeConnPtr(c); - tc->Done(); - tc->Event(connection_state_remove, 0); - Unref(tc); - --sizes.connections; - } - - else - { - PendingConn* pc = MakePendingConnPtr(c); - if ( ! pc->invalid ) - { - // Same discussion for size here than - // in DoExpire(). - Event(pc, 0, connection_attempt, - TCP_ENDPOINT_INACTIVE, 0, - TCP_ENDPOINT_INACTIVE); - Event(pc, 0, connection_state_remove, - TCP_ENDPOINT_INACTIVE, 0, - TCP_ENDPOINT_INACTIVE); - - --sizes.pending_valid; - pc->invalid = 1; - } - } - } - } - -void ConnCompressor::Invalidate(HashKey* k) - { - ConnData* c = (ConnData*) conns.Lookup(k); - - assert(c && ! IsConnPtr(c)); - PendingConn* pc = MakePendingConnPtr(c); - - DBG_LOG(DBG_COMPRESSOR, "%s invalidate", fmt_conn_id(pc)); - - if ( ! pc->invalid ) - { - conns.Remove(&pc->key, sizeof(pc->key), pc->hash, true); - pc->invalid = 1; - --sizes.pending_valid; - } - } - -Connection* ConnCompressor::Insert(Connection* newconn) - { - HashKey* key = newconn->Key(); - ConnData* c = conns.Lookup(key); - Connection* old = 0; - - // Do we already have a Connection object? - if ( c ) - { - if ( IsConnPtr(c) ) - old = MakeConnPtr(c); - Remove(key); - } - - conns.Insert(key, MakeMapPtr(newconn)); - return old; - } - -bool ConnCompressor::Remove(HashKey* k) - { - ConnData* c = (ConnData*) conns.Lookup(k); - if ( ! c ) - return false; - - if ( IsConnPtr(c) ) - { - DBG_LOG(DBG_COMPRESSOR, "%s remove", fmt_conn_id(MakeConnPtr(c))); - conns.Remove(k); - --sizes.connections; - } - else - { - PendingConn* pc = MakePendingConnPtr(c); - DBG_LOG(DBG_COMPRESSOR, "%s remove", fmt_conn_id(pc)); - - conns.Remove(&pc->key, sizeof(pc->key), pc->hash, true); - - if ( ! pc->invalid ) - { - pc->invalid = 1; - --sizes.pending_valid; - } - } - - return true; - } diff --git a/src/ConnCompressor.h b/src/ConnCompressor.h deleted file mode 100644 index 36959b615c..0000000000 --- a/src/ConnCompressor.h +++ /dev/null @@ -1,235 +0,0 @@ -// The ConnCompressor keeps track of the first packet seen for a conn_id using -// only a minimal amount of memory. This helps us to avoid instantiating -// full Connection objects for never-established sessions. -// -// TCP only. - -#ifndef CONNCOMPRESSOR_H -#define CONNCOMPRESSOR_H - -#include "Conn.h" -#include "Dict.h" -#include "NetVar.h" -#include "TCP.h" - -class ConnCompressor { -public: - ConnCompressor(); - ~ConnCompressor(); - - // Handle next packet. Returns 0 if packet in handled internally. - // Takes ownership of key. - Connection* NextPacket(double t, HashKey* k, const IP_Hdr* ip_hdr, - const struct pcap_pkthdr* hdr, const u_char* const pkt); - - // Look up a connection. Returns non-nil for connections for - // which a Connection object has already been instantiated. - Connection* Lookup(HashKey* k) - { - ConnData* c = conns.Lookup(k); - return c && IsConnPtr(c) ? MakeConnPtr(c) : 0; - } - - // Inserts connection into compressor. If another entry with this key - // already exists, it's replaced. If that was a full connection, it is - // also returned. 
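
// The ConnCompressor.h header comment above describes the core idea: keep
// only a minimal per-flow record until the responder answers, and only then
// build the expensive full connection object. A standalone sketch of that
// pattern follows; the names (FlowKey, TinyPending, FullConn, Tracker) are
// hypothetical, and key canonicalization/cleanup are omitted -- this is an
// illustration of the scheme, not Bro code.

#include <cstdint>
#include <map>
#include <memory>
#include <tuple>

struct FlowKey
    {
    uint32_t src, dst;
    uint16_t sp, dp;
    bool operator<(const FlowKey& o) const
        { return std::tie(src, dst, sp, dp) < std::tie(o.src, o.dst, o.sp, o.dp); }
    };

struct TinyPending { double first_seen; uint8_t tcp_flags; };   // a few bytes per flow
struct FullConn    { FlowKey id; double start; /* analyzers, state, ... */ };

class Tracker {
public:
    // First packet from the originator: remember it cheaply.
    void OriginatorPacket(const FlowKey& k, double t, uint8_t flags)
        { pending[k] = TinyPending{t, flags}; }

    // First packet from the responder: now pay for a full object.
    FullConn* ResponderPacket(const FlowKey& k)
        {
        auto it = pending.find(k);
        if ( it == pending.end() )
            return nullptr;   // nothing pending; real code handles this case

        full[k] = std::unique_ptr<FullConn>(new FullConn{k, it->second.first_seen});
        pending.erase(it);
        return full[k].get();
        }

private:
    std::map<FlowKey, TinyPending> pending;              // never-answered flows stay small
    std::map<FlowKey, std::unique_ptr<FullConn>> full;   // instantiated connections
};
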
- Connection* Insert(Connection* c); - - // Remove all state belonging to the given connection. Returns - // true if the connection was found in the compressor's table, - // false if not. - bool Remove(HashKey* k); - - // Flush state. - void Drain(); - - struct Sizes { - // Current number of already fully instantiated connections. - unsigned int connections; - - // Total number of fully instantiated connections. - unsigned int connections_total; - - // Current number of seen but non-yet instantiated connections. - unsigned int pending_valid; - - // Total number of seen but non-yet instantiated connections. - unsigned int pending_total; - - // Total number of all entries in pending list (some a which - // may already been invalid, but not yet removed from memory). - unsigned int pending_in_mem; - - // Total number of hash table entires - // (should equal connections + pending_valid) - unsigned int hash_table_size; - - // Total memory usage; - unsigned int memory; - }; - - const Sizes& Size() - { sizes.hash_table_size = conns.Length(); return sizes; } - - unsigned int MemoryAllocation() const { return sizes.memory; } - - // As long as we have only seen packets from one side, we just - // store a PendingConn. - struct PendingConn { - // True if the block is indeed a PendingConn (see below). - unsigned int is_pending:1; - - // Whether roles in key are flipped. - unsigned int ip1_is_src:1; - - unsigned int invalid:1; // deleted - int window_scale:4; - unsigned int SYN:1; - unsigned int FIN:1; - unsigned int RST:1; - unsigned int ACK:1; - - double time; - ConnID::Key key; - uint32 seq; - uint32 ack; - hash_t hash; - uint16 window; - uint64 uid; - - // The following are set if use_conn_size_analyzer is T. - uint16 num_pkts; - uint16 num_bytes_ip; - }; - -private: - // Helpers to extract addrs/ports from PendingConn. - - const uint32* SrcAddr(const PendingConn* c) - { return c->ip1_is_src ? c->key.ip1 : c->key.ip2; } - const uint32* DstAddr(const PendingConn* c) - { return c->ip1_is_src ? c->key.ip2 : c->key.ip1; } - - uint16 SrcPort(const PendingConn* c) - { return c->ip1_is_src ? c->key.port1 : c->key.port2; } - uint16 DstPort(const PendingConn* c) - { return c->ip1_is_src ? c->key.port2 : c->key.port1; } - - - // Called for the first packet in a connection. - Connection* FirstFromOrig(double t, HashKey* key, - const IP_Hdr* ip, const tcphdr* tp); - - // Called for more packets from the orginator w/o seeing a response. - Connection* NextFromOrig(PendingConn* pending, double t, HashKey* key, - const IP_Hdr* ip, const tcphdr* tp); - - // Called for the first response packet. Instantiates a Connection. - Connection* Response(PendingConn* pending, double t, HashKey* key, - const IP_Hdr* ip, const tcphdr* tp); - - // Instantiates a full TCP connection (invalidates pending connection). - Connection* Instantiate(HashKey* key, PendingConn* pending); - - // Same but based on packet. - Connection* Instantiate(double t, HashKey* key, const IP_Hdr* ip); - - // Fills the attributes of a PendingConn based on the given arguments. - void PktHdrToPendingConn(double time, const HashKey* key, - const IP_Hdr* ip, const struct tcphdr* tp, PendingConn* c); - - // Fakes a TCP packet based on the available information. - const IP_Hdr* PendingConnToPacket(const PendingConn* c); - - // Construct a TCP-flags byte. - uint8 MakeFlags(const PendingConn* c) const; - - // Allocate room for a new (Ext)PendingConn. - PendingConn* MakeNewState(double t); - - // Expire PendingConns. 
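
// A standalone sketch of the block allocation that MakeNewState(), declared
// just above and implemented in the ConnCompressor.cc hunk earlier in this
// diff, performs: records are carved out of fixed-size blocks in arrival
// order, so DoExpire() can free whole blocks front-to-back once everything
// in them has timed out. Names are hypothetical; record invalidation is
// glossed over.

#include <cstddef>

template <typename T, std::size_t BlockSize = 16 * 1024>
class BumpBlocks {
public:
    ~BumpBlocks()
        {
        while ( first )
            {
            Block* next = first->next;
            delete first;
            first = next;
            }
        }

    // Storage for one T from the newest block; start a new block when full.
    T* Allocate()
        {
        if ( ! last || last->used + sizeof(T) > BlockSize )
            {
            Block* b = new Block;
            b->next = nullptr;
            b->used = 0;
            if ( last )
                last->next = b;
            else
                first = b;
            last = b;
            }

        T* p = reinterpret_cast<T*>(last->data + last->used);
        last->used += sizeof(T);
        return p;
        }

    // Drop the oldest block wholesale once all records in it have expired.
    void ReleaseOldest()
        {
        if ( ! first )
            return;
        Block* next = first->next;
        delete first;
        first = next;
        if ( ! first )
            last = nullptr;
        }

private:
    struct Block
        {
        Block* next;
        std::size_t used;
        alignas(alignof(std::max_align_t)) unsigned char data[BlockSize];
        };

    Block* first = nullptr;
    Block* last = nullptr;
};
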
- void DoExpire(double t); - - // Remove all state belonging to the given connection. - void Invalidate(HashKey* k); - - // Sends the given connection_* event. If orig_state is - // TCP_ENDPOINT__INACTIVE, tries to guess a better one based - // on pending. If arg in non-nil, it will be used as the - // *first* argument of the event call (this is for conn_weird()). - void Event(const PendingConn* pending, double t, - const EventHandlerPtr& event, int orig_state, - int orig_size, int resp_state, Val* arg = 0); - - void Weird(const PendingConn* pending, double t, const char* msg) - { - // This will actually go through the Reporter; Event() takes - // care of that. - Event(pending, t, conn_weird, TCP_ENDPOINT_INACTIVE, 0, - TCP_ENDPOINT_INACTIVE, new StringVal(msg)); - } - - static const int BLOCK_SIZE = 16 * 1024; - - // The memory managment for PendConns. - struct Block { - double time; - Block* prev; - Block* next; - int bytes_used; - unsigned char data[BLOCK_SIZE]; - }; - - // In the connection hash table, we store pointers to both PendingConns - // and Connections. Thus, we need a way to differentiate between - // these two types. To avoid an additional indirection, we use a little - // hack: a pointer retrieved from the table is interpreted as a - // PendingConn first. However, if is_pending is false, it's in fact a - // Connection which starts at offset 4. The methods below help to - // implement this scheme transparently. An "operator new" in - // Connection takes care of building Connection's accordingly. - typedef PendingConn ConnData; - declare(PDict, ConnData); - typedef PDict(ConnData) ConnMap; - ConnMap conns; - - static ConnData* MakeMapPtr(PendingConn* c) - { assert(c->is_pending); return c; } - - static ConnData* MakeMapPtr(Connection* c) - { - ConnData* p = (ConnData*) (((char*) c) - 4); - assert(!p->is_pending); - return p; - } - - static PendingConn* MakePendingConnPtr(ConnData* c) - { assert(c->is_pending); return c; } - - static Connection* MakeConnPtr(ConnData* c) - { - assert(!c->is_pending); - return (Connection*) (((char*) c) + 4); - } - - static bool IsConnPtr(ConnData* c) - { return ! c->is_pending; } - - // New blocks are inserted at the end. - Block* first_block; - Block* last_block; - - // If we have already expired some entries in a block, - // this points to the first non-expired. - unsigned char* first_non_expired; - - // Last "connection" that we have build. - RecordVal* conn_val; - - // Statistics. - Sizes sizes; - }; - -extern ConnCompressor* conn_compressor; - -#endif diff --git a/src/DCE_RPC.cc b/src/DCE_RPC.cc index 1d9acaf1fa..21cb3be9a0 100644 --- a/src/DCE_RPC.cc +++ b/src/DCE_RPC.cc @@ -137,12 +137,12 @@ static bool is_mapped_dce_rpc_endpoint(const dce_rpc_endpoint_addr& addr) bool is_mapped_dce_rpc_endpoint(const ConnID* id, TransportProto proto) { -#ifdef BROv6 - if ( ! is_v4_addr(id->dst_addr) ) + if ( id->dst_addr.GetFamily() == IPv6 ) + // TODO: Does the protocol support v6 addresses? #773 return false; -#endif + dce_rpc_endpoint_addr addr; - addr.addr = ntohl(to_v4_addr(id->dst_addr)); + addr.addr = id->dst_addr; addr.port = ntohs(id->dst_port); addr.proto = proto; @@ -160,12 +160,7 @@ static void add_dce_rpc_endpoint(const dce_rpc_endpoint_addr& addr, // of the dce_rpc_endpoints table. // FIXME: Don't hard-code the timeout. - // Convert the address to a v4/v6 address (depending on how - // Bro was configured). This is all based on the address currently - // being a 32-bit host-order v4 address. 
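
// The ConnCompressor.h block above describes storing two record kinds behind
// one pointer in the conns table and probing a flag to tell them apart. The
// standalone sketch below shows the same tag-field trick with hypothetical
// names, using a shared header struct instead of the original's fixed
// offset-of-4 layout assumption.

#include <cassert>

struct RecHeader { bool is_small; };

struct SmallRec : RecHeader    // cheap pending-style record
    {
    SmallRec() { is_small = true; }
    int tcp_flags = 0;
    };

struct BigRec : RecHeader      // full connection-style record
    {
    BigRec() { is_small = false; }
    double start = 0;
    };

inline bool IsBig(RecHeader* p)        { return ! p->is_small; }
inline BigRec* AsBig(RecHeader* p)     { assert(IsBig(p)); return static_cast<BigRec*>(p); }
inline SmallRec* AsSmall(RecHeader* p) { assert(! IsBig(p)); return static_cast<SmallRec*>(p); }
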
- AddrVal a(htonl(addr.addr)); - const addr_type at = a.AsAddr(); - dpm->ExpectConnection(0, at, addr.port, addr.proto, + dpm->ExpectConnection(IPAddr(), addr.addr, addr.port, addr.proto, AnalyzerTag::DCE_RPC, 5 * 60, 0); } @@ -418,8 +413,8 @@ void DCE_RPC_Session::DeliverEpmapperMapResponse( break; case binpac::DCE_RPC_Simple::EPM_PROTOCOL_IP: - mapped.addr.addr = - floor->rhs()->data()->ip(); + uint32 hostip = floor->rhs()->data()->ip(); + mapped.addr.addr = IPAddr(IPv4, &hostip, IPAddr::Host); break; } } @@ -433,7 +428,7 @@ void DCE_RPC_Session::DeliverEpmapperMapResponse( vl->append(analyzer->BuildConnVal()); vl->append(new StringVal(mapped.uuid.to_string())); vl->append(new PortVal(mapped.addr.port, mapped.addr.proto)); - vl->append(new AddrVal(htonl(mapped.addr.addr))); + vl->append(new AddrVal(mapped.addr.addr)); analyzer->ConnectionEvent(epm_map_response, vl); } diff --git a/src/DCE_RPC.h b/src/DCE_RPC.h index 63237a151b..acdbf1637d 100644 --- a/src/DCE_RPC.h +++ b/src/DCE_RPC.h @@ -8,6 +8,7 @@ #include "NetVar.h" #include "TCP.h" +#include "IPAddr.h" #include "dce_rpc_simple_pac.h" @@ -34,19 +35,19 @@ const char* uuid_to_string(const u_char* uuid_data); struct dce_rpc_endpoint_addr { // All fields are in host byteorder. - uint32 addr; + IPAddr addr; u_short port; TransportProto proto; dce_rpc_endpoint_addr() { - addr = 0; + addr = IPAddr(); port = 0; proto = TRANSPORT_UNKNOWN; } bool is_valid_addr() const - { return addr != 0 && port != 0 && proto != TRANSPORT_UNKNOWN; } + { return addr != IPAddr() && port != 0 && proto != TRANSPORT_UNKNOWN; } bool operator<(dce_rpc_endpoint_addr const &e) const { @@ -64,7 +65,7 @@ struct dce_rpc_endpoint_addr { { static char buf[128]; snprintf(buf, sizeof(buf), "%s/%d/%s", - dotted_addr(htonl(addr)), port, + addr.AsString().c_str(), port, proto == TRANSPORT_TCP ? "tcp" : (proto == TRANSPORT_UDP ? "udp" : "?")); diff --git a/src/DFA.cc b/src/DFA.cc index e58ea260e5..06ccfd9342 100644 --- a/src/DFA.cc +++ b/src/DFA.cc @@ -2,9 +2,10 @@ #include "config.h" +#include + #include "EquivClass.h" #include "DFA.h" -#include "md5.h" int dfa_state_cache_size = 10000; @@ -312,8 +313,8 @@ DFA_State* DFA_State_Cache::Lookup(const NFA_state_list& nfas, { // We assume that state ID's don't exceed 10 digits, plus // we allow one more character for the delimiter. - md5_byte_t id_tag[nfas.length() * 11 + 1]; - md5_byte_t* p = id_tag; + u_char id_tag[nfas.length() * 11 + 1]; + u_char* p = id_tag; for ( int i = 0; i < nfas.length(); ++i ) { @@ -335,12 +336,8 @@ DFA_State* DFA_State_Cache::Lookup(const NFA_state_list& nfas, // We use the short MD5 instead of the full string for the // HashKey because the data is copied into the key. 
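
// A standalone sketch of the one-shot digest call that replaces the
// md5_init()/md5_append()/md5_finish() sequence below. It assumes the MD5()
// convenience function from OpenSSL (<openssl/md5.h>), which matches the new
// call in Lookup(); the id_tag contents here are made up.

#include <openssl/md5.h>
#include <cstdio>
#include <cstring>

int main()
    {
    const unsigned char id_tag[] = "17,42,108,";   // e.g. concatenated NFA state IDs
    unsigned char digest[MD5_DIGEST_LENGTH];       // 16 bytes

    // One call hashes the whole buffer.
    MD5(id_tag, strlen((const char*) id_tag), digest);

    for ( int i = 0; i < MD5_DIGEST_LENGTH; ++i )
        printf("%02x", digest[i]);
    printf("\n");

    return 0;
    }
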
- md5_state_t state; - md5_byte_t digest[16]; - - md5_init(&state); - md5_append(&state, id_tag, p - id_tag); - md5_finish(&state, digest); + u_char digest[16]; + MD5(id_tag, p - id_tag, digest); *hash = new HashKey(&digest, sizeof(digest)); CacheEntry* e = states.Lookup(*hash); diff --git a/src/DNS-binpac.cc b/src/DNS-binpac.cc index eb95ac2e1c..999f6015c0 100644 --- a/src/DNS-binpac.cc +++ b/src/DNS-binpac.cc @@ -63,10 +63,10 @@ void DNS_TCP_Analyzer_binpac::Done() interp->FlowEOF(false); } -void DNS_TCP_Analyzer_binpac::EndpointEOF(TCP_Reassembler* endp) +void DNS_TCP_Analyzer_binpac::EndpointEOF(bool is_orig) { - TCP_ApplicationAnalyzer::EndpointEOF(endp); - interp->FlowEOF(endp->IsOrig()); + TCP_ApplicationAnalyzer::EndpointEOF(is_orig); + interp->FlowEOF(is_orig); } void DNS_TCP_Analyzer_binpac::DeliverStream(int len, const u_char* data, diff --git a/src/DNS-binpac.h b/src/DNS-binpac.h index 9e8cb16f69..0bbacf9192 100644 --- a/src/DNS-binpac.h +++ b/src/DNS-binpac.h @@ -45,7 +45,7 @@ public: virtual void Done(); virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void Undelivered(int seq, int len, bool orig); - virtual void EndpointEOF(TCP_Reassembler* endp); + virtual void EndpointEOF(bool is_orig); static Analyzer* InstantiateAnalyzer(Connection* conn) { return new DNS_TCP_Analyzer_binpac(conn); } diff --git a/src/DNS.cc b/src/DNS.cc index c93ea6973d..a3b0b62ef3 100644 --- a/src/DNS.cc +++ b/src/DNS.cc @@ -758,62 +758,37 @@ int DNS_Interpreter::ParseRR_A(DNS_MsgInfo* msg, int DNS_Interpreter::ParseRR_AAAA(DNS_MsgInfo* msg, const u_char*& data, int& len, int rdlength) { - // We need to parse an IPv6 address, high-order byte first. - // ### Currently, we fake an A reply rather than an AAAA reply, - // since for the latter we won't be able to express the full - // address (unless Bro was compiled for IPv6 addresses). We do - // this fake by using just the bottom 4 bytes of the IPv6 address. uint32 addr[4]; - int i; - for ( i = 0; i < 4; ++i ) + for ( int i = 0; i < 4; ++i ) { - addr[i] = ntohl(ExtractLong(data, len)); + addr[i] = htonl(ExtractLong(data, len)); if ( len < 0 ) { - analyzer->Weird("DNS_AAAA_neg_length"); + if ( msg->atype == TYPE_AAAA ) + analyzer->Weird("DNS_AAAA_neg_length"); + else + analyzer->Weird("DNS_A6_neg_length"); return 0; } } - // Currently, dns_AAAA_reply is treated like dns_A_reply, since - // IPv6 addresses are not generally processed. This needs to be - // fixed. ### - if ( dns_A_reply && ! msg->skip_event ) + EventHandlerPtr event; + if ( msg->atype == TYPE_AAAA ) + event = dns_AAAA_reply; + else + event = dns_A6_reply; + if ( event && ! msg->skip_event ) { val_list* vl = new val_list; vl->append(analyzer->BuildConnVal()); vl->append(msg->BuildHdrVal()); vl->append(msg->BuildAnswerVal()); - vl->append(new AddrVal(htonl(addr[3]))); - - analyzer->ConnectionEvent(dns_A_reply, vl); - } - -#if 0 -alternative AAAA code from Chris - if ( dns_AAAA_reply && ! 
msg->skip_event ) - { - val_list* vl = new val_list; - - vl->append(analyzer->BuildConnVal()); - vl->append(msg->BuildHdrVal()); - vl->append(msg->BuildAnswerVal()); -#ifdef BROv6 - // FIXME: might need to htonl the addr first vl->append(new AddrVal(addr)); -#else - vl->append(new AddrVal((uint32)0x0000)); -#endif - char addrstr[INET6_ADDRSTRLEN]; - inet_ntop(AF_INET6, addr, addrstr, INET6_ADDRSTRLEN); - vl->append(new StringVal(addrstr)); - - analyzer->ConnectionEvent(dns_AAAA_reply, vl); + analyzer->ConnectionEvent(event, vl); } -#endif return 1; } diff --git a/src/DNS_Mgr.cc b/src/DNS_Mgr.cc index 736c262222..6b0f18f459 100644 --- a/src/DNS_Mgr.cc +++ b/src/DNS_Mgr.cc @@ -46,13 +46,13 @@ extern int select(int, fd_set *, fd_set *, fd_set *, struct timeval *); class DNS_Mgr_Request { public: - DNS_Mgr_Request(const char* h) { host = copy_string(h); addr = 0; } - DNS_Mgr_Request(uint32 a) { addr = a; host = 0; } + DNS_Mgr_Request(const char* h, int af) { host = copy_string(h); fam = af; } + DNS_Mgr_Request(const IPAddr& a) { addr = a; host = 0; fam = 0; } ~DNS_Mgr_Request() { delete [] host; } // Returns nil if this was an address request. const char* ReqHost() const { return host; } - uint32 ReqAddr() const { return addr; } + const IPAddr& ReqAddr() const { return addr; } int MakeRequest(nb_dns_info* nb_dns); int RequestPending() const { return request_pending; } @@ -61,8 +61,8 @@ public: protected: char* host; // if non-nil, this is a host request - uint32 addr; - uint32 ttl; + int fam; // address family query type for host requests + IPAddr addr; int request_pending; }; @@ -75,15 +75,20 @@ int DNS_Mgr_Request::MakeRequest(nb_dns_info* nb_dns) char err[NB_DNS_ERRSIZE]; if ( host ) - return nb_dns_host_request(nb_dns, host, (void*) this, err) >= 0; + return nb_dns_host_request2(nb_dns, host, fam, (void*) this, err) >= 0; else - return nb_dns_addr_request(nb_dns, addr, (void*) this, err) >= 0; + { + const uint32* bytes; + int len = addr.GetBytes(&bytes); + return nb_dns_addr_request2(nb_dns, (char*) bytes, + len == 1 ? AF_INET : AF_INET6, (void*) this, err) >= 0; + } } class DNS_Mapping { public: DNS_Mapping(const char* host, struct hostent* h, uint32 ttl); - DNS_Mapping(uint32 addr, struct hostent* h, uint32 ttl); + DNS_Mapping(const IPAddr& addr, struct hostent* h, uint32 ttl); DNS_Mapping(FILE* f); int NoMapping() const { return no_mapping; } @@ -93,9 +98,11 @@ public: // Returns nil if this was an address request. const char* ReqHost() const { return req_host; } - uint32 ReqAddr() const { return req_addr; } - const char* ReqStr() const - { return req_host ? req_host : dotted_addr(ReqAddr()); } + IPAddr ReqAddr() const { return req_addr; } + string ReqStr() const + { + return req_host ? req_host : req_addr; + } ListVal* Addrs(); TableVal* AddrsSet(); // addresses returned as a set @@ -109,7 +116,14 @@ public: int Valid() const { return ! 
failed; } bool Expired() const - { return current_time() > (creation_time + req_ttl); } + { + if ( req_host && num_addrs == 0) + return false; // nothing to expire + + return current_time() > (creation_time + req_ttl); + } + + int Type() const { return map_type; } protected: friend class DNS_Mgr; @@ -121,7 +135,7 @@ protected: int init_failed; char* req_host; - uint32 req_addr; + IPAddr req_addr; uint32 req_ttl; int num_names; @@ -129,11 +143,12 @@ protected: StringVal* host_val; int num_addrs; - uint32* addrs; + IPAddr* addrs; ListVal* addrs_val; int failed; double creation_time; + int map_type; }; void DNS_Mgr_mapping_delete_func(void* v) @@ -154,14 +169,13 @@ DNS_Mapping::DNS_Mapping(const char* host, struct hostent* h, uint32 ttl) { Init(h); req_host = copy_string(host); - req_addr = 0; req_ttl = ttl; if ( names && ! names[0] ) names[0] = copy_string(host); } -DNS_Mapping::DNS_Mapping(uint32 addr, struct hostent* h, uint32 ttl) +DNS_Mapping::DNS_Mapping(const IPAddr& addr, struct hostent* h, uint32 ttl) { Init(h); req_addr = addr; @@ -175,7 +189,6 @@ DNS_Mapping::DNS_Mapping(FILE* f) init_failed = 1; req_host = 0; - req_addr = 0; char buf[512]; @@ -188,14 +201,15 @@ DNS_Mapping::DNS_Mapping(FILE* f) char req_buf[512+1], name_buf[512+1]; int is_req_host; - if ( sscanf(buf, "%lf %d %512s %d %512s %d", &creation_time, &is_req_host, - req_buf, &failed, name_buf, &num_addrs) != 6 ) + if ( sscanf(buf, "%lf %d %512s %d %512s %d %d %"PRIu32, &creation_time, + &is_req_host, req_buf, &failed, name_buf, &map_type, &num_addrs, + &req_ttl) != 8 ) return; if ( is_req_host ) req_host = copy_string(req_buf); else - req_addr = dotted_to_addr(req_buf); + req_addr = IPAddr(req_buf); num_names = 1; names = new char*[num_names]; @@ -203,7 +217,7 @@ DNS_Mapping::DNS_Mapping(FILE* f) if ( num_addrs > 0 ) { - addrs = new uint32[num_addrs]; + addrs = new IPAddr[num_addrs]; for ( int i = 0; i < num_addrs; ++i ) { @@ -217,7 +231,7 @@ DNS_Mapping::DNS_Mapping(FILE* f) if ( newline ) *newline = '\0'; - addrs[i] = dotted_to_addr(buf); + addrs[i] = IPAddr(buf); } } else @@ -280,14 +294,6 @@ StringVal* DNS_Mapping::Host() return host_val; } -// Converts an array of 4 bytes in network order to the corresponding -// 32-bit network long. -static uint32 raw_bytes_to_addr(const unsigned char b[4]) - { - uint32 l = (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]; - return uint32(htonl(l)); - } - void DNS_Mapping::Init(struct hostent* h) { no_mapping = 0; @@ -296,12 +302,13 @@ void DNS_Mapping::Init(struct hostent* h) host_val = 0; addrs_val = 0; - if ( ! h || h->h_addrtype != AF_INET || h->h_length != 4 ) + if ( ! h ) { Clear(); return; } + map_type = h->h_addrtype; num_names = 1; // for now, just use official name names = new char*[num_names]; names[0] = h->h_name ? 
copy_string(h->h_name) : 0; @@ -311,10 +318,14 @@ void DNS_Mapping::Init(struct hostent* h) if ( num_addrs > 0 ) { - addrs = new uint32[num_addrs]; + addrs = new IPAddr[num_addrs]; for ( int i = 0; i < num_addrs; ++i ) - addrs[i] = raw_bytes_to_addr( - (unsigned char*)h->h_addr_list[i]); + if ( h->h_addrtype == AF_INET ) + addrs[i] = IPAddr(IPv4, (uint32*)h->h_addr_list[i], + IPAddr::Network); + else if ( h->h_addrtype == AF_INET6 ) + addrs[i] = IPAddr(IPv6, (uint32*)h->h_addr_list[i], + IPAddr::Network); } else addrs = 0; @@ -330,18 +341,19 @@ void DNS_Mapping::Clear() host_val = 0; addrs_val = 0; no_mapping = 0; + map_type = 0; failed = 1; } void DNS_Mapping::Save(FILE* f) const { - fprintf(f, "%.0f %d %s %d %s %d\n", creation_time, req_host != 0, - req_host ? req_host : dotted_addr(req_addr), + fprintf(f, "%.0f %d %s %d %s %d %d %"PRIu32"\n", creation_time, req_host != 0, + req_host ? req_host : req_addr.AsString().c_str(), failed, (names && names[0]) ? names[0] : "*", - num_addrs); + map_type, num_addrs, req_ttl); for ( int i = 0; i < num_addrs; ++i ) - fprintf(f, "%s\n", dotted_addr(addrs[i])); + fprintf(f, "%s\n", addrs[i].AsString().c_str()); } @@ -351,9 +363,6 @@ DNS_Mgr::DNS_Mgr(DNS_MgrMode arg_mode) mode = arg_mode; - host_mappings.SetDeleteFunc(DNS_Mgr_mapping_delete_func); - addr_mappings.SetDeleteFunc(DNS_Mgr_mapping_delete_func); - char err[NB_DNS_ERRSIZE]; nb_dns = nb_dns_init(err); @@ -440,24 +449,34 @@ TableVal* DNS_Mgr::LookupHost(const char* name) if ( mode != DNS_PRIME ) { - DNS_Mapping* d = host_mappings.Lookup(name); + HostMap::iterator it = host_mappings.find(name); - if ( d ) + if ( it != host_mappings.end() ) { - if ( d->Valid() ) - return d->Addrs()->ConvertToSet(); - else + DNS_Mapping* d4 = it->second.first; + DNS_Mapping* d6 = it->second.second; + + if ( (d4 && d4->Failed()) || (d6 && d6->Failed()) ) { reporter->Warning("no such host: %s", name); return empty_addr_set(); } + else if ( d4 && d6 ) + { + TableVal* tv4 = d4->AddrsSet(); + TableVal* tv6 = d6->AddrsSet(); + tv4->AddTo(tv6, false); + Unref(tv4); + return tv6; + } } } // Not found, or priming. switch ( mode ) { case DNS_PRIME: - requests.append(new DNS_Mgr_Request(name)); + requests.append(new DNS_Mgr_Request(name, AF_INET)); + requests.append(new DNS_Mgr_Request(name, AF_INET6)); return empty_addr_set(); case DNS_FORCE: @@ -465,7 +484,8 @@ TableVal* DNS_Mgr::LookupHost(const char* name) return 0; case DNS_DEFAULT: - requests.append(new DNS_Mgr_Request(name)); + requests.append(new DNS_Mgr_Request(name, AF_INET)); + requests.append(new DNS_Mgr_Request(name, AF_INET6)); Resolve(); return LookupHost(name); @@ -475,24 +495,25 @@ TableVal* DNS_Mgr::LookupHost(const char* name) } } -Val* DNS_Mgr::LookupAddr(uint32 addr) +Val* DNS_Mgr::LookupAddr(const IPAddr& addr) { if ( ! 
did_init ) Init(); if ( mode != DNS_PRIME ) { - HashKey h(&addr, 1); - DNS_Mapping* d = addr_mappings.Lookup(&h); + AddrMap::iterator it = addr_mappings.find(addr); - if ( d ) + if ( it != addr_mappings.end() ) { + DNS_Mapping* d = it->second; if ( d->Valid() ) return d->Host(); else { - reporter->Warning("can't resolve IP address: %s", dotted_addr(addr)); - return new StringVal(dotted_addr(addr)); + string s(addr); + reporter->Warning("can't resolve IP address: %s", s.c_str()); + return new StringVal(s.c_str()); } } } @@ -505,7 +526,7 @@ Val* DNS_Mgr::LookupAddr(uint32 addr) case DNS_FORCE: reporter->FatalError("can't find DNS entry for %s in cache", - dotted_addr(addr)); + addr.AsString().c_str()); return 0; case DNS_DEFAULT: @@ -595,8 +616,6 @@ void DNS_Mgr::Resolve() } else --num_pending; - - delete dr; } } @@ -674,7 +693,7 @@ Val* DNS_Mgr::BuildMappingVal(DNS_Mapping* dm) void DNS_Mgr::AddResult(DNS_Mgr_Request* dr, struct nb_dns_result* r) { struct hostent* h = (r && r->host_errno == 0) ? r->hostent : 0; - u_int32_t ttl = r->ttl; + u_int32_t ttl = (r && r->host_errno == 0) ? r->ttl : 0; DNS_Mapping* new_dm; DNS_Mapping* prev_dm; @@ -683,28 +702,53 @@ void DNS_Mgr::AddResult(DNS_Mgr_Request* dr, struct nb_dns_result* r) if ( dr->ReqHost() ) { new_dm = new DNS_Mapping(dr->ReqHost(), h, ttl); - prev_dm = host_mappings.Insert(dr->ReqHost(), new_dm); + prev_dm = 0; + + HostMap::iterator it = host_mappings.find(dr->ReqHost()); + if ( it == host_mappings.end() ) + { + host_mappings[dr->ReqHost()].first = + new_dm->Type() == AF_INET ? new_dm : 0; + host_mappings[dr->ReqHost()].second = + new_dm->Type() == AF_INET ? 0 : new_dm; + } + + else + { + if ( new_dm->Type() == AF_INET ) + { + prev_dm = it->second.first; + it->second.first = new_dm; + } + else + { + prev_dm = it->second.second; + it->second.second = new_dm; + } + } if ( new_dm->Failed() && prev_dm && prev_dm->Valid() ) { // Put previous, valid entry back - CompareMappings // will generate a corresponding warning. - (void) host_mappings.Insert(dr->ReqHost(), prev_dm); + if ( prev_dm->Type() == AF_INET ) + host_mappings[dr->ReqHost()].first = prev_dm; + else + host_mappings[dr->ReqHost()].second = prev_dm; + ++keep_prev; } } else { new_dm = new DNS_Mapping(dr->ReqAddr(), h, ttl); - uint32 tmp_addr = dr->ReqAddr(); - HashKey k(&tmp_addr, 1); - prev_dm = addr_mappings.Insert(&k, new_dm); + AddrMap::iterator it = addr_mappings.find(dr->ReqAddr()); + prev_dm = (it == addr_mappings.end()) ? 0 : it->second; + addr_mappings[dr->ReqAddr()] = new_dm; if ( new_dm->Failed() && prev_dm && prev_dm->Valid() ) { - uint32 tmp_addr = dr->ReqAddr(); - HashKey k2(&tmp_addr, 1); - (void) addr_mappings.Insert(&k2, prev_dm); + addr_mappings[dr->ReqAddr()] = prev_dm; ++keep_prev; } } @@ -776,17 +820,13 @@ ListVal* DNS_Mgr::AddrListDelta(ListVal* al1, ListVal* al2) for ( int i = 0; i < al1->Length(); ++i ) { - addr_type al1_i = al1->Index(i)->AsAddr(); + const IPAddr& al1_i = al1->Index(i)->AsAddr(); int j; for ( j = 0; j < al2->Length(); ++j ) { - addr_type al2_j = al2->Index(j)->AsAddr(); -#ifdef BROv6 - if ( addr_eq(al1_i, al2_j) ) -#else + const IPAddr& al2_j = al2->Index(j)->AsAddr(); if ( al1_i == al2_j ) -#endif break; } @@ -802,8 +842,8 @@ void DNS_Mgr::DumpAddrList(FILE* f, ListVal* al) { for ( int i = 0; i < al->Length(); ++i ) { - addr_type al_i = al->Index(i)->AsAddr(); - fprintf(f, "%s%s", i > 0 ? "," : "", dotted_addr(al_i)); + const IPAddr& al_i = al->Index(i)->AsAddr(); + fprintf(f, "%s%s", i > 0 ? 
"," : "", al_i.AsString().c_str()); } } @@ -816,12 +856,20 @@ void DNS_Mgr::LoadCache(FILE* f) for ( ; ! m->NoMapping() && ! m->InitFailed(); m = new DNS_Mapping(f) ) { if ( m->ReqHost() ) - host_mappings.Insert(m->ReqHost(), m); + { + if ( host_mappings.find(m->ReqHost()) == host_mappings.end() ) + { + host_mappings[m->ReqHost()].first = 0; + host_mappings[m->ReqHost()].second = 0; + } + if ( m->Type() == AF_INET ) + host_mappings[m->ReqHost()].first = m; + else + host_mappings[m->ReqHost()].second = m; + } else { - uint32 tmp_addr = m->ReqAddr(); - HashKey h(&tmp_addr, 1); - addr_mappings.Insert(&h, m); + addr_mappings[m->ReqAddr()] = m; } } @@ -832,26 +880,41 @@ void DNS_Mgr::LoadCache(FILE* f) fclose(f); } -void DNS_Mgr::Save(FILE* f, PDict(DNS_Mapping)& m) +void DNS_Mgr::Save(FILE* f, const AddrMap& m) { - IterCookie* cookie = m.InitForIteration(); - DNS_Mapping* dm; - - while ( (dm = m.NextEntry(cookie)) ) - dm->Save(f); + for ( AddrMap::const_iterator it = m.begin(); it != m.end(); ++it ) + { + if ( it->second ) + it->second->Save(f); + } } -const char* DNS_Mgr::LookupAddrInCache(dns_mgr_addr_type addr) +void DNS_Mgr::Save(FILE* f, const HostMap& m) { - HashKey h(&addr, 1); - DNS_Mapping* d = dns_mgr->addr_mappings.Lookup(&h); + HostMap::const_iterator it; - if ( ! d ) + for ( it = m.begin(); it != m.end(); ++it ) + { + if ( it->second.first ) + it->second.first->Save(f); + + if ( it->second.second ) + it->second.second->Save(f); + } + } + +const char* DNS_Mgr::LookupAddrInCache(const IPAddr& addr) + { + AddrMap::iterator it = dns_mgr->addr_mappings.find(addr); + + if ( it == addr_mappings.end() ) return 0; + DNS_Mapping* d = it->second; + if ( d->Expired() ) { - dns_mgr->addr_mappings.Remove(&h); + dns_mgr->addr_mappings.erase(it); delete d; return 0; } @@ -863,23 +926,32 @@ const char* DNS_Mgr::LookupAddrInCache(dns_mgr_addr_type addr) TableVal* DNS_Mgr::LookupNameInCache(string name) { - DNS_Mapping* d = dns_mgr->host_mappings.Lookup(name.c_str()); - - if ( ! d || ! d->names ) + HostMap::iterator it = dns_mgr->host_mappings.find(name); + if ( it == dns_mgr->host_mappings.end() ) return 0; - if ( d->Expired() ) + DNS_Mapping* d4 = it->second.first; + DNS_Mapping* d6 = it->second.second; + + if ( ! d4 || ! d4->names || ! d6 || ! d6->names ) + return 0; + + if ( d4->Expired() || d6->Expired() ) { - HashKey h(name.c_str()); - dns_mgr->host_mappings.Remove(&h); - delete d; + dns_mgr->host_mappings.erase(it); + delete d4; + delete d6; return 0; } - return d->AddrsSet(); + TableVal* tv4 = d4->AddrsSet(); + TableVal* tv6 = d6->AddrsSet(); + tv4->AddTo(tv6, false); + Unref(tv4); + return tv6; } -void DNS_Mgr::AsyncLookupAddr(dns_mgr_addr_type host, LookupCallback* callback) +void DNS_Mgr::AsyncLookupAddr(const IPAddr& host, LookupCallback* callback) { if ( ! did_init ) Init(); @@ -958,10 +1030,15 @@ void DNS_Mgr::IssueAsyncRequests() ++num_requests; DNS_Mgr_Request* dr; + DNS_Mgr_Request* dr6 = 0; + if ( req->IsAddrReq() ) dr = new DNS_Mgr_Request(req->host); else - dr = new DNS_Mgr_Request(req->name.c_str()); + { + dr = new DNS_Mgr_Request(req->name.c_str(), AF_INET); + dr6 = new DNS_Mgr_Request(req->name.c_str(), AF_INET6); + } if ( ! dr->MakeRequest(nb_dns) ) { @@ -971,6 +1048,14 @@ void DNS_Mgr::IssueAsyncRequests() continue; } + if ( dr6 && ! 
dr6->MakeRequest(nb_dns) ) + { + reporter->Warning("can't issue DNS request"); + ++failed; + req->Timeout(); + continue; + } + req->time = current_time(); asyncs_timeouts.push(req); @@ -989,7 +1074,7 @@ double DNS_Mgr::NextTimestamp(double* network_time) return asyncs_timeouts.size() ? timer_mgr->Time() : -1.0; } -void DNS_Mgr::CheckAsyncAddrRequest(dns_mgr_addr_type addr, bool timeout) +void DNS_Mgr::CheckAsyncAddrRequest(const IPAddr& addr, bool timeout) { // Note that this code is a mirror of that for CheckAsyncHostRequest. @@ -1062,11 +1147,18 @@ void DNS_Mgr::Flush() { DoProcess(false); - IterCookie* cookie = addr_mappings.InitForIteration(); - DNS_Mapping* dm; + HostMap::iterator it; + for ( it = host_mappings.begin(); it != host_mappings.end(); ++it ) + { + delete it->second.first; + delete it->second.second; + } - host_mappings.Clear(); - addr_mappings.Clear(); + for ( AddrMap::iterator it2 = addr_mappings.begin(); it2 != addr_mappings.end(); ++it2 ) + delete it2->second; + + host_mappings.clear(); + addr_mappings.clear(); } void DNS_Mgr::Process() @@ -1109,6 +1201,14 @@ void DNS_Mgr::DoProcess(bool flush) else if ( status > 0 ) { DNS_Mgr_Request* dr = (DNS_Mgr_Request*) r.cookie; + + bool do_host_timeout = true; + if ( dr->ReqHost() && + host_mappings.find(dr->ReqHost()) == host_mappings.end() ) + // Don't timeout when this is the first result in an expected pair + // (one result each for A and AAAA queries). + do_host_timeout = false; + if ( dr->RequestPending() ) { AddResult(dr, &r); @@ -1118,7 +1218,7 @@ void DNS_Mgr::DoProcess(bool flush) if ( ! dr->ReqHost() ) CheckAsyncAddrRequest(dr->ReqAddr(), true); else - CheckAsyncHostRequest(dr->ReqHost(), true); + CheckAsyncHostRequest(dr->ReqHost(), do_host_timeout); IssueAsyncRequests(); @@ -1169,7 +1269,7 @@ void DNS_Mgr::GetStats(Stats* stats) stats->successful = successful; stats->failed = failed; stats->pending = asyncs_pending; - stats->cached_hosts = host_mappings.Length(); - stats->cached_addresses = addr_mappings.Length(); + stats->cached_hosts = host_mappings.size(); + stats->cached_addresses = addr_mappings.size(); } diff --git a/src/DNS_Mgr.h b/src/DNS_Mgr.h index def608d064..7983a91573 100644 --- a/src/DNS_Mgr.h +++ b/src/DNS_Mgr.h @@ -6,12 +6,14 @@ #include #include #include +#include #include "util.h" #include "BroList.h" #include "Dict.h" #include "EventHandler.h" #include "IOSource.h" +#include "IPAddr.h" class Val; class ListVal; @@ -27,7 +29,6 @@ struct nb_dns_result; declare(PDict,ListVal); class DNS_Mapping; -declare(PDict,DNS_Mapping); enum DNS_MgrMode { DNS_PRIME, // used to prime the cache @@ -39,10 +40,6 @@ enum DNS_MgrMode { // Number of seconds we'll wait for a reply. #define DNS_TIMEOUT 5 -// ### For now, we don't support IPv6 lookups. When we do, this -// should become addr_type. -typedef uint32 dns_mgr_addr_type; - class DNS_Mgr : public IOSource { public: DNS_Mgr(DNS_MgrMode mode); @@ -55,7 +52,7 @@ public: // a set of addr. TableVal* LookupHost(const char* host); - Val* LookupAddr(uint32 addr); + Val* LookupAddr(const IPAddr& addr); // Define the directory where to store the data. void SetDir(const char* arg_dir) { dir = copy_string(arg_dir); } @@ -64,7 +61,7 @@ public: void Resolve(); int Save(); - const char* LookupAddrInCache(dns_mgr_addr_type addr); + const char* LookupAddrInCache(const IPAddr& addr); TableVal* LookupNameInCache(string name); // Support for async lookups. 
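
// A standalone sketch (plain std:: containers, hypothetical names) of the
// pairing scheme the reworked host_mappings table in the DNS_Mgr.cc changes
// above uses: each host name maps to one slot for the A result and one for
// the AAAA result, and lookups merge the two address sets. It is simplified
// -- the real LookupHost() falls through and re-resolves when only one half
// of the pair is present.

#include <map>
#include <set>
#include <string>
#include <utility>

typedef std::set<std::string> AddrSet;   // stand-in for a Bro set[addr]

struct Mapping   // stand-in for DNS_Mapping
    {
    bool failed;
    AddrSet addrs;
    };

typedef std::map<std::string, std::pair<Mapping*, Mapping*> > HostMap;   // <A, AAAA>

bool LookupHost(const HostMap& hosts, const std::string& name, AddrSet& out)
    {
    HostMap::const_iterator it = hosts.find(name);
    if ( it == hosts.end() )
        return false;   // unknown: the caller would issue both queries

    const Mapping* d4 = it->second.first;
    const Mapping* d6 = it->second.second;

    if ( (d4 && d4->failed) || (d6 && d6->failed) )
        return false;   // matches the "no such host" warning above

    if ( d4 )
        out.insert(d4->addrs.begin(), d4->addrs.end());
    if ( d6 )
        out.insert(d6->addrs.begin(), d6->addrs.end());

    return ! out.empty();
    }
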
@@ -78,7 +75,7 @@ public: virtual void Timeout() = 0; }; - void AsyncLookupAddr(dns_mgr_addr_type host, LookupCallback* callback); + void AsyncLookupAddr(const IPAddr& host, LookupCallback* callback); void AsyncLookupName(string name, LookupCallback* callback); struct Stats { @@ -107,8 +104,11 @@ protected: ListVal* AddrListDelta(ListVal* al1, ListVal* al2); void DumpAddrList(FILE* f, ListVal* al); + typedef map > HostMap; + typedef map AddrMap; void LoadCache(FILE* f); - void Save(FILE* f, PDict(DNS_Mapping)& m); + void Save(FILE* f, const AddrMap& m); + void Save(FILE* f, const HostMap& m); // Selects on the fd to see if there is an answer available (timeout // is secs). Returns 0 on timeout, -1 on EINTR or other error, and 1 @@ -120,7 +120,7 @@ protected: // Finish the request if we have a result. If not, time it out if // requested. - void CheckAsyncAddrRequest(dns_mgr_addr_type addr, bool timeout); + void CheckAsyncAddrRequest(const IPAddr& addr, bool timeout); void CheckAsyncHostRequest(const char* host, bool timeout); // Process outstanding requests. @@ -136,8 +136,8 @@ protected: PDict(ListVal) services; - PDict(DNS_Mapping) host_mappings; - PDict(DNS_Mapping) addr_mappings; + HostMap host_mappings; + AddrMap addr_mappings; DNS_mgr_request_list requests; @@ -163,7 +163,7 @@ protected: struct AsyncRequest { double time; - dns_mgr_addr_type host; + IPAddr host; string name; CallbackList callbacks; @@ -204,7 +204,7 @@ protected: }; - typedef map AsyncRequestAddrMap; + typedef map AsyncRequestAddrMap; AsyncRequestAddrMap asyncs_addrs; typedef map AsyncRequestNameMap; diff --git a/src/DPM.cc b/src/DPM.cc index 345741dfc8..d7e5cd25ef 100644 --- a/src/DPM.cc +++ b/src/DPM.cc @@ -11,53 +11,28 @@ #include "ConnSizeAnalyzer.h" -ExpectedConn::ExpectedConn(const uint32* _orig, const uint32* _resp, +ExpectedConn::ExpectedConn(const IPAddr& _orig, const IPAddr& _resp, uint16 _resp_p, uint16 _proto) { - if ( orig ) - copy_addr(_orig, orig); + if ( _orig == IPAddr(string("0.0.0.0")) ) + // don't use the IPv4 mapping, use the literal unspecified address + // to indicate a wildcard + orig = IPAddr(string("::")); else - { - for ( int i = 0; i < NUM_ADDR_WORDS; ++i ) - orig[i] = 0; - } - - copy_addr(_resp, resp); - - resp_p = _resp_p; - proto = _proto; - } - -ExpectedConn::ExpectedConn(uint32 _orig, uint32 _resp, - uint16 _resp_p, uint16 _proto) - { -#ifdef BROv6 - // Use the IPv4-within-IPv6 convention, as this is what's - // needed when we mix uint32's (like in this construction) - // with addr_type's (for example, when looking up expected - // connections). - - orig[0] = orig[1] = orig[2] = 0; - resp[0] = resp[1] = resp[2] = 0; - orig[3] = _orig; - resp[3] = _resp; -#else - orig[0] = _orig; - resp[0] = _resp; -#endif + orig = _orig; + resp = _resp; resp_p = _resp_p; proto = _proto; } ExpectedConn::ExpectedConn(const ExpectedConn& c) { - copy_addr(c.orig, orig); - copy_addr(c.resp, resp); + orig = c.orig; + resp = c.resp; resp_p = c.resp_p; proto = c.proto; } - DPM::DPM() : expected_conns_queue(AssignedAnalyzer::compare) { @@ -99,7 +74,7 @@ void DPM::PostScriptInit() void DPM::AddConfig(const Analyzer::Config& cfg) { -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG HeapLeakChecker::Disabler disabler; #endif @@ -158,23 +133,18 @@ AnalyzerTag::Tag DPM::GetExpected(int proto, const Connection* conn) ExpectedConn c(conn->OrigAddr(), conn->RespAddr(), ntohs(conn->RespPort()), proto); - // Can't use sizeof(c) due to potential alignment issues. - // FIXME: I guess this is still not portable ... 
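
// A standalone sketch (std::map and string addresses instead of Bro's PDict,
// HashKey and IPAddr; names hypothetical) of the two-step lookup that
// GetExpected() performs here: try the exact originator first, then retry
// with the unspecified address "::", which the new ExpectConnection() stores
// as the originator wildcard.

#include <map>
#include <string>

struct Expected
    {
    std::string orig, resp;
    unsigned short resp_p, proto;

    bool operator<(const Expected& o) const
        {
        if ( orig != o.orig ) return orig < o.orig;
        if ( resp != o.resp ) return resp < o.resp;
        if ( resp_p != o.resp_p ) return resp_p < o.resp_p;
        return proto < o.proto;
        }
    };

typedef std::map<Expected, int> ExpectedMap;   // value = analyzer tag

int GetExpectedTag(const ExpectedMap& m, const std::string& orig,
                   const std::string& resp, unsigned short resp_p,
                   unsigned short proto)
    {
    Expected c = { orig, resp, resp_p, proto };
    ExpectedMap::const_iterator it = m.find(c);

    if ( it == m.end() )
        {
        // Wildcard for the originator, as registered when ExpectConnection()
        // is handed 0.0.0.0 / an unspecified address.
        c.orig = "::";
        it = m.find(c);
        }

    return it == m.end() ? 0 : it->second;   // 0 = nothing scheduled
    }
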
- HashKey key(&c, sizeof(c.orig) + sizeof(c.resp) + - sizeof(c.resp_p) + sizeof(c.proto)); - - AssignedAnalyzer* a = expected_conns.Lookup(&key); + HashKey* key = BuildExpectedConnHashKey(c); + AssignedAnalyzer* a = expected_conns.Lookup(key); + delete key; if ( ! a ) { // Wildcard for originator. - for ( int i = 0; i < NUM_ADDR_WORDS; ++i ) - c.orig[i] = 0; + c.orig = IPAddr(string("::")); - HashKey key(&c, sizeof(c.orig) + sizeof(c.resp) + - sizeof(c.resp_p) + sizeof(c.proto)); - - a = expected_conns.Lookup(&key); + HashKey* key = BuildExpectedConnHashKey(c); + a = expected_conns.Lookup(key); + delete key; } if ( ! a ) @@ -215,46 +185,8 @@ bool DPM::BuildInitialAnalyzerTree(TransportProto proto, Connection* conn, break; case TRANSPORT_ICMP: { - const struct icmp* icmpp = (const struct icmp *) data; - switch ( icmpp->icmp_type ) { - - case ICMP_ECHO: - case ICMP_ECHOREPLY: - if ( ICMP_Echo_Analyzer::Available() ) - { - root = icmp = new ICMP_Echo_Analyzer(conn); - DBG_DPD(conn, "activated ICMP Echo analyzer"); - } - break; - - case ICMP_REDIRECT: - if ( ICMP_Redir_Analyzer::Available() ) - { - root = new ICMP_Redir_Analyzer(conn); - DBG_DPD(conn, "activated ICMP Redir analyzer"); - } - break; - - case ICMP_UNREACH: - if ( ICMP_Unreachable_Analyzer::Available() ) - { - root = icmp = new ICMP_Unreachable_Analyzer(conn); - DBG_DPD(conn, "activated ICMP Unreachable analyzer"); - } - break; - - case ICMP_TIMXCEED: - if ( ICMP_TimeExceeded_Analyzer::Available() ) - { - root = icmp = new ICMP_TimeExceeded_Analyzer(conn); - DBG_DPD(conn, "activated ICMP Time Exceeded analyzer"); - } - break; - } - - if ( ! root ) - root = icmp = new ICMP_Analyzer(conn); - + root = icmp = new ICMP_Analyzer(conn); + DBG_DPD(conn, "activated ICMP analyzer"); analyzed = true; break; } @@ -404,7 +336,8 @@ bool DPM::BuildInitialAnalyzerTree(TransportProto proto, Connection* conn, return true; } -void DPM::ExpectConnection(addr_type orig, addr_type resp, uint16 resp_p, +void DPM::ExpectConnection(const IPAddr& orig, const IPAddr& resp, + uint16 resp_p, TransportProto proto, AnalyzerTag::Tag analyzer, double timeout, void* cookie) { @@ -416,11 +349,7 @@ void DPM::ExpectConnection(addr_type orig, addr_type resp, uint16 resp_p, { if ( ! a->deleted ) { - HashKey* key = new HashKey(&a->conn, - sizeof(a->conn.orig) + - sizeof(a->conn.resp) + - sizeof(a->conn.resp_p) + - sizeof(a->conn.proto)); + HashKey* key = BuildExpectedConnHashKey(a->conn); expected_conns.Remove(key); delete key; } @@ -439,10 +368,9 @@ void DPM::ExpectConnection(addr_type orig, addr_type resp, uint16 resp_p, ExpectedConn c(orig, resp, resp_p, proto); - HashKey key(&c, sizeof(c.orig) + sizeof(c.resp) + - sizeof(c.resp_p) + sizeof(c.proto)); + HashKey* key = BuildExpectedConnHashKey(c); - AssignedAnalyzer* a = expected_conns.Lookup(&key); + AssignedAnalyzer* a = expected_conns.Lookup(key); if ( a ) a->deleted = true; @@ -454,8 +382,9 @@ void DPM::ExpectConnection(addr_type orig, addr_type resp, uint16 resp_p, a->timeout = network_time + timeout; a->deleted = false; - expected_conns.Insert(&key, a); + expected_conns.Insert(key, a); expected_conns_queue.push(a); + delete key; } void DPM::Done() @@ -466,11 +395,7 @@ void DPM::Done() AssignedAnalyzer* a = expected_conns_queue.top(); if ( ! 
a->deleted ) { - HashKey* key = new HashKey(&a->conn, - sizeof(a->conn.orig) + - sizeof(a->conn.resp) + - sizeof(a->conn.resp_p) + - sizeof(a->conn.proto)); + HashKey* key = BuildExpectedConnHashKey(a->conn); expected_conns.Remove(key); delete key; } diff --git a/src/DPM.h b/src/DPM.h index 4bacbfbeea..f59d21dbfc 100644 --- a/src/DPM.h +++ b/src/DPM.h @@ -27,19 +27,13 @@ // Map to assign expected connections to analyzers. class ExpectedConn { public: - // This form can be used for IPv6 as well as IPv4. - ExpectedConn(const uint32* _orig, const uint32* _resp, + ExpectedConn(const IPAddr& _orig, const IPAddr& _resp, uint16 _resp_p, uint16 _proto); - // This form only works for expecting an IPv4 connection. Note - // that we do the right thing whether we're built IPv4-only or - // BROv6. - ExpectedConn(uint32 _orig, uint32 _resp, uint16 _resp_p, uint16 _proto); - ExpectedConn(const ExpectedConn& c); - uint32 orig[NUM_ADDR_WORDS]; - uint32 resp[NUM_ADDR_WORDS]; + IPAddr orig; + IPAddr resp; uint16 resp_p; uint16 proto; }; @@ -90,7 +84,7 @@ public: // Schedules a particular analyzer for an upcoming connection. // 0 acts as a wildcard for orig. (Cookie is currently unused. // Eventually, we may pass it on to the analyzer). - void ExpectConnection(addr_type orig, addr_type resp, uint16 resp_p, + void ExpectConnection(const IPAddr& orig, const IPAddr& resp, uint16 resp_p, TransportProto proto, AnalyzerTag::Tag analyzer, double timeout, void* cookie); diff --git a/src/Debug.cc b/src/Debug.cc index 1b849fff7e..535e193685 100644 --- a/src/Debug.cc +++ b/src/Debug.cc @@ -142,7 +142,7 @@ int TraceState::LogTrace(const char* fmt, ...) if ( ! loc.filename ) { - loc.filename = ""; + loc.filename = copy_string(""); loc.last_line = 0; } @@ -721,7 +721,6 @@ static char* get_prompt(bool reset_counter = false) string get_context_description(const Stmt* stmt, const Frame* frame) { - char buf[1024]; ODesc d; const BroFunc* func = frame->GetFunction(); @@ -735,14 +734,18 @@ string get_context_description(const Stmt* stmt, const Frame* frame) loc = *stmt->GetLocationInfo(); else { - loc.filename = ""; + loc.filename = copy_string(""); loc.last_line = 0; } - safe_snprintf(buf, sizeof(buf), "In %s at %s:%d", + size_t buf_size = strlen(d.Description()) + strlen(loc.filename) + 1024; + char* buf = new char[buf_size]; + safe_snprintf(buf, buf_size, "In %s at %s:%d", d.Description(), loc.filename, loc.last_line); - return string(buf); + string retval(buf); + delete [] buf; + return retval; } int dbg_handle_debug_input() @@ -924,6 +927,8 @@ bool post_execute_stmt(Stmt* stmt, Frame* f, Val* result, stmt_flow_type* flow) // Evaluates the given expression in the context of the currently selected // frame. Returns the resulting value, or nil if none (or there was an error). Expr* g_curr_debug_expr = 0; +const char* g_curr_debug_error = 0; +bool in_debug = false; // ### fix this hardwired access to external variables etc. 
struct yy_buffer_state; @@ -969,6 +974,11 @@ Val* dbg_eval_expr(const char* expr) Val* result = 0; if ( yyparse() ) { + if ( g_curr_debug_error ) + debug_msg("Parsing expression '%s' failed: %s\n", expr, g_curr_debug_error); + else + debug_msg("Parsing expression '%s' failed\n", expr); + if ( g_curr_debug_expr ) { delete g_curr_debug_expr; @@ -983,6 +993,9 @@ Val* dbg_eval_expr(const char* expr) delete g_curr_debug_expr; g_curr_debug_expr = 0; + delete [] g_curr_debug_error; + g_curr_debug_error = 0; + in_debug = false; return result; } diff --git a/src/DebugCmds.cc b/src/DebugCmds.cc index 1d3b9dd220..bfb4d6ecc8 100644 --- a/src/DebugCmds.cc +++ b/src/DebugCmds.cc @@ -553,7 +553,8 @@ int dbg_cmd_print(DebugCmd cmd, const vector& args) for ( int i = 0; i < int(args.size()); ++i ) { expr += args[i]; - expr += " "; + if ( i < int(args.size()) - 1 ) + expr += " "; } Val* val = dbg_eval_expr(expr.c_str()); @@ -566,8 +567,7 @@ int dbg_cmd_print(DebugCmd cmd, const vector& args) } else { - // ### Print something? - // debug_msg("\n"); + debug_msg("\n"); } return 1; diff --git a/src/DebugLogger.cc b/src/DebugLogger.cc index d60fdd70c8..3394486ff2 100644 --- a/src/DebugLogger.cc +++ b/src/DebugLogger.cc @@ -15,7 +15,8 @@ DebugLogger::Stream DebugLogger::streams[NUM_DBGS] = { { "compressor", 0, false }, {"string", 0, false }, { "notifiers", 0, false }, { "main-loop", 0, false }, { "dpd", 0, false }, { "tm", 0, false }, - { "logging", 0, false } + { "logging", 0, false }, {"input", 0, false }, + { "threading", 0, false } }; DebugLogger::DebugLogger(const char* filename) diff --git a/src/DebugLogger.h b/src/DebugLogger.h index a2dece5b3c..ca422072c5 100644 --- a/src/DebugLogger.h +++ b/src/DebugLogger.h @@ -24,6 +24,8 @@ enum DebugStream { DBG_DPD, // Dynamic application detection framework DBG_TM, // Time-machine packet input via Brocolli DBG_LOGGING, // Logging streams + DBG_INPUT, // Input streams + DBG_THREADING, // Threading system NUM_DBGS // Has to be last }; diff --git a/src/Desc.cc b/src/Desc.cc index c70878de34..9d94321427 100644 --- a/src/Desc.cc +++ b/src/Desc.cc @@ -41,8 +41,7 @@ ODesc::ODesc(desc_type t, BroFile* arg_f) do_flush = 1; include_stats = 0; indent_with_spaces = 0; - escape = 0; - escape_len = 0; + escape = false; } ODesc::~ODesc() @@ -56,10 +55,9 @@ ODesc::~ODesc() free(base); } -void ODesc::SetEscape(const char* arg_escape, int len) +void ODesc::EnableEscaping() { - escape = arg_escape; - escape_len = len; + escape = true; } void ODesc::PushIndent() @@ -159,6 +157,16 @@ void ODesc::Add(double d) } } +void ODesc::Add(const IPAddr& addr) + { + Add(addr.AsString()); + } + +void ODesc::Add(const IPPrefix& prefix) + { + Add(prefix.AsString()); + } + void ODesc::AddCS(const char* s) { int n = strlen(s); @@ -228,6 +236,25 @@ static const char* find_first_unprintable(ODesc* d, const char* bytes, unsigned return 0; } +pair ODesc::FirstEscapeLoc(const char* bytes, size_t n) + { + pair p(find_first_unprintable(this, bytes, n), 1); + + string str(bytes, n); + list::const_iterator it; + for ( it = escape_sequences.begin(); it != escape_sequences.end(); ++it ) + { + size_t pos = str.find(*it); + if ( pos != string::npos && (p.first == 0 || bytes + pos < p.first) ) + { + p.first = bytes + pos; + p.second = it->size(); + } + } + + return p; + } + void ODesc::AddBytes(const void* bytes, unsigned int n) { if ( ! escape ) @@ -241,45 +268,30 @@ void ODesc::AddBytes(const void* bytes, unsigned int n) while ( s < e ) { - const char* t1 = (const char*) memchr(s, escape[0], e - s); - - if ( ! 
t1 ) - t1 = e; - - const char* t2 = find_first_unprintable(this, s, t1 - s); - - if ( t2 && t2 < t1 ) + pair p = FirstEscapeLoc(s, e - s); + if ( p.first ) { - AddBytesRaw(s, t2 - s); - - char hex[6] = "\\x00"; - hex[2] = hex_chars[((*t2) & 0xf0) >> 4]; - hex[3] = hex_chars[(*t2) & 0x0f]; - AddBytesRaw(hex, 4); - - s = t2 + 1; - continue; + AddBytesRaw(s, p.first - s); + if ( p.second == 1 ) + { + char hex[6] = "\\x00"; + hex[2] = hex_chars[((*p.first) & 0xf0) >> 4]; + hex[3] = hex_chars[(*p.first) & 0x0f]; + AddBytesRaw(hex, 4); + } + else + { + string esc_str = get_escaped_string(string(p.first, p.second), true); + AddBytesRaw(esc_str.c_str(), esc_str.size()); + } + s = p.first + p.second; } - - if ( memcmp(t1, escape, escape_len) != 0 ) - break; - - AddBytesRaw(s, t1 - s); - - for ( int i = 0; i < escape_len; ++i ) + else { - char hex[5] = "\\x00"; - hex[2] = hex_chars[((*t1) & 0xf0) >> 4]; - hex[3] = hex_chars[(*t1) & 0x0f]; - AddBytesRaw(hex, 4); - ++t1; + AddBytesRaw(s, e - s); + break; } - - s = t1; } - - if ( s < e ) - AddBytesRaw(s, e - s); } void ODesc::AddBytesRaw(const void* bytes, unsigned int n) diff --git a/src/Desc.h b/src/Desc.h index 4ed05c1763..9c60c68106 100644 --- a/src/Desc.h +++ b/src/Desc.h @@ -4,6 +4,9 @@ #define descriptor_h #include +#include +#include + #include "BroString.h" typedef enum { @@ -19,6 +22,8 @@ typedef enum { } desc_style; class BroFile; +class IPAddr; +class IPPrefix; class ODesc { public: @@ -48,8 +53,13 @@ public: void SetFlush(int arg_do_flush) { do_flush = arg_do_flush; } - // The string passed in must remain valid as long as this object lives. - void SetEscape(const char* escape, int len); + void EnableEscaping(); + void AddEscapeSequence(const char* s) { escape_sequences.push_back(s); } + void AddEscapeSequence(const char* s, size_t n) + { escape_sequences.push_back(string(s, n)); } + void RemoveEscapeSequence(const char* s) { escape_sequences.remove(s); } + void RemoveEscapeSequence(const char* s, size_t n) + { escape_sequences.remove(string(s, n)); } void PushIndent(); void PopIndent(); @@ -61,11 +71,14 @@ public: void Add(const char* s, int do_indent=1); void AddN(const char* s, int len) { AddBytes(s, len); } + void Add(const string& s) { AddBytes(s.data(), s.size()); } void Add(int i); void Add(uint32 u); void Add(int64 i); void Add(uint64 u); void Add(double d); + void Add(const IPAddr& addr); + void Add(const IPPrefix& prefix); // Add s as a counted string. void AddCS(const char* s); @@ -133,6 +146,19 @@ protected: void OutOfMemory(); + /** + * Returns the location of the first place in the bytes to be hex-escaped. + * + * @param bytes the starting memory address to start searching for + * escapable character. + * @param n the maximum number of bytes to search. + * @return a pair whose first element represents a starting memory address + * to be escaped up to the number of characters indicated by the + * second element. The first element may be 0 if nothing is + * to be escaped. + */ + pair FirstEscapeLoc(const char* bytes, size_t n); + desc_type type; desc_style style; @@ -140,8 +166,8 @@ protected: unsigned int offset; // where we are in the buffer unsigned int size; // size of buffer in bytes - int escape_len; // number of bytes in to escape sequence - const char* escape; // bytes to escape on output + bool escape; // escape unprintable characters in output? + list escape_sequences; // additional sequences of chars to escape BroFile* f; // or the file we're using. 
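
The ODesc changes above replace the single SetEscape() byte sequence with EnableEscaping() plus a list of registered escape sequences, which FirstEscapeLoc() scans for alongside unprintable characters. A self-contained sketch of the same scan-and-hex-escape idea follows; escape_bytes() is illustrative only and is not the ODesc implementation.

    #include <cctype>
    #include <cstdio>
    #include <list>
    #include <string>

    // Hex-escape every unprintable byte and every occurrence of a registered
    // escape sequence; copy all other bytes through unchanged.
    static std::string escape_bytes(const std::string& in,
                                    const std::list<std::string>& sequences)
        {
        std::string out;

        for ( size_t i = 0; i < in.size(); )
            {
            size_t esc_len = 0;

            if ( ! std::isprint((unsigned char) in[i]) )
                esc_len = 1;
            else
                {
                std::list<std::string>::const_iterator it;
                for ( it = sequences.begin(); it != sequences.end(); ++it )
                    if ( ! it->empty() && in.compare(i, it->size(), *it) == 0 )
                        {
                        esc_len = it->size();
                        break;
                        }
                }

            if ( ! esc_len )
                {
                out += in[i];
                ++i;
                continue;
                }

            for ( size_t j = 0; j < esc_len; ++j )
                {
                char hex[5];
                std::snprintf(hex, sizeof(hex), "\\x%02x", (unsigned char) in[i + j]);
                out += hex;
                }

            i += esc_len;
            }

        return out;
        }

As a usage example, a caller that treats "--" as a field separator could register that sequence so embedded occurrences come out as \x2d\x2d rather than colliding with the separator.
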
diff --git a/src/Discard.cc b/src/Discard.cc index a71b810601..edfeea1408 100644 --- a/src/Discard.cc +++ b/src/Discard.cc @@ -10,11 +10,6 @@ Discarder::Discarder() { - ip_hdr = internal_type("ip_hdr")->AsRecordType(); - tcp_hdr = internal_type("tcp_hdr")->AsRecordType(); - udp_hdr = internal_type("udp_hdr")->AsRecordType(); - icmp_hdr = internal_type("icmp_hdr")->AsRecordType(); - check_ip = internal_func("discarder_check_ip"); check_tcp = internal_func("discarder_check_tcp"); check_udp = internal_func("discarder_check_udp"); @@ -36,12 +31,10 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen) { int discard_packet = 0; - const struct ip* ip4 = ip->IP4_Hdr(); - if ( check_ip ) { val_list* args = new val_list; - args->append(BuildHeader(ip4)); + args->append(ip->BuildPktHdrVal()); try { @@ -59,19 +52,18 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen) return discard_packet; } - int proto = ip4->ip_p; + int proto = ip->NextProto(); if ( proto != IPPROTO_TCP && proto != IPPROTO_UDP && proto != IPPROTO_ICMP ) // This is not a protocol we understand. return 0; // XXX shall we only check the first packet??? - uint32 frag_field = ntohs(ip4->ip_off); - if ( (frag_field & 0x3fff) != 0 ) + if ( ip->IsFragment() ) // Never check any fragment. return 0; - int ip_hdr_len = ip4->ip_hl * 4; + int ip_hdr_len = ip->HdrLen(); len -= ip_hdr_len; // remove IP header caplen -= ip_hdr_len; @@ -87,7 +79,7 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen) // Where the data starts - if this is a protocol we know about, // this gets advanced past the transport header. - const u_char* data = ((u_char*) ip4 + ip_hdr_len); + const u_char* data = ip->Payload(); if ( is_tcp ) { @@ -97,8 +89,7 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen) int th_len = tp->th_off * 4; val_list* args = new val_list; - args->append(BuildHeader(ip4)); - args->append(BuildHeader(tp, len)); + args->append(ip->BuildPktHdrVal()); args->append(BuildData(data, th_len, len, caplen)); try @@ -123,8 +114,7 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen) int uh_len = sizeof (struct udphdr); val_list* args = new val_list; - args->append(BuildHeader(ip4)); - args->append(BuildHeader(up)); + args->append(ip->BuildPktHdrVal()); args->append(BuildData(data, uh_len, len, caplen)); try @@ -148,8 +138,7 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen) const struct icmp* ih = (const struct icmp*) data; val_list* args = new val_list; - args->append(BuildHeader(ip4)); - args->append(BuildHeader(ih)); + args->append(ip->BuildPktHdrVal()); try { @@ -168,62 +157,6 @@ int Discarder::NextPacket(const IP_Hdr* ip, int len, int caplen) return discard_packet; } -Val* Discarder::BuildHeader(const struct ip* ip) - { - RecordVal* hdr = new RecordVal(ip_hdr); - - hdr->Assign(0, new Val(ip->ip_hl * 4, TYPE_COUNT)); - hdr->Assign(1, new Val(ip->ip_tos, TYPE_COUNT)); - hdr->Assign(2, new Val(ntohs(ip->ip_len), TYPE_COUNT)); - hdr->Assign(3, new Val(ntohs(ip->ip_id), TYPE_COUNT)); - hdr->Assign(4, new Val(ip->ip_ttl, TYPE_COUNT)); - hdr->Assign(5, new Val(ip->ip_p, TYPE_COUNT)); - hdr->Assign(6, new AddrVal(ip->ip_src.s_addr)); - hdr->Assign(7, new AddrVal(ip->ip_dst.s_addr)); - - return hdr; - } - -Val* Discarder::BuildHeader(const struct tcphdr* tp, int tcp_len) - { - RecordVal* hdr = new RecordVal(tcp_hdr); - - hdr->Assign(0, new PortVal(ntohs(tp->th_sport), TRANSPORT_TCP)); - hdr->Assign(1, new PortVal(ntohs(tp->th_dport), TRANSPORT_TCP)); - hdr->Assign(2, new 
Val(uint32(ntohl(tp->th_seq)), TYPE_COUNT)); - hdr->Assign(3, new Val(uint32(ntohl(tp->th_ack)), TYPE_COUNT)); - - int tcp_hdr_len = tp->th_off * 4; - - hdr->Assign(4, new Val(tcp_hdr_len, TYPE_COUNT)); - hdr->Assign(5, new Val(tcp_len - tcp_hdr_len, TYPE_COUNT)); - - hdr->Assign(6, new Val(tp->th_flags, TYPE_COUNT)); - hdr->Assign(7, new Val(ntohs(tp->th_win), TYPE_COUNT)); - - return hdr; - } - -Val* Discarder::BuildHeader(const struct udphdr* up) - { - RecordVal* hdr = new RecordVal(udp_hdr); - - hdr->Assign(0, new PortVal(ntohs(up->uh_sport), TRANSPORT_UDP)); - hdr->Assign(1, new PortVal(ntohs(up->uh_dport), TRANSPORT_UDP)); - hdr->Assign(2, new Val(ntohs(up->uh_ulen), TYPE_COUNT)); - - return hdr; - } - -Val* Discarder::BuildHeader(const struct icmp* icmp) - { - RecordVal* hdr = new RecordVal(icmp_hdr); - - hdr->Assign(0, new Val(icmp->icmp_type, TYPE_COUNT)); - - return hdr; - } - Val* Discarder::BuildData(const u_char* data, int hdrlen, int len, int caplen) { len -= hdrlen; diff --git a/src/Discard.h b/src/Discard.h index 16f7a58e6e..f4daabefa7 100644 --- a/src/Discard.h +++ b/src/Discard.h @@ -25,17 +25,8 @@ public: int NextPacket(const IP_Hdr* ip, int len, int caplen); protected: - Val* BuildHeader(const struct ip* ip); - Val* BuildHeader(const struct tcphdr* tp, int tcp_len); - Val* BuildHeader(const struct udphdr* up); - Val* BuildHeader(const struct icmp* icmp); Val* BuildData(const u_char* data, int hdrlen, int len, int caplen); - RecordType* ip_hdr; - RecordType* tcp_hdr; - RecordType* udp_hdr; - RecordType* icmp_hdr; - Func* check_ip; Func* check_tcp; Func* check_udp; diff --git a/src/EventHandler.cc b/src/EventHandler.cc index 2867b63437..5598f93f98 100644 --- a/src/EventHandler.cc +++ b/src/EventHandler.cc @@ -96,7 +96,7 @@ EventHandler* EventHandler::Unserialize(UnserialInfo* info) { char* name; if ( ! UNSERIALIZE_STR(&name, 0) ) - return false; + return 0; EventHandler* h = event_registry->Lookup(name); if ( ! h ) diff --git a/src/EventHandler.h b/src/EventHandler.h index 2aebe87584..a86b8a285c 100644 --- a/src/EventHandler.h +++ b/src/EventHandler.h @@ -7,7 +7,6 @@ #include "List.h" #include "BroList.h" -#include "net_util.h" class Func; class FuncType; diff --git a/src/Expr.cc b/src/Expr.cc index f6d1fc568e..e6936267d8 100644 --- a/src/Expr.cc +++ b/src/Expr.cc @@ -14,6 +14,7 @@ #include "Net.h" #include "Traverse.h" #include "Trigger.h" +#include "IPAddr.h" const char* expr_name(BroExprTag t) { @@ -359,7 +360,7 @@ bool NameExpr::DoUnserialize(UnserialInfo* info) if ( id ) ::Ref(id); else - reporter->Warning("unserialized unknown global name"); + reporter->Warning("configuration changed: unserialized unknown global name from persistent state"); delete [] name; } @@ -834,30 +835,30 @@ Val* BinaryExpr::StringFold(Val* v1, Val* v2) const Val* BinaryExpr::AddrFold(Val* v1, Val* v2) const { - addr_type a1 = v1->AsAddr(); - addr_type a2 = v2->AsAddr(); + IPAddr a1 = v1->AsAddr(); + IPAddr a2 = v2->AsAddr(); int result = 0; switch ( tag ) { -#undef DO_FOLD -#ifdef BROv6 -#define DO_FOLD(sense) { result = memcmp(a1, a2, 16) sense 0; break; } -#else -#define DO_FOLD(sense) \ - { \ - a1 = ntohl(a1); \ - a2 = ntohl(a2); \ - result = (a1 < a2 ? -1 : (a1 == a2 ? 
0 : 1)) sense 0; \ - break; \ - } -#endif - case EXPR_LT: DO_FOLD(<) - case EXPR_LE: DO_FOLD(<=) - case EXPR_EQ: DO_FOLD(==) - case EXPR_NE: DO_FOLD(!=) - case EXPR_GE: DO_FOLD(>=) - case EXPR_GT: DO_FOLD(>) + case EXPR_LT: + result = a1 < a2; + break; + case EXPR_LE: + result = a1 < a2 || a1 == a2; + break; + case EXPR_EQ: + result = a1 == a2; + break; + case EXPR_NE: + result = a1 != a2; + break; + case EXPR_GE: + result = ! ( a1 < a2 ); + break; + case EXPR_GT: + result = ( ! ( a1 < a2 ) ) && ( a1 != a2 ); + break; default: BadTag("BinaryExpr::AddrFold", expr_name(tag)); @@ -868,20 +869,15 @@ Val* BinaryExpr::AddrFold(Val* v1, Val* v2) const Val* BinaryExpr::SubNetFold(Val* v1, Val* v2) const { - subnet_type* n1 = v1->AsSubNet(); - subnet_type* n2 = v2->AsSubNet(); + const IPPrefix& n1 = v1->AsSubNet(); + const IPPrefix& n2 = v2->AsSubNet(); - if ( n1->width != n2->width ) - return new Val(0, TYPE_BOOL); + bool result = ( n1 == n2 ) ? true : false; -#ifdef BROv6 - if ( memcmp(n1->net, n2->net, 16) ) -#else - if ( n1->net != n2->net ) -#endif - return new Val(0, TYPE_BOOL); + if ( tag == EXPR_NE ) + result = ! result; - return new Val(1, TYPE_BOOL); + return new Val(result, TYPE_BOOL); } void BinaryExpr::SwapOps() @@ -1041,12 +1037,10 @@ Val* IncrExpr::Eval(Frame* f) const { Val* new_elt = DoSingleEval(f, elt); v_vec->Assign(i, new_elt, this, OP_INCR); - Unref(new_elt); // was Ref()'d by Assign() } else v_vec->Assign(i, 0, this, OP_INCR); } - // FIXME: Is the next line needed? op->Assign(f, v_vec, OP_INCR); } @@ -1523,6 +1517,8 @@ RemoveFromExpr::RemoveFromExpr(Expr* arg_op1, Expr* arg_op2) if ( BothArithmetic(bt1, bt2) ) PromoteType(max_type(bt1, bt2), is_vector(op1) || is_vector(op2)); + else if ( BothInterval(bt1, bt2) ) + SetType(base_type(bt1)); else ExprError("requires two arithmetic operands"); } @@ -1681,15 +1677,13 @@ DivideExpr::DivideExpr(Expr* arg_op1, Expr* arg_op2) Val* DivideExpr::AddrFold(Val* v1, Val* v2) const { - addr_type a1 = v1->AsAddr(); - uint32 mask; if ( v2->Type()->Tag() == TYPE_COUNT ) mask = static_cast(v2->InternalUnsigned()); else mask = static_cast(v2->InternalInt()); - return new SubNetVal(a1, mask); + return new SubNetVal(v1->AsAddr(), mask); } Expr* DivideExpr::DoSimplify() @@ -2410,11 +2404,6 @@ Expr* RefExpr::MakeLvalue() return this; } -Val* RefExpr::Eval(Val* v) const - { - return Fold(v); - } - void RefExpr::Assign(Frame* f, Val* v, Opcode opcode) { op->Assign(f, v, opcode); @@ -2435,7 +2424,7 @@ bool RefExpr::DoUnserialize(UnserialInfo* info) } AssignExpr::AssignExpr(Expr* arg_op1, Expr* arg_op2, int arg_is_init, - Val* arg_val) + Val* arg_val, attr_list* arg_attrs) : BinaryExpr(EXPR_ASSIGN, arg_is_init ? arg_op1 : arg_op1->MakeLvalue(), arg_op2) { @@ -2455,14 +2444,14 @@ AssignExpr::AssignExpr(Expr* arg_op1, Expr* arg_op2, int arg_is_init, // We discard the status from TypeCheck since it has already // generated error messages. - (void) TypeCheck(); + (void) TypeCheck(arg_attrs); val = arg_val ? 
arg_val->Ref() : 0; SetLocationInfo(arg_op1->GetLocationInfo(), arg_op2->GetLocationInfo()); } -bool AssignExpr::TypeCheck() +bool AssignExpr::TypeCheck(attr_list* attrs) { TypeTag bt1 = op1->Type()->Tag(); TypeTag bt2 = op2->Type()->Tag(); @@ -2494,6 +2483,21 @@ bool AssignExpr::TypeCheck() return true; } + if ( bt1 == TYPE_TABLE && op2->Tag() == EXPR_LIST ) + { + attr_list* attr_copy = 0; + + if ( attrs ) + { + attr_copy = new attr_list; + loop_over_list(*attrs, i) + attr_copy->append((*attrs)[i]); + } + + op2 = new TableConstructorExpr(op2->AsListExpr(), attr_copy); + return true; + } + if ( bt1 == TYPE_VECTOR && bt2 == bt1 && op2->Type()->AsVectorType()->IsUnspecifiedVector() ) { @@ -2657,8 +2661,6 @@ void AssignExpr::EvalIntoAggregate(const BroType* t, Val* aggr, Frame* f) const Error("bad table insertion"); TableVal* tv = aggr->AsTableVal(); - const TableType* tt = tv->Type()->AsTableType(); - const BroType* yt = tv->Type()->YieldType(); Val* index = op1->Eval(f); Val* v = op2->Eval(f); @@ -3628,151 +3630,6 @@ bool FieldAssignExpr::DoUnserialize(UnserialInfo* info) return true; } -RecordMatchExpr::RecordMatchExpr(Expr* op1 /* record to match */, - Expr* op2 /* cases to match against */) -: BinaryExpr(EXPR_MATCH, op1, op2) - { - BroType* result_type = 0; - - // Make sure the second argument is of a suitable type. - if ( ! op2->Type()->IsSet() ) - { - ExprError("matching must be done against a set of match records"); - return; - } - - type_list* elt_types = op2->Type()->AsSetType()->Indices()->Types(); - - if ( ! elt_types->length() || - (*elt_types)[0]->Tag() != TYPE_RECORD ) - { - ExprError("matching must be done against a set of match records"); - return; - } - - RecordType* case_rec_type = (*elt_types)[0]->AsRecordType(); - - // NOTE: The "result" and "pred" field names are hardcoded here. - result_field_index = case_rec_type->FieldOffset("result"); - - if ( result_field_index < 0 ) - { - ExprError("match records must have a $result field"); - return; - } - - result_type = case_rec_type->FieldType("result")->Ref(); - - // Check that pred exists, and that the first argument matches it. - if ( (pred_field_index = case_rec_type->FieldOffset("pred")) < 0 || - case_rec_type->FieldType("pred")->Tag() != TYPE_FUNC ) - { - ExprError("match records must have a $pred' field of function type"); - return; - } - - FuncType* pred_type = case_rec_type->FieldType("pred")->AsFuncType(); - type_list* pred_arg_types = pred_type->ArgTypes()->Types(); - if ( pred_arg_types->length() != 1 || - ! check_and_promote_expr(op1, (*pred_arg_types)[0]) ) - ExprError("record to match does not have the same type as predicate argument"); - - // NOTE: The "priority" field name is hardcoded here. - if ( (priority_field_index = case_rec_type->FieldOffset("priority")) >= 0 && - ! IsArithmetic(case_rec_type->FieldType("priority")->Tag()) ) - ExprError("$priority field must have a numeric type"); - - SetType(result_type); - } - -void RecordMatchExpr::ExprDescribe(ODesc* d) const - { - if ( d->IsReadable() ) - { - d->Add("match "); - op1->Describe(d); - d->Add(" using "); - op2->Describe(d); - } - } - -Val* RecordMatchExpr::Fold(Val* v1, Val* v2) const - { - TableVal* match_set = v2->AsTableVal(); - if ( ! 
match_set ) - Internal("non-table in RecordMatchExpr"); - - Val* return_val = 0; - double highest_priority = -1e100; - - ListVal* match_recs = match_set->ConvertToList(TYPE_ANY); - for ( int i = 0; i < match_recs->Length(); ++i ) - { - val_list args(1); - args.append(v1->Ref()); - - double this_priority = 0; - - // ### Get rid of the double Index if TYPE_ANY->TYPE_RECORD. - Val* v = match_recs->Index(i)->AsListVal()->Index(0); - - const RecordVal* match_rec = v->AsRecordVal(); - if ( ! match_rec ) - Internal("Element of match set is not a record"); - - if ( priority_field_index >= 0 ) - { - this_priority = - match_rec->Lookup(priority_field_index)->CoerceToDouble(); - if ( this_priority <= highest_priority ) - { - Unref(v1); - continue; - } - } - - // No try/catch here; we pass exceptions upstream. - Val* pred_val = - match_rec->Lookup(pred_field_index)->AsFunc()->Call(&args); - bool is_zero = pred_val->IsZero(); - Unref(pred_val); - - if ( ! is_zero ) - { - Val* new_return_val = - match_rec->Lookup(result_field_index); - - Unref(return_val); - return_val = new_return_val->Ref(); - - if ( priority_field_index >= 0 ) - highest_priority = this_priority; - else - break; - } - } - - Unref(match_recs); - - return return_val; - } - -IMPLEMENT_SERIAL(RecordMatchExpr, SER_RECORD_MATCH_EXPR); - -bool RecordMatchExpr::DoSerialize(SerialInfo* info) const - { - DO_SERIALIZE(SER_RECORD_MATCH_EXPR, BinaryExpr); - return SERIALIZE(pred_field_index) && SERIALIZE(result_field_index) && - SERIALIZE(priority_field_index); - } - -bool RecordMatchExpr::DoUnserialize(UnserialInfo* info) - { - DO_UNSERIALIZE(BinaryExpr); - return UNSERIALIZE(&pred_field_index) && UNSERIALIZE(&result_field_index) && - UNSERIALIZE(&priority_field_index); - } - ArithCoerceExpr::ArithCoerceExpr(Expr* arg_op, TypeTag t) : UnaryExpr(EXPR_ARITH_COERCE, arg_op) { @@ -4053,7 +3910,15 @@ Val* RecordCoerceExpr::Fold(Val* v) const val->Assign(i, rhs); } else - val->Assign(i, 0); + { + const Attr* def = + Type()->AsRecordType()->FieldDecl(i)->FindAttr(ATTR_DEFAULT); + + if ( def ) + val->Assign(i, def->AttrExpr()->Eval(0)); + else + val->Assign(i, 0); + } } return val; @@ -4895,6 +4760,7 @@ Val* ListExpr::Eval(Frame* f) const if ( ! ev ) { Error("uninitialized list value"); + Unref(v); return 0; } diff --git a/src/Expr.h b/src/Expr.h index 95016a8d13..c16cf86612 100644 --- a/src/Expr.h +++ b/src/Expr.h @@ -608,10 +608,6 @@ public: void Assign(Frame* f, Val* v, Opcode op = OP_ASSIGN); Expr* MakeLvalue(); - // Only overridden to avoid special vector handling which doesn't apply - // for this class. - Val* Eval(Val* v) const; - protected: friend class Expr; RefExpr() { } @@ -623,7 +619,7 @@ class AssignExpr : public BinaryExpr { public: // If val is given, evaluating this expression will always yield the val // yet still perform the assignment. Used for triggers. 
- AssignExpr(Expr* op1, Expr* op2, int is_init, Val* val = 0); + AssignExpr(Expr* op1, Expr* op2, int is_init, Val* val = 0, attr_list* attrs = 0); virtual ~AssignExpr() { Unref(val); } Expr* Simplify(SimplifyType simp_type); @@ -638,7 +634,7 @@ protected: friend class Expr; AssignExpr() { } - bool TypeCheck(); + bool TypeCheck(attr_list* attrs = 0); bool TypeCheckArithmetics(TypeTag bt1, TypeTag bt2); DECLARE_SERIAL(AssignExpr); @@ -823,32 +819,6 @@ protected: string field_name; }; -class RecordMatchExpr : public BinaryExpr { -public: - RecordMatchExpr(Expr* op1 /* record to match */, - Expr* op2 /* cases to match against */); - -protected: - friend class Expr; - RecordMatchExpr() - { - pred_field_index = result_field_index = - priority_field_index = 0; - } - - virtual Val* Fold(Val* v1, Val* v2) const; - void ExprDescribe(ODesc*) const; - - DECLARE_SERIAL(RecordMatchExpr); - - // The following are used to hold the field offset of - // $pred, $result, $priority, so the names only need to - // be looked up at compile-time. - int pred_field_index; - int result_field_index; - int priority_field_index; -}; - class ArithCoerceExpr : public UnaryExpr { public: ArithCoerceExpr(Expr* op, TypeTag t); diff --git a/src/FTP.cc b/src/FTP.cc index 588348ea8d..5e7a66e304 100644 --- a/src/FTP.cc +++ b/src/FTP.cc @@ -8,6 +8,8 @@ #include "FTP.h" #include "NVT.h" #include "Event.h" +#include "SSL.h" +#include "Base64.h" FTP_Analyzer::FTP_Analyzer(Connection* conn) : TCP_ApplicationAnalyzer(AnalyzerTag::FTP, conn) @@ -44,6 +46,14 @@ void FTP_Analyzer::Done() Weird("partial_ftp_request"); } +static uint32 get_reply_code(int len, const char* line) + { + if ( len >= 3 && isdigit(line[0]) && isdigit(line[1]) && isdigit(line[2]) ) + return (line[0] - '0') * 100 + (line[1] - '0') * 10 + (line[2] - '0'); + else + return 0; + } + void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig) { TCP_ApplicationAnalyzer::DeliverStream(length, data, orig); @@ -93,16 +103,7 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig) } else { - uint32 reply_code; - if ( length >= 3 && - isdigit(line[0]) && isdigit(line[1]) && isdigit(line[2]) ) - { - reply_code = (line[0] - '0') * 100 + - (line[1] - '0') * 10 + - (line[2] - '0'); - } - else - reply_code = 0; + uint32 reply_code = get_reply_code(length, line); int cont_resp; @@ -143,19 +144,22 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig) else line = end_of_line; - if ( auth_requested.size() > 0 && - (reply_code == 234 || reply_code == 335) ) - // Server accepted AUTH requested, - // which means that very likely we - // won't be able to parse the rest - // of the session, and thus we stop - // here. - SetSkip(true); - cont_resp = 0; } } + if ( reply_code == 334 && auth_requested.size() > 0 && + auth_requested == "GSSAPI" ) + { + // Server wants to proceed with an ADAT exchange and we + // know how to analyze the GSI mechanism, so attach analyzer + // to look for that. 
+ SSL_Analyzer* ssl = new SSL_Analyzer(Conn()); + ssl->AddSupportAnalyzer(new FTP_ADAT_Analyzer(Conn(), true)); + ssl->AddSupportAnalyzer(new FTP_ADAT_Analyzer(Conn(), false)); + AddChildAnalyzer(ssl); + } + vl->append(new Val(reply_code, TYPE_COUNT)); vl->append(new StringVal(end_of_line - line, line)); vl->append(new Val(cont_resp, TYPE_BOOL)); @@ -164,5 +168,140 @@ void FTP_Analyzer::DeliverStream(int length, const u_char* data, bool orig) } ConnectionEvent(f, vl); + + ForwardStream(length, data, orig); } +void FTP_ADAT_Analyzer::DeliverStream(int len, const u_char* data, bool orig) + { + // Don't know how to parse anything but the ADAT exchanges of GSI GSSAPI, + // which is basically just TLS/SSL. + if ( ! Parent()->GetTag() == AnalyzerTag::SSL ) + { + Parent()->Remove(); + return; + } + + bool done = false; + const char* line = (const char*) data; + const char* end_of_line = line + len; + + BroString* decoded_adat = 0; + + if ( orig ) + { + int cmd_len; + const char* cmd; + line = skip_whitespace(line, end_of_line); + get_word(len, line, cmd_len, cmd); + + if ( strncmp(cmd, "ADAT", cmd_len) == 0 ) + { + line = skip_whitespace(line + cmd_len, end_of_line); + StringVal encoded(end_of_line - line, line); + decoded_adat = decode_base64(encoded.AsString()); + + if ( first_token ) + { + // RFC 2743 section 3.1 specifies a framing format for tokens + // that includes an identifier for the mechanism type. The + // framing is supposed to be required for the initial context + // token, but GSI doesn't do that and starts right in on a + // TLS/SSL handshake, so look for that to identify it. + const u_char* msg = decoded_adat->Bytes(); + int msg_len = decoded_adat->Len(); + + // Just check that it looks like a viable TLS/SSL handshake + // record from the first byte (content type of 0x16) and + // that the fourth and fifth bytes indicating the length of + // the record match the length of the decoded data. + if ( msg_len < 5 || msg[0] != 0x16 || + msg_len - 5 != ntohs(*((uint16*)(msg + 3))) ) + { + // Doesn't look like TLS/SSL, so done analyzing. + done = true; + delete decoded_adat; + decoded_adat = 0; + } + } + + first_token = false; + } + + else if ( strncmp(cmd, "AUTH", cmd_len) == 0 ) + // Security state will be reset by a reissued AUTH. + done = true; + } + + else + { + uint32 reply_code = get_reply_code(len, line); + + switch ( reply_code ) { + case 232: + case 234: + // Indicates security data exchange is complete, but nothing + // more to decode in replies. + done = true; + break; + + case 235: + // Security data exchange complete, but may have more to decode + // in the reply (same format at 334 and 335). + done = true; + + // Fall-through. + + case 334: + case 335: + // Security data exchange still in progress, and there could be data + // to decode in the reply. + line += 3; + if ( len > 3 && line[0] == '-' ) + line++; + + line = skip_whitespace(line, end_of_line); + + if ( end_of_line - line >= 5 && strncmp(line, "ADAT=", 5) == 0 ) + { + line += 5; + StringVal encoded(end_of_line - line, line); + decoded_adat = decode_base64(encoded.AsString()); + } + + break; + + case 421: + case 431: + case 500: + case 501: + case 503: + case 535: + // Server isn't going to accept named security mechanism. + // Client has to restart back at the AUTH. + done = true; + break; + + case 631: + case 632: + case 633: + // If the server is sending protected replies, the security + // data exchange must have already succeeded. It does have + // encoded data in the reply, but 632 and 633 are also encrypted. 
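
The first_token check above decides whether a decoded ADAT token is the start of a TLS/SSL handshake by inspecting the record framing. A standalone sketch of that check, assuming data holds one complete base64-decoded token; looks_like_tls_record() is illustrative and not Bro code.

    #include <cstddef>

    // A TLS record starts with a one-byte content type (0x16 = handshake),
    // a two-byte protocol version, and a two-byte big-endian payload length.
    // The token looks like a handshake record if the type matches and the
    // declared length accounts for exactly the rest of the token.
    static bool looks_like_tls_record(const unsigned char* data, size_t len)
        {
        if ( len < 5 || data[0] != 0x16 )
            return false;

        size_t record_len = (size_t(data[3]) << 8) | data[4];
        return len - 5 == record_len;
        }
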
+ done = true; + break; + + default: + break; + } + } + + if ( decoded_adat ) + { + ForwardStream(decoded_adat->Len(), decoded_adat->Bytes(), orig); + delete decoded_adat; + } + + if ( done ) + Parent()->Remove(); + } diff --git a/src/FTP.h b/src/FTP.h index 4ef6c44d83..f8d7644808 100644 --- a/src/FTP.h +++ b/src/FTP.h @@ -30,4 +30,26 @@ protected: string auth_requested; // AUTH method requested }; +/** + * Analyzes security data of ADAT exchanges over FTP control session (RFC 2228). + * Currently only the GSI mechanism of GSSAPI AUTH method is understood. + * The ADAT exchange for GSI is base64 encoded TLS/SSL handshake tokens. This + * analyzer just decodes the tokens and passes them on to the parent, which must + * be an SSL analyzer instance. + */ +class FTP_ADAT_Analyzer : public SupportAnalyzer { +public: + FTP_ADAT_Analyzer(Connection* conn, bool arg_orig) + : SupportAnalyzer(AnalyzerTag::FTP_ADAT, conn, arg_orig), + first_token(true) { } + + void DeliverStream(int len, const u_char* data, bool orig); + +protected: + // Used by the client-side analyzer to tell if it needs to peek at the + // initial context token and do sanity checking (i.e. does it look like + // a TLS/SSL handshake token). + bool first_token; +}; + #endif diff --git a/src/File.cc b/src/File.cc index 080923ad37..880fd254ef 100644 --- a/src/File.cc +++ b/src/File.cc @@ -74,9 +74,8 @@ void RotateTimer::Dispatch(double t, int is_expire) // The following could in principle be part of a "file manager" object. -#define MAX_FILE_CACHE_SIZE 32 +#define MAX_FILE_CACHE_SIZE 512 static int num_files_in_cache = 0; -static int max_files_in_cache = 0; static BroFile* head = 0; static BroFile* tail = 0; @@ -87,12 +86,9 @@ double BroFile::default_rotation_size = 0; // that we should use for the cache. static int maximize_num_fds() { -#ifdef NO_HAVE_SETRLIMIT - return MAX_FILE_CACHE_SIZE; -#else struct rlimit rl; if ( getrlimit(RLIMIT_NOFILE, &rl) < 0 ) - reporter->InternalError("maximize_num_fds(): getrlimit failed"); + reporter->FatalError("maximize_num_fds(): getrlimit failed"); if ( rl.rlim_max == RLIM_INFINITY ) { @@ -108,10 +104,9 @@ static int maximize_num_fds() rl.rlim_cur = rl.rlim_max; if ( setrlimit(RLIMIT_NOFILE, &rl) < 0 ) - reporter->InternalError("maximize_num_fds(): setrlimit failed"); + reporter->FatalError("maximize_num_fds(): setrlimit failed"); return rl.rlim_cur / 2; -#endif } @@ -143,11 +138,22 @@ BroFile::BroFile(FILE* arg_f, const char* arg_name, const char* arg_access) BroFile::BroFile(const char* arg_name, const char* arg_access, BroType* arg_t) { Init(); - + f = 0; name = copy_string(arg_name); access = copy_string(arg_access); t = arg_t ? arg_t : base_type(TYPE_STRING); - if ( ! Open() ) + + if ( streq(name, "/dev/stdin") ) + f = stdin; + else if ( streq(name, "/dev/stdout") ) + f = stdout; + else if ( streq(name, "/dev/stderr") ) + f = stderr; + + if ( f ) + is_open = 1; + + else if ( ! Open() ) { reporter->Error("cannot open %s: %s", name, strerror(errno)); is_open = 0; @@ -172,7 +178,7 @@ const char* BroFile::Name() const return 0; } -bool BroFile::Open(FILE* file) +bool BroFile::Open(FILE* file, const char* mode) { open_time = network_time ? network_time : current_time(); @@ -196,7 +202,12 @@ bool BroFile::Open(FILE* file) InstallRotateTimer(); if ( ! f ) - f = fopen(name, access); + { + if ( ! 
mode ) + f = fopen(name, access); + else + f = fopen(name, mode); + } SetBuf(buffered); @@ -232,7 +243,7 @@ BroFile::~BroFile() delete [] access; delete [] cipher_buffer; -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG heap_checker->UnIgnoreObject(this); #endif } @@ -255,7 +266,7 @@ void BroFile::Init() cipher_ctx = 0; cipher_buffer = 0; -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG heap_checker->IgnoreObject(this); #endif } @@ -342,8 +353,8 @@ int BroFile::Close() FinishEncrypt(); - // Do not close stdout/stderr. - if ( f == stdout || f == stderr ) + // Do not close stdin/stdout/stderr. + if ( f == stdin || f == stdout || f == stderr ) return 0; if ( is_in_cache ) @@ -503,12 +514,9 @@ void BroFile::SetAttrs(Attributes* arg_attrs) InitEncrypt(log_encryption_key->AsString()->CheckString()); } - if ( attrs->FindAttr(ATTR_DISABLE_PRINT_HOOK) ) - DisablePrintHook(); - if ( attrs->FindAttr(ATTR_RAW_OUTPUT) ) EnableRawOutput(); - + InstallRotateTimer(); } @@ -523,6 +531,10 @@ RecordVal* BroFile::Rotate() if ( ! is_open ) return 0; + // Do not rotate stdin/stdout/stderr. + if ( f == stdin || f == stdout || f == stderr ) + return 0; + if ( okay_to_manage && ! is_in_cache ) BringIntoCache(); @@ -572,8 +584,9 @@ void BroFile::InstallRotateTimer() const char* base_time = log_rotate_base_time ? log_rotate_base_time->AsString()->CheckString() : 0; + double base = parse_rotate_base_time(base_time); double delta_t = - calc_next_rotate(rotate_interval, base_time); + calc_next_rotate(network_time, rotate_interval, base); rotate_timer = new RotateTimer(network_time + delta_t, this, true); } @@ -846,8 +859,8 @@ BroFile* BroFile::Unserialize(UnserialInfo* info) } } - // Otherwise, open. - if ( ! file->Open() ) + // Otherwise, open, but don't clobber. + if ( ! file->Open(0, "a") ) { info->s->Error(fmt("cannot open %s: %s", file->name, strerror(errno))); diff --git a/src/File.h b/src/File.h index 444d6209e2..8e3d0ca6e7 100644 --- a/src/File.h +++ b/src/File.h @@ -57,7 +57,7 @@ public: RecordVal* Rotate(); // Set &rotate_interval, &rotate_size, &postprocessor, - // &disable_print_hook, and &raw_output attributes. + // and &raw_output attributes. void SetAttrs(Attributes* attrs); // Returns the current size of the file, after fresh stat'ing. @@ -87,7 +87,13 @@ protected: BroFile() { Init(); } void Init(); - bool Open(FILE* f = 0); // if file is given, it's an open file to use + + /** + * If file is given, it's an open file to use already. + * If file is not given and mode is, the filename will be opened with that + * access mode. + */ + bool Open(FILE* f = 0, const char* mode = 0); BroFile* Prev() { return prev; } BroFile* Next() { return next; } diff --git a/src/FileAnalyzer.cc b/src/FileAnalyzer.cc index 672d1e1e09..d4064e8144 100644 --- a/src/FileAnalyzer.cc +++ b/src/FileAnalyzer.cc @@ -3,23 +3,19 @@ #include "FileAnalyzer.h" #include "Reporter.h" -#ifdef HAVE_LIBMAGIC magic_t File_Analyzer::magic = 0; magic_t File_Analyzer::magic_mime = 0; -#endif File_Analyzer::File_Analyzer(Connection* conn) : TCP_ApplicationAnalyzer(AnalyzerTag::File, conn) { buffer_len = 0; -#ifdef HAVE_LIBMAGIC if ( ! 
magic ) { InitMagic(&magic, MAGIC_NONE); InitMagic(&magic_mime, MAGIC_MIME); } -#endif } void File_Analyzer::DeliverStream(int len, const u_char* data, bool orig) @@ -52,13 +48,11 @@ void File_Analyzer::Identify() const char* descr = 0; const char* mime = 0; -#ifdef HAVE_LIBMAGIC if ( magic ) descr = magic_buffer(magic, buffer, buffer_len); if ( magic_mime ) mime = magic_buffer(magic_mime, buffer, buffer_len); -#endif val_list* vl = new val_list; vl->append(BuildConnVal()); @@ -68,7 +62,6 @@ void File_Analyzer::Identify() ConnectionEvent(file_transferred, vl); } -#ifdef HAVE_LIBMAGIC void File_Analyzer::InitMagic(magic_t* magic, int flags) { *magic = magic_open(flags); @@ -83,4 +76,3 @@ void File_Analyzer::InitMagic(magic_t* magic, int flags) *magic = 0; } } -#endif diff --git a/src/FileAnalyzer.h b/src/FileAnalyzer.h index 8c1890bb85..dcf9d22e8e 100644 --- a/src/FileAnalyzer.h +++ b/src/FileAnalyzer.h @@ -5,9 +5,7 @@ #include "TCP.h" -#ifdef HAVE_LIBMAGIC #include -#endif class File_Analyzer : public TCP_ApplicationAnalyzer { public: @@ -31,12 +29,10 @@ protected: char buffer[BUFFER_SIZE]; int buffer_len; -#ifdef HAVE_LIBMAGIC static void InitMagic(magic_t* magic, int flags); static magic_t magic; static magic_t magic_mime; -#endif }; #endif diff --git a/src/FlowSrc.cc b/src/FlowSrc.cc index fe6998ea79..59ce3fd6a4 100644 --- a/src/FlowSrc.cc +++ b/src/FlowSrc.cc @@ -58,7 +58,7 @@ void FlowSrc::Process() void FlowSrc::Close() { - close(selectable_fd); + safe_close(selectable_fd); } diff --git a/src/Frag.cc b/src/Frag.cc index b72fac4b16..d873f5bc0c 100644 --- a/src/Frag.cc +++ b/src/Frag.cc @@ -27,21 +27,30 @@ void FragTimer::Dispatch(double t, int /* is_expire */) FragReassembler::FragReassembler(NetSessions* arg_s, const IP_Hdr* ip, const u_char* pkt, - uint32 frag_field, HashKey* k, double t) -: Reassembler(0, ip->DstAddr(), REASSEM_IP) + HashKey* k, double t) + : Reassembler(0, REASSEM_IP) { s = arg_s; key = k; + const struct ip* ip4 = ip->IP4_Hdr(); - proto_hdr_len = ip4->ip_hl * 4; - proto_hdr = (struct ip*) new u_char[64]; // max IP header + slop - // Don't do a structure copy - need to pick up options, too. - memcpy((void*) proto_hdr, (const void*) ip4, proto_hdr_len); + if ( ip4 ) + { + proto_hdr_len = ip->HdrLen(); + proto_hdr = new u_char[64]; // max IP header + slop + // Don't do a structure copy - need to pick up options, too. 
+ memcpy((void*) proto_hdr, (const void*) ip4, proto_hdr_len); + } + else + { + proto_hdr_len = ip->HdrLen() - 8; // minus length of fragment header + proto_hdr = new u_char[proto_hdr_len]; + memcpy(proto_hdr, ip->IP6_Hdr(), proto_hdr_len); + } reassembled_pkt = 0; frag_size = 0; // flag meaning "not known" - - AddFragment(t, ip, pkt, frag_field); + next_proto = ip->NextProto(); if ( frag_timeout != 0.0 ) { @@ -50,6 +59,8 @@ FragReassembler::FragReassembler(NetSessions* arg_s, } else expire_timer = 0; + + AddFragment(t, ip, pkt); } FragReassembler::~FragReassembler() @@ -60,28 +71,42 @@ FragReassembler::~FragReassembler() delete key; } -void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt, - uint32 frag_field) +void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt) { const struct ip* ip4 = ip->IP4_Hdr(); - if ( ip4->ip_p != proto_hdr->ip_p || ip4->ip_hl != proto_hdr->ip_hl ) + if ( ip4 ) + { + if ( ip4->ip_p != ((const struct ip*)proto_hdr)->ip_p || + ip4->ip_hl != ((const struct ip*)proto_hdr)->ip_hl ) // || ip4->ip_tos != proto_hdr->ip_tos // don't check TOS, there's at least one stack that actually // uses different values, and it's hard to see an associated // attack. s->Weird("fragment_protocol_inconsistency", ip); + } + else + { + if ( ip->NextProto() != next_proto || + ip->HdrLen() - 8 != proto_hdr_len ) + s->Weird("fragment_protocol_inconsistency", ip); + // TODO: more detailed unfrag header consistency checks? + } - if ( frag_field & 0x4000 ) + if ( ip->DF() ) // Linux MTU discovery for UDP can do this, for example. s->Weird("fragment_with_DF", ip); - int offset = (ntohs(ip4->ip_off) & 0x1fff) * 8; - int len = ntohs(ip4->ip_len); - int hdr_len = proto_hdr->ip_hl * 4; + int offset = ip->FragOffset(); + int len = ip->TotalLen(); + int hdr_len = ip->HdrLen(); int upper_seq = offset + len - hdr_len; - if ( (frag_field & 0x2000) == 0 ) + if ( ! offset ) + // Make sure to use the first fragment header's next field. + next_proto = ip->NextProto(); + + if ( ! ip->MF() ) { // Last fragment. if ( frag_size == 0 ) @@ -125,7 +150,7 @@ void FragReassembler::AddFragment(double t, const IP_Hdr* ip, const u_char* pkt, void FragReassembler::Overlap(const u_char* b1, const u_char* b2, int n) { - IP_Hdr proto_h((const struct ip*) proto_hdr); + IP_Hdr proto_h(proto_hdr, false, proto_hdr_len); if ( memcmp((const void*) b1, (const void*) b2, n) ) s->Weird("fragment_inconsistency", &proto_h); @@ -157,7 +182,7 @@ void FragReassembler::BlockInserted(DataBlock* /* start_block */) // can happen for benign reasons when we're // intermingling parts of two fragmented packets. - IP_Hdr proto_h((const struct ip*) proto_hdr); + IP_Hdr proto_h(proto_hdr, false, proto_hdr_len); s->Weird("fragment_size_inconsistency", &proto_h); // We decide to analyze the contiguous portion now. 
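
The fragment handling above moves the DF/MF/offset logic behind IP_Hdr accessors so that the same reassembly code covers IPv4 and IPv6 fragments. For reference, a standalone sketch of what those accessors compute from an IPv4 header's fragment field; decode_frag_field() is illustrative only.

    #include <cstdint>
    #include <arpa/inet.h>   // ntohs

    struct FragInfo
        {
        bool DF;          // don't-fragment flag
        bool MF;          // more-fragments flag
        uint32_t offset;  // fragment offset in bytes
        };

    // The IPv4 ip_off field packs three flag bits and a 13-bit offset counted
    // in 8-byte units.
    static FragInfo decode_frag_field(uint16_t ip_off_net)
        {
        uint16_t f = ntohs(ip_off_net);
        FragInfo info;
        info.DF = (f & 0x4000) != 0;
        info.MF = (f & 0x2000) != 0;
        info.offset = uint32_t(f & 0x1fff) * 8;
        return info;
        }
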
@@ -171,7 +196,7 @@ void FragReassembler::BlockInserted(DataBlock* /* start_block */) else if ( last_block->upper > frag_size ) { - IP_Hdr proto_h((const struct ip*) proto_hdr); + IP_Hdr proto_h(proto_hdr, false, proto_hdr_len); s->Weird("fragment_size_inconsistency", &proto_h); frag_size = last_block->upper; } @@ -193,8 +218,7 @@ void FragReassembler::BlockInserted(DataBlock* /* start_block */) u_char* pkt = new u_char[n]; memcpy((void*) pkt, (const void*) proto_hdr, proto_hdr_len); - struct ip* reassem4 = (struct ip*) pkt; - reassem4->ip_len = htons(frag_size + proto_hdr_len); + u_char* pkt_start = pkt; pkt += proto_hdr_len; @@ -214,7 +238,27 @@ void FragReassembler::BlockInserted(DataBlock* /* start_block */) } delete reassembled_pkt; - reassembled_pkt = new IP_Hdr(reassem4); + + if ( ((const struct ip*)pkt_start)->ip_v == 4 ) + { + struct ip* reassem4 = (struct ip*) pkt_start; + reassem4->ip_len = htons(frag_size + proto_hdr_len); + reassembled_pkt = new IP_Hdr(reassem4, true); + } + + else if ( ((const struct ip*)pkt_start)->ip_v == 6 ) + { + struct ip6_hdr* reassem6 = (struct ip6_hdr*) pkt_start; + reassem6->ip6_plen = htons(frag_size + proto_hdr_len - 40); + const IPv6_Hdr_Chain* chain = new IPv6_Hdr_Chain(reassem6, next_proto, n); + reassembled_pkt = new IP_Hdr(reassem6, true, n, chain); + } + + else + { + reporter->InternalError("bad IP version in fragment reassembly"); + } + DeleteTimer(); } diff --git a/src/Frag.h b/src/Frag.h index 92bf1b3bbd..86cf3a9dd4 100644 --- a/src/Frag.h +++ b/src/Frag.h @@ -20,11 +20,10 @@ typedef void (FragReassembler::*frag_timer_func)(double t); class FragReassembler : public Reassembler { public: FragReassembler(NetSessions* s, const IP_Hdr* ip, const u_char* pkt, - uint32 frag_field, HashKey* k, double t); + HashKey* k, double t); ~FragReassembler(); - void AddFragment(double t, const IP_Hdr* ip, const u_char* pkt, - uint32 frag_field); + void AddFragment(double t, const IP_Hdr* ip, const u_char* pkt); void Expire(double t); void DeleteTimer(); @@ -37,11 +36,12 @@ protected: void BlockInserted(DataBlock* start_block); void Overlap(const u_char* b1, const u_char* b2, int n); - struct ip* proto_hdr; + u_char* proto_hdr; IP_Hdr* reassembled_pkt; int proto_hdr_len; NetSessions* s; int frag_size; // size of fully reassembled fragment + uint16 next_proto; // first IPv6 fragment header's next proto field HashKey* key; FragTimer* expire_timer; diff --git a/src/Func.cc b/src/Func.cc index 65cb22b09d..582de1d9bb 100644 --- a/src/Func.cc +++ b/src/Func.cc @@ -29,7 +29,6 @@ #include -#include "md5.h" #include "Base64.h" #include "Stmt.h" #include "Scope.h" @@ -101,7 +100,7 @@ Func* Func::Unserialize(UnserialInfo* info) if ( ! (id->HasVal() && id->ID_Val()->Type()->Tag() == TYPE_FUNC) ) { info->s->Error(fmt("ID %s is not a built-in", name)); - return false; + return 0; } Unref(f); @@ -330,7 +329,17 @@ Val* BroFunc::Call(val_list* args, Frame* parent) const bodies[i].stmts->GetLocationInfo()); Unref(result); - result = bodies[i].stmts->Exec(f, flow); + + try + { + result = bodies[i].stmts->Exec(f, flow); + } + + catch ( InterpreterException& e ) + { + // Already reported, but we continue exec'ing remaining bodies. 
+ continue; + } if ( f->HasDelayed() ) { @@ -523,11 +532,13 @@ void builtin_error(const char* msg, BroObj* arg) #include "bro.bif.func_h" #include "logging.bif.func_h" +#include "input.bif.func_h" #include "reporter.bif.func_h" #include "strings.bif.func_h" #include "bro.bif.func_def" #include "logging.bif.func_def" +#include "input.bif.func_def" #include "reporter.bif.func_def" #include "strings.bif.func_def" @@ -542,6 +553,7 @@ void init_builtin_funcs() #include "bro.bif.func_init" #include "logging.bif.func_init" +#include "input.bif.func_init" #include "reporter.bif.func_init" #include "strings.bif.func_init" diff --git a/src/Gnutella.cc b/src/Gnutella.cc index 448c8dcb3b..6b5e901bc5 100644 --- a/src/Gnutella.cc +++ b/src/Gnutella.cc @@ -42,6 +42,12 @@ Gnutella_Analyzer::Gnutella_Analyzer(Connection* conn) resp_msg_state = new GnutellaMsgState(); } +Gnutella_Analyzer::~Gnutella_Analyzer() + { + delete orig_msg_state; + delete resp_msg_state; + } + void Gnutella_Analyzer::Done() { TCP_ApplicationAnalyzer::Done(); diff --git a/src/Gnutella.h b/src/Gnutella.h index f06c816c90..455876462d 100644 --- a/src/Gnutella.h +++ b/src/Gnutella.h @@ -35,6 +35,7 @@ public: class Gnutella_Analyzer : public TCP_ApplicationAnalyzer { public: Gnutella_Analyzer(Connection* conn); + ~Gnutella_Analyzer(); virtual void Done (); virtual void DeliverStream(int len, const u_char* data, bool orig); diff --git a/src/HTTP-binpac.cc b/src/HTTP-binpac.cc index 70cf37457b..47b2c479ec 100644 --- a/src/HTTP-binpac.cc +++ b/src/HTTP-binpac.cc @@ -20,10 +20,10 @@ void HTTP_Analyzer_binpac::Done() interp->FlowEOF(false); } -void HTTP_Analyzer_binpac::EndpointEOF(TCP_Reassembler* endp) +void HTTP_Analyzer_binpac::EndpointEOF(bool is_orig) { - TCP_ApplicationAnalyzer::EndpointEOF(endp); - interp->FlowEOF(endp->IsOrig()); + TCP_ApplicationAnalyzer::EndpointEOF(is_orig); + interp->FlowEOF(is_orig); } void HTTP_Analyzer_binpac::DeliverStream(int len, const u_char* data, bool orig) diff --git a/src/HTTP-binpac.h b/src/HTTP-binpac.h index 62b6fd0db3..ef7cc7dd7d 100644 --- a/src/HTTP-binpac.h +++ b/src/HTTP-binpac.h @@ -13,7 +13,7 @@ public: virtual void Done(); virtual void DeliverStream(int len, const u_char* data, bool orig); virtual void Undelivered(int seq, int len, bool orig); - virtual void EndpointEOF(TCP_Reassembler* endp); + virtual void EndpointEOF(bool is_orig); static Analyzer* InstantiateAnalyzer(Connection* conn) { return new HTTP_Analyzer_binpac(conn); } diff --git a/src/HTTP.cc b/src/HTTP.cc index 71fa1a3dd0..9d9f01be64 100644 --- a/src/HTTP.cc +++ b/src/HTTP.cc @@ -43,9 +43,7 @@ HTTP_Entity::HTTP_Entity(HTTP_Message *arg_message, MIME_Entity* parent_entity, header_length = 0; deliver_body = (http_entity_data != 0); encoding = IDENTITY; -#ifdef HAVE_LIBZ zip = 0; -#endif } void HTTP_Entity::EndOfData() @@ -53,7 +51,6 @@ void HTTP_Entity::EndOfData() if ( DEBUG_http ) DEBUG_MSG("%.6f: end of data\n", network_time); -#ifdef HAVE_LIBZ if ( zip ) { zip->Done(); @@ -61,7 +58,6 @@ void HTTP_Entity::EndOfData() zip = 0; encoding = IDENTITY; } -#endif if ( body_length ) http_message->MyHTTP_Analyzer()-> @@ -179,7 +175,6 @@ private: void HTTP_Entity::DeliverBody(int len, const char* data, int trailing_CRLF) { -#ifdef HAVE_LIBZ if ( encoding == GZIP || encoding == DEFLATE ) { ZIP_Analyzer::Method method = @@ -198,7 +193,6 @@ void HTTP_Entity::DeliverBody(int len, const char* data, int trailing_CRLF) zip->NextStream(len, (const u_char*) data, false); } else -#endif DeliverBodyClear(len, data, trailing_CRLF); } @@ -450,9 
+444,7 @@ void HTTP_Entity::SubmitAllHeaders() // content-length headers or if connection is to be closed afterwards // anyway. else if ( http_message->MyHTTP_Analyzer()->IsConnectionClose () -#ifdef HAVE_LIBZ || encoding == GZIP || encoding == DEFLATE -#endif ) { // FIXME: Using INT_MAX is kind of a hack here. Better @@ -1551,7 +1543,7 @@ void HTTP_Analyzer::HTTP_Header(int is_orig, MIME_Header* h) } } -void HTTP_Analyzer::ParseVersion(data_chunk_t ver, const uint32* host, +void HTTP_Analyzer::ParseVersion(data_chunk_t ver, const IPAddr& host, bool user_agent) { int len = ver.length; diff --git a/src/HTTP.h b/src/HTTP.h index 13b87d219f..c9d8ae55d1 100644 --- a/src/HTTP.h +++ b/src/HTTP.h @@ -8,6 +8,7 @@ #include "MIME.h" #include "binpac_bro.h" #include "ZIP.h" +#include "IPAddr.h" enum CHUNKED_TRANSFER_STATE { NON_CHUNKED_TRANSFER, @@ -29,10 +30,8 @@ public: int expect_body); ~HTTP_Entity() { -#ifdef HAVE_LIBZ if ( zip ) { zip->Done(); delete zip; } -#endif } void EndOfData(); @@ -55,9 +54,7 @@ protected: int64_t header_length; int deliver_body; enum { IDENTITY, GZIP, COMPRESS, DEFLATE } encoding; -#ifdef HAVE_LIBZ ZIP_Analyzer* zip; -#endif MIME_Entity* NewChildEntity() { return new HTTP_Entity(http_message, this, 1); } @@ -216,7 +213,7 @@ protected: const BroString* UnansweredRequestMethod(); - void ParseVersion(data_chunk_t ver, const uint32* host, bool user_agent); + void ParseVersion(data_chunk_t ver, const IPAddr& host, bool user_agent); int HTTP_ReplyCode(const char* code_str); int ExpectReplyMessageBody(); diff --git a/src/Hash.h b/src/Hash.h index 3a1b42084c..00db53d075 100644 --- a/src/Hash.h +++ b/src/Hash.h @@ -7,7 +7,7 @@ #include "BroString.h" -#define UHASH_KEY_SIZE 32 +#define UHASH_KEY_SIZE 36 typedef uint64 hash_t; diff --git a/src/ICMP.cc b/src/ICMP.cc index bc081ace51..b9b4e89404 100644 --- a/src/ICMP.cc +++ b/src/ICMP.cc @@ -9,6 +9,8 @@ #include "Event.h" #include "ICMP.h" +#include + ICMP_Analyzer::ICMP_Analyzer(Connection* c) : TransportLayerAnalyzer(AnalyzerTag::ICMP, c) { @@ -32,7 +34,7 @@ void ICMP_Analyzer::Done() matcher_state.FinishEndpointMatcher(); } -void ICMP_Analyzer::DeliverPacket(int arg_len, const u_char* data, +void ICMP_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig, int seq, const IP_Hdr* ip, int caplen) { assert(ip); @@ -46,13 +48,32 @@ void ICMP_Analyzer::DeliverPacket(int arg_len, const u_char* data, PacketContents(data + 8, min(len, caplen) - 8); const struct icmp* icmpp = (const struct icmp*) data; - len = arg_len; - if ( ! ignore_checksums && caplen >= len && - icmp_checksum(icmpp, len) != 0xffff ) + if ( ! 
ignore_checksums && caplen >= len ) { - Weird("bad_ICMP_checksum"); - return; + int chksum = 0; + + switch ( ip->NextProto() ) + { + case IPPROTO_ICMP: + chksum = icmp_checksum(icmpp, len); + break; + + case IPPROTO_ICMPV6: + chksum = icmp6_checksum(icmpp, ip, len); + break; + + default: + reporter->InternalError("unexpected IP proto in ICMP analyzer: %d", + ip->NextProto()); + break; + } + + if ( chksum != 0xffff ) + { + Weird("bad_ICMP_checksum"); + return; + } } Conn()->SetLastTime(current_timestamp); @@ -77,7 +98,13 @@ void ICMP_Analyzer::DeliverPacket(int arg_len, const u_char* data, else len_stat += len; - NextICMP(current_timestamp, icmpp, len, caplen, data); + if ( ip->NextProto() == IPPROTO_ICMP ) + NextICMP4(current_timestamp, icmpp, len, caplen, data, ip); + else if ( ip->NextProto() == IPPROTO_ICMPV6 ) + NextICMP6(current_timestamp, icmpp, len, caplen, data, ip); + else + reporter->InternalError("unexpected next protocol in ICMP::DeliverPacket()"); + if ( caplen >= len ) ForwardPacket(len, data, is_orig, seq, ip, caplen); @@ -87,26 +114,99 @@ void ICMP_Analyzer::DeliverPacket(int arg_len, const u_char* data, false, false, true); } -void ICMP_Analyzer::NextICMP(double /* t */, const struct icmp* /* icmpp */, - int /* len */, int /* caplen */, - const u_char*& /* data */) +void ICMP_Analyzer::NextICMP4(double t, const struct icmp* icmpp, int len, int caplen, + const u_char*& data, const IP_Hdr* ip_hdr ) { - ICMPEvent(icmp_sent); + switch ( icmpp->icmp_type ) + { + case ICMP_ECHO: + case ICMP_ECHOREPLY: + Echo(t, icmpp, len, caplen, data, ip_hdr); + break; + + case ICMP_UNREACH: + case ICMP_TIMXCEED: + Context4(t, icmpp, len, caplen, data, ip_hdr); + break; + + default: + ICMPEvent(icmp_sent, icmpp, len, 0, ip_hdr); + break; + } } -void ICMP_Analyzer::ICMPEvent(EventHandlerPtr f) +void ICMP_Analyzer::NextICMP6(double t, const struct icmp* icmpp, int len, int caplen, + const u_char*& data, const IP_Hdr* ip_hdr ) { + switch ( icmpp->icmp_type ) + { + // Echo types. + case ICMP6_ECHO_REQUEST: + case ICMP6_ECHO_REPLY: + Echo(t, icmpp, len, caplen, data, ip_hdr); + break; + + // Error messages all have the same structure for their context, + // and are handled by the same function. + case ICMP6_PARAM_PROB: + case ICMP6_TIME_EXCEEDED: + case ICMP6_PACKET_TOO_BIG: + case ICMP6_DST_UNREACH: + Context6(t, icmpp, len, caplen, data, ip_hdr); + break; + + // Router related messages. + case ND_REDIRECT: + Redirect(t, icmpp, len, caplen, data, ip_hdr); + break; + case ND_ROUTER_ADVERT: + RouterAdvert(t, icmpp, len, caplen, data, ip_hdr); + break; + case ND_NEIGHBOR_ADVERT: + NeighborAdvert(t, icmpp, len, caplen, data, ip_hdr); + break; + case ND_NEIGHBOR_SOLICIT: + NeighborSolicit(t, icmpp, len, caplen, data, ip_hdr); + break; + case ND_ROUTER_SOLICIT: + RouterSolicit(t, icmpp, len, caplen, data, ip_hdr); + break; + case ICMP6_ROUTER_RENUMBERING: + ICMPEvent(icmp_sent, icmpp, len, 1, ip_hdr); + break; + +#if 0 + // Currently not specifically implemented. + case MLD_LISTENER_QUERY: + case MLD_LISTENER_REPORT: + case MLD_LISTENER_REDUCTION: +#endif + default: + // Error messages (i.e., ICMPv6 type < 128) all have + // the same structure for their context, and are + // handled by the same function. + if ( icmpp->icmp_type < 128 ) + Context6(t, icmpp, len, caplen, data, ip_hdr); + else + ICMPEvent(icmp_sent, icmpp, len, 1, ip_hdr); + break; + } + } + +void ICMP_Analyzer::ICMPEvent(EventHandlerPtr f, const struct icmp* icmpp, + int len, int icmpv6, const IP_Hdr* ip_hdr) + { if ( ! 
f ) return; val_list* vl = new val_list; vl->append(BuildConnVal()); - vl->append(BuildICMPVal()); - + vl->append(BuildICMPVal(icmpp, len, icmpv6, ip_hdr)); ConnectionEvent(f, vl); } -RecordVal* ICMP_Analyzer::BuildICMPVal() +RecordVal* ICMP_Analyzer::BuildICMPVal(const struct icmp* icmpp, int len, + int icmpv6, const IP_Hdr* ip_hdr) { if ( ! icmp_conn_val ) { @@ -114,9 +214,11 @@ RecordVal* ICMP_Analyzer::BuildICMPVal() icmp_conn_val->Assign(0, new AddrVal(Conn()->OrigAddr())); icmp_conn_val->Assign(1, new AddrVal(Conn()->RespAddr())); - icmp_conn_val->Assign(2, new Val(type, TYPE_COUNT)); - icmp_conn_val->Assign(3, new Val(code, TYPE_COUNT)); + icmp_conn_val->Assign(2, new Val(icmpp->icmp_type, TYPE_COUNT)); + icmp_conn_val->Assign(3, new Val(icmpp->icmp_code, TYPE_COUNT)); icmp_conn_val->Assign(4, new Val(len, TYPE_COUNT)); + icmp_conn_val->Assign(5, new Val(ip_hdr->TTL(), TYPE_COUNT)); + icmp_conn_val->Assign(6, new Val(icmpv6, TYPE_BOOL)); } Ref(icmp_conn_val); @@ -124,91 +226,115 @@ RecordVal* ICMP_Analyzer::BuildICMPVal() return icmp_conn_val; } -RecordVal* ICMP_Analyzer::ExtractICMPContext(int len, const u_char*& data) +TransportProto ICMP_Analyzer::GetContextProtocol(const IP_Hdr* ip_hdr, uint32* src_port, uint32* dst_port) { - const struct ip* ip = (const struct ip *) data; - uint32 ip_hdr_len = ip->ip_hl * 4; + const u_char* transport_hdr; + uint32 ip_hdr_len = ip_hdr->HdrLen(); + bool ip4 = ip_hdr->IP4_Hdr(); + + if ( ip4 ) + transport_hdr = ((u_char *) ip_hdr->IP4_Hdr() + ip_hdr_len); + else + transport_hdr = ((u_char *) ip_hdr->IP6_Hdr() + ip_hdr_len); + + TransportProto proto; + + switch ( ip_hdr->NextProto() ) { + case 1: proto = TRANSPORT_ICMP; break; + case 6: proto = TRANSPORT_TCP; break; + case 17: proto = TRANSPORT_UDP; break; + case 58: proto = TRANSPORT_ICMP; break; + default: proto = TRANSPORT_UNKNOWN; break; + } + + switch ( proto ) { + case TRANSPORT_ICMP: + { + const struct icmp* icmpp = + (const struct icmp *) transport_hdr; + bool is_one_way; // dummy + *src_port = ntohs(icmpp->icmp_type); + + if ( ip4 ) + *dst_port = ntohs(ICMP4_counterpart(icmpp->icmp_type, + icmpp->icmp_code, is_one_way)); + else + *dst_port = ntohs(ICMP6_counterpart(icmpp->icmp_type, + icmpp->icmp_code, is_one_way)); + + break; + } + + case TRANSPORT_TCP: + { + const struct tcphdr* tp = + (const struct tcphdr *) transport_hdr; + *src_port = ntohs(tp->th_sport); + *dst_port = ntohs(tp->th_dport); + break; + } + + case TRANSPORT_UDP: + { + const struct udphdr* up = + (const struct udphdr *) transport_hdr; + *src_port = ntohs(up->uh_sport); + *dst_port = ntohs(up->uh_dport); + break; + } + + default: + *src_port = *dst_port = ntohs(0); + break; + } + + return proto; + } + +RecordVal* ICMP_Analyzer::ExtractICMP4Context(int len, const u_char*& data) + { + const IP_Hdr ip_hdr_data((const struct ip*) data, false); + const IP_Hdr* ip_hdr = &ip_hdr_data; + + uint32 ip_hdr_len = ip_hdr->HdrLen(); uint32 ip_len, frag_offset; TransportProto proto = TRANSPORT_UNKNOWN; int DF, MF, bad_hdr_len, bad_checksum; - uint32 src_addr, dst_addr; + IPAddr src_addr, dst_addr; uint32 src_port, dst_port; - if ( ip_hdr_len < sizeof(struct ip) || ip_hdr_len > uint32(len) ) - { // We don't have an entire IP header. + if ( len < (int)sizeof(struct ip) || ip_hdr_len > uint32(len) ) + { + // We don't have an entire IP header. 
bad_hdr_len = 1; ip_len = frag_offset = 0; DF = MF = bad_checksum = 0; - src_addr = dst_addr = 0; src_port = dst_port = 0; } else { bad_hdr_len = 0; - ip_len = ntohs(ip->ip_len); - bad_checksum = ones_complement_checksum((void*) ip, ip_hdr_len, 0) != 0xffff; + ip_len = ip_hdr->TotalLen(); + bad_checksum = (ones_complement_checksum((void*) ip_hdr->IP4_Hdr(), ip_hdr_len, 0) != 0xffff); - src_addr = uint32(ip->ip_src.s_addr); - dst_addr = uint32(ip->ip_dst.s_addr); + src_addr = ip_hdr->SrcAddr(); + dst_addr = ip_hdr->DstAddr(); - switch ( ip->ip_p ) { - case 1: proto = TRANSPORT_ICMP; break; - case 6: proto = TRANSPORT_TCP; break; - case 17: proto = TRANSPORT_UDP; break; + DF = ip_hdr->DF(); + MF = ip_hdr->MF(); + frag_offset = ip_hdr->FragOffset(); - // Default uses TRANSPORT_UNKNOWN, per initialization above. - } - - uint32 frag_field = ntohs(ip->ip_off); - DF = frag_field & 0x4000; - MF = frag_field & 0x2000; - frag_offset = frag_field & /* IP_OFFMASK not portable */ 0x1fff; - const u_char* transport_hdr = ((u_char *) ip + ip_hdr_len); - - if ( uint32(len) < ip_hdr_len + 4 ) + if ( uint32(len) >= ip_hdr_len + 4 ) + proto = GetContextProtocol(ip_hdr, &src_port, &dst_port); + else { // 4 above is the magic number meaning that both // port numbers are included in the ICMP. - bad_hdr_len = 1; src_port = dst_port = 0; + bad_hdr_len = 1; } - - switch ( proto ) { - case TRANSPORT_ICMP: - { - const struct icmp* icmpp = - (const struct icmp *) transport_hdr; - bool is_one_way; // dummy - src_port = ntohs(icmpp->icmp_type); - dst_port = ntohs(ICMP_counterpart(icmpp->icmp_type, - icmpp->icmp_code, - is_one_way)); - } - break; - - case TRANSPORT_TCP: - { - const struct tcphdr* tp = - (const struct tcphdr *) transport_hdr; - src_port = ntohs(tp->th_sport); - dst_port = ntohs(tp->th_dport); - } - break; - - case TRANSPORT_UDP: - { - const struct udphdr* up = - (const struct udphdr *) transport_hdr; - src_port = ntohs(up->uh_sport); - dst_port = ntohs(up->uh_dport); - } - break; - - default: - src_port = dst_port = ntohs(0); - } } RecordVal* iprec = new RecordVal(icmp_context); @@ -218,8 +344,8 @@ RecordVal* ICMP_Analyzer::ExtractICMPContext(int len, const u_char*& data) id_val->Assign(1, new PortVal(src_port, proto)); id_val->Assign(2, new AddrVal(dst_addr)); id_val->Assign(3, new PortVal(dst_port, proto)); - iprec->Assign(0, id_val); + iprec->Assign(0, id_val); iprec->Assign(1, new Val(ip_len, TYPE_COUNT)); iprec->Assign(2, new Val(proto, TYPE_COUNT)); iprec->Assign(3, new Val(frag_offset, TYPE_COUNT)); @@ -231,6 +357,66 @@ RecordVal* ICMP_Analyzer::ExtractICMPContext(int len, const u_char*& data) return iprec; } +RecordVal* ICMP_Analyzer::ExtractICMP6Context(int len, const u_char*& data) + { + int DF = 0, MF = 0, bad_hdr_len = 0; + TransportProto proto = TRANSPORT_UNKNOWN; + + IPAddr src_addr; + IPAddr dst_addr; + uint32 ip_len, frag_offset = 0; + uint32 src_port, dst_port; + + if ( len < (int)sizeof(struct ip6_hdr) ) + { + bad_hdr_len = 1; + ip_len = 0; + src_port = dst_port = 0; + } + else + { + const IP_Hdr ip_hdr_data((const struct ip6_hdr*) data, false, len); + const IP_Hdr* ip_hdr = &ip_hdr_data; + + ip_len = ip_hdr->TotalLen(); + src_addr = ip_hdr->SrcAddr(); + dst_addr = ip_hdr->DstAddr(); + frag_offset = ip_hdr->FragOffset(); + MF = ip_hdr->MF(); + DF = ip_hdr->DF(); + + if ( uint32(len) >= uint32(ip_hdr->HdrLen() + 4) ) + proto = GetContextProtocol(ip_hdr, &src_port, &dst_port); + else + { + // 4 above is the magic number meaning that both + // port numbers are included in the ICMP. 
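
The "4 above is the magic number" comment refers to the fact that the first four bytes of the quoted transport header are enough to recover both 16-bit port numbers for TCP and UDP. A minimal sketch of that extraction follows; embedded_ports() is illustrative, and the analyzer's GetContextProtocol() additionally maps an embedded ICMP type/code pair to pseudo-ports.

    #include <cstdint>
    #include <cstring>
    #include <arpa/inet.h>   // ntohs

    // For TCP and UDP alike, the source and destination ports occupy the first
    // four bytes of the transport header quoted inside an ICMP error message.
    static void embedded_ports(const unsigned char* transport_hdr,
                               uint16_t* src_port, uint16_t* dst_port)
        {
        uint16_t sp, dp;
        memcpy(&sp, transport_hdr, sizeof(sp));
        memcpy(&dp, transport_hdr + 2, sizeof(dp));
        *src_port = ntohs(sp);
        *dst_port = ntohs(dp);
        }
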
+ src_port = dst_port = 0; + bad_hdr_len = 1; + } + } + + RecordVal* iprec = new RecordVal(icmp_context); + RecordVal* id_val = new RecordVal(conn_id); + + id_val->Assign(0, new AddrVal(src_addr)); + id_val->Assign(1, new PortVal(src_port, proto)); + id_val->Assign(2, new AddrVal(dst_addr)); + id_val->Assign(3, new PortVal(dst_port, proto)); + + iprec->Assign(0, id_val); + iprec->Assign(1, new Val(ip_len, TYPE_COUNT)); + iprec->Assign(2, new Val(proto, TYPE_COUNT)); + iprec->Assign(3, new Val(frag_offset, TYPE_COUNT)); + iprec->Assign(4, new Val(bad_hdr_len, TYPE_BOOL)); + // bad_checksum is always false since IPv6 layer doesn't have a checksum. + iprec->Assign(5, new Val(0, TYPE_BOOL)); + iprec->Assign(6, new Val(MF, TYPE_BOOL)); + iprec->Assign(7, new Val(DF, TYPE_BOOL)); + + return iprec; + } + bool ICMP_Analyzer::IsReuse(double /* t */, const u_char* /* pkt */) { return 0; @@ -243,7 +429,7 @@ void ICMP_Analyzer::Describe(ODesc* d) const d->Add(Conn()->LastTime()); d->AddSP(")"); - d->Add(dotted_addr(Conn()->OrigAddr())); + d->Add(Conn()->OrigAddr()); d->Add("."); d->Add(type); d->Add("."); @@ -252,7 +438,7 @@ void ICMP_Analyzer::Describe(ODesc* d) const d->SP(); d->AddSP("->"); - d->Add(dotted_addr(Conn()->RespAddr())); + d->Add(Conn()->RespAddr()); } void ICMP_Analyzer::UpdateConnVal(RecordVal *conn_val) @@ -294,15 +480,20 @@ unsigned int ICMP_Analyzer::MemoryAllocation() const + (icmp_conn_val ? icmp_conn_val->MemoryAllocation() : 0); } -ICMP_Echo_Analyzer::ICMP_Echo_Analyzer(Connection* c) -: ICMP_Analyzer(AnalyzerTag::ICMP_Echo, c) - { - } -void ICMP_Echo_Analyzer::NextICMP(double t, const struct icmp* icmpp, int len, - int caplen, const u_char*& data) +void ICMP_Analyzer::Echo(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr) { - EventHandlerPtr f = type == ICMP_ECHO ? icmp_echo_request : icmp_echo_reply; + // For handling all Echo related ICMP messages + EventHandlerPtr f = 0; + + if ( ip_hdr->NextProto() == IPPROTO_ICMPV6 ) + f = (icmpp->icmp_type == ICMP6_ECHO_REQUEST) + ? icmp_echo_request : icmp_echo_reply; + else + f = (icmpp->icmp_type == ICMP_ECHO) + ? icmp_echo_request : icmp_echo_reply; + if ( ! 
f ) return; @@ -313,7 +504,7 @@ void ICMP_Echo_Analyzer::NextICMP(double t, const struct icmp* icmpp, int len, val_list* vl = new val_list; vl->append(BuildConnVal()); - vl->append(BuildICMPVal()); + vl->append(BuildICMPVal(icmpp, len, ip_hdr->NextProto() != IPPROTO_ICMP, ip_hdr)); vl->append(new Val(iid, TYPE_COUNT)); vl->append(new Val(iseq, TYPE_COUNT)); vl->append(new StringVal(payload)); @@ -321,66 +512,384 @@ void ICMP_Echo_Analyzer::NextICMP(double t, const struct icmp* icmpp, int len, ConnectionEvent(f, vl); } -ICMP_Redir_Analyzer::ICMP_Redir_Analyzer(Connection* c) -: ICMP_Analyzer(AnalyzerTag::ICMP_Redir, c) - { - } -void ICMP_Redir_Analyzer::NextICMP(double t, const struct icmp* icmpp, int len, - int caplen, const u_char*& data) +void ICMP_Analyzer::RouterAdvert(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr) { - uint32 addr = ntohl(icmpp->icmp_hun.ih_void); + EventHandlerPtr f = icmp_router_advertisement; + uint32 reachable = 0, retrans = 0; + + if ( caplen >= (int)sizeof(reachable) ) + memcpy(&reachable, data, sizeof(reachable)); + + if ( caplen >= (int)sizeof(reachable) + (int)sizeof(retrans) ) + memcpy(&retrans, data + sizeof(reachable), sizeof(retrans)); val_list* vl = new val_list; vl->append(BuildConnVal()); - vl->append(BuildICMPVal()); - vl->append(new AddrVal(htonl(addr))); + vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); + vl->append(new Val(icmpp->icmp_num_addrs, TYPE_COUNT)); // Cur Hop Limit + vl->append(new Val(icmpp->icmp_wpa & 0x80, TYPE_BOOL)); // Managed + vl->append(new Val(icmpp->icmp_wpa & 0x40, TYPE_BOOL)); // Other + vl->append(new Val(icmpp->icmp_wpa & 0x20, TYPE_BOOL)); // Home Agent + vl->append(new Val((icmpp->icmp_wpa & 0x18)>>3, TYPE_COUNT)); // Pref + vl->append(new Val(icmpp->icmp_wpa & 0x04, TYPE_BOOL)); // Proxy + vl->append(new Val(icmpp->icmp_wpa & 0x02, TYPE_COUNT)); // Reserved + vl->append(new IntervalVal((double)ntohs(icmpp->icmp_lifetime), Seconds)); + vl->append(new IntervalVal((double)ntohl(reachable), Milliseconds)); + vl->append(new IntervalVal((double)ntohl(retrans), Milliseconds)); - ConnectionEvent(icmp_redirect, vl); + int opt_offset = sizeof(reachable) + sizeof(retrans); + vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset)); + + ConnectionEvent(f, vl); } -void ICMP_Context_Analyzer::NextICMP(double t, const struct icmp* icmpp, - int len, int caplen, const u_char*& data) +void ICMP_Analyzer::NeighborAdvert(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr) + { + EventHandlerPtr f = icmp_neighbor_advertisement; + IPAddr tgtaddr; + + if ( caplen >= (int)sizeof(in6_addr) ) + tgtaddr = IPAddr(*((const in6_addr*)data)); + + val_list* vl = new val_list; + vl->append(BuildConnVal()); + vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); + vl->append(new Val(icmpp->icmp_num_addrs & 0x80, TYPE_BOOL)); // Router + vl->append(new Val(icmpp->icmp_num_addrs & 0x40, TYPE_BOOL)); // Solicited + vl->append(new Val(icmpp->icmp_num_addrs & 0x20, TYPE_BOOL)); // Override + vl->append(new AddrVal(tgtaddr)); + + int opt_offset = sizeof(in6_addr); + vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset)); + + ConnectionEvent(f, vl); + } + + +void ICMP_Analyzer::NeighborSolicit(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr) + { + EventHandlerPtr f = icmp_neighbor_solicitation; + IPAddr tgtaddr; + + if ( caplen >= (int)sizeof(in6_addr) ) + tgtaddr = IPAddr(*((const 
in6_addr*)data)); + + val_list* vl = new val_list; + vl->append(BuildConnVal()); + vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); + vl->append(new AddrVal(tgtaddr)); + + int opt_offset = sizeof(in6_addr); + vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset)); + + ConnectionEvent(f, vl); + } + + +void ICMP_Analyzer::Redirect(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr) + { + EventHandlerPtr f = icmp_redirect; + IPAddr tgtaddr, dstaddr; + + if ( caplen >= (int)sizeof(in6_addr) ) + tgtaddr = IPAddr(*((const in6_addr*)data)); + + if ( caplen >= 2 * (int)sizeof(in6_addr) ) + dstaddr = IPAddr(*((const in6_addr*)(data + sizeof(in6_addr)))); + + val_list* vl = new val_list; + vl->append(BuildConnVal()); + vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); + vl->append(new AddrVal(tgtaddr)); + vl->append(new AddrVal(dstaddr)); + + int opt_offset = 2 * sizeof(in6_addr); + vl->append(BuildNDOptionsVal(caplen - opt_offset, data + opt_offset)); + + ConnectionEvent(f, vl); + } + + +void ICMP_Analyzer::RouterSolicit(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr) + { + EventHandlerPtr f = icmp_router_solicitation; + + val_list* vl = new val_list; + vl->append(BuildConnVal()); + vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); + vl->append(BuildNDOptionsVal(caplen, data)); + + ConnectionEvent(f, vl); + } + + +void ICMP_Analyzer::Context4(double t, const struct icmp* icmpp, + int len, int caplen, const u_char*& data, const IP_Hdr* ip_hdr) { EventHandlerPtr f = 0; - switch ( type ) { - case ICMP_UNREACH: f = icmp_unreachable; break; - case ICMP_TIMXCEED: f = icmp_time_exceeded; break; - } + + switch ( icmpp->icmp_type ) + { + case ICMP_UNREACH: + f = icmp_unreachable; + break; + + case ICMP_TIMXCEED: + f = icmp_time_exceeded; + break; + } if ( f ) { val_list* vl = new val_list; vl->append(BuildConnVal()); - vl->append(BuildICMPVal()); - vl->append(new Val(code, TYPE_COUNT)); - vl->append(ExtractICMPContext(caplen, data)); - + vl->append(BuildICMPVal(icmpp, len, 0, ip_hdr)); + vl->append(new Val(icmpp->icmp_code, TYPE_COUNT)); + vl->append(ExtractICMP4Context(caplen, data)); ConnectionEvent(f, vl); } } -int ICMP_counterpart(int icmp_type, int icmp_code, bool& is_one_way) +void ICMP_Analyzer::Context6(double t, const struct icmp* icmpp, + int len, int caplen, const u_char*& data, const IP_Hdr* ip_hdr) + { + EventHandlerPtr f = 0; + + switch ( icmpp->icmp_type ) + { + case ICMP6_DST_UNREACH: + f = icmp_unreachable; + break; + + case ICMP6_PARAM_PROB: + f = icmp_parameter_problem; + break; + + case ICMP6_TIME_EXCEEDED: + f = icmp_time_exceeded; + break; + + case ICMP6_PACKET_TOO_BIG: + f = icmp_packet_too_big; + break; + + default: + f = icmp_error_message; + break; + } + + if ( f ) + { + val_list* vl = new val_list; + vl->append(BuildConnVal()); + vl->append(BuildICMPVal(icmpp, len, 1, ip_hdr)); + vl->append(new Val(icmpp->icmp_code, TYPE_COUNT)); + vl->append(ExtractICMP6Context(caplen, data)); + ConnectionEvent(f, vl); + } + } + +VectorVal* ICMP_Analyzer::BuildNDOptionsVal(int caplen, const u_char* data) + { + static RecordType* icmp6_nd_option_type = 0; + static RecordType* icmp6_nd_prefix_info_type = 0; + + if ( ! 
icmp6_nd_option_type ) + { + icmp6_nd_option_type = internal_type("icmp6_nd_option")->AsRecordType(); + icmp6_nd_prefix_info_type = + internal_type("icmp6_nd_prefix_info")->AsRecordType(); + } + + VectorVal* vv = new VectorVal( + internal_type("icmp6_nd_options")->AsVectorType()); + + while ( caplen > 0 ) + { + // Must have at least type & length to continue parsing options. + if ( caplen < 2 ) + { + Weird("truncated_ICMPv6_ND_options"); + break; + } + + uint8 type = *((const uint8*)data); + uint8 length = *((const uint8*)(data + 1)); + + if ( length == 0 ) + { + Weird("zero_length_ICMPv6_ND_option"); + break; + } + + RecordVal* rv = new RecordVal(icmp6_nd_option_type); + rv->Assign(0, new Val(type, TYPE_COUNT)); + rv->Assign(1, new Val(length, TYPE_COUNT)); + + // Adjust length to be in units of bytes, exclude type/length fields. + length = length * 8 - 2; + + data += 2; + caplen -= 2; + + bool set_payload_field = false; + + // Only parse out known options that are there in full. + switch ( type ) { + case 1: + case 2: + // Source/Target Link-layer Address option + { + if ( caplen >= length ) + { + BroString* link_addr = new BroString(data, length, 0); + rv->Assign(2, new StringVal(link_addr)); + } + else + set_payload_field = true; + + break; + } + + case 3: + // Prefix Information option + { + if ( caplen >= 30 ) + { + RecordVal* info = new RecordVal(icmp6_nd_prefix_info_type); + uint8 prefix_len = *((const uint8*)(data)); + bool L_flag = (*((const uint8*)(data + 1)) & 0x80) != 0; + bool A_flag = (*((const uint8*)(data + 1)) & 0x40) != 0; + uint32 valid_life = *((const uint32*)(data + 2)); + uint32 prefer_life = *((const uint32*)(data + 6)); + in6_addr prefix = *((const in6_addr*)(data + 14)); + info->Assign(0, new Val(prefix_len, TYPE_COUNT)); + info->Assign(1, new Val(L_flag, TYPE_BOOL)); + info->Assign(2, new Val(A_flag, TYPE_BOOL)); + info->Assign(3, new IntervalVal((double)ntohl(valid_life), Seconds)); + info->Assign(4, new IntervalVal((double)ntohl(prefer_life), Seconds)); + info->Assign(5, new AddrVal(IPAddr(prefix))); + rv->Assign(3, info); + } + + else + set_payload_field = true; + break; + } + + case 4: + // Redirected Header option + { + if ( caplen >= length ) + { + const u_char* hdr = data + 6; + rv->Assign(4, ExtractICMP6Context(length - 6, hdr)); + } + + else + set_payload_field = true; + + break; + } + + case 5: + // MTU option + { + if ( caplen >= 6 ) + rv->Assign(5, new Val(ntohl(*((const uint32*)(data + 2))), + TYPE_COUNT)); + else + set_payload_field = true; + + break; + } + + default: + { + set_payload_field = true; + break; + } + } + + if ( set_payload_field ) + { + BroString* payload = + new BroString(data, min((int)length, caplen), 0); + rv->Assign(6, new StringVal(payload)); + } + + data += length; + caplen -= length; + + vv->Assign(vv->Size(), rv, 0); + } + + return vv; + } + +int ICMP4_counterpart(int icmp_type, int icmp_code, bool& is_one_way) { is_one_way = false; - // return the counterpart type if one exists. This allows us + // Return the counterpart type if one exists. This allows us // to track corresponding ICMP requests/replies. // Note that for the two-way ICMP messages, icmp_code is // always 0 (RFC 792). 
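// Editorial sketch, not part of the patch: BuildNDOptionsVal() above walks
// RFC 4861 neighbor-discovery option TLVs, whose length field counts 8-octet
// units and includes the type and length bytes themselves -- hence the
// "length * 8 - 2" adjustment before the payload is read. The same arithmetic
// as a standalone helper (name is illustrative only):

#include <cstddef>
#include <cstdint>

static size_t sketch_nd_option_payload(uint8_t length_field)
	{
	if ( length_field == 0 )
		return 0; // invalid per RFC 4861; the parser reports a weird and stops
	return static_cast<size_t>(length_field) * 8 - 2; // bytes after type/length
	}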
switch ( icmp_type ) { case ICMP_ECHO: return ICMP_ECHOREPLY; case ICMP_ECHOREPLY: return ICMP_ECHO; + case ICMP_TSTAMP: return ICMP_TSTAMPREPLY; case ICMP_TSTAMPREPLY: return ICMP_TSTAMP; + case ICMP_IREQ: return ICMP_IREQREPLY; case ICMP_IREQREPLY: return ICMP_IREQ; + case ICMP_ROUTERSOLICIT: return ICMP_ROUTERADVERT; + case ICMP_MASKREQ: return ICMP_MASKREPLY; case ICMP_MASKREPLY: return ICMP_MASKREQ; default: is_one_way = true; return icmp_code; } } + +int ICMP6_counterpart(int icmp_type, int icmp_code, bool& is_one_way) + { + is_one_way = false; + + switch ( icmp_type ) { + case ICMP6_ECHO_REQUEST: return ICMP6_ECHO_REPLY; + case ICMP6_ECHO_REPLY: return ICMP6_ECHO_REQUEST; + + case ND_ROUTER_SOLICIT: return ND_ROUTER_ADVERT; + case ND_ROUTER_ADVERT: return ND_ROUTER_SOLICIT; + + case ND_NEIGHBOR_SOLICIT: return ND_NEIGHBOR_ADVERT; + case ND_NEIGHBOR_ADVERT: return ND_NEIGHBOR_SOLICIT; + + case MLD_LISTENER_QUERY: return MLD_LISTENER_REPORT; + case MLD_LISTENER_REPORT: return MLD_LISTENER_QUERY; + + // ICMP node information query and response respectively (not defined in + // icmp6.h) + case 139: return 140; + case 140: return 139; + + // Home Agent Address Discovery Request Message and reply + case 144: return 145; + case 145: return 144; + + // TODO: Add further counterparts. + + default: is_one_way = true; return icmp_code; + } + } diff --git a/src/ICMP.h b/src/ICMP.h index ad43d7b948..1e30b7ff54 100644 --- a/src/ICMP.h +++ b/src/ICMP.h @@ -33,21 +33,54 @@ protected: virtual bool IsReuse(double t, const u_char* pkt); virtual unsigned int MemoryAllocation() const; - void ICMPEvent(EventHandlerPtr f); + void ICMPEvent(EventHandlerPtr f, const struct icmp* icmpp, int len, + int icmpv6, const IP_Hdr* ip_hdr); + + void Echo(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr); + void Context(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr); + void Redirect(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr); + void RouterAdvert(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr); + void NeighborAdvert(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr); + void NeighborSolicit(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr); + void RouterSolicit(double t, const struct icmp* icmpp, int len, + int caplen, const u_char*& data, const IP_Hdr* ip_hdr); + void Describe(ODesc* d) const; - RecordVal* BuildICMPVal(); + RecordVal* BuildICMPVal(const struct icmp* icmpp, int len, int icmpv6, + const IP_Hdr* ip_hdr); - virtual void NextICMP(double t, const struct icmp* icmpp, - int len, int caplen, const u_char*& data); + void NextICMP4(double t, const struct icmp* icmpp, int len, int caplen, + const u_char*& data, const IP_Hdr* ip_hdr ); - RecordVal* ExtractICMPContext(int len, const u_char*& data); + RecordVal* ExtractICMP4Context(int len, const u_char*& data); + + void Context4(double t, const struct icmp* icmpp, int len, int caplen, + const u_char*& data, const IP_Hdr* ip_hdr); + + TransportProto GetContextProtocol(const IP_Hdr* ip_hdr, uint32* src_port, + uint32* dst_port); + + void NextICMP6(double t, const struct icmp* icmpp, int len, int caplen, + const u_char*& data, const IP_Hdr* ip_hdr ); + + RecordVal* ExtractICMP6Context(int len, const u_char*& data); + + void Context6(double t, const 
struct icmp* icmpp, int len, int caplen, + const u_char*& data, const IP_Hdr* ip_hdr); + + // RFC 4861 Neighbor Discover message options + VectorVal* BuildNDOptionsVal(int caplen, const u_char* data); RecordVal* icmp_conn_val; int type; int code; - int len; - int request_len, reply_len; RuleMatcherState matcher_state; @@ -56,81 +89,9 @@ private: void UpdateEndpointVal(RecordVal* endp, int is_orig); }; -class ICMP_Echo_Analyzer : public ICMP_Analyzer { -public: - ICMP_Echo_Analyzer(Connection* conn); - - static Analyzer* InstantiateAnalyzer(Connection* conn) - { return new ICMP_Echo_Analyzer(conn); } - - static bool Available() { return icmp_echo_request || icmp_echo_reply; } - -protected: - ICMP_Echo_Analyzer() { } - - virtual void NextICMP(double t, const struct icmp* icmpp, - int len, int caplen, const u_char*& data); -}; - -class ICMP_Redir_Analyzer : public ICMP_Analyzer { -public: - ICMP_Redir_Analyzer(Connection* conn); - - static Analyzer* InstantiateAnalyzer(Connection* conn) - { return new ICMP_Redir_Analyzer(conn); } - - static bool Available() { return icmp_redirect; } - -protected: - ICMP_Redir_Analyzer() { } - - virtual void NextICMP(double t, const struct icmp* icmpp, - int len, int caplen, const u_char*& data); -}; - -class ICMP_Context_Analyzer : public ICMP_Analyzer { -public: - ICMP_Context_Analyzer(AnalyzerTag::Tag tag, Connection* conn) - : ICMP_Analyzer(tag, conn) { } - -protected: - ICMP_Context_Analyzer() { } - - virtual void NextICMP(double t, const struct icmp* icmpp, - int len, int caplen, const u_char*& data); -}; - -class ICMP_TimeExceeded_Analyzer : public ICMP_Context_Analyzer { -public: - ICMP_TimeExceeded_Analyzer(Connection* conn) - : ICMP_Context_Analyzer(AnalyzerTag::ICMP_TimeExceeded, conn) { } - - static Analyzer* InstantiateAnalyzer(Connection* conn) - { return new ICMP_TimeExceeded_Analyzer(conn); } - - static bool Available() { return icmp_time_exceeded; } - -protected: - ICMP_TimeExceeded_Analyzer() { } -}; - -class ICMP_Unreachable_Analyzer : public ICMP_Context_Analyzer { -public: - ICMP_Unreachable_Analyzer(Connection* conn) - : ICMP_Context_Analyzer(AnalyzerTag::ICMP_Unreachable, conn) { } - - static Analyzer* InstantiateAnalyzer(Connection* conn) - { return new ICMP_Unreachable_Analyzer(conn); } - - static bool Available() { return icmp_unreachable; } - -protected: - ICMP_Unreachable_Analyzer() { } -}; - - // Returns the counterpart type to the given type (e.g., the counterpart // to ICMP_ECHOREPLY is ICMP_ECHO). -extern int ICMP_counterpart(int icmp_type, int icmp_code, bool& is_one_way); +extern int ICMP4_counterpart(int icmp_type, int icmp_code, bool& is_one_way); +extern int ICMP6_counterpart(int icmp_type, int icmp_code, bool& is_one_way); #endif diff --git a/src/ID.cc b/src/ID.cc index 3f5c76ca1d..a70aa3fd0e 100644 --- a/src/ID.cc +++ b/src/ID.cc @@ -372,7 +372,7 @@ ID* ID::Unserialize(UnserialInfo* info) Ref(id); global_scope()->Insert(id->Name(), id); -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG heap_checker->IgnoreObject(id); #endif } diff --git a/src/IP.cc b/src/IP.cc new file mode 100644 index 0000000000..16424e26f2 --- /dev/null +++ b/src/IP.cc @@ -0,0 +1,633 @@ +// See the file "COPYING" in the main distribution directory for copyright. 
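// Editorial sketch, not part of the patch: BuildOptionsVal() just below (and
// ProcessDstOpts() further down in this file) step through Hop-by-Hop and
// Destination option lists in which Pad1 (type 0) is a single byte with no
// length or data, while every other option occupies type + length + ip6o_len
// data bytes. The stride computation, pulled out into a standalone helper
// with an illustrative name:

#include <cstddef>
#include <cstdint>

static size_t sketch_ip6_option_stride(uint8_t opt_type, uint8_t opt_len)
	{
	if ( opt_type == 0 )
		return 1;                                // Pad1: lone byte, no fields
	return 2 + static_cast<size_t>(opt_len);     // type, length, then data
	}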
+ +#include "IP.h" +#include "Type.h" +#include "Val.h" +#include "Var.h" + +static RecordType* ip4_hdr_type = 0; +static RecordType* ip6_hdr_type = 0; +static RecordType* ip6_ext_hdr_type = 0; +static RecordType* ip6_option_type = 0; +static RecordType* ip6_hopopts_type = 0; +static RecordType* ip6_dstopts_type = 0; +static RecordType* ip6_routing_type = 0; +static RecordType* ip6_fragment_type = 0; +static RecordType* ip6_ah_type = 0; +static RecordType* ip6_esp_type = 0; +static RecordType* ip6_mob_type = 0; +static RecordType* ip6_mob_msg_type = 0; +static RecordType* ip6_mob_brr_type = 0; +static RecordType* ip6_mob_hoti_type = 0; +static RecordType* ip6_mob_coti_type = 0; +static RecordType* ip6_mob_hot_type = 0; +static RecordType* ip6_mob_cot_type = 0; +static RecordType* ip6_mob_bu_type = 0; +static RecordType* ip6_mob_back_type = 0; +static RecordType* ip6_mob_be_type = 0; + +static inline RecordType* hdrType(RecordType*& type, const char* name) + { + if ( ! type ) + type = internal_type(name)->AsRecordType(); + + return type; + } + +static VectorVal* BuildOptionsVal(const u_char* data, int len) + { + VectorVal* vv = new VectorVal(internal_type("ip6_options")->AsVectorType()); + + while ( len > 0 ) + { + const struct ip6_opt* opt = (const struct ip6_opt*) data; + RecordVal* rv = new RecordVal(hdrType(ip6_option_type, "ip6_option")); + rv->Assign(0, new Val(opt->ip6o_type, TYPE_COUNT)); + + if ( opt->ip6o_type == 0 ) + { + // Pad1 option + rv->Assign(1, new Val(0, TYPE_COUNT)); + rv->Assign(2, new StringVal("")); + data += sizeof(uint8); + len -= sizeof(uint8); + } + else + { + // PadN or other option + uint16 off = 2 * sizeof(uint8); + rv->Assign(1, new Val(opt->ip6o_len, TYPE_COUNT)); + rv->Assign(2, new StringVal( + new BroString(data + off, opt->ip6o_len, 1))); + data += opt->ip6o_len + off; + len -= opt->ip6o_len + off; + } + + vv->Assign(vv->Size(), rv, 0); + } + + return vv; + } + +RecordVal* IPv6_Hdr::BuildRecordVal(VectorVal* chain) const + { + RecordVal* rv = 0; + + switch ( type ) { + case IPPROTO_IPV6: + { + rv = new RecordVal(hdrType(ip6_hdr_type, "ip6_hdr")); + const struct ip6_hdr* ip6 = (const struct ip6_hdr*)data; + rv->Assign(0, new Val((ntohl(ip6->ip6_flow) & 0x0ff00000)>>20, TYPE_COUNT)); + rv->Assign(1, new Val(ntohl(ip6->ip6_flow) & 0x000fffff, TYPE_COUNT)); + rv->Assign(2, new Val(ntohs(ip6->ip6_plen), TYPE_COUNT)); + rv->Assign(3, new Val(ip6->ip6_nxt, TYPE_COUNT)); + rv->Assign(4, new Val(ip6->ip6_hlim, TYPE_COUNT)); + rv->Assign(5, new AddrVal(IPAddr(ip6->ip6_src))); + rv->Assign(6, new AddrVal(IPAddr(ip6->ip6_dst))); + if ( ! 
chain ) + chain = new VectorVal( + internal_type("ip6_ext_hdr_chain")->AsVectorType()); + rv->Assign(7, chain); + } + break; + + case IPPROTO_HOPOPTS: + { + rv = new RecordVal(hdrType(ip6_hopopts_type, "ip6_hopopts")); + const struct ip6_hbh* hbh = (const struct ip6_hbh*)data; + rv->Assign(0, new Val(hbh->ip6h_nxt, TYPE_COUNT)); + rv->Assign(1, new Val(hbh->ip6h_len, TYPE_COUNT)); + uint16 off = 2 * sizeof(uint8); + rv->Assign(2, BuildOptionsVal(data + off, Length() - off)); + + } + break; + + case IPPROTO_DSTOPTS: + { + rv = new RecordVal(hdrType(ip6_dstopts_type, "ip6_dstopts")); + const struct ip6_dest* dst = (const struct ip6_dest*)data; + rv->Assign(0, new Val(dst->ip6d_nxt, TYPE_COUNT)); + rv->Assign(1, new Val(dst->ip6d_len, TYPE_COUNT)); + uint16 off = 2 * sizeof(uint8); + rv->Assign(2, BuildOptionsVal(data + off, Length() - off)); + } + break; + + case IPPROTO_ROUTING: + { + rv = new RecordVal(hdrType(ip6_routing_type, "ip6_routing")); + const struct ip6_rthdr* rt = (const struct ip6_rthdr*)data; + rv->Assign(0, new Val(rt->ip6r_nxt, TYPE_COUNT)); + rv->Assign(1, new Val(rt->ip6r_len, TYPE_COUNT)); + rv->Assign(2, new Val(rt->ip6r_type, TYPE_COUNT)); + rv->Assign(3, new Val(rt->ip6r_segleft, TYPE_COUNT)); + uint16 off = 4 * sizeof(uint8); + rv->Assign(4, new StringVal(new BroString(data + off, Length() - off, 1))); + } + break; + + case IPPROTO_FRAGMENT: + { + rv = new RecordVal(hdrType(ip6_fragment_type, "ip6_fragment")); + const struct ip6_frag* frag = (const struct ip6_frag*)data; + rv->Assign(0, new Val(frag->ip6f_nxt, TYPE_COUNT)); + rv->Assign(1, new Val(frag->ip6f_reserved, TYPE_COUNT)); + rv->Assign(2, new Val((ntohs(frag->ip6f_offlg) & 0xfff8)>>3, TYPE_COUNT)); + rv->Assign(3, new Val((ntohs(frag->ip6f_offlg) & 0x0006)>>1, TYPE_COUNT)); + rv->Assign(4, new Val(ntohs(frag->ip6f_offlg) & 0x0001, TYPE_BOOL)); + rv->Assign(5, new Val(ntohl(frag->ip6f_ident), TYPE_COUNT)); + } + break; + + case IPPROTO_AH: + { + rv = new RecordVal(hdrType(ip6_ah_type, "ip6_ah")); + rv->Assign(0, new Val(((ip6_ext*)data)->ip6e_nxt, TYPE_COUNT)); + rv->Assign(1, new Val(((ip6_ext*)data)->ip6e_len, TYPE_COUNT)); + rv->Assign(2, new Val(ntohs(((uint16*)data)[1]), TYPE_COUNT)); + rv->Assign(3, new Val(ntohl(((uint32*)data)[1]), TYPE_COUNT)); + + if ( Length() >= 12 ) + { + // Sequence Number and ICV fields can only be extracted if + // Payload Len was non-zero for this header. 
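// Editorial sketch, not part of the patch: for an Authentication Header,
// IPv6_Hdr::Length() (see the IP.h hunk below) evaluates to
// 8 + 4 * ip6e_len bytes. A zero Payload Len therefore leaves only the 8
// fixed bytes (next header, length, reserved, SPI), and the ">= 12" guard
// above is what keeps the Sequence Number / ICV reads inside the header.

#include <cstdint>

static uint16_t sketch_ah_total_len(uint8_t ip6e_len)
	{
	return 8 + 4 * static_cast<uint16_t>(ip6e_len); // 8 fixed bytes + 4 per unit
	}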
+ rv->Assign(4, new Val(ntohl(((uint32*)data)[2]), TYPE_COUNT)); + uint16 off = 3 * sizeof(uint32); + rv->Assign(5, new StringVal(new BroString(data + off, Length() - off, 1))); + } + } + break; + + case IPPROTO_ESP: + { + rv = new RecordVal(hdrType(ip6_esp_type, "ip6_esp")); + const uint32* esp = (const uint32*)data; + rv->Assign(0, new Val(ntohl(esp[0]), TYPE_COUNT)); + rv->Assign(1, new Val(ntohl(esp[1]), TYPE_COUNT)); + } + break; + +#ifdef ENABLE_MOBILE_IPV6 + case IPPROTO_MOBILITY: + { + rv = new RecordVal(hdrType(ip6_mob_type, "ip6_mobility_hdr")); + const struct ip6_mobility* mob = (const struct ip6_mobility*) data; + rv->Assign(0, new Val(mob->ip6mob_payload, TYPE_COUNT)); + rv->Assign(1, new Val(mob->ip6mob_len, TYPE_COUNT)); + rv->Assign(2, new Val(mob->ip6mob_type, TYPE_COUNT)); + rv->Assign(3, new Val(mob->ip6mob_rsv, TYPE_COUNT)); + rv->Assign(4, new Val(ntohs(mob->ip6mob_chksum), TYPE_COUNT)); + + RecordVal* msg = new RecordVal(hdrType(ip6_mob_msg_type, "ip6_mobility_msg")); + msg->Assign(0, new Val(mob->ip6mob_type, TYPE_COUNT)); + + uint16 off = sizeof(ip6_mobility); + const u_char* msg_data = data + off; + + switch ( mob->ip6mob_type ) { + case 0: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_brr")); + m->Assign(0, new Val(ntohs(*((uint16*)msg_data)), TYPE_COUNT)); + off += sizeof(uint16); + m->Assign(1, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(1, m); + } + break; + + case 1: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_hoti")); + m->Assign(0, new Val(ntohs(*((uint16*)msg_data)), TYPE_COUNT)); + m->Assign(1, new Val(ntohll(*((uint64*)(msg_data + sizeof(uint16)))), TYPE_COUNT)); + off += sizeof(uint16) + sizeof(uint64); + m->Assign(2, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(2, m); + break; + } + + case 2: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_coti")); + m->Assign(0, new Val(ntohs(*((uint16*)msg_data)), TYPE_COUNT)); + m->Assign(1, new Val(ntohll(*((uint64*)(msg_data + sizeof(uint16)))), TYPE_COUNT)); + off += sizeof(uint16) + sizeof(uint64); + m->Assign(2, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(3, m); + break; + } + + case 3: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_hot")); + m->Assign(0, new Val(ntohs(*((uint16*)msg_data)), TYPE_COUNT)); + m->Assign(1, new Val(ntohll(*((uint64*)(msg_data + sizeof(uint16)))), TYPE_COUNT)); + m->Assign(2, new Val(ntohll(*((uint64*)(msg_data + sizeof(uint16) + sizeof(uint64)))), TYPE_COUNT)); + off += sizeof(uint16) + 2 * sizeof(uint64); + m->Assign(3, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(4, m); + break; + } + + case 4: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_cot")); + m->Assign(0, new Val(ntohs(*((uint16*)msg_data)), TYPE_COUNT)); + m->Assign(1, new Val(ntohll(*((uint64*)(msg_data + sizeof(uint16)))), TYPE_COUNT)); + m->Assign(2, new Val(ntohll(*((uint64*)(msg_data + sizeof(uint16) + sizeof(uint64)))), TYPE_COUNT)); + off += sizeof(uint16) + 2 * sizeof(uint64); + m->Assign(3, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(5, m); + break; + } + + case 5: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_bu")); + m->Assign(0, new Val(ntohs(*((uint16*)msg_data)), TYPE_COUNT)); + m->Assign(1, new Val(ntohs(*((uint16*)(msg_data + sizeof(uint16)))) & 0x8000, TYPE_BOOL)); + m->Assign(2, new Val(ntohs(*((uint16*)(msg_data + sizeof(uint16)))) & 0x4000, TYPE_BOOL)); + 
m->Assign(3, new Val(ntohs(*((uint16*)(msg_data + sizeof(uint16)))) & 0x2000, TYPE_BOOL)); + m->Assign(4, new Val(ntohs(*((uint16*)(msg_data + sizeof(uint16)))) & 0x1000, TYPE_BOOL)); + m->Assign(5, new Val(ntohs(*((uint16*)(msg_data + 2*sizeof(uint16)))), TYPE_COUNT)); + off += 3 * sizeof(uint16); + m->Assign(6, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(6, m); + break; + } + + case 6: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_back")); + m->Assign(0, new Val(*((uint8*)msg_data), TYPE_COUNT)); + m->Assign(1, new Val(*((uint8*)(msg_data + sizeof(uint8))) & 0x80, TYPE_BOOL)); + m->Assign(2, new Val(ntohs(*((uint16*)(msg_data + sizeof(uint16)))), TYPE_COUNT)); + m->Assign(3, new Val(ntohs(*((uint16*)(msg_data + 2*sizeof(uint16)))), TYPE_COUNT)); + off += 3 * sizeof(uint16); + m->Assign(4, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(7, m); + break; + } + + case 7: + { + RecordVal* m = new RecordVal(hdrType(ip6_mob_brr_type, "ip6_mobility_be")); + m->Assign(0, new Val(*((uint8*)msg_data), TYPE_COUNT)); + const in6_addr* hoa = (const in6_addr*)(msg_data + sizeof(uint16)); + m->Assign(1, new AddrVal(IPAddr(*hoa))); + off += sizeof(uint16) + sizeof(in6_addr); + m->Assign(2, BuildOptionsVal(data + off, Length() - off)); + msg->Assign(8, m); + break; + } + + default: + reporter->Weird(fmt("unknown_mobility_type_%d", mob->ip6mob_type)); + break; + } + + rv->Assign(5, msg); + } + break; +#endif //ENABLE_MOBILE_IPV6 + + default: + break; + } + + return rv; + } + +RecordVal* IP_Hdr::BuildIPHdrVal() const + { + RecordVal* rval = 0; + + if ( ip4 ) + { + rval = new RecordVal(hdrType(ip4_hdr_type, "ip4_hdr")); + rval->Assign(0, new Val(ip4->ip_hl * 4, TYPE_COUNT)); + rval->Assign(1, new Val(ip4->ip_tos, TYPE_COUNT)); + rval->Assign(2, new Val(ntohs(ip4->ip_len), TYPE_COUNT)); + rval->Assign(3, new Val(ntohs(ip4->ip_id), TYPE_COUNT)); + rval->Assign(4, new Val(ip4->ip_ttl, TYPE_COUNT)); + rval->Assign(5, new Val(ip4->ip_p, TYPE_COUNT)); + rval->Assign(6, new AddrVal(ip4->ip_src.s_addr)); + rval->Assign(7, new AddrVal(ip4->ip_dst.s_addr)); + } + else + { + rval = ((*ip6_hdrs)[0])->BuildRecordVal(ip6_hdrs->BuildVal()); + } + + return rval; + } + +RecordVal* IP_Hdr::BuildPktHdrVal() const + { + static RecordType* pkt_hdr_type = 0; + static RecordType* tcp_hdr_type = 0; + static RecordType* udp_hdr_type = 0; + static RecordType* icmp_hdr_type = 0; + + if ( ! pkt_hdr_type ) + { + pkt_hdr_type = internal_type("pkt_hdr")->AsRecordType(); + tcp_hdr_type = internal_type("tcp_hdr")->AsRecordType(); + udp_hdr_type = internal_type("udp_hdr")->AsRecordType(); + icmp_hdr_type = internal_type("icmp_hdr")->AsRecordType(); + } + + RecordVal* pkt_hdr = new RecordVal(pkt_hdr_type); + + if ( ip4 ) + pkt_hdr->Assign(0, BuildIPHdrVal()); + else + pkt_hdr->Assign(1, BuildIPHdrVal()); + + // L4 header. 
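// Editorial sketch, not part of the patch: the switch that follows fills
// exactly one transport-layer slot of the script-layer pkt_hdr record
// (fields 0/1 were set to the ip4/ip6 header just above). The field index per
// payload protocol, as a standalone helper with an illustrative name:

#include <netinet/in.h>  // IPPROTO_TCP, IPPROTO_UDP, IPPROTO_ICMP

static int sketch_pkt_hdr_slot(int next_proto)
	{
	switch ( next_proto ) {
	case IPPROTO_TCP:  return 2;  // tcp_hdr
	case IPPROTO_UDP:  return 3;  // udp_hdr
	case IPPROTO_ICMP: return 4;  // icmp_hdr
	default:           return -1; // protocol not decoded; slots stay unset
	}
	}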
+ const u_char* data = Payload(); + + int proto = NextProto(); + switch ( proto ) { + case IPPROTO_TCP: + { + const struct tcphdr* tp = (const struct tcphdr*) data; + RecordVal* tcp_hdr = new RecordVal(tcp_hdr_type); + + int tcp_hdr_len = tp->th_off * 4; + int data_len = PayloadLen() - tcp_hdr_len; + + tcp_hdr->Assign(0, new PortVal(ntohs(tp->th_sport), TRANSPORT_TCP)); + tcp_hdr->Assign(1, new PortVal(ntohs(tp->th_dport), TRANSPORT_TCP)); + tcp_hdr->Assign(2, new Val(uint32(ntohl(tp->th_seq)), TYPE_COUNT)); + tcp_hdr->Assign(3, new Val(uint32(ntohl(tp->th_ack)), TYPE_COUNT)); + tcp_hdr->Assign(4, new Val(tcp_hdr_len, TYPE_COUNT)); + tcp_hdr->Assign(5, new Val(data_len, TYPE_COUNT)); + tcp_hdr->Assign(6, new Val(tp->th_flags, TYPE_COUNT)); + tcp_hdr->Assign(7, new Val(ntohs(tp->th_win), TYPE_COUNT)); + + pkt_hdr->Assign(2, tcp_hdr); + break; + } + + case IPPROTO_UDP: + { + const struct udphdr* up = (const struct udphdr*) data; + RecordVal* udp_hdr = new RecordVal(udp_hdr_type); + + udp_hdr->Assign(0, new PortVal(ntohs(up->uh_sport), TRANSPORT_UDP)); + udp_hdr->Assign(1, new PortVal(ntohs(up->uh_dport), TRANSPORT_UDP)); + udp_hdr->Assign(2, new Val(ntohs(up->uh_ulen), TYPE_COUNT)); + + pkt_hdr->Assign(3, udp_hdr); + break; + } + + case IPPROTO_ICMP: + { + const struct icmp* icmpp = (const struct icmp *) data; + RecordVal* icmp_hdr = new RecordVal(icmp_hdr_type); + + icmp_hdr->Assign(0, new Val(icmpp->icmp_type, TYPE_COUNT)); + + pkt_hdr->Assign(4, icmp_hdr); + break; + } + + default: + { + // This is not a protocol we understand. + break; + } + } + + return pkt_hdr; + } + +static inline bool isIPv6ExtHeader(uint8 type) + { + switch (type) { + case IPPROTO_HOPOPTS: + case IPPROTO_ROUTING: + case IPPROTO_DSTOPTS: + case IPPROTO_FRAGMENT: + case IPPROTO_AH: + case IPPROTO_ESP: +#ifdef ENABLE_MOBILE_IPV6 + case IPPROTO_MOBILITY: +#endif + return true; + default: + return false; + } + } + +void IPv6_Hdr_Chain::Init(const struct ip6_hdr* ip6, int total_len, + bool set_next, uint16 next) + { + length = 0; + uint8 current_type, next_type; + next_type = IPPROTO_IPV6; + const u_char* hdrs = (const u_char*) ip6; + + if ( total_len < (int)sizeof(struct ip6_hdr) ) + reporter->InternalError("IPv6_HdrChain::Init with truncated IP header"); + + do + { + // We can't determine a given header's length if there's less than + // two bytes of data available (2nd byte of extension headers is length) + if ( total_len < 2 ) + return; + + current_type = next_type; + IPv6_Hdr* p = new IPv6_Hdr(current_type, hdrs); + + next_type = p->NextHdr(); + uint16 cur_len = p->Length(); + + // If this header is truncated, don't add it to chain, don't go further. + if ( cur_len > total_len ) + { + delete p; + return; + } + + if ( set_next && next_type == IPPROTO_FRAGMENT ) + { + p->ChangeNext(next); + next_type = next; + } + + chain.push_back(p); + + // Check for routing headers and remember final destination address. + if ( current_type == IPPROTO_ROUTING ) + ProcessRoutingHeader((const struct ip6_rthdr*) hdrs, cur_len); + +#ifdef ENABLE_MOBILE_IPV6 + // Only Mobile IPv6 has a destination option we care about right now. 
+ if ( current_type == IPPROTO_DSTOPTS ) + ProcessDstOpts((const struct ip6_dest*) hdrs, cur_len); +#endif + + hdrs += cur_len; + length += cur_len; + total_len -= cur_len; + + } while ( current_type != IPPROTO_FRAGMENT && + current_type != IPPROTO_ESP && +#ifdef ENABLE_MOBILE_IPV6 + current_type != IPPROTO_MOBILITY && +#endif + isIPv6ExtHeader(next_type) ); + } + +void IPv6_Hdr_Chain::ProcessRoutingHeader(const struct ip6_rthdr* r, uint16 len) + { + if ( finalDst ) + { + // RFC 2460 section 4.1 says Routing should occur at most once. + reporter->Weird(SrcAddr(), DstAddr(), "multiple_routing_headers"); + return; + } + + // Last 16 bytes of header (for all known types) is the address we want. + const in6_addr* addr = (const in6_addr*)(((const u_char*)r) + len - 16); + + switch ( r->ip6r_type ) { + case 0: // Defined by RFC 2460, deprecated by RFC 5095 + { + if ( r->ip6r_segleft > 0 && r->ip6r_len >= 2 ) + { + if ( r->ip6r_len % 2 == 0 ) + finalDst = new IPAddr(*addr); + else + reporter->Weird(SrcAddr(), DstAddr(), "odd_routing0_len"); + } + + // Always raise a weird since this type is deprecated. + reporter->Weird(SrcAddr(), DstAddr(), "routing0_hdr"); + } + break; + +#ifdef ENABLE_MOBILE_IPV6 + case 2: // Defined by Mobile IPv6 RFC 6275. + { + if ( r->ip6r_segleft > 0 ) + { + if ( r->ip6r_len == 2 ) + finalDst = new IPAddr(*addr); + else + reporter->Weird(SrcAddr(), DstAddr(), "bad_routing2_len"); + } + } + break; +#endif + + default: + reporter->Weird(fmt("unknown_routing_type_%d", r->ip6r_type)); + break; + } + } + +#ifdef ENABLE_MOBILE_IPV6 +void IPv6_Hdr_Chain::ProcessDstOpts(const struct ip6_dest* d, uint16 len) + { + const u_char* data = (const u_char*) d; + len -= 2 * sizeof(uint8); + data += 2* sizeof(uint8); + + while ( len > 0 ) + { + const struct ip6_opt* opt = (const struct ip6_opt*) data; + switch ( opt->ip6o_type ) { + case 201: // Home Address Option, Mobile IPv6 RFC 6275 section 6.3 + { + if ( opt->ip6o_len == 16 ) + if ( homeAddr ) + reporter->Weird(SrcAddr(), DstAddr(), "multiple_home_addr_opts"); + else + homeAddr = new IPAddr(*((const in6_addr*)(data + 2))); + else + reporter->Weird(SrcAddr(), DstAddr(), "bad_home_addr_len"); + } + break; + + default: + break; + } + + if ( opt->ip6o_type == 0 ) + { + data += sizeof(uint8); + len -= sizeof(uint8); + } + else + { + data += 2 * sizeof(uint8) + opt->ip6o_len; + len -= 2 * sizeof(uint8) + opt->ip6o_len; + } + } + } +#endif + +VectorVal* IPv6_Hdr_Chain::BuildVal() const + { + if ( ! 
ip6_ext_hdr_type ) + { + ip6_ext_hdr_type = internal_type("ip6_ext_hdr")->AsRecordType(); + ip6_hopopts_type = internal_type("ip6_hopopts")->AsRecordType(); + ip6_dstopts_type = internal_type("ip6_dstopts")->AsRecordType(); + ip6_routing_type = internal_type("ip6_routing")->AsRecordType(); + ip6_fragment_type = internal_type("ip6_fragment")->AsRecordType(); + ip6_ah_type = internal_type("ip6_ah")->AsRecordType(); + ip6_esp_type = internal_type("ip6_esp")->AsRecordType(); + ip6_mob_type = internal_type("ip6_mobility_hdr")->AsRecordType(); + } + + VectorVal* rval = new VectorVal( + internal_type("ip6_ext_hdr_chain")->AsVectorType()); + + for ( size_t i = 1; i < chain.size(); ++i ) + { + RecordVal* v = chain[i]->BuildRecordVal(); + RecordVal* ext_hdr = new RecordVal(ip6_ext_hdr_type); + uint8 type = chain[i]->Type(); + ext_hdr->Assign(0, new Val(type, TYPE_COUNT)); + + switch (type) { + case IPPROTO_HOPOPTS: + ext_hdr->Assign(1, v); + break; + case IPPROTO_DSTOPTS: + ext_hdr->Assign(2, v); + break; + case IPPROTO_ROUTING: + ext_hdr->Assign(3, v); + break; + case IPPROTO_FRAGMENT: + ext_hdr->Assign(4, v); + break; + case IPPROTO_AH: + ext_hdr->Assign(5, v); + break; + case IPPROTO_ESP: + ext_hdr->Assign(6, v); + break; +#ifdef ENABLE_MOBILE_IPV6 + case IPPROTO_MOBILITY: + ext_hdr->Assign(7, v); + break; +#endif + default: + reporter->InternalError("IPv6_Hdr_Chain bad header %d", type); + break; + } + rval->Assign(rval->Size(), ext_hdr, 0); + } + + return rval; + } diff --git a/src/IP.h b/src/IP.h index 73ac4ee5c7..c3a74b4a01 100644 --- a/src/IP.h +++ b/src/IP.h @@ -4,67 +4,370 @@ #define ip_h #include "config.h" +#include "net_util.h" +#include "IPAddr.h" +#include "Reporter.h" +#include "Val.h" +#include "Type.h" +#include +#include +#include -#include +#ifdef ENABLE_MOBILE_IPV6 +#ifndef IPPROTO_MOBILITY +#define IPPROTO_MOBILITY 135 +#endif + +struct ip6_mobility { + uint8 ip6mob_payload; + uint8 ip6mob_len; + uint8 ip6mob_type; + uint8 ip6mob_rsv; + uint16 ip6mob_chksum; +}; + +#endif //ENABLE_MOBILE_IPV6 + +/** + * Base class for IPv6 header/extensions. + */ +class IPv6_Hdr { +public: + /** + * Construct an IPv6 header or extension header from assigned type number. + */ + IPv6_Hdr(uint8 t, const u_char* d) : type(t), data(d) {} + + /** + * Replace the value of the next protocol field. + */ + void ChangeNext(uint8 next_type) + { + switch ( type ) { + case IPPROTO_IPV6: + ((ip6_hdr*)data)->ip6_nxt = next_type; + break; + case IPPROTO_HOPOPTS: + case IPPROTO_DSTOPTS: + case IPPROTO_ROUTING: + case IPPROTO_FRAGMENT: + case IPPROTO_AH: +#ifdef ENABLE_MOBILE_IPV6 + case IPPROTO_MOBILITY: +#endif + ((ip6_ext*)data)->ip6e_nxt = next_type; + break; + case IPPROTO_ESP: + default: + break; + } + } + + ~IPv6_Hdr() {} + + /** + * Returns the assigned IPv6 extension header type number of the header + * that immediately follows this one. + */ + uint8 NextHdr() const + { + switch ( type ) { + case IPPROTO_IPV6: + return ((ip6_hdr*)data)->ip6_nxt; + case IPPROTO_HOPOPTS: + case IPPROTO_DSTOPTS: + case IPPROTO_ROUTING: + case IPPROTO_FRAGMENT: + case IPPROTO_AH: +#ifdef ENABLE_MOBILE_IPV6 + case IPPROTO_MOBILITY: +#endif + return ((ip6_ext*)data)->ip6e_nxt; + case IPPROTO_ESP: + default: + return IPPROTO_NONE; + } + } + + /** + * Returns the length of the header in bytes. 
+ */ + uint16 Length() const + { + switch ( type ) { + case IPPROTO_IPV6: + return 40; + case IPPROTO_HOPOPTS: + case IPPROTO_DSTOPTS: + case IPPROTO_ROUTING: +#ifdef ENABLE_MOBILE_IPV6 + case IPPROTO_MOBILITY: +#endif + return 8 + 8 * ((ip6_ext*)data)->ip6e_len; + case IPPROTO_FRAGMENT: + return 8; + case IPPROTO_AH: + return 8 + 4 * ((ip6_ext*)data)->ip6e_len; + case IPPROTO_ESP: + return 8; //encrypted payload begins after 8 bytes + default: + return 0; + } + } + + /** + * Returns the RFC 1700 et seq. IANA assigned number for the header. + */ + uint8 Type() const { return type; } + + /** + * Returns pointer to the start of where header structure resides in memory. + */ + const u_char* Data() const { return data; } + + /** + * Returns the script-layer record representation of the header. + */ + RecordVal* BuildRecordVal(VectorVal* chain = 0) const; + +protected: + uint8 type; + const u_char* data; +}; + +class IPv6_Hdr_Chain { +public: + /** + * Initializes the header chain from an IPv6 header structure. + */ + IPv6_Hdr_Chain(const struct ip6_hdr* ip6, int len) : +#ifdef ENABLE_MOBILE_IPV6 + homeAddr(0), +#endif + finalDst(0) + { Init(ip6, len, false); } + + ~IPv6_Hdr_Chain() + { + for ( size_t i = 0; i < chain.size(); ++i ) delete chain[i]; +#ifdef ENABLE_MOBILE_IPV6 + delete homeAddr; +#endif + delete finalDst; + } + + /** + * Returns the number of headers in the chain. + */ + size_t Size() const { return chain.size(); } + + /** + * Returns the sum of the length of all headers in the chain in bytes. + */ + uint16 TotalLength() const { return length; } + + /** + * Accesses the header at the given location in the chain. + */ + const IPv6_Hdr* operator[](const size_t i) const { return chain[i]; } + + /** + * Returns whether the header chain indicates a fragmented packet. + */ + bool IsFragment() const + { return chain[chain.size()-1]->Type() == IPPROTO_FRAGMENT; } + + /** + * Returns pointer to fragment header structure if the chain contains one. + */ + const struct ip6_frag* GetFragHdr() const + { return IsFragment() ? + (const struct ip6_frag*)chain[chain.size()-1]->Data(): 0; } + + /** + * If the header chain is a fragment, returns the offset in number of bytes + * relative to the start of the Fragmentable Part of the original packet. + */ + uint16 FragOffset() const + { return IsFragment() ? + (ntohs(GetFragHdr()->ip6f_offlg) & 0xfff8) : 0; } + + /** + * If the header chain is a fragment, returns the identification field. + */ + uint32 ID() const + { return IsFragment() ? ntohl(GetFragHdr()->ip6f_ident) : 0; } + + /** + * If the header chain is a fragment, returns the M (more fragments) flag. + */ + int MF() const + { return IsFragment() ? + (ntohs(GetFragHdr()->ip6f_offlg) & 0x0001) != 0 : 0; } + + /** + * If the chain contains a Destination Options header with a Home Address + * option as defined by Mobile IPv6 (RFC 6275), then return it, else + * return the source address in the main IPv6 header. + */ + IPAddr SrcAddr() const + { +#ifdef ENABLE_MOBILE_IPV6 + if ( homeAddr ) + return IPAddr(*homeAddr); + else +#endif + return IPAddr(((const struct ip6_hdr*)(chain[0]->Data()))->ip6_src); + } + + /** + * If the chain contains a Routing header with non-zero segments left, + * then return the last address of the first such header, else return + * the destination address of the main IPv6 header. 
+ */ + IPAddr DstAddr() const + { + if ( finalDst ) + return IPAddr(*finalDst); + else + return IPAddr(((const struct ip6_hdr*)(chain[0]->Data()))->ip6_dst); + } + + /** + * Returns a vector of ip6_ext_hdr RecordVals that includes script-layer + * representation of all extension headers in the chain. + */ + VectorVal* BuildVal() const; + +protected: + // for access to protected ctor that changes next header values that + // point to a fragment + friend class FragReassembler; + + /** + * Initializes the header chain from an IPv6 header structure, and replaces + * the first next protocol pointer field that points to a fragment header. + */ + IPv6_Hdr_Chain(const struct ip6_hdr* ip6, uint16 next, int len) : +#ifdef ENABLE_MOBILE_IPV6 + homeAddr(0), +#endif + finalDst(0) + { Init(ip6, len, true, next); } + + /** + * Initializes the header chain from an IPv6 header structure of a given + * length, possibly setting the first next protocol pointer field that + * points to a fragment header. + */ + void Init(const struct ip6_hdr* ip6, int total_len, bool set_next, + uint16 next = 0); + + /** + * Process a routing header and allocate/remember the final destination + * address if it has segments left and is a valid routing header. + */ + void ProcessRoutingHeader(const struct ip6_rthdr* r, uint16 len); + +#ifdef ENABLE_MOBILE_IPV6 + /** + * Inspect a Destination Option header's options for things we need to + * remember, such as the Home Address option from Mobile IPv6. + */ + void ProcessDstOpts(const struct ip6_dest* d, uint16 len); +#endif + + vector chain; + + /** + * The summation of all header lengths in the chain in bytes. + */ + uint16 length; + +#ifdef ENABLE_MOBILE_IPV6 + /** + * Home Address of the packet's source as defined by Mobile IPv6 (RFC 6275). + */ + IPAddr* homeAddr; +#endif + + /** + * The final destination address in chain's first Routing header that has + * non-zero segments left. + */ + IPAddr* finalDst; +}; + +/** + * A class that wraps either an IPv4 or IPv6 packet and abstracts methods + * for inquiring about common features between the two. + */ class IP_Hdr { public: - IP_Hdr(struct ip* arg_ip4) + /** + * Attempts to construct the header from some blob of data based on IP + * version number. Caller must have already checked that the header + * is not truncated. + * @param p pointer to memory containing an IPv4 or IPv6 packet. + * @param arg_del whether to take ownership of \a p pointer's memory. + * @param len the length of data, in bytes, pointed to by \a p. + */ + IP_Hdr(const u_char* p, bool arg_del, int len) + : ip4(0), ip6(0), del(arg_del), ip6_hdrs(0) { - ip4 = arg_ip4; - ip6 = 0; - del = 1; - -#ifdef BROv6 - src_addr[0] = src_addr[1] = src_addr[2] = 0; - dst_addr[0] = dst_addr[1] = dst_addr[2] = 0; - - src_addr[3] = ip4->ip_src.s_addr; - dst_addr[3] = ip4->ip_dst.s_addr; -#endif + if ( ((const struct ip*)p)->ip_v == 4 ) + ip4 = (const struct ip*)p; + else if ( ((const struct ip*)p)->ip_v == 6 ) + { + ip6 = (const struct ip6_hdr*)p; + ip6_hdrs = new IPv6_Hdr_Chain(ip6, len); + } + else + { + if ( arg_del ) + delete [] p; + reporter->InternalError("bad IP version in IP_Hdr ctor"); + } } - IP_Hdr(const struct ip* arg_ip4) + /** + * Construct the header wrapper from an IPv4 packet. Caller must have + * already checked that the header is not truncated. + * @param arg_ip4 pointer to memory containing an IPv4 packet. + * @param arg_del whether to take ownership of \a arg_ip4 pointer's memory. 
+ */ + IP_Hdr(const struct ip* arg_ip4, bool arg_del) + : ip4(arg_ip4), ip6(0), del(arg_del), ip6_hdrs(0) { - ip4 = arg_ip4; - ip6 = 0; - del = 0; - -#ifdef BROv6 - src_addr[0] = src_addr[1] = src_addr[2] = 0; - dst_addr[0] = dst_addr[1] = dst_addr[2] = 0; - - src_addr[3] = ip4->ip_src.s_addr; - dst_addr[3] = ip4->ip_dst.s_addr; -#endif } - IP_Hdr(struct ip6_hdr* arg_ip6) + /** + * Construct the header wrapper from an IPv6 packet. Caller must have + * already checked that the static IPv6 header is not truncated. If + * the packet contains extension headers and they are truncated, that can + * be checked afterwards by comparing \a len with \a TotalLen. E.g. + * NetSessions::DoNextPacket does this to skip truncated packets. + * @param arg_ip6 pointer to memory containing an IPv6 packet. + * @param arg_del whether to take ownership of \a arg_ip6 pointer's memory. + * @param len the packet's length in bytes. + * @param c an already-constructed header chain to take ownership of. + */ + IP_Hdr(const struct ip6_hdr* arg_ip6, bool arg_del, int len, + const IPv6_Hdr_Chain* c = 0) + : ip4(0), ip6(arg_ip6), del(arg_del), + ip6_hdrs(c ? c : new IPv6_Hdr_Chain(ip6, len)) { - ip4 = 0; - ip6 = arg_ip6; - del = 1; - -#ifdef BROv6 - memcpy(src_addr, ip6->ip6_src.s6_addr, 16); - memcpy(dst_addr, ip6->ip6_dst.s6_addr, 16); -#endif - } - - IP_Hdr(const struct ip6_hdr* arg_ip6) - { - ip4 = 0; - ip6 = arg_ip6; - del = 0; - -#ifdef BROv6 - memcpy(src_addr, ip6->ip6_src.s6_addr, 16); - memcpy(dst_addr, ip6->ip6_dst.s6_addr, 16); -#endif } + /** + * Destructor. + */ ~IP_Hdr() { + if ( ip6 ) + delete ip6_hdrs; + if ( del ) { if ( ip4 ) @@ -74,68 +377,181 @@ public: } } + /** + * If an IPv4 packet is wrapped, return a pointer to it, else null. + */ const struct ip* IP4_Hdr() const { return ip4; } + + /** + * If an IPv6 packet is wrapped, return a pointer to it, else null. + */ const struct ip6_hdr* IP6_Hdr() const { return ip6; } -#ifdef BROv6 - const uint32* SrcAddr() const { return src_addr; } - const uint32* DstAddr() const { return dst_addr; } -#else - const uint32* SrcAddr() const - { return ip4 ? &(ip4->ip_src.s_addr) : 0; } - const uint32* DstAddr() const - { return ip4 ? &(ip4->ip_dst.s_addr) : 0; } -#endif + /** + * Returns the source address held in the IP header. + */ + IPAddr IPHeaderSrcAddr() const + { return ip4 ? IPAddr(ip4->ip_src) : IPAddr(ip6->ip6_src); } - uint32 SrcAddr4() const { return ip4->ip_src.s_addr; } - uint32 DstAddr4() const { return ip4->ip_dst.s_addr; } + /** + * Returns the destination address held in the IP header. + */ + IPAddr IPHeaderDstAddr() const + { return ip4 ? IPAddr(ip4->ip_dst) : IPAddr(ip6->ip6_dst); } - uint16 ID4() const { return ip4 ? ip4->ip_id : 0; } + /** + * For IPv4 or IPv6 headers that don't contain a Home Address option + * (Mobile IPv6, RFC 6275), return source address held in the IP header. + * For IPv6 headers that contain a Home Address option, return that address. + */ + IPAddr SrcAddr() const + { return ip4 ? IPAddr(ip4->ip_src) : ip6_hdrs->SrcAddr(); } + /** + * For IPv4 or IPv6 headers that don't contain a Routing header with + * non-zero segments left, return destination address held in the IP header. + * For IPv6 headers with a Routing header that has non-zero segments left, + * return the last address in the first such Routing header. + */ + IPAddr DstAddr() const + { return ip4 ? IPAddr(ip4->ip_dst) : ip6_hdrs->DstAddr(); } + + /** + * Returns a pointer to the payload of the IP packet, usually an + * upper-layer protocol. 
+ */ const u_char* Payload() const { if ( ip4 ) return ((const u_char*) ip4) + ip4->ip_hl * 4; else - return ((const u_char*) ip6) + 40; + return ((const u_char*) ip6) + ip6_hdrs->TotalLength(); } +#ifdef ENABLE_MOBILE_IPV6 + /** + * Returns a pointer to the mobility header of the IP packet, if present, + * else a null pointer. + */ + const ip6_mobility* MobilityHeader() const + { + if ( ip4 ) + return 0; + else if ( (*ip6_hdrs)[ip6_hdrs->Size()-1]->Type() != IPPROTO_MOBILITY ) + return 0; + else + return (const ip6_mobility*)(*ip6_hdrs)[ip6_hdrs->Size()-1]->Data(); + } +#endif + + /** + * Returns the length of the IP packet's payload (length of packet minus + * header length or, for IPv6, also minus length of all extension headers). + */ uint16 PayloadLen() const { if ( ip4 ) return ntohs(ip4->ip_len) - ip4->ip_hl * 4; else - return ntohs(ip6->ip6_plen); + return ntohs(ip6->ip6_plen) + 40 - ip6_hdrs->TotalLength(); } - uint16 TotalLen() const - { - if ( ip4 ) - return ntohs(ip4->ip_len); - else - return ntohs(ip6->ip6_plen) + 40; - } + /** + * Returns the length of the IP packet (length of headers and payload). + */ + uint32 TotalLen() const + { return ip4 ? ntohs(ip4->ip_len) : ntohs(ip6->ip6_plen) + 40; } - uint16 HdrLen() const { return ip4 ? ip4->ip_hl * 4 : 40; } + /** + * Returns length of IP packet header (includes extension headers for IPv6). + */ + uint16 HdrLen() const + { return ip4 ? ip4->ip_hl * 4 : ip6_hdrs->TotalLength(); } + + /** + * For IPv6 header chains, returns the type of the last header in the chain. + */ + uint8 LastHeader() const + { return ip4 ? IPPROTO_RAW : + ((*ip6_hdrs)[ip6_hdrs->Size()-1])->Type(); } + + /** + * Returns the protocol type of the IP packet's payload, usually an + * upper-layer protocol. For IPv6, this returns the last (extension) + * header's Next Header value. + */ unsigned char NextProto() const - { return ip4 ? ip4->ip_p : ip6->ip6_nxt; } + { return ip4 ? ip4->ip_p : + ((*ip6_hdrs)[ip6_hdrs->Size()-1])->NextHdr(); } + + /** + * Returns the IPv4 Time to Live or IPv6 Hop Limit field. + */ unsigned char TTL() const { return ip4 ? ip4->ip_ttl : ip6->ip6_hlim; } - uint16 FragField() const - { return ntohs(ip4 ? ip4->ip_off : 0); } + + /** + * Returns whether the IP header indicates this packet is a fragment. + */ + bool IsFragment() const + { return ip4 ? (ntohs(ip4->ip_off) & 0x3fff) != 0 : + ip6_hdrs->IsFragment(); } + + /** + * Returns the fragment packet's offset in relation to the original + * packet in bytes. + */ + uint16 FragOffset() const + { return ip4 ? (ntohs(ip4->ip_off) & 0x1fff) * 8 : + ip6_hdrs->FragOffset(); } + + /** + * Returns the fragment packet's identification field. + */ + uint32 ID() const + { return ip4 ? ntohs(ip4->ip_id) : ip6_hdrs->ID(); } + + /** + * Returns whether a fragment packet's "More Fragments" field is set. + */ + int MF() const + { return ip4 ? (ntohs(ip4->ip_off) & 0x2000) != 0 : ip6_hdrs->MF(); } + + /** + * Returns whether a fragment packet's "Don't Fragment" field is set. + * Note that IPv6 has no such field. + */ int DF() const - { return ip4 ? ((ntohs(ip4->ip_off) & IP_DF) != 0) : 0; } - uint16 IP_ID() const - { return ip4 ? (ntohs(ip4->ip_id)) : 0; } + { return ip4 ? ((ntohs(ip4->ip_off) & 0x4000) != 0) : 0; } + + /** + * Returns value of an IPv6 header's flow label field or 0 if it's IPv4. + */ + uint32 FlowLabel() const + { return ip4 ? 0 : (ntohl(ip6->ip6_flow) & 0x000fffff); } + + /** + * Returns number of IP headers in packet (includes IPv6 extension headers). 
+ */ + size_t NumHeaders() const + { return ip4 ? 1 : ip6_hdrs->Size(); } + + /** + * Returns an ip_hdr or ip6_hdr_chain RecordVal. + */ + RecordVal* BuildIPHdrVal() const; + + /** + * Returns a pkt_hdr RecordVal, which includes not only the IP header, but + * also upper-layer (tcp/udp/icmp) headers. + */ + RecordVal* BuildPktHdrVal() const; private: const struct ip* ip4; const struct ip6_hdr* ip6; -#ifdef BROv6 - uint32 src_addr[NUM_ADDR_WORDS]; - uint32 dst_addr[NUM_ADDR_WORDS]; -#endif - int del; + bool del; + const IPv6_Hdr_Chain* ip6_hdrs; }; #endif diff --git a/src/IPAddr.cc b/src/IPAddr.cc new file mode 100644 index 0000000000..0ba5589fff --- /dev/null +++ b/src/IPAddr.cc @@ -0,0 +1,286 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include +#include +#include "IPAddr.h" +#include "Reporter.h" +#include "Conn.h" +#include "DPM.h" +#include "bro_inet_ntop.h" + +const uint8_t IPAddr::v4_mapped_prefix[12] = { 0, 0, 0, 0, + 0, 0, 0, 0, + 0, 0, 0xff, 0xff }; + +HashKey* BuildConnIDHashKey(const ConnID& id) + { + struct { + in6_addr ip1; + in6_addr ip2; + uint16 port1; + uint16 port2; + } key; + + // Lookup up connection based on canonical ordering, which is + // the smaller of and + // followed by the other. + if ( id.is_one_way || + addr_port_canon_lt(id.src_addr, id.src_port, id.dst_addr, id.dst_port) + ) + { + key.ip1 = id.src_addr.in6; + key.ip2 = id.dst_addr.in6; + key.port1 = id.src_port; + key.port2 = id.dst_port; + } + else + { + key.ip1 = id.dst_addr.in6; + key.ip2 = id.src_addr.in6; + key.port1 = id.dst_port; + key.port2 = id.src_port; + } + + return new HashKey(&key, sizeof(key)); + } + +HashKey* BuildExpectedConnHashKey(const ExpectedConn& c) + { + struct { + in6_addr orig; + in6_addr resp; + uint16 resp_p; + uint16 proto; + } key; + + key.orig = c.orig.in6; + key.resp = c.resp.in6; + key.resp_p = c.resp_p; + key.proto = c.proto; + + return new HashKey(&key, sizeof(key)); + } + +void IPAddr::Mask(int top_bits_to_keep) + { + if ( top_bits_to_keep < 0 || top_bits_to_keep > 128 ) + { + reporter->Error("Bad IPAddr::Mask value %d", top_bits_to_keep); + return; + } + + uint32_t tmp[4]; + memcpy(tmp, in6.s6_addr, sizeof(in6.s6_addr)); + + int word = 3; + int bits_to_chop = 128 - top_bits_to_keep; + + while ( bits_to_chop >= 32 ) + { + tmp[word] = 0; + --word; + bits_to_chop -= 32; + } + + uint32_t w = ntohl(tmp[word]); + w >>= bits_to_chop; + w <<= bits_to_chop; + tmp[word] = htonl(w); + + memcpy(in6.s6_addr, tmp, sizeof(in6.s6_addr)); + } + +void IPAddr::ReverseMask(int top_bits_to_chop) + { + if ( top_bits_to_chop < 0 || top_bits_to_chop > 128 ) + { + reporter->Error("Bad IPAddr::ReverseMask value %d", top_bits_to_chop); + return; + } + + uint32_t tmp[4]; + memcpy(tmp, in6.s6_addr, sizeof(in6.s6_addr)); + + int word = 0; + int bits_to_chop = top_bits_to_chop; + + while ( bits_to_chop >= 32 ) + { + tmp[word] = 0; + ++word; + bits_to_chop -= 32; + } + + uint32_t w = ntohl(tmp[word]); + w <<= bits_to_chop; + w >>= bits_to_chop; + tmp[word] = htonl(w); + + memcpy(in6.s6_addr, tmp, sizeof(in6.s6_addr)); + } + +void IPAddr::Init(const std::string& s) + { + if ( s.find(':') == std::string::npos ) // IPv4. + { + memcpy(in6.s6_addr, v4_mapped_prefix, sizeof(v4_mapped_prefix)); + + // Parse the address directly instead of using inet_pton since + // some platforms have more sensitive implementations than others + // that can't e.g. handle leading zeroes. 
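// Editorial sketch, not part of the patch: the dotted-quad branch below packs
// the parsed octets into the v4-in-v6 layout used throughout IPAddr
// (v4_mapped_prefix above: ten zero bytes, then 0xff 0xff, then the IPv4
// address in the last four bytes). A standalone version of that packing,
// assuming the four octets have already been range-checked; the function name
// is illustrative only.

#include <cstdint>
#include <cstring>
#include <arpa/inet.h>   // htonl
#include <netinet/in.h>  // in6_addr

static void sketch_pack_v4_mapped(const int a[4], in6_addr* out)
	{
	static const uint8_t prefix[12] =
		{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff };
	memcpy(out->s6_addr, prefix, sizeof(prefix));
	uint32_t addr = htonl((a[0] << 24) | (a[1] << 16) | (a[2] << 8) | a[3]);
	memcpy(&out->s6_addr[12], &addr, sizeof(addr));
	}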
+ int a[4]; + int n = sscanf(s.c_str(), "%d.%d.%d.%d", a+0, a+1, a+2, a+3); + + if ( n != 4 || a[0] < 0 || a[1] < 0 || a[2] < 0 || a[3] < 0 || + a[0] > 255 || a[1] > 255 || a[2] > 255 || a[3] > 255 ) + { + reporter->Error("Bad IP address: %s", s.c_str()); + memset(in6.s6_addr, 0, sizeof(in6.s6_addr)); + return; + } + + uint32_t addr = (a[0] << 24) | (a[1] << 16) | (a[2] << 8) | a[3]; + addr = htonl(addr); + memcpy(&in6.s6_addr[12], &addr, sizeof(uint32_t)); + } + + else + { + if ( inet_pton(AF_INET6, s.c_str(), in6.s6_addr) <=0 ) + { + reporter->Error("Bad IP address: %s", s.c_str()); + memset(in6.s6_addr, 0, sizeof(in6.s6_addr)); + } + } + } + +string IPAddr::AsString() const + { + if ( GetFamily() == IPv4 ) + { + char s[INET_ADDRSTRLEN]; + + if ( ! bro_inet_ntop(AF_INET, &in6.s6_addr[12], s, INET_ADDRSTRLEN) ) + return "> 24) & 0xff; + uint32_t a2 = (a >> 16) & 0xff; + uint32_t a1 = (a >> 8) & 0xff; + uint32_t a0 = a & 0xff; + snprintf(buf, sizeof(buf), "%u.%u.%u.%u.in-addr.arpa", a0, a1, a2, a3); + return buf; + } + else + { + static const char hex_digit[] = "0123456789abcdef"; + string ptr_name("ip6.arpa"); + uint32_t* p = (uint32_t*) in6.s6_addr; + + for ( unsigned int i = 0; i < 4; ++i ) + { + uint32 a = ntohl(p[i]); + for ( unsigned int j = 1; j <=8; ++j ) + { + ptr_name.insert(0, 1, '.'); + ptr_name.insert(0, 1, hex_digit[(a >> (32-j*4)) & 0x0f]); + } + } + + return ptr_name; + } + } + +IPPrefix::IPPrefix(const in4_addr& in4, uint8_t length) + : prefix(in4), length(96 + length) + { + if ( length > 32 ) + reporter->InternalError("Bad in4_addr IPPrefix length : %d", length); + + prefix.Mask(this->length); + } + +IPPrefix::IPPrefix(const in6_addr& in6, uint8_t length) + : prefix(in6), length(length) + { + if ( length > 128 ) + reporter->InternalError("Bad in6_addr IPPrefix length : %d", length); + + prefix.Mask(this->length); + } + +IPPrefix::IPPrefix(const IPAddr& addr, uint8_t length) + : prefix(addr) + { + if ( prefix.GetFamily() == IPv4 ) + { + if ( length > 32 ) + reporter->InternalError("Bad IPAddr(v4) IPPrefix length : %d", + length); + + this->length = length + 96; + } + + else + { + if ( length > 128 ) + reporter->InternalError("Bad IPAddr(v6) IPPrefix length : %d", + length); + + this->length = length; + } + + prefix.Mask(this->length); + } + +string IPPrefix::AsString() const + { + char l[16]; + + if ( prefix.GetFamily() == IPv4 ) + modp_uitoa10(length - 96, l); + else + modp_uitoa10(length, l); + + return prefix.AsString() +"/" + l; + } + diff --git a/src/IPAddr.h b/src/IPAddr.h new file mode 100644 index 0000000000..6d26ef3fa8 --- /dev/null +++ b/src/IPAddr.h @@ -0,0 +1,643 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef IPADDR_H +#define IPADDR_H + +#include +#include +#include + +#include "BroString.h" +#include "Hash.h" +#include "util.h" +#include "Type.h" +#include "threading/SerialTypes.h" + +struct ConnID; +class ExpectedConn; + +typedef in_addr in4_addr; + +/** + * Class storing both IPv4 and IPv6 addresses. + */ +class IPAddr +{ +public: + /** + * Address family. + */ + typedef IPFamily Family; + + /** + * Byte order. + */ + enum ByteOrder { Host, Network }; + + /** + * Constructs the unspecified IPv6 address (all 128 bits zeroed). + */ + IPAddr() + { + memset(in6.s6_addr, 0, sizeof(in6.s6_addr)); + } + + /** + * Constructs an address instance from an IPv4 address. + * + * @param in6 The IPv6 address. 
+ */ + explicit IPAddr(const in4_addr& in4) + { + memcpy(in6.s6_addr, v4_mapped_prefix, sizeof(v4_mapped_prefix)); + memcpy(&in6.s6_addr[12], &in4.s_addr, sizeof(in4.s_addr)); + } + + /** + * Constructs an address instance from an IPv6 address. + * + * @param in6 The IPv6 address. + */ + explicit IPAddr(const in6_addr& arg_in6) : in6(arg_in6) { } + + /** + * Constructs an address instance from a string representation. + * + * @param s String containing an IP address as either a dotted IPv4 + * address or a hex IPv6 address. + */ + IPAddr(const std::string& s) + { + Init(s); + } + + /** + * Constructs an address instance from a string representation. + * + * @param s ASCIIZ string containing an IP address as either a + * dotted IPv4 address or a hex IPv6 address. + */ + IPAddr(const char* s) + { + Init(s); + } + + /** + * Constructs an address instance from a string representation. + * + * @param s String containing an IP address as either a dotted IPv4 + * address or a hex IPv6 address. + */ + IPAddr(const BroString& s) + { + Init(s.CheckString()); + } + + /** + * Constructs an address instance from a raw byte representation. + * + * @param family The address family. + * + * @param bytes A pointer to the raw byte representation. This must point + * to 4 bytes if \a family is IPv4, and to 16 bytes if \a family is + * IPv6. + * + * @param order Indicates whether the raw representation pointed to + * by \a bytes is stored in network or host order. + */ + IPAddr(Family family, const uint32_t* bytes, ByteOrder order); + + /** + * Copy constructor. + */ + IPAddr(const IPAddr& other) : in6(other.in6) { }; + + /** + * Destructor. + */ + ~IPAddr() { }; + + /** + * Returns the address' family. + */ + Family GetFamily() const + { + if ( memcmp(in6.s6_addr, v4_mapped_prefix, 12) == 0 ) + return IPv4; + else + return IPv6; + } + + /** + * Returns true if the address represents a loopback device. + */ + bool IsLoopback() const; + + /** + * Returns true if the address represents a multicast address. + */ + bool IsMulticast() const + { + if ( GetFamily() == IPv4 ) + return in6.s6_addr[12] == 224; + else + return in6.s6_addr[0] == 0xff; + } + + /** + * Returns true if the address represents a broadcast address. + */ + bool IsBroadcast() const + { + if ( GetFamily() == IPv4 ) + return ((in6.s6_addr[12] == 0xff) && (in6.s6_addr[13] == 0xff) + && (in6.s6_addr[14] == 0xff) && (in6.s6_addr[15] == 0xff)); + else + return false; + } + + /** + * Retrieves the raw byte representation of the address. + * + * @param bytes The pointer to which \a bytes points will be set to + * the address of the raw representation in network-byte order. + * The return value indicates how many 32-bit words are valid starting at + * that address. The pointer will be valid as long as the address instance + * exists. + * + * @return The number of 32-bit words the raw representation uses. This + * will be 1 for an IPv4 address and 4 for an IPv6 address. + */ + int GetBytes(const uint32_t** bytes) const + { + if ( GetFamily() == IPv4 ) + { + *bytes = (uint32_t*) &in6.s6_addr[12]; + return 1; + } + else + { + *bytes = (uint32_t*) in6.s6_addr; + return 4; + } + } + + /** + * Retrieves a copy of the IPv6 raw byte representation of the address. + * If the internal address is IPv4, then the copied bytes use the + * IPv4 to IPv6 address mapping to return a full 16 bytes. + * + * @param bytes The pointer to a memory location in which the + * raw bytes of the address are to be copied. 
+ * + * @param order The byte-order in which the returned raw bytes are copied. + * The default is network order. + */ + void CopyIPv6(uint32_t* bytes, ByteOrder order = Network) const + { + memcpy(bytes, in6.s6_addr, sizeof(in6.s6_addr)); + + if ( order == Host ) + { + for ( unsigned int i = 0; i < 4; ++i ) + bytes[i] = ntohl(bytes[i]); + } + } + + /** + * Retrieves a copy of the IPv6 raw byte representation of the address. + * @see CopyIPv6(uint32_t) + */ + void CopyIPv6(in6_addr* arg_in6) const + { + memcpy(arg_in6->s6_addr, in6.s6_addr, sizeof(in6.s6_addr)); + } + + /** + * Retrieves a copy of the IPv4 raw byte representation of the address. + * The caller should verify the address is of the IPv4 family type + * beforehand. @see GetFamily(). + * + * @param in4 The pointer to a memory location in which the raw bytes + * of the address are to be copied in network byte-order. + */ + void CopyIPv4(in4_addr* in4) const + { + memcpy(&in4->s_addr, &in6.s6_addr[12], sizeof(in4->s_addr)); + } + + /** + * Returns a key that can be used to lookup the IP Address in a hash + * table. Passes ownership to caller. + */ + HashKey* GetHashKey() const + { + return new HashKey((void*)in6.s6_addr, sizeof(in6.s6_addr)); + } + + /** + * Masks out lower bits of the address. + * + * @param top_bits_to_keep The number of bits \a not to mask out, + * counting from the highest order bit. The value is always + * interpreted relative to the IPv6 bit width, even if the address + * is IPv4. That means if compute ``192.168.1.2/16``, you need to + * pass in 112 (i.e., 96 + 16). The value must be in the range from + * 0 to 128. + */ + void Mask(int top_bits_to_keep); + + /** + * Masks out top bits of the address. + * + * @param top_bits_to_chop The number of bits to mask out, counting + * from the highest order bit. The value is always interpreted relative + * to the IPv6 bit width, even if the address is IPv4. So to mask out + * the first 16 bits of an IPv4 address, pass in 112 (i.e., 96 + 16). + * The value must be in the range from 0 to 128. + */ + void ReverseMask(int top_bits_to_chop); + + /** + * Assignment operator. + */ + IPAddr& operator=(const IPAddr& other) + { + // No self-assignment check here because it's correct without it and + // makes the common case faster. + in6 = other.in6; + return *this; + } + + /** + * Bitwise OR operator returns the IP address resulting from the bitwise + * OR operation on the raw bytes of this address with another. + */ + IPAddr operator|(const IPAddr& other) + { + in6_addr result; + for ( int i = 0; i < 16; ++i ) + result.s6_addr[i] = this->in6.s6_addr[i] | other.in6.s6_addr[i]; + + return IPAddr(result); + } + + /** + * Returns a string representation of the address. IPv4 addresses + * will be returned in dotted representation, IPv6 addresses in + * compressed hex. + */ + string AsString() const; + + /** + * Returns a string representation of the address suitable for inclusion + * in an URI. For IPv4 addresses, this is the same as AsString(), but + * IPv6 addresses are encased in square brackets. + */ + string AsURIString() const + { + if ( GetFamily() == IPv4 ) + return AsString(); + else + return string("[") + AsString() + "]"; + } + + /** + * Returns a host-order, plain hex string representation of the address. + */ + string AsHexString() const; + + /** + * Returns a string representation of the address. This returns the + * same as AsString(). 
+ */ + operator std::string() const { return AsString(); } + + /** + * Returns a reverse pointer name associated with the IP address. + * For example, 192.168.0.1's reverse pointer is 1.0.168.192.in-addr.arpa. + */ + string PtrName() const; + + /** + * Comparison operator for IP address. + */ + friend bool operator==(const IPAddr& addr1, const IPAddr& addr2) + { + return memcmp(&addr1.in6, &addr2.in6, sizeof(in6_addr)) == 0; + } + + friend bool operator!=(const IPAddr& addr1, const IPAddr& addr2) + { + return ! (addr1 == addr2); + } + + /** + * Comparison operator IP addresses. This defines a well-defined order for + * IP addresses. However, the order does not necessarily correspond to + * their numerical values. + */ + friend bool operator<(const IPAddr& addr1, const IPAddr& addr2) + { + return memcmp(&addr1.in6, &addr2.in6, sizeof(in6_addr)) < 0; + } + + friend bool operator<=(const IPAddr& addr1, const IPAddr& addr2) + { + return addr1 < addr2 || addr1 == addr2; + } + + friend bool operator>=(const IPAddr& addr1, const IPAddr& addr2) + { + return ! ( addr1 < addr2 ); + } + + friend bool operator>(const IPAddr& addr1, const IPAddr& addr2) + { + return ! ( addr1 <= addr2 ); + } + + /** Converts the address into the type used internally by the + * inter-thread communication. + */ + void ConvertToThreadingValue(threading::Value::addr_t* v) const; + + friend HashKey* BuildConnIDHashKey(const ConnID& id); + friend HashKey* BuildExpectedConnHashKey(const ExpectedConn& c); + + unsigned int MemoryAllocation() const { return padded_sizeof(*this); } + +private: + friend class IPPrefix; + + /** + * Initializes an address instance from a string representation. + * + * @param s String containing an IP address as either a dotted IPv4 + * address or a hex IPv6 address. + */ + void Init(const std::string& s); + + in6_addr in6; // IPv6 or v4-to-v6-mapped address + + static const uint8_t v4_mapped_prefix[12]; // top 96 bits of v4-mapped-addr +}; + +inline IPAddr::IPAddr(Family family, const uint32_t* bytes, ByteOrder order) + { + if ( family == IPv4 ) + { + memcpy(in6.s6_addr, v4_mapped_prefix, sizeof(v4_mapped_prefix)); + memcpy(&in6.s6_addr[12], bytes, sizeof(uint32_t)); + + if ( order == Host ) + { + uint32_t* p = (uint32_t*) &in6.s6_addr[12]; + *p = htonl(*p); + } + } + + else + { + memcpy(in6.s6_addr, bytes, sizeof(in6.s6_addr)); + + if ( order == Host ) + { + for ( unsigned int i = 0; i < 4; ++ i) + { + uint32_t* p = (uint32_t*) &in6.s6_addr[i*4]; + *p = htonl(*p); + } + } + } + } + +inline bool IPAddr::IsLoopback() const + { + if ( GetFamily() == IPv4 ) + return in6.s6_addr[12] == 127; + + else + return ((in6.s6_addr[0] == 0) && (in6.s6_addr[1] == 0) + && (in6.s6_addr[2] == 0) && (in6.s6_addr[3] == 0) + && (in6.s6_addr[4] == 0) && (in6.s6_addr[5] == 0) + && (in6.s6_addr[6] == 0) && (in6.s6_addr[7] == 0) + && (in6.s6_addr[8] == 0) && (in6.s6_addr[9] == 0) + && (in6.s6_addr[10] == 0) && (in6.s6_addr[11] == 0) + && (in6.s6_addr[12] == 0) && (in6.s6_addr[13] == 0) + && (in6.s6_addr[14] == 0) && (in6.s6_addr[15] == 1)); + } + +inline void IPAddr::ConvertToThreadingValue(threading::Value::addr_t* v) const + { + v->family = GetFamily(); + + switch ( v->family ) { + + case IPv4: + CopyIPv4(&v->in.in4); + return; + + case IPv6: + CopyIPv6(&v->in.in6); + return; + + // Can't be reached. + abort(); + } + } + +/** + * Returns a hash key for a given ConnID. Passes ownership to caller. + */ +HashKey* BuildConnIDHashKey(const ConnID& id); + +/** + * Returns a hash key for a given ExpectedConn instance. 
Passes ownership to caller. + */ +HashKey* BuildExpectedConnHashKey(const ExpectedConn& c); + +/** + * Class storing both IPv4 and IPv6 prefixes + * (i.e., \c 192.168.1.1/16 and \c FD00::/8. + */ +class IPPrefix +{ +public: + + /** + * Constructs a prefix 0/0. + */ + IPPrefix() : length(0) {} + + /** + * Constructs a prefix instance from an IPv4 address and a prefix + * length. + * + * @param in4 The IPv4 address. + * + * @param length The prefix length in the range from 0 to 32. + */ + IPPrefix(const in4_addr& in4, uint8_t length); + + /** + * Constructs a prefix instance from an IPv6 address and a prefix + * length. + * + * @param in6 The IPv6 address. + * + * @param length The prefix length in the range from 0 to 128. + */ + IPPrefix(const in6_addr& in6, uint8_t length); + + /** + * Constructs a prefix instance from an IPAddr object and prefix length. + * + * @param addr The IP address. + * + * @param length The prefix length in the range from 0 to 128 + */ + IPPrefix(const IPAddr& addr, uint8_t length); + + /** + * Copy constructor. + */ + IPPrefix(const IPPrefix& other) + : prefix(other.prefix), length(other.length) { } + + /** + * Destructor. + */ + ~IPPrefix() { } + + /** + * Returns the prefix in the form of an IP address. The address will + * have all bits not part of the prefixed set to zero. + */ + const IPAddr& Prefix() const { return prefix; } + + /** + * Returns the bit length of the prefix, relative to the 32 bits + * of an IPv4 prefix or relative to the 128 bits of an IPv6 prefix. + */ + uint8_t Length() const + { + return prefix.GetFamily() == IPv4 ? length - 96 : length; + } + + /** + * Returns the bit length of the prefix always relative to a full + * 128 bits of an IPv6 prefix (or IPv4 mapped to IPv6). + */ + uint8_t LengthIPv6() const { return length; } + + /** Returns true if the given address is part of the prefix. + * + * @param addr The address to test. + */ + bool Contains(const IPAddr& addr) const + { + IPAddr p(addr); + p.Mask(length); + return p == prefix; + } + /** + * Assignment operator. + */ + IPPrefix& operator=(const IPPrefix& other) + { + // No self-assignment check here because it's correct without it and + // makes the common case faster. + prefix = other.prefix; + length = other.length; + return *this; + } + + /** + * Returns a string representation of the prefix. IPv4 addresses + * will be returned in dotted representation, IPv6 addresses in + * compressed hex. + */ + string AsString() const; + + operator std::string() const { return AsString(); } + + /** + * Returns a key that can be used to lookup the IP Prefix in a hash + * table. Passes ownership to caller. + */ + HashKey* GetHashKey() const + { + struct { + in6_addr ip; + uint32 len; + } key; + + key.ip = prefix.in6; + key.len = Length(); + + return new HashKey(&key, sizeof(key)); + } + + /** Converts the prefix into the type used internally by the + * inter-thread communication. + */ + void ConvertToThreadingValue(threading::Value::subnet_t* v) const + { + v->length = length; + prefix.ConvertToThreadingValue(&v->prefix); + } + + unsigned int MemoryAllocation() const { return padded_sizeof(*this); } + + /** + * Comparison operator for IP prefix. + */ + friend bool operator==(const IPPrefix& net1, const IPPrefix& net2) + { + return net1.Prefix() == net2.Prefix() && net1.Length() == net2.Length(); + } + + friend bool operator!=(const IPPrefix& net1, const IPPrefix& net2) + { + return ! (net1 == net2); + } + + /** + * Comparison operator IP prefixes. 
This defines a well-defined order for + * IP prefix. However, the order does not necessarily corresponding to their + * numerical values. + */ + friend bool operator<(const IPPrefix& net1, const IPPrefix& net2) + { + if ( net1.Prefix() < net2.Prefix() ) + return true; + + else if ( net1.Prefix() == net2.Prefix() ) + return net1.Length() < net2.Length(); + + else + return false; + } + + friend bool operator<=(const IPPrefix& net1, const IPPrefix& net2) + { + return net1 < net2 || net1 == net2; + } + + friend bool operator>=(const IPPrefix& net1, const IPPrefix& net2) + { + return ! (net1 < net2 ); + } + + friend bool operator>(const IPPrefix& net1, const IPPrefix& net2) + { + return ! ( net1 <= net2 ); + } + +private: + IPAddr prefix; // We store it as an address with the non-prefix bits masked out via Mask(). + uint8_t length; // The bit length of the prefix relative to full IPv6 addr. +}; + +#endif diff --git a/src/IRC.cc b/src/IRC.cc index caf7c492b6..1918300ba2 100644 --- a/src/IRC.cc +++ b/src/IRC.cc @@ -1188,15 +1188,10 @@ void IRC_Analyzer::DeliverStream(int length, const u_char* line, bool orig) if ( orig_status == REGISTERED && resp_status == REGISTERED && orig_zip_status == ACCEPT_ZIP && resp_zip_status == ACCEPT_ZIP ) { -#ifdef HAVE_LIBZ orig_zip_status = ZIP_LOADED; resp_zip_status = ZIP_LOADED; AddSupportAnalyzer(new ZIP_Analyzer(Conn(), true)); AddSupportAnalyzer(new ZIP_Analyzer(Conn(), false)); -#else - reporter->Error("IRC analyzer lacking libz support"); - Remove(); -#endif } return; diff --git a/src/LogMgr.h b/src/LogMgr.h deleted file mode 100644 index 10530960cb..0000000000 --- a/src/LogMgr.h +++ /dev/null @@ -1,140 +0,0 @@ -// See the file "COPYING" in the main distribution directory for copyright. -// -// A class managing log writers and filters. - -#ifndef LOGMGR_H -#define LOGMGR_H - -#include "Val.h" -#include "EventHandler.h" -#include "RemoteSerializer.h" - -class SerializationFormat; - -// Description of a log field. -struct LogField { - string name; - TypeTag type; - - LogField() { } - LogField(const LogField& other) - : name(other.name), type(other.type) { } - - // (Un-)serialize. - bool Read(SerializationFormat* fmt); - bool Write(SerializationFormat* fmt) const; -}; - -// Values as logged by a writer. -struct LogVal { - TypeTag type; - bool present; // False for unset fields. - - // The following union is a subset of BroValUnion, including only the - // types we can log directly. - struct set_t { bro_int_t size; LogVal** vals; }; - typedef set_t vec_t; - - union _val { - bro_int_t int_val; - bro_uint_t uint_val; - uint32 addr_val[NUM_ADDR_WORDS]; - subnet_type subnet_val; - double double_val; - string* string_val; - set_t set_val; - vec_t vector_val; - } val; - - LogVal(TypeTag arg_type = TYPE_ERROR, bool arg_present = true) - : type(arg_type), present(arg_present) {} - ~LogVal(); - - // (Un-)serialize. - bool Read(SerializationFormat* fmt); - bool Write(SerializationFormat* fmt) const; - - // Returns true if the type can be logged the framework. If - // `atomic_only` is true, will not permit composite types. - static bool IsCompatibleType(BroType* t, bool atomic_only=false); - -private: - LogVal(const LogVal& other) { } -}; - -class LogWriter; -class RemoteSerializer; -class RotationTimer; - -class LogMgr { -public: - LogMgr(); - ~LogMgr(); - - // These correspond to the BiFs visible on the scripting layer. The - // actual BiFs just forward here. 
- bool CreateStream(EnumVal* id, RecordVal* stream); - bool EnableStream(EnumVal* id); - bool DisableStream(EnumVal* id); - bool AddFilter(EnumVal* id, RecordVal* filter); - bool RemoveFilter(EnumVal* id, StringVal* name); - bool RemoveFilter(EnumVal* id, string name); - bool Write(EnumVal* id, RecordVal* columns); - bool SetBuf(EnumVal* id, bool enabled); // Adjusts all writers. - bool Flush(EnumVal* id); // Flushes all writers.. - -protected: - friend class LogWriter; - friend class RemoteSerializer; - friend class RotationTimer; - - //// Function also used by the RemoteSerializer. - - // Takes ownership of fields. - LogWriter* CreateWriter(EnumVal* id, EnumVal* writer, string path, - int num_fields, LogField** fields); - - // Takes ownership of values.. - bool Write(EnumVal* id, EnumVal* writer, string path, - int num_fields, LogVal** vals); - - // Announces all instantiated writers to peer. - void SendAllWritersTo(RemoteSerializer::PeerID peer); - - //// Functions safe to use by writers. - - // Signals that a file has been rotated. - bool FinishedRotation(LogWriter* writer, string new_name, string old_name, - double open, double close, bool terminating); - - // Reports an error for the given writer. - void Error(LogWriter* writer, const char* msg); - - // Deletes the values as passed into Write(). - void DeleteVals(int num_fields, LogVal** vals); - -private: - struct Filter; - struct Stream; - struct WriterInfo; - - bool TraverseRecord(Stream* stream, Filter* filter, RecordType* rt, - TableVal* include, TableVal* exclude, string path, list indices); - - LogVal** RecordToFilterVals(Stream* stream, Filter* filter, - RecordVal* columns); - - LogVal* ValToLogVal(Val* val, BroType* ty = 0); - Stream* FindStream(EnumVal* id); - void RemoveDisabledWriters(Stream* stream); - void InstallRotationTimer(WriterInfo* winfo); - void Rotate(WriterInfo* info); - Filter* FindFilter(EnumVal* id, StringVal* filter); - WriterInfo* FindWriter(LogWriter* writer); - - vector streams; // Indexed by stream enum. -}; - -extern LogMgr* log_mgr; - -#endif diff --git a/src/LogWriter.cc b/src/LogWriter.cc deleted file mode 100644 index 8584a0b0b5..0000000000 --- a/src/LogWriter.cc +++ /dev/null @@ -1,158 +0,0 @@ -// See the file "COPYING" in the main distribution directory for copyright. - -#include "util.h" -#include "LogWriter.h" - -LogWriter::LogWriter() - { - buf = 0; - buf_len = 1024; - buffering = true; - disabled = false; - } - -LogWriter::~LogWriter() - { - if ( buf ) - free(buf); - - for(int i = 0; i < num_fields; ++i) - delete fields[i]; - - delete [] fields; - } - -bool LogWriter::Init(string arg_path, int arg_num_fields, - const LogField* const * arg_fields) - { - path = arg_path; - num_fields = arg_num_fields; - fields = arg_fields; - - if ( ! DoInit(arg_path, arg_num_fields, arg_fields) ) - { - disabled = true; - return false; - } - - return true; - } - -bool LogWriter::Write(int arg_num_fields, LogVal** vals) - { - // Double-check that the arguments match. If we get this from remote, - // something might be mixed up. - if ( num_fields != arg_num_fields ) - { - DBG_LOG(DBG_LOGGING, "Number of fields don't match in LogWriter::Write() (%d vs. %d)", - arg_num_fields, num_fields); - - DeleteVals(vals); - return false; - } - - for ( int i = 0; i < num_fields; ++i ) - { - if ( vals[i]->type != fields[i]->type ) - { - DBG_LOG(DBG_LOGGING, "Field type doesn't match in LogWriter::Write() (%d vs. 
%d)", - vals[i]->type, fields[i]->type); - DeleteVals(vals); - return false; - } - } - - bool result = DoWrite(num_fields, fields, vals); - - DeleteVals(vals); - - if ( ! result ) - disabled = true; - - return result; - } - -bool LogWriter::SetBuf(bool enabled) - { - if ( enabled == buffering ) - // No change. - return true; - - buffering = enabled; - - if ( ! DoSetBuf(enabled) ) - { - disabled = true; - return false; - } - - return true; - } - -bool LogWriter::Rotate(string rotated_path, double open, - double close, bool terminating) - { - if ( ! DoRotate(rotated_path, open, close, terminating) ) - { - disabled = true; - return false; - } - - return true; - } - -bool LogWriter::Flush() - { - if ( ! DoFlush() ) - { - disabled = true; - return false; - } - - return true; - } - -void LogWriter::Finish() - { - DoFinish(); - } - -const char* LogWriter::Fmt(const char* format, ...) - { - if ( ! buf ) - buf = (char*) malloc(buf_len); - - va_list al; - va_start(al, format); - int n = safe_vsnprintf(buf, buf_len, format, al); - va_end(al); - - if ( (unsigned int) n >= buf_len ) - { // Not enough room, grow the buffer. - buf_len = n + 32; - buf = (char*) realloc(buf, buf_len); - - // Is it portable to restart? - va_start(al, format); - n = safe_vsnprintf(buf, buf_len, format, al); - va_end(al); - } - - return buf; - } - -void LogWriter::Error(const char *msg) - { - log_mgr->Error(this, msg); - } - -void LogWriter::DeleteVals(LogVal** vals) - { - log_mgr->DeleteVals(num_fields, vals); - } - -bool LogWriter::FinishedRotation(string new_name, string old_name, double open, - double close, bool terminating) - { - return log_mgr->FinishedRotation(this, new_name, old_name, open, close, terminating); - } diff --git a/src/LogWriter.h b/src/LogWriter.h deleted file mode 100644 index 1d2f9fa4b2..0000000000 --- a/src/LogWriter.h +++ /dev/null @@ -1,192 +0,0 @@ -// See the file "COPYING" in the main distribution directory for copyright. -// -// Interface API for a log writer backend. The LogMgr creates a separate -// writer instance of pair of (writer type, output path). -// -// Note thay classes derived from LogWriter must be fully thread-safe and not -// use any non-thread-safe Bro functionality (which includes almost -// everything ...). In particular, do not use fmt() but LogWriter::Fmt()!. -// -// The one exception to this rule is the constructor: it is guaranteed to be -// executed inside the main thread and can thus in particular access global -// script variables. - -#ifndef LOGWRITER_H -#define LOGWRITER_H - -#include "LogMgr.h" -#include "BroString.h" - -class LogWriter { -public: - LogWriter(); - virtual ~LogWriter(); - - //// Interface methods to interact with the writer. Note that these - //// methods are not necessarily thread-safe and must be called only - //// from the main thread (which will typically mean only from the - //// LogMgr). In particular, they must not be called from the - //// writer's derived implementation. - - // One-time initialization of the writer to define the logged fields. - // Interpretation of "path" is left to the writer, and will be - // corresponding the value configured on the script-level. - // - // Returns false if an error occured, in which case the writer must - // not be used further. - // - // The new instance takes ownership of "fields", and will delete them - // when done. - bool Init(string path, int num_fields, const LogField* const * fields); - - // Writes one log entry. 
The method takes ownership of "vals" and - // will return immediately after queueing the write request, which is - // potentially before output has actually been written out. - // - // num_fields and the types of the LogVals must match what was passed - // to Init(). - // - // Returns false if an error occured, in which case the writer must - // not be used any further. - bool Write(int num_fields, LogVal** vals); - - // Sets the buffering status for the writer, if the writer supports - // that. (If not, it will be ignored). - bool SetBuf(bool enabled); - - // Flushes any currently buffered output, if the writer supports - // that. (If not, it will be ignored). - bool Flush(); - - // Triggers rotation, if the writer supports that. (If not, it will - // be ignored). - bool Rotate(string rotated_path, double open, double close, bool terminating); - - // Finishes writing to this logger regularly. Must not be called if - // an error has been indicated earlier. After calling this, no - // further writing must be performed. - void Finish(); - - //// Thread-safe methods that may be called from the writer - //// implementation. - - // The following methods return the information as passed to Init(). - const string Path() const { return path; } - int NumFields() const { return num_fields; } - const LogField* const * Fields() const { return fields; } - -protected: - // Methods for writers to override. If any of these returs false, it - // will be assumed that a fatal error has occured that prevents the - // writer from further operation. It will then be disabled and - // deleted. When return false, the writer should also report the - // error via Error(). Note that even if a writer does not support the - // functionality for one these methods (like rotation), it must still - // return true if that is not to be considered a fatal error. - // - // Called once for initialization of the writer. - virtual bool DoInit(string path, int num_fields, - const LogField* const * fields) = 0; - - // Called once per log entry to record. - virtual bool DoWrite(int num_fields, const LogField* const * fields, - LogVal** vals) = 0; - - // Called when the buffering status for this writer is changed. If - // buffering is disabled, the writer should attempt to write out - // information as quickly as possible even if doing so may have a - // performance impact. If enabled (which is the default), it may - // buffer data as helpful and write it out later in a way optimized - // for performance. The current buffering state can be queried via - // IsBuf(). - // - // A writer may ignore buffering changes if it doesn't fit with its - // semantics (but must still return true in that case). - virtual bool DoSetBuf(bool enabled) = 0; - - // Called to flush any currently buffered output. - // - // A writer may ignore flush requests if it doesn't fit with its - // semantics (but must still return true in that case). - virtual bool DoFlush() = 0; - - // Called when a log output is to be rotated. Most directly this only - // applies to writers writing into files, which should then close the - // current file and open a new one. However, a writer may also - // trigger other apppropiate actions if semantics are similar. - // - // Once rotation has finished, the implementation should call - // RotationDone() to signal the log manager that potential - // postprocessors can now run. - // - // "rotate_path" reflects the path to where the rotated output is to - // be moved, with specifics depending on the writer. 
It should - // generally be interpreted in a way consistent with that of "path" - // as passed into DoInit(). As an example, for file-based output, - // "rotate_path" could be the original filename extended with a - // timestamp indicating the time of the rotation. - // - // "open" and "close" are the network time's when the *current* file - // was opened and closed, respectively. - // - // "terminating" indicated whether the rotation request occurs due - // the main Bro prcoess terminating (and not because we've reach a - // regularly scheduled time for rotation). - // - // A writer may ignore rotation requests if it doesn't fit with its - // semantics (but must still return true in that case). - virtual bool DoRotate(string rotated_path, double open, double close, - bool terminating) = 0; - - // Called once on termination. Not called when any of the other - // methods has previously signaled an error, i.e., executing this - // method signals a regular shutdown of the writer. - virtual void DoFinish() = 0; - - //// Methods for writers to use. These are thread-safe. - - // A thread-safe version of fmt(). - const char* Fmt(const char* format, ...); - - // Returns the current buffering state. - bool IsBuf() { return buffering; } - - // Reports an error to the user. - void Error(const char *msg); - - // Signals to the log manager that a file has been rotated. - // - // new_name: The filename of the rotated file. old_name: The filename - // of the origina file. - // - // open/close: The timestamps when the original file was opened and - // closed, respectively. - // - // terminating: True if rotation request occured due to the main Bro - // process shutting down. - bool FinishedRotation(string new_name, string old_name, double open, - double close, bool terminating); - -private: - friend class LogMgr; - - // When an error occurs, we call this method to set a flag marking - // the writer as disabled. The LogMgr will check the flag later and - // remove the writer. - bool Disabled() { return disabled; } - - // Deletes the values as passed into Write(). - void DeleteVals(LogVal** vals); - - string path; - int num_fields; - const LogField* const * fields; - bool buffering; - bool disabled; - - // For implementing Fmt(). - char* buf; - unsigned int buf_len; -}; - -#endif diff --git a/src/LogWriterAscii.cc b/src/LogWriterAscii.cc deleted file mode 100644 index 9fc71789d8..0000000000 --- a/src/LogWriterAscii.cc +++ /dev/null @@ -1,318 +0,0 @@ -// See the file "COPYING" in the main distribution directory for copyright. - -#include -#include - -#include "LogWriterAscii.h" -#include "NetVar.h" - -/** - * Takes a string, escapes each character into its equivalent hex code (\x##), and - * returns a string containing all escaped values. 
- * - * @param str string to escape - * @return A std::string containing a list of escaped hex values of the form \x## - */ -static string get_escaped_string(const std::string& str) -{ - char tbuf[16]; - string esc = ""; - - for ( size_t i = 0; i < str.length(); ++i ) - { - snprintf(tbuf, sizeof(tbuf), "\\x%02x", str[i]); - esc += tbuf; - } - - return esc; -} - -LogWriterAscii::LogWriterAscii() - { - file = 0; - - output_to_stdout = BifConst::LogAscii::output_to_stdout; - include_header = BifConst::LogAscii::include_header; - - separator_len = BifConst::LogAscii::separator->Len(); - separator = new char[separator_len]; - memcpy(separator, BifConst::LogAscii::separator->Bytes(), - separator_len); - - set_separator_len = BifConst::LogAscii::set_separator->Len(); - set_separator = new char[set_separator_len]; - memcpy(set_separator, BifConst::LogAscii::set_separator->Bytes(), - set_separator_len); - - empty_field_len = BifConst::LogAscii::empty_field->Len(); - empty_field = new char[empty_field_len]; - memcpy(empty_field, BifConst::LogAscii::empty_field->Bytes(), - empty_field_len); - - unset_field_len = BifConst::LogAscii::unset_field->Len(); - unset_field = new char[unset_field_len]; - memcpy(unset_field, BifConst::LogAscii::unset_field->Bytes(), - unset_field_len); - - header_prefix_len = BifConst::LogAscii::header_prefix->Len(); - header_prefix = new char[header_prefix_len]; - memcpy(header_prefix, BifConst::LogAscii::header_prefix->Bytes(), - header_prefix_len); - - desc.SetEscape(separator, separator_len); - } - -LogWriterAscii::~LogWriterAscii() - { - if ( file ) - fclose(file); - - delete [] separator; - delete [] set_separator; - delete [] empty_field; - delete [] unset_field; - delete [] header_prefix; - } - -bool LogWriterAscii::WriteHeaderField(const string& key, const string& val) - { - string str = string(header_prefix, header_prefix_len) + - key + string(separator, separator_len) + val + "\n"; - - return (fwrite(str.c_str(), str.length(), 1, file) == 1); - } - -bool LogWriterAscii::DoInit(string path, int num_fields, - const LogField* const * fields) - { - if ( output_to_stdout ) - path = "/dev/stdout"; - - fname = IsSpecial(path) ? path : path + ".log"; - - if ( ! (file = fopen(fname.c_str(), "w")) ) - { - Error(Fmt("cannot open %s: %s", fname.c_str(), - strerror(errno))); - - return false; - } - - if ( include_header ) - { - string str = string(header_prefix, header_prefix_len) - + "separator " // Always use space as separator here. - + get_escaped_string(string(separator, separator_len)) - + "\n"; - - if( fwrite(str.c_str(), str.length(), 1, file) != 1 ) - goto write_error; - - if ( ! WriteHeaderField("path", path) ) - goto write_error; - - string names; - string types; - - for ( int i = 0; i < num_fields; ++i ) - { - if ( i > 0 ) - { - names += string(separator, separator_len); - types += string(separator, separator_len); - } - - const LogField* field = fields[i]; - names += field->name; - types += type_name(field->type); - } - - if ( ! (WriteHeaderField("fields", names) - && WriteHeaderField("types", types)) ) - goto write_error; - } - - return true; - -write_error: - Error(Fmt("error writing to %s: %s", fname.c_str(), strerror(errno))); - return false; - } - -bool LogWriterAscii::DoFlush() - { - fflush(file); - return true; - } - -void LogWriterAscii::DoFinish() - { - } - -bool LogWriterAscii::DoWriteOne(ODesc* desc, LogVal* val, const LogField* field) - { - if ( ! 
val->present ) - { - desc->AddN(unset_field, unset_field_len); - return true; - } - - switch ( val->type ) { - - case TYPE_BOOL: - desc->Add(val->val.int_val ? "T" : "F"); - break; - - case TYPE_INT: - desc->Add(val->val.int_val); - break; - - case TYPE_COUNT: - case TYPE_COUNTER: - case TYPE_PORT: - desc->Add(val->val.uint_val); - break; - - case TYPE_SUBNET: - desc->Add(dotted_addr(val->val.subnet_val.net)); - desc->Add("/"); - desc->Add(val->val.subnet_val.width); - break; - - case TYPE_ADDR: - desc->Add(dotted_addr(val->val.addr_val)); - break; - - case TYPE_TIME: - case TYPE_INTERVAL: - char buf[256]; - modp_dtoa(val->val.double_val, buf, 6); - desc->Add(buf); - break; - - case TYPE_DOUBLE: - desc->Add(val->val.double_val); - break; - - case TYPE_ENUM: - case TYPE_STRING: - case TYPE_FILE: - case TYPE_FUNC: - { - int size = val->val.string_val->size(); - if ( size ) - desc->AddN(val->val.string_val->data(), val->val.string_val->size()); - else - desc->AddN(empty_field, empty_field_len); - break; - } - - case TYPE_TABLE: - { - if ( ! val->val.set_val.size ) - { - desc->AddN(empty_field, empty_field_len); - break; - } - - for ( int j = 0; j < val->val.set_val.size; j++ ) - { - if ( j > 0 ) - desc->AddN(set_separator, set_separator_len); - - if ( ! DoWriteOne(desc, val->val.set_val.vals[j], field) ) - return false; - } - - break; - } - - case TYPE_VECTOR: - { - if ( ! val->val.vector_val.size ) - { - desc->AddN(empty_field, empty_field_len); - break; - } - - for ( int j = 0; j < val->val.vector_val.size; j++ ) - { - if ( j > 0 ) - desc->AddN(set_separator, set_separator_len); - - if ( ! DoWriteOne(desc, val->val.vector_val.vals[j], field) ) - return false; - } - - break; - } - - default: - Error(Fmt("unsupported field format %d for %s", val->type, - field->name.c_str())); - return false; - } - - return true; - } - -bool LogWriterAscii::DoWrite(int num_fields, const LogField* const * fields, - LogVal** vals) - { - if ( ! file ) - DoInit(Path(), NumFields(), Fields()); - - desc.Clear(); - - for ( int i = 0; i < num_fields; i++ ) - { - if ( i > 0 ) - desc.AddRaw(separator, separator_len); - - if ( ! DoWriteOne(&desc, vals[i], fields[i]) ) - return false; - } - - desc.AddRaw("\n", 1); - - if ( fwrite(desc.Bytes(), desc.Len(), 1, file) != 1 ) - { - Error(Fmt("error writing to %s: %s", fname.c_str(), strerror(errno))); - return false; - } - - if ( IsBuf() ) - fflush(file); - - return true; - } - -bool LogWriterAscii::DoRotate(string rotated_path, double open, - double close, bool terminating) - { - // Don't rotate special files or if there's not one currently open. - if ( ! file || IsSpecial(Path()) ) - return true; - - fclose(file); - file = 0; - - string nname = rotated_path + ".log"; - rename(fname.c_str(), nname.c_str()); - - if ( ! FinishedRotation(nname, fname, open, close, terminating) ) - { - Error(Fmt("error rotating %s to %s", fname.c_str(), nname.c_str())); - return false; - } - - return true; - } - -bool LogWriterAscii::DoSetBuf(bool enabled) - { - // Nothing to do. - return true; - } - - diff --git a/src/LogWriterAscii.h b/src/LogWriterAscii.h deleted file mode 100644 index 7755f71d06..0000000000 --- a/src/LogWriterAscii.h +++ /dev/null @@ -1,57 +0,0 @@ -// See the file "COPYING" in the main distribution directory for copyright. -// -// Log writer for delimiter-separated ASCII logs. 
- -#ifndef LOGWRITERASCII_H -#define LOGWRITERASCII_H - -#include "LogWriter.h" - -class LogWriterAscii : public LogWriter { -public: - LogWriterAscii(); - ~LogWriterAscii(); - - static LogWriter* Instantiate() { return new LogWriterAscii; } - -protected: - virtual bool DoInit(string path, int num_fields, - const LogField* const * fields); - virtual bool DoWrite(int num_fields, const LogField* const * fields, - LogVal** vals); - virtual bool DoSetBuf(bool enabled); - virtual bool DoRotate(string rotated_path, double open, double close, - bool terminating); - virtual bool DoFlush(); - virtual void DoFinish(); - -private: - bool IsSpecial(string path) { return path.find("/dev/") == 0; } - bool DoWriteOne(ODesc* desc, LogVal* val, const LogField* field); - bool WriteHeaderField(const string& key, const string& value); - - FILE* file; - string fname; - ODesc desc; - - // Options set from the script-level. - bool output_to_stdout; - bool include_header; - - char* separator; - int separator_len; - - char* set_separator; - int set_separator_len; - - char* empty_field; - int empty_field_len; - - char* unset_field; - int unset_field_len; - - char* header_prefix; - int header_prefix_len; -}; - -#endif diff --git a/src/LogWriterNone.cc b/src/LogWriterNone.cc deleted file mode 100644 index 592772afdb..0000000000 --- a/src/LogWriterNone.cc +++ /dev/null @@ -1,16 +0,0 @@ - -#include "LogWriterNone.h" - -bool LogWriterNone::DoRotate(string rotated_path, double open, - double close, bool terminating) - { - if ( ! FinishedRotation(string("/dev/null"), Path(), open, close, terminating)) - { - Error(Fmt("error rotating %s", Path().c_str())); - return false; - } - - return true; - } - - diff --git a/src/LogWriterNone.h b/src/LogWriterNone.h deleted file mode 100644 index 3811a19469..0000000000 --- a/src/LogWriterNone.h +++ /dev/null @@ -1,30 +0,0 @@ -// See the file "COPYING" in the main distribution directory for copyright. -// -// Dummy log writer that just discards everything (but still pretends to rotate). - -#ifndef LOGWRITERNONE_H -#define LOGWRITERNONE_H - -#include "LogWriter.h" - -class LogWriterNone : public LogWriter { -public: - LogWriterNone() {} - ~LogWriterNone() {}; - - static LogWriter* Instantiate() { return new LogWriterNone; } - -protected: - virtual bool DoInit(string path, int num_fields, - const LogField* const * fields) { return true; } - - virtual bool DoWrite(int num_fields, const LogField* const * fields, - LogVal** vals) { return true; } - virtual bool DoSetBuf(bool enabled) { return true; } - virtual bool DoRotate(string rotated_path, double open, double close, - bool terminating); - virtual bool DoFlush() { return true; } - virtual void DoFinish() {} -}; - -#endif diff --git a/src/Login.cc b/src/Login.cc index 56efd12f53..e626fb3a0a 100644 --- a/src/Login.cc +++ b/src/Login.cc @@ -38,7 +38,7 @@ Login_Analyzer::Login_Analyzer(AnalyzerTag::Tag tag, Connection* conn) if ( ! 
re_skip_authentication ) { -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG HeapLeakChecker::Disabler disabler; #endif re_skip_authentication = init_RE(skip_authentication); diff --git a/src/MIME.cc b/src/MIME.cc index 103cf149ef..4a7c0268b0 100644 --- a/src/MIME.cc +++ b/src/MIME.cc @@ -4,6 +4,7 @@ #include "MIME.h" #include "Event.h" #include "Reporter.h" +#include "digest.h" // Here are a few things to do: // @@ -1008,7 +1009,7 @@ void MIME_Mail::Done() if ( compute_content_hash && mime_content_hash ) { u_char* digest = new u_char[16]; - md5_finish(&md5_hash, digest); + md5_final(&md5_hash, digest); val_list* vl = new val_list; vl->append(analyzer->BuildConnVal()); @@ -1096,7 +1097,7 @@ void MIME_Mail::SubmitData(int len, const char* buf) if ( compute_content_hash ) { content_hash_length += len; - md5_append(&md5_hash, (const u_char*) buf, len); + md5_update(&md5_hash, (const u_char*) buf, len); } if ( mime_entity_data || mime_all_data ) diff --git a/src/MIME.h b/src/MIME.h index 52d943fb15..ffff30e387 100644 --- a/src/MIME.h +++ b/src/MIME.h @@ -2,13 +2,12 @@ #define mime_h #include - +#include #include #include #include using namespace std; -#include "md5.h" #include "Base64.h" #include "BroString.h" #include "Analyzer.h" @@ -248,7 +247,7 @@ protected: int buffer_offset; int compute_content_hash; int content_hash_length; - md5_state_t md5_hash; + MD5_CTX md5_hash; vector entity_content; vector all_content; diff --git a/src/NCP.cc b/src/NCP.cc index 83378a09a7..edd882747c 100644 --- a/src/NCP.cc +++ b/src/NCP.cc @@ -225,5 +225,7 @@ NCP_Analyzer::NCP_Analyzer(Connection* conn) NCP_Analyzer::~NCP_Analyzer() { delete session; + delete o_ncp; + delete r_ncp; } diff --git a/src/Net.cc b/src/Net.cc index 2d8ee85353..328998b011 100644 --- a/src/Net.cc +++ b/src/Net.cc @@ -42,7 +42,6 @@ extern int select(int, fd_set *, fd_set *, fd_set *, struct timeval *); PList(PktSrc) pkt_srcs; // FIXME: We should really merge PktDumper and PacketDumper. -// It's on my to-do [Robin]. PktDumper* pkt_dumper = 0; int reading_live = 0; @@ -70,6 +69,7 @@ PktSrc* current_pktsrc = 0; IOSource* current_iosrc; std::list files_scanned; +std::vector sig_files; RETSIGTYPE watchdog(int /* signo */) { @@ -143,7 +143,7 @@ RETSIGTYPE watchdog(int /* signo */) return RETSIGVAL; } -void net_init(name_list& interfaces, name_list& readfiles, +void net_init(name_list& interfaces, name_list& readfiles, name_list& netflows, name_list& flowfiles, const char* writefile, const char* filter, const char* secondary_filter, int do_watchdog) @@ -248,12 +248,14 @@ void net_init(name_list& interfaces, name_list& readfiles, FlowSocketSrc* fs = new FlowSocketSrc(netflows[i]); if ( ! fs->IsOpen() ) + { reporter->Error("%s: problem with netflow socket %s - %s\n", prog, netflows[i], fs->ErrorMsg()); - else - { - io_sources.Register(fs); + delete fs; } + + else + io_sources.Register(fs); } } @@ -454,6 +456,7 @@ void net_run() // date on timers and events. network_time = ct; expire_timers(); + usleep(1); // Just yield. } } @@ -483,6 +486,8 @@ void net_run() // since Bro timers are not high-precision anyway.) if ( ! 
using_communication ) usleep(100000); + else + usleep(1000); // Flawfinder says about usleep: // diff --git a/src/Net.h b/src/Net.h index 9e68cc025b..5b959d1688 100644 --- a/src/Net.h +++ b/src/Net.h @@ -111,5 +111,6 @@ struct ScannedFile { }; extern std::list files_scanned; +extern std::vector sig_files; #endif diff --git a/src/NetVar.cc b/src/NetVar.cc index 25e4f7a0bc..248ae15e1a 100644 --- a/src/NetVar.cc +++ b/src/NetVar.cc @@ -16,7 +16,9 @@ RecordType* pcap_packet; RecordType* signature_state; EnumType* transport_proto; TableType* string_set; +TableType* string_array; TableType* count_set; +VectorType* string_vec; int watchdog_interval; @@ -29,7 +31,6 @@ int tcp_SYN_ack_ok; int tcp_match_undelivered; int encap_hdr_size; -int udp_tunnel_port; double frag_timeout; @@ -45,17 +46,10 @@ int tcp_max_initial_window; int tcp_max_above_hole_without_any_acks; int tcp_excessive_data_without_further_acks; -int ssl_compare_cipherspecs; -int ssl_analyze_certificates; -int ssl_store_certificates; -int ssl_verify_certificates; -int ssl_store_key_material; -int ssl_max_cipherspec_size; -StringVal* ssl_store_cert_path; -StringVal* x509_trusted_cert_path; -TableType* cipher_suites_list; RecordType* x509_type; +RecordType* socks_address; + double non_analyzed_lifetime; double tcp_inactivity_timeout; double udp_inactivity_timeout; @@ -174,6 +168,7 @@ TableVal* preserve_orig_addr; TableVal* preserve_resp_addr; TableVal* preserve_other_addr; +int max_files_in_cache; double log_rotate_interval; double log_max_size; RecordType* rotate_info; @@ -190,8 +185,6 @@ StringVal* ssl_ca_certificate; StringVal* ssl_private_key; StringVal* ssl_passphrase; -StringVal* x509_crl_file; - Val* profiling_file; double profiling_interval; int expensive_profiling_multiple; @@ -211,11 +204,6 @@ int sig_max_group_size; int enable_syslog; -int use_connection_compressor; -int cc_handle_resets; -int cc_handle_only_syns; -int cc_instantiate_on_data; - TableType* irc_join_list; RecordType* irc_join_info; TableVal* irc_servers; @@ -255,6 +243,7 @@ StringVal* cmd_line_bpf_filter; #include "types.bif.netvar_def" #include "event.bif.netvar_def" #include "logging.bif.netvar_def" +#include "input.bif.netvar_def" #include "reporter.bif.netvar_def" void init_event_handlers() @@ -271,6 +260,7 @@ void init_general_global_var() state_dir = internal_val("state_dir")->AsStringVal(); state_write_delay = opt_internal_double("state_write_delay"); + max_files_in_cache = opt_internal_int("max_files_in_cache"); log_rotate_interval = opt_internal_double("log_rotate_interval"); log_max_size = opt_internal_double("log_max_size"); rotate_info = internal_type("rotate_info")->AsRecordType(); @@ -315,6 +305,7 @@ void init_net_var() #include "const.bif.netvar_init" #include "types.bif.netvar_init" #include "logging.bif.netvar_init" +#include "input.bif.netvar_init" #include "reporter.bif.netvar_init" conn_id = internal_type("conn_id")->AsRecordType(); @@ -328,6 +319,8 @@ void init_net_var() pcap_packet = internal_type("pcap_packet")->AsRecordType(); transport_proto = internal_type("transport_proto")->AsEnumType(); string_set = internal_type("string_set")->AsTableType(); + string_array = internal_type("string_array")->AsTableType(); + string_vec = internal_type("string_vec")->AsVectorType(); ignore_checksums = opt_internal_int("ignore_checksums"); partial_connection_ok = opt_internal_int("partial_connection_ok"); @@ -336,8 +329,6 @@ void init_net_var() encap_hdr_size = opt_internal_int("encap_hdr_size"); - udp_tunnel_port = opt_internal_int("udp_tunnel_port") & 
~UDP_PORT_MASK; - frag_timeout = opt_internal_double("frag_timeout"); tcp_SYN_timeout = opt_internal_double("tcp_SYN_timeout"); @@ -354,17 +345,9 @@ void init_net_var() tcp_excessive_data_without_further_acks = opt_internal_int("tcp_excessive_data_without_further_acks"); - ssl_compare_cipherspecs = opt_internal_int("ssl_compare_cipherspecs"); - ssl_analyze_certificates = opt_internal_int("ssl_analyze_certificates"); - ssl_store_certificates = opt_internal_int("ssl_store_certificates"); - ssl_verify_certificates = opt_internal_int("ssl_verify_certificates"); - ssl_store_key_material = opt_internal_int("ssl_store_key_material"); - ssl_max_cipherspec_size = opt_internal_int("ssl_max_cipherspec_size"); - - x509_trusted_cert_path = opt_internal_string("X509_trusted_cert_path"); - ssl_store_cert_path = opt_internal_string("ssl_store_cert_path"); x509_type = internal_type("X509")->AsRecordType(); - x509_crl_file = opt_internal_string("X509_crl_file"); + + socks_address = internal_type("SOCKS::Address")->AsRecordType(); non_analyzed_lifetime = opt_internal_double("non_analyzed_lifetime"); tcp_inactivity_timeout = opt_internal_double("tcp_inactivity_timeout"); @@ -521,12 +504,6 @@ void init_net_var() gap_report_freq = opt_internal_double("gap_report_freq"); - use_connection_compressor = - opt_internal_int("use_connection_compressor"); - cc_handle_resets = opt_internal_int("cc_handle_resets"); - cc_handle_only_syns = opt_internal_int("cc_handle_only_syns"); - cc_instantiate_on_data = opt_internal_int("cc_instantiate_on_data"); - irc_join_info = internal_type("irc_join_info")->AsRecordType(); irc_join_list = internal_type("irc_join_list")->AsTableType(); irc_servers = internal_val("irc_servers")->AsTableVal(); diff --git a/src/NetVar.h b/src/NetVar.h index f8def230c0..2561fa0ad9 100644 --- a/src/NetVar.h +++ b/src/NetVar.h @@ -19,7 +19,9 @@ extern RecordType* SYN_packet; extern RecordType* pcap_packet; extern EnumType* transport_proto; extern TableType* string_set; +extern TableType* string_array; extern TableType* count_set; +extern VectorType* string_vec; extern int watchdog_interval; @@ -32,7 +34,6 @@ extern int tcp_SYN_ack_ok; extern int tcp_match_undelivered; extern int encap_hdr_size; -extern int udp_tunnel_port; extern double frag_timeout; @@ -48,17 +49,9 @@ extern int tcp_max_initial_window; extern int tcp_max_above_hole_without_any_acks; extern int tcp_excessive_data_without_further_acks; -// see policy/ssl.bro for details -extern int ssl_compare_cipherspecs; -extern int ssl_analyze_certificates; -extern int ssl_store_certificates; -extern int ssl_verify_certificates; -extern int ssl_store_key_material; -extern int ssl_max_cipherspec_size; -extern StringVal* ssl_store_cert_path; -extern StringVal* x509_trusted_cert_path; extern RecordType* x509_type; -extern StringVal* x509_crl_file; + +extern RecordType* socks_address; extern double non_analyzed_lifetime; extern double tcp_inactivity_timeout; @@ -178,6 +171,7 @@ extern double connection_status_update_interval; extern StringVal* state_dir; extern double state_write_delay; +extern int max_files_in_cache; extern double log_rotate_interval; extern double log_max_size; extern RecordType* rotate_info; @@ -214,11 +208,6 @@ extern int sig_max_group_size; extern int enable_syslog; -extern int use_connection_compressor; -extern int cc_handle_resets; -extern int cc_handle_only_syns; -extern int cc_instantiate_on_data; - extern TableType* irc_join_list; extern RecordType* irc_join_info; extern TableVal* irc_servers; @@ -264,6 +253,7 @@ extern void 
init_net_var(); #include "types.bif.netvar_h" #include "event.bif.netvar_h" #include "logging.bif.netvar_h" +#include "input.bif.netvar_h" #include "reporter.bif.netvar_h" #endif diff --git a/src/NetbiosSSN.cc b/src/NetbiosSSN.cc index 274e76f137..362d974956 100644 --- a/src/NetbiosSSN.cc +++ b/src/NetbiosSSN.cc @@ -110,7 +110,7 @@ int NetbiosSSN_Interpreter::ParseDatagram(const u_char* data, int len, return 0; } - + int NetbiosSSN_Interpreter::ParseBroadcast(const u_char* data, int len, int is_query) { @@ -131,6 +131,9 @@ int NetbiosSSN_Interpreter::ParseBroadcast(const u_char* data, int len, return 0; } + delete srcname; + delete dstname; + return 0; } diff --git a/src/OSFinger.cc b/src/OSFinger.cc index f8032d60ee..3368a8e40c 100644 --- a/src/OSFinger.cc +++ b/src/OSFinger.cc @@ -63,15 +63,16 @@ OSFingerprint::OSFingerprint(FingerprintMode arg_mode) } } -bool OSFingerprint::CacheMatch(uint32 addr, int id) +bool OSFingerprint::CacheMatch(const IPAddr& addr, int id) { - HashKey key = HashKey(&addr, 1); + HashKey* key = addr.GetHashKey(); int* pid = new int; *pid=id; - int* prev = os_matches.Insert(&key, pid); + int* prev = os_matches.Insert(key, pid); bool ret = (prev ? *prev != id : 1); if (prev) delete prev; + delete key; return ret; } diff --git a/src/OSFinger.h b/src/OSFinger.h index e5a0829640..0968fb5fd3 100644 --- a/src/OSFinger.h +++ b/src/OSFinger.h @@ -14,6 +14,7 @@ #include "util.h" #include "Dict.h" #include "Reporter.h" +#include "IPAddr.h" // Size limit for size wildcards. #define PACKET_BIG 100 @@ -88,7 +89,7 @@ public: int FindMatch(struct os_type* retval, uint16 tot, uint8 DF_flag, uint8 TTL, uint16 WSS, uint8 ocnt, uint8* op, uint16 MSS, uint8 win_scale, uint32 tstamp, uint32 quirks, uint8 ECN) const; - bool CacheMatch(uint32 addr, int id); + bool CacheMatch(const IPAddr& addr, int id); int Get_OS_From_SYN(struct os_type* retval, uint16 tot, uint8 DF_flag, uint8 TTL, uint16 WSS, diff --git a/src/PIA.cc b/src/PIA.cc index 6a00e7e1d0..9adb4ccab3 100644 --- a/src/PIA.cc +++ b/src/PIA.cc @@ -196,20 +196,20 @@ void PIA_TCP::FirstPacket(bool is_orig, const IP_Hdr* ip) ip4->ip_p = IPPROTO_TCP; // Cast to const so that it doesn't delete it. 
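			// Note on the replacement below: the second constructor argument
			// presumably corresponds to IP_Hdr's 'del' flag, so passing
			// 'false' keeps the existing behaviour of not taking ownership
			// of (and never deleting) 'ip4'.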
- ip4_hdr = new IP_Hdr((const struct ip*) ip4); + ip4_hdr = new IP_Hdr(ip4, false); } if ( is_orig ) { - copy_addr(Conn()->OrigAddr(), &ip4->ip_src.s_addr); - copy_addr(Conn()->RespAddr(), &ip4->ip_dst.s_addr); + Conn()->OrigAddr().CopyIPv4(&ip4->ip_src); + Conn()->RespAddr().CopyIPv4(&ip4->ip_dst); tcp4->th_sport = htons(Conn()->OrigPort()); tcp4->th_dport = htons(Conn()->RespPort()); } else { - copy_addr(Conn()->RespAddr(), &ip4->ip_src.s_addr); - copy_addr(Conn()->OrigAddr(), &ip4->ip_dst.s_addr); + Conn()->RespAddr().CopyIPv4(&ip4->ip_src); + Conn()->OrigAddr().CopyIPv4(&ip4->ip_dst); tcp4->th_sport = htons(Conn()->RespPort()); tcp4->th_dport = htons(Conn()->OrigPort()); } diff --git a/src/POP3.cc b/src/POP3.cc index 4ffe67ef48..3075e76507 100644 --- a/src/POP3.cc +++ b/src/POP3.cc @@ -158,6 +158,7 @@ void POP3_Analyzer::ProcessRequest(int length, const char* line) if ( e >= end ) { Weird("pop3_malformed_auth_plain"); + delete decoded; return; } @@ -167,6 +168,7 @@ void POP3_Analyzer::ProcessRequest(int length, const char* line) if ( s >= end ) { Weird("pop3_malformed_auth_plain"); + delete decoded; return; } diff --git a/src/PacketFilter.cc b/src/PacketFilter.cc index 9b6b881ce5..4fb3b1c8f7 100644 --- a/src/PacketFilter.cc +++ b/src/PacketFilter.cc @@ -1,11 +1,11 @@ #include "PacketFilter.h" -void PacketFilter::AddSrc(addr_type src, uint32 tcp_flags, double probability) +void PacketFilter::AddSrc(const IPAddr& src, uint32 tcp_flags, double probability) { Filter* f = new Filter; f->tcp_flags = tcp_flags; f->probability = uint32(probability * RAND_MAX); - src_filter.Insert(src, NUM_ADDR_WORDS * 32, f); + src_filter.Insert(src, 128, f); } void PacketFilter::AddSrc(Val* src, uint32 tcp_flags, double probability) @@ -16,12 +16,12 @@ void PacketFilter::AddSrc(Val* src, uint32 tcp_flags, double probability) src_filter.Insert(src, f); } -void PacketFilter::AddDst(addr_type dst, uint32 tcp_flags, double probability) +void PacketFilter::AddDst(const IPAddr& dst, uint32 tcp_flags, double probability) { Filter* f = new Filter; f->tcp_flags = tcp_flags; f->probability = uint32(probability * RAND_MAX); - dst_filter.Insert(dst, NUM_ADDR_WORDS * 32, f); + dst_filter.Insert(dst, 128, f); } void PacketFilter::AddDst(Val* dst, uint32 tcp_flags, double probability) @@ -32,9 +32,9 @@ void PacketFilter::AddDst(Val* dst, uint32 tcp_flags, double probability) dst_filter.Insert(dst, f); } -bool PacketFilter::RemoveSrc(addr_type src) +bool PacketFilter::RemoveSrc(const IPAddr& src) { - return src_filter.Remove(src, NUM_ADDR_WORDS * 32) != 0; + return src_filter.Remove(src, 128) != 0; } bool PacketFilter::RemoveSrc(Val* src) @@ -42,9 +42,9 @@ bool PacketFilter::RemoveSrc(Val* src) return src_filter.Remove(src) != NULL; } -bool PacketFilter::RemoveDst(addr_type dst) +bool PacketFilter::RemoveDst(const IPAddr& dst) { - return dst_filter.Remove(dst, NUM_ADDR_WORDS * 32) != NULL; + return dst_filter.Remove(dst, 128) != NULL; } bool PacketFilter::RemoveDst(Val* dst) @@ -54,21 +54,11 @@ bool PacketFilter::RemoveDst(Val* dst) bool PacketFilter::Match(const IP_Hdr* ip, int len, int caplen) { -#ifdef BROv6 - Filter* f = (Filter*) src_filter.Lookup(ip->SrcAddr(), - NUM_ADDR_WORDS * 32); -#else - Filter* f = (Filter*) src_filter.Lookup(*ip->SrcAddr(), - NUM_ADDR_WORDS * 32); -#endif + Filter* f = (Filter*) src_filter.Lookup(ip->SrcAddr(), 128); if ( f ) return MatchFilter(*f, *ip, len, caplen); -#ifdef BROv6 - f = (Filter*) dst_filter.Lookup(ip->DstAddr(), NUM_ADDR_WORDS * 32); -#else - f = (Filter*) 
dst_filter.Lookup(*ip->DstAddr(), NUM_ADDR_WORDS * 32); -#endif + f = (Filter*) dst_filter.Lookup(ip->DstAddr(), 128); if ( f ) return MatchFilter(*f, *ip, len, caplen); @@ -81,9 +71,7 @@ bool PacketFilter::MatchFilter(const Filter& f, const IP_Hdr& ip, if ( ip.NextProto() == IPPROTO_TCP && f.tcp_flags ) { // Caution! The packet sanity checks have not been performed yet - const struct ip* ip4 = ip.IP4_Hdr(); - - int ip_hdr_len = ip4->ip_hl * 4; + int ip_hdr_len = ip.HdrLen(); len -= ip_hdr_len; // remove IP header caplen -= ip_hdr_len; @@ -92,8 +80,7 @@ bool PacketFilter::MatchFilter(const Filter& f, const IP_Hdr& ip, // Packet too short, will be dropped anyway. return false; - const struct tcphdr* tp = - (const struct tcphdr*) ((u_char*) ip4 + ip_hdr_len); + const struct tcphdr* tp = (const struct tcphdr*) ip.Payload(); if ( tp->th_flags & f.tcp_flags ) // At least one of the flags is set, so don't drop diff --git a/src/PacketFilter.h b/src/PacketFilter.h index ed8000f40f..3d7a3aa3be 100644 --- a/src/PacketFilter.h +++ b/src/PacketFilter.h @@ -14,16 +14,16 @@ public: // Drops all packets from a particular source (which may be given // as an AddrVal or a SubnetVal) which hasn't any of TCP flags set // (TH_*) with the given probability (from 0..MAX_PROB). - void AddSrc(addr_type src, uint32 tcp_flags, double probability); + void AddSrc(const IPAddr& src, uint32 tcp_flags, double probability); void AddSrc(Val* src, uint32 tcp_flags, double probability); - void AddDst(addr_type src, uint32 tcp_flags, double probability); + void AddDst(const IPAddr& src, uint32 tcp_flags, double probability); void AddDst(Val* src, uint32 tcp_flags, double probability); // Removes the filter entry for the given src/dst // Returns false if filter doesn not exist. - bool RemoveSrc(addr_type src); + bool RemoveSrc(const IPAddr& src); bool RemoveSrc(Val* dst); - bool RemoveDst(addr_type dst); + bool RemoveDst(const IPAddr& dst); bool RemoveDst(Val* dst); // Returns true if packet matches a drop filter diff --git a/src/PacketSort.cc b/src/PacketSort.cc index 0ff08b3280..a7e2b04572 100644 --- a/src/PacketSort.cc +++ b/src/PacketSort.cc @@ -27,13 +27,16 @@ PacketSortElement::PacketSortElement(PktSrc* arg_src, { const struct ip* ip = (const struct ip*) (pkt + hdr_size); if ( ip->ip_v == 4 ) - ip_hdr = new IP_Hdr(ip); + ip_hdr = new IP_Hdr(ip, false); + else if ( ip->ip_v == 6 && (caplen >= sizeof(struct ip6_hdr) + hdr_size) ) + ip_hdr = new IP_Hdr((const struct ip6_hdr*) ip, false, caplen - hdr_size); else - ip_hdr = new IP_Hdr((const struct ip6_hdr*) ip); + // Weird will be generated later in NetSessions::NextPacket. + return; if ( ip_hdr->NextProto() == IPPROTO_TCP && // Note: can't sort fragmented packets - (ip_hdr->FragField() & 0x3fff) == 0 ) + ( ! 
ip_hdr->IsFragment() ) ) { tcp_offset = hdr_size + ip_hdr->HdrLen(); if ( caplen >= tcp_offset + sizeof(struct tcphdr) ) @@ -65,7 +68,7 @@ PacketSortElement::PacketSortElement(PktSrc* arg_src, payload_length = ip_hdr->PayloadLen() - tp->th_off * 4; - key = id.BuildConnKey(); + key = BuildConnIDHashKey(id); is_tcp = 1; } diff --git a/src/PersistenceSerializer.cc b/src/PersistenceSerializer.cc index c757467f90..d9baad05bb 100644 --- a/src/PersistenceSerializer.cc +++ b/src/PersistenceSerializer.cc @@ -137,7 +137,7 @@ bool PersistenceSerializer::CheckForFile(UnserialInfo* info, const char* file, bool PersistenceSerializer::ReadAll(bool is_init, bool delete_files) { -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG HeapLeakChecker::Disabler disabler; #endif diff --git a/src/PktSrc.cc b/src/PktSrc.cc index d86952a61f..8485bc8ca5 100644 --- a/src/PktSrc.cc +++ b/src/PktSrc.cc @@ -193,7 +193,18 @@ void PktSrc::Process() { protocol = (data[3] << 24) + (data[2] << 16) + (data[1] << 8) + data[0]; - if ( protocol != AF_INET && protocol != AF_INET6 ) + // From the Wireshark Wiki: "AF_INET6, unfortunately, has + // different values in {NetBSD,OpenBSD,BSD/OS}, + // {FreeBSD,DragonFlyBSD}, and {Darwin/Mac OS X}, so an IPv6 + // packet might have a link-layer header with 24, 28, or 30 + // as the AF_ value." As we may be reading traces captured on + // platforms other than what we're running on, we accept them + // all here. + if ( protocol != AF_INET + && protocol != AF_INET6 + && protocol != 24 + && protocol != 28 + && protocol != 30 ) { sessions->Weird("non_ip_packet_in_null_transport", &hdr, data); data = 0; @@ -400,6 +411,7 @@ void PktSrc::AddSecondaryTablePrograms() { delete program; Close(); + return; } SecondaryProgram* sp = new SecondaryProgram(program, se); diff --git a/src/PrefixTable.cc b/src/PrefixTable.cc index e04c3f9294..58a488a9f1 100644 --- a/src/PrefixTable.cc +++ b/src/PrefixTable.cc @@ -1,34 +1,19 @@ #include "PrefixTable.h" #include "Reporter.h" -// IPv4 version. 
-inline static prefix_t* make_prefix(const uint32 addr, int width) +inline static prefix_t* make_prefix(const IPAddr& addr, int width) { prefix_t* prefix = (prefix_t*) safe_malloc(sizeof(prefix_t)); - memcpy(&prefix->add.sin, &addr, sizeof(prefix->add.sin)) ; - prefix->family = AF_INET; - prefix->bitlen = width; - prefix->ref_count = 1; - - return prefix; - } - -#ifdef BROv6 -inline static prefix_t* make_prefix(const uint32* addr, int width) - { - prefix_t* prefix = (prefix_t*) safe_malloc(sizeof(prefix_t)); - - memcpy(&prefix->add.sin6, addr, 4 * sizeof(uint32)); + addr.CopyIPv6(&prefix->add.sin6); prefix->family = AF_INET6; prefix->bitlen = width; prefix->ref_count = 1; return prefix; } -#endif -void* PrefixTable::Insert(const_addr_type addr, int width, void* data) +void* PrefixTable::Insert(const IPAddr& addr, int width, void* data) { prefix_t* prefix = make_prefix(addr, width); patricia_node_t* node = patricia_lookup(tree, prefix); @@ -55,12 +40,12 @@ void* PrefixTable::Insert(const Val* value, void* data) switch ( value->Type()->Tag() ) { case TYPE_ADDR: - return Insert(value->AsAddr(), NUM_ADDR_WORDS * 32, data); + return Insert(value->AsAddr(), 128, data); break; case TYPE_SUBNET: - return Insert(value->AsSubNet()->net, - value->AsSubNet()->width, data); + return Insert(value->AsSubNet().Prefix(), + value->AsSubNet().LengthIPv6(), data); break; default: @@ -69,7 +54,7 @@ void* PrefixTable::Insert(const Val* value, void* data) } } -void* PrefixTable::Lookup(const_addr_type addr, int width, bool exact) const +void* PrefixTable::Lookup(const IPAddr& addr, int width, bool exact) const { prefix_t* prefix = make_prefix(addr, width); patricia_node_t* node = @@ -89,12 +74,12 @@ void* PrefixTable::Lookup(const Val* value, bool exact) const switch ( value->Type()->Tag() ) { case TYPE_ADDR: - return Lookup(value->AsAddr(), NUM_ADDR_WORDS * 32, exact); + return Lookup(value->AsAddr(), 128, exact); break; case TYPE_SUBNET: - return Lookup(value->AsSubNet()->net, - value->AsSubNet()->width, exact); + return Lookup(value->AsSubNet().Prefix(), + value->AsSubNet().LengthIPv6(), exact); break; default: @@ -104,7 +89,7 @@ void* PrefixTable::Lookup(const Val* value, bool exact) const } } -void* PrefixTable::Remove(const_addr_type addr, int width) +void* PrefixTable::Remove(const IPAddr& addr, int width) { prefix_t* prefix = make_prefix(addr, width); patricia_node_t* node = patricia_search_exact(tree, prefix); @@ -128,11 +113,12 @@ void* PrefixTable::Remove(const Val* value) switch ( value->Type()->Tag() ) { case TYPE_ADDR: - return Remove(value->AsAddr(), NUM_ADDR_WORDS * 32); + return Remove(value->AsAddr(), 128); break; case TYPE_SUBNET: - return Remove(value->AsSubNet()->net, value->AsSubNet()->width); + return Remove(value->AsSubNet().Prefix(), + value->AsSubNet().LengthIPv6()); break; default: diff --git a/src/PrefixTable.h b/src/PrefixTable.h index 78596c7f35..2e5f43a0a8 100644 --- a/src/PrefixTable.h +++ b/src/PrefixTable.h @@ -3,6 +3,7 @@ #include "Val.h" #include "net_util.h" +#include "IPAddr.h" extern "C" { #include "patricia.h" @@ -24,7 +25,7 @@ public: // Addr in network byte order. If data is zero, acts like a set. // Returns ptr to old data if already existing. // For existing items without data, returns non-nil if found. - void* Insert(const_addr_type addr, int width, void* data = 0); + void* Insert(const IPAddr& addr, int width, void* data = 0); // Value may be addr or subnet. 
void* Insert(const Val* value, void* data = 0); @@ -32,11 +33,11 @@ public: // Returns nil if not found, pointer to data otherwise. // For items without data, returns non-nil if found. // If exact is false, performs exact rather than longest-prefix match. - void* Lookup(const_addr_type addr, int width, bool exact = false) const; + void* Lookup(const IPAddr& addr, int width, bool exact = false) const; void* Lookup(const Val* value, bool exact = false) const; // Returns pointer to data or nil if not found. - void* Remove(const_addr_type addr, int width); + void* Remove(const IPAddr& addr, int width); void* Remove(const Val* value); void Clear() { Clear_Patricia(tree, 0); } diff --git a/src/Reassem.cc b/src/Reassem.cc index 89fe29e7d4..c3c19ff0e6 100644 --- a/src/Reassem.cc +++ b/src/Reassem.cc @@ -43,8 +43,7 @@ DataBlock::DataBlock(const u_char* data, int size, int arg_seq, unsigned int Reassembler::total_size = 0; -Reassembler::Reassembler(int init_seq, const uint32* ip_addr, - ReassemblerType arg_type) +Reassembler::Reassembler(int init_seq, ReassemblerType arg_type) { blocks = last_block = 0; trim_seq = last_reassem_seq = init_seq; diff --git a/src/Reassem.h b/src/Reassem.h index 06d1e28f40..1f65059e02 100644 --- a/src/Reassem.h +++ b/src/Reassem.h @@ -4,6 +4,7 @@ #define reassem_h #include "Obj.h" +#include "IPAddr.h" class DataBlock { public: @@ -25,8 +26,7 @@ enum ReassemblerType { REASSEM_IP, REASSEM_TCP }; class Reassembler : public BroObj { public: - Reassembler(int init_seq, const uint32* ip_addr, - ReassemblerType arg_type); + Reassembler(int init_seq, ReassemblerType arg_type); virtual ~Reassembler(); void NewBlock(double t, int seq, int len, const u_char* data); diff --git a/src/RemoteSerializer.cc b/src/RemoteSerializer.cc index 324da2c92b..564ad2be68 100644 --- a/src/RemoteSerializer.cc +++ b/src/RemoteSerializer.cc @@ -147,6 +147,7 @@ #include #include +#include #include #include #include @@ -172,6 +173,9 @@ #include #include +#include +#include +#include #include "RemoteSerializer.h" #include "Func.h" @@ -183,8 +187,11 @@ #include "Sessions.h" #include "File.h" #include "Conn.h" -#include "LogMgr.h" #include "Reporter.h" +#include "threading/SerialTypes.h" +#include "logging/Manager.h" +#include "IPAddr.h" +#include "bro_inet_ntop.h" extern "C" { #include "setsignal.h" @@ -319,6 +326,18 @@ static const char* msgToStr(int msg) } } +static vector tokenize(const string& s, char delim) + { + vector tokens; + stringstream ss(s); + string token; + + while ( std::getline(ss, token, delim) ) + tokens.push_back(token); + + return tokens; + } + // Start of every message between two processes. We do the low-level work // ourselves to make this 64-bit safe. (The actual layout is an artifact of // an earlier design that depended on how a 32-bit GCC lays out its structs ...) 
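For reference, the parent/child control messages in this part of the patch now pass their arguments as a single comma-separated string (so that IPv6 addresses and RFC 4007 zone identifiers can be carried), and the receiving side splits them again with the small tokenize() helper shown above. A minimal, self-contained sketch of that round trip, with invented peer values and outside the Bro classes, might look like the following (illustrative only, not part of the patch):

// Illustrative sketch only: build a comma-separated argument string the way
// MSG_CONNECT_TO now does, then split it back with the same
// std::getline-based tokenizer. All values below are made up.
#include <cstdio>
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> tokenize(const std::string& s, char delim)
	{
	std::vector<std::string> tokens;
	std::stringstream ss(s);
	std::string token;

	while ( std::getline(ss, token, delim) )
		tokens.push_back(token);

	return tokens;
	}

int main()
	{
	char data[1024];
	// id, ip, zone_id, port, retry, use_ssl -- hypothetical example values.
	snprintf(data, sizeof(data), "%u,%s,%s,%u,%u,%d",
	         7u, "fe80::1", "em0", 47757u, 30u, 0);

	std::vector<std::string> args = tokenize(data, ',');

	for ( size_t i = 0; i < args.size(); ++i )
		printf("arg[%zu] = %s\n", i, args[i].c_str());

	return 0;
	}

Passing a delimited string instead of a fixed array of uint32 values is what lets the same message format carry textual addresses of either family plus a zone identifier.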
@@ -385,6 +404,9 @@ inline void RemoteSerializer::SetupSerialInfo(SerialInfo* info, Peer* peer) peer->phase == Peer::RUNNING ) info->new_cache_strategy = true; + if ( (peer->caps & Peer::BROCCOLI_PEER) ) + info->broccoli_peer = true; + info->include_locations = false; } @@ -452,17 +474,6 @@ static inline char* fmt_uint32s(int nargs, va_list ap) } #endif - -static inline const char* ip2a(uint32 ip) - { - static char buffer[32]; - struct in_addr addr; - - addr.s_addr = htonl(ip); - - return inet_ntop(AF_INET, &addr, buffer, 32); - } - static pid_t child_pid = 0; // Return true if message type is sent by a peer (rather than the child @@ -527,6 +538,7 @@ RemoteSerializer::RemoteSerializer() terminating = false; in_sync = 0; last_flush = 0; + received_logs = 0; } RemoteSerializer::~RemoteSerializer() @@ -635,7 +647,7 @@ void RemoteSerializer::Fork() exit(1); // FIXME: Better way to handle this? } - close(pipe[1]); + safe_close(pipe[1]); return; } @@ -652,12 +664,12 @@ void RemoteSerializer::Fork() } child.SetParentIO(io); - close(pipe[0]); + safe_close(pipe[0]); // Close file descriptors. - close(0); - close(1); - close(2); + safe_close(0); + safe_close(1); + safe_close(2); // Be nice. setpriority(PRIO_PROCESS, 0, 5); @@ -667,8 +679,9 @@ void RemoteSerializer::Fork() } } -RemoteSerializer::PeerID RemoteSerializer::Connect(addr_type ip, uint16 port, - const char* our_class, double retry, bool use_ssl) +RemoteSerializer::PeerID RemoteSerializer::Connect(const IPAddr& ip, + const string& zone_id, uint16 port, const char* our_class, double retry, + bool use_ssl) { if ( ! using_communication ) return true; @@ -676,28 +689,23 @@ RemoteSerializer::PeerID RemoteSerializer::Connect(addr_type ip, uint16 port, if ( ! initialized ) reporter->InternalError("remote serializer not initialized"); -#ifdef BROv6 - if ( ! is_v4_addr(ip) ) - Error("inter-Bro communication not supported over IPv6"); - - uint32 ip4 = to_v4_addr(ip); -#else - uint32 ip4 = ip; -#endif - - ip4 = ntohl(ip4); - if ( ! child_pid ) Fork(); - Peer* p = AddPeer(ip4, port); + Peer* p = AddPeer(ip, port); p->orig = true; if ( our_class ) p->our_class = our_class; - if ( ! SendToChild(MSG_CONNECT_TO, p, 5, p->id, - ip4, port, uint32(retry), use_ssl) ) + const size_t BUFSIZE = 1024; + char* data = new char[BUFSIZE]; + snprintf(data, BUFSIZE, + "%"PRI_PTR_COMPAT_UINT",%s,%s,%"PRIu16",%"PRIu32",%d", p->id, + ip.AsString().c_str(), zone_id.c_str(), port, uint32(retry), + use_ssl); + + if ( ! SendToChild(MSG_CONNECT_TO, p, data) ) { RemovePeer(p); return false; @@ -1222,17 +1230,15 @@ bool RemoteSerializer::SendCapabilities(Peer* peer) uint32 caps = 0; -#ifdef HAVE_LIBZ caps |= Peer::COMPRESSION; -#endif - caps |= Peer::PID_64BIT; caps |= Peer::NEW_CACHE_STRATEGY; return caps ? SendToChild(MSG_CAPS, peer, 3, caps, 0, 0) : true; } -bool RemoteSerializer::Listen(addr_type ip, uint16 port, bool expect_ssl) +bool RemoteSerializer::Listen(const IPAddr& ip, uint16 port, bool expect_ssl, + bool ipv6, const string& zone_id, double retry) { if ( ! using_communication ) return true; @@ -1240,18 +1246,18 @@ bool RemoteSerializer::Listen(addr_type ip, uint16 port, bool expect_ssl) if ( ! initialized ) reporter->InternalError("remote serializer not initialized"); -#ifdef BROv6 - if ( ! is_v4_addr(ip) ) - Error("inter-Bro communication not supported over IPv6"); + if ( ! 
ipv6 && ip.GetFamily() == IPv6 && + ip != IPAddr("0.0.0.0") && ip != IPAddr("::") ) + reporter->FatalError("Attempt to listen on address %s, but IPv6 " + "communication disabled", ip.AsString().c_str()); - uint32 ip4 = to_v4_addr(ip); -#else - uint32 ip4 = ip; -#endif + const size_t BUFSIZE = 1024; + char* data = new char[BUFSIZE]; + snprintf(data, BUFSIZE, "%s,%"PRIu16",%d,%d,%s,%"PRIu32, + ip.AsString().c_str(), port, expect_ssl, ipv6, zone_id.c_str(), + (uint32) retry); - ip4 = ntohl(ip4); - - if ( ! SendToChild(MSG_LISTEN, 0, 3, ip4, port, expect_ssl) ) + if ( ! SendToChild(MSG_LISTEN, 0, data) ) return false; listening = true; @@ -1359,6 +1365,14 @@ double RemoteSerializer::NextTimestamp(double* local_network_time) { Poll(false); + if ( received_logs > 0 ) + { + // If we processed logs last time, assume there's more. + idle = false; + received_logs = 0; + return timer_mgr->Time(); + } + double et = events.length() ? events[0]->time : -1; double pt = packets.length() ? packets[0]->time : -1; @@ -1460,7 +1474,7 @@ void RemoteSerializer::Finish() Poll(true); while ( io->CanWrite() ); - loop_over_list(peers, i) + loop_over_list(peers, i) { CloseConnection(peers[i]); } @@ -1780,7 +1794,7 @@ RecordVal* RemoteSerializer::MakePeerVal(Peer* peer) RecordVal* v = new RecordVal(::peer); v->Assign(0, new Val(uint32(peer->id), TYPE_COUNT)); // Sic! Network order for AddrVal, host order for PortVal. - v->Assign(1, new AddrVal(htonl(peer->ip))); + v->Assign(1, new AddrVal(peer->ip)); v->Assign(2, new PortVal(peer->port, TRANSPORT_TCP)); v->Assign(3, new Val(false, TYPE_BOOL)); v->Assign(4, new StringVal("")); // set when received @@ -1789,8 +1803,8 @@ RecordVal* RemoteSerializer::MakePeerVal(Peer* peer) return v; } -RemoteSerializer::Peer* RemoteSerializer::AddPeer(uint32 ip, uint16 port, - PeerID id) +RemoteSerializer::Peer* RemoteSerializer::AddPeer(const IPAddr& ip, uint16 port, + PeerID id) { Peer* peer = new Peer; peer->id = id != PEER_NONE ? id : id_counter++; @@ -1955,9 +1969,22 @@ bool RemoteSerializer::EnterPhaseRunning(Peer* peer) bool RemoteSerializer::ProcessConnected() { // IP and port follow. - uint32* args = (uint32*) current_args->data; - uint32 host = ntohl(args[0]); // ### Fix: only works for IPv4 - uint16 port = (uint16) ntohl(args[1]); + vector args = tokenize(current_args->data, ','); + + if ( args.size() != 2 ) + { + InternalCommError("ProcessConnected() bad number of arguments"); + return false; + } + + IPAddr host = IPAddr(args[0]); + uint16 port; + + if ( ! atoi_n(args[1].size(), args[1].c_str(), 0, 10, port) ) + { + InternalCommError("ProcessConnected() bad peer port string"); + return false; + } if ( ! current_peer ) { @@ -2106,11 +2133,9 @@ bool RemoteSerializer::ProcessPhaseDone() bool RemoteSerializer::HandshakeDone(Peer* peer) { -#ifdef HAVE_LIBZ if ( peer->caps & Peer::COMPRESSION && peer->comp_level > 0 ) if ( ! SendToChild(MSG_COMPRESS, peer, 1, peer->comp_level) ) return false; -#endif if ( ! 
(peer->caps & Peer::PID_64BIT) ) Log(LogInfo, "peer does not support 64bit PIDs; using compatibility mode", peer); @@ -2118,6 +2143,9 @@ bool RemoteSerializer::HandshakeDone(Peer* peer) if ( (peer->caps & Peer::NEW_CACHE_STRATEGY) ) Log(LogInfo, "peer supports keep-in-cache; using that", peer); + if ( (peer->caps & Peer::BROCCOLI_PEER) ) + Log(LogInfo, "peer is a Broccoli", peer); + if ( peer->logs_requested ) log_mgr->SendAllWritersTo(peer->id); @@ -2370,6 +2398,9 @@ bool RemoteSerializer::ProcessSerialization() current_peer->phase == Peer::RUNNING ) info.new_cache_strategy = true; + if ( current_peer->caps & Peer::BROCCOLI_PEER ) + info.broccoli_peer = true; + if ( ! forward_remote_state_changes ) ignore_accesses = true; @@ -2472,17 +2503,17 @@ bool RemoteSerializer::ProcessRemotePrint() return true; } -bool RemoteSerializer::SendLogCreateWriter(EnumVal* id, EnumVal* writer, string path, int num_fields, const LogField* const * fields) +bool RemoteSerializer::SendLogCreateWriter(EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields) { loop_over_list(peers, i) { - SendLogCreateWriter(peers[i]->id, id, writer, path, num_fields, fields); + SendLogCreateWriter(peers[i]->id, id, writer, info, num_fields, fields); } return true; } -bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal* writer, string path, int num_fields, const LogField* const * fields) +bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields) { SetErrorDescr("logging"); @@ -2495,14 +2526,17 @@ bool RemoteSerializer::SendLogCreateWriter(PeerID peer_id, EnumVal* id, EnumVal* if ( peer->phase != Peer::HANDSHAKE && peer->phase != Peer::RUNNING ) return false; + if ( ! peer->logs_requested ) + return false; + BinarySerializationFormat fmt; fmt.StartWrite(); bool success = fmt.Write(id->AsEnum(), "id") && fmt.Write(writer->AsEnum(), "writer") && - fmt.Write(path, "path") && - fmt.Write(num_fields, "num_fields"); + fmt.Write(num_fields, "num_fields") && + info.Write(&fmt); if ( ! success ) goto error; @@ -2536,7 +2570,7 @@ error: return false; } -bool RemoteSerializer::SendLogWrite(EnumVal* id, EnumVal* writer, string path, int num_fields, const LogVal* const * vals) +bool RemoteSerializer::SendLogWrite(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Value* const * vals) { loop_over_list(peers, i) { @@ -2546,7 +2580,7 @@ bool RemoteSerializer::SendLogWrite(EnumVal* id, EnumVal* writer, string path, i return true; } -bool RemoteSerializer::SendLogWrite(Peer* peer, EnumVal* id, EnumVal* writer, string path, int num_fields, const LogVal* const * vals) +bool RemoteSerializer::SendLogWrite(Peer* peer, EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Value* const * vals) { if ( peer->phase != Peer::HANDSHAKE && peer->phase != Peer::RUNNING ) return false; @@ -2554,7 +2588,9 @@ bool RemoteSerializer::SendLogWrite(Peer* peer, EnumVal* id, EnumVal* writer, st if ( ! peer->logs_requested ) return false; - assert(peer->log_buffer); + if ( ! peer->log_buffer ) + // Peer shutting down. + return false; // Serialize the log record entry. @@ -2589,7 +2625,10 @@ bool RemoteSerializer::SendLogWrite(Peer* peer, EnumVal* id, EnumVal* writer, st if ( len > (LOG_BUFFER_SIZE - peer->log_buffer_used) || (network_time - last_flush > 1.0) ) { if ( ! 
FlushLogBuffer(peer) ) + { + delete [] data; return false; + } } // If the data is actually larger than our complete buffer, just send it out. @@ -2612,6 +2651,9 @@ error: bool RemoteSerializer::FlushLogBuffer(Peer* p) { + if ( ! p->logs_requested ) + return false; + last_flush = network_time; if ( p->state == Peer::CLOSING ) @@ -2633,32 +2675,38 @@ bool RemoteSerializer::ProcessLogCreateWriter() if ( current_peer->state == Peer::CLOSING ) return false; +#ifdef USE_PERFTOOLS_DEBUG + // Don't track allocations here, they'll be released only after the + // main loop exists. And it's just a tiny amount anyway. + HeapLeakChecker::Disabler disabler; +#endif + assert(current_args); EnumVal* id_val = 0; EnumVal* writer_val = 0; - LogField** fields = 0; + threading::Field** fields = 0; BinarySerializationFormat fmt; fmt.StartRead(current_args->data, current_args->len); int id, writer; - string path; int num_fields; + logging::WriterBackend::WriterInfo* info = new logging::WriterBackend::WriterInfo(); bool success = fmt.Read(&id, "id") && fmt.Read(&writer, "writer") && - fmt.Read(&path, "path") && - fmt.Read(&num_fields, "num_fields"); + fmt.Read(&num_fields, "num_fields") && + info->Read(&fmt); if ( ! success ) goto error; - fields = new LogField* [num_fields]; + fields = new threading::Field* [num_fields]; for ( int i = 0; i < num_fields; i++ ) { - fields[i] = new LogField; + fields[i] = new threading::Field; if ( ! fields[i]->Read(&fmt) ) goto error; } @@ -2668,7 +2716,8 @@ bool RemoteSerializer::ProcessLogCreateWriter() id_val = new EnumVal(id, BifType::Enum::Log::ID); writer_val = new EnumVal(writer, BifType::Enum::Log::Writer); - if ( ! log_mgr->CreateWriter(id_val, writer_val, path, num_fields, fields) ) + if ( ! log_mgr->CreateWriter(id_val, writer_val, info, num_fields, fields, + true, false, true) ) goto error; Unref(id_val); @@ -2699,7 +2748,7 @@ bool RemoteSerializer::ProcessLogWrite() // Unserialize one entry. EnumVal* id_val = 0; EnumVal* writer_val = 0; - LogVal** vals = 0; + threading::Value** vals = 0; int id, writer; string path; @@ -2713,11 +2762,11 @@ bool RemoteSerializer::ProcessLogWrite() if ( ! success ) goto error; - vals = new LogVal* [num_fields]; + vals = new threading::Value* [num_fields]; for ( int i = 0; i < num_fields; i++ ) { - vals[i] = new LogVal; + vals[i] = new threading::Value; if ( ! vals[i]->Read(&fmt) ) goto error; } @@ -2737,6 +2786,8 @@ bool RemoteSerializer::ProcessLogWrite() fmt.EndRead(); + ++received_logs; + return true; error: @@ -2846,11 +2897,6 @@ void RemoteSerializer::GotID(ID* id, Val* val) (desc && *desc) ? desc : "not set"), current_peer); -#ifdef USE_PERFTOOLS - // May still be cached, but we don't care. 
- heap_checker->IgnoreObject(id); -#endif - Unref(id); return; } @@ -2928,25 +2974,38 @@ void RemoteSerializer::Log(LogLevel level, const char* msg) void RemoteSerializer::Log(LogLevel level, const char* msg, Peer* peer, LogSrc src) { + if ( peer ) + { + val_list* vl = new val_list(); + vl->append(peer->val->Ref()); + vl->append(new Val(level, TYPE_COUNT)); + vl->append(new Val(src, TYPE_COUNT)); + vl->append(new StringVal(msg)); + mgr.QueueEvent(remote_log_peer, vl); + } + else + { + val_list* vl = new val_list(); + vl->append(new Val(level, TYPE_COUNT)); + vl->append(new Val(src, TYPE_COUNT)); + vl->append(new StringVal(msg)); + mgr.QueueEvent(remote_log, vl); + } + +#ifdef DEBUG const int BUFSIZE = 1024; char buffer[BUFSIZE]; - int len = 0; if ( peer ) - len += snprintf(buffer + len, sizeof(buffer) - len, - "[#%d/%s:%d] ", int(peer->id), ip2a(peer->ip), - peer->port); + len += snprintf(buffer + len, sizeof(buffer) - len, "[#%d/%s:%d] ", + int(peer->id), peer->ip.AsURIString().c_str(), + peer->port); len += safe_snprintf(buffer + len, sizeof(buffer) - len, "%s", msg); - val_list* vl = new val_list(); - vl->append(new Val(level, TYPE_COUNT)); - vl->append(new Val(src, TYPE_COUNT)); - vl->append(new StringVal(buffer)); - mgr.QueueEvent(remote_log, vl); - DEBUG_COMM(fmt("parent: %.6f %s", current_time(), buffer)); +#endif } void RemoteSerializer::RaiseEvent(EventHandlerPtr event, Peer* peer, @@ -3227,8 +3286,10 @@ SocketComm::SocketComm() terminating = false; killing = false; - listen_fd_clear = -1; - listen_fd_ssl = -1; + listen_port = 0; + listen_ssl = false; + enable_ipv6 = false; + bind_retry_interval = 0; listen_next_try = 0; // We don't want to use the signal handlers of our parent. @@ -3251,8 +3312,7 @@ SocketComm::~SocketComm() delete peers[i]->io; delete io; - close(listen_fd_clear); - close(listen_fd_ssl); + CloseListenFDs(); } static unsigned int first_rtime = 0; @@ -3301,20 +3361,13 @@ void SocketComm::Run() } if ( listen_next_try && time(0) > listen_next_try ) - Listen(listen_if, listen_port, listen_ssl); + Listen(); - if ( listen_fd_clear >= 0 ) + for ( size_t i = 0; i < listen_fds.size(); ++i ) { - FD_SET(listen_fd_clear, &fd_read); - if ( listen_fd_clear > max_fd ) - max_fd = listen_fd_clear; - } - - if ( listen_fd_ssl >= 0 ) - { - FD_SET(listen_fd_ssl, &fd_read); - if ( listen_fd_ssl > max_fd ) - max_fd = listen_fd_ssl; + FD_SET(listen_fds[i], &fd_read); + if ( listen_fds[i] > max_fd ) + max_fd = listen_fds[i]; } if ( io->IsFillingUp() && ! shutting_conns_down ) @@ -3366,6 +3419,11 @@ void SocketComm::Run() small_timeout.tv_usec = io->CanWrite() || io->CanRead() ? 1 : 10; +#if 0 + if ( ! io->CanWrite() ) + usleep(10); +#endif + int a = select(max_fd + 1, &fd_read, &fd_write, &fd_except, &small_timeout); @@ -3398,12 +3456,9 @@ void SocketComm::Run() } } - if ( listen_fd_clear >= 0 && - FD_ISSET(listen_fd_clear, &fd_read) ) - AcceptConnection(listen_fd_clear); - - if ( listen_fd_ssl >= 0 && FD_ISSET(listen_fd_ssl, &fd_read) ) - AcceptConnection(listen_fd_ssl); + for ( size_t i = 0; i < listen_fds.size(); ++i ) + if ( FD_ISSET(listen_fds[i], &fd_read) ) + AcceptConnection(listen_fds[i]); // Hack to display CPU usage of the child, triggered via // SIGPROF. 
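Since the child's I/O loop above now keeps its listening sockets in a vector instead of the fixed clear/SSL descriptor pair, registering them with select() reduces to a loop over that vector. A rough standalone illustration (the helper name and signature here are invented, not taken from the patch):

// Illustrative sketch only: add a variable number of listening sockets to an
// fd_set, tracking the highest descriptor for select()'s nfds argument.
// Assumes the caller has already FD_ZERO'd the set and added its other fds.
#include <sys/select.h>
#include <vector>

static int add_listen_fds(const std::vector<int>& listen_fds, fd_set* fd_read)
	{
	int max_fd = -1;

	for ( size_t i = 0; i < listen_fds.size(); ++i )
		{
		FD_SET(listen_fds[i], fd_read);
		if ( listen_fds[i] > max_fd )
			max_fd = listen_fds[i];
		}

	return max_fd;	// caller passes max_fd + 1 as select()'s first argument
	}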
@@ -3527,13 +3582,8 @@ bool SocketComm::DoParentMessage() case MSG_LISTEN_STOP: { - if ( listen_fd_ssl >= 0 ) - close(listen_fd_ssl); + CloseListenFDs(); - if ( listen_fd_clear >= 0 ) - close(listen_fd_clear); - - listen_fd_clear = listen_fd_ssl = -1; Log("stopped listening"); return true; @@ -3673,14 +3723,43 @@ bool SocketComm::ForwardChunkToPeer() bool SocketComm::ProcessConnectTo() { assert(parent_args); - uint32* args = (uint32*) parent_args->data; + vector args = tokenize(parent_args->data, ','); + + if ( args.size() != 6 ) + { + Error(fmt("ProcessConnectTo() bad number of arguments")); + return false; + } Peer* peer = new Peer; + + if ( ! atoi_n(args[0].size(), args[0].c_str(), 0, 10, peer->id) ) + { + Error(fmt("ProcessConnectTo() bad peer id string")); + delete peer; + return false; + } + + peer->ip = IPAddr(args[1]); + peer->zone_id = args[2]; + + if ( ! atoi_n(args[3].size(), args[3].c_str(), 0, 10, peer->port) ) + { + Error(fmt("ProcessConnectTo() bad peer port string")); + delete peer; + return false; + } + + if ( ! atoi_n(args[4].size(), args[4].c_str(), 0, 10, peer->retry) ) + { + Error(fmt("ProcessConnectTo() bad peer retry string")); + delete peer; + return false; + } + + peer->ssl = false; + if ( args[5] != "0" ) + peer->ssl = true; - peer->id = ntohl(args[0]); - peer->ip = ntohl(args[1]); - peer->port = ntohl(args[2]); - peer->retry = ntohl(args[3]); - peer->ssl = ntohl(args[4]); return Connect(peer); } @@ -3688,22 +3767,43 @@ bool SocketComm::ProcessConnectTo() bool SocketComm::ProcessListen() { assert(parent_args); - uint32* args = (uint32*) parent_args->data; + vector args = tokenize(parent_args->data, ','); - uint32 addr = ntohl(args[0]); - uint16 port = uint16(ntohl(args[1])); - uint32 ssl = ntohl(args[2]); + if ( args.size() != 6 ) + { + Error(fmt("ProcessListen() bad number of arguments")); + return false; + } - return Listen(addr, port, ssl); + listen_if = args[0]; + + if ( ! atoi_n(args[1].size(), args[1].c_str(), 0, 10, listen_port) ) + { + Error(fmt("ProcessListen() bad listen port string")); + return false; + } + + listen_ssl = false; + if ( args[2] != "0" ) + listen_ssl = true; + + enable_ipv6 = false; + if ( args[3] != "0" ) + enable_ipv6 = true; + + listen_zone_id = args[4]; + + if ( ! atoi_n(args[5].size(), args[5].c_str(), 0, 10, bind_retry_interval) ) + { + Error(fmt("ProcessListen() bad bind retry interval string")); + return false; + } + + return Listen(); } bool SocketComm::ProcessParentCompress() { -#ifndef HAVE_LIBZ - InternalError("supposed to enable compression but don't have zlib"); - return false; -#else - assert(parent_args); uint32* args = (uint32*) parent_args->data; @@ -3727,7 +3827,6 @@ bool SocketComm::ProcessParentCompress() Log(fmt("enabling compression (level %d)", level), parent_peer); return true; -#endif } bool SocketComm::ProcessRemoteMessage(SocketComm::Peer* peer) @@ -3847,10 +3946,6 @@ bool SocketComm::ProcessPeerCompress(Peer* peer) { peer->state = MSG_NONE; -#ifndef HAVE_LIBZ - Error("peer compresses although we do not support it", peer); - return false; -#else if ( !
parent_peer->compressor ) { parent_peer->io = new CompressedChunkedIO(parent_peer->io); @@ -3862,34 +3957,58 @@ bool SocketComm::ProcessPeerCompress(Peer* peer) ((CompressedChunkedIO*) peer->io)->EnableDecompression(); Log("enabling decompression", peer); return true; -#endif } bool SocketComm::Connect(Peer* peer) { - struct sockaddr_in server; + int status; + addrinfo hints, *res, *res0; + bzero(&hints, sizeof(hints)); - int sockfd = socket(PF_INET, SOCK_STREAM, 0); - if ( sockfd < 0 ) + hints.ai_family = PF_UNSPEC; + hints.ai_protocol = IPPROTO_TCP; + hints.ai_socktype = SOCK_STREAM; + hints.ai_flags = AI_NUMERICHOST; + + char port_str[16]; + modp_uitoa10(peer->port, port_str); + + string gaihostname(peer->ip.AsString()); + if ( peer->zone_id != "" ) + gaihostname.append("%").append(peer->zone_id); + + status = getaddrinfo(gaihostname.c_str(), port_str, &hints, &res0); + if ( status != 0 ) { - Error(fmt("can't create socket, %s", strerror(errno))); + Error(fmt("getaddrinfo error: %s", gai_strerror(status))); return false; } - bzero(&server, sizeof(server)); - server.sin_family = AF_INET; - server.sin_port = htons(peer->port); - server.sin_addr.s_addr = htonl(peer->ip); - - bool connected = true; - - if ( connect(sockfd, (sockaddr*) &server, sizeof(server)) < 0 ) + int sockfd = -1; + for ( res = res0; res; res = res->ai_next ) { - Error(fmt("connect failed: %s", strerror(errno)), peer); - close(sockfd); - connected = false; + sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol); + if ( sockfd < 0 ) + { + Error(fmt("can't create connect socket, %s", strerror(errno))); + continue; + } + + if ( connect(sockfd, res->ai_addr, res->ai_addrlen) < 0 ) + { + Error(fmt("connect failed: %s", strerror(errno)), peer); + safe_close(sockfd); + sockfd = -1; + continue; + } + + break; } + freeaddrinfo(res0); + + bool connected = sockfd != -1; + if ( ! (connected || peer->retry) ) { CloseConnection(peer, false); @@ -3914,9 +4033,7 @@ bool SocketComm::Connect(Peer* peer) if ( connected ) { if ( peer->ssl ) - { peer->io = new ChunkedIOSSL(sockfd, false); - } else peer->io = new ChunkedIOFd(sockfd, "child->peer"); @@ -3931,7 +4048,13 @@ bool SocketComm::Connect(Peer* peer) if ( connected ) { Log("connected", peer); - if ( ! SendToParent(MSG_CONNECTED, peer, 2, peer->ip, peer->port) ) + + const size_t BUFSIZE = 1024; + char* data = new char[BUFSIZE]; + snprintf(data, BUFSIZE, "%s,%"PRIu32, peer->ip.AsString().c_str(), + peer->port); + + if ( ! SendToParent(MSG_CONNECTED, peer, data) ) return false; } @@ -3968,86 +4091,156 @@ bool SocketComm::CloseConnection(Peer* peer, bool reconnect) return true; } -bool SocketComm::Listen(uint32 ip, uint16 port, bool expect_ssl) +bool SocketComm::Listen() { - int* listen_fd = expect_ssl ? &listen_fd_ssl : &listen_fd_clear; + int status, on = 1; + addrinfo hints, *res, *res0; + bzero(&hints, sizeof(hints)); - if ( *listen_fd >= 0 ) - close(*listen_fd); + IPAddr listen_ip(listen_if); - struct sockaddr_in server; - - *listen_fd = socket(PF_INET, SOCK_STREAM, 0); - if ( *listen_fd < 0 ) + if ( enable_ipv6 ) { - Error(fmt("can't create listen socket, %s", - strerror(errno))); + if ( listen_ip == IPAddr("0.0.0.0") || listen_ip == IPAddr("::") ) + hints.ai_family = PF_UNSPEC; + else + hints.ai_family = (listen_ip.GetFamily() == IPv4 ? 
PF_INET : PF_INET6); + } + else + hints.ai_family = PF_INET; + + hints.ai_protocol = IPPROTO_TCP; + hints.ai_socktype = SOCK_STREAM; + hints.ai_flags = AI_PASSIVE | AI_NUMERICHOST; + + char port_str[16]; + modp_uitoa10(listen_port, port_str); + + string scoped_addr(listen_if); + if ( listen_zone_id != "" ) + scoped_addr.append("%").append(listen_zone_id); + + const char* addr_str = 0; + if ( listen_ip != IPAddr("0.0.0.0") && listen_ip != IPAddr("::") ) + addr_str = scoped_addr.c_str(); + + CloseListenFDs(); + + if ( (status = getaddrinfo(addr_str, port_str, &hints, &res0)) != 0 ) + { + Error(fmt("getaddrinfo error: %s", gai_strerror(status))); return false; } - // Set SO_REUSEADDR. - int turn_on = 1; - if ( setsockopt(*listen_fd, SOL_SOCKET, SO_REUSEADDR, - &turn_on, sizeof(turn_on)) < 0 ) + for ( res = res0; res; res = res->ai_next ) { - Error(fmt("can't set SO_REUSEADDR, %s", - strerror(errno))); - return false; - } - - bzero(&server, sizeof(server)); - server.sin_family = AF_INET; - server.sin_port = htons(port); - server.sin_addr.s_addr = htonl(ip); - - if ( bind(*listen_fd, (sockaddr*) &server, sizeof(server)) < 0 ) - { - Error(fmt("can't bind to port %d, %s", port, strerror(errno))); - close(*listen_fd); - *listen_fd = -1; - - if ( errno == EADDRINUSE ) + if ( res->ai_family != AF_INET && res->ai_family != AF_INET6 ) { - listen_if = ip; - listen_port = port; - listen_ssl = expect_ssl; - // FIXME: Make this timeout configurable. - listen_next_try = time(0) + 30; + Error(fmt("can't create listen socket: unknown address family, %d", + res->ai_family)); + continue; } - return false; + + IPAddr a = (res->ai_family == AF_INET) ? + IPAddr(((sockaddr_in*)res->ai_addr)->sin_addr) : + IPAddr(((sockaddr_in6*)res->ai_addr)->sin6_addr); + + string l_addr_str(a.AsURIString()); + if ( listen_zone_id != "") + l_addr_str.append("%").append(listen_zone_id); + + int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol); + if ( fd < 0 ) + { + Error(fmt("can't create listen socket, %s", strerror(errno))); + continue; + } + + if ( setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0 ) + Error(fmt("can't set SO_REUSEADDR, %s", strerror(errno))); + + // For IPv6 listening sockets, we don't want do dual binding to also + // get IPv4-mapped addresses because that's not as portable. e.g. + // many BSDs don't allow that. + if ( res->ai_family == AF_INET6 && + setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof(on)) < 0 ) + Error(fmt("can't set IPV6_V6ONLY, %s", strerror(errno))); + + if ( bind(fd, res->ai_addr, res->ai_addrlen) < 0 ) + { + Error(fmt("can't bind to %s:%s, %s", l_addr_str.c_str(), + port_str, strerror(errno))); + + if ( errno == EADDRINUSE ) + { + // Abandon completely this attempt to set up listening sockets, + // try again later. + safe_close(fd); + CloseListenFDs(); + listen_next_try = time(0) + bind_retry_interval; + return false; + } + + safe_close(fd); + continue; + } + + if ( listen(fd, 50) < 0 ) + { + Error(fmt("can't listen on %s:%s, %s", l_addr_str.c_str(), + port_str, strerror(errno))); + safe_close(fd); + continue; + } + + listen_fds.push_back(fd); + Log(fmt("listening on %s:%s (%s)", l_addr_str.c_str(), port_str, + listen_ssl ? "ssl" : "clear")); } - if ( listen(*listen_fd, 50) < 0 ) - { - Error(fmt("can't listen, %s", strerror(errno))); - return false; - } + freeaddrinfo(res0); listen_next_try = 0; - Log(fmt("listening on %s:%d (%s)", - ip2a(ip), port, expect_ssl ? 
"ssl" : "clear")); - return true; + return listen_fds.size() > 0; } bool SocketComm::AcceptConnection(int fd) { - sockaddr_in client; - socklen_t len = sizeof(client); + union { + sockaddr_storage ss; + sockaddr_in s4; + sockaddr_in6 s6; + } client; - int clientfd = accept(fd, (sockaddr*) &client, &len); + socklen_t len = sizeof(client.ss); + + int clientfd = accept(fd, (sockaddr*) &client.ss, &len); if ( clientfd < 0 ) { - Error(fmt("accept failed, %s %d", - strerror(errno), errno)); + Error(fmt("accept failed, %s %d", strerror(errno), errno)); + return false; + } + + if ( client.ss.ss_family != AF_INET && client.ss.ss_family != AF_INET6 ) + { + Error(fmt("accept fail, unknown address family %d", + client.ss.ss_family)); + safe_close(clientfd); return false; } Peer* peer = new Peer; peer->id = id_counter++; - peer->ip = ntohl(client.sin_addr.s_addr); - peer->port = ntohs(client.sin_port); + peer->ip = client.ss.ss_family == AF_INET ? + IPAddr(client.s4.sin_addr) : + IPAddr(client.s6.sin6_addr); + + peer->port = client.ss.ss_family == AF_INET ? + ntohs(client.s4.sin_port) : + ntohs(client.s6.sin6_port); + peer->connected = true; - peer->ssl = (fd == listen_fd_ssl); + peer->ssl = listen_ssl; peer->compressor = false; if ( peer->ssl ) @@ -4057,8 +4250,7 @@ bool SocketComm::AcceptConnection(int fd) if ( ! peer->io->Init() ) { - Error(fmt("can't init peer io: %s", - peer->io->Error()), false); + Error(fmt("can't init peer io: %s", peer->io->Error()), false); return false; } @@ -4066,7 +4258,12 @@ bool SocketComm::AcceptConnection(int fd) Log(fmt("accepted %s connection", peer->ssl ? "SSL" : "clear"), peer); - if ( ! SendToParent(MSG_CONNECTED, peer, 2, peer->ip, peer->port) ) + const size_t BUFSIZE = 1024; + char* data = new char[BUFSIZE]; + snprintf(data, BUFSIZE, "%s,%"PRIu32, peer->ip.AsString().c_str(), + peer->port); + + if ( ! SendToParent(MSG_CONNECTED, peer, data) ) return false; return true; @@ -4083,13 +4280,27 @@ const char* SocketComm::MakeLogString(const char* msg, Peer* peer) int len = 0; if ( peer ) + { + string scoped_addr(peer->ip.AsURIString()); + if ( peer->zone_id != "" ) + scoped_addr.append("%").append(peer->zone_id); + len = snprintf(buffer, BUFSIZE, "[#%d/%s:%d] ", int(peer->id), - ip2a(peer->ip), peer->port); + scoped_addr.c_str(), peer->port); + } len += safe_snprintf(buffer + len, BUFSIZE - len, "%s", msg); return buffer; } +void SocketComm::CloseListenFDs() + { + for ( size_t i = 0; i < listen_fds.size(); ++i ) + safe_close(listen_fds[i]); + + listen_fds.clear(); + } + void SocketComm::Error(const char* msg, bool kill_me) { if ( kill_me ) @@ -4132,7 +4343,7 @@ void SocketComm::Log(const char* msg, Peer* peer) void SocketComm::InternalError(const char* msg) { - fprintf(stderr, "interal error in child: %s\n", msg); + fprintf(stderr, "internal error in child: %s\n", msg); Kill(); } @@ -4147,8 +4358,7 @@ void SocketComm::Kill() LogProf(); Log("terminating"); - close(listen_fd_clear); - close(listen_fd_ssl); + CloseListenFDs(); kill(getpid(), SIGTERM); diff --git a/src/RemoteSerializer.h b/src/RemoteSerializer.h index f849a6a2b5..5ff7fff8d6 100644 --- a/src/RemoteSerializer.h +++ b/src/RemoteSerializer.h @@ -9,13 +9,17 @@ #include "IOSource.h" #include "Stats.h" #include "File.h" +#include "logging/WriterBackend.h" -// All IP arguments are in host byte-order. 
-// FIXME: Change this to network byte order +#include +#include class IncrementalSendTimer; -class LogField; -class LogVal; + +namespace threading { + struct Field; + struct Value; +} // This class handles the communication done in Bro's main loop. class RemoteSerializer : public Serializer, public IOSource { @@ -32,7 +36,8 @@ public: static const PeerID PEER_NONE = SOURCE_LOCAL; // Connect to host (returns PEER_NONE on error). - PeerID Connect(addr_type ip, uint16 port, const char* our_class, double retry, bool use_ssl); + PeerID Connect(const IPAddr& ip, const string& zone_id, uint16 port, + const char* our_class, double retry, bool use_ssl); // Close connection to host. bool CloseConnection(PeerID peer); @@ -60,7 +65,8 @@ public: bool CompleteHandshake(PeerID peer); // Start to listen. - bool Listen(addr_type ip, uint16 port, bool expect_ssl); + bool Listen(const IPAddr& ip, uint16 port, bool expect_ssl, bool ipv6, + const string& zone_id, double retry); // Stop it. bool StopListening(); @@ -99,13 +105,13 @@ public: bool SendPrintHookEvent(BroFile* f, const char* txt, size_t len); // Send a request to create a writer on a remote side. - bool SendLogCreateWriter(PeerID peer, EnumVal* id, EnumVal* writer, string path, int num_fields, const LogField* const * fields); + bool SendLogCreateWriter(PeerID peer, EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields); // Broadcasts a request to create a writer. - bool SendLogCreateWriter(EnumVal* id, EnumVal* writer, string path, int num_fields, const LogField* const * fields); + bool SendLogCreateWriter(EnumVal* id, EnumVal* writer, const logging::WriterBackend::WriterInfo& info, int num_fields, const threading::Field* const * fields); // Broadcast a log entry to everybody interested. - bool SendLogWrite(EnumVal* id, EnumVal* writer, string path, int num_fields, const LogVal* const * vals); + bool SendLogWrite(EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Value* const * vals); // Synchronzizes time with all connected peers. Returns number of // current sync-point, or -1 on error. @@ -176,9 +182,7 @@ protected: struct Peer { PeerID id; // Unique ID (non-zero) per peer. - // ### Fix: currently, we only work for IPv4. - // addr_type ip; - uint32 ip; + IPAddr ip; uint16 port; handler_list handlers; @@ -198,6 +202,7 @@ protected: static const int NO_CACHING = 2; static const int PID_64BIT = 4; static const int NEW_CACHE_STRATEGY = 8; + static const int BROCCOLI_PEER = 16; // Constants to remember to who did something. 
static const int NONE = 0; @@ -273,7 +278,7 @@ protected: bool ProcessLogWrite(); bool ProcessRequestLogs(); - Peer* AddPeer(uint32 ip, uint16 port, PeerID id = PEER_NONE); + Peer* AddPeer(const IPAddr& ip, uint16 port, PeerID id = PEER_NONE); Peer* LookupPeer(PeerID id, bool only_if_connected); void RemovePeer(Peer* peer); bool IsConnectedPeer(PeerID id); @@ -299,7 +304,7 @@ protected: bool SendID(SerialInfo* info, Peer* peer, const ID& id); bool SendCapabilities(Peer* peer); bool SendPacket(SerialInfo* info, Peer* peer, const Packet& p); - bool SendLogWrite(Peer* peer, EnumVal* id, EnumVal* writer, string path, int num_fields, const LogVal* const * vals); + bool SendLogWrite(Peer* peer, EnumVal* id, EnumVal* writer, string path, int num_fields, const threading::Value* const * vals); void UnregisterHandlers(Peer* peer); void RaiseEvent(EventHandlerPtr event, Peer* peer, const char* arg = 0); @@ -334,6 +339,7 @@ private: int propagate_accesses; bool ignore_accesses; bool terminating; + int received_logs; Peer* source_peer; PeerID id_counter; // Keeps track of assigned IDs. uint32 current_sync_point; @@ -407,7 +413,6 @@ protected: { id = 0; io = 0; - ip = 0; port = 0; state = 0; connected = false; @@ -419,7 +424,8 @@ protected: RemoteSerializer::PeerID id; ChunkedIO* io; - uint32 ip; + IPAddr ip; + string zone_id; uint16 port; char state; bool connected; @@ -432,7 +438,7 @@ protected: bool compressor; }; - bool Listen(uint32 ip, uint16 port, bool expect_ssl); + bool Listen(); bool AcceptConnection(int listen_fd); bool Connect(Peer* peer); bool CloseConnection(Peer* peer, bool reconnect); @@ -477,6 +483,9 @@ protected: bool ForwardChunkToPeer(); const char* MakeLogString(const char* msg, Peer *peer); + // Closes all file descriptors associated with listening sockets. + void CloseListenFDs(); + // Peers we are communicating with: declare(PList, Peer); typedef PList(Peer) peer_list; @@ -493,15 +502,17 @@ protected: char parent_msgtype; ChunkedIO::Chunk* parent_args; - int listen_fd_clear; - int listen_fd_ssl; + vector listen_fds; // If the port we're trying to bind to is already in use, we will retry // it regularly. - uint32 listen_if; // Fix: only supports IPv4 + string listen_if; + string listen_zone_id; // RFC 4007 IPv6 zone_id uint16 listen_port; - bool listen_ssl; - time_t listen_next_try; + bool listen_ssl; // use SSL for IO + bool enable_ipv6; // allow IPv6 listen sockets + uint32 bind_retry_interval; // retry interval for already-in-use sockets + time_t listen_next_try; // time at which to try another bind bool shutting_conns_down; bool terminating; bool killing; diff --git a/src/Reporter.cc b/src/Reporter.cc index fca9a6a233..18f39ce4af 100644 --- a/src/Reporter.cc +++ b/src/Reporter.cc @@ -149,13 +149,13 @@ void Reporter::WeirdHelper(EventHandlerPtr event, Val* conn_val, const char* add va_list ap; va_start(ap, fmt_name); - DoLog("weird", event, stderr, 0, vl, false, false, 0, fmt_name, ap); + DoLog("weird", event, 0, 0, vl, false, false, 0, fmt_name, ap); va_end(ap); delete vl; } -void Reporter::WeirdFlowHelper(const uint32* orig, const uint32* resp, const char* fmt_name, ...) +void Reporter::WeirdFlowHelper(const IPAddr& orig, const IPAddr& resp, const char* fmt_name, ...) 
{ val_list* vl = new val_list(2); vl->append(new AddrVal(orig)); @@ -163,7 +163,7 @@ void Reporter::WeirdFlowHelper(const uint32* orig, const uint32* resp, const cha va_list ap; va_start(ap, fmt_name); - DoLog("weird", flow_weird, stderr, 0, vl, false, false, 0, fmt_name, ap); + DoLog("weird", flow_weird, 0, 0, vl, false, false, 0, fmt_name, ap); va_end(ap); delete vl; @@ -184,7 +184,7 @@ void Reporter::Weird(Val* conn_val, const char* name, const char* addl) WeirdHelper(conn_weird, conn_val, addl, "%s", name); } -void Reporter::Weird(const uint32* orig, const uint32* resp, const char* name) +void Reporter::Weird(const IPAddr& orig, const IPAddr& resp, const char* name) { WeirdFlowHelper(orig, resp, "%s", name); } @@ -326,7 +326,8 @@ void Reporter::DoLog(const char* prefix, EventHandlerPtr event, FILE* out, Conne s += buffer; s += "\n"; - fprintf(out, "%s", s.c_str()); + if ( out ) + fprintf(out, "%s", s.c_str()); if ( addl ) { diff --git a/src/Reporter.h b/src/Reporter.h index 20d7b17e2c..e610e1519e 100644 --- a/src/Reporter.h +++ b/src/Reporter.h @@ -9,8 +9,8 @@ #include #include "util.h" -#include "net_util.h" #include "EventHandler.h" +#include "IPAddr.h" class Connection; class Location; @@ -74,7 +74,7 @@ public: void Weird(const char* name); // Raises net_weird(). void Weird(Connection* conn, const char* name, const char* addl = ""); // Raises conn_weird(). void Weird(Val* conn_val, const char* name, const char* addl = ""); // Raises conn_weird(). - void Weird(const uint32* orig, const uint32* resp, const char* name); // Raises flow_weird(). + void Weird(const IPAddr& orig, const IPAddr& resp, const char* name); // Raises flow_weird(). // Syslog a message. This methods does nothing if we're running // offline from a trace. @@ -121,7 +121,7 @@ private: // The order if addl, name needs to be like that since fmt_name can // contain format specifiers void WeirdHelper(EventHandlerPtr event, Val* conn_val, const char* addl, const char* fmt_name, ...); - void WeirdFlowHelper(const uint32* orig, const uint32* resp, const char* fmt_name, ...); + void WeirdFlowHelper(const IPAddr& orig, const IPAddr& resp, const char* fmt_name, ...); int errors; bool via_events; diff --git a/src/RuleCondition.cc b/src/RuleCondition.cc index 8852747cc4..410f6a1b3e 100644 --- a/src/RuleCondition.cc +++ b/src/RuleCondition.cc @@ -126,6 +126,23 @@ RuleConditionEval::RuleConditionEval(const char* func) rules_error("unknown identifier", func); return; } + + if ( id->Type()->Tag() == TYPE_FUNC ) + { + // Validate argument quantity and type. + FuncType* f = id->Type()->AsFuncType(); + + if ( f->YieldType()->Tag() != TYPE_BOOL ) + rules_error("eval function type must yield a 'bool'", func); + + TypeList tl; + tl.Append(internal_type("signature_state")->Ref()); + tl.Append(base_type(TYPE_STRING)); + + if ( ! 
f->CheckArgs(tl.Types()) ) + rules_error("eval function parameters must be a 'signature_state' " + "and a 'string' type", func); + } } bool RuleConditionEval::DoMatch(Rule* rule, RuleEndpointState* state, diff --git a/src/RuleMatcher.cc b/src/RuleMatcher.cc index ac8f8a867c..c71f86108a 100644 --- a/src/RuleMatcher.cc +++ b/src/RuleMatcher.cc @@ -1,4 +1,5 @@ #include +#include #include "config.h" @@ -41,6 +42,23 @@ RuleHdrTest::RuleHdrTest(Prot arg_prot, uint32 arg_offset, uint32 arg_size, level = 0; } +RuleHdrTest::RuleHdrTest(Prot arg_prot, Comp arg_comp, vector arg_v) + { + prot = arg_prot; + offset = 0; + size = 0; + comp = arg_comp; + vals = new maskedvalue_list; + prefix_vals = arg_v; + sibling = 0; + child = 0; + pattern_rules = 0; + pure_rules = 0; + ruleset = new IntSet; + id = ++idcounter; + level = 0; + } + Val* RuleMatcher::BuildRuleStateValue(const Rule* rule, const RuleEndpointState* state) const { @@ -63,6 +81,8 @@ RuleHdrTest::RuleHdrTest(RuleHdrTest& h) loop_over_list(*h.vals, i) vals->append(new MaskedValue(*(*h.vals)[i])); + prefix_vals = h.prefix_vals; + for ( int j = 0; j < Rule::TYPES; ++j ) { loop_over_list(h.psets[j], k) @@ -73,6 +93,9 @@ RuleHdrTest::RuleHdrTest(RuleHdrTest& h) copied_set->ids = orig_set->ids; loop_over_list(orig_set->patterns, l) copied_set->patterns.append(copy_string(orig_set->patterns[l])); + delete copied_set; + // TODO: Why do we create copied_set only to then + // never use it? } } @@ -111,6 +134,10 @@ bool RuleHdrTest::operator==(const RuleHdrTest& h) (*vals)[i]->mask != (*h.vals)[i]->mask ) return false; + for ( size_t i = 0; i < prefix_vals.size(); ++i ) + if ( ! (prefix_vals[i] == h.prefix_vals[i]) ) + return false; + return true; } @@ -126,6 +153,9 @@ void RuleHdrTest::PrintDebug() fprintf(stderr, " 0x%08x/0x%08x", (*vals)[i]->val, (*vals)[i]->mask); + for ( size_t i = 0; i < prefix_vals.size(); ++i ) + fprintf(stderr, " %s", prefix_vals[i].AsString().c_str()); + fprintf(stderr, "\n"); } @@ -188,7 +218,7 @@ void RuleMatcher::Delete(RuleHdrTest* node) bool RuleMatcher::ReadFiles(const name_list& files) { -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG HeapLeakChecker::Disabler disabler; #endif @@ -407,29 +437,129 @@ static inline uint32 getval(const u_char* data, int size) } -// A line which can be inserted into the macros below for debugging -// fprintf(stderr, "%.06f %08x & %08x %s %08x\n", network_time, v, (mvals)[i]->mask, #op, (mvals)[i]->val); - // Evaluate a value list (matches if at least one value matches). -#define DO_MATCH_OR( mvals, v, op ) \ - { \ - loop_over_list((mvals), i) \ - { \ - if ( ((v) & (mvals)[i]->mask) op (mvals)[i]->val ) \ - goto match; \ - } \ - goto no_match; \ +template +static inline bool match_or(const maskedvalue_list& mvals, uint32 v, FuncT comp) + { + loop_over_list(mvals, i) + { + if ( comp(v & mvals[i]->mask, mvals[i]->val) ) + return true; + } + return false; + } + +// Evaluate a prefix list (matches if at least one value matches). +template +static inline bool match_or(const vector& prefixes, const IPAddr& a, + FuncT comp) + { + for ( size_t i = 0; i < prefixes.size(); ++i ) + { + IPAddr masked(a); + masked.Mask(prefixes[i].LengthIPv6()); + if ( comp(masked, prefixes[i].Prefix()) ) + return true; + } + return false; } // Evaluate a value list (doesn't match if any value matches). 
-#define DO_MATCH_NOT_AND( mvals, v, op ) \ - { \ - loop_over_list((mvals), i) \ - { \ - if ( ((v) & (mvals)[i]->mask) op (mvals)[i]->val ) \ - goto no_match; \ - } \ - goto match; \ +template +static inline bool match_not_and(const maskedvalue_list& mvals, uint32 v, + FuncT comp) + { + loop_over_list(mvals, i) + { + if ( comp(v & mvals[i]->mask, mvals[i]->val) ) + return false; + } + return true; + } + +// Evaluate a prefix list (doesn't match if any value matches). +template +static inline bool match_not_and(const vector& prefixes, + const IPAddr& a, FuncT comp) + { + for ( size_t i = 0; i < prefixes.size(); ++i ) + { + IPAddr masked(a); + masked.Mask(prefixes[i].LengthIPv6()); + if ( comp(masked, prefixes[i].Prefix()) ) + return false; + } + return true; + } + +static inline bool compare(const maskedvalue_list& mvals, uint32 v, + RuleHdrTest::Comp comp) + { + switch ( comp ) { + case RuleHdrTest::EQ: + return match_or(mvals, v, std::equal_to()); + break; + + case RuleHdrTest::NE: + return match_not_and(mvals, v, std::equal_to()); + break; + + case RuleHdrTest::LT: + return match_or(mvals, v, std::less()); + break; + + case RuleHdrTest::GT: + return match_or(mvals, v, std::greater()); + break; + + case RuleHdrTest::LE: + return match_or(mvals, v, std::less_equal()); + break; + + case RuleHdrTest::GE: + return match_or(mvals, v, std::greater_equal()); + break; + + default: + reporter->InternalError("unknown comparison type"); + break; + } + return false; + } + +static inline bool compare(const vector& prefixes, const IPAddr& a, + RuleHdrTest::Comp comp) + { + switch ( comp ) { + case RuleHdrTest::EQ: + return match_or(prefixes, a, std::equal_to()); + break; + + case RuleHdrTest::NE: + return match_not_and(prefixes, a, std::equal_to()); + break; + + case RuleHdrTest::LT: + return match_or(prefixes, a, std::less()); + break; + + case RuleHdrTest::GT: + return match_or(prefixes, a, std::greater()); + break; + + case RuleHdrTest::LE: + return match_or(prefixes, a, std::less_equal()); + break; + + case RuleHdrTest::GE: + return match_or(prefixes, a, std::greater_equal()); + break; + + default: + reporter->InternalError("unknown comparison type"); + break; + } + return false; } RuleEndpointState* RuleMatcher::InitEndpoint(Analyzer* analyzer, @@ -489,66 +619,54 @@ RuleEndpointState* RuleMatcher::InitEndpoint(Analyzer* analyzer, if ( ip ) { - // Get start of transport layer. - const u_char* transport = ip->Payload(); - // Descend the RuleHdrTest tree further. for ( RuleHdrTest* h = hdr_test->child; h; h = h->sibling ) { - const u_char* data; + bool match = false; // Evaluate the header test. switch ( h->prot ) { + case RuleHdrTest::NEXT: + match = compare(*h->vals, ip->NextProto(), h->comp); + break; + case RuleHdrTest::IP: - data = (const u_char*) ip->IP4_Hdr(); + if ( ! ip->IP4_Hdr() ) + continue; + + match = compare(*h->vals, getval((const u_char*)ip->IP4_Hdr() + h->offset, h->size), h->comp); + break; + + case RuleHdrTest::IPv6: + if ( ! 
ip->IP6_Hdr() ) + continue; + + match = compare(*h->vals, getval((const u_char*)ip->IP6_Hdr() + h->offset, h->size), h->comp); break; case RuleHdrTest::ICMP: + case RuleHdrTest::ICMPv6: case RuleHdrTest::TCP: case RuleHdrTest::UDP: - data = transport; + match = compare(*h->vals, getval(ip->Payload() + h->offset, h->size), h->comp); + break; + + case RuleHdrTest::IPSrc: + match = compare(h->prefix_vals, ip->IPHeaderSrcAddr(), h->comp); + break; + + case RuleHdrTest::IPDst: + match = compare(h->prefix_vals, ip->IPHeaderDstAddr(), h->comp); break; default: - data = 0; reporter->InternalError("unknown protocol"); + break; } - // ### data can be nil here if it's an - // IPv6 packet and we're doing an IP test. - if ( ! data ) - continue; - - // Sorry for the hidden gotos :-) - switch ( h->comp ) { - case RuleHdrTest::EQ: - DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), ==); - - case RuleHdrTest::NE: - DO_MATCH_NOT_AND(*h->vals, getval(data + h->offset, h->size), ==); - - case RuleHdrTest::LT: - DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), <); - - case RuleHdrTest::GT: - DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), >); - - case RuleHdrTest::LE: - DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), <=); - - case RuleHdrTest::GE: - DO_MATCH_OR(*h->vals, getval(data + h->offset, h->size), >=); - - default: - reporter->InternalError("unknown comparision type"); - } - -no_match: - continue; - -match: - tests.append(h); + if ( match ) + tests.append(h); } } } @@ -1025,7 +1143,7 @@ void RuleMatcher::DumpStateStats(BroFile* f, RuleHdrTest* hdr_test) Rule* r = Rule::rule_table[set->ids[k] - 1]; f->Write(fmt("%s ", r->ID())); } - + f->Write("\n"); } } @@ -1047,8 +1165,11 @@ static Val* get_bro_val(const char* label) } -// Converts an atomic Val and appends it to the list -static bool val_to_maskedval(Val* v, maskedvalue_list* append_to) +// Converts an atomic Val and appends it to the list. For subnet types, +// if the prefix_vector param isn't null, appending to that is preferred +// over appending to the masked val list. 
+static bool val_to_maskedval(Val* v, maskedvalue_list* append_to, + vector* prefix_vector) { MaskedValue* mval = new MaskedValue; @@ -1067,30 +1188,40 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to) break; case TYPE_SUBNET: -#ifdef BROv6 { - uint32* n = v->AsSubNet()->net; - uint32* m = v->AsSubNetVal()->Mask(); - bool is_v4_mask = m[0] == 0xffffffff && - m[1] == m[0] && m[2] == m[0]; - - if ( is_v4_addr(n) && is_v4_mask ) + if ( prefix_vector ) { - mval->val = ntohl(to_v4_addr(n)); - mval->mask = m[3]; + prefix_vector->push_back(v->AsSubNet()); + delete mval; + return true; } - else { - rules_error("IPv6 subnets not supported"); - mval->val = 0; - mval->mask = 0; + const uint32* n; + uint32 m[4]; + v->AsSubNet().Prefix().GetBytes(&n); + v->AsSubNetVal()->Mask().CopyIPv6(m); + + for ( unsigned int i = 0; i < 4; ++i ) + m[i] = ntohl(m[i]); + + bool is_v4_mask = m[0] == 0xffffffff && + m[1] == m[0] && m[2] == m[0]; + + + if ( v->AsSubNet().Prefix().GetFamily() == IPv4 && is_v4_mask ) + { + mval->val = ntohl(*n); + mval->mask = m[3]; + } + else + { + rules_error("IPv6 subnets not supported"); + mval->val = 0; + mval->mask = 0; + } } } -#else - mval->val = ntohl(v->AsSubNet()->net); - mval->mask = v->AsSubNetVal()->Mask(); -#endif break; default: @@ -1103,7 +1234,8 @@ static bool val_to_maskedval(Val* v, maskedvalue_list* append_to) return true; } -void id_to_maskedvallist(const char* id, maskedvalue_list* append_to) +void id_to_maskedvallist(const char* id, maskedvalue_list* append_to, + vector* prefix_vector) { Val* v = get_bro_val(id); if ( ! v ) @@ -1113,12 +1245,17 @@ void id_to_maskedvallist(const char* id, maskedvalue_list* append_to) { val_list* vals = v->AsTableVal()->ConvertToPureList()->Vals(); loop_over_list(*vals, i ) - if ( ! val_to_maskedval((*vals)[i], append_to) ) + if ( ! val_to_maskedval((*vals)[i], append_to, prefix_vector) ) + { + delete_vals(vals); return; + } + + delete_vals(vals); } else - val_to_maskedval(v, append_to); + val_to_maskedval(v, append_to, prefix_vector); } char* id_to_str(const char* id) diff --git a/src/RuleMatcher.h b/src/RuleMatcher.h index 5bba69e130..b8895513b4 100644 --- a/src/RuleMatcher.h +++ b/src/RuleMatcher.h @@ -2,7 +2,9 @@ #define sigs_h #include +#include +#include "IPAddr.h" #include "BroString.h" #include "List.h" #include "RE.h" @@ -59,17 +61,19 @@ declare(PList, BroString); typedef PList(BroString) bstr_list; // Get values from Bro's script-level variables. 
-extern void id_to_maskedvallist(const char* id, maskedvalue_list* append_to); +extern void id_to_maskedvallist(const char* id, maskedvalue_list* append_to, + vector* prefix_vector = 0); extern char* id_to_str(const char* id); extern uint32 id_to_uint(const char* id); class RuleHdrTest { public: enum Comp { LE, GE, LT, GT, EQ, NE }; - enum Prot { NOPROT, IP, ICMP, TCP, UDP }; + enum Prot { NOPROT, IP, IPv6, ICMP, ICMPv6, TCP, UDP, NEXT, IPSrc, IPDst }; RuleHdrTest(Prot arg_prot, uint32 arg_offset, uint32 arg_size, Comp arg_comp, maskedvalue_list* arg_vals); + RuleHdrTest(Prot arg_prot, Comp arg_comp, vector arg_v); ~RuleHdrTest(); void PrintDebug(); @@ -86,6 +90,7 @@ private: Prot prot; Comp comp; maskedvalue_list* vals; + vector prefix_vals; // for use with IPSrc/IPDst comparisons uint32 offset; uint32 size; diff --git a/src/SMB.cc b/src/SMB.cc index edce2a69b8..a06707328a 100644 --- a/src/SMB.cc +++ b/src/SMB.cc @@ -368,7 +368,7 @@ int SMB_Session::ParseSetupAndx(int is_orig, binpac::SMB::SMB_header const& hdr, // The binpac type depends on the negotiated server settings - // possibly we can just pick the "right" format here, and use that? - if ( hdr.flags2() && 0x0800 ) + if ( hdr.flags2() & 0x0800 ) { binpac::SMB::SMB_setup_andx_ext msg(hdr.unicode()); msg.Parse(body.data(), body.data() + body.length()); diff --git a/src/SMTP.cc b/src/SMTP.cc index 3af8af3b7b..85a3bc79dc 100644 --- a/src/SMTP.cc +++ b/src/SMTP.cc @@ -353,7 +353,6 @@ void SMTP_Analyzer::ProcessLine(int length, const char* line, bool orig) int ext_len; get_word(end_of_line - line, line, ext_len, ext); - line = skip_whitespace(line + ext_len, end_of_line); ProcessExtension(ext_len, ext); } } diff --git a/src/SOCKS.cc b/src/SOCKS.cc new file mode 100644 index 0000000000..4a6eda7043 --- /dev/null +++ b/src/SOCKS.cc @@ -0,0 +1,86 @@ +#include "SOCKS.h" +#include "socks_pac.h" +#include "TCP_Reassembler.h" + +SOCKS_Analyzer::SOCKS_Analyzer(Connection* conn) +: TCP_ApplicationAnalyzer(AnalyzerTag::SOCKS, conn) + { + interp = new binpac::SOCKS::SOCKS_Conn(this); + orig_done = resp_done = false; + pia = 0; + } + +SOCKS_Analyzer::~SOCKS_Analyzer() + { + delete interp; + } + +void SOCKS_Analyzer::EndpointDone(bool orig) + { + if ( orig ) + orig_done = true; + else + resp_done = true; + } + +void SOCKS_Analyzer::Done() + { + TCP_ApplicationAnalyzer::Done(); + + interp->FlowEOF(true); + interp->FlowEOF(false); + } + +void SOCKS_Analyzer::EndpointEOF(bool is_orig) + { + TCP_ApplicationAnalyzer::EndpointEOF(is_orig); + interp->FlowEOF(is_orig); + } + +void SOCKS_Analyzer::DeliverStream(int len, const u_char* data, bool orig) + { + TCP_ApplicationAnalyzer::DeliverStream(len, data, orig); + + assert(TCP()); + + if ( TCP()->IsPartial() ) + // punt on partial. + return; + + if ( orig_done && resp_done ) + { + // Finished decapsulating tunnel layer. Now do standard processing + // with the rest of the connection. + // + // Note that we assume that no payload data arrives before both endpoints + // are done with their part of the SOCKS protocol. + + if ( !
pia ) + { + pia = new PIA_TCP(Conn()); + AddChildAnalyzer(pia); + pia->FirstPacket(true, 0); + pia->FirstPacket(false, 0); + } + + ForwardStream(len, data, orig); + } + else + { + try + { + interp->NewData(orig, data, data + len); + } + catch ( const binpac::Exception& e ) + { + ProtocolViolation(fmt("Binpac exception: %s", e.c_msg())); + } + } + } + +void SOCKS_Analyzer::Undelivered(int seq, int len, bool orig) + { + TCP_ApplicationAnalyzer::Undelivered(seq, len, orig); + interp->NewGap(orig, len); + } + diff --git a/src/SOCKS.h b/src/SOCKS.h new file mode 100644 index 0000000000..9753abb660 --- /dev/null +++ b/src/SOCKS.h @@ -0,0 +1,45 @@ +#ifndef socks_h +#define socks_h + +// SOCKS v4 analyzer. + +#include "TCP.h" +#include "PIA.h" + +namespace binpac { + namespace SOCKS { + class SOCKS_Conn; + } +} + + +class SOCKS_Analyzer : public TCP_ApplicationAnalyzer { +public: + SOCKS_Analyzer(Connection* conn); + ~SOCKS_Analyzer(); + + void EndpointDone(bool orig); + + virtual void Done(); + virtual void DeliverStream(int len, const u_char* data, bool orig); + virtual void Undelivered(int seq, int len, bool orig); + virtual void EndpointEOF(bool is_orig); + + static Analyzer* InstantiateAnalyzer(Connection* conn) + { return new SOCKS_Analyzer(conn); } + + static bool Available() + { + return socks_request || socks_reply; + } + +protected: + + bool orig_done; + bool resp_done; + + PIA_TCP *pia; + binpac::SOCKS::SOCKS_Conn* interp; +}; + +#endif diff --git a/src/SSH.cc b/src/SSH.cc index c07aad3dd1..3a8f468ae4 100644 --- a/src/SSH.cc +++ b/src/SSH.cc @@ -50,23 +50,24 @@ void SSH_Analyzer::DeliverStream(int length, const u_char* data, bool is_orig) // SSH-.-\n // // We're interested in the "version" part here. - + if ( length < 4 || memcmp(line, "SSH-", 4) != 0 ) { Weird("malformed_ssh_identification"); ProtocolViolation("malformed ssh identification", line, length); return; } - + int i; for ( i = 4; i < length && line[i] != '-'; ++i ) ; - + if ( TCP() ) { if ( length >= i ) { - const uint32* dst; + IPAddr dst; + if ( is_orig ) dst = TCP()->Orig()->dst_addr; else diff --git a/src/SSL-binpac.cc b/src/SSL.cc similarity index 60% rename from src/SSL-binpac.cc rename to src/SSL.cc index db9a7004d6..4658bbbc16 100644 --- a/src/SSL-binpac.cc +++ b/src/SSL.cc @@ -1,21 +1,21 @@ -#include "SSL-binpac.h" +#include "SSL.h" #include "TCP_Reassembler.h" #include "Reporter.h" #include "util.h" -SSL_Analyzer_binpac::SSL_Analyzer_binpac(Connection* c) +SSL_Analyzer::SSL_Analyzer(Connection* c) : TCP_ApplicationAnalyzer(AnalyzerTag::SSL, c) { interp = new binpac::SSL::SSL_Conn(this); had_gap = false; } -SSL_Analyzer_binpac::~SSL_Analyzer_binpac() +SSL_Analyzer::~SSL_Analyzer() { delete interp; } -void SSL_Analyzer_binpac::Done() +void SSL_Analyzer::Done() { TCP_ApplicationAnalyzer::Done(); @@ -23,23 +23,22 @@ void SSL_Analyzer_binpac::Done() interp->FlowEOF(false); } -void SSL_Analyzer_binpac::EndpointEOF(TCP_Reassembler* endp) +void SSL_Analyzer::EndpointEOF(bool is_orig) { - TCP_ApplicationAnalyzer::EndpointEOF(endp); - interp->FlowEOF(endp->IsOrig()); + TCP_ApplicationAnalyzer::EndpointEOF(is_orig); + interp->FlowEOF(is_orig); } -void SSL_Analyzer_binpac::DeliverStream(int len, const u_char* data, bool orig) +void SSL_Analyzer::DeliverStream(int len, const u_char* data, bool orig) { TCP_ApplicationAnalyzer::DeliverStream(len, data, orig); assert(TCP()); - if ( TCP()->IsPartial() ) return; if ( had_gap ) - // XXX: If only one side had a content gap, we could still try to + // If only one side had a content gap, 
we could still try to // deliver data to the other side if the script layer can handle this. return; @@ -53,7 +52,7 @@ void SSL_Analyzer_binpac::DeliverStream(int len, const u_char* data, bool orig) } } -void SSL_Analyzer_binpac::Undelivered(int seq, int len, bool orig) +void SSL_Analyzer::Undelivered(int seq, int len, bool orig) { TCP_ApplicationAnalyzer::Undelivered(seq, len, orig); had_gap = true; diff --git a/src/SSL-binpac.h b/src/SSL.h similarity index 68% rename from src/SSL-binpac.h rename to src/SSL.h index 8dab19d00c..d0ef164877 100644 --- a/src/SSL-binpac.h +++ b/src/SSL.h @@ -1,14 +1,13 @@ -#ifndef ssl_binpac_h -#define ssl_binpac_h +#ifndef ssl_h +#define ssl_h #include "TCP.h" - #include "ssl_pac.h" -class SSL_Analyzer_binpac : public TCP_ApplicationAnalyzer { +class SSL_Analyzer : public TCP_ApplicationAnalyzer { public: - SSL_Analyzer_binpac(Connection* conn); - virtual ~SSL_Analyzer_binpac(); + SSL_Analyzer(Connection* conn); + virtual ~SSL_Analyzer(); // Overriden from Analyzer. virtual void Done(); @@ -16,10 +15,10 @@ public: virtual void Undelivered(int seq, int len, bool orig); // Overriden from TCP_ApplicationAnalyzer. - virtual void EndpointEOF(TCP_Reassembler* endp); + virtual void EndpointEOF(bool is_orig); static Analyzer* InstantiateAnalyzer(Connection* conn) - { return new SSL_Analyzer_binpac(conn); } + { return new SSL_Analyzer(conn); } static bool Available() { diff --git a/src/SSLv2.cc b/src/SSLv2.cc deleted file mode 100644 index 9fa654048d..0000000000 --- a/src/SSLv2.cc +++ /dev/null @@ -1,944 +0,0 @@ -#include "SSLv2.h" -#include "SSLv3.h" - -// --- Initalization of static variables -------------------------------------- - -uint SSLv2_Interpreter::totalConnections = 0; -uint SSLv2_Interpreter::analyzedConnections = 0; -uint SSLv2_Interpreter::openedConnections = 0; -uint SSLv2_Interpreter::failedConnections = 0; -uint SSLv2_Interpreter::weirdConnections = 0; -uint SSLv2_Interpreter::totalRecords = 0; -uint SSLv2_Interpreter::clientHelloRecords = 0; -uint SSLv2_Interpreter::serverHelloRecords = 0; -uint SSLv2_Interpreter::clientMasterKeyRecords = 0; -uint SSLv2_Interpreter::errorRecords = 0; - - -// --- SSLv2_Interpreter ------------------------------------------------------- - -/*! - * The Constructor. - * - * \param proxy Pointer to the SSLProxy_Analyzer who created this instance. - */ -SSLv2_Interpreter::SSLv2_Interpreter(SSLProxy_Analyzer* proxy) -: SSL_Interpreter(proxy) - { - ++totalConnections; - records = 0; - bAnalyzedCounted = false; - connState = START; - - pServerCipherSpecs = 0; - pClientCipherSpecs = 0; - bClientWantsCachedSession = false; - usedCipherSpec = (SSLv2_CipherSpec) 0; - - pConnectionId = 0; - pChallenge = 0; - pSessionId = 0; - pMasterClearKey = 0; - pMasterEncryptedKey = 0; - pClientReadKey = 0; - pServerReadKey = 0; - } - -/*! - * The Destructor. - */ -SSLv2_Interpreter::~SSLv2_Interpreter() - { - if ( connState != CLIENT_MASTERKEY_SEEN && - connState != CACHED_SESSION && - connState != START && // we only complain if we saw some data - connState != ERROR_SEEN ) - ++failedConnections; - - if ( connState != CLIENT_MASTERKEY_SEEN && connState != CACHED_SESSION ) - ++weirdConnections; - - delete pServerCipherSpecs; - delete pClientCipherSpecs; - delete pConnectionId; - delete pChallenge; - delete pSessionId; - delete pMasterClearKey; - delete pMasterEncryptedKey; - delete pClientReadKey; - delete pServerReadKey; - } - -/*! 
- * This method implements SSL_Interpreter::BuildInterpreterEndpoints() - */ -void SSLv2_Interpreter::BuildInterpreterEndpoints() - { - orig = new SSLv2_Endpoint(this, 1); - resp = new SSLv2_Endpoint(this, 0); - } - -/*! - * This method prints some counters. - */ -void SSLv2_Interpreter::printStats() - { - printf("SSLv2:\n"); - printf("totalConnections = %u\n", totalConnections); - printf("analyzedConnections = %u\n", analyzedConnections); - printf("openedConnections = %u\n", openedConnections); - printf("failedConnections = %u\n", failedConnections); - printf("weirdConnections = %u\n", weirdConnections); - - printf("totalRecords = %u\n", totalRecords); - printf("clientHelloRecords = %u\n", clientHelloRecords); - printf("serverHelloRecords = %u\n", serverHelloRecords); - printf("clientMasterKeyRecords = %u\n", clientMasterKeyRecords); - printf("errorRecords = %u\n", errorRecords); - - printf("SSL_RecordBuilder::maxAllocCount = %u\n", SSL_RecordBuilder::maxAllocCount); - printf("SSL_RecordBuilder::maxFragmentCount = %u\n", SSL_RecordBuilder::maxFragmentCount); - printf("SSL_RecordBuilder::fragmentedHeaders = %u\n", SSL_RecordBuilder::fragmentedHeaders); - } - -/*! - * \return the current state of the ssl connection - */ -SSLv2_States SSLv2_Interpreter::ConnState() - { - return connState; - } - -/*! - * This method is called by SSLv2_Endpoint::Deliver(). It is the main entry - * point of this class. The header of the given SSLV2 record is analyzed and - * its contents are then passed to the corresponding analyzer method. After - * the record has been analyzed, the ssl connection state is updated. - * - * \param s Pointer to the endpoint which sent the record - * \param length length of SSLv2 record - * \param data pointer to SSLv2 record to analyze - */ -void SSLv2_Interpreter::NewSSLRecord(SSL_InterpreterEndpoint* s, - int length, const u_char* data) - { - ++records; - ++totalRecords; - - if ( ! bAnalyzedCounted ) - { - ++analyzedConnections; - bAnalyzedCounted = true; - } - - // We should see a maximum of 4 cleartext records. - if ( records == 5 ) - { // so this should never happen - Weird("SSLv2: Saw more than 4 records, skipping connection..."); - proxy->SetSkip(1); - return; - } - - // SSLv2 record header analysis - uint32 recordLength = 0; // data length of SSLv2 record - bool isEscape = false; - uint8 padding = 0; - const u_char* contents; - - if ( (data[0] & 0x80) > 0 ) - { // we have a two-byte record header - recordLength = ((data[0] & 0x7f) << 8) | data[1]; - contents = data + 2; - if ( recordLength + 2 != uint32(length) ) - { - // This should never happen, otherwise - // we have a bug in the SSL_RecordBuilder. - Weird("SSLv2: FATAL: recordLength doesn't match data block length!"); - connState = ERROR_REQUIRED; - proxy->SetSkip(1); - return; - } - } - else - { // We have a three-byte record header. - recordLength = ((data[0] & 0x3f) << 8) | data[1]; - isEscape = (data[0] & 0x40) != 0; - padding = data[2]; - contents = data + 3; - if ( recordLength + 3 != uint32(length) ) - { - // This should never happen, otherwise - // we have a bug in the SSL_RecordBuilder. - Weird("SSLv2: FATAL: recordLength doesn't match data block length!"); - connState = ERROR_REQUIRED; - proxy->SetSkip(1); - return; - } - - if ( padding == 0 && ! 
isEscape ) - Weird("SSLv2: 3 Byte record header, but no escape, no padding!"); - } - - if ( recordLength == 0 ) - { - Weird("SSLv2: Record length is zero (no record data)!"); - return; - } - - if ( isEscape ) - Weird("SSLv2: Record has escape bit set (security escape)!"); - - if ( padding > 0 && connState != CACHED_SESSION && - connState != CLIENT_MASTERKEY_SEEN ) - Weird("SSLv2 record with padding > 0 in cleartext!"); - - // MISSING: - // A final consistency check is done when a block cipher is used - // and the protocol is using encryption. The amount of data present - // in a record (RECORD-LENGTH))must be a multiple of the cipher's - // block size. If the received record is not a multiple of the - // cipher's block size then the record is considered damaged, and it - // is to be treated as if an "I/O Error" had occurred (i.e. an - // unrecoverable error is asserted and the connection is closed). - - switch ( connState ) { - case START: - // Only CLIENT-HELLLOs allowed here. - if ( contents[0] != SSLv2_MT_CLIENT_HELLO ) - { - Weird("SSLv2: First packet is not a CLIENT-HELLO!"); - analyzeRecord(s, recordLength, contents); - connState = ERROR_REQUIRED; - } - else - connState = ClientHelloRecord(s, recordLength, contents); - break; - - case CLIENT_HELLO_SEEN: - // Only SERVER-HELLOs or ERRORs allowed here. - if ( contents[0] == SSLv2_MT_SERVER_HELLO ) - connState = ServerHelloRecord(s, recordLength, contents); - else if ( contents[0] == SSLv2_MT_ERROR ) - connState = ErrorRecord(s, recordLength, contents); - else - { - Weird("SSLv2: State violation in CLIENT_HELLO_SEEN!"); - analyzeRecord(s, recordLength, contents); - connState = ERROR_REQUIRED; - } - break; - - case NEW_SESSION: - // We expect a client master key. - if ( contents[0] == SSLv2_MT_CLIENT_MASTER_KEY ) - connState = ClientMasterKeyRecord(s, recordLength, contents); - else if ( contents[0] == SSLv2_MT_ERROR ) - connState = ErrorRecord(s, recordLength, contents); - else - { - Weird("SSLv2: State violation in NEW_SESSION or encrypted record!"); - analyzeRecord(s, recordLength, contents); - connState = ERROR_REQUIRED; - } - - delete pServerCipherSpecs; - pServerCipherSpecs = 0; - break; - - case CACHED_SESSION: - delete pServerCipherSpecs; - pServerCipherSpecs = 0; - // No break here. - - case CLIENT_MASTERKEY_SEEN: - // If no error record, no further analysis. - if ( contents[0] == SSLv2_MT_ERROR && - recordLength == SSLv2_ERROR_RECORD_SIZE ) - connState = ErrorRecord(s, recordLength, contents); - else - { - // So we finished the cleartext handshake. - // Skip all further data. - - proxy->SetSkip(1); - ++openedConnections; - } - break; - - case ERROR_REQUIRED: - if ( contents[0] == SSLv2_MT_ERROR ) - connState = ErrorRecord(s, recordLength, contents); - else - { - // We lost tracking: this should not happen. - Weird("SSLv2: State inconsistency in ERROR_REQUIRED (lost tracking!)!"); - analyzeRecord(s, recordLength, contents); - connState = ERROR_REQUIRED; - } - break; - - case ERROR_SEEN: - // We don't have recoverable errors in cleartext phase, - // so we shouldn't see anymore packets. - Weird("SSLv2: Traffic after error record!"); - analyzeRecord(s, recordLength, contents); - break; - - default: - reporter->InternalError("SSLv2: unknown state"); - break; - } - } - -/*! - * This method is called whenever the connection tracking failed. It calls - * the corresponding analyzer method for the given SSLv2 record, but does not - * update the ssl connection state. 
- * - * \param s Pointer to the endpoint which sent the record - * \param length length of SSLv2 record - * \param data pointer to SSLv2 record to analyze - */ -void SSLv2_Interpreter::analyzeRecord(SSL_InterpreterEndpoint* s, - int length, const u_char* data) - { - switch ( data[0] ) { - case SSLv2_MT_ERROR: - ErrorRecord(s, length, data); - break; - - case SSLv2_MT_CLIENT_HELLO: - ClientHelloRecord(s, length, data); - break; - - case SSLv2_MT_CLIENT_MASTER_KEY: - ClientMasterKeyRecord(s, length, data); - break; - - case SSLv2_MT_SERVER_HELLO: - ServerHelloRecord(s, length, data); - break; - - case SSLv2_MT_CLIENT_FINISHED: - case SSLv2_MT_SERVER_VERIFY: - case SSLv2_MT_SERVER_FINISHED: - case SSLv2_MT_REQUEST_CERTIFICATE: - case SSLv2_MT_CLIENT_CERTIFICATE: - Weird("SSLv2: Encrypted record type seems to be in cleartext"); - break; - - default: - // Unknown record type. - Weird("SSLv2: Unknown record type or encrypted record"); - break; - } - } - -/*! - * This method analyses a SSLv2 CLIENT-HELLO record. - * - * \param s Pointer to the endpoint which sent the record - * \param length length of SSLv2 CLIENT-HELLO record - * \param data pointer to SSLv2 CLIENT-HELLO record to analyze - * - * \return the updated state of the current ssl connection - */ -SSLv2_States SSLv2_Interpreter::ClientHelloRecord(SSL_InterpreterEndpoint* s, - int recordLength, const u_char* recordData) - { - // This method gets the record's data (without the header). - ++clientHelloRecords; - - if ( s != orig ) - Weird("SSLv2: CLIENT-HELLO record from server!"); - - // There should not be any pending data in the SSLv2 reassembler, - // because the client should wait for a server response. - if ( ((SSLv2_Endpoint*) s)->isDataPending() ) - Weird("SSLv2: Pending data in SSL_RecordBuilder after CLIENT-HELLO!"); - - // Client hello minimum header size check. - if ( recordLength < SSLv2_CLIENT_HELLO_HEADER_SIZE ) - { - Weird("SSLv2: CLIENT-HELLO is too small!"); - return ERROR_REQUIRED; - } - - // Extract the data of the client hello header. - SSLv2_ClientHelloHeader ch; - ch.clientVersion = uint16(recordData[1] << 8) | recordData[2]; - ch.cipherSpecLength = uint16(recordData[3] << 8) | recordData[4]; - ch.sessionIdLength = uint16(recordData[5] << 8) | recordData[6]; - ch.challengeLength = uint16(recordData[7] << 8) | recordData[8]; - - if ( ch.clientVersion != SSLProxy_Analyzer::SSLv20 && - ch.clientVersion != SSLProxy_Analyzer::SSLv30 && - ch.clientVersion != SSLProxy_Analyzer::SSLv31 ) - { - Weird("SSLv2: Unsupported SSL-Version in CLIENT-HELLO"); - return ERROR_REQUIRED; - } - - if ( ch.challengeLength + ch.cipherSpecLength + ch.sessionIdLength + - SSLv2_CLIENT_HELLO_HEADER_SIZE != recordLength ) - { - Weird("SSLv2: Size inconsistency in CLIENT-HELLO"); - return ERROR_REQUIRED; - } - - // The CIPHER-SPECS-LENGTH must be > 0 and a multiple of 3. - if ( ch.cipherSpecLength == 0 || ch.cipherSpecLength % 3 != 0 ) - { - Weird("SSLv2: Nonconform CIPHER-SPECS-LENGTH in CLIENT-HELLO."); - return ERROR_REQUIRED; - } - - // The SESSION-ID-LENGTH must either be zero or 16. - if ( ch.sessionIdLength != 0 && ch.sessionIdLength != 16 ) - Weird("SSLv2: Nonconform SESSION-ID-LENGTH in CLIENT-HELLO."); - - if ( (ch.challengeLength < 16) || (ch.challengeLength > 32)) - Weird("SSLv2: Nonconform CHALLENGE-LENGTH in CLIENT-HELLO."); - - const u_char* ptr = recordData; - ptr += SSLv2_CLIENT_HELLO_HEADER_SIZE + ch.cipherSpecLength; - - pSessionId = new SSL_DataBlock(ptr, ch.sessionIdLength); - - // If decrypting, store the challenge. 
- if ( ssl_store_key_material && ch.challengeLength <= 32 ) - pChallenge = new SSL_DataBlock(ptr, ch.challengeLength); - - bClientWantsCachedSession = ch.sessionIdLength != 0; - - TableVal* currentCipherSuites = - analyzeCiphers(s, ch.cipherSpecLength, - recordData + SSLv2_CLIENT_HELLO_HEADER_SIZE); - - fire_ssl_conn_attempt(ch.clientVersion, currentCipherSuites); - - return CLIENT_HELLO_SEEN; - } - -/*! - * This method analyses a SSLv2 SERVER-HELLO record. - * - * \param s Pointer to the endpoint which sent the record - * \param length length of SSLv2 SERVER-HELLO record - * \param data pointer to SSLv2 SERVER-HELLO record to analyze - * - * \return the updated state of the current ssl connection - */ -SSLv2_States SSLv2_Interpreter::ServerHelloRecord(SSL_InterpreterEndpoint* s, - int recordLength, const u_char* recordData) - { - ++serverHelloRecords; - TableVal* currentCipherSuites = NULL; - - if ( s != resp ) - Weird("SSLv2: SERVER-HELLO from client!"); - - if ( recordLength < SSLv2_SERVER_HELLO_HEADER_SIZE ) - { - Weird("SSLv2: SERVER-HELLO is too small!"); - return ERROR_REQUIRED; - } - - // Extract the data of the client hello header. - SSLv2_ServerHelloHeader sh; - sh.sessionIdHit = recordData[1]; - sh.certificateType = recordData[2]; - sh.serverVersion = uint16(recordData[3] << 8) | recordData[4]; - sh.certificateLength = uint16(recordData[5] << 8) | recordData[6]; - sh.cipherSpecLength = uint16(recordData[7] << 8) | recordData[8]; - sh.connectionIdLength = uint16(recordData[9] << 8) | recordData[10]; - - if ( sh.serverVersion != SSLProxy_Analyzer::SSLv20 ) - { - Weird("SSLv2: Unsupported SSL-Version in SERVER-HELLO"); - return ERROR_REQUIRED; - } - - if ( sh.certificateLength + sh.cipherSpecLength + - sh.connectionIdLength + - SSLv2_SERVER_HELLO_HEADER_SIZE != recordLength ) - { - Weird("SSLv2: Size inconsistency in SERVER-HELLO"); - return ERROR_REQUIRED; - } - - // The length of the CONNECTION-ID must be between 16 and 32 bytes. - if ( sh.connectionIdLength < 16 || sh.connectionIdLength > 32 ) - Weird("SSLv2: Nonconform CONNECTION-ID-LENGTH in SERVER-HELLO"); - - // If decrypting, store the connection ID. - if ( ssl_store_key_material && sh.connectionIdLength <= 32 ) - { - const u_char* ptr = recordData; - - ptr += SSLv2_SERVER_HELLO_HEADER_SIZE + sh.cipherSpecLength + - sh.certificateLength; - - pConnectionId = new SSL_DataBlock(ptr, sh.connectionIdLength); - } - - if ( sh.sessionIdHit == 0 ) - { - // Generating reusing-connection event. - EventHandlerPtr event = ssl_session_insertion; - - if ( event ) - { - TableVal* sessionIDTable = - MakeSessionID( - recordData + - SSLv2_SERVER_HELLO_HEADER_SIZE + - sh.certificateLength + - sh.cipherSpecLength, - sh.connectionIdLength); - - val_list* vl = new val_list; - vl->append(proxy->BuildConnVal()); - vl->append(sessionIDTable); - - proxy->ConnectionEvent(ssl_session_insertion, vl); - } - } - - SSLv2_States nextState; - - if ( sh.sessionIdHit != 0 ) - { // we're using a cached session - - // There should not be any pending data in the SSLv2 - // reassembler, because the server should wait for a - // client response. - if ( ((SSLv2_Endpoint*) s)->isDataPending() ) - { - // But turns out some SSL Implementations do this - // when using a cached session. - } - - // Consistency check for SESSION-ID-HIT. - if ( ! 
bClientWantsCachedSession ) - Weird("SSLv2: SESSION-ID hit in SERVER-HELLO, but no SESSION-ID in CLIENT-HELLO!"); - - // If the SESSION-ID-HIT flag is non-zero then the - // CERTIFICATE-TYPE, CERTIFICATE-LENGTH and - // CIPHER-SPECS-LENGTH fields will be zero. - if ( sh.certificateType != 0 || sh.certificateLength != 0 || - sh.cipherSpecLength != 0 ) - Weird("SSLv2: SESSION-ID-HIT, but session data in SERVER-HELLO"); - - // Generate reusing-connection event. - if ( pSessionId ) - { - fire_ssl_conn_reused(pSessionId); - delete pSessionId; - pSessionId = 0; - } - - nextState = CACHED_SESSION; - } - else - { // we're starting a new session - - // There should not be any pending data in the SSLv2 - // reassembler, because the server should wait for - // a client response. - if ( ((SSLv2_Endpoint*) s)->isDataPending() ) - Weird("SSLv2: Pending data in SSL_RecordBuilder after SERVER-HELLO (new session)!"); - - // TODO: check certificate length ??? - if ( sh.certificateLength == 0 ) - Weird("SSLv2: No certificate in SERVER-HELLO!"); - - // The CIPHER-SPECS-LENGTH must be > zero and a multiple of 3. - if ( sh.cipherSpecLength == 0 ) - Weird("SSLv2: No CIPHER-SPECS in SERVER-HELLO!"); - - if ( sh.cipherSpecLength % 3 != 0 ) - { - Weird("SSLv2: Nonconform CIPHER-SPECS-LENGTH in SERVER-HELLO"); - return ERROR_REQUIRED; - } - - const u_char* ptr = recordData; - ptr += sh.certificateLength + SSLv2_SERVER_HELLO_HEADER_SIZE; - currentCipherSuites = analyzeCiphers(s, sh.cipherSpecLength, ptr); - - nextState = NEW_SESSION; - } - - // Check if at least one cipher is supported by the client. - if ( pClientCipherSpecs && pServerCipherSpecs ) - { - bool bFound = false; - for ( int i = 0; i < pClientCipherSpecs->len; i += 3 ) - { - for ( int j = 0; j < pServerCipherSpecs->len; j += 3 ) - { - if ( memcmp(pClientCipherSpecs + i, - pServerCipherSpecs + j, 3) == 0 ) - { - bFound = true; - i = pClientCipherSpecs->len; - break; - } - } - } - - if ( ! bFound ) - { - Weird("SSLv2: Client's and server's CIPHER-SPECS don't match!"); - nextState = ERROR_REQUIRED; - } - - delete pClientCipherSpecs; - pClientCipherSpecs = 0; - } - - // Certificate analysis. - if ( sh.certificateLength > 0 && ssl_analyze_certificates != 0 ) - { - analyzeCertificate(s, recordData + SSLv2_SERVER_HELLO_HEADER_SIZE, - sh.certificateLength, sh.certificateType, false); - } - - if ( nextState == NEW_SESSION ) - // generate server-reply event - fire_ssl_conn_server_reply(sh.serverVersion, currentCipherSuites); - - else if ( nextState == CACHED_SESSION ) - { // generate server-reply event - fire_ssl_conn_server_reply(sh.serverVersion, currentCipherSuites); - // Generate a connection-established event with a dummy - // cipher suite, since we can't remember session information - // (yet). - // Note: A new session identifier is sent encrypted in SSLv2! - fire_ssl_conn_established(sh.serverVersion, 0xABCD); - } - else - // Unref, since the table is not delivered to any event. - Unref(currentCipherSuites); - - return nextState; - } - -/*! - * This method analyses a SSLv2 CLIENT-MASTER-KEY record. 
- * - * \param s Pointer to the endpoint which sent the record - * \param length length of SSLv2 CLIENT-MASTER-KEY record - * \param data pointer to SSLv2 CLIENT-MASTER-KEY record to analyze - * - * \return the updated state of the current ssl connection - */ -SSLv2_States SSLv2_Interpreter:: - ClientMasterKeyRecord(SSL_InterpreterEndpoint* s, int recordLength, - const u_char* recordData) - { - ++clientMasterKeyRecords; - SSLv2_States nextState = CLIENT_MASTERKEY_SEEN; - - if ( s != orig ) - Weird("SSLv2: CLIENT-MASTER-KEY from server!"); - - if ( recordLength < SSLv2_CLIENT_MASTER_KEY_HEADER_SIZE ) - { - Weird("SSLv2: CLIENT-MASTER-KEY is too small!"); - return ERROR_REQUIRED; - } - - // Extract the data of the client master key header. - SSLv2_ClientMasterKeyHeader cmk; - cmk.cipherKind = - ((recordData[1] << 16) | recordData[2] << 8) | recordData[3]; - cmk.clearKeyLength = uint16(recordData[4] << 8) | recordData[5]; - cmk.encryptedKeyLength = uint16(recordData[6] << 8) | recordData[7]; - cmk.keyArgLength = uint16(recordData[8] << 8) | recordData[9]; - - if ( cmk.clearKeyLength + cmk.encryptedKeyLength + cmk.keyArgLength + - SSLv2_CLIENT_MASTER_KEY_HEADER_SIZE != recordLength ) - { - Weird("SSLv2: Size inconsistency in CLIENT-MASTER-KEY"); - return ERROR_REQUIRED; - } - - // Check if cipher is supported by the server. - if ( pServerCipherSpecs ) - { - bool bFound = false; - for ( int i = 0; i < pServerCipherSpecs->len; i += 3 ) - { - uint32 cipherSpec = - ((pServerCipherSpecs->data[i] << 16) | - pServerCipherSpecs->data[i+1] << 8) | - pServerCipherSpecs->data[i+2]; - - if ( cmk.cipherKind == cipherSpec ) - { - bFound = true; - break; - } - } - - if ( ! bFound ) - { - Weird("SSLv2: Client chooses unadvertised cipher in CLIENT-MASTER-KEY!"); - nextState = ERROR_REQUIRED; - } - else - nextState = CLIENT_MASTERKEY_SEEN; - - delete pServerCipherSpecs; - pServerCipherSpecs = 0; - } - - // TODO: check if cipher has been advertised before. - - SSL_CipherSpec* pCipherSpecTemp = 0; - - HashKey h(static_cast(cmk.cipherKind)); - pCipherSpecTemp = (SSL_CipherSpec*) SSL_CipherSpecDict.Lookup(&h); - if ( ! pCipherSpecTemp || ! (pCipherSpecTemp->flags & SSL_FLAG_SSLv20) ) - Weird("SSLv2: Unknown CIPHER-SPEC in CLIENT-MASTER-KEY!"); - else - { // check for conistency of clearKeyLength - if ( cmk.clearKeyLength * 8 != pCipherSpecTemp->clearKeySize ) - { - Weird("SSLv2: Inconsistency of clearKeyLength in CLIENT-MASTER-KEY!"); - // nextState = ERROR_REQUIRED; - } - - // TODO: check for consistency of encryptedKeyLength. - // TODO: check for consistency of keyArgLength. -// switch ( cmk.cipherKind ) -// { -// case SSL_CK_RC4_128_WITH_MD5: -// case SSL_CK_RC4_128_EXPORT40_WITH_MD5: -// if ( cmk.keyArgLength != 0 ) -// { -// Weird("SSLv2: Inconsistency of keyArgLength in CLIENT-MASTER-KEY!"); -// //nextState = ERROR_REQUIRED; -// } -// break; -// case SSL_CK_DES_64_CBC_WITH_MD5: -// case SSL_CK_RC2_128_CBC_EXPORT40_WITH_MD5: -// case SSL_CK_RC2_128_CBC_WITH_MD5: -// case SSL_CK_IDEA_128_CBC_WITH_MD5: -// case SSL_CK_DES_192_EDE3_CBC_WITH_MD5: -// if ( cmk.keyArgLength != 8 ) -// { -// Weird("SSLv2: Inconsistency of keyArgLength in CLIENT-MASTER-KEY!"); -// } -// break; -// } - } - - // Remember the used cipher spec. - usedCipherSpec = SSLv2_CipherSpec(cmk.cipherKind); - - // If decrypting, store the clear key part of the master key. 
- if ( ssl_store_key_material /* && cmk.clearKeyLength == 11 */ ) - { - pMasterClearKey = - new SSL_DataBlock((recordData + SSLv2_CLIENT_MASTER_KEY_HEADER_SIZE), cmk.clearKeyLength); - - pMasterEncryptedKey = - new SSL_DataBlock((recordData + SSLv2_CLIENT_MASTER_KEY_HEADER_SIZE + cmk.clearKeyLength ), cmk.encryptedKeyLength); - } - - if ( nextState == CLIENT_MASTERKEY_SEEN ) - fire_ssl_conn_established(SSLProxy_Analyzer::SSLv20, - cmk.cipherKind); - - return nextState; - } - - -/*! - * This method analyses a SSLv2 ERROR record. - * - * \param s Pointer to the endpoint which sent the record - * \param length length of SSLv2 ERROR record - * \param data pointer to SSLv2 ERROR record to analyze - * - * \return the updated state of the current ssl connection - */ -SSLv2_States SSLv2_Interpreter::ErrorRecord(SSL_InterpreterEndpoint* s, - int recordLength, const u_char* recordData) - { - ++errorRecords; - - if ( unsigned(recordLength) != SSLv2_ERROR_RECORD_SIZE ) - { - Weird("SSLv2: Size mismatch in Error Record!"); - return ERROR_REQUIRED; - } - - SSLv2_ErrorRecord er; - er.errorCode = (recordData[1] << 8) | recordData[2]; - SSL3x_AlertLevel al = SSL3x_AlertLevel(255); - - switch ( er.errorCode ) { - case SSLv2_PE_NO_CIPHER: - // The client doesn't support a cipher which the server - // supports. Only from client to server and not recoverable! - al = SSL3x_ALERT_LEVEL_FATAL; - break; - - case SSLv2_PE_NO_CERTIFICATE: - if ( s == orig ) - // from client to server: not recoverable - al = SSL3x_ALERT_LEVEL_FATAL; - else - // from server to client: recoverable - al = SSL3x_ALERT_LEVEL_WARNING; - break; - - case SSLv2_PE_BAD_CERTIFICATE: - if ( s == orig ) - // from client to server: not recoverable - al = SSL3x_ALERT_LEVEL_FATAL; - else - // from server to client: recoverable - al = SSL3x_ALERT_LEVEL_WARNING; - break; - - case SSLv2_PE_UNSUPPORTED_CERTIFICATE_TYPE: - if ( s == orig ) - // from client to server: not recoverable - al = SSL3x_ALERT_LEVEL_FATAL; - else - // from server to client: recoverable - al = SSL3x_ALERT_LEVEL_WARNING; - break; - - default: - al = SSL3x_ALERT_LEVEL_FATAL; - break; - } - - fire_ssl_conn_alert(SSLProxy_Analyzer::SSLv20, al, er.errorCode); - - return ERROR_SEEN; - } - -/*! - * This method analyses a set of SSLv2 cipher suites. - * - * \param s Pointer to the endpoint which sent the cipher suites - * \param length length of cipher suites - * \param data pointer to cipher suites to analyze - * - * \return a pointer to a Bro TableVal (of type cipher_suites_list) which contains - * the cipher suites list of the current analyzed record - */ -TableVal* SSLv2_Interpreter::analyzeCiphers(SSL_InterpreterEndpoint* s, - int length, const u_char* data) - { - if ( length > MAX_CIPHERSPEC_SIZE ) - { - if ( s == orig ) - Weird("SSLv2: Client has CipherSpecs > MAX_CIPHERSPEC_SIZE"); - else - Weird("SSLv2: Server has CipherSpecs > MAX_CIPHERSPEC_SIZE"); - } - else - { // cipher specs are not too big - if ( ssl_compare_cipherspecs ) - { // store cipher specs for state analysis - if ( s == resp ) - pServerCipherSpecs = - new SSL_DataBlock(data, length); - else - pClientCipherSpecs = - new SSL_DataBlock(data, length); - } - } - - const u_char* pCipher = data; - bool bExtractCipherSuite = false; - TableVal* pCipherTable = 0; - - // We only extract the cipher suite when the corresponding - // ssl events are defined (otherwise we do work for nothing - // and suffer a memory leak). - // FIXME: This check needs to be done only once! 
- if ( (s == orig && ssl_conn_attempt) || - (s == resp && ssl_conn_server_reply) ) - { - pCipherTable = new TableVal(cipher_suites_list); - bExtractCipherSuite = true; - } - - for ( int i = 0; i < length; i += 3 ) - { - SSL_CipherSpec* pCurrentCipherSpec; - uint32 cipherSpecID = - ((pCipher[0] << 16) | pCipher[1] << 8) | pCipher[2]; - - // Check for unknown cipher specs. - HashKey h(static_cast(cipherSpecID)); - pCurrentCipherSpec = - (SSL_CipherSpec*) SSL_CipherSpecDict.Lookup(&h); - - if ( ! pCurrentCipherSpec ) - { - if ( s == orig ) - Weird("SSLv2: Unknown CIPHER-SPEC in CLIENT-HELLO!"); - else - Weird("SSLv2: Unknown CIPHER-SPEC in SERVER-HELLO!"); - } - - if ( bExtractCipherSuite ) - { - Val* index = new Val(cipherSpecID, TYPE_COUNT); - pCipherTable->Assign(index, 0); - Unref(index); - } - - pCipher += 3; - } - - return pCipherTable; - } - -// --- SSLv2_EndPoint --------------------------------------------------------- - -/*! - * The constructor. - * - * \param interpreter Pointer to the SSLv2 interpreter to whom this endpoint belongs to - * \param is_orig true if this is the originating endpoint of the ssl connection, - * false otherwise - */ -SSLv2_Endpoint::SSLv2_Endpoint(SSLv2_Interpreter* interpreter, int is_orig) -: SSL_InterpreterEndpoint(interpreter, is_orig) - { - sentRecords = 0; - } - -/*! - * The destructor. - */ -SSLv2_Endpoint::~SSLv2_Endpoint() - { - } - -/*! - * This method is called by the SSLProxy_Analyzer with a complete reassembled - * SSLv2 record. It passes the record to SSLv2_Interpreter::NewSSLRecord(). - * - * \param t reserved (always zero) - * \param seq reserved (always zero) - * \param len length of the data block containing the ssl record - * \param data pointer to the data block containing the ssl record - */ -void SSLv2_Endpoint::Deliver(int len, const u_char* data) - { - ++((SSLv2_Endpoint*)peer)->sentRecords; - - ((SSLv2_Interpreter*)interpreter)->NewSSLRecord(this, len, data); - } diff --git a/src/SerialInfo.h b/src/SerialInfo.h index d322aa4b37..aa4c382349 100644 --- a/src/SerialInfo.h +++ b/src/SerialInfo.h @@ -15,6 +15,7 @@ public: pid_32bit = false; include_locations = true; new_cache_strategy = false; + broccoli_peer = false; } SerialInfo(const SerialInfo& info) @@ -28,6 +29,7 @@ public: pid_32bit = info.pid_32bit; include_locations = info.include_locations; new_cache_strategy = info.new_cache_strategy; + broccoli_peer = info.broccoli_peer; } // Parameters that control serialization. @@ -46,6 +48,11 @@ public: // If true, we support keeping objs in cache permanently. bool new_cache_strategy; + // If true, we're connecting to a Broccoli. If so, serialization + // specifics may be adapted for functionality Broccoli does not + // support. + bool broccoli_peer; + ChunkedIO::Chunk* chunk; // chunk written right before the serialization // Attributes set during serialization. @@ -70,6 +77,7 @@ public: print = 0; pid_32bit = false; new_cache_strategy = false; + broccoli_peer = false; } UnserialInfo(const UnserialInfo& info) @@ -86,6 +94,7 @@ public: print = info.print; pid_32bit = info.pid_32bit; new_cache_strategy = info.new_cache_strategy; + broccoli_peer = info.broccoli_peer; } // Parameters that control unserialization. @@ -106,6 +115,11 @@ public: // If true, we support keeping objs in cache permanently. bool new_cache_strategy; + // If true, we're connecting to a Broccoli. If so, serialization + // specifics may be adapted for functionality Broccoli does not + // support. 
+ bool broccoli_peer; + // If a global ID already exits, of these policies is used. enum { Keep, // keep the old ID and ignore the new diff --git a/src/SerialObj.cc b/src/SerialObj.cc index a8ab969f5e..73cab275c2 100644 --- a/src/SerialObj.cc +++ b/src/SerialObj.cc @@ -163,7 +163,7 @@ SerialObj* SerialObj::Unserialize(UnserialInfo* info, SerialType type) if ( ! result ) { DBG_POP(DBG_SERIAL); - return false; + return 0; } DBG_POP(DBG_SERIAL); diff --git a/src/SerialTypes.h b/src/SerialTypes.h index 0ba48f89a9..c47ff19298 100644 --- a/src/SerialTypes.h +++ b/src/SerialTypes.h @@ -125,7 +125,7 @@ SERIAL_EXPR(FIELD_EXPR, 22) SERIAL_EXPR(HAS_FIELD_EXPR, 23) SERIAL_EXPR(RECORD_CONSTRUCTOR_EXPR, 24) SERIAL_EXPR(FIELD_ASSIGN_EXPR, 25) -SERIAL_EXPR(RECORD_MATCH_EXPR, 26) +// There used to be a SERIAL_EXPR(RECORD_MATCH_EXPR, 26) here SERIAL_EXPR(ARITH_COERCE_EXPR, 27) SERIAL_EXPR(RECORD_COERCE_EXPR, 28) SERIAL_EXPR(FLATTEN_EXPR, 29) diff --git a/src/SerializationFormat.cc b/src/SerializationFormat.cc index 5e3a68a42e..10dd4f29ea 100644 --- a/src/SerializationFormat.cc +++ b/src/SerializationFormat.cc @@ -230,6 +230,71 @@ bool BinarySerializationFormat::Read(string* v, const char* tag) return true; } +bool BinarySerializationFormat::Read(IPAddr* addr, const char* tag) + { + int n = 0; + if ( ! Read(&n, "addr-len") ) + return false; + + if ( n != 1 && n != 4 ) + return false; + + uint32_t raw[4]; + + for ( int i = 0; i < n; ++i ) + { + if ( ! Read(&raw[i], "addr-part") ) + return false; + + raw[i] = htonl(raw[i]); + } + + if ( n == 1 ) + *addr = IPAddr(IPv4, raw, IPAddr::Network); + else + *addr = IPAddr(IPv6, raw, IPAddr::Network); + + return true; + } + +bool BinarySerializationFormat::Read(IPPrefix* prefix, const char* tag) + { + IPAddr addr; + int len; + + if ( ! (Read(&addr, "prefix") && Read(&len, "width")) ) + return false; + + *prefix = IPPrefix(addr, len); + return true; + } + +bool BinarySerializationFormat::Read(struct in_addr* addr, const char* tag) + { + uint32_t* bytes = (uint32_t*) &addr->s_addr; + + if ( ! Read(&bytes[0], "addr4") ) + return false; + + bytes[0] = htonl(bytes[0]); + return true; + } + +bool BinarySerializationFormat::Read(struct in6_addr* addr, const char* tag) + { + uint32_t* bytes = (uint32_t*) &addr->s6_addr; + + for ( int i = 0; i < 4; ++i ) + { + if ( ! Read(&bytes[i], "addr6-part") ) + return false; + + bytes[i] = htonl(bytes[i]); + } + + return true; + } + bool BinarySerializationFormat::Write(char v, const char* tag) { DBG_LOG(DBG_SERIAL, "Write char %s [%s]", fmt_bytes(&v, 1), tag); @@ -299,6 +364,53 @@ bool BinarySerializationFormat::Write(const string& s, const char* tag) return Write(s.data(), s.size(), tag); } +bool BinarySerializationFormat::Write(const IPAddr& addr, const char* tag) + { + const uint32_t* raw; + int n = addr.GetBytes(&raw); + + assert(n == 1 || n == 4); + + if ( ! Write(n, "addr-len") ) + return false; + + for ( int i = 0; i < n; ++i ) + { + if ( ! Write(ntohl(raw[i]), "addr-part") ) + return false; + } + + return true; + } + +bool BinarySerializationFormat::Write(const IPPrefix& prefix, const char* tag) + { + return Write(prefix.Prefix(), "prefix") && Write(prefix.Length(), "width"); + } + +bool BinarySerializationFormat::Write(const struct in_addr& addr, const char* tag) + { + const uint32_t* bytes = (uint32_t*) &addr.s_addr; + + if ( ! 
Write(ntohl(bytes[0]), "addr4") ) + return false; + + return true; + } + +bool BinarySerializationFormat::Write(const struct in6_addr& addr, const char* tag) + { + const uint32_t* bytes = (uint32_t*) &addr.s6_addr; + + for ( int i = 0; i < 4; ++i ) + { + if ( ! Write(ntohl(bytes[i]), "addr6-part") ) + return false; + } + + return true; + } + bool BinarySerializationFormat::WriteOpenTag(const char* tag) { return true; @@ -389,6 +501,30 @@ bool XMLSerializationFormat::Read(string* s, const char* tag) return false; } +bool XMLSerializationFormat::Read(IPAddr* addr, const char* tag) + { + reporter->InternalError("no reading of xml"); + return false; + } + +bool XMLSerializationFormat::Read(IPPrefix* prefix, const char* tag) + { + reporter->InternalError("no reading of xml"); + return false; + } + +bool XMLSerializationFormat::Read(struct in_addr* addr, const char* tag) + { + reporter->InternalError("no reading of xml"); + return false; + } + +bool XMLSerializationFormat::Read(struct in6_addr* addr, const char* tag) + { + reporter->InternalError("no reading of xml"); + return false; + } + bool XMLSerializationFormat::Write(char v, const char* tag) { return WriteElem(tag, "char", &v, 1); @@ -469,6 +605,30 @@ bool XMLSerializationFormat::Write(const char* buf, int len, const char* tag) return WriteElem(tag, "string", buf, len); } +bool XMLSerializationFormat::Write(const IPAddr& addr, const char* tag) + { + reporter->InternalError("XML output of addresses not implemented"); + return false; + } + +bool XMLSerializationFormat::Write(const IPPrefix& prefix, const char* tag) + { + reporter->InternalError("XML output of prefixes not implemented"); + return false; + } + +bool XMLSerializationFormat::Write(const struct in_addr& addr, const char* tag) + { + reporter->InternalError("XML output of in_addr not implemented"); + return false; + } + +bool XMLSerializationFormat::Write(const struct in6_addr& addr, const char* tag) + { + reporter->InternalError("XML output of in6_addr not implemented"); + return false; + } + bool XMLSerializationFormat::WriteEncodedString(const char* s, int len) { while ( len-- ) diff --git a/src/SerializationFormat.h b/src/SerializationFormat.h index 2067456bf1..f270b61bae 100644 --- a/src/SerializationFormat.h +++ b/src/SerializationFormat.h @@ -9,6 +9,9 @@ using namespace std; #include "util.h" +class IPAddr; +class IPPrefix; + // Abstract base class. class SerializationFormat { public: @@ -28,6 +31,10 @@ public: virtual bool Read(bool* v, const char* tag) = 0; virtual bool Read(double* d, const char* tag) = 0; virtual bool Read(string* s, const char* tag) = 0; + virtual bool Read(IPAddr* addr, const char* tag) = 0; + virtual bool Read(IPPrefix* prefix, const char* tag) = 0; + virtual bool Read(struct in_addr* addr, const char* tag) = 0; + virtual bool Read(struct in6_addr* addr, const char* tag) = 0; // Returns number of raw bytes read since last call to StartRead(). 
int BytesRead() const { return bytes_read; } @@ -50,6 +57,10 @@ public: virtual bool Write(const char* s, const char* tag) = 0; virtual bool Write(const char* buf, int len, const char* tag) = 0; virtual bool Write(const string& s, const char* tag) = 0; + virtual bool Write(const IPAddr& addr, const char* tag) = 0; + virtual bool Write(const IPPrefix& prefix, const char* tag) = 0; + virtual bool Write(const struct in_addr& addr, const char* tag) = 0; + virtual bool Write(const struct in6_addr& addr, const char* tag) = 0; virtual bool WriteOpenTag(const char* tag) = 0; virtual bool WriteCloseTag(const char* tag) = 0; @@ -90,6 +101,10 @@ public: virtual bool Read(double* d, const char* tag); virtual bool Read(char** str, int* len, const char* tag); virtual bool Read(string* s, const char* tag); + virtual bool Read(IPAddr* addr, const char* tag); + virtual bool Read(IPPrefix* prefix, const char* tag); + virtual bool Read(struct in_addr* addr, const char* tag); + virtual bool Read(struct in6_addr* addr, const char* tag); virtual bool Write(int v, const char* tag); virtual bool Write(uint16 v, const char* tag); virtual bool Write(uint32 v, const char* tag); @@ -101,6 +116,10 @@ public: virtual bool Write(const char* s, const char* tag); virtual bool Write(const char* buf, int len, const char* tag); virtual bool Write(const string& s, const char* tag); + virtual bool Write(const IPAddr& addr, const char* tag); + virtual bool Write(const IPPrefix& prefix, const char* tag); + virtual bool Write(const struct in_addr& addr, const char* tag); + virtual bool Write(const struct in6_addr& addr, const char* tag); virtual bool WriteOpenTag(const char* tag); virtual bool WriteCloseTag(const char* tag); virtual bool WriteSeparator(); @@ -123,6 +142,10 @@ public: virtual bool Write(const char* s, const char* tag); virtual bool Write(const char* buf, int len, const char* tag); virtual bool Write(const string& s, const char* tag); + virtual bool Write(const IPAddr& addr, const char* tag); + virtual bool Write(const IPPrefix& prefix, const char* tag); + virtual bool Write(const struct in_addr& addr, const char* tag); + virtual bool Write(const struct in6_addr& addr, const char* tag); virtual bool WriteOpenTag(const char* tag); virtual bool WriteCloseTag(const char* tag); virtual bool WriteSeparator(); @@ -138,6 +161,10 @@ public: virtual bool Read(double* d, const char* tag); virtual bool Read(char** str, int* len, const char* tag); virtual bool Read(string* s, const char* tag); + virtual bool Read(IPAddr* addr, const char* tag); + virtual bool Read(IPPrefix* prefix, const char* tag); + virtual bool Read(struct in_addr* addr, const char* tag); + virtual bool Read(struct in6_addr* addr, const char* tag); private: // Encodes non-printable characters. diff --git a/src/Serializer.cc b/src/Serializer.cc index b82da0aaf3..fc6d00d06c 100644 --- a/src/Serializer.cc +++ b/src/Serializer.cc @@ -742,10 +742,11 @@ FileSerializer::~FileSerializer() io->Flush(); delete [] file; - delete io; - if ( fd >= 0 ) - close(fd); + if ( io ) + delete io; // destructor will call close() on fd + else if ( fd >= 0 ) + safe_close(fd); } bool FileSerializer::Open(const char* file, bool pure) @@ -808,8 +809,8 @@ void FileSerializer::CloseFile() if ( io ) io->Flush(); - if ( fd >= 0 ) - close(fd); + if ( fd >= 0 && ! 
io ) // destructor of io calls close() on fd + safe_close(fd); fd = -1; delete [] file; @@ -1103,9 +1104,9 @@ void EventPlayer::Process() void Packet::Describe(ODesc* d) const { const IP_Hdr ip = IP(); - d->Add(dotted_addr(ip.SrcAddr())); + d->Add(ip.SrcAddr()); d->Add("->"); - d->Add(dotted_addr(ip.DstAddr())); + d->Add(ip.DstAddr()); } bool Packet::Serialize(SerialInfo* info) const diff --git a/src/Serializer.h b/src/Serializer.h index db09cc837f..e7396cb7f8 100644 --- a/src/Serializer.h +++ b/src/Serializer.h @@ -69,6 +69,8 @@ public: { return format->Read(const_cast(str), len, tag); } bool Read(string* s, const char* tag); + bool Read(IPAddr* a, const char* tag) { return format->Read(a, tag); } + bool Read(IPPrefix* p, const char* tag) { return format->Read(p, tag); } bool Write(const char* s, const char* tag) { return format->Write(s, tag); } @@ -76,6 +78,8 @@ public: { return format->Write(buf, len, tag); } bool Write(const string& s, const char* tag) { return format->Write(s.data(), s.size(), tag); } + bool Write(const IPAddr& a, const char* tag) { return format->Write(a, tag); } + bool Write(const IPPrefix& p, const char* tag) { return format->Write(p, tag); } bool WriteOpenTag(const char* tag) { return format->WriteOpenTag(tag); } @@ -121,7 +125,7 @@ protected: // This will be increased whenever there is an incompatible change // in the data format. - static const uint32 DATA_FORMAT_VERSION = 20; + static const uint32 DATA_FORMAT_VERSION = 22; ChunkedIO* io; @@ -411,7 +415,7 @@ public: } const IP_Hdr IP() const - { return IP_Hdr((struct ip *) (pkt + hdr_size)); } + { return IP_Hdr((struct ip *) (pkt + hdr_size), true); } void Describe(ODesc* d) const; diff --git a/src/Sessions.cc b/src/Sessions.cc index 572e8dea59..6f42e5726b 100644 --- a/src/Sessions.cc +++ b/src/Sessions.cc @@ -27,10 +27,10 @@ #include "InterConn.h" #include "Discard.h" #include "RuleMatcher.h" -#include "ConnCompressor.h" #include "DPM.h" #include "PacketSort.h" +#include "TunnelEncapsulation.h" // These represent NetBIOS services on ephemeral ports. They're numbered // so that we can use a single int to hold either an actual TCP/UDP server @@ -68,11 +68,31 @@ void TimerMgrExpireTimer::Dispatch(double t, int is_expire) } } +void IPTunnelTimer::Dispatch(double t, int is_expire) + { + NetSessions::IPTunnelMap::const_iterator it = + sessions->ip_tunnels.find(tunnel_idx); + + if ( it == sessions->ip_tunnels.end() ) + return; + + double last_active = it->second.second; + double inactive_time = t > last_active ? t - last_active : 0; + + if ( inactive_time >= BifConst::Tunnel::ip_tunnel_timeout ) + // tunnel activity timed out, delete it from map + sessions->ip_tunnels.erase(tunnel_idx); + + else if ( ! 
is_expire ) + // tunnel activity didn't timeout, schedule another timer + timer_mgr->Add(new IPTunnelTimer(t, tunnel_idx)); + } + NetSessions::NetSessions() { TypeList* t = new TypeList(); - t->Append(base_type(TYPE_COUNT)); // source IP address - t->Append(base_type(TYPE_COUNT)); // dest IP address + t->Append(base_type(TYPE_ADDR)); // source IP address + t->Append(base_type(TYPE_ADDR)); // dest IP address t->Append(base_type(TYPE_COUNT)); // source and dest ports ch = new CompositeHash(t); @@ -135,22 +155,12 @@ NetSessions::~NetSessions() delete SYN_OS_Fingerprinter; delete pkt_profiler; Unref(arp_analyzer); + delete discarder; + delete stp_manager; } void NetSessions::Done() { - delete stp_manager; - delete discarder; - } - -namespace // private namespace - { - bool looks_like_IPv4_packet(int len, const struct ip* ip_hdr) - { - if ( len < int(sizeof(struct ip)) ) - return false; - return ip_hdr->ip_v == 4 && ntohs(ip_hdr->ip_len) == len; - } } void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr, @@ -169,60 +179,8 @@ void NetSessions::DispatchPacket(double t, const struct pcap_pkthdr* hdr, } if ( encap_hdr_size > 0 && ip_data ) - { - // We're doing tunnel encapsulation. Check whether there's - // a particular associated port. - // - // Should we discourage the use of encap_hdr_size for UDP - // tunnneling? It is probably better handled by enabling - // BifConst::parse_udp_tunnels instead of specifying a fixed - // encap_hdr_size. - if ( udp_tunnel_port > 0 ) - { - ASSERT(ip_hdr); - if ( ip_hdr->ip_p == IPPROTO_UDP ) - { - const struct udphdr* udp_hdr = - reinterpret_cast - (ip_data); - - if ( ntohs(udp_hdr->uh_dport) == udp_tunnel_port ) - { - // A match. - hdr_size += encap_hdr_size; - } - } - } - - else - // Blanket encapsulation - hdr_size += encap_hdr_size; - } - - // Check IP packets encapsulated through UDP tunnels. - // Specifying a udp_tunnel_port is optional but recommended (to avoid - // the cost of checking every UDP packet). - else if ( BifConst::parse_udp_tunnels && ip_data && ip_hdr->ip_p == IPPROTO_UDP ) - { - const struct udphdr* udp_hdr = - reinterpret_cast(ip_data); - - if ( udp_tunnel_port == 0 || // 0 matches any port - udp_tunnel_port == ntohs(udp_hdr->uh_dport) ) - { - const u_char* udp_data = - ip_data + sizeof(struct udphdr); - const struct ip* ip_encap = - reinterpret_cast(udp_data); - const int ip_encap_len = - ntohs(udp_hdr->uh_ulen) - sizeof(struct udphdr); - const int ip_encap_caplen = - hdr->caplen - (udp_data - pkt); - - if ( looks_like_IPv4_packet(ip_encap_len, ip_encap) ) - hdr_size = udp_data - pkt; - } - } + // Blanket encapsulation + hdr_size += encap_hdr_size; if ( src_ps->FilterType() == TYPE_FILTER_NORMAL ) NextPacket(t, hdr, pkt, hdr_size, pkt_elem); @@ -252,7 +210,7 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr, // difference here is that header extraction in // PacketSort does not generate Weird events. 
- DoNextPacket(t, hdr, pkt_elem->IPHdr(), pkt, hdr_size); + DoNextPacket(t, hdr, pkt_elem->IPHdr(), pkt, hdr_size, 0); else { @@ -273,24 +231,35 @@ void NetSessions::NextPacket(double t, const struct pcap_pkthdr* hdr, } const struct ip* ip = (const struct ip*) (pkt + hdr_size); + if ( ip->ip_v == 4 ) { - IP_Hdr ip_hdr(ip); - DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size); + IP_Hdr ip_hdr(ip, false); + DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size, 0); } - else if ( arp_analyzer && arp_analyzer->IsARP(pkt, hdr_size) ) - arp_analyzer->NextPacket(t, hdr, pkt, hdr_size); + else if ( ip->ip_v == 6 ) + { + if ( caplen < sizeof(struct ip6_hdr) ) + { + Weird("truncated_IP", hdr, pkt); + return; + } + + IP_Hdr ip_hdr((const struct ip6_hdr*) (pkt + hdr_size), false, caplen); + DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size, 0); + } + + else if ( ARP_Analyzer::IsARP(pkt, hdr_size) ) + { + if ( arp_analyzer ) + arp_analyzer->NextPacket(t, hdr, pkt, hdr_size); + } else { -#ifdef BROv6 - IP_Hdr ip_hdr((const struct ip6_hdr*) (pkt + hdr_size)); - DoNextPacket(t, hdr, &ip_hdr, pkt, hdr_size); -#else - Weird("non_IPv4_packet", hdr, pkt); + Weird("unknown_packet_type", hdr, pkt); return; -#endif } } @@ -329,7 +298,8 @@ void NetSessions::NextPacketSecondary(double /* t */, const struct pcap_pkthdr* StringVal* cmd_val = new StringVal(sp->Event()->Filter()); args->append(cmd_val); - args->append(BuildHeader(ip)); + IP_Hdr ip_hdr(ip, false); + args->append(ip_hdr.BuildPktHdrVal()); // ### Need to queue event here. try { @@ -397,21 +367,9 @@ int NetSessions::CheckConnectionTag(Connection* conn) return 1; } - -static bool looks_like_IPv4_packet(int len, const struct ip* ip_hdr) - { - if ( (unsigned int) len < sizeof(struct ip) ) - return false; - - if ( ip_hdr->ip_v == 4 && ntohs(ip_hdr->ip_len) == len ) - return true; - else - return false; - } - void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, const IP_Hdr* ip_hdr, const u_char* const pkt, - int hdr_size) + int hdr_size, const EncapsulationStack* encapsulation) { uint32 caplen = hdr->caplen - hdr_size; const struct ip* ip4 = ip_hdr->IP4_Hdr(); @@ -419,7 +377,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, uint32 len = ip_hdr->TotalLen(); if ( hdr->len < len + hdr_size ) { - Weird("truncated_IP", hdr, pkt); + Weird("truncated_IP", hdr, pkt, encapsulation); return; } @@ -431,41 +389,32 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, if ( ! ignore_checksums && ip4 && ones_complement_checksum((void*) ip4, ip_hdr_len, 0) != 0xffff ) { - Weird("bad_IP_checksum", hdr, pkt); + Weird("bad_IP_checksum", hdr, pkt, encapsulation); return; } if ( discarder && discarder->NextPacket(ip_hdr, len, caplen) ) return; - int proto = ip_hdr->NextProto(); - if ( proto != IPPROTO_TCP && proto != IPPROTO_UDP && - proto != IPPROTO_ICMP ) - { - dump_this_packet = 1; - return; - } - FragReassembler* f = 0; - uint32 frag_field = ip_hdr->FragField(); - if ( (frag_field & 0x3fff) != 0 ) + if ( ip_hdr->IsFragment() ) { dump_this_packet = 1; // always record fragments if ( caplen < len ) { - Weird("incompletely_captured_fragment", ip_hdr); + Weird("incompletely_captured_fragment", ip_hdr, encapsulation); // Don't try to reassemble, that's doomed. 
// Discard all except the first fragment (which // is useful in analyzing header-only traces) - if ( (frag_field & 0x1fff) != 0 ) + if ( ip_hdr->FragOffset() != 0 ) return; } else { - f = NextFragment(t, ip_hdr, pkt + hdr_size, frag_field); + f = NextFragment(t, ip_hdr, pkt + hdr_size); const IP_Hdr* ih = f->ReassembledPkt(); if ( ! ih ) // It didn't reassemble into anything yet. @@ -482,21 +431,56 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, len -= ip_hdr_len; // remove IP header caplen -= ip_hdr_len; - uint32 min_hdr_len = (proto == IPPROTO_TCP) ? sizeof(struct tcphdr) : - (proto == IPPROTO_UDP ? sizeof(struct udphdr) : ICMP_MINLEN); - - if ( len < min_hdr_len ) + // We stop building the chain when seeing IPPROTO_ESP so if it's + // there, it's always the last. + if ( ip_hdr->LastHeader() == IPPROTO_ESP ) { - Weird("truncated_header", hdr, pkt); - if ( f ) - Remove(f); // ### + dump_this_packet = 1; + if ( esp_packet ) + { + val_list* vl = new val_list(); + vl->append(ip_hdr->BuildPktHdrVal()); + mgr.QueueEvent(esp_packet, vl); + } + Remove(f); + // Can't do more since upper-layer payloads are going to be encrypted. return; } - if ( caplen < min_hdr_len ) + +#ifdef ENABLE_MOBILE_IPV6 + // We stop building the chain when seeing IPPROTO_MOBILITY so it's always + // last if present. + if ( ip_hdr->LastHeader() == IPPROTO_MOBILITY ) { - Weird("internally_truncated_header", hdr, pkt); - if ( f ) - Remove(f); // ### + dump_this_packet = 1; + + if ( ! ignore_checksums && mobility_header_checksum(ip_hdr) != 0xffff ) + { + Weird("bad_MH_checksum", hdr, pkt, encapsulation); + Remove(f); + return; + } + + if ( mobile_ipv6_message ) + { + val_list* vl = new val_list(); + vl->append(ip_hdr->BuildPktHdrVal()); + mgr.QueueEvent(mobile_ipv6_message, vl); + } + + if ( ip_hdr->NextProto() != IPPROTO_NONE ) + Weird("mobility_piggyback", hdr, pkt, encapsulation); + + Remove(f); + return; + } +#endif + + int proto = ip_hdr->NextProto(); + + if ( CheckHeaderTrunc(proto, len, caplen, hdr, pkt, encapsulation) ) + { + Remove(f); return; } @@ -506,7 +490,6 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, id.src_addr = ip_hdr->SrcAddr(); id.dst_addr = ip_hdr->DstAddr(); Dictionary* d = 0; - bool pass_to_conn_compressor = false; switch ( proto ) { case IPPROTO_TCP: @@ -516,7 +499,6 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, id.dst_port = tp->th_dport; id.is_one_way = 0; d = &tcp_conns; - pass_to_conn_compressor = ip4 && use_connection_compressor; break; } @@ -535,7 +517,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, const struct icmp* icmpp = (const struct icmp *) data; id.src_port = icmpp->icmp_type; - id.dst_port = ICMP_counterpart(icmpp->icmp_type, + id.dst_port = ICMP4_counterpart(icmpp->icmp_type, icmpp->icmp_code, id.is_one_way); @@ -546,12 +528,104 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, break; } + case IPPROTO_ICMPV6: + { + const struct icmp* icmpp = (const struct icmp *) data; + + id.src_port = icmpp->icmp_type; + id.dst_port = ICMP6_counterpart(icmpp->icmp_type, + icmpp->icmp_code, + id.is_one_way); + + id.src_port = htons(id.src_port); + id.dst_port = htons(id.dst_port); + + d = &icmp_conns; + break; + } + + case IPPROTO_IPV4: + case IPPROTO_IPV6: + { + if ( ! 
BifConst::Tunnel::enable_ip ) + { + Weird("IP_tunnel", ip_hdr, encapsulation); + Remove(f); + return; + } + + if ( encapsulation && + encapsulation->Depth() >= BifConst::Tunnel::max_depth ) + { + Weird("exceeded_tunnel_max_depth", ip_hdr, encapsulation); + Remove(f); + return; + } + + // Check for a valid inner packet first. + IP_Hdr* inner = 0; + int result = ParseIPPacket(caplen, data, proto, inner); + + if ( result < 0 ) + Weird("truncated_inner_IP", ip_hdr, encapsulation); + + else if ( result > 0 ) + Weird("inner_IP_payload_length_mismatch", ip_hdr, encapsulation); + + if ( result != 0 ) + { + delete inner; + Remove(f); + return; + } + + // Look up to see if we've already seen this IP tunnel, identified + // by the pair of IP addresses, so that we can always associate the + // same UID with it. + IPPair tunnel_idx; + if ( ip_hdr->SrcAddr() < ip_hdr->DstAddr() ) + tunnel_idx = IPPair(ip_hdr->SrcAddr(), ip_hdr->DstAddr()); + else + tunnel_idx = IPPair(ip_hdr->DstAddr(), ip_hdr->SrcAddr()); + + IPTunnelMap::iterator it = ip_tunnels.find(tunnel_idx); + + if ( it == ip_tunnels.end() ) + { + EncapsulatingConn ec(ip_hdr->SrcAddr(), ip_hdr->DstAddr()); + ip_tunnels[tunnel_idx] = TunnelActivity(ec, network_time); + timer_mgr->Add(new IPTunnelTimer(network_time, tunnel_idx)); + } + else + it->second.second = network_time; + + DoNextInnerPacket(t, hdr, inner, encapsulation, + ip_tunnels[tunnel_idx].first); + + Remove(f); + return; + } + + case IPPROTO_NONE: + { + // If the packet is encapsulated in Teredo, then it was a bubble and + // the Teredo analyzer may have raised an event for that, else we're + // not sure the reason for the No Next header in the packet. + if ( ! ( encapsulation && + encapsulation->LastType() == BifEnum::Tunnel::TEREDO ) ) + Weird("ipv6_no_next", hdr, pkt); + + Remove(f); + return; + } + default: - Weird(fmt("unknown_protocol %d", proto), hdr, pkt); + Weird(fmt("unknown_protocol_%d", proto), hdr, pkt, encapsulation); + Remove(f); return; } - HashKey* h = id.BuildConnKey(); + HashKey* h = BuildConnIDHashKey(id); if ( ! h ) reporter->InternalError("hash computation failed"); @@ -559,56 +633,67 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, // FIXME: The following is getting pretty complex. Need to split up // into separate functions. - if ( pass_to_conn_compressor ) - conn = conn_compressor->NextPacket(t, h, ip_hdr, hdr, pkt); + conn = (Connection*) d->Lookup(h); + if ( ! conn ) + { + conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation); + if ( conn ) + d->Insert(h, conn); + } else { - conn = (Connection*) d->Lookup(h); - if ( ! conn ) + // We already know that connection. + int consistent = CheckConnectionTag(conn); + if ( consistent < 0 ) { - conn = NewConn(h, t, &id, data, proto); + delete h; + Remove(f); + return; + } + + if ( ! consistent || conn->IsReuse(t, data) ) + { + if ( consistent ) + conn->Event(connection_reused, 0); + + Remove(conn); + conn = NewConn(h, t, &id, data, proto, ip_hdr->FlowLabel(), encapsulation); if ( conn ) d->Insert(h, conn); } else { - // We already know that connection. - int consistent = CheckConnectionTag(conn); - if ( consistent < 0 ) - { - delete h; - return; - } - - if ( ! consistent || conn->IsReuse(t, data) ) - { - if ( consistent ) - conn->Event(connection_reused, 0); - - Remove(conn); - conn = NewConn(h, t, &id, data, proto); - if ( conn ) - d->Insert(h, conn); - } - else - delete h; - } - - if ( ! conn ) delete h; + conn->CheckEncapsulation(encapsulation); + } } if ( ! 
conn ) + { + delete h; + Remove(f); return; + } int record_packet = 1; // whether to record the packet at all int record_content = 1; // whether to record its data - int is_orig = addr_eq(id.src_addr, conn->OrigAddr()) && - id.src_port == conn->OrigPort(); + int is_orig = (id.src_addr == conn->OrigAddr()) && + (id.src_port == conn->OrigPort()); - if ( new_packet && ip4 ) - conn->Event(new_packet, 0, BuildHeader(ip4)); + conn->CheckFlowLabel(is_orig, ip_hdr->FlowLabel()); + + Val* pkt_hdr_val = 0; + + if ( ipv6_ext_headers && ip_hdr->NumHeaders() > 1 ) + { + pkt_hdr_val = ip_hdr->BuildPktHdrVal(); + conn->Event(ipv6_ext_headers, 0, pkt_hdr_val); + } + + if ( new_packet ) + conn->Event(new_packet, 0, + pkt_hdr_val ? pkt_hdr_val->Ref() : ip_hdr->BuildPktHdrVal()); conn->NextPacket(t, is_orig, ip_hdr, len, caplen, data, record_packet, record_content, @@ -618,7 +703,7 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, { // Above we already recorded the fragment in its entirety. f->DeleteTimer(); - Remove(f); // ### + Remove(f); } else if ( record_packet ) @@ -634,110 +719,119 @@ void NetSessions::DoNextPacket(double t, const struct pcap_pkthdr* hdr, } } -Val* NetSessions::BuildHeader(const struct ip* ip) +void NetSessions::DoNextInnerPacket(double t, const struct pcap_pkthdr* hdr, + const IP_Hdr* inner, const EncapsulationStack* prev, + const EncapsulatingConn& ec) { - static RecordType* pkt_hdr_type = 0; - static RecordType* ip_hdr_type = 0; - static RecordType* tcp_hdr_type = 0; - static RecordType* udp_hdr_type = 0; - static RecordType* icmp_hdr_type; + struct pcap_pkthdr fake_hdr; + fake_hdr.caplen = fake_hdr.len = inner->TotalLen(); - if ( ! pkt_hdr_type ) + if ( hdr ) + fake_hdr.ts = hdr->ts; + else { - pkt_hdr_type = internal_type("pkt_hdr")->AsRecordType(); - ip_hdr_type = internal_type("ip_hdr")->AsRecordType(); - tcp_hdr_type = internal_type("tcp_hdr")->AsRecordType(); - udp_hdr_type = internal_type("udp_hdr")->AsRecordType(); - icmp_hdr_type = internal_type("icmp_hdr")->AsRecordType(); + fake_hdr.ts.tv_sec = (time_t) network_time; + fake_hdr.ts.tv_usec = (suseconds_t) + ((network_time - (double)fake_hdr.ts.tv_sec) * 1000000); } - RecordVal* pkt_hdr = new RecordVal(pkt_hdr_type); + const u_char* pkt = 0; - RecordVal* ip_hdr = new RecordVal(ip_hdr_type); + if ( inner->IP4_Hdr() ) + pkt = (const u_char*) inner->IP4_Hdr(); + else + pkt = (const u_char*) inner->IP6_Hdr(); - int ip_hdr_len = ip->ip_hl * 4; - int ip_pkt_len = ntohs(ip->ip_len); + EncapsulationStack* outer = prev ? + new EncapsulationStack(*prev) : new EncapsulationStack(); + outer->Add(ec); - ip_hdr->Assign(0, new Val(ip->ip_hl * 4, TYPE_COUNT)); - ip_hdr->Assign(1, new Val(ip->ip_tos, TYPE_COUNT)); - ip_hdr->Assign(2, new Val(ip_pkt_len, TYPE_COUNT)); - ip_hdr->Assign(3, new Val(ntohs(ip->ip_id), TYPE_COUNT)); - ip_hdr->Assign(4, new Val(ip->ip_ttl, TYPE_COUNT)); - ip_hdr->Assign(5, new Val(ip->ip_p, TYPE_COUNT)); - ip_hdr->Assign(6, new AddrVal(ip->ip_src.s_addr)); - ip_hdr->Assign(7, new AddrVal(ip->ip_dst.s_addr)); + DoNextPacket(t, &fake_hdr, inner, pkt, 0, outer); - pkt_hdr->Assign(0, ip_hdr); - - // L4 header. 
- const u_char* data = ((const u_char*) ip) + ip_hdr_len; - - int proto = ip->ip_p; - switch ( proto ) { - case IPPROTO_TCP: - { - const struct tcphdr* tp = (const struct tcphdr*) data; - RecordVal* tcp_hdr = new RecordVal(tcp_hdr_type); - - int tcp_hdr_len = tp->th_off * 4; - int data_len = ip_pkt_len - ip_hdr_len - tcp_hdr_len; - - tcp_hdr->Assign(0, new PortVal(ntohs(tp->th_sport), TRANSPORT_TCP)); - tcp_hdr->Assign(1, new PortVal(ntohs(tp->th_dport), TRANSPORT_TCP)); - tcp_hdr->Assign(2, new Val(uint32(ntohl(tp->th_seq)), TYPE_COUNT)); - tcp_hdr->Assign(3, new Val(uint32(ntohl(tp->th_ack)), TYPE_COUNT)); - tcp_hdr->Assign(4, new Val(tcp_hdr_len, TYPE_COUNT)); - tcp_hdr->Assign(5, new Val(data_len, TYPE_COUNT)); - tcp_hdr->Assign(6, new Val(tp->th_flags, TYPE_COUNT)); - tcp_hdr->Assign(7, new Val(ntohs(tp->th_win), TYPE_COUNT)); - - pkt_hdr->Assign(1, tcp_hdr); - break; - } - - case IPPROTO_UDP: - { - const struct udphdr* up = (const struct udphdr*) data; - RecordVal* udp_hdr = new RecordVal(udp_hdr_type); - - udp_hdr->Assign(0, new PortVal(ntohs(up->uh_sport), TRANSPORT_UDP)); - udp_hdr->Assign(1, new PortVal(ntohs(up->uh_dport), TRANSPORT_UDP)); - udp_hdr->Assign(2, new Val(ntohs(up->uh_ulen), TYPE_COUNT)); - - pkt_hdr->Assign(2, udp_hdr); - break; - } - - case IPPROTO_ICMP: - { - const struct icmp* icmpp = (const struct icmp *) data; - RecordVal* icmp_hdr = new RecordVal(icmp_hdr_type); - - icmp_hdr->Assign(0, new Val(icmpp->icmp_type, TYPE_COUNT)); - - pkt_hdr->Assign(3, icmp_hdr); - break; - } - - default: - { - // This is not a protocol we understand. - } + delete inner; + delete outer; } - return pkt_hdr; +int NetSessions::ParseIPPacket(int caplen, const u_char* const pkt, int proto, + IP_Hdr*& inner) + { + if ( proto == IPPROTO_IPV6 ) + { + if ( caplen < (int)sizeof(struct ip6_hdr) ) + return -1; + + inner = new IP_Hdr((const struct ip6_hdr*) pkt, false, caplen); + } + + else if ( proto == IPPROTO_IPV4 ) + { + if ( caplen < (int)sizeof(struct ip) ) + return -1; + + inner = new IP_Hdr((const struct ip*) pkt, false); + } + + else + reporter->InternalError("Bad IP protocol version in DoNextInnerPacket"); + + if ( (uint32)caplen != inner->TotalLen() ) + return (uint32)caplen < inner->TotalLen() ? -1 : 1; + + return 0; + } + +bool NetSessions::CheckHeaderTrunc(int proto, uint32 len, uint32 caplen, + const struct pcap_pkthdr* h, + const u_char* p, const EncapsulationStack* encap) + { + uint32 min_hdr_len = 0; + switch ( proto ) { + case IPPROTO_TCP: + min_hdr_len = sizeof(struct tcphdr); + break; + case IPPROTO_UDP: + min_hdr_len = sizeof(struct udphdr); + break; + case IPPROTO_IPV4: + min_hdr_len = sizeof(struct ip); + break; + case IPPROTO_IPV6: + min_hdr_len = sizeof(struct ip6_hdr); + break; + case IPPROTO_NONE: + min_hdr_len = 0; + break; + case IPPROTO_ICMP: + case IPPROTO_ICMPV6: + default: + // Use for all other packets. + min_hdr_len = ICMP_MINLEN; + break; + } + + if ( len < min_hdr_len ) + { + Weird("truncated_header", h, p, encap); + return true; + } + + if ( caplen < min_hdr_len ) + { + Weird("internally_truncated_header", h, p, encap); + return true; + } + + return false; } FragReassembler* NetSessions::NextFragment(double t, const IP_Hdr* ip, - const u_char* pkt, uint32 frag_field) + const u_char* pkt) { - uint32 src_addr = uint32(ip->SrcAddr4()); - uint32 dst_addr = uint32(ip->DstAddr4()); - uint32 frag_id = ntohs(ip->ID4()); // we actually could skip conv. 
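For reference, the calling convention for ParseIPPacket() (defined above) together with DoNextInnerPacket() (defined earlier in this file) condenses to the sketch below. This is a hypothetical caller only; outer_hdr, payload, payload_len and outer_stack stand in for whatever state a real call site keeps (compare the IPPROTO_IPV4/IPV6 branch above and the Teredo analyzer further down).

    // Sketch of a decapsulating caller (assumes the usual Sessions.h,
    // IP.h and TunnelEncapsulation.h includes are in scope).
    static void forward_inner_packet(const IP_Hdr* outer_hdr,
                                     const u_char* payload, int payload_len,
                                     const EncapsulationStack* outer_stack)
        {
        IP_Hdr* inner = 0;
        int rslt = sessions->ParseIPPacket(payload_len, payload,
                                           IPPROTO_IPV6, inner);

        if ( rslt < 0 )
            // Not enough captured bytes for even the fixed header.
            sessions->Weird("truncated_inner_IP", outer_hdr, outer_stack);

        else if ( rslt > 0 )
            // Header claims more payload than was actually captured.
            sessions->Weird("inner_IP_payload_length_mismatch", outer_hdr,
                            outer_stack);

        if ( rslt != 0 )
            {
            delete inner;   // may have been allocated even on failure
            return;
            }

        // DoNextInnerPacket() assumes ownership of 'inner' and recurses
        // on DoNextPacket() with a synthesized pcap header.
        EncapsulatingConn ec(outer_hdr->SrcAddr(), outer_hdr->DstAddr());
        sessions->DoNextInnerPacket(network_time, 0, inner, outer_stack, ec);
        }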
+ uint32 frag_id = ip->ID(); ListVal* key = new ListVal(TYPE_ANY); - key->Append(new Val(src_addr, TYPE_COUNT)); - key->Append(new Val(dst_addr, TYPE_COUNT)); + key->Append(new AddrVal(ip->SrcAddr())); + key->Append(new AddrVal(ip->DstAddr())); key->Append(new Val(frag_id, TYPE_COUNT)); HashKey* h = ch->ComputeHash(key, 1); @@ -747,7 +841,7 @@ FragReassembler* NetSessions::NextFragment(double t, const IP_Hdr* ip, FragReassembler* f = fragments.Lookup(h); if ( ! f ) { - f = new FragReassembler(this, ip, pkt, frag_field, h, t); + f = new FragReassembler(this, ip, pkt, h, t); fragments.Insert(h, f); Unref(key); return f; @@ -756,7 +850,7 @@ FragReassembler* NetSessions::NextFragment(double t, const IP_Hdr* ip, delete h; Unref(key); - f->AddFragment(t, ip, pkt, frag_field); + f->AddFragment(t, ip, pkt); return f; } @@ -772,7 +866,7 @@ int NetSessions::Get_OS_From_SYN(struct os_type* retval, quirks, ECN) : 0; } -bool NetSessions::CompareWithPreviousOSMatch(uint32 addr, int id) const +bool NetSessions::CompareWithPreviousOSMatch(const IPAddr& addr, int id) const { return SYN_OS_Fingerprinter ? SYN_OS_Fingerprinter->CacheMatch(addr, id) : 0; @@ -813,44 +907,30 @@ Connection* NetSessions::FindConnection(Val* v) // types, too. } - addr_type orig_addr = (*vl)[orig_h]->AsAddr(); - addr_type resp_addr = (*vl)[resp_h]->AsAddr(); + const IPAddr& orig_addr = (*vl)[orig_h]->AsAddr(); + const IPAddr& resp_addr = (*vl)[resp_h]->AsAddr(); PortVal* orig_portv = (*vl)[orig_p]->AsPortVal(); PortVal* resp_portv = (*vl)[resp_p]->AsPortVal(); ConnID id; -#ifdef BROv6 id.src_addr = orig_addr; id.dst_addr = resp_addr; -#else - id.src_addr = &orig_addr; - id.dst_addr = &resp_addr; -#endif id.src_port = htons((unsigned short) orig_portv->Port()); id.dst_port = htons((unsigned short) resp_portv->Port()); id.is_one_way = 0; // ### incorrect for ICMP connections - HashKey* h = id.BuildConnKey(); + HashKey* h = BuildConnIDHashKey(id); if ( ! h ) reporter->InternalError("hash computation failed"); Dictionary* d; if ( orig_portv->IsTCP() ) - { - if ( use_connection_compressor ) - { - Connection* conn = conn_compressor->Lookup(h); - delete h; - return conn; - } - else - d = &tcp_conns; - } + d = &tcp_conns; else if ( orig_portv->IsUDP() ) d = &udp_conns; else if ( orig_portv->IsICMP() ) @@ -903,17 +983,7 @@ void NetSessions::Remove(Connection* c) switch ( c->ConnTransport() ) { case TRANSPORT_TCP: - if ( use_connection_compressor && - conn_compressor->Remove(k) ) - // Note, if the Remove() returned false - // then the compressor doesn't know about - // this connection, which *should* mean that - // we never gave it the connection in the - // first place, and thus we should check - // the regular TCP table instead. - ; - - else if ( ! tcp_conns.RemoveEntry(k) ) + if ( ! tcp_conns.RemoveEntry(k) ) reporter->InternalError("connection missing"); break; @@ -939,6 +1009,7 @@ void NetSessions::Remove(Connection* c) void NetSessions::Remove(FragReassembler* f) { + if ( ! f ) return; HashKey* k = f->Key(); if ( ! k ) reporter->InternalError("fragment block not in dictionary"); @@ -960,13 +1031,8 @@ void NetSessions::Insert(Connection* c) // reference the old key for already existing connections. 
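The FindConnection() rewrite above also shows the broader shift in this patch: ConnID now carries IPAddr values directly rather than the old BROv6-conditional addr_type representation, so one code path serves both address families. A condensed sketch of that lookup, written as it would read inside a NetSessions method; the endpoints are made up for illustration:

    // Family-agnostic connection lookup key; ports stay in network order.
    ConnID id;
    id.src_addr = IPAddr("2001:db8::1");    // illustrative endpoints
    id.dst_addr = IPAddr("2001:db8::2");
    id.src_port = htons(51000);
    id.dst_port = htons(443);
    id.is_one_way = 0;

    HashKey* h = BuildConnIDHashKey(id);    // replaces ConnID::BuildConnKey()
    Connection* c = (Connection*) tcp_conns.Lookup(h);  // 0 if not tracked
    delete h;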
case TRANSPORT_TCP: - if ( use_connection_compressor ) - old = conn_compressor->Insert(c); - else - { - old = (Connection*) tcp_conns.Remove(c->Key()); - tcp_conns.Insert(c->Key(), c); - } + old = (Connection*) tcp_conns.Remove(c->Key()); + tcp_conns.Insert(c->Key(), c); break; case TRANSPORT_UDP: @@ -998,9 +1064,6 @@ void NetSessions::Insert(Connection* c) void NetSessions::Drain() { - if ( use_connection_compressor ) - conn_compressor->Drain(); - IterCookie* cookie = tcp_conns.InitForIteration(); Connection* tc; @@ -1050,7 +1113,8 @@ void NetSessions::GetStats(SessionStats& s) const } Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id, - const u_char* data, int proto) + const u_char* data, int proto, uint32 flow_label, + const EncapsulationStack* encapsulation) { // FIXME: This should be cleaned up a bit, it's too protocol-specific. // But I'm not yet sure what the right abstraction for these things is. @@ -1070,6 +1134,9 @@ Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id, case IPPROTO_UDP: tproto = TRANSPORT_UDP; break; + case IPPROTO_ICMPV6: + tproto = TRANSPORT_ICMP; + break; default: reporter->InternalError("unknown transport protocol"); break; @@ -1092,7 +1159,7 @@ Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id, // an analyzable connection. ConnID flip_id = *id; - const uint32* ta = flip_id.src_addr; + const IPAddr ta = flip_id.src_addr; flip_id.src_addr = flip_id.dst_addr; flip_id.dst_addr = ta; @@ -1103,7 +1170,7 @@ Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id, id = &flip_id; } - Connection* conn = new Connection(this, k, t, id); + Connection* conn = new Connection(this, k, t, id, flow_label, encapsulation); conn->SetTransport(tproto); dpm->BuildInitialAnalyzerTree(tproto, conn, data); @@ -1113,10 +1180,7 @@ Connection* NetSessions::NewConn(HashKey* k, double t, const ConnID* id, conn->AppendAddl(fmt("tag=%s", conn->GetTimerMgr()->GetTag().c_str())); - // If the connection compressor is active, it takes care of the - // new_connection/connection_external events for TCP connections. - if ( new_connection && - (tproto != TRANSPORT_TCP || ! 
use_connection_compressor) ) + if ( new_connection ) { conn->Event(new_connection, 0); @@ -1270,18 +1334,26 @@ void NetSessions::Internal(const char* msg, const struct pcap_pkthdr* hdr, reporter->InternalError("%s", msg); } -void NetSessions::Weird(const char* name, - const struct pcap_pkthdr* hdr, const u_char* pkt) +void NetSessions::Weird(const char* name, const struct pcap_pkthdr* hdr, + const u_char* pkt, const EncapsulationStack* encap) { if ( hdr ) dump_this_packet = 1; - reporter->Weird(name); + if ( encap && encap->LastType() != BifEnum::Tunnel::NONE ) + reporter->Weird(fmt("%s_in_tunnel", name)); + else + reporter->Weird(name); } -void NetSessions::Weird(const char* name, const IP_Hdr* ip) +void NetSessions::Weird(const char* name, const IP_Hdr* ip, + const EncapsulationStack* encap) { - reporter->Weird(ip->SrcAddr(), ip->DstAddr(), name); + if ( encap && encap->LastType() != BifEnum::Tunnel::NONE ) + reporter->Weird(ip->SrcAddr(), ip->DstAddr(), + fmt("%s_in_tunnel", name)); + else + reporter->Weird(ip->SrcAddr(), ip->DstAddr(), name); } unsigned int NetSessions::ConnectionMemoryUsage() diff --git a/src/Sessions.h b/src/Sessions.h index 452de874db..abaa8b49d0 100644 --- a/src/Sessions.h +++ b/src/Sessions.h @@ -11,13 +11,16 @@ #include "PacketFilter.h" #include "Stats.h" #include "NetVar.h" +#include "TunnelEncapsulation.h" +#include struct pcap_pkthdr; +class EncapsulationStack; class Connection; -class ConnID; class OSFingerprint; class ConnCompressor; +struct ConnID; declare(PDict,Connection); declare(PDict,FragReassembler); @@ -79,7 +82,7 @@ public: // Returns a reassembled packet, or nil if there are still // some missing fragments. FragReassembler* NextFragment(double t, const IP_Hdr* ip, - const u_char* pkt, uint32 frag_field); + const u_char* pkt); int Get_OS_From_SYN(struct os_type* retval, uint16 tot, uint8 DF_flag, uint8 TTL, uint16 WSS, @@ -87,7 +90,7 @@ public: uint32 tstamp, /* uint8 TOS, */ uint32 quirks, uint8 ECN) const; - bool CompareWithPreviousOSMatch(uint32 addr, int id) const; + bool CompareWithPreviousOSMatch(const IPAddr& addr, int id) const; // Looks up the connection referred to by the given Val, // which should be a conn_id record. Returns nil if there's @@ -105,9 +108,10 @@ public: void GetStats(SessionStats& s) const; - void Weird(const char* name, - const struct pcap_pkthdr* hdr, const u_char* pkt); - void Weird(const char* name, const IP_Hdr* ip); + void Weird(const char* name, const struct pcap_pkthdr* hdr, + const u_char* pkt, const EncapsulationStack* encap = 0); + void Weird(const char* name, const IP_Hdr* ip, + const EncapsulationStack* encap = 0); PacketFilter* GetPacketFilter() { @@ -131,6 +135,51 @@ public: icmp_conns.Length(); } + void DoNextPacket(double t, const struct pcap_pkthdr* hdr, + const IP_Hdr* ip_hdr, const u_char* const pkt, + int hdr_size, const EncapsulationStack* encapsulation); + + /** + * Wrapper that recurses on DoNextPacket for encapsulated IP packets. + * + * @param t Network time. + * @param hdr If the outer pcap header is available, this pointer can be set + * so that the fake pcap header passed to DoNextPacket will use + * the same timeval. The caplen and len fields of the fake pcap + * header are always set to the TotalLength() of \a inner. + * @param inner Pointer to IP header wrapper of the inner packet, ownership + * of the pointer's memory is assumed by this function. + * @param prev Any previous encapsulation stack of the caller, not including + * the most-recently found depth of encapsulation. 
+ * @param ec The most-recently found depth of encapsulation. + */ + void DoNextInnerPacket(double t, const struct pcap_pkthdr* hdr, + const IP_Hdr* inner, const EncapsulationStack* prev, + const EncapsulatingConn& ec); + + /** + * Returns a wrapper IP_Hdr object if \a pkt appears to be a valid IPv4 + * or IPv6 header based on whether it's long enough to contain such a header + * and also that the payload length field of that header matches the actual + * length of \a pkt given by \a caplen. + * + * @param caplen The length of \a pkt in bytes. + * @param pkt The inner IP packet data. + * @param proto Either IPPROTO_IPV6 or IPPROTO_IPV4 to indicate which IP + * protocol \a pkt corresponds to. + * @param inner The inner IP packet wrapper pointer to be allocated/assigned + * if \a pkt looks like a valid IP packet or at least long enough + * to hold an IP header. + * @return 0 If the inner IP packet appeared valid, else -1 if \a caplen + * is greater than the supposed IP packet's payload length field or + * 1 if \a caplen is less than the supposed packet's payload length. + * In the -1 case, \a inner may still be non-null if \a caplen was + * long enough to be an IP header, and \a inner is always non-null + * for other return values. + */ + int ParseIPPacket(int caplen, const u_char* const pkt, int proto, + IP_Hdr*& inner); + unsigned int ConnectionMemoryUsage(); unsigned int ConnectionMemoryUsageConnVals(); unsigned int MemoryAllocation(); @@ -140,9 +189,11 @@ protected: friend class RemoteSerializer; friend class ConnCompressor; friend class TimerMgrExpireTimer; + friend class IPTunnelTimer; Connection* NewConn(HashKey* k, double t, const ConnID* id, - const u_char* data, int proto); + const u_char* data, int proto, uint32 flow_lable, + const EncapsulationStack* encapsulation); // Check whether the tag of the current packet is consistent with // the given connection. Returns: @@ -173,10 +224,6 @@ protected: const u_char* const pkt, int hdr_size, PacketSortElement* pkt_elem); - void DoNextPacket(double t, const struct pcap_pkthdr* hdr, - const IP_Hdr* ip_hdr, const u_char* const pkt, - int hdr_size); - void NextPacketSecondary(double t, const struct pcap_pkthdr* hdr, const u_char* const pkt, int hdr_size, const PktSrc* src_ps); @@ -190,10 +237,12 @@ protected: void Internal(const char* msg, const struct pcap_pkthdr* hdr, const u_char* pkt); - // Builds a record encapsulating a packet. This should be more - // general, including the equivalent of a union of tcp/udp/icmp - // headers . - Val* BuildHeader(const struct ip* ip); + // For a given protocol, checks whether the header's length as derived + // from lower-level headers or the length actually captured is less + // than that protocol's minimum header size. 
+ bool CheckHeaderTrunc(int proto, uint32 len, uint32 caplen, + const struct pcap_pkthdr* hdr, const u_char* pkt, + const EncapsulationStack* encap); CompositeHash* ch; PDict(Connection) tcp_conns; @@ -201,6 +250,11 @@ protected: PDict(Connection) icmp_conns; PDict(FragReassembler) fragments; + typedef pair IPPair; + typedef pair TunnelActivity; + typedef std::map IPTunnelMap; + IPTunnelMap ip_tunnels; + ARP_Analyzer* arp_analyzer; SteppingStoneManager* stp_manager; @@ -218,6 +272,21 @@ protected: TimerMgrMap timer_mgrs; }; + +class IPTunnelTimer : public Timer { +public: + IPTunnelTimer(double t, NetSessions::IPPair p) + : Timer(t + BifConst::Tunnel::ip_tunnel_timeout, + TIMER_IP_TUNNEL_INACTIVITY), tunnel_idx(p) {} + + ~IPTunnelTimer() {} + + void Dispatch(double t, int is_expire); + +protected: + NetSessions::IPPair tunnel_idx; +}; + // Manager for the currently active sessions. extern NetSessions* sessions; diff --git a/src/StateAccess.cc b/src/StateAccess.cc index 136a006c52..2d0a8dfc5a 100644 --- a/src/StateAccess.cc +++ b/src/StateAccess.cc @@ -231,7 +231,7 @@ bool StateAccess::CheckOldSet(const char* op, ID* id, Val* index, bool StateAccess::MergeTables(TableVal* dst, Val* src) { - if ( ! src->Type()->Tag() == TYPE_TABLE ) + if ( src->Type()->Tag() != TYPE_TABLE ) { reporter->Error("type mismatch while merging tables"); return false; @@ -678,7 +678,7 @@ bool StateAccess::DoUnserialize(UnserialInfo* info) target.id = new ID(name, SCOPE_GLOBAL, true); Ref(target.id); global_scope()->Insert(name, target.id); -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG heap_checker->IgnoreObject(target.id); #endif } diff --git a/src/Stats.cc b/src/Stats.cc index 55835613e9..8d48c47a25 100644 --- a/src/Stats.cc +++ b/src/Stats.cc @@ -6,17 +6,16 @@ #include "Stats.h" #include "Scope.h" #include "cq.h" -#include "ConnCompressor.h" #include "DNS_Mgr.h" #include "Trigger.h" - +#include "threading/Manager.h" int killed_by_inactivity = 0; -uint32 tot_ack_events = 0; -uint32 tot_ack_bytes = 0; -uint32 tot_gap_events = 0; -uint32 tot_gap_bytes = 0; +uint64 tot_ack_events = 0; +uint64 tot_ack_bytes = 0; +uint64 tot_gap_events = 0; +uint64 tot_gap_bytes = 0; class ProfileTimer : public Timer { @@ -129,19 +128,6 @@ void ProfileLogger::Log() expensive ? 
sessions->ConnectionMemoryUsageConnVals() / 1024 : 0 )); - const ConnCompressor::Sizes& cs = conn_compressor->Size(); - - file->Write(fmt("%.6f ConnCompressor: pending=%d pending_in_mem=%d full_conns=%d pending+real=%d mem=%dK avg=%.1f/%.1f\n", - network_time, - cs.pending_valid, - cs.pending_in_mem, - cs.connections, - cs.hash_table_size, - cs.memory / 1024, - cs.memory / double(cs.pending_valid), - cs.memory / double(cs.pending_in_mem) - )); - SessionStats s; sessions->GetStats(s); @@ -217,6 +203,25 @@ void ProfileLogger::Log() current_timers[i])); } + file->Write(fmt("%0.6f Threads: current=%d\n", network_time, thread_mgr->NumThreads())); + + const threading::Manager::msg_stats_list& thread_stats = thread_mgr->GetMsgThreadStats(); + for ( threading::Manager::msg_stats_list::const_iterator i = thread_stats.begin(); + i != thread_stats.end(); ++i ) + { + threading::MsgThread::Stats s = i->second; + file->Write(fmt("%0.6f %-25s in=%" PRIu64 " out=%" PRIu64 " pending=%" PRIu64 "/%" PRIu64 + " (#queue r/w: in=%" PRIu64 "/%" PRIu64 " out=%" PRIu64 "/%" PRIu64 ")" + "\n", + network_time, + i->first.c_str(), + s.sent_in, s.sent_out, + s.pending_in, s.pending_out, + s.queue_in_stats.num_reads, s.queue_in_stats.num_writes, + s.queue_out_stats.num_reads, s.queue_out_stats.num_writes + )); + } + // Script-level state. unsigned int size, mem = 0; PDict(ID)* globals = global_scope()->Vars(); diff --git a/src/Stats.h b/src/Stats.h index eeebfe2213..a11d66828a 100644 --- a/src/Stats.h +++ b/src/Stats.h @@ -116,10 +116,10 @@ extern SampleLogger* sample_logger; extern int killed_by_inactivity; // Content gap statistics. -extern uint32 tot_ack_events; -extern uint32 tot_ack_bytes; -extern uint32 tot_gap_events; -extern uint32 tot_gap_bytes; +extern uint64 tot_ack_events; +extern uint64 tot_ack_bytes; +extern uint64 tot_gap_events; +extern uint64 tot_gap_bytes; // A TCPStateStats object tracks the distribution of TCP states for diff --git a/src/Stmt.cc b/src/Stmt.cc index 6a83940b3b..7d754d8e72 100644 --- a/src/Stmt.cc +++ b/src/Stmt.cc @@ -258,6 +258,8 @@ static BroFile* print_stdout = 0; Val* PrintStmt::DoExec(val_list* vals, stmt_flow_type& /* flow */) const { + RegisterAccess(); + if ( ! 
print_stdout ) print_stdout = new BroFile(stdout); @@ -941,7 +943,10 @@ ForStmt::ForStmt(id_list* arg_loop_vars, Expr* loop_expr) { const type_list* indices = e->Type()->AsTableType()->IndexTypes(); if ( indices->length() != loop_vars->length() ) + { e->Error("wrong index size"); + return; + } for ( int i = 0; i < indices->length(); i++ ) { diff --git a/src/Stmt.h b/src/Stmt.h index 8e3a4b4118..7c3b42609b 100644 --- a/src/Stmt.h +++ b/src/Stmt.h @@ -52,6 +52,7 @@ public: void RegisterAccess() const { last_access = network_time; access_count++; } void AccessStats(ODesc* d) const; + uint32 GetAccessCount() const { return access_count; } virtual void Describe(ODesc* d) const; diff --git a/src/TCP.cc b/src/TCP.cc index 0fae07a24d..555adf1b57 100644 --- a/src/TCP.cc +++ b/src/TCP.cc @@ -46,6 +46,7 @@ TCP_Analyzer::TCP_Analyzer(Connection* conn) finished = 0; reassembling = 0; first_packet_seen = 0; + is_partial = 0; orig = new TCP_Endpoint(this, 1); resp = new TCP_Endpoint(this, 0); @@ -276,7 +277,7 @@ void TCP_Analyzer::ProcessSYN(const IP_Hdr* ip, const struct tcphdr* tp, uint32 tcp_hdr_len, int& seq_len, TCP_Endpoint* endpoint, TCP_Endpoint* peer, uint32 base_seq, uint32 ack_seq, - const uint32* orig_addr, + const IPAddr& orig_addr, int is_orig, TCP_Flags flags) { int len = seq_len; @@ -346,7 +347,7 @@ void TCP_Analyzer::ProcessSYN(const IP_Hdr* ip, const struct tcphdr* tp, // is_orig will be removed once we can do SYN-ACK fingerprinting. if ( OS_version_found && is_orig ) { - Val src_addr_val(orig_addr, TYPE_ADDR); + AddrVal src_addr_val(orig_addr); if ( generate_OS_version_event->Size() == 0 || generate_OS_version_event->Lookup(&src_addr_val) ) { @@ -414,7 +415,7 @@ int TCP_Analyzer::ProcessFlags(double t, uint32 tcp_hdr_len, int len, int& seq_len, TCP_Endpoint* endpoint, TCP_Endpoint* peer, uint32 base_seq, uint32 ack_seq, - const uint32* orig_addr, + const IPAddr& orig_addr, int is_orig, TCP_Flags flags) { if ( flags.SYN() ) @@ -989,8 +990,7 @@ void TCP_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig, if ( ! orig->did_close || ! resp->did_close ) Conn()->SetLastTime(t); - const uint32* orig_addr = Conn()->OrigAddr(); - const uint32* resp_addr = Conn()->RespAddr(); + const IPAddr orig_addr = Conn()->OrigAddr(); uint32 tcp_hdr_len = data - (const u_char*) tp; @@ -1204,7 +1204,7 @@ RecordVal* TCP_Analyzer::BuildOSVal(int is_orig, const IP_Hdr* ip, if ( ip->HdrLen() > 20 ) quirks |= QUIRK_IPOPT; - if ( ip->IP_ID() == 0 ) + if ( ip->ID() == 0 ) quirks |= QUIRK_ZEROID; if ( tcp->th_seq == 0 ) @@ -1331,7 +1331,7 @@ RecordVal* TCP_Analyzer::BuildOSVal(int is_orig, const IP_Hdr* ip, tstamp, quirks, uint8(tcp->th_flags & (TH_ECE|TH_CWR))); - if ( sessions->CompareWithPreviousOSMatch(ip->SrcAddr4(), id) ) + if ( sessions->CompareWithPreviousOSMatch(ip->SrcAddr(), id) ) { RecordVal* os = new RecordVal(OS_version); @@ -1943,11 +1943,11 @@ int TCPStats_Endpoint::DataSent(double /* t */, int seq, int len, int caplen, { if ( ++num_pkts == 1 ) { // First packet. - last_id = ntohs(ip->ID4()); + last_id = ip->ID(); return 0; } - int id = ntohs(ip->ID4()); + int id = ip->ID(); if ( id == last_id ) { diff --git a/src/TCP.h b/src/TCP.h index 65f437856a..c84202fcf6 100644 --- a/src/TCP.h +++ b/src/TCP.h @@ -6,6 +6,7 @@ #include "Analyzer.h" #include "TCP.h" #include "PacketDumper.h" +#include "IPAddr.h" // We define two classes here: // - TCP_Analyzer is the analyzer for the TCP protocol itself. 
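The checksum updates in TCP_Endpoint.cc just below and in UDP.cc further down both rely on an ones_complement_checksum() overload that folds an IPAddr of either family into a running sum. A minimal sketch of seeding a TCP pseudo-header checksum with it; ip and tcp_len are placeholders, and the closing steps mirror UDP_Analyzer::ValidateChecksum():

    // Pseudo-header seed that works for IPv4 and IPv6 alike.  For IPv6 the
    // protocol and length fields are nominally 32 bits, but their upper
    // bits are zero, so folding 16-bit values yields the same checksum.
    uint32 sum = 0;
    sum = ones_complement_checksum(ip->SrcAddr(), sum);
    sum = ones_complement_checksum(ip->DstAddr(), sum);
    sum += htons(IPPROTO_TCP);
    sum += htons((unsigned short) tcp_len);
    // ...then fold in the TCP header/payload with the buffer overload and
    // compare the result against 0xffff.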
@@ -128,7 +129,7 @@ protected: uint32 tcp_hdr_len, int& seq_len, TCP_Endpoint* endpoint, TCP_Endpoint* peer, uint32 base_seq, uint32 ack_seq, - const uint32* orig_addr, + const IPAddr& orig_addr, int is_orig, TCP_Flags flags); void ProcessFIN(double t, TCP_Endpoint* endpoint, int& seq_len, @@ -144,7 +145,7 @@ protected: uint32 tcp_hdr_len, int len, int& seq_len, TCP_Endpoint* endpoint, TCP_Endpoint* peer, uint32 base_seq, uint32 ack_seq, - const uint32* orig_addr, + const IPAddr& orig_addr, int is_orig, TCP_Flags flags); void TransitionFromInactive(double t, TCP_Endpoint* endpoint, diff --git a/src/TCP_Endpoint.cc b/src/TCP_Endpoint.cc index 5a65a18d7c..69c08870d9 100644 --- a/src/TCP_Endpoint.cc +++ b/src/TCP_Endpoint.cc @@ -32,13 +32,8 @@ TCP_Endpoint::TCP_Endpoint(TCP_Analyzer* arg_analyzer, int arg_is_orig) dst_addr = is_orig ? tcp_analyzer->Conn()->OrigAddr() : tcp_analyzer->Conn()->RespAddr(); -#ifdef BROv6 - checksum_base = ones_complement_checksum((void*) src_addr, 16, 0); - checksum_base = ones_complement_checksum((void*) dst_addr, 16, checksum_base); -#else - checksum_base = ones_complement_checksum((void*) src_addr, 4, 0); - checksum_base = ones_complement_checksum((void*) dst_addr, 4, checksum_base); -#endif + checksum_base = ones_complement_checksum(src_addr, 0); + checksum_base = ones_complement_checksum(dst_addr, checksum_base); // Note, for IPv6, strictly speaking this field is 32 bits // rather than 16 bits. But because the upper bits are all zero, // we get the same checksum either way. The same applies to diff --git a/src/TCP_Endpoint.h b/src/TCP_Endpoint.h index 758a504ff5..52a757b256 100644 --- a/src/TCP_Endpoint.h +++ b/src/TCP_Endpoint.h @@ -3,6 +3,8 @@ #ifndef tcpendpoint_h #define tcpendpoint_h +#include "IPAddr.h" + typedef enum { TCP_ENDPOINT_INACTIVE, // no SYN (or other packets) seen for this side TCP_ENDPOINT_SYN_SENT, // SYN seen, but no ack @@ -128,8 +130,8 @@ public: uint32 checksum_base; double start_time, last_time; - const uint32* src_addr; // the other endpoint - const uint32* dst_addr; // this endpoint + IPAddr src_addr; // the other endpoint + IPAddr dst_addr; // this endpoint uint32 window; // current congestion window (*scaled*, not pre-scaling) int window_scale; // from the TCP option uint32 window_ack_seq; // at which ack_seq number did we record 'window' diff --git a/src/TCP_Reassembler.cc b/src/TCP_Reassembler.cc index ba31ab68d0..eb2709373c 100644 --- a/src/TCP_Reassembler.cc +++ b/src/TCP_Reassembler.cc @@ -20,16 +20,16 @@ const bool DEBUG_tcp_connection_close = false; const bool DEBUG_tcp_match_undelivered = false; static double last_gap_report = 0.0; -static uint32 last_ack_events = 0; -static uint32 last_ack_bytes = 0; -static uint32 last_gap_events = 0; -static uint32 last_gap_bytes = 0; +static uint64 last_ack_events = 0; +static uint64 last_ack_bytes = 0; +static uint64 last_gap_events = 0; +static uint64 last_gap_bytes = 0; TCP_Reassembler::TCP_Reassembler(Analyzer* arg_dst_analyzer, TCP_Analyzer* arg_tcp_analyzer, TCP_Reassembler::Type arg_type, bool arg_is_orig, TCP_Endpoint* arg_endp) -: Reassembler(1, arg_endp->dst_addr, REASSEM_TCP) + : Reassembler(1, REASSEM_TCP) { dst_analyzer = arg_dst_analyzer; tcp_analyzer = arg_tcp_analyzer; @@ -513,10 +513,10 @@ void TCP_Reassembler::AckReceived(int seq) if ( gap_report && gap_report_freq > 0.0 && dt >= gap_report_freq ) { - int devents = tot_ack_events - last_ack_events; - int dbytes = tot_ack_bytes - last_ack_bytes; - int dgaps = tot_gap_events - last_gap_events; - int dgap_bytes = 
tot_gap_bytes - last_gap_bytes; + uint64 devents = tot_ack_events - last_ack_events; + uint64 dbytes = tot_ack_bytes - last_ack_bytes; + uint64 dgaps = tot_gap_events - last_gap_events; + uint64 dgap_bytes = tot_gap_bytes - last_gap_bytes; RecordVal* r = new RecordVal(gap_info); r->Assign(0, new Val(devents, TYPE_COUNT)); diff --git a/src/Teredo.cc b/src/Teredo.cc new file mode 100644 index 0000000000..7794d1cb3b --- /dev/null +++ b/src/Teredo.cc @@ -0,0 +1,254 @@ + +#include "Teredo.h" +#include "IP.h" +#include "Reporter.h" + +void Teredo_Analyzer::Done() + { + Analyzer::Done(); + Event(udp_session_done); + } + +bool TeredoEncapsulation::DoParse(const u_char* data, int& len, + bool found_origin, bool found_auth) + { + if ( len < 2 ) + { + Weird("truncated_Teredo"); + return false; + } + + uint16 tag = ntohs((*((const uint16*)data))); + + if ( tag == 0 ) + { + // Origin Indication + if ( found_origin ) + // can't have multiple origin indications + return false; + + if ( len < 8 ) + { + Weird("truncated_Teredo_origin_indication"); + return false; + } + + origin_indication = data; + len -= 8; + data += 8; + return DoParse(data, len, true, found_auth); + } + + else if ( tag == 1 ) + { + // Authentication + if ( found_origin || found_auth ) + // can't have multiple authentication headers and can't come after + // an origin indication + return false; + + if ( len < 4 ) + { + Weird("truncated_Teredo_authentication"); + return false; + } + + uint8 id_len = data[2]; + uint8 au_len = data[3]; + uint16 tot_len = 4 + id_len + au_len + 8 + 1; + + if ( len < tot_len ) + { + Weird("truncated_Teredo_authentication"); + return false; + } + + auth = data; + len -= tot_len; + data += tot_len; + return DoParse(data, len, found_origin, true); + } + + else if ( ((tag & 0xf000)>>12) == 6 ) + { + // IPv6 + if ( len < 40 ) + { + Weird("truncated_IPv6_in_Teredo"); + return false; + } + + // There's at least a possible IPv6 header, we'll decide what to do + // later if the payload length field doesn't match the actual length + // of the packet. + inner_ip = data; + return true; + } + + return false; + } + +RecordVal* TeredoEncapsulation::BuildVal(const IP_Hdr* inner) const + { + static RecordType* teredo_hdr_type = 0; + static RecordType* teredo_auth_type = 0; + static RecordType* teredo_origin_type = 0; + + if ( ! 
teredo_hdr_type ) + { + teredo_hdr_type = internal_type("teredo_hdr")->AsRecordType(); + teredo_auth_type = internal_type("teredo_auth")->AsRecordType(); + teredo_origin_type = internal_type("teredo_origin")->AsRecordType(); + } + + RecordVal* teredo_hdr = new RecordVal(teredo_hdr_type); + + if ( auth ) + { + RecordVal* teredo_auth = new RecordVal(teredo_auth_type); + uint8 id_len = *((uint8*)(auth + 2)); + uint8 au_len = *((uint8*)(auth + 3)); + uint64 nonce = ntohll(*((uint64*)(auth + 4 + id_len + au_len))); + uint8 conf = *((uint8*)(auth + 4 + id_len + au_len + 8)); + teredo_auth->Assign(0, new StringVal( + new BroString(auth + 4, id_len, 1))); + teredo_auth->Assign(1, new StringVal( + new BroString(auth + 4 + id_len, au_len, 1))); + teredo_auth->Assign(2, new Val(nonce, TYPE_COUNT)); + teredo_auth->Assign(3, new Val(conf, TYPE_COUNT)); + teredo_hdr->Assign(0, teredo_auth); + } + + if ( origin_indication ) + { + RecordVal* teredo_origin = new RecordVal(teredo_origin_type); + uint16 port = ntohs(*((uint16*)(origin_indication + 2))) ^ 0xFFFF; + uint32 addr = ntohl(*((uint32*)(origin_indication + 4))) ^ 0xFFFFFFFF; + teredo_origin->Assign(0, new PortVal(port, TRANSPORT_UDP)); + teredo_origin->Assign(1, new AddrVal(htonl(addr))); + teredo_hdr->Assign(1, teredo_origin); + } + + teredo_hdr->Assign(2, inner->BuildPktHdrVal()); + return teredo_hdr; + } + +void Teredo_Analyzer::DeliverPacket(int len, const u_char* data, bool orig, + int seq, const IP_Hdr* ip, int caplen) + { + Analyzer::DeliverPacket(len, data, orig, seq, ip, caplen); + + if ( orig ) + valid_orig = false; + else + valid_resp = false; + + TeredoEncapsulation te(this); + + if ( ! te.Parse(data, len) ) + { + ProtocolViolation("Bad Teredo encapsulation", (const char*) data, len); + return; + } + + const EncapsulationStack* e = Conn()->GetEncapsulation(); + + if ( e && e->Depth() >= BifConst::Tunnel::max_depth ) + { + Weird("tunnel_depth", true); + return; + } + + IP_Hdr* inner = 0; + int rslt = sessions->ParseIPPacket(len, te.InnerIP(), IPPROTO_IPV6, inner); + + if ( rslt > 0 ) + { + if ( inner->NextProto() == IPPROTO_NONE && inner->PayloadLen() == 0 ) + // Teredo bubbles having data after IPv6 header isn't strictly a + // violation, but a little weird. + Weird("Teredo_bubble_with_payload", true); + else + { + delete inner; + ProtocolViolation("Teredo payload length", (const char*) data, len); + return; + } + } + + if ( rslt == 0 || rslt > 0 ) + { + if ( orig ) + valid_orig = true; + else + valid_resp = true; + + if ( BifConst::Tunnel::yielding_teredo_decapsulation && + ! ProtocolConfirmed() ) + { + // Only confirm the Teredo tunnel and start decapsulating packets + // when no other sibling analyzer thinks it's already parsing the + // right protocol. + bool sibling_has_confirmed = false; + if ( Parent() ) + { + LOOP_OVER_GIVEN_CONST_CHILDREN(i, Parent()->GetChildren()) + { + if ( (*i)->ProtocolConfirmed() ) + { + sibling_has_confirmed = true; + break; + } + } + } + + if ( ! sibling_has_confirmed ) + Confirm(); + else + { + delete inner; + return; + } + } + else + // Aggressively decapsulate anything with valid Teredo encapsulation. + Confirm(); + } + + else + { + delete inner; + ProtocolViolation("Truncated Teredo", (const char*) data, len); + return; + } + + Val* teredo_hdr = 0; + + if ( teredo_packet ) + { + teredo_hdr = te.BuildVal(inner); + Conn()->Event(teredo_packet, 0, teredo_hdr); + } + + if ( te.Authentication() && teredo_authentication ) + { + teredo_hdr = teredo_hdr ? 
teredo_hdr->Ref() : te.BuildVal(inner); + Conn()->Event(teredo_authentication, 0, teredo_hdr); + } + + if ( te.OriginIndication() && teredo_origin_indication ) + { + teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner); + Conn()->Event(teredo_origin_indication, 0, teredo_hdr); + } + + if ( inner->NextProto() == IPPROTO_NONE && teredo_bubble ) + { + teredo_hdr = teredo_hdr ? teredo_hdr->Ref() : te.BuildVal(inner); + Conn()->Event(teredo_bubble, 0, teredo_hdr); + } + + EncapsulatingConn ec(Conn(), BifEnum::Tunnel::TEREDO); + + sessions->DoNextInnerPacket(network_time, 0, inner, e, ec); + } diff --git a/src/Teredo.h b/src/Teredo.h new file mode 100644 index 0000000000..e720d3f37c --- /dev/null +++ b/src/Teredo.h @@ -0,0 +1,96 @@ +#ifndef Teredo_h +#define Teredo_h + +#include "Analyzer.h" +#include "NetVar.h" + +class Teredo_Analyzer : public Analyzer { +public: + Teredo_Analyzer(Connection* conn) : Analyzer(AnalyzerTag::Teredo, conn), + valid_orig(false), valid_resp(false) + {} + + virtual ~Teredo_Analyzer() + {} + + virtual void Done(); + + virtual void DeliverPacket(int len, const u_char* data, bool orig, + int seq, const IP_Hdr* ip, int caplen); + + static Analyzer* InstantiateAnalyzer(Connection* conn) + { return new Teredo_Analyzer(conn); } + + static bool Available() + { return BifConst::Tunnel::enable_teredo && + BifConst::Tunnel::max_depth > 0; } + + /** + * Emits a weird only if the analyzer has previously been able to + * decapsulate a Teredo packet in both directions or if *force* param is + * set, since otherwise the weirds could happen frequently enough to be less + * than helpful. The *force* param is meant for cases where just one side + * has a valid encapsulation and so the weird would be informative. + */ + void Weird(const char* name, bool force = false) const + { + if ( ProtocolConfirmed() || force ) + reporter->Weird(Conn(), name); + } + + /** + * If the delayed confirmation option is set, then a valid encapsulation + * seen from both end points is required before confirming. + */ + void Confirm() + { + if ( ! BifConst::Tunnel::delay_teredo_confirmation || + ( valid_orig && valid_resp ) ) + ProtocolConfirmation(); + } + +protected: + friend class AnalyzerTimer; + void ExpireTimer(double t); + + bool valid_orig; + bool valid_resp; +}; + +class TeredoEncapsulation { +public: + TeredoEncapsulation(const Teredo_Analyzer* ta) + : inner_ip(0), origin_indication(0), auth(0), analyzer(ta) + {} + + /** + * Returns whether input data parsed as a valid Teredo encapsulation type. + * If it was valid, the len argument is decremented appropriately. 
+ */ + bool Parse(const u_char* data, int& len) + { return DoParse(data, len, false, false); } + + const u_char* InnerIP() const + { return inner_ip; } + + const u_char* OriginIndication() const + { return origin_indication; } + + const u_char* Authentication() const + { return auth; } + + RecordVal* BuildVal(const IP_Hdr* inner) const; + +protected: + bool DoParse(const u_char* data, int& len, bool found_orig, bool found_au); + + void Weird(const char* name) const + { analyzer->Weird(name); } + + const u_char* inner_ip; + const u_char* origin_indication; + const u_char* auth; + const Teredo_Analyzer* analyzer; +}; + +#endif diff --git a/src/Timer.cc b/src/Timer.cc index 2e2fb09c6b..c2a8bb3421 100644 --- a/src/Timer.cc +++ b/src/Timer.cc @@ -20,6 +20,7 @@ const char* TimerNames[] = { "IncrementalSendTimer", "IncrementalWriteTimer", "InterconnTimer", + "IPTunnelInactivityTimer", "NetbiosExpireTimer", "NetworkTimer", "NTPExpireTimer", diff --git a/src/Timer.h b/src/Timer.h index bb6b8d56ae..310e72bdc9 100644 --- a/src/Timer.h +++ b/src/Timer.h @@ -26,6 +26,7 @@ enum TimerType { TIMER_INCREMENTAL_SEND, TIMER_INCREMENTAL_WRITE, TIMER_INTERCONN, + TIMER_IP_TUNNEL_INACTIVITY, TIMER_NB_EXPIRE, TIMER_NETWORK, TIMER_NTP_EXPIRE, diff --git a/src/Trigger.cc b/src/Trigger.cc index 26f80c73d9..164f11b885 100644 --- a/src/Trigger.cc +++ b/src/Trigger.cc @@ -410,7 +410,7 @@ Val* Trigger::Lookup(const CallExpr* expr) return (i != cache.end()) ? i->second : 0; } -const char* Trigger::Name() +const char* Trigger::Name() const { assert(location); return fmt("%s:%d-%d", location->filename, diff --git a/src/Trigger.h b/src/Trigger.h index ffec50d7ef..8e04fb9189 100644 --- a/src/Trigger.h +++ b/src/Trigger.h @@ -60,7 +60,7 @@ public: virtual void Access(Val* val, const StateAccess& sa) { QueueTrigger(this); } - virtual const char* Name(); + virtual const char* Name() const; static void QueueTrigger(Trigger* trigger); diff --git a/src/TunnelEncapsulation.cc b/src/TunnelEncapsulation.cc new file mode 100644 index 0000000000..edbabef81f --- /dev/null +++ b/src/TunnelEncapsulation.cc @@ -0,0 +1,55 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include "TunnelEncapsulation.h" +#include "util.h" +#include "Conn.h" + +EncapsulatingConn::EncapsulatingConn(Connection* c, BifEnum::Tunnel::Type t) + : src_addr(c->OrigAddr()), dst_addr(c->RespAddr()), + src_port(c->OrigPort()), dst_port(c->RespPort()), + proto(c->ConnTransport()), type(t), uid(c->GetUID()) + { + if ( ! uid ) + { + uid = calculate_unique_id(); + c->SetUID(uid); + } + } + +RecordVal* EncapsulatingConn::GetRecordVal() const + { + RecordVal *rv = new RecordVal(BifType::Record::Tunnel::EncapsulatingConn); + + RecordVal* id_val = new RecordVal(conn_id); + id_val->Assign(0, new AddrVal(src_addr)); + id_val->Assign(1, new PortVal(ntohs(src_port), proto)); + id_val->Assign(2, new AddrVal(dst_addr)); + id_val->Assign(3, new PortVal(ntohs(dst_port), proto)); + rv->Assign(0, id_val); + rv->Assign(1, new EnumVal(type, BifType::Enum::Tunnel::Type)); + + char tmp[20]; + rv->Assign(2, new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62))); + + return rv; + } + +bool operator==(const EncapsulationStack& e1, const EncapsulationStack& e2) + { + if ( ! e1.conns ) + return e2.conns; + + if ( ! 
e2.conns ) + return false; + + if ( e1.conns->size() != e2.conns->size() ) + return false; + + for ( size_t i = 0; i < e1.conns->size(); ++i ) + { + if ( (*e1.conns)[i] != (*e2.conns)[i] ) + return false; + } + + return true; + } diff --git a/src/TunnelEncapsulation.h b/src/TunnelEncapsulation.h new file mode 100644 index 0000000000..e8ca7a48b6 --- /dev/null +++ b/src/TunnelEncapsulation.h @@ -0,0 +1,208 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef TUNNELS_H +#define TUNNELS_H + +#include "config.h" +#include "NetVar.h" +#include "IPAddr.h" +#include "Val.h" +#include + +class Connection; + +/** + * Represents various types of tunnel "connections", that is, a pair of + * endpoints whose communication encapsulates inner IP packets. This could + * mean IP packets nested inside IP packets or IP packets nested inside a + * transport layer protocol. EncapsulatingConn's are assigned a UID, which can + * be shared with Connection's in the case the tunnel uses a transport-layer. + */ +class EncapsulatingConn { +public: + /** + * Default tunnel connection constructor. + */ + EncapsulatingConn() + : src_port(0), dst_port(0), proto(TRANSPORT_UNKNOWN), + type(BifEnum::Tunnel::NONE), uid(0) + {} + + /** + * Construct an IP tunnel "connection" with its own UID. + * The assignment of "source" and "destination" addresses here can be + * arbitrary, comparison between EncapsulatingConn objects will treat IP + * tunnels as equivalent as long as the same two endpoints are involved. + * + * @param s The tunnel source address, likely taken from an IP header. + * @param d The tunnel destination address, likely taken from an IP header. + */ + EncapsulatingConn(const IPAddr& s, const IPAddr& d) + : src_addr(s), dst_addr(d), src_port(0), dst_port(0), + proto(TRANSPORT_UNKNOWN), type(BifEnum::Tunnel::IP) + { + uid = calculate_unique_id(); + } + + /** + * Construct a tunnel connection using information from an already existing + * transport-layer-aware connection object. + * + * @param c The connection from which endpoint information can be extracted. + * If it already has a UID associated with it, that gets inherited, + * otherwise a new UID is created for this tunnel and \a c. + * @param t The type of tunneling that is occurring over the connection. + */ + EncapsulatingConn(Connection* c, BifEnum::Tunnel::Type t); + + /** + * Copy constructor. + */ + EncapsulatingConn(const EncapsulatingConn& other) + : src_addr(other.src_addr), dst_addr(other.dst_addr), + src_port(other.src_port), dst_port(other.dst_port), + proto(other.proto), type(other.type), uid(other.uid) + {} + + /** + * Destructor. + */ + ~EncapsulatingConn() + {} + + BifEnum::Tunnel::Type Type() const + { return type; } + + /** + * Returns record value of type "EncapsulatingConn" representing the tunnel. + */ + RecordVal* GetRecordVal() const; + + friend bool operator==(const EncapsulatingConn& ec1, + const EncapsulatingConn& ec2) + { + if ( ec1.type != ec2.type ) + return false; + + if ( ec1.type == BifEnum::Tunnel::IP ) + // Reversing endpoints is still same tunnel. 
+ return ec1.uid == ec2.uid && ec1.proto == ec2.proto && + ((ec1.src_addr == ec2.src_addr && ec1.dst_addr == ec2.dst_addr) || + (ec1.src_addr == ec2.dst_addr && ec1.dst_addr == ec2.src_addr)); + + return ec1.src_addr == ec2.src_addr && ec1.dst_addr == ec2.dst_addr && + ec1.src_port == ec2.src_port && ec1.dst_port == ec2.dst_port && + ec1.uid == ec2.uid && ec1.proto == ec2.proto; + } + + friend bool operator!=(const EncapsulatingConn& ec1, + const EncapsulatingConn& ec2) + { + return ! ( ec1 == ec2 ); + } + +protected: + IPAddr src_addr; + IPAddr dst_addr; + uint16 src_port; + uint16 dst_port; + TransportProto proto; + BifEnum::Tunnel::Type type; + uint64 uid; +}; + +/** + * Abstracts an arbitrary amount of nested tunneling. + */ +class EncapsulationStack { +public: + EncapsulationStack() : conns(0) + {} + + EncapsulationStack(const EncapsulationStack& other) + { + if ( other.conns ) + conns = new vector(*(other.conns)); + else + conns = 0; + } + + EncapsulationStack& operator=(const EncapsulationStack& other) + { + if ( this == &other ) + return *this; + + delete conns; + + if ( other.conns ) + conns = new vector(*(other.conns)); + else + conns = 0; + + return *this; + } + + ~EncapsulationStack() { delete conns; } + + /** + * Add a new inner-most tunnel to the EncapsulationStack. + * + * @param c The new inner-most tunnel to append to the tunnel chain. + */ + void Add(const EncapsulatingConn& c) + { + if ( ! conns ) + conns = new vector(); + + conns->push_back(c); + } + + /** + * Return how many nested tunnels are involved in a encapsulation, zero + * meaning no tunnels are present. + */ + size_t Depth() const + { + return conns ? conns->size() : 0; + } + + /** + * Return the tunnel type of the inner-most tunnel. + */ + BifEnum::Tunnel::Type LastType() const + { + return conns ? (*conns)[conns->size()-1].Type() : BifEnum::Tunnel::NONE; + } + + /** + * Get the value of type "EncapsulatingConnVector" represented by the + * entire encapsulation chain. + */ + VectorVal* GetVectorVal() const + { + VectorVal* vv = new VectorVal( + internal_type("EncapsulatingConnVector")->AsVectorType()); + + if ( conns ) + { + for ( size_t i = 0; i < conns->size(); ++i ) + vv->Assign(i, (*conns)[i].GetRecordVal(), 0); + } + + return vv; + } + + friend bool operator==(const EncapsulationStack& e1, + const EncapsulationStack& e2); + + friend bool operator!=(const EncapsulationStack& e1, + const EncapsulationStack& e2) + { + return ! ( e1 == e2 ); + } + +protected: + vector* conns; +}; + +#endif diff --git a/src/Type.cc b/src/Type.cc index cd40583aae..414c07d3d7 100644 --- a/src/Type.cc +++ b/src/Type.cc @@ -15,10 +15,9 @@ extern int generate_documentation; +// Note: This function must be thread-safe. const char* type_name(TypeTag t) { - static char errbuf[512]; - static const char* type_names[int(NUM_TYPES)] = { "void", "bool", "int", "count", "counter", @@ -37,10 +36,7 @@ const char* type_name(TypeTag t) }; if ( int(t) >= NUM_TYPES ) - { - snprintf(errbuf, sizeof(errbuf), "%d: not a type tag", int(t)); - return errbuf; - } + return "type_name(): not a type tag"; return type_names[int(t)]; } @@ -876,74 +872,12 @@ void CommentedTypeDecl::DescribeReST(ODesc* d) const } } -RecordField::RecordField(int arg_base, int arg_offset, int arg_total_offset) - { - base = arg_base; - offset = arg_offset; - total_offset = arg_total_offset; - } - RecordType::RecordType(type_decl_list* arg_types) : BroType(TYPE_RECORD) { types = arg_types; - base = 0; - fields = 0; num_fields = types ? 
types->length() : 0; } -RecordType::RecordType(TypeList* arg_base, type_decl_list* refinements) - : BroType(TYPE_RECORD) - { - if ( refinements ) - arg_base->Append(new RecordType(refinements)); - - Init(arg_base); - } - -void RecordType::Init(TypeList* arg_base) - { - assert(false); // Is this ever used? - - base = arg_base; - - if ( ! base ) - Internal("empty RecordType"); - - fields = new PDict(RecordField)(ORDERED); - types = 0; - - type_list* t = base->Types(); - - loop_over_list(*t, i) - { - BroType* ti = (*t)[i]; - - if ( ti->Tag() != TYPE_RECORD ) - (*t)[i]->Error("non-record in base type list"); - - RecordType* rti = ti->AsRecordType(); - int n = rti->NumFields(); - - for ( int j = 0; j < n; ++j ) - { - const TypeDecl* tdij = rti->FieldDecl(j); - - if ( fields->Lookup(tdij->id) ) - { - reporter->Error("duplicate field %s", tdij->id); - continue; - } - - RecordField* rf = new RecordField(i, j, fields->Length()); - - if ( fields->Insert(tdij->id, rf) ) - Internal("duplicate field when constructing record"); - } - } - - num_fields = fields->Length(); - } - RecordType::~RecordType() { if ( types ) @@ -953,9 +887,6 @@ RecordType::~RecordType() delete types; } - - delete fields; - Unref(base); } int RecordType::HasField(const char* field) const @@ -971,17 +902,7 @@ BroType* RecordType::FieldType(const char* field) const BroType* RecordType::FieldType(int field) const { - if ( types ) - return (*types)[field]->type; - else - { - RecordField* rf = fields->NthEntry(field); - if ( ! rf ) - Internal("missing field in RecordType::FieldType"); - BroType* bt = (*base->Types())[rf->base]; - RecordType* rbt = bt->AsRecordType(); - return rbt->FieldType(rf->offset); - } + return (*types)[field]->type; } Val* RecordType::FieldDefault(int field) const @@ -989,7 +910,7 @@ Val* RecordType::FieldDefault(int field) const const TypeDecl* td = FieldDecl(field); if ( ! td->attrs ) - return false; + return 0; const Attr* def_attr = td->attrs->FindAttr(ATTR_DEFAULT); @@ -998,26 +919,14 @@ Val* RecordType::FieldDefault(int field) const int RecordType::FieldOffset(const char* field) const { - if ( types ) + loop_over_list(*types, i) { - loop_over_list(*types, i) - { - TypeDecl* td = (*types)[i]; - if ( streq(td->id, field) ) - return i; - } - - return -1; + TypeDecl* td = (*types)[i]; + if ( streq(td->id, field) ) + return i; } - else - { - RecordField* rf = fields->Lookup(field); - if ( ! rf ) - return -1; - else - return rf->total_offset; - } + return -1; } const char* RecordType::FieldName(int field) const @@ -1027,33 +936,12 @@ const char* RecordType::FieldName(int field) const const TypeDecl* RecordType::FieldDecl(int field) const { - if ( types ) - return (*types)[field]; - else - { - RecordField* rf = fields->NthEntry(field); - if ( ! rf ) - reporter->InternalError("missing field in RecordType::FieldDecl"); - - BroType* bt = (*base->Types())[rf->base]; - RecordType* rbt = bt->AsRecordType(); - return rbt->FieldDecl(rf->offset); - } + return (*types)[field]; } TypeDecl* RecordType::FieldDecl(int field) { - if ( types ) - return (*types)[field]; - else - { - RecordField* rf = fields->NthEntry(field); - if ( ! 
rf ) - Internal("missing field in RecordType::FieldDecl"); - BroType* bt = (*base->Types())[rf->base]; - RecordType* rbt = bt->AsRecordType(); - return rbt->FieldDecl(rf->offset); - } + return (*types)[field]; } void RecordType::Describe(ODesc* d) const @@ -1151,11 +1039,6 @@ void RecordType::DescribeFields(ODesc* d) const d->SP(); } } - else - { - d->AddCount(1); - base->Describe(d); - } } } @@ -1208,9 +1091,6 @@ bool RecordType::DoSerialize(SerialInfo* info) const else if ( ! SERIALIZE(false) ) return false; - SERIALIZE_OPTIONAL(base); - - // We don't serialize the fields as we can reconstruct them. return true; } @@ -1245,13 +1125,6 @@ bool RecordType::DoUnserialize(UnserialInfo* info) else types = 0; - BroType* type; - UNSERIALIZE_OPTIONAL(type, BroType::Unserialize(info, TYPE_LIST)); - base = (TypeList*) type; - - if ( base ) - Init(base); - return true; } @@ -1594,21 +1467,16 @@ bool VectorType::DoUnserialize(UnserialInfo* info) return yield_type != 0; } -BroType* refine_type(TypeList* base, type_decl_list* refinements) +void VectorType::Describe(ODesc* d) const { - type_list* t = base->Types(); + if ( d->IsReadable() ) + d->AddSP("vector of"); + else + d->Add(int(Tag())); - if ( t->length() == 1 && ! refinements ) - { // Just a direct reference to a single type. - BroType* rt = (*t)[0]->Ref(); - Unref(base); - return rt; - } - - return new RecordType(base, refinements); + yield_type->Describe(d); } - BroType* base_type(TypeTag tag) { static BroType* base_types[NUM_TYPES]; @@ -1996,13 +1864,8 @@ BroType* merge_types(const BroType* t1, const BroType* t2) if ( t1->IsSet() ) return new SetType(tl3, 0); - else if ( tg1 == TYPE_TABLE ) - return new TableType(tl3, y3); else - { - reporter->InternalError("bad tag in merge_types"); - return 0; - } + return new TableType(tl3, y3); } case TYPE_FUNC: diff --git a/src/Type.h b/src/Type.h index 5ebc5761a3..efe15e6188 100644 --- a/src/Type.h +++ b/src/Type.h @@ -426,20 +426,9 @@ public: std::list* comments; }; -class RecordField { -public: - RecordField(int arg_base, int arg_offset, int arg_total_offset); - - int base; // which base element it belongs to - int offset; // where it is in that base - int total_offset; // where it is in the aggregate record -}; -declare(PDict,RecordField); - class RecordType : public BroType { public: RecordType(type_decl_list* types); - RecordType(TypeList* base, type_decl_list* refinements); ~RecordType(); @@ -473,15 +462,11 @@ public: void DescribeFieldsReST(ODesc* d, bool func_args) const; protected: - RecordType() { fields = 0; base = 0; types = 0; } - - void Init(TypeList* arg_base); + RecordType() { types = 0; } DECLARE_SERIAL(RecordType) int num_fields; - PDict(RecordField)* fields; - TypeList* base; type_decl_list* types; }; @@ -579,6 +564,8 @@ public: // gets using an empty "vector()" constructor. bool IsUnspecifiedVector() const; + void Describe(ODesc* d) const; + protected: VectorType() { yield_type = 0; } @@ -587,10 +574,6 @@ protected: BroType* yield_type; }; - -// Returns the given type refinement, or error_type() if it's illegal. -extern BroType* refine_type(TypeList* base, type_decl_list* refinements); - // Returns the BRO basic (non-parameterized) type with the given type. 
extern BroType* base_type(TypeTag tag); diff --git a/src/UDP.cc b/src/UDP.cc index 35e9f58388..d85cb39edd 100644 --- a/src/UDP.cc +++ b/src/UDP.cc @@ -57,15 +57,15 @@ void UDP_Analyzer::DeliverPacket(int len, const u_char* data, bool is_orig, { bool bad = false; - if ( ip->IP4_Hdr() && chksum && - udp_checksum(ip->IP4_Hdr(), up, len) != 0xffff ) - bad = true; + if ( ip->IP4_Hdr() ) + { + if ( chksum && ! ValidateChecksum(ip, up, len) ) + bad = true; + } -#ifdef BROv6 - if ( ip->IP6_Hdr() && /* checksum is not optional for IPv6 */ - udp6_checksum(ip->IP6_Hdr(), up, len) != 0xffff ) + /* checksum is not optional for IPv6 */ + else if ( ! ValidateChecksum(ip, up, len) ) bad = true; -#endif if ( bad ) { @@ -206,4 +206,24 @@ unsigned int UDP_Analyzer::MemoryAllocation() const return Analyzer::MemoryAllocation() + padded_sizeof(*this) - 24; } +bool UDP_Analyzer::ValidateChecksum(const IP_Hdr* ip, const udphdr* up, int len) + { + uint32 sum; + if ( len % 2 == 1 ) + // Add in pad byte. + sum = htons(((const u_char*) up)[len - 1] << 8); + else + sum = 0; + + sum = ones_complement_checksum(ip->SrcAddr(), sum); + sum = ones_complement_checksum(ip->DstAddr(), sum); + // Note, for IPv6, strictly speaking the protocol and length fields are + // 32 bits rather than 16 bits. But because the upper bits are all zero, + // we get the same checksum either way. + sum += htons(IPPROTO_UDP); + sum += htons((unsigned short) len); + sum = ones_complement_checksum((void*) up, len, sum); + + return sum == 0xffff; + } diff --git a/src/UDP.h b/src/UDP.h index 5124adf4cd..b93d4da97f 100644 --- a/src/UDP.h +++ b/src/UDP.h @@ -4,6 +4,7 @@ #define udp_h #include "Analyzer.h" +#include typedef enum { UDP_INACTIVE, // no packet seen @@ -31,6 +32,10 @@ protected: virtual bool IsReuse(double t, const u_char* pkt); virtual unsigned int MemoryAllocation() const; + // Returns true if the checksum is valid, false if not + static bool ValidateChecksum(const IP_Hdr* ip, const struct udphdr* up, + int len); + bro_int_t request_len, reply_len; private: diff --git a/src/Val.cc b/src/Val.cc index 1ec54d36cd..79fa8a0c69 100644 --- a/src/Val.cc +++ b/src/Val.cc @@ -25,7 +25,7 @@ #include "PrefixTable.h" #include "Conn.h" #include "Reporter.h" - +#include "IPAddr.h" Val::Val(Func* f) { @@ -64,7 +64,7 @@ Val::~Val() Unref(type); #ifdef DEBUG - Unref(bound_id); + delete [] bound_id; #endif } @@ -205,29 +205,10 @@ bool Val::DoSerialize(SerialInfo* info) const val.string_val->Len()); case TYPE_INTERNAL_ADDR: - return SERIALIZE(NUM_ADDR_WORDS) -#ifdef BROv6 - && SERIALIZE(uint32(ntohl(val.addr_val[0]))) - && SERIALIZE(uint32(ntohl(val.addr_val[1]))) - && SERIALIZE(uint32(ntohl(val.addr_val[2]))) - && SERIALIZE(uint32(ntohl(val.addr_val[3]))); -#else - && SERIALIZE(uint32(ntohl(val.addr_val))); -#endif + return SERIALIZE(*val.addr_val); case TYPE_INTERNAL_SUBNET: - return info->s->WriteOpenTag("subnet") - && SERIALIZE(NUM_ADDR_WORDS) -#ifdef BROv6 - && SERIALIZE(uint32(ntohl(val.subnet_val.net[0]))) - && SERIALIZE(uint32(ntohl(val.subnet_val.net[1]))) - && SERIALIZE(uint32(ntohl(val.subnet_val.net[2]))) - && SERIALIZE(uint32(ntohl(val.subnet_val.net[3]))) -#else - && SERIALIZE(uint32(ntohl(val.subnet_val.net))) -#endif - && SERIALIZE(val.subnet_val.width) - && info->s->WriteCloseTag("subnet"); + return SERIALIZE(*val.subnet_val); case TYPE_INTERNAL_OTHER: // Derived classes are responsible for this. @@ -294,94 +275,15 @@ bool Val::DoUnserialize(UnserialInfo* info) case TYPE_INTERNAL_ADDR: { - int num_words; - if ( ! 
UNSERIALIZE(&num_words) ) - return false; - - if ( num_words != 1 && num_words != 4 ) - { - info->s->Error("bad address type"); - return false; - } - - uint32 a[4]; // big enough to hold either - - for ( int i = 0; i < num_words; ++i ) - { - if ( ! UNSERIALIZE(&a[i]) ) - return false; - - a[i] = htonl(a[i]); - } - -#ifndef BROv6 - if ( num_words == 4 ) - { - if ( a[0] || a[1] || a[2] ) - info->s->Warning("received IPv6 address, ignoring"); - ((AddrVal*) this)->Init(a[3]); - } - else - ((AddrVal*) this)->Init(a[0]); -#else - if ( num_words == 1 ) - ((AddrVal*) this)->Init(a[0]); - else - ((AddrVal*) this)->Init(a); -#endif + val.addr_val = new IPAddr(); + return UNSERIALIZE(val.addr_val); } - return true; case TYPE_INTERNAL_SUBNET: { - int num_words; - if ( ! UNSERIALIZE(&num_words) ) - return false; - - if ( num_words != 1 && num_words != 4 ) - { - info->s->Error("bad subnet type"); - return false; - } - - uint32 a[4]; // big enough to hold either - - for ( int i = 0; i < num_words; ++i ) - { - if ( ! UNSERIALIZE(&a[i]) ) - return false; - - a[i] = htonl(a[i]); - } - - int width; - if ( ! UNSERIALIZE(&width) ) - return false; - -#ifdef BROv6 - if ( num_words == 1 ) - { - a[3] = a[0]; - a[0] = a[1] = a[2] = 0; - } - - ((SubNetVal*) this)->Init(a, width); - -#else - if ( num_words == 4 ) - { - if ( a[0] || a[1] || a[2] ) - info->s->Warning("received IPv6 subnet, ignoring"); - a[0] = a[3]; - - if ( width > 32 ) - width -= 96; - } - - ((SubNetVal*) this)->Init(a[0], width); -#endif + val.subnet_val = new IPPrefix(); + return UNSERIALIZE(val.subnet_val); } - return true; case TYPE_INTERNAL_OTHER: // Derived classes are responsible for this. @@ -590,12 +492,10 @@ void Val::ValDescribe(ODesc* d) const case TYPE_INTERNAL_UNSIGNED: d->Add(val.uint_val); break; case TYPE_INTERNAL_DOUBLE: d->Add(val.double_val); break; case TYPE_INTERNAL_STRING: d->AddBytes(val.string_val); break; - case TYPE_INTERNAL_ADDR: d->Add(dotted_addr(val.addr_val)); break; + case TYPE_INTERNAL_ADDR: d->Add(val.addr_val->AsString().c_str()); break; case TYPE_INTERNAL_SUBNET: - d->Add(dotted_addr(val.subnet_val.net)); - d->Add("/"); - d->Add(val.subnet_val.width); + d->Add(val.subnet_val->AsString().c_str()); break; case TYPE_INTERNAL_ERROR: d->AddCS("error"); break; @@ -706,7 +606,8 @@ ID* MutableVal::Bind() const ip = htonl(0x7f000001); // 127.0.0.1 safe_snprintf(name, MAX_NAME_SIZE, "#%s#%d#", - dotted_addr(ip), getpid()); + IPAddr(IPv4, &ip, IPAddr::Network)->AsString().c_str(), + getpid()); #else safe_snprintf(name, MAX_NAME_SIZE, "#%s#%d#", host, getpid()); #endif @@ -957,92 +858,41 @@ bool PortVal::DoUnserialize(UnserialInfo* info) AddrVal::AddrVal(const char* text) : Val(TYPE_ADDR) { - const char* colon = strchr(text, ':'); - - if ( colon ) - { -#ifdef BROv6 - Init(dotted_to_addr6(text)); -#else - reporter->Error("bro wasn't compiled with IPv6 support"); - Init(uint32(0)); -#endif - } - - else - Init(dotted_to_addr(text)); + val.addr_val = new IPAddr(text); } AddrVal::AddrVal(uint32 addr) : Val(TYPE_ADDR) { // ### perhaps do gethostbyaddr here? 
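For reference, the ValidateChecksum() helper added to src/UDP.cc above folds the IPv4/IPv6 pseudo-header into the usual RFC 1071 ones-complement sum. A minimal standalone sketch of the same computation follows; ones_complement_add() and udp_checksum_valid() are hypothetical names, and plain arrays stand in for Bro's IP_Hdr/IPAddr types.

#include <netinet/in.h>   // htons(), IPPROTO_UDP
#include <stdint.h>
#include <string.h>

// Fold the 16-bit words of 'data' into a ones-complement accumulator.
static uint32_t ones_complement_add(const uint8_t* data, size_t len, uint32_t sum)
	{
	while ( len > 1 )
		{
		uint16_t word;
		memcpy(&word, data, 2);    // words are summed as they sit in memory
		sum += word;
		data += 2;
		len -= 2;
		}

	if ( len == 1 )
		// An odd trailing byte acts as the high byte of a zero-padded word.
		sum += htons(uint16_t(data[0] << 8));

	while ( sum >> 16 )            // end-around carry
		sum = (sum & 0xffff) + (sum >> 16);

	return sum;
	}

// Validate a UDP checksum given the pseudo-header pieces. addr_words is 1
// for IPv4 and 4 for IPv6; src/dst are the raw network-order address words.
static bool udp_checksum_valid(const uint32_t* src, const uint32_t* dst,
                               int addr_words, const uint8_t* udp_segment,
                               uint16_t udp_len)
	{
	uint32_t sum = 0;
	sum = ones_complement_add((const uint8_t*) src, addr_words * 4, sum);
	sum = ones_complement_add((const uint8_t*) dst, addr_words * 4, sum);
	sum += htons(IPPROTO_UDP);     // zero byte + protocol of the pseudo-header
	sum += htons(udp_len);
	sum = ones_complement_add(udp_segment, udp_len, sum);
	return sum == 0xffff;          // the stored checksum is part of the segment
	}

Because 0xffff is its own byte swap, the final comparison holds on both little- and big-endian hosts, provided the odd trailing byte is folded consistently with how the words are read (which the htons() above takes care of).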
- Init(addr); + val.addr_val = new IPAddr(IPv4, &addr, IPAddr::Network); } -AddrVal::AddrVal(const uint32* addr) : Val(TYPE_ADDR) +AddrVal::AddrVal(const uint32 addr[4]) : Val(TYPE_ADDR) { - Init(addr); + val.addr_val = new IPAddr(IPv6, addr, IPAddr::Network); + } + +AddrVal::AddrVal(const IPAddr& addr) : Val(TYPE_ADDR) + { + val.addr_val = new IPAddr(addr); } AddrVal::~AddrVal() { -#ifdef BROv6 - delete [] val.addr_val; -#endif - } - -Val* AddrVal::SizeVal() const - { - uint32 addr; - -#ifdef BROv6 - if ( ! is_v4_addr(val.addr_val) ) - { - Error("|addr| for IPv6 addresses not supported"); - return new Val(0, TYPE_COUNT); - } - - addr = to_v4_addr(val.addr_val); -#else - addr = val.addr_val; -#endif - - addr = ntohl(addr); - - return new Val(addr, TYPE_COUNT); - } - -void AddrVal::Init(uint32 addr) - { -#ifdef BROv6 - val.addr_val = new uint32[4]; - val.addr_val[0] = val.addr_val[1] = val.addr_val[2] = 0; - val.addr_val[3] = addr; -#else - val.addr_val = addr; -#endif - } - -void AddrVal::Init(const uint32* addr) - { -#ifdef BROv6 - val.addr_val = new uint32[4]; - val.addr_val[0] = addr[0]; - val.addr_val[1] = addr[1]; - val.addr_val[2] = addr[2]; - val.addr_val[3] = addr[3]; -#else - val.addr_val = addr[0]; -#endif + delete val.addr_val; } unsigned int AddrVal::MemoryAllocation() const { -#ifdef BROv6 - return padded_sizeof(*this) + pad_size(4 * sizeof(uint32)); -#else - return padded_sizeof(*this); -#endif + return padded_sizeof(*this) + val.addr_val->MemoryAllocation(); + } + +Val* AddrVal::SizeVal() const + { + if ( val.addr_val->GetFamily() == IPv4 ) + return new Val(32, TYPE_COUNT); + else + return new Val(128, TYPE_COUNT); } IMPLEMENT_SERIAL(AddrVal, SER_ADDR_VAL); @@ -1059,209 +909,114 @@ bool AddrVal::DoUnserialize(UnserialInfo* info) return true; } -static uint32 parse_dotted(const char* text, int& dots) - { - int addr[4]; - uint32 a = 0; - dots = 0; - - if ( sscanf(text, "%d.%d.%d.%d", addr+0, addr+1, addr+2, addr+3) == 4 ) - { - a = (addr[0] << 24) | (addr[1] << 16) | - (addr[2] << 8) | addr[3]; - dots = 3; - } - - else if ( sscanf(text, "%d.%d.%d", addr+0, addr+1, addr+2) == 3 ) - { - a = (addr[0] << 24) | (addr[1] << 16) | (addr[2] << 8); - dots = 2; - } - - else if ( sscanf(text, "%d.%d", addr+0, addr+1) == 2 ) - { - a = (addr[0] << 24) | (addr[1] << 16); - dots = 1; - } - - else - reporter->InternalError("scanf failed in parse_dotted()"); - - for ( int i = 0; i <= dots; ++i ) - { - if ( addr[i] < 0 || addr[i] > 255 ) - { - reporter->Error("bad dotted address %s", text); - break; - } - } - - return a; - } - SubNetVal::SubNetVal(const char* text) : Val(TYPE_SUBNET) { - const char* sep = strchr(text, '/'); - if ( ! 
sep ) - Internal("separator missing in SubNetVal::SubNetVal"); + string s(text); + size_t slash_loc = s.find('/'); - Init(text, atoi(sep+1)); + if ( slash_loc == string::npos ) + { + reporter->Error("Bad string in SubNetVal ctor: %s", text); + val.subnet_val = new IPPrefix(); + } + else + { + val.subnet_val = new IPPrefix(s.substr(0, slash_loc), + atoi(s.substr(slash_loc + 1).c_str())); + } } SubNetVal::SubNetVal(const char* text, int width) : Val(TYPE_SUBNET) { - Init(text, width); + val.subnet_val = new IPPrefix(text, width); } SubNetVal::SubNetVal(uint32 addr, int width) : Val(TYPE_SUBNET) { - Init(addr, width); + IPAddr a(IPv4, &addr, IPAddr::Network); + val.subnet_val = new IPPrefix(a, width); } -#ifdef BROv6 SubNetVal::SubNetVal(const uint32* addr, int width) : Val(TYPE_SUBNET) { - Init(addr, width); - } -#endif - -void SubNetVal::Init(const char* text, int width) - { -#ifdef BROv6 - if ( width <= 0 || width > 128 ) -#else - if ( width <= 0 || width > 32 ) -#endif - Error("bad subnet width"); - - int dots; - uint32 a = parse_dotted(text, dots); - - Init(uint32(htonl(a)), width); + IPAddr a(IPv6, addr, IPAddr::Network); + val.subnet_val = new IPPrefix(a, width); } - -void SubNetVal::Init(uint32 addr, int width) +SubNetVal::SubNetVal(const IPAddr& addr, int width) : Val(TYPE_SUBNET) { -#ifdef BROv6 - Internal("SubNetVal::Init called on 4-byte address w/ BROv6"); -#else - val.subnet_val.net = mask_addr(addr, uint32(width)); - val.subnet_val.width = width; -#endif + val.subnet_val = new IPPrefix(addr, width); } -void SubNetVal::Init(const uint32* addr, int width) +SubNetVal::SubNetVal(const IPPrefix& prefix) : Val(TYPE_SUBNET) { -#ifdef BROv6 - const uint32* a = mask_addr(addr, uint32(width)); + val.subnet_val = new IPPrefix(prefix); + } - val.subnet_val.net[0] = a[0]; - val.subnet_val.net[1] = a[1]; - val.subnet_val.net[2] = a[2]; - val.subnet_val.net[3] = a[3]; +SubNetVal::~SubNetVal() + { + delete val.subnet_val; + } - if ( is_v4_addr(addr) && width <= 32 ) - val.subnet_val.width = width + 96; - else - val.subnet_val.width = width; -#else - Internal("SubNetVal::Init called on 16-byte address w/o BROv6"); -#endif +const IPAddr& SubNetVal::Prefix() const + { + return val.subnet_val->Prefix(); + } + +int SubNetVal::Width() const + { + return val.subnet_val->Length(); + } + +unsigned int SubNetVal::MemoryAllocation() const + { + return padded_sizeof(*this) + val.subnet_val->MemoryAllocation(); } Val* SubNetVal::SizeVal() const { - int retained; -#ifdef BROv6 - retained = 128 - Width(); -#else - retained = 32 - Width(); -#endif - + int retained = 128 - val.subnet_val->LengthIPv6(); return new Val(pow(2.0, double(retained)), TYPE_DOUBLE); } void SubNetVal::ValDescribe(ODesc* d) const { - d->Add(dotted_addr(val.subnet_val.net, d->Style() == ALTERNATIVE_STYLE)); - d->Add("/"); -#ifdef BROv6 - if ( is_v4_addr(val.subnet_val.net) ) - d->Add(val.subnet_val.width - 96); - else -#endif - d->Add(val.subnet_val.width); + d->Add(string(*val.subnet_val).c_str()); } -addr_type SubNetVal::Mask() const +IPAddr SubNetVal::Mask() const { - if ( val.subnet_val.width == 0 ) + if ( val.subnet_val->Length() == 0 ) { // We need to special-case a mask width of zero, since // the compiler doesn't guarantee that 1 << 32 yields 0. 
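As an aside on this step: the code that follows zeroes all four mask words for a width of zero and, in the general case further down, fills whole 32-bit words and then one partial word. A hypothetical standalone helper (prefix_to_mask() is not part of the patch) produces the same four-word mask while sidestepping the undefined 32-bit shift by only shifting when a partial word exists:

#include <stdint.h>

// Build a 128-bit network mask (four 32-bit words, host byte order) from a
// prefix length in [0, 128].
static void prefix_to_mask(unsigned int width, uint32_t m[4])
	{
	for ( int i = 0; i < 4; ++i )
		m[i] = 0;

	unsigned int full_words = width / 32;  // words that are all ones
	unsigned int rest = width % 32;        // bits set in the following word

	for ( unsigned int i = 0; i < full_words; ++i )
		m[i] = 0xffffffff;

	if ( rest )
		// Only shift when rest is non-zero, so the shift count stays < 32.
		m[full_words] = ~((1u << (32 - rest)) - 1);
	}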
-#ifdef BROv6 - uint32* m = new uint32[4]; - for ( int i = 0; i < 4; ++i ) + uint32 m[4]; + for ( unsigned int i = 0; i < 4; ++i ) m[i] = 0; - - return m; -#else - return 0; -#endif + IPAddr rval(IPv6, m, IPAddr::Host); + return rval; } -#ifdef BROv6 - uint32* m = new uint32[4]; + uint32 m[4]; uint32* mp = m; uint32 w; - for ( w = val.subnet_val.width; w >= 32; w -= 32 ) - *(mp++) = 0xffffffff; + for ( w = val.subnet_val->Length(); w >= 32; w -= 32 ) + *(mp++) = 0xffffffff; *mp = ~((1 << (32 - w)) - 1); while ( ++mp < m + 4 ) - *mp = 0; + *mp = 0; - return m; - -#else - return ~((1 << (32 - val.subnet_val.width)) - 1); -#endif + IPAddr rval(IPv6, m, IPAddr::Host); + return rval; } -bool SubNetVal::Contains(const uint32 addr) const +bool SubNetVal::Contains(const IPAddr& addr) const { -#ifdef BROv6 - Internal("SubNetVal::Contains called on 4-byte address w/ BROv6"); - return false; -#else - return ntohl(val.subnet_val.net) == (ntohl(addr) & Mask()); -#endif - } - -bool SubNetVal::Contains(const uint32* addr) const - { -#ifdef BROv6 - const uint32* net = val.subnet_val.net; - const uint32* a = addr; - uint32 m; - - for ( m = val.subnet_val.width; m > 32; m -= 32 ) - { - if ( *net != *a ) - return false; - - ++net; - ++a; - } - - uint32 mask = ~((1 << (32 - m)) - 1); - return ntohl(*net) == (ntohl(*a) & mask); -#else - return Contains(addr[3]); -#endif + IPAddr a(addr); + return val.subnet_val->Contains(a); } IMPLEMENT_SERIAL(SubNetVal, SER_SUBNET_VAL); @@ -1896,6 +1651,7 @@ int TableVal::RemoveFrom(Val* val) const while ( (v = tbl->NextEntry(k, c)) ) { Val* index = RecoverIndex(k); + Unref(index); Unref(t->Delete(k)); delete k; @@ -2386,10 +2142,13 @@ void TableVal::DoExpire(double t) (v = tbl->NextEntry(k, expire_cookie)); ++i ) { if ( v->ExpireAccessTime() == 0 ) + { // This happens when we insert val while network_time - // hasn't been initialized yet (e.g. in bro_init()). - // We correct the timestamp now. - v->SetExpireAccess(network_time); + // hasn't been initialized yet (e.g. in bro_init()), and + // also when bro_start_network_time hasn't been initialized + // (e.g. before first packet). The expire_access_time is + // correct, so we just need to wait. + } else if ( v->ExpireAccessTime() + expire_time < t ) { @@ -3476,20 +3235,10 @@ int same_atomic_val(const Val* v1, const Val* v2) return v1->InternalDouble() == v2->InternalDouble(); case TYPE_INTERNAL_STRING: return Bstr_eq(v1->AsString(), v2->AsString()); - case TYPE_INTERNAL_ADDR: - { - const addr_type& a1 = v1->AsAddr(); - const addr_type& a2 = v2->AsAddr(); -#ifdef BROv6 - return addr_eq(a1, a2); -#else - return addr_eq(&a1, &a2); -#endif - } - + return v1->AsAddr() == v2->AsAddr(); case TYPE_INTERNAL_SUBNET: - return subnet_eq(v1->AsSubNet(), v2->AsSubNet()); + return v1->AsSubNet() == v2->AsSubNet(); default: reporter->InternalError("same_atomic_val called for non-atomic value"); diff --git a/src/Val.h b/src/Val.h index d851be311b..c3ec5b04fb 100644 --- a/src/Val.h +++ b/src/Val.h @@ -18,6 +18,7 @@ #include "ID.h" #include "Scope.h" #include "StateAccess.h" +#include "IPAddr.h" class Val; class Func; @@ -53,11 +54,11 @@ typedef union { // Used for count, counter, port, subnet. bro_uint_t uint_val; - // Used for addr, net - addr_type addr_val; + // Used for addr + IPAddr* addr_val; // Used for subnet - subnet_type subnet_val; + IPPrefix* subnet_val; // Used for double, time, interval. 
double double_val; @@ -226,10 +227,10 @@ public: CONST_ACCESSOR(TYPE_PATTERN, RE_Matcher*, re_val, AsPattern) CONST_ACCESSOR(TYPE_VECTOR, vector*, vector_val, AsVector) - const subnet_type* AsSubNet() const + const IPPrefix& AsSubNet() const { CHECK_TAG(type->Tag(), TYPE_SUBNET, "Val::SubNet", type_name) - return &val.subnet_val; + return *val.subnet_val; } BroType* AsType() const @@ -238,12 +239,11 @@ public: return type; } - // ... in network byte order - const addr_type AsAddr() const + const IPAddr& AsAddr() const { if ( type->Tag() != TYPE_ADDR ) BadTag("Val::AsAddr", type_name(type->Tag())); - return val.addr_val; + return *val.addr_val; } #define ACCESSOR(tag, ctype, accessor, name) \ @@ -261,10 +261,17 @@ public: ACCESSOR(TYPE_PATTERN, RE_Matcher*, re_val, AsPattern) ACCESSOR(TYPE_VECTOR, vector*, vector_val, AsVector) - subnet_type* AsSubNet() + const IPPrefix& AsSubNet() { CHECK_TAG(type->Tag(), TYPE_SUBNET, "Val::SubNet", type_name) - return &val.subnet_val; + return *val.subnet_val; + } + + const IPAddr& AsAddr() + { + if ( type->Tag() != TYPE_ADDR ) + BadTag("Val::AsAddr", type_name(type->Tag())); + return *val.addr_val; } // Gives fast access to the bits of something that is one of @@ -282,6 +289,7 @@ public: CONVERTER(TYPE_PATTERN, PatternVal*, AsPatternVal) CONVERTER(TYPE_PORT, PortVal*, AsPortVal) CONVERTER(TYPE_SUBNET, SubNetVal*, AsSubNetVal) + CONVERTER(TYPE_ADDR, AddrVal*, AsAddrVal) CONVERTER(TYPE_TABLE, TableVal*, AsTableVal) CONVERTER(TYPE_RECORD, RecordVal*, AsRecordVal) CONVERTER(TYPE_LIST, ListVal*, AsListVal) @@ -299,6 +307,7 @@ public: CONST_CONVERTER(TYPE_PATTERN, PatternVal*, AsPatternVal) CONST_CONVERTER(TYPE_PORT, PortVal*, AsPortVal) CONST_CONVERTER(TYPE_SUBNET, SubNetVal*, AsSubNetVal) + CONST_CONVERTER(TYPE_ADDR, AddrVal*, AsAddrVal) CONST_CONVERTER(TYPE_TABLE, TableVal*, AsTableVal) CONST_CONVERTER(TYPE_RECORD, RecordVal*, AsRecordVal) CONST_CONVERTER(TYPE_LIST, ListVal*, AsListVal) @@ -338,13 +347,15 @@ public: #ifdef DEBUG // For debugging, we keep a reference to the global ID to which a // value has been bound *last*. - ID* GetID() const { return bound_id; } + ID* GetID() const + { + return bound_id ? global_scope()->Lookup(bound_id) : 0; + } + void SetID(ID* id) { - if ( bound_id ) - ::Unref(bound_id); - bound_id = id; - ::Ref(bound_id); + delete [] bound_id; + bound_id = id ? copy_string(id->Name()) : 0; } #endif @@ -392,8 +403,8 @@ protected: RecordVal* attribs; #ifdef DEBUG - // For debugging, we keep the ID to which a Val is bound. - ID* bound_id; + // For debugging, we keep the name of the ID to which a Val is bound. + const char* bound_id; #endif }; @@ -500,13 +511,9 @@ protected: #define NUM_PORT_SPACES 4 #define PORT_SPACE_MASK 0x30000 -#define TCP_PORT_MASK 0x10000 -#define UDP_PORT_MASK 0x20000 -#define ICMP_PORT_MASK 0x30000 - -typedef enum { - TRANSPORT_UNKNOWN, TRANSPORT_TCP, TRANSPORT_UDP, TRANSPORT_ICMP, -} TransportProto; +#define TCP_PORT_MASK 0x10000 +#define UDP_PORT_MASK 0x20000 +#define ICMP_PORT_MASK 0x30000 class PortVal : public Val { public: @@ -553,8 +560,9 @@ public: Val* SizeVal() const; // Constructor for address already in network order. - AddrVal(uint32 addr); - AddrVal(const uint32* addr); + AddrVal(uint32 addr); // IPv4. + AddrVal(const uint32 addr[4]); // IPv6. 
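A brief illustration of the ownership pattern these declarations adopt now that the value union carries IPAddr*/IPPrefix* members: constructors heap-allocate the object, the destructor deletes it, and accessors hand out a const reference. Address and ExampleVal below are simplified, hypothetical stand-ins (not Bro classes); the real Val additionally reference-counts itself.

#include <string>

class Address
	{
public:
	explicit Address(const char* s) : text(s) { }
	std::string text;
	};

class ExampleVal
	{
public:
	explicit ExampleVal(const char* s)	{ u.addr = new Address(s); }
	~ExampleVal()	{ delete u.addr; }

	const Address& AsAddr() const	{ return *u.addr; }

private:
	// Copying is left unsupported in this sketch; it would double-delete
	// u.addr.
	ExampleVal(const ExampleVal&);
	ExampleVal& operator=(const ExampleVal&);

	union
		{
		Address* addr;       // owned; analogous to IPAddr* addr_val
		double double_val;   // analogous to the other union members
		} u;
	};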
+ AddrVal(const IPAddr& addr); unsigned int MemoryAllocation() const; @@ -564,9 +572,6 @@ protected: AddrVal(TypeTag t) : Val(t) { } AddrVal(BroType* t) : Val(t) { } - void Init(uint32 addr); - void Init(const uint32* addr); - DECLARE_SERIAL(AddrVal); }; @@ -574,30 +579,26 @@ class SubNetVal : public Val { public: SubNetVal(const char* text); SubNetVal(const char* text, int width); - SubNetVal(uint32 addr, int width); // for address already massaged - SubNetVal(const uint32* addr, int width); // ditto + SubNetVal(uint32 addr, int width); // IPv4. + SubNetVal(const uint32 addr[4], int width); // IPv6. + SubNetVal(const IPAddr& addr, int width); + SubNetVal(const IPPrefix& prefix); + ~SubNetVal(); Val* SizeVal() const; - int Width() const { return val.subnet_val.width; } - addr_type Mask() const; // returns host byte order + const IPAddr& Prefix() const; + int Width() const; + IPAddr Mask() const; - bool Contains(const uint32 addr) const; - bool Contains(const uint32* addr) const; + bool Contains(const IPAddr& addr) const; - unsigned int MemoryAllocation() const - { - return Val::MemoryAllocation() + padded_sizeof(*this) - padded_sizeof(Val); - } + unsigned int MemoryAllocation() const; protected: friend class Val; SubNetVal() {} - void Init(const char* text, int width); - void Init(uint32 addr, int width); - void Init(const uint32 *addr, int width); - void ValDescribe(ODesc* d) const; DECLARE_SERIAL(SubNetVal); @@ -841,6 +842,9 @@ public: timer = 0; } + HashKey* ComputeHash(const Val* index) const + { return table_hash->ComputeHash(index, 1); } + protected: friend class Val; friend class StateAccess; @@ -851,8 +855,6 @@ protected: void CheckExpireAttr(attr_tag at); int ExpandCompoundAndInit(val_list* vl, int k, Val* new_val); int CheckAndAssign(Val* index, Val* new_val, Opcode op = OP_ASSIGN); - HashKey* ComputeHash(const Val* index) const - { return table_hash->ComputeHash(index, 1); } bool AddProperties(Properties arg_state); bool RemoveProperties(Properties arg_state); diff --git a/src/Var.cc b/src/Var.cc index 897a454670..d54d94a078 100644 --- a/src/Var.cc +++ b/src/Var.cc @@ -202,7 +202,8 @@ Stmt* add_local(ID* id, BroType* t, init_class c, Expr* init, Ref(id); Stmt* stmt = - new ExprStmt(new AssignExpr(new NameExpr(id), init, 0)); + new ExprStmt(new AssignExpr(new NameExpr(id), init, 0, 0, + id->Attrs() ? id->Attrs()->Attrs() : 0 )); stmt->SetLocationInfo(init->GetLocationInfo()); return stmt; diff --git a/src/X509.cc b/src/X509.cc deleted file mode 100644 index 55b6b78f04..0000000000 --- a/src/X509.cc +++ /dev/null @@ -1,263 +0,0 @@ -#include - -#include "X509.h" -#include "config.h" - -// ### NOTE: while d2i_X509 does not take a const u_char** pointer, -// here we assume d2i_X509 does not write to , so it is safe to -// convert data to a non-const pointer. Could some X509 guru verify -// this? - -X509* d2i_X509_(X509** px, const u_char** in, int len) - { -#ifdef OPENSSL_D2I_X509_USES_CONST_CHAR - return d2i_X509(px, in, len); -#else - return d2i_X509(px, (u_char**)in, len); -#endif - } - -X509_STORE* X509_Cert::ctx = 0; -X509_LOOKUP* X509_Cert::lookup = 0; -X509_STORE_CTX X509_Cert::csc; -bool X509_Cert::bInited = false; - -// TODO: Check if Key < 768 Bits => Weakness! -// FIXME: Merge verify and verifyChain. - -void X509_Cert::sslCertificateEvent(Contents_SSL* e, X509* pCert) - { - EventHandlerPtr event = ssl_certificate; - if ( ! 
event ) - return; - - char tmp[256]; - RecordVal* pX509Cert = new RecordVal(x509_type); - - X509_NAME_oneline(X509_get_issuer_name(pCert), tmp, sizeof tmp); - pX509Cert->Assign(0, new StringVal(tmp)); - X509_NAME_oneline(X509_get_subject_name(pCert), tmp, sizeof tmp); - pX509Cert->Assign(1, new StringVal(tmp)); - pX509Cert->Assign(2, new AddrVal(e->Conn()->OrigAddr())); - - val_list* vl = new val_list; - vl->append(e->BuildConnVal()); - vl->append(pX509Cert); - vl->append(new Val(e->IsOrig(), TYPE_BOOL)); - - e->Conn()->ConnectionEvent(event, e, vl); - } - -void X509_Cert::sslCertificateError(Contents_SSL* e, int error_numbe) - { - Val* err_str = new StringVal(X509_verify_cert_error_string(csc.error)); - val_list* vl = new val_list; - - vl->append(e->BuildConnVal()); - vl->append(new Val(csc.error, TYPE_INT)); - vl->append(err_str); - - e->Conn()->ConnectionEvent(ssl_X509_error, e, vl); - } - -int X509_Cert::init() - { -#if 0 - OpenSSL_add_all_algorithms(); -#endif - - ctx = X509_STORE_new(); - int flag = 0; - int ret = 0; - - if ( x509_trusted_cert_path && - x509_trusted_cert_path->AsString()->Len() > 0 ) - { // add the path(s) for the local CA's certificates - const BroString* pString = x509_trusted_cert_path->AsString(); - - lookup = X509_STORE_add_lookup(ctx, X509_LOOKUP_hash_dir()); - if ( ! lookup ) - { - reporter->Error("X509_Cert::init(): initing lookup failed\n"); - flag = 1; - } - - int i = X509_LOOKUP_add_dir(lookup, - (const char*) pString->Bytes(), - X509_FILETYPE_PEM); - if ( ! i ) - { - reporter->Error("X509_Cert::init(): error adding lookup directory\n"); - ret = 0; - } - } - else - { - printf("X509: Using the default trusted cert path.\n"); - X509_STORE_set_default_paths(ctx); - } - - // Add crl functionality - will only add if defined and - // X509_STORE_add_lookup was successful. - if ( ! flag && x509_crl_file && x509_crl_file->AsString()->Len() > 0 ) - { - const BroString* rString = x509_crl_file->AsString(); - - if ( X509_load_crl_file(lookup, (const char*) rString->Bytes(), - X509_FILETYPE_PEM) != 1 ) - { - reporter->Error("X509_Cert::init(): error reading CRL file\n"); - ret = 1; - } - -#if 0 - // Note, openssl version must be > 0.9.7(a). - X509_STORE_set_flags(ctx, - X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL); -#endif - } - - bInited = true; - return ret; - } - -int X509_Cert::verify(Contents_SSL* e, const u_char* data, uint32 len) - { - if ( ! bInited ) - init(); - - X509* pCert = d2i_X509_(NULL, &data, len); - if ( ! pCert ) - { - // 5 = X509_V_ERR_UNABLE_TO_DECODE_ISSUER_PUBLIC_KEY - sslCertificateError(e, 5); - return -1; - } - - sslCertificateEvent(e, pCert); - - X509_STORE_CTX_init(&csc, ctx, pCert, 0); - X509_STORE_CTX_set_time(&csc, 0, (time_t) network_time); - int i = X509_verify_cert(&csc); - X509_STORE_CTX_cleanup(&csc); - int ret = 0; - - int ext = X509_get_ext_count(pCert); - - if ( ext > 0 ) - { - TableVal* x509ex = new TableVal(x509_extension); - val_list* vl = new val_list; - char buf[256]; - - for ( int k = 0; k < ext; ++k ) - { - X509_EXTENSION* ex = X509_get_ext(pCert, k); - ASN1_OBJECT* obj = X509_EXTENSION_get_object(ex); - i2t_ASN1_OBJECT(buf, sizeof(buf), obj); - - Val* index = new Val(k+1, TYPE_COUNT); - Val* value = new StringVal(strlen(buf), buf); - x509ex->Assign(index, value); - Unref(index); - // later we can do critical extensions like: - // X509_EXTENSION_get_critical(ex); - } - - vl->append(e->BuildConnVal()); - vl->append(x509ex); - e->Conn()->ConnectionEvent(process_X509_extensions, e, vl); - } - - if ( ! 
i ) - { - sslCertificateError(e, csc.error); - ret = csc.error; - } - else - ret = 0; - - delete pCert; - return ret; - } - -int X509_Cert::verifyChain(Contents_SSL* e, const u_char* data, uint32 len) - { - if ( ! bInited ) - init(); - - // Gets an ssl3x cert chain (could be one single cert, too, - // but in chain format). - - // Init the stack. - STACK_OF(X509)* untrustedCerts = sk_X509_new_null(); - if ( ! untrustedCerts ) - { - // Internal error allocating stack of untrusted certs. - // 11 = X509_V_ERR_OUT_OF_MEM - sslCertificateError(e, 11); - return -1; - } - - // NOT AGAIN!!! - // Extract certificates and put them into an OpenSSL Stack. - uint tempLength = 0; - int certCount = 0; - X509* pCert = 0; // base cert, this one is to be verified - - while ( tempLength < len ) - { - ++certCount; - uint32 certLength = - uint32((data[tempLength + 0] << 16) | - data[tempLength + 1] << 8) | - data[tempLength + 2]; - - // Points to current cert. - const u_char* pCurrentCert = &data[tempLength+3]; - - X509* pTemp = d2i_X509_(0, &pCurrentCert, certLength); - if ( ! pTemp ) - { // error is somewhat of a misnomer - // 5 = X509_V_ERR_UNABLE_TO_DECODE_ISSUER_PUBLIC_KEY - sslCertificateError(e, 5); - //FIXME: free ptrs - return -1; - } - - if ( certCount == 1 ) - // The first certificate goes directly into the ctx. - pCert = pTemp; - else - // The remaining certificates (if any) are put into - // the list of untrusted certificates - sk_X509_push(untrustedCerts, pTemp); - - tempLength += certLength + 3; - } - - sslCertificateEvent(e, pCert); - - X509_STORE_CTX_init(&csc, ctx, pCert, untrustedCerts); - X509_STORE_CTX_set_time(&csc, 0, (time_t) network_time); - int i = X509_verify_cert(&csc); - X509_STORE_CTX_cleanup(&csc); - //X509_STORE_CTX_free(&csc); - int ret = 0; - - if ( ! i ) - { - sslCertificateError(e, csc.error); - ret = csc.error; - } - else - ret = 0; - - delete pCert; - // Free the stack, incuding. contents. - - // FIXME: could this break Bro's memory tracking? - sk_X509_pop_free(untrustedCerts, X509_free); - - return ret; - } diff --git a/src/ZIP.cc b/src/ZIP.cc index 26095d1f11..0ebe93abe6 100644 --- a/src/ZIP.cc +++ b/src/ZIP.cc @@ -2,8 +2,6 @@ #include "ZIP.h" -#ifdef HAVE_LIBZ - ZIP_Analyzer::ZIP_Analyzer(Connection* conn, bool orig, Method arg_method) : TCP_SupportAnalyzer(AnalyzerTag::Zip, conn, orig) { @@ -89,4 +87,3 @@ void ZIP_Analyzer::DeliverStream(int len, const u_char* data, bool orig) } while ( zip->avail_out == 0 ); } -#endif diff --git a/src/ZIP.h b/src/ZIP.h index ab5d2ce68b..6a8a180d1a 100644 --- a/src/ZIP.h +++ b/src/ZIP.h @@ -5,8 +5,6 @@ #include "config.h" -#ifdef HAVE_LIBZ - #include "zlib.h" #include "TCP.h" @@ -29,4 +27,3 @@ protected: }; #endif -#endif diff --git a/src/ayiya-analyzer.pac b/src/ayiya-analyzer.pac new file mode 100644 index 0000000000..7a151453c1 --- /dev/null +++ b/src/ayiya-analyzer.pac @@ -0,0 +1,89 @@ + +connection AYIYA_Conn(bro_analyzer: BroAnalyzer) + { + upflow = AYIYA_Flow; + downflow = AYIYA_Flow; + }; + +flow AYIYA_Flow + { + datagram = PDU withcontext(connection, this); + + function process_ayiya(pdu: PDU): bool + %{ + Connection *c = connection()->bro_analyzer()->Conn(); + const EncapsulationStack* e = c->GetEncapsulation(); + + if ( e && e->Depth() >= BifConst::Tunnel::max_depth ) + { + reporter->Weird(c, "tunnel_depth"); + return false; + } + + if ( ${pdu.op} != 1 ) + { + // 1 is the "forward" command. 
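			// (Sketch of the header layout, following the &let block in
			// ayiya-protocol.pac below: auth is the high nibble of
			// auth_and_op and op the low nibble, so op == 1 is the AYIYA
			// "forward" opcode. Similarly, identity_len is
			// 1 << (identity_byte >> 4) -- e.g. 16 bytes assuming an IPv6
			// identity -- and signature_len is (signature_byte >> 4) * 4 --
			// e.g. 20 bytes assuming a SHA-1 signature.)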
+ return false; + } + + if ( ${pdu.next_header} != IPPROTO_IPV6 && + ${pdu.next_header} != IPPROTO_IPV4 ) + { + reporter->Weird(c, "ayiya_tunnel_non_ip"); + return false; + } + + if ( ${pdu.packet}.length() < (int)sizeof(struct ip) ) + { + connection()->bro_analyzer()->ProtocolViolation( + "Truncated AYIYA", (const char*) ${pdu.packet}.data(), + ${pdu.packet}.length()); + return false; + } + + const struct ip* ip = (const struct ip*) ${pdu.packet}.data(); + + if ( ( ${pdu.next_header} == IPPROTO_IPV6 && ip->ip_v != 6 ) || + ( ${pdu.next_header} == IPPROTO_IPV4 && ip->ip_v != 4) ) + { + connection()->bro_analyzer()->ProtocolViolation( + "AYIYA next header mismatch", (const char*)${pdu.packet}.data(), + ${pdu.packet}.length()); + return false; + } + + IP_Hdr* inner = 0; + int result = sessions->ParseIPPacket(${pdu.packet}.length(), + ${pdu.packet}.data(), ${pdu.next_header}, inner); + + if ( result == 0 ) + connection()->bro_analyzer()->ProtocolConfirmation(); + + else if ( result < 0 ) + connection()->bro_analyzer()->ProtocolViolation( + "Truncated AYIYA", (const char*) ${pdu.packet}.data(), + ${pdu.packet}.length()); + + else + connection()->bro_analyzer()->ProtocolViolation( + "AYIYA payload length", (const char*) ${pdu.packet}.data(), + ${pdu.packet}.length()); + + if ( result != 0 ) + { + delete inner; + return false; + } + + EncapsulatingConn ec(c, BifEnum::Tunnel::AYIYA); + + sessions->DoNextInnerPacket(network_time(), 0, inner, e, ec); + + return (result == 0) ? true : false; + %} + + }; + +refine typeattr PDU += &let { + proc_ayiya = $context.flow.process_ayiya(this); +}; diff --git a/src/ayiya-protocol.pac b/src/ayiya-protocol.pac new file mode 100644 index 0000000000..328d44ece7 --- /dev/null +++ b/src/ayiya-protocol.pac @@ -0,0 +1,16 @@ + +type PDU = record { + identity_byte: uint8; + signature_byte: uint8; + auth_and_op: uint8; + next_header: uint8; + epoch: uint32; + identity: bytestring &length=identity_len; + signature: bytestring &length=signature_len; + packet: bytestring &restofdata; +} &let { + identity_len = (1 << (identity_byte >> 4)); + signature_len = (signature_byte >> 4) * 4; + auth = auth_and_op >> 4; + op = auth_and_op & 0xF; +} &byteorder = littleendian; diff --git a/src/ayiya.pac b/src/ayiya.pac new file mode 100644 index 0000000000..58fa196c15 --- /dev/null +++ b/src/ayiya.pac @@ -0,0 +1,10 @@ +%include binpac.pac +%include bro.pac + +analyzer AYIYA withcontext { + connection: AYIYA_Conn; + flow: AYIYA_Flow; +}; + +%include ayiya-protocol.pac +%include ayiya-analyzer.pac diff --git a/src/bif_type.def b/src/bif_type.def index 4e206ceea2..5d9963ffc9 100644 --- a/src/bif_type.def +++ b/src/bif_type.def @@ -1,6 +1,6 @@ // DEFINE_BIF_TYPE(id, bif_type, bro_type, c_type, accessor, constructor) -DEFINE_BIF_TYPE(TYPE_ADDR, "addr", "addr", "addr_type", "%s->AsAddr()", "new AddrVal(%s)") +DEFINE_BIF_TYPE(TYPE_ADDR, "addr", "addr", "AddrVal*", "%s->AsAddrVal()", "%s") DEFINE_BIF_TYPE(TYPE_ANY, "any", "any", "Val*", "%s", "%s") DEFINE_BIF_TYPE(TYPE_BOOL, "bool", "bool", "int", "%s->AsBool()", "new Val(%s, TYPE_BOOL)") DEFINE_BIF_TYPE(TYPE_CONN_ID, "conn_id", "conn_id", "Val*", "%s", "%s") diff --git a/src/bittorrent-analyzer.pac b/src/bittorrent-analyzer.pac index ee7a70ea21..3bc6d90230 100644 --- a/src/bittorrent-analyzer.pac +++ b/src/bittorrent-analyzer.pac @@ -10,25 +10,25 @@ flow BitTorrent_Flow(is_orig: bool) { %member{ bool handshake_ok; - uint64 _next_message_offset; + //uint64 _next_message_offset; %} %init{ handshake_ok = false; - _next_message_offset = 0; + 
//_next_message_offset = 0; %} - function next_message_offset(): uint64 - %{ - return &_next_message_offset; - %} + #function next_message_offset(): uint64 + # %{ + # return &_next_message_offset; + # %} - function increment_next_message_offset(go: bool, len: uint32): bool - %{ - if ( go ) - _next_message_offset += len; - return true; - %} + #function increment_next_message_offset(go: bool, len: uint32): bool + # %{ + # if ( go ) + # _next_message_offset += len; + # return true; + # %} function is_handshake_delivered(): bool %{ diff --git a/src/bittorrent-protocol.pac b/src/bittorrent-protocol.pac index d3a147f157..76bbafbf20 100644 --- a/src/bittorrent-protocol.pac +++ b/src/bittorrent-protocol.pac @@ -22,8 +22,8 @@ type BitTorrent_Handshake = record { } &length = 68, &let { validate: bool = $context.flow.validate_handshake(pstrlen, pstr); - incoffsetffset: bool = - $context.flow.increment_next_message_offset(true, 68); + #incoffsetffset: bool = + # $context.flow.increment_next_message_offset(true, 68); deliver: bool = $context.flow.deliver_handshake(reserved, info_hash, peer_id); }; @@ -72,8 +72,8 @@ type BitTorrent_PieceHeader(len: uint32) = record { index: uint32; begin: uint32; } &let { - incoffset: bool = - $context.flow.increment_next_message_offset(true, len + 5); + #incoffset: bool = + # $context.flow.increment_next_message_offset(true, len + 5); }; type BitTorrent_Piece(len: uint32) = record { @@ -134,9 +134,9 @@ type BitTorrent_Message = record { default -> message_id: BitTorrent_MessageID(len.len); }; } &length = 4 + len.len, &let { - incoffset: bool = $context.flow.increment_next_message_offset( - len.len == 0 || message_id.id != TYPE_PIECE, - 4 + len.len); + #incoffset: bool = $context.flow.increment_next_message_offset( + # len.len == 0 || message_id.id != TYPE_PIECE, + # 4 + len.len); }; type BitTorrent_PDU = case $context.flow.is_handshake_delivered() of { diff --git a/src/bro.bif b/src/bro.bif index 3350de2867..1b1c23950d 100644 --- a/src/bro.bif +++ b/src/bro.bif @@ -1,15 +1,21 @@ -# Definitions of Bro built-in functions. +##! A collection of built-in functions that implement a variety of things +##! such as general programming algorithms, string processing, math functions, +##! introspection, type conversion, file/directory manipulation, packet +##! filtering, inter-process communication and controlling protocol analyzer +##! behavior. %%{ // C segment #include - #include #include #include #include #include +#include +#include "digest.h" #include "Reporter.h" +#include "IPAddr.h" using namespace std; @@ -174,35 +180,8 @@ static void do_fmt(const char*& fmt, Val* v, ODesc* d) // This makes only a very slight difference, so not // clear it would e worth the hassle. - addr_type u = v->AsAddr(); -#ifdef BROv6 - // We explicitly convert the address to host order - // in a copy, because if we just call ntohl() for - // our invocation on snprintf() below, on some systems - // it turns a 32-bit value (Linux), whereas on - // others it returns a long (FreeBSD); the latter - // gets us in trouble if we have longs > 32 bits, - // because then the format specifier needs to be %lx - // rather than %x ....... what a pain! - // - // Also note that we don't change u in-place because - // that would alter the byte order of the underlying - // value. (Speaking of which, I'm not clear on why - // we're allowed to assign a const addr_type to an - // addr_type above, both g++ allows it.) 
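The replacement below hands this formatting off to IPAddr::AsHexString(); for reference, a hypothetical helper (addr_to_hex() is not part of the patch) mirroring what the removed BROv6 branch just below did by hand, i.e. printing the four network-order words of an IPv6 address as 32 hex digits:

#include <arpa/inet.h>  // ntohl()
#include <stdint.h>
#include <stdio.h>

static void addr_to_hex(const uint32_t words[4], char out[33])
	{
	// Convert each word to host order first so the digits read left to
	// right regardless of the host's endianness.
	snprintf(out, 33, "%08x%08x%08x%08x",
	         (unsigned int) ntohl(words[0]), (unsigned int) ntohl(words[1]),
	         (unsigned int) ntohl(words[2]), (unsigned int) ntohl(words[3]));
	}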
- uint32 host_order_u[4]; - host_order_u[0] = ntohl(u[0]); - host_order_u[1] = ntohl(u[1]); - host_order_u[2] = ntohl(u[2]); - host_order_u[3] = ntohl(u[3]); - - snprintf(out_buf, sizeof(out_buf), "%08x%08x%08x%08x", - host_order_u[0], host_order_u[1], - host_order_u[2], host_order_u[3]); -#else - u = ntohl(u); - snprintf(out_buf, sizeof(out_buf), "%08x", u); -#endif + snprintf(out_buf, sizeof(out_buf), "%s", + v->AsAddr().AsHexString().c_str()); } else if ( ! check_fmt_type(t, ok_d_fmt) ) @@ -317,470 +296,48 @@ static int next_fmt(const char*& fmt, val_list* args, ODesc* d, int& n) } %%} -function length%(v: any%): count - %{ - TableVal* tv = v->Type()->Tag() == TYPE_TABLE ? v->AsTableVal() : 0; - - if ( tv ) - return new Val(tv->Size(), TYPE_COUNT); - - else if ( v->Type()->Tag() == TYPE_VECTOR ) - return new Val(v->AsVectorVal()->Size(), TYPE_COUNT); - - else - { - builtin_error("length() requires a table/set/vector argument"); - return new Val(0, TYPE_COUNT); - } - %} - -function same_object%(o1: any, o2: any%): bool - %{ - return new Val(o1 == o2, TYPE_BOOL); - %} - -function clear_table%(v: any%): any - %{ - if ( v->Type()->Tag() == TYPE_TABLE ) - v->AsTableVal()->RemoveAll(); - else - builtin_error("clear_table() requires a table/set argument"); - - return 0; - %} - -function cat%(...%): string - %{ - ODesc d; - loop_over_list(@ARG@, i) - @ARG@[i]->Describe(&d); - - BroString* s = new BroString(1, d.TakeBytes(), d.Len()); - s->SetUseFreeToDelete(true); - - return new StringVal(s); - %} - -function record_type_to_vector%(rt: string%): string_vec - %{ - VectorVal* result = - new VectorVal(internal_type("string_vec")->AsVectorType()); - - RecordType *type = internal_type(rt->CheckString())->AsRecordType(); - - if ( type ) - { - for ( int i = 0; i < type->NumFields(); ++i ) - { - StringVal* val = new StringVal(type->FieldName(i)); - result->Assign(i+1, val, 0); - } - } - - return result; - %} - - -function cat_sep%(sep: string, def: string, ...%): string - %{ - ODesc d; - int pre_size = 0; - - loop_over_list(@ARG@, i) - { - // Skip named parameters. - if ( i < 2 ) - continue; - - if ( i > 2 ) - d.Add(sep->CheckString(), 0); - - Val* v = @ARG@[i]; - if ( v->Type()->Tag() == TYPE_STRING && ! v->AsString()->Len() ) - v = def; - - v->Describe(&d); - } - - BroString* s = new BroString(1, d.TakeBytes(), d.Len()); - s->SetUseFreeToDelete(true); - - return new StringVal(s); - %} - -function fmt%(...%): string - %{ - if ( @ARGC@ == 0 ) - return new StringVal(""); - - Val* fmt_v = @ARG@[0]; - if ( fmt_v->Type()->Tag() != TYPE_STRING ) - return bro_cat(frame, @ARGS@); - - const char* fmt = fmt_v->AsString()->CheckString(); - ODesc d; - int n = 0; - - while ( next_fmt(fmt, @ARGS@, &d, n) ) - ; - - if ( n < @ARGC@ - 1 ) - builtin_error("too many arguments for format", fmt_v); - - else if ( n >= @ARGC@ ) - builtin_error("too few arguments for format", fmt_v); - - BroString* s = new BroString(1, d.TakeBytes(), d.Len()); - s->SetUseFreeToDelete(true); - - return new StringVal(s); - %} - -function type_name%(t: any%): string - %{ - ODesc d; - t->Type()->Describe(&d); - - BroString* s = new BroString(1, d.TakeBytes(), d.Len()); - s->SetUseFreeToDelete(true); - - return new StringVal(s); - %} - -function to_int%(str: string%): int - %{ - const char* s = str->CheckString(); - char* end_s; - - long l = strtol(s, &end_s, 10); - int i = int(l); - -#if 0 - // Not clear we should complain. For example, is " 205 " - // a legal conversion? 
- if ( s[0] == '\0' || end_s[0] != '\0' ) - builtin_error("bad conversion to integer", @ARG@[0]); -#endif - - return new Val(i, TYPE_INT); - %} - -function int_to_count%(n: int%): count - %{ - if ( n < 0 ) - { - builtin_error("bad conversion to count", @ARG@[0]); - n = 0; - } - return new Val(n, TYPE_COUNT); - %} - -function double_to_count%(d: double%): count - %{ - if ( d < 0.0 ) - builtin_error("bad conversion to count", @ARG@[0]); - - return new Val(bro_uint_t(rint(d)), TYPE_COUNT); - %} - -function to_count%(str: string%): count - %{ - const char* s = str->CheckString(); - char* end_s; - - uint64 u = (uint64) strtoll(s, &end_s, 10); - - if ( s[0] == '\0' || end_s[0] != '\0' ) - { - builtin_error("bad conversion to count", @ARG@[0]); - u = 0; - } - - return new Val(u, TYPE_COUNT); - %} - -function interval_to_double%(i: interval%): double - %{ - return new Val(i, TYPE_DOUBLE); - %} - -function time_to_double%(t: time%): double - %{ - return new Val(t, TYPE_DOUBLE); - %} - -function double_to_time%(d: double%): time - %{ - return new Val(d, TYPE_TIME); - %} - -function double_to_interval%(d: double%): interval - %{ - return new Val(d, TYPE_INTERVAL); - %} - -function addr_to_count%(a: addr%): count - %{ -#ifdef BROv6 - if ( ! is_v4_addr(a) ) - { - builtin_error("conversion of non-IPv4 address to count", @ARG@[0]); - return new Val(0, TYPE_COUNT); - } - - uint32 addr = to_v4_addr(a); -#else - uint32 addr = a; -#endif - return new Val(ntohl(addr), TYPE_COUNT); - %} - -function port_to_count%(p: port%): count - %{ - return new Val(p->Port(), TYPE_COUNT); - %} - -function count_to_port%(c: count, t: transport_proto%): port - %{ - return new PortVal(c, (TransportProto)(t->InternalInt())); - %} - -function floor%(d: double%): double - %{ - return new Val(floor(d), TYPE_DOUBLE); - %} - -function to_addr%(ip: string%): addr - %{ - char* s = ip->AsString()->Render(); - Val* ret = new AddrVal(s); - delete [] s; - return ret; - %} - -function count_to_v4_addr%(ip: count%): addr - %{ - if ( ip > 4294967295LU ) - { - builtin_error("conversion of non-IPv4 count to addr", @ARG@[0]); - return new AddrVal(uint32(0)); - } - - return new AddrVal(htonl(uint32(ip))); - %} - -# Interprets the first 4 bytes of 'b' as an IPv4 address in network order. -function raw_bytes_to_v4_addr%(b: string%): addr - %{ - uint32 a = 0; - - if ( b->Len() < 4 ) - builtin_error("too short a string as input to raw_bytes_to_v4_addr()"); - - else - { - const u_char* bp = b->Bytes(); - a = (bp[0] << 24) | (bp[1] << 16) | (bp[2] << 8) | bp[3]; - } - - return new AddrVal(htonl(a)); - %} - -function to_port%(num: count, proto: transport_proto%): port - %{ - return new PortVal(num, (TransportProto)proto->AsEnum()); - %} - -function mask_addr%(a: addr, top_bits_to_keep: count%): subnet - %{ - return new SubNetVal(mask_addr(a, top_bits_to_keep), top_bits_to_keep); - %} - -# Take some top bits (e.g. subnet address) from a1 and the other -# bits (intra-subnet part) from a2 and merge them to get a new address. -# This is useful for anonymizing at subnet level while preserving -# serial scans. -function remask_addr%(a1: addr, a2: addr, top_bits_from_a1: count%): addr - %{ -#ifdef BROv6 - if ( ! is_v4_addr(a1) || ! 
is_v4_addr(a2) ) - { - builtin_error("cannot use remask_addr on IPv6 addresses"); - return new AddrVal(a1); - } - - uint32 x1 = to_v4_addr(a1); - uint32 x2 = to_v4_addr(a2); -#else - uint32 x1 = a1; - uint32 x2 = a2; -#endif - return new AddrVal( - mask_addr(x1, top_bits_from_a1) | - (x2 ^ mask_addr(x2, top_bits_from_a1)) ); - %} - -function is_tcp_port%(p: port%): bool - %{ - return new Val(p->IsTCP(), TYPE_BOOL); - %} - -function is_udp_port%(p: port%): bool - %{ - return new Val(p->IsUDP(), TYPE_BOOL); - %} - -function is_icmp_port%(p: port%): bool - %{ - return new Val(p->IsICMP(), TYPE_BOOL); - %} - -function reading_live_traffic%(%): bool - %{ - return new Val(reading_live, TYPE_BOOL); - %} - -function reading_traces%(%): bool - %{ - return new Val(reading_traces, TYPE_BOOL); - %} - -function open%(f: string%): file - %{ - const char* file = f->CheckString(); - - if ( streq(file, "-") ) - return new Val(new BroFile(stdout, "-", "w")); - else - return new Val(new BroFile(file, "w")); - %} - -function open_for_append%(f: string%): file - %{ - return new Val(new BroFile(f->CheckString(), "a")); - %} - -function close%(f: file%): bool - %{ - return new Val(f->Close(), TYPE_BOOL); - %} - -function write_file%(f: file, data: string%): bool - %{ - if ( ! f ) - return new Val(0, TYPE_BOOL); - - return new Val(f->Write((const char*) data->Bytes(), data->Len()), - TYPE_BOOL); - %} - -function set_buf%(f: file, buffered: bool%): any - %{ - f->SetBuf(buffered); - return new Val(0, TYPE_VOID); - %} - -function flush_all%(%): bool - %{ - return new Val(fflush(0) == 0, TYPE_BOOL); - %} - -function mkdir%(f: string%): bool - %{ - const char* filename = f->CheckString(); - if ( mkdir(filename, 0777) < 0 && errno != EEXIST ) - { - builtin_error("cannot create directory", @ARG@[0]); - return new Val(0, TYPE_BOOL); - } - else - return new Val(1, TYPE_BOOL); - %} - -function active_file%(f: file%): bool - %{ - return new Val(f->IsOpen(), TYPE_BOOL); - %} - -function active_connection%(id: conn_id%): bool - %{ - Connection* c = sessions->FindConnection(id); - return new Val(c ? 1 : 0, TYPE_BOOL); - %} - -# Note, you *must* first make sure that the connection is active -# (e.g., by calling active_connection()) before invoking this. -function connection_record%(cid: conn_id%): connection - %{ - Connection* c = sessions->FindConnection(cid); - if ( c ) - return c->BuildConnVal(); - else - { - // Hard to recover from this until we have union types ... - builtin_error("connection ID not a known connection (fatal)", cid); - exit(0); - return 0; - } - %} - -%%{ -EnumVal* map_conn_type(TransportProto tp) - { - switch ( tp ) { - case TRANSPORT_UNKNOWN: - return new EnumVal(0, transport_proto); - break; - - case TRANSPORT_TCP: - return new EnumVal(1, transport_proto); - break; - - case TRANSPORT_UDP: - return new EnumVal(2, transport_proto); - break; - - case TRANSPORT_ICMP: - return new EnumVal(3, transport_proto); - break; - - default: - reporter->InternalError("bad connection type in map_conn_type()"); - } - - // Cannot be reached; - assert(false); - return 0; // Make compiler happy. - } -%%} - -function get_conn_transport_proto%(cid: conn_id%): transport_proto - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! 
c ) - { - builtin_error("unknown connection id in get_conn_transport_proto()", cid); - return new EnumVal(0, transport_proto); - } - - return map_conn_type(c->ConnTransport()); - %} - -function get_port_transport_proto%(p: port%): transport_proto - %{ - return map_conn_type(p->PortType()); - %} - +# =========================================================================== +# +# Core +# +# =========================================================================== + +## Returns the current wall-clock time. +## +## In general, you should use :bro:id:`network_time` instead +## unless you are using Bro for non-networking uses (such as general +## scripting; not particularly recommended), because otherwise your script +## may behave very differently on live traffic versus played-back traffic +## from a save file. +## +## Returns: The wall-clock time. +## +## .. bro:see:: network_time function current_time%(%): time %{ return new Val(current_time(), TYPE_TIME); %} +## Returns the timestamp of the last packet processed. This function returns +## the timestamp of the most recently read packet, whether read from a +## live network interface or from a save file. +## +## Returns: The timestamp of the packet processed. +## +## .. bro:see:: current_time function network_time%(%): time %{ return new Val(network_time, TYPE_TIME); %} +## Returns a system environment variable. +## +## var: The name of the variable whose value to request. +## +## Returns: The system environment variable identified by *var*, or an empty +## string if it is not defined. +## +## .. bro:see:: setenv function getenv%(var: string%): string %{ const char* env_val = getenv(var->CheckString()); @@ -789,50 +346,42 @@ function getenv%(var: string%): string return new StringVal(env_val); %} +## Sets a system environment variable. +## +## var: The name of the variable. +## +## val: The (new) value of the variable *var*. +## +## Returns: True on success. +## +## .. bro:see:: getenv function setenv%(var: string, val: string%): bool %{ - int result = setenv(var->AsString()->CheckString(), + int result = setenv(var->AsString()->CheckString(), val->AsString()->CheckString(), 1); - + if ( result < 0 ) return new Val(0, TYPE_BOOL); return new Val(1, TYPE_BOOL); %} -function sqrt%(x: double%): double +## Shuts down the Bro process immediately. +## +## code: The exit code to return with. +## +## .. bro:see:: terminate +function exit%(code: int%): any %{ - if ( x < 0 ) - { - reporter->Error("negative sqrt argument"); - return new Val(-1.0, TYPE_DOUBLE); - } - - return new Val(sqrt(x), TYPE_DOUBLE); - %} - -function exp%(d: double%): double - %{ - return new Val(exp(d), TYPE_DOUBLE); - %} - -# Natural log. -function ln%(d: double%): double - %{ - return new Val(log(d), TYPE_DOUBLE); - %} - -# Common log. -function log10%(d: double%): double - %{ - return new Val(log10(d), TYPE_DOUBLE); - %} - -function exit%(%): int - %{ - exit(0); + exit(code); return 0; %} +## Gracefully shut down Bro by terminating outstanding processing. +## +## Returns: True after successful termination and false when Bro is still in +## the process of shutting down. +## +## .. bro:see:: exit bro_is_terminating function terminate%(%): bool %{ if ( terminating ) @@ -889,17 +438,47 @@ static int do_system(const char* s) } %%} +## Invokes a command via the ``system`` function of the OS. +## The command runs in the background with ``stdout`` redirecting to +## ``stderr``. 
Here is a usage example: +## ``system(fmt("rm \"%s\"", str_shell_escape(sniffed_data)));`` +## +## str: The command to execute. +## +## Returns: The return value from the OS ``system`` function. +## +## .. bro:see:: system_env str_shell_escape piped_exec +## +## .. note:: +## +## Note that this corresponds to the status of backgrounding the +## given command, not to the exit status of the command itself. A +## value of 127 corresponds to a failure to execute ``sh``, and -1 +## to an internal system failure. function system%(str: string%): int %{ int result = do_system(str->CheckString()); return new Val(result, TYPE_INT); %} -function system_env%(str: string, env: any%): int +## Invokes a command via the ``system`` function of the OS with a prepared +## environment. The function is essentially the same as :bro:id:`system`, +## but changes the environment before invoking the command. +## +## str: The command to execute. +## +## env: A :bro:type:`table` with the environment variables in the form +## of key-value pairs. Each specified environment variable name +## will be automatically prepended with ``BRO_ARG_``. +## +## Returns: The return value from the OS ``system`` function. +## +## .. bro:see:: system str_shell_escape piped_exec +function system_env%(str: string, env: table_string_of_string%): int %{ if ( env->Type()->Tag() != TYPE_TABLE ) { - builtin_error("system_env() requires a table/set argument"); + builtin_error("system_env() requires a table argument"); return new Val(-1, TYPE_INT); } @@ -913,532 +492,1425 @@ function system_env%(str: string, env: any%): int return new Val(result, TYPE_INT); %} - -%%{ -static Val* parse_eftp(const char* line) - { - RecordVal* r = new RecordVal(ftp_port); - - int net_proto = 0; // currently not used - uint32 addr = 0; - int port = 0; - int good = 0; - - if ( line ) - { - while ( isspace(*line) ) // skip whitespace - ++line; - - char delimiter = *line; - good = 1; - char* next_delim; - - ++line; // cut off delimiter - net_proto = strtol(line, &next_delim, 10); // currently ignored - if ( *next_delim != delimiter ) - good = 0; - - line = next_delim + 1; - if ( *line != delimiter ) // default of 0 is ok - { - addr = dotted_to_addr(line); - if ( addr == 0 ) - good = 0; - } - - // FIXME: check for garbage between IP and delimiter. - line = strchr(line, delimiter); - - ++line; // now the port - port = strtol(line, &next_delim, 10); - if ( *next_delim != delimiter ) - good = 0; - } - - r->Assign(0, new AddrVal(addr)); - r->Assign(1, new PortVal(port, TRANSPORT_TCP)); - r->Assign(2, new Val(good, TYPE_BOOL)); - - return r; - } -%%} - -%%{ -static Val* parse_port(const char* line) - { - RecordVal* r = new RecordVal(ftp_port); - - int bytes[6]; - if ( line && sscanf(line, "%d,%d,%d,%d,%d,%d", - &bytes[0], &bytes[1], &bytes[2], - &bytes[3], &bytes[4], &bytes[5]) == 6 ) - { - int good = 1; - - for ( int i = 0; i < 6; ++i ) - if ( bytes[i] < 0 || bytes[i] > 255 ) - { - good = 0; - break; - } - - uint32 addr = (bytes[0] << 24) | (bytes[1] << 16) | - (bytes[2] << 8) | bytes[3]; - uint32 port = (bytes[4] << 8) | bytes[5]; - - // Since port is unsigned, no need to check for < 0. 
- if ( port > 65535 ) - { - port = 0; - good = 0; - } - - r->Assign(0, new AddrVal(htonl(addr))); - r->Assign(1, new PortVal(port, TRANSPORT_TCP)); - r->Assign(2, new Val(good, TYPE_BOOL)); - } - else - { - r->Assign(0, new AddrVal(uint32(0))); - r->Assign(1, new PortVal(0, TRANSPORT_TCP)); - r->Assign(2, new Val(0, TYPE_BOOL)); - } - - return r; - } -%%} - -# Returns true if the given connection exists, false otherwise. -function connection_exists%(c: conn_id%): bool +## Opens a program with ``popen`` and writes a given string to the returned +## stream to send it to the opened process's stdin. +## +## program: The program to execute. +## +## to_write: Data to pipe to the opened program's process via ``stdin``. +## +## Returns: True on success. +## +## .. bro:see:: system system_env +function piped_exec%(program: string, to_write: string%): bool %{ - if ( sessions->FindConnection(c) ) - return new Val(1, TYPE_BOOL); - else - return new Val(0, TYPE_BOOL); - %} + const char* prog = program->CheckString(); -# For a given connection ID, returns the corresponding "connection" record. -# Generates a run-time error and returns a dummy value if the connection -# doesn't exist. -function lookup_connection%(cid: conn_id%): connection - %{ - Connection* conn = sessions->FindConnection(cid); - if ( conn ) - return conn->BuildConnVal(); - - builtin_error("connection ID not a known connection", cid); - - // Return a dummy connection record. - RecordVal* c = new RecordVal(connection_type); - - RecordVal* id_val = new RecordVal(conn_id); - id_val->Assign(0, new AddrVal((unsigned int) 0)); - id_val->Assign(1, new PortVal(ntohs(0), TRANSPORT_UDP)); - id_val->Assign(2, new AddrVal((unsigned int) 0)); - id_val->Assign(3, new PortVal(ntohs(0), TRANSPORT_UDP)); - c->Assign(0, id_val); - - RecordVal* orig_endp = new RecordVal(endpoint); - orig_endp->Assign(0, new Val(0, TYPE_COUNT)); - orig_endp->Assign(1, new Val(int(0), TYPE_COUNT)); - - RecordVal* resp_endp = new RecordVal(endpoint); - resp_endp->Assign(0, new Val(0, TYPE_COUNT)); - resp_endp->Assign(1, new Val(int(0), TYPE_COUNT)); - - c->Assign(1, orig_endp); - c->Assign(2, resp_endp); - - c->Assign(3, new Val(network_time, TYPE_TIME)); - c->Assign(4, new Val(0.0, TYPE_INTERVAL)); - c->Assign(5, new TableVal(string_set)); // service - c->Assign(6, new StringVal("")); // addl - c->Assign(7, new Val(0, TYPE_COUNT)); // hot - c->Assign(8, new StringVal("")); // history - - return c; - %} - -function skip_further_processing%(cid: conn_id%): bool - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - return new Val(0, TYPE_BOOL); - - c->SetSkip(1); - return new Val(1, TYPE_BOOL); - %} - -function set_record_packets%(cid: conn_id, do_record: bool%): bool - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - return new Val(0, TYPE_BOOL); - - c->SetRecordPackets(do_record); - return new Val(1, TYPE_BOOL); - %} - -function set_contents_file%(cid: conn_id, direction: count, f: file%): bool - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - return new Val(0, TYPE_BOOL); - - c->GetRootAnalyzer()->SetContentsFile(direction, f); - return new Val(1, TYPE_BOOL); - %} - -function get_contents_file%(cid: conn_id, direction: count%): file - %{ - Connection* c = sessions->FindConnection(cid); - BroFile* f = c ? c->GetRootAnalyzer()->GetContentsFile(direction) : 0; - - if ( f ) - { - Ref(f); - return new Val(f); - } - - // Return some sort of error value. - if ( ! 
c ) - builtin_error("unknown connection id in get_contents_file()", cid); - else - builtin_error("no contents file for given direction"); - - return new Val(new BroFile(stderr, "-", "w")); - %} - -function get_file_name%(f: file%): string - %{ + FILE* f = popen(prog, "w"); if ( ! f ) + { + reporter->Error("Failed to popen %s", prog); + return new Val(0, TYPE_BOOL); + } + + const u_char* input_data = to_write->Bytes(); + int input_data_len = to_write->Len(); + + int bytes_written = fwrite(input_data, 1, input_data_len, f); + + pclose(f); + + if ( bytes_written != input_data_len ) + { + reporter->Error("Failed to write all given data to %s", prog); + return new Val(0, TYPE_BOOL); + } + + return new Val(1, TYPE_BOOL); + %} + +%%{ +static void hash_md5_val(val_list& vlist, unsigned char digest[16]) + { + MD5_CTX h; + + md5_init(&h); + loop_over_list(vlist, i) + { + Val* v = vlist[i]; + if ( v->Type()->Tag() == TYPE_STRING ) + { + const BroString* str = v->AsString(); + md5_update(&h, str->Bytes(), str->Len()); + } + else + { + ODesc d(DESC_BINARY); + v->Describe(&d); + md5_update(&h, (const u_char *) d.Bytes(), d.Len()); + } + } + md5_final(&h, digest); + } + +static void hmac_md5_val(val_list& vlist, unsigned char digest[16]) + { + hash_md5_val(vlist, digest); + for ( int i = 0; i < 16; ++i ) + digest[i] = digest[i] ^ shared_hmac_md5_key[i]; + MD5(digest, 16, digest); + } + +static void hash_sha1_val(val_list& vlist, unsigned char digest[20]) + { + SHA_CTX h; + + sha1_init(&h); + loop_over_list(vlist, i) + { + Val* v = vlist[i]; + if ( v->Type()->Tag() == TYPE_STRING ) + { + const BroString* str = v->AsString(); + sha1_update(&h, str->Bytes(), str->Len()); + } + else + { + ODesc d(DESC_BINARY); + v->Describe(&d); + sha1_update(&h, (const u_char *) d.Bytes(), d.Len()); + } + } + sha1_final(&h, digest); + } + +static void hash_sha256_val(val_list& vlist, unsigned char digest[32]) + { + SHA256_CTX h; + + sha256_init(&h); + loop_over_list(vlist, i) + { + Val* v = vlist[i]; + if ( v->Type()->Tag() == TYPE_STRING ) + { + const BroString* str = v->AsString(); + sha256_update(&h, str->Bytes(), str->Len()); + } + else + { + ODesc d(DESC_BINARY); + v->Describe(&d); + sha256_update(&h, (const u_char *) d.Bytes(), d.Len()); + } + } + sha256_final(&h, digest); + } +%%} + +## Computes the MD5 hash value of the provided list of arguments. +## +## Returns: The MD5 hash value of the concatenated arguments. +## +## .. bro:see:: md5_hmac md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +## +## .. note:: +## +## This function performs a one-shot computation of its arguments. +## For incremental hash computation, see :bro:id:`md5_hash_init` and +## friends. +function md5_hash%(...%): string + %{ + unsigned char digest[16]; + hash_md5_val(@ARG@, digest); + return new StringVal(md5_digest_print(digest)); + %} + +## Computes the SHA1 hash value of the provided list of arguments. +## +## Returns: The SHA1 hash value of the concatenated arguments. +## +## .. bro:see:: md5_hash md5_hmac md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +## +## .. note:: +## +## This function performs a one-shot computation of its arguments. +## For incremental hash computation, see :bro:id:`sha1_hash_init` and +## friends. 
+function sha1_hash%(...%): string + %{ + unsigned char digest[20]; + hash_sha1_val(@ARG@, digest); + return new StringVal(sha1_digest_print(digest)); + %} + +## Computes the SHA256 hash value of the provided list of arguments. +## +## Returns: The SHA256 hash value of the concatenated arguments. +## +## .. bro:see:: md5_hash md5_hmac md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash_init sha256_hash_update sha256_hash_finish +## +## .. note:: +## +## This function performs a one-shot computation of its arguments. +## For incremental hash computation, see :bro:id:`sha256_hash_init` and +## friends. +function sha256_hash%(...%): string + %{ + unsigned char digest[32]; + hash_sha256_val(@ARG@, digest); + return new StringVal(sha256_digest_print(digest)); + %} + +## Computes an HMAC-MD5 hash value of the provided list of arguments. The HMAC +## secret key is generated from available entropy when Bro starts up, or it can +## be specified for repeatability using the ``-K`` command line flag. +## +## Returns: The HMAC-MD5 hash value of the concatenated arguments. +## +## .. bro:see:: md5_hash md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +function md5_hmac%(...%): string + %{ + unsigned char digest[16]; + hmac_md5_val(@ARG@, digest); + return new StringVal(md5_digest_print(digest)); + %} + +%%{ +static map md5_states; +static map sha1_states; +static map sha256_states; + +BroString* convert_index_to_string(Val* index) + { + ODesc d; + index->Describe(&d); + BroString* s = new BroString(1, d.TakeBytes(), d.Len()); + s->SetUseFreeToDelete(1); + return s; + } +%%} + +## Initializes MD5 state to enable incremental hash computation. After +## initializing the MD5 state with this function, you can feed data to +## :bro:id:`md5_hash_update` and finally need to call :bro:id:`md5_hash_finish` +## to finish the computation and get the final hash value. +## +## For example, when computing incremental MD5 values of transferred files in +## multiple concurrent HTTP connections, one would call ``md5_hash_init(c$id)`` +## once before invoking ``md5_hash_update(c$id, some_more_data)`` in the +## :bro:id:`http_entity_data` event handler. When all data has arrived, a call +## to :bro:id:`md5_hash_finish` returns the final hash value. +## +## index: The unique identifier to associate with this hash computation. +## +## .. bro:see:: md5_hmac md5_hash md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +function md5_hash_init%(index: any%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( md5_states.count(*s) < 1 ) + { + MD5_CTX h; + md5_init(&h); + md5_states[*s] = h; + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Initializes SHA1 state to enable incremental hash computation. After +## initializing the SHA1 state with this function, you can feed data to +## :bro:id:`sha1_hash_update` and finally need to call +## :bro:id:`sha1_hash_finish` to finish the computation and get the final hash +## value. 
+## +## For example, when computing incremental SHA1 values of transferred files in +## multiple concurrent HTTP connections, one would call ``sha1_hash_init(c$id)`` +## once before invoking ``sha1_hash_update(c$id, some_more_data)`` in the +## :bro:id:`http_entity_data` event handler. When all data has arrived, a call +## to :bro:id:`sha1_hash_finish` returns the final hash value. +## +## index: The unique identifier to associate with this hash computation. +## +## .. bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +function sha1_hash_init%(index: any%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( sha1_states.count(*s) < 1 ) + { + SHA_CTX h; + sha1_init(&h); + sha1_states[*s] = h; + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Initializes SHA256 state to enable incremental hash computation. After +## initializing the SHA256 state with this function, you can feed data to +## :bro:id:`sha256_hash_update` and finally need to call +## :bro:id:`sha256_hash_finish` to finish the computation and get the final hash +## value. +## +## For example, when computing incremental SHA256 values of transferred files in +## multiple concurrent HTTP connections, one would call +## ``sha256_hash_init(c$id)`` once before invoking +## ``sha256_hash_update(c$id, some_more_data)`` in the +## :bro:id:`http_entity_data` event handler. When all data has arrived, a call +## to :bro:id:`sha256_hash_finish` returns the final hash value. +## +## index: The unique identifier to associate with this hash computation. +## +## .. bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_update sha256_hash_finish +function sha256_hash_init%(index: any%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( sha256_states.count(*s) < 1 ) + { + SHA256_CTX h; + sha256_init(&h); + sha256_states[*s] = h; + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Update the MD5 value associated with a given index. It is required to +## call :bro:id:`md5_hash_init` once before calling this +## function. +## +## index: The unique identifier to associate with this hash computation. +## +## data: The data to add to the hash computation. +## +## .. bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +function md5_hash_update%(index: any, data: string%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( md5_states.count(*s) > 0 ) + { + md5_update(&md5_states[*s], data->Bytes(), data->Len()); + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Update the SHA1 value associated with a given index. It is required to +## call :bro:id:`sha1_hash_init` once before calling this +## function. +## +## index: The unique identifier to associate with this hash computation. +## +## data: The data to add to the hash computation. +## +## .. 
bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +function sha1_hash_update%(index: any, data: string%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( sha1_states.count(*s) > 0 ) + { + sha1_update(&sha1_states[*s], data->Bytes(), data->Len()); + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Update the SHA256 value associated with a given index. It is required to +## call :bro:id:`sha256_hash_init` once before calling this +## function. +## +## index: The unique identifier to associate with this hash computation. +## +## data: The data to add to the hash computation. +## +## .. bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_finish +function sha256_hash_update%(index: any, data: string%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( sha256_states.count(*s) > 0 ) + { + sha256_update(&sha256_states[*s], data->Bytes(), data->Len()); + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Returns the final MD5 digest of an incremental hash computation. +## +## index: The unique identifier of this hash computation. +## +## Returns: The hash value associated with the computation at *index*. +## +## .. bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_update +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +function md5_hash_finish%(index: any%): string + %{ + BroString* s = convert_index_to_string(index); + StringVal* printable_digest; + + if ( md5_states.count(*s) > 0 ) + { + unsigned char digest[16]; + md5_final(&md5_states[*s], digest); + md5_states.erase(*s); + printable_digest = new StringVal(md5_digest_print(digest)); + } + else + printable_digest = new StringVal(""); + + delete s; + return printable_digest; + %} + +## Returns the final SHA1 digest of an incremental hash computation. +## +## index: The unique identifier of this hash computation. +## +## Returns: The hash value associated with the computation at *index*. +## +## .. bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update +## sha256_hash sha256_hash_init sha256_hash_update sha256_hash_finish +function sha1_hash_finish%(index: any%): string + %{ + BroString* s = convert_index_to_string(index); + StringVal* printable_digest; + + if ( sha1_states.count(*s) > 0 ) + { + unsigned char digest[20]; + sha1_final(&sha1_states[*s], digest); + sha1_states.erase(*s); + printable_digest = new StringVal(sha1_digest_print(digest)); + } + else + printable_digest = new StringVal(""); + + delete s; + return printable_digest; + %} + +## Returns the final SHA256 digest of an incremental hash computation. +## +## index: The unique identifier of this hash computation. +## +## Returns: The hash value associated with the computation at *index*. +## +## .. 
bro:see:: md5_hmac md5_hash md5_hash_init md5_hash_update md5_hash_finish +## sha1_hash sha1_hash_init sha1_hash_update sha1_hash_finish +## sha256_hash sha256_hash_init sha256_hash_update +function sha256_hash_finish%(index: any%): string + %{ + BroString* s = convert_index_to_string(index); + StringVal* printable_digest; + + if ( sha256_states.count(*s) > 0 ) + { + unsigned char digest[32]; + sha256_final(&sha256_states[*s], digest); + sha256_states.erase(*s); + printable_digest = new StringVal(sha256_digest_print(digest)); + } + else + printable_digest = new StringVal(""); + + delete s; + return printable_digest; + %} + +## Generates a random number. +## +## max: The maximum value of the random number. +## +## Returns: a random positive integer in the interval *[0, max)*. +## +## .. bro:see:: srand +## +## .. note:: +## +## This function is a wrapper about the function ``random`` +## provided by the OS. +function rand%(max: count%): count + %{ + int result; + result = bro_uint_t(double(max) * double(bro_random()) / (RAND_MAX + 1.0)); + return new Val(result, TYPE_COUNT); + %} + +## Sets the seed for subsequent :bro:id:`rand` calls. +## +## seed: The seed for the PRNG. +## +## .. bro:see:: rand +## +## .. note:: +## +## This function is a wrapper about the function ``srandom`` +## provided by the OS. +function srand%(seed: count%): any + %{ + bro_srandom(seed); + return 0; + %} + +%%{ +#include +%%} + +## Send a string to syslog. +## +## s: The string to log via syslog +function syslog%(s: string%): any + %{ + reporter->Syslog("%s", s->CheckString()); + return 0; + %} + +%%{ +extern "C" { +#include +} +%%} + +## Determines the MIME type of a piece of data using ``libmagic``. +## +## data: The data to find the MIME type for. +## +## return_mime: If true, the function returns a short MIME type string (e.g., +## ``text/plain`` instead of a more elaborate textual description). +## +## Returns: The MIME type of *data*. +function identify_data%(data: string, return_mime: bool%): string + %{ + const char* descr = ""; + + static magic_t magic_mime = 0; + static magic_t magic_descr = 0; + + magic_t* magic = return_mime ? &magic_mime : &magic_descr; + + if( ! *magic ) + { + *magic = magic_open(return_mime ? MAGIC_MIME : MAGIC_NONE); + + if ( ! *magic ) + { + reporter->Error("can't init libmagic: %s", magic_error(*magic)); + return new StringVal(""); + } + + if ( magic_load(*magic, 0) < 0 ) + { + reporter->Error("can't load magic file: %s", magic_error(*magic)); + magic_close(*magic); + *magic = 0; + return new StringVal(""); + } + } + + descr = magic_buffer(*magic, data->Bytes(), data->Len()); + + return new StringVal(descr); + %} + +%%{ +#include +static map entropy_states; +%%} + +## Performs an entropy test on the given data. +## See http://www.fourmilab.ch/random. +## +## data: The data to compute the entropy for. +## +## Returns: The result of the entropy test, which contains the following +## fields. +## +## - ``entropy``: The information density expressed as a number of +## bits per character. +## +## - ``chi_square``: The chi-square test value expressed as an +## absolute number and a percentage which indicates how +## frequently a truly random sequence would exceed the value +## calculated, i.e., the degree to which the sequence tested is +## suspected of being non-random. +## +## If the percentage is greater than 99% or less than 1%, the +## sequence is almost certainly not random. If the percentage is +## between 99% and 95% or between 1% and 5%, the sequence is +## suspect. 
Percentages between 90\% and 95\% and 5\% and 10\% +## indicate the sequence is "almost suspect." +## +## - ``mean``: The arithmetic mean of all the bytes. If the data +## are close to random, it should be around 127.5. +## +## - ``monte_carlo_pi``: Each successive sequence of six bytes is +## used as 24-bit *x* and *y* coordinates within a square. If +## the distance of the randomly-generated point is less than the +## radius of a circle inscribed within the square, the six-byte +## sequence is considered a "hit." The percentage of hits can +## be used to calculate the value of pi. For very large streams +## the value will approach the correct value of pi if the +## sequence is close to random. +## +## - ``serial_correlation``: This quantity measures the extent to +## which each byte in the file depends upon the previous byte. +## For random sequences this value will be close to zero. +## +## .. bro:see:: entropy_test_init entropy_test_add entropy_test_finish +function find_entropy%(data: string%): entropy_test_result + %{ + double montepi, scc, ent, mean, chisq; + montepi = scc = ent = mean = chisq = 0.0; + RecordVal* ent_result = new RecordVal(entropy_test_result); + RandTest *rt = new RandTest(); + + rt->add((char*) data->Bytes(), data->Len()); + rt->end(&ent, &chisq, &mean, &montepi, &scc); + delete rt; + + ent_result->Assign(0, new Val(ent, TYPE_DOUBLE)); + ent_result->Assign(1, new Val(chisq, TYPE_DOUBLE)); + ent_result->Assign(2, new Val(mean, TYPE_DOUBLE)); + ent_result->Assign(3, new Val(montepi, TYPE_DOUBLE)); + ent_result->Assign(4, new Val(scc, TYPE_DOUBLE)); + return ent_result; + %} + +## Initializes data structures for incremental entropy calculation. +## +## index: An arbitrary unique value per distinct computation. +## +## Returns: True on success. +## +## .. bro:see:: find_entropy entropy_test_add entropy_test_finish +function entropy_test_init%(index: any%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( entropy_states.count(*s) < 1 ) + { + entropy_states[*s] = new RandTest(); + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Adds data to an incremental entropy calculation. Before using this function, +## one needs to invoke :bro:id:`entropy_test_init`. +## +## data: The data to add to the entropy calculation. +## +## index: An arbitrary unique value that identifies a particular entropy +## computation. +## +## Returns: True on success. +## +## .. bro:see:: find_entropy entropy_test_add entropy_test_finish +function entropy_test_add%(index: any, data: string%): bool + %{ + BroString* s = convert_index_to_string(index); + int status = 0; + + if ( entropy_states.count(*s) > 0 ) + { + entropy_states[*s]->add((char*) data->Bytes(), data->Len()); + status = 1; + } + + delete s; + return new Val(status, TYPE_BOOL); + %} + +## Finishes an incremental entropy calculation. Before using this function, +## one needs to initialize the computation with :bro:id:`entropy_test_init` and +## add data to it via :bro:id:`entropy_test_add`. +## +## index: An arbitrary unique value that identifies a particular entropy +## computation. +## +## Returns: The result of the entropy test. See :bro:id:`find_entropy` for a +## description of the individual components. +## +## .. 
bro:see:: find_entropy entropy_test_init entropy_test_add +function entropy_test_finish%(index: any%): entropy_test_result + %{ + BroString* s = convert_index_to_string(index); + double montepi, scc, ent, mean, chisq; + montepi = scc = ent = mean = chisq = 0.0; + RecordVal* ent_result = new RecordVal(entropy_test_result); + + if ( entropy_states.count(*s) > 0 ) + { + RandTest *rt = entropy_states[*s]; + rt->end(&ent, &chisq, &mean, &montepi, &scc); + entropy_states.erase(*s); + delete rt; + } + + ent_result->Assign(0, new Val(ent, TYPE_DOUBLE)); + ent_result->Assign(1, new Val(chisq, TYPE_DOUBLE)); + ent_result->Assign(2, new Val(mean, TYPE_DOUBLE)); + ent_result->Assign(3, new Val(montepi, TYPE_DOUBLE)); + ent_result->Assign(4, new Val(scc, TYPE_DOUBLE)); + + delete s; + return ent_result; + %} + +## Creates an identifier that is unique with high probability. +## +## prefix: A custom string prepended to the result. +## +## .. bro:see:: unique_id_from +function unique_id%(prefix: string%) : string + %{ + char tmp[20]; + uint64 uid = calculate_unique_id(UID_POOL_DEFAULT_SCRIPT); + return new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62, prefix->CheckString())); + %} + +## Creates an identifier that is unique with high probability. +## +## pool: A seed for determinism. +## +## prefix: A custom string prepended to the result. +## +## .. bro:see:: unique_id +function unique_id_from%(pool: int, prefix: string%) : string + %{ + pool += UID_POOL_CUSTOM_SCRIPT; // Make sure we don't conflict with internal pool. + + char tmp[20]; + uint64 uid = calculate_unique_id(pool); + return new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62, prefix->CheckString())); + %} + +# =========================================================================== +# +# Generic Programming +# +# =========================================================================== + +## Removes all elements from a set or table. +## +## v: The set or table +function clear_table%(v: any%): any + %{ + if ( v->Type()->Tag() == TYPE_TABLE ) + v->AsTableVal()->RemoveAll(); + else + builtin_error("clear_table() requires a table/set argument"); + + return 0; + %} + +## Returns the number of elements in a container. This function works with all +## container types, i.e., sets, tables, and vectors. +## +## v: The container whose elements are counted. +## +## Returns: The number of elements in *v*. +function length%(v: any%): count + %{ + TableVal* tv = v->Type()->Tag() == TYPE_TABLE ? v->AsTableVal() : 0; + + if ( tv ) + return new Val(tv->Size(), TYPE_COUNT); + + else if ( v->Type()->Tag() == TYPE_VECTOR ) + return new Val(v->AsVectorVal()->Size(), TYPE_COUNT); + + else + { + builtin_error("length() requires a table/set/vector argument"); + return new Val(0, TYPE_COUNT); + } + %} + +## Checks whether two objects reference the same internal object. This function +## uses equality comparison of C++ raw pointer values to determine if the two +## objects are the same. +## +## o1: The first object. +## +## o2: The second object. +## +## Returns: True if *o1* and *o2* are equal. +function same_object%(o1: any, o2: any%): bool + %{ + return new Val(o1 == o2, TYPE_BOOL); + %} + +## Returns the number of bytes that a value occupies in memory. +## +## v: The value +## +## Returns: The number of bytes that *v* occupies. +function val_size%(v: any%): count + %{ + return new Val(v->MemoryAllocation(), TYPE_COUNT); + %} + +## Resizes a vector. +## +## aggr: The vector instance. +## +## newsize: The new size of *aggr*. 
+## +## Returns: The old size of *aggr*, or 0 if *aggr* is not a :bro:type:`vector`. +function resize%(aggr: any, newsize: count%) : count + %{ + if ( aggr->Type()->Tag() != TYPE_VECTOR ) + { + builtin_error("resize() operates on vectors"); + return 0; + } + + return new Val(aggr->AsVectorVal()->Resize(newsize), TYPE_COUNT); + %} + +## Tests whether a boolean vector (``vector of bool``) has *any* true +## element. +## +## v: The boolean vector instance. +## +## Returns: True if any element in *v* is true. +## +## .. bro:see:: all_set +function any_set%(v: any%) : bool + %{ + if ( v->Type()->Tag() != TYPE_VECTOR || + v->Type()->YieldType()->Tag() != TYPE_BOOL ) + { + builtin_error("any_set() requires vector of bool"); + return new Val(false, TYPE_BOOL); + } + + VectorVal* vv = v->AsVectorVal(); + for ( unsigned int i = 0; i < vv->Size(); ++i ) + if ( vv->Lookup(i) && vv->Lookup(i)->AsBool() ) + return new Val(true, TYPE_BOOL); + + return new Val(false, TYPE_BOOL); + %} + +## Tests whether *all* elements of a boolean vector (``vector of bool``) are +## true. +## +## v: The boolean vector instance. +## +## Returns: True iff all elements in *v* are true. +## +## .. bro:see:: any_set +## +## .. note:: +## +## Missing elements count as false. +function all_set%(v: any%) : bool + %{ + if ( v->Type()->Tag() != TYPE_VECTOR || + v->Type()->YieldType()->Tag() != TYPE_BOOL ) + { + builtin_error("all_set() requires vector of bool"); + return new Val(false, TYPE_BOOL); + } + + VectorVal* vv = v->AsVectorVal(); + for ( unsigned int i = 0; i < vv->Size(); ++i ) + if ( ! vv->Lookup(i) || ! vv->Lookup(i)->AsBool() ) + return new Val(false, TYPE_BOOL); + + return new Val(true, TYPE_BOOL); + %} + +%%{ +static Func* sort_function_comp = 0; +static Val** index_map = 0; // used for indirect sorting to support order() + +bool sort_function(Val* a, Val* b) + { + // Sort missing values as "high". + if ( ! a ) + return 0; + if ( ! b ) + return 1; + + val_list sort_func_args; + sort_func_args.append(a->Ref()); + sort_func_args.append(b->Ref()); + + Val* result = sort_function_comp->Call(&sort_func_args); + int int_result = result->CoerceToInt(); + Unref(result); + + sort_func_args.remove_nth(1); + sort_func_args.remove_nth(0); + + return int_result < 0; + } + +bool indirect_sort_function(int a, int b) + { + return sort_function(index_map[a], index_map[b]); + } + +bool int_sort_function (Val* a, Val* b) + { + if ( ! a ) + return 0; + if ( ! b ) + return 1; + + int ia = a->CoerceToInt(); + int ib = b->CoerceToInt(); + + return ia < ib; + } + +bool indirect_int_sort_function(int a, int b) + { + return int_sort_function(index_map[a], index_map[b]); + } +%%} + +## Sorts a vector in place. The second argument is a comparison function that +## takes two arguments: if the vector type is ``vector of T``, then the +## comparison function must be ``function(a: T, b: T): int``, which returns +## a value less than zero if ``a < b`` for some type-specific notion of the +## less-than operator. The comparison function is optional if the type +## is an integral type (int, count, etc.). +## +## v: The vector instance to sort. +## +## Returns: The vector, sorted from minimum to maximum value. If the vector +## could not be sorted, then the original vector is returned instead. +## +## .. 
bro:see:: order +function sort%(v: any, ...%) : any + %{ + v->Ref(); // we always return v + + if ( v->Type()->Tag() != TYPE_VECTOR ) + { + builtin_error("sort() requires vector"); + return v; + } + + BroType* elt_type = v->Type()->YieldType(); + Func* comp = 0; + + if ( @ARG@.length() > 2 ) + builtin_error("sort() called with extraneous argument"); + + if ( @ARG@.length() == 2 ) + { + Val* comp_val = @ARG@[1]; + if ( ! IsFunc(comp_val->Type()->Tag()) ) + { + builtin_error("second argument to sort() needs to be comparison function"); + return v; + } + + comp = comp_val->AsFunc(); + } + + if ( ! comp && ! IsIntegral(elt_type->Tag()) ) + builtin_error("comparison function required for sort() with non-integral types"); + + vector& vv = *v->AsVector(); + + if ( comp ) + { + FuncType* comp_type = comp->FType()->AsFuncType(); + if ( comp_type->YieldType()->Tag() != TYPE_INT || + ! comp_type->ArgTypes()->AllMatch(elt_type, 0) ) + { + builtin_error("invalid comparison function in call to sort()"); + return v; + } + + sort_function_comp = comp; + + sort(vv.begin(), vv.end(), sort_function); + } + else + sort(vv.begin(), vv.end(), int_sort_function); + + return v; + %} + +## Returns the order of the elements in a vector according to some +## comparison function. See :bro:id:`sort` for details about the comparison +## function. +## +## v: The vector whose order to compute. +## +## Returns: A ``vector of count`` with the indices of the ordered elements. +## For example, the elements of *v* in order are (assuming ``o`` +## is the vector returned by ``order``): v[o[0]], v[o[1]], etc. +## +## .. bro:see:: sort +function order%(v: any, ...%) : index_vec + %{ + VectorVal* result_v = new VectorVal( + internal_type("index_vec")->AsVectorType()); + + if ( v->Type()->Tag() != TYPE_VECTOR ) + { + builtin_error("order() requires vector"); + return result_v; + } + + BroType* elt_type = v->Type()->YieldType(); + Func* comp = 0; + + if ( @ARG@.length() > 2 ) + builtin_error("order() called with extraneous argument"); + + if ( @ARG@.length() == 2 ) + { + Val* comp_val = @ARG@[1]; + if ( ! IsFunc(comp_val->Type()->Tag()) ) + { + builtin_error("second argument to order() needs to be comparison function"); + return v; + } + + comp = comp_val->AsFunc(); + } + + if ( ! comp && ! IsIntegral(elt_type->Tag()) ) + builtin_error("comparison function required for order() with non-integral types"); + + vector& vv = *v->AsVector(); + int n = vv.size(); + + // Set up initial mapping of indices directly to corresponding + // elements. + vector ind_vv(n); + index_map = new Val*[n]; + int i; + for ( i = 0; i < n; ++i ) + { + ind_vv[i] = i; + index_map[i] = vv[i]; + } + + if ( comp ) + { + FuncType* comp_type = comp->FType()->AsFuncType(); + if ( comp_type->YieldType()->Tag() != TYPE_INT || + ! comp_type->ArgTypes()->AllMatch(elt_type, 0) ) + { + builtin_error("invalid comparison function in call to order()"); + return v; + } + + sort_function_comp = comp; + + sort(ind_vv.begin(), ind_vv.end(), indirect_sort_function); + } + else + sort(ind_vv.begin(), ind_vv.end(), indirect_int_sort_function); + + delete [] index_map; + index_map = 0; + + // Now spin through ind_vv to read out the rearrangement. 
+ for ( i = 0; i < n; ++i ) + { + int ind = ind_vv[i]; + result_v->Assign(i, new Val(ind, TYPE_COUNT), 0); + } + + return result_v; + %} + +# =========================================================================== +# +# String Processing +# +# =========================================================================== + +## Returns the concatenation of the string representation of its arguments. The +## arguments can be of any type. For example, ``cat("foo", 3, T)`` returns +## ``"foo3T"``. +## +## Returns: A string concatentation of all arguments. +function cat%(...%): string + %{ + ODesc d; + loop_over_list(@ARG@, i) + @ARG@[i]->Describe(&d); + + BroString* s = new BroString(1, d.TakeBytes(), d.Len()); + s->SetUseFreeToDelete(true); + + return new StringVal(s); + %} + +## Concatenates all arguments, with a separator placed between each one. This +## function is similar to :bro:id:`cat`, but places a separator between each +## given argument. If any of the variable arguments is an empty string it is +## replaced by a given default string instead. +## +## sep: The separator to place between each argument. +## +## def: The default string to use when an argument is the empty string. +## +## Returns: A concatenation of all arguments with *sep* between each one and +## empty strings replaced with *def*. +## +## .. bro:see:: cat string_cat cat_string_array cat_string_array_n +function cat_sep%(sep: string, def: string, ...%): string + %{ + ODesc d; + int pre_size = 0; + + loop_over_list(@ARG@, i) + { + // Skip named parameters. + if ( i < 2 ) + continue; + + if ( i > 2 ) + d.Add(sep->CheckString(), 0); + + Val* v = @ARG@[i]; + if ( v->Type()->Tag() == TYPE_STRING && ! v->AsString()->Len() ) + v = def; + + v->Describe(&d); + } + + BroString* s = new BroString(1, d.TakeBytes(), d.Len()); + s->SetUseFreeToDelete(true); + + return new StringVal(s); + %} + +## Produces a formatted string à la ``printf``. The first argument is the +## *format string* and specifies how subsequent arguments are converted for +## output. It is composed of zero or more directives: ordinary characters (not +## ``%``), which are copied unchanged to the output, and conversion +## specifications, each of which fetches zero or more subsequent arguments. +## Conversion specifications begin with ``%`` and the arguments must properly +## correspond to the specifier. After the ``%``, the following characters +## may appear in sequence: +## +## - ``%``: Literal ``%`` +## +## - ``-``: Left-align field +## +## - ``[0-9]+``: The field width (< 128) +## +## - ``.``: Precision of floating point specifiers ``[efg]`` (< 128) +## +## - ``A``: Escape only NUL bytes (each one replaced with ``\0``) in a string +## +## - ``[DTdxsefg]``: Format specifier +## +## - ``[DT]``: ISO timestamp with microsecond precision +## +## - ``d``: Signed/Unsigned integer (using C-style ``%lld``/``%llu`` +## for ``int``/``count``) +## +## - ``x``: Unsigned hexadecimal (using C-style ``%llx``); +## addresses/ports are converted to host-byte order +## +## - ``s``: String (byte values less than 32 or greater than 126 +## will be escaped) +## +## - ``[efg]``: Double +## +## Returns: Returns the formatted string. Given no arguments, :bro:id:`fmt` +## returns an empty string. Given no format string or the wrong +## number of additional arguments for the given format specifier, +## :bro:id:`fmt` generates a run-time error. +## +## .. 
bro:see:: cat cat_sep string_cat cat_string_array cat_string_array_n +function fmt%(...%): string + %{ + if ( @ARGC@ == 0 ) return new StringVal(""); - return new StringVal(f->Name()); - %} + Val* fmt_v = @ARG@[0]; -# Set an individual inactivity timeout for this connection -# (overrides the global inactivity_timeout). Returns previous -# timeout interval. -function set_inactivity_timeout%(cid: conn_id, t: interval%): interval - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - return new Val(0, TYPE_INTERVAL); + // Type of fmt_v will be string here, check_built_in_call() in Func.cc + // checks that. - double old_timeout = c->InactivityTimeout(); - c->SetInactivityTimeout(t); + const char* fmt = fmt_v->AsString()->CheckString(); + ODesc d; + int n = 0; - return new Val(old_timeout, TYPE_INTERVAL); - %} + while ( next_fmt(fmt, @ARGS@, &d, n) ) + ; -function get_login_state%(cid: conn_id%): count - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - return new Val(0, TYPE_BOOL); - - Analyzer* la = c->FindAnalyzer(AnalyzerTag::Login); - if ( ! la ) - return new Val(0, TYPE_BOOL); - - return new Val(int(static_cast(la)->LoginState()), - TYPE_COUNT); - %} - -function set_login_state%(cid: conn_id, new_state: count%): bool - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - return new Val(0, TYPE_BOOL); - - Analyzer* la = c->FindAnalyzer(AnalyzerTag::Login); - if ( ! la ) - return new Val(0, TYPE_BOOL); - - static_cast(la)->SetLoginState(login_state(new_state)); - return new Val(1, TYPE_BOOL); - %} - -%%{ -#include "TCP.h" -%%} - -function get_orig_seq%(cid: conn_id%): count - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - return new Val(0, TYPE_COUNT); - - if ( c->ConnTransport() != TRANSPORT_TCP ) - return new Val(0, TYPE_COUNT); - - Analyzer* tc = c->FindAnalyzer(AnalyzerTag::TCP); - if ( tc ) - return new Val(static_cast(tc)->OrigSeq(), - TYPE_COUNT); - else + if ( n < @ARGC@ - 1 ) { - reporter->Error("connection does not have TCP analyzer"); - return new Val(0, TYPE_COUNT); + builtin_error("too many arguments for format", fmt_v); + return new StringVal(""); } - %} -function get_resp_seq%(cid: conn_id%): count - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! 
c ) - return new Val(0, TYPE_COUNT); - - if ( c->ConnTransport() != TRANSPORT_TCP ) - return new Val(0, TYPE_COUNT); - - Analyzer* tc = c->FindAnalyzer(AnalyzerTag::TCP); - if ( tc ) - return new Val(static_cast(tc)->RespSeq(), - TYPE_COUNT); - else + else if ( n >= @ARGC@ ) { - reporter->Error("connection does not have TCP analyzer"); - return new Val(0, TYPE_COUNT); - } - %} - -# These convert addresses <-> n3.n2.n1.n0.in-addr.arpa -function ptr_name_to_addr%(s: string%): addr - %{ - int a[4]; - uint32 addr; - - if ( sscanf(s->CheckString(), - "%d.%d.%d.%d.in-addr.arpa", - a, a+1, a+2, a+3) != 4 ) - { - builtin_error("bad PTR name", @ARG@[0]); - addr = 0; - } - else - addr = (a[3] << 24) | (a[2] << 16) | (a[1] << 8) | a[0]; - - return new AddrVal(htonl(addr)); - %} - -function addr_to_ptr_name%(a: addr%): string - %{ - // ## Question: - // uint32 addr = ntohl((*args)[0]->InternalUnsigned()); - uint32 addr; -#ifdef BROv6 - if ( is_v4_addr(a) ) - addr = a[3]; - else - { - builtin_error("conversion of non-IPv4 address to net", @ARG@[0]); - addr = 0; - } -#else - addr = a; -#endif - - addr = ntohl(addr); - uint32 a3 = (addr >> 24) & 0xff; - uint32 a2 = (addr >> 16) & 0xff; - uint32 a1 = (addr >> 8) & 0xff; - uint32 a0 = addr & 0xff; - - char buf[256]; - sprintf(buf, "%u.%u.%u.%u.in-addr.arpa", a0, a1, a2, a3); - - return new StringVal(buf); - %} - -# Transforms n0.n1.n2.n3 -> addr. -function parse_dotted_addr%(s: string%): addr - %{ - return new AddrVal(dotted_to_addr(s->CheckString())); - %} - -function parse_ftp_port%(s: string%): ftp_port - %{ - return parse_port(s->CheckString()); - %} - -function parse_eftp_port%(s: string%): ftp_port - %{ - return parse_eftp(s->CheckString()); - %} - -function parse_ftp_pasv%(str: string%): ftp_port - %{ - const char* s = str->CheckString(); - const char* line = strchr(s, '('); - if ( line ) - ++line; // move past '(' - else if ( (line = strstr(s, "PORT")) ) - line += 5; // Skip over - else if ( (line = strchr(s, ',')) ) - { // Look for comma-separated list. - while ( --line >= s && isdigit(*line) ) - ; // Back up over preceding digits. - ++line; // now points to first digit, or beginning of s + builtin_error("too few arguments for format", fmt_v); + return new StringVal(""); } - return parse_port(line); + BroString* s = new BroString(1, d.TakeBytes(), d.Len()); + s->SetUseFreeToDelete(true); + + return new StringVal(s); %} -function parse_ftp_epsv%(str: string%): ftp_port - %{ - const char* s = str->CheckString(); - const char* line = strchr(s, '('); - if ( line ) - ++line; // move past '(' - return parse_eftp(line); - %} - -function fmt_ftp_port%(a: addr, p: port%): string - %{ -#ifdef BROv6 - if ( ! is_v4_addr(a) ) - builtin_error("conversion of non-IPv4 address to net", @ARG@[0]); - - uint32 addr = to_v4_addr(a); -#else - uint32 addr = a; -#endif - addr = ntohl(addr); - uint32 pn = p->Port(); - return new StringVal(fmt("%d,%d,%d,%d,%d,%d", - addr >> 24, (addr >> 16) & 0xff, - (addr >> 8) & 0xff, addr & 0xff, - pn >> 8, pn & 0xff)); - %} - -function decode_netbios_name%(name: string%): string - %{ - char buf[16]; - char result[16]; - const u_char* s = name->Bytes(); - int i, j; - - for ( i = 0, j = 0; i < 16; ++i ) - { - char c0 = (j < name->Len()) ? toupper(s[j++]) : 'A'; - char c1 = (j < name->Len()) ? toupper(s[j++]) : 'A'; - buf[i] = ((c0 - 'A') << 4) + (c1 - 'A'); - } - - for ( i = 0; i < 15; ++i ) - { - if ( isalnum(buf[i]) || ispunct(buf[i]) || - // \x01\x02 is seen in at least one case as the first two bytes. 
- // I think that any \x01 and \x02 should always be passed through. - buf[i] < 3 ) - result[i] = buf[i]; - else - break; - } - - return new StringVal(i, result); - %} - -function decode_netbios_name_type%(name: string%): count - %{ - const u_char* s = name->Bytes(); - char return_val = ((toupper(s[30]) - 'A') << 4) + (toupper(s[31]) - 'A'); - return new Val(return_val, TYPE_COUNT); - %} - -%%{ -#include "HTTP.h" - -const char* conn_id_string(Val* c) - { - Val* id = (*(c->AsRecord()))[0]; - const val_list* vl = id->AsRecord(); - - addr_type orig_h = (*vl)[0]->AsAddr(); - uint32 orig_p = (*vl)[1]->AsPortVal()->Port(); - addr_type resp_h = (*vl)[2]->AsAddr(); - uint32 resp_p = (*vl)[3]->AsPortVal()->Port(); - - return fmt("%s/%u -> %s/%u\n", dotted_addr(orig_h), orig_p, dotted_addr(resp_h), resp_p); - } -%%} - -# Skip data of the HTTP entity on the connection -function skip_http_entity_data%(c: connection, is_orig: bool%): any - %{ - AnalyzerID id = mgr.CurrentAnalyzer(); - if ( id ) - { - Analyzer* ha = c->FindAnalyzer(id); - - if ( ha ) - { - if ( ha->GetTag() == AnalyzerTag::HTTP ) - static_cast(ha)->SkipEntityData(is_orig); - else - reporter->Error("non-HTTP analyzer associated with connection record"); - } - else - reporter->Error("could not find analyzer for skip_http_entity_data"); - - } - else - reporter->Error("no analyzer associated with connection record"); - - return 0; - %} - -# Unescape all characters in the URI, i.e. decode every %xx group. +# =========================================================================== # -# Note that unescaping reserved characters may cause loss of -# information (see below). +# Math # -# RFC 2396: A URI is always in an "escaped" form, since escaping or -# unescaping a completed URI might change its semantics. Normally, -# the only time escape encodings can safely be made is when the URI -# is being created from its component parts. +# =========================================================================== -function unescape_URI%(URI: string%): string +## Computes the greatest integer less than the given :bro:type:`double` value. +## For example, ``floor(3.14)`` returns ``3.0``, and ``floor(-3.14)`` +## returns ``-4.0``. +## +## d: The :bro:type:`double` to manipulate. +## +## Returns: The next lowest integer of *d* as :bro:type:`double`. +## +## .. bro:see:: sqrt exp ln log10 +function floor%(d: double%): double %{ - const u_char* line = URI->Bytes(); - const u_char* const line_end = line + URI->Len(); - - return new StringVal(unescape_URI(line, line_end, 0)); + return new Val(floor(d), TYPE_DOUBLE); %} -%%{ -#include "SMTP.h" -%%} - -# Skip smtp_data till next mail -function skip_smtp_data%(c: connection%): any +## Computes the square root of a :bro:type:`double`. +## +## x: The number to compute the square root of. +## +## Returns: The square root of *x*. +## +## .. bro:see:: floor exp ln log10 +function sqrt%(x: double%): double %{ - Analyzer* sa = c->FindAnalyzer(AnalyzerTag::SMTP); - if ( sa ) - static_cast(sa)->SkipData(); - return 0; + if ( x < 0 ) + { + reporter->Error("negative sqrt argument"); + return new Val(-1.0, TYPE_DOUBLE); + } + + return new Val(sqrt(x), TYPE_DOUBLE); %} - -function bytestring_to_hexstr%(bytestring: string%): string +## Computes the exponential function. +## +## d: The argument to the exponential function. +## +## Returns: *e* to the power of *d*. +## +## .. 
bro:see:: floor sqrt ln log10 +function exp%(d: double%): double %{ - bro_uint_t len = bytestring->AsString()->Len(); - const u_char* bytes = bytestring->AsString()->Bytes(); - char hexstr[(2 * len) + 1]; + return new Val(exp(d), TYPE_DOUBLE); + %} - hexstr[0] = 0; - for ( bro_uint_t i = 0; i < len; ++i ) - snprintf(hexstr + (2 * i), 3, "%.2hhx", bytes[i]); +## Computes the natural logarithm of a number. +## +## d: The argument to the logarithm. +## +## Returns: The natural logarithm of *d*. +## +## .. bro:see:: exp floor sqrt log10 +function ln%(d: double%): double + %{ + return new Val(log(d), TYPE_DOUBLE); + %} - return new StringVal(hexstr); +## Computes the common logarithm of a number. +## +## d: The argument to the logarithm. +## +## Returns: The common logarithm of *d*. +## +## .. bro:see:: exp floor sqrt ln +function log10%(d: double%): double + %{ + return new Val(log10(d), TYPE_DOUBLE); + %} + +# =========================================================================== +# +# Introspection +# +# =========================================================================== + +## Determines whether *c* has been received externally. For example, +## Broccoli or the Time Machine can send packets to Bro via a mechanism that is +## one step lower than sending events. This function checks whether the packets +## of a connection stem from one of these external *packet sources*. +## +## c: The connection to test. +## +## Returns: True if *c* has been received externally. +function is_external_connection%(c: connection%) : bool + %{ + return new Val(c && c->IsExternal(), TYPE_BOOL); + %} + +## Returns the ID of the analyzer which raised the current event. +## +## Returns: The ID of the analyzer which raised the current event, or 0 if +## none. +function current_analyzer%(%) : count + %{ + return new Val(mgr.CurrentAnalyzer(), TYPE_COUNT); + %} + +## Returns Bro's process ID. +## +## Returns: Bro's process ID. +function getpid%(%) : count + %{ + return new Val(getpid(), TYPE_COUNT); %} %%{ extern const char* bro_version(); %%} +## Returns the Bro version string. +## +## Returns: Bro's version, e.g., 2.0-beta-47-debug. +function bro_version%(%): string + %{ + return new StringVal(bro_version()); + %} + +## Converts a record type name to a vector of strings, where each element is +## the name of a record field. Nested records are flattened. +## +## rt: The name of the record type. +## +## Returns: A string vector with the field names of *rt*. +function record_type_to_vector%(rt: string%): string_vec + %{ + VectorVal* result = + new VectorVal(internal_type("string_vec")->AsVectorType()); + + RecordType *type = internal_type(rt->CheckString())->AsRecordType(); + + if ( type ) + { + for ( int i = 0; i < type->NumFields(); ++i ) + { + StringVal* val = new StringVal(type->FieldName(i)); + result->Assign(i+1, val, 0); + } + } + + return result; + %} + +## Returns the type name of an arbitrary Bro variable. +## +## t: An arbitrary object. +## +## Returns: The type name of *t*. +function type_name%(t: any%): string + %{ + ODesc d; + t->Type()->Describe(&d); + + BroString* s = new BroString(1, d.TakeBytes(), d.Len()); + s->SetUseFreeToDelete(true); + + return new StringVal(s); + %} + +## Checks whether Bro reads traffic from one or more network interfaces (as +## opposed to from a network trace in a file). Note that this function returns +## true even after Bro has stopped reading network traffic, for example due to +## receiving a termination signal. 
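+##
+## A minimal sketch that distinguishes live analysis from trace analysis::
+##
+##     event bro_init()
+##         {
+##         if ( reading_live_traffic() )
+##             print "running on a live interface";
+##         else if ( reading_traces() )
+##             print "running on a trace file";
+##         }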
+## +## Returns: True if reading traffic from a network interface. +## +## .. bro:see:: reading_traces +function reading_live_traffic%(%): bool + %{ + return new Val(reading_live, TYPE_BOOL); + %} + +## Checks whether Bro reads traffic from a trace file (as opposed to from a +## network interface). +## +## Returns: True if reading traffic from a network trace. +## +## .. bro:see:: reading_live_traffic +function reading_traces%(%): bool + %{ + return new Val(reading_traces, TYPE_BOOL); + %} + +## Returns packet capture statistics. Statistics include the number of +## packets *(i)* received by Bro, *(ii)* dropped, and *(iii)* seen on the +## link (not always available). +## +## Returns: A record of packet statistics. +## +## .. bro:see:: do_profiling +## resource_usage +## get_matcher_stats +## dump_rule_stats +## get_gap_summary function net_stats%(%): NetStats %{ unsigned int recv = 0; @@ -1464,6 +1936,17 @@ function net_stats%(%): NetStats return ns; %} +## Returns Bro process statistics. Statistics include real/user/sys CPU time, +## memory usage, page faults, number of TCP/UDP/ICMP connections, timers, +## and events queued/dispatched. +## +## Returns: A record with resource usage statistics. +## +## .. bro:see:: do_profiling +## net_stats +## get_matcher_stats +## dump_rule_stats +## get_gap_summary function resource_usage%(%): bro_resources %{ struct rusage r; @@ -1530,6 +2013,18 @@ function resource_usage%(%): bro_resources return res; %} +## Returns statistics about the regular expression engine. Statistics include +## the number of distinct matchers, DFA states, DFA state transitions, memory +## usage of DFA states, cache hits/misses, and average number of NFA states +## across all matchers. +## +## Returns: A record with matcher statistics. +## +## .. bro:see:: do_profiling +## net_stats +## resource_usage +## dump_rule_stats +## get_gap_summary function get_matcher_stats%(%): matcher_stats %{ RuleMatcher::Stats s; @@ -1550,6 +2045,15 @@ function get_matcher_stats%(%): matcher_stats return r; %} +## Returns statistics about TCP gaps. +## +## Returns: A record with TCP gap statistics. +## +## .. bro:see:: do_profiling +## net_stats +## resource_usage +## dump_rule_stats +## get_matcher_stats function get_gap_summary%(%): gap_info %{ RecordVal* r = new RecordVal(gap_info); @@ -1561,11 +2065,12 @@ function get_gap_summary%(%): gap_info return r; %} -function val_size%(v: any%): count - %{ - return new Val(v->MemoryAllocation(), TYPE_COUNT); - %} - +## Generates a table of the size of all global variables. The table index is +## the variable name and the value is the variable size in bytes. +## +## Returns: A table that maps variable names to their sizes. +## +## .. bro:see:: global_ids function global_sizes%(%): var_sizes %{ TableVal* sizes = new TableVal(var_sizes); @@ -1586,6 +2091,14 @@ function global_sizes%(%): var_sizes return sizes; %} +## Generates a table with information about all global identifiers. The table +## value is a record containing the type name of the identifier, whether it is +## exported, a constant, an enum constant, redefinable, and its value (if it +## has one). +## +## Returns: A table that maps identifier names to information about them. +## +## .. bro:see:: global_sizes function global_ids%(%): id_table %{ TableVal* ids = new TableVal(id_table); @@ -1620,6 +2133,31 @@ function global_ids%(%): id_table return ids; %} +## Returns the value of a global identifier. +## +## id: The global identifier. +## +## Returns the value of *id*. 
If *id* does not describe a valid identifier, the +## function returns the string ``""`` or ``""``. +function lookup_ID%(id: string%) : any + %{ + ID* i = global_scope()->Lookup(id->CheckString()); + if ( ! i ) + return new StringVal(""); + + if ( ! i->ID_Val() ) + return new StringVal(""); + + return i->ID_Val()->Ref(); + %} + +## Generates metadata about a record's fields. The returned information +## includes the field name, whether it is logged, its value (if it has one), +## and its default value (if specified). +## +## rec: The record to inspect. +## +## Returns: A table that describes the fields of a record. function record_fields%(rec: any%): record_field_table %{ TableVal* fields = new TableVal(record_field_table); @@ -1658,196 +2196,883 @@ function record_fields%(rec: any%): record_field_table return fields; %} -%%{ -#include "Anon.h" -%%} - -# Preserve prefix as original one in anonymization. -function preserve_prefix%(a: addr, width: count%): any +## Enables detailed collection of profiling statistics. Statistics include +## CPU/memory usage, connections, TCP states/reassembler, DNS lookups, +## timers, and script-level state. The script variable :bro:id:`profiling_file` +## holds the name of the file. +## +## .. bro:see:: net_stats +## resource_usage +## get_matcher_stats +## dump_rule_stats +## get_gap_summary +function do_profiling%(%) : any %{ - AnonymizeIPAddr* ip_anon = ip_anonymizer[PREFIX_PRESERVING_A50]; - if ( ip_anon ) - { -#ifdef BROv6 - if ( ! is_v4_addr(a) ) - builtin_error("preserve_prefix() not supported for IPv6 addresses"); - else - ip_anon->PreservePrefix(a[3], width); -#else - ip_anon->PreservePrefix(a, width); -#endif - } - + if ( profiling_logger ) + profiling_logger->Log(); return 0; %} -function preserve_subnet%(a: subnet%): any +## Checks whether a given IP address belongs to a local interface. +## +## ip: The IP address to check. +## +## Returns: True if *ip* belongs to a local interface. +function is_local_interface%(ip: addr%) : bool %{ - DEBUG_MSG("%s/%d\n", dotted_addr(a->AsAddr()), a->Width()); - AnonymizeIPAddr* ip_anon = ip_anonymizer[PREFIX_PRESERVING_A50]; - if ( ip_anon ) + if ( ip->AsAddr().IsLoopback() ) + return new Val(1, TYPE_BOOL); + + list addrs; + + char host[MAXHOSTNAMELEN]; + + strcpy(host, "localhost"); + gethostname(host, MAXHOSTNAMELEN); + host[MAXHOSTNAMELEN-1] = '\0'; + + struct hostent* ent = gethostbyname2(host, AF_INET); + + if ( ent ) { -#ifdef BROv6 - if ( ! is_v4_addr(a->AsAddr()) ) - builtin_error("preserve_subnet() not supported for IPv6 addresses"); - else - ip_anon->PreservePrefix(a->AsAddr()[3], a->Width()); -#else - ip_anon->PreservePrefix(a->AsAddr(), a->Width()); -#endif + for ( unsigned int len = 0; ent->h_addr_list[len]; ++len ) + addrs.push_back(IPAddr(IPv4, (uint32*)ent->h_addr_list[len], + IPAddr::Network)); } - return 0; + ent = gethostbyname2(host, AF_INET6); + + if ( ent ) + { + for ( unsigned int len = 0; ent->h_addr_list[len]; ++len ) + addrs.push_back(IPAddr(IPv6, (uint32*)ent->h_addr_list[len], + IPAddr::Network)); + } + + list::const_iterator it; + for ( it = addrs.begin(); it != addrs.end(); ++it ) + { + if ( *it == ip->AsAddr() ) + return new Val(1, TYPE_BOOL); + } + + return new Val(0, TYPE_BOOL); %} -# Anonymize given IP address. -function anonymize_addr%(a: addr, cl: IPAddrAnonymizationClass%): addr +## Write rule matcher statistics (DFA states, transitions, memory usage, cache +## hits/misses) to a file. +## +## f: The file to write to. +## +## Returns: True (unconditionally). +## +## .. 
bro:see:: do_profiling +## resource_usage +## get_matcher_stats +## net_stats +## get_gap_summary +## +## .. todo:: The return value should be changed to any or check appropriately. +function dump_rule_stats%(f: file%): bool %{ - int anon_class = cl->InternalInt(); - if ( anon_class < 0 || anon_class >= NUM_ADDR_ANONYMIZATION_CLASSES ) - builtin_error("anonymize_addr(): invalid ip addr anonymization class"); + if ( rule_matcher ) + rule_matcher->DumpStats(f); -#ifdef BROv6 - if ( ! is_v4_addr(a) ) + return new Val(1, TYPE_BOOL); + %} + +## Checks if Bro is terminating. +## +## Returns: True if Bro is in the process of shutting down. +## +## .. bro:see:: terminate +function bro_is_terminating%(%): bool + %{ + return new Val(terminating, TYPE_BOOL); + %} + +## Returns the hostname of the machine Bro runs on. +## +## Returns: The hostname of the machine Bro runs on. +function gethostname%(%) : string + %{ + char buffer[MAXHOSTNAMELEN]; + if ( gethostname(buffer, MAXHOSTNAMELEN) < 0 ) + strcpy(buffer, ""); + + buffer[MAXHOSTNAMELEN-1] = '\0'; + return new StringVal(buffer); + %} + +## Returns whether an address is IPv4 or not. +## +## a: the address to check. +## +## Returns: true if *a* is an IPv4 address, else false. +function is_v4_addr%(a: addr%): bool + %{ + if ( a->AsAddr().GetFamily() == IPv4 ) + return new Val(1, TYPE_BOOL); + else + return new Val(0, TYPE_BOOL); + %} + +## Returns whether an address is IPv6 or not. +## +## a: the address to check. +## +## Returns: true if *a* is an IPv6 address, else false. +function is_v6_addr%(a: addr%): bool + %{ + if ( a->AsAddr().GetFamily() == IPv6 ) + return new Val(1, TYPE_BOOL); + else + return new Val(0, TYPE_BOOL); + %} + +# =========================================================================== +# +# Conversion +# +# =========================================================================== + +## Converts the *data* field of :bro:type:`ip6_routing` records that have +## *rtype* of 0 into a vector of addresses. +## +## s: The *data* field of an :bro:type:`ip6_routing` record that has +## an *rtype* of 0. +## +## Returns: The vector of addresses contained in the routing header data. +function routing0_data_to_addrs%(s: string%): addr_vec + %{ + VectorVal* rval = new VectorVal(internal_type("addr_vec")->AsVectorType()); + + int len = s->Len(); + const u_char* bytes = s->Bytes(); + bytes += 4; // go past 32-bit reserved field + len -= 4; + + if ( ( len % 16 ) != 0 ) + reporter->Warning("Bad ip6_routing data length: %d", s->Len()); + + while ( len > 0 ) { - builtin_error("anonymize_addr() not supported for IPv6 addresses"); - return 0; + IPAddr a(IPv6, (const uint32*) bytes, IPAddr::Network); + rval->Assign(rval->Size(), new AddrVal(a), 0); + bytes += 16; + len -= 16; + } + + return rval; + %} + +## Converts an :bro:type:`addr` to an :bro:type:`index_vec`. +## +## a: The address to convert into a vector of counts. +## +## Returns: A vector containing the host-order address representation, +## four elements in size for IPv6 addresses, or one element for IPv4. +## +## .. bro:see:: counts_to_addr +function addr_to_counts%(a: addr%): index_vec + %{ + VectorVal* rval = new VectorVal(internal_type("index_vec")->AsVectorType()); + const uint32* bytes; + int len = a->AsAddr().GetBytes(&bytes); + + for ( int i = 0; i < len; ++i ) + rval->Assign(i, new Val(ntohl(bytes[i]), TYPE_COUNT), 0); + + return rval; + %} + +## Converts an :bro:type:`index_vec` to an :bro:type:`addr`. 
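+## For example, an address can be round-tripped through the two
+## conversions (a minimal sketch)::
+##
+##     event bro_init()
+##         {
+##         local a = counts_to_addr(addr_to_counts(192.168.0.1));
+##         print a;   # prints 192.168.0.1
+##         }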
+## +## v: The vector containing host-order IP address representation, +## one element for IPv4 addresses, four elements for IPv6 addresses. +## +## Returns: An IP address. +## +## .. bro:see:: addr_to_counts +function counts_to_addr%(v: index_vec%): addr + %{ + if ( v->AsVector()->size() == 1 ) + { + return new AddrVal(htonl((*v->AsVector())[0]->AsCount())); + } + else if ( v->AsVector()->size() == 4 ) + { + uint32 bytes[4]; + for ( int i = 0; i < 4; ++i ) + bytes[i] = htonl((*v->AsVector())[i]->AsCount()); + return new AddrVal(bytes); } else - return new AddrVal(anonymize_ip(a[3], - (enum ip_addr_anonymization_class_t) anon_class)); -#else - return new AddrVal(anonymize_ip(a, - (enum ip_addr_anonymization_class_t) anon_class)); -#endif + { + builtin_error("invalid vector size", @ARG@[0]); + uint32 bytes[4]; + memset(bytes, 0, sizeof(bytes)); + return new AddrVal(bytes); + } %} -%%{ -static void hash_md5_val(val_list& vlist, unsigned char digest[16]) - { - md5_state_s h; +## Converts a :bro:type:`string` to an :bro:type:`int`. +## +## str: The :bro:type:`string` to convert. +## +## Returns: The :bro:type:`string` *str* as :bro:type:`int`. +## +## .. bro:see:: to_addr to_port to_subnet +function to_int%(str: string%): int + %{ + const char* s = str->CheckString(); + char* end_s; - md5_init(&h); - loop_over_list(vlist, i) + long l = strtol(s, &end_s, 10); + int i = int(l); + +#if 0 + // Not clear we should complain. For example, is " 205 " + // a legal conversion? + if ( s[0] == '\0' || end_s[0] != '\0' ) + builtin_error("bad conversion to integer", @ARG@[0]); +#endif + + return new Val(i, TYPE_INT); + %} + + +## Converts a (positive) :bro:type:`int` to a :bro:type:`count`. +## +## n: The :bro:type:`int` to convert. +## +## Returns: The :bro:type:`int` *n* as unsigned integer, or 0 if *n* < 0. +function int_to_count%(n: int%): count + %{ + if ( n < 0 ) { - Val* v = vlist[i]; - if ( v->Type()->Tag() == TYPE_STRING ) + builtin_error("bad conversion to count", @ARG@[0]); + n = 0; + } + return new Val(n, TYPE_COUNT); + %} + +## Converts a :bro:type:`double` to a :bro:type:`count`. +## +## d: The :bro:type:`double` to convert. +## +## Returns: The :bro:type:`double` *d* as unsigned integer, or 0 if *d* < 0.0. +## +## .. bro:see:: double_to_time +function double_to_count%(d: double%): count + %{ + if ( d < 0.0 ) + builtin_error("bad conversion to count", @ARG@[0]); + + return new Val(bro_uint_t(rint(d)), TYPE_COUNT); + %} + +## Converts a :bro:type:`string` to a :bro:type:`count`. +## +## str: The :bro:type:`string` to convert. +## +## Returns: The :bro:type:`string` *str* as unsigned integer, or 0 if *str* has +## an invalid format. +## +## .. bro:see:: to_addr to_int to_port to_subnet +function to_count%(str: string%): count + %{ + const char* s = str->CheckString(); + char* end_s; + + uint64 u = (uint64) strtoll(s, &end_s, 10); + + if ( s[0] == '\0' || end_s[0] != '\0' ) + { + builtin_error("bad conversion to count", @ARG@[0]); + u = 0; + } + + return new Val(u, TYPE_COUNT); + %} + +## Converts an :bro:type:`interval` to a :bro:type:`double`. +## +## i: The :bro:type:`interval` to convert. +## +## Returns: The :bro:type:`interval` *i* as :bro:type:`double`. +## +## .. bro:see:: double_to_interval +function interval_to_double%(i: interval%): double + %{ + return new Val(i, TYPE_DOUBLE); + %} + +## Converts a :bro:type:`time` value to a :bro:type:`double`. +## +## t: The :bro:type:`time` to convert. +## +## Returns: The :bro:type:`time` value *t* as :bro:type:`double`. +## +## .. 
bro:see:: double_to_time +function time_to_double%(t: time%): double + %{ + return new Val(t, TYPE_DOUBLE); + %} + +## Converts a :bro:type:`double` value to a :bro:type:`time`. +## +## d: The :bro:type:`double` to convert. +## +## Returns: The :bro:type:`double` value *d* as :bro:type:`time`. +## +## .. bro:see:: time_to_double double_to_count +function double_to_time%(d: double%): time + %{ + return new Val(d, TYPE_TIME); + %} + +## Converts a :bro:type:`double` to an :bro:type:`interval`. +## +## d: The :bro:type:`double` to convert. +## +## Returns: The :bro:type:`double` *d* as :bro:type:`interval`. +## +## .. bro:see:: interval_to_double +function double_to_interval%(d: double%): interval + %{ + return new Val(d, TYPE_INTERVAL); + %} + +## Converts a :bro:type:`port` to a :bro:type:`count`. +## +## p: The :bro:type:`port` to convert. +## +## Returns: The :bro:type:`port` *p* as :bro:type:`count`. +## +## .. bro:see:: count_to_port +function port_to_count%(p: port%): count + %{ + return new Val(p->Port(), TYPE_COUNT); + %} + +## Converts a :bro:type:`count` and ``transport_proto`` to a :bro:type:`port`. +## +## num: The :bro:type:`port` number. +## +## proto: The transport protocol. +## +## Returns: The :bro:type:`count` *num* as :bro:type:`port`. +## +## .. bro:see:: port_to_count +function count_to_port%(num: count, proto: transport_proto%): port + %{ + return new PortVal(num, (TransportProto)proto->AsEnum()); + %} + +## Converts a :bro:type:`string` to an :bro:type:`addr`. +## +## ip: The :bro:type:`string` to convert. +## +## Returns: The :bro:type:`string` *ip* as :bro:type:`addr`, or the unspecified +## address ``::`` if the input string does not parse correctly. +## +## .. bro:see:: to_count to_int to_port count_to_v4_addr raw_bytes_to_v4_addr +## to_subnet +function to_addr%(ip: string%): addr + %{ + char* s = ip->AsString()->Render(); + Val* ret = new AddrVal(s); + delete [] s; + return ret; + %} + +## Converts a :bro:type:`string` to a :bro:type:`subnet`. +## +## sn: The subnet to convert. +## +## Returns: The *sn* string as a :bro:type:`subnet`, or the unspecified subnet +## ``::/0`` if the input string does not parse correctly. +## +## .. bro:see:: to_count to_int to_port count_to_v4_addr raw_bytes_to_v4_addr +## to_addr +function to_subnet%(sn: string%): subnet + %{ + char* s = sn->AsString()->Render(); + Val* ret = new SubNetVal(s); + delete [] s; + return ret; + %} + +## Converts a :bro:type:`string` to a :bro:type:`double`. +## +## str: The :bro:type:`string` to convert. +## +## Returns: The :bro:type:`string` *str* as double, or 0 if *str* has +## an invalid format. +## +function to_double%(str: string%): double + %{ + const char* s = str->CheckString(); + char* end_s; + + double d = strtod(s, &end_s); + + if ( s[0] == '\0' || end_s[0] != '\0' ) + { + builtin_error("bad conversion to double", @ARG@[0]); + d = 0; + } + + return new Val(d, TYPE_DOUBLE); + %} + +## Converts a :bro:type:`count` to an :bro:type:`addr`. +## +## ip: The :bro:type:`count` to convert. +## +## Returns: The :bro:type:`count` *ip* as :bro:type:`addr`. +## +## .. bro:see:: raw_bytes_to_v4_addr to_addr to_subnet +function count_to_v4_addr%(ip: count%): addr + %{ + if ( ip > 4294967295LU ) + { + builtin_error("conversion of non-IPv4 count to addr", @ARG@[0]); + return new AddrVal(uint32(0)); + } + + return new AddrVal(htonl(uint32(ip))); + %} + +## Converts a :bro:type:`string` of bytes into an IPv4 address. 
In particular, +## this function interprets the first 4 bytes of the string as an IPv4 address +## in network order. +## +## b: The raw bytes (:bro:type:`string`) to convert. +## +## Returns: The byte :bro:type:`string` *b* as :bro:type:`addr`. +## +## .. bro:see:: raw_bytes_to_v4_addr to_addr to_subnet +function raw_bytes_to_v4_addr%(b: string%): addr + %{ + uint32 a = 0; + + if ( b->Len() < 4 ) + builtin_error("too short a string as input to raw_bytes_to_v4_addr()"); + + else + { + const u_char* bp = b->Bytes(); + a = (bp[0] << 24) | (bp[1] << 16) | (bp[2] << 8) | bp[3]; + } + + return new AddrVal(htonl(a)); + %} + +## Converts a :bro:type:`string` to a :bro:type:`port`. +## +## s: The :bro:type:`string` to convert. +## +## Returns: A :bro:type:`port` converted from *s*. +## +## .. bro:see:: to_addr to_count to_int to_subnet +function to_port%(s: string%): port + %{ + int port = 0; + if ( s->Len() < 10 ) + { + char* slash; + port = strtol(s->CheckString(), &slash, 10); + if ( port ) + { + ++slash; + if ( streq(slash, "tcp") ) + return new PortVal(port, TRANSPORT_TCP); + else if ( streq(slash, "udp") ) + return new PortVal(port, TRANSPORT_UDP); + else if ( streq(slash, "icmp") ) + return new PortVal(port, TRANSPORT_ICMP); + } + } + + builtin_error("wrong port format, must be /[0-9]{1,5}\\/(tcp|udp|icmp)/"); + return new PortVal(port, TRANSPORT_UNKNOWN); + %} + +## Converts a reverse pointer name to an address. For example, +## ``1.0.168.192.in-addr.arpa`` to ``192.168.0.1``. +## +## s: The string with the reverse pointer name. +## +## Returns: The IP address corresponding to *s*. +## +## .. bro:see:: addr_to_ptr_name to_addr +function ptr_name_to_addr%(s: string%): addr + %{ + if ( s->Len() != 72 ) + { + int a[4]; + uint32 addr; + char ss[13]; // this will contain "in-addr.arpa" + + if ( sscanf(s->CheckString(), + "%d.%d.%d.%d.%12s", + a, a+1, a+2, a+3, ss) != 5 + || strcmp(ss, "in-addr.arpa") != 0 ) { - const BroString* str = v->AsString(); - md5_append(&h, str->Bytes(), str->Len()); + builtin_error("bad PTR name", @ARG@[0]); + addr = 0; + } + else + addr = (a[3] << 24) | (a[2] << 16) | (a[1] << 8) | a[0]; + + return new AddrVal(htonl(addr)); + } + else + { + uint32 addr6[4]; + uint32 b[32]; + char ss[9]; // this will contain "ip6.arpa" + if ( sscanf(s->CheckString(), + "%1x.%1x.%1x.%1x.%1x.%1x.%1x.%1x." + "%1x.%1x.%1x.%1x.%1x.%1x.%1x.%1x." + "%1x.%1x.%1x.%1x.%1x.%1x.%1x.%1x." 
+ "%1x.%1x.%1x.%1x.%1x.%1x.%1x.%1x.%8s", + b+31, b+30, b+29, b+28, b+27, b+26, b+25, b+24, + b+23, b+22, b+21, b+20, b+19, b+18, b+17, b+16, + b+15, b+14, b+13, b+12, b+11, b+10, b+9, b+8, + b+7, b+6, b+5, b+4, b+3, b+2, b+1, b, ss) != 33 + || strcmp(ss, "ip6.arpa") != 0 ) + { + builtin_error("bad PTR name", @ARG@[0]); + memset(addr6, 0, sizeof addr6); } else { - ODesc d(DESC_BINARY); - v->Describe(&d); - md5_append(&h, (const md5_byte_t *) d.Bytes(), d.Len()); + for ( unsigned int i = 0; i < 4; ++i ) + { + uint32 a = 0; + for ( unsigned int j = 1; j <= 8; ++j ) + a |= b[8*i+j-1] << (32-j*4); + + addr6[i] = htonl(a); + } } + + return new AddrVal(addr6); } - md5_finish(&h, digest); - } - -static void hmac_md5_val(val_list& vlist, unsigned char digest[16]) - { - hash_md5_val(vlist, digest); - for ( int i = 0; i < 16; ++i ) - digest[i] = digest[i] ^ shared_hmac_md5_key[i]; - hash_md5(16, digest, digest); - } -%%} - -function md5_hash%(...%): string - %{ - unsigned char digest[16]; - hash_md5_val(@ARG@, digest); - return new StringVal(md5_digest_print(digest)); %} -function md5_hmac%(...%): string +## Converts an IP address to a reverse pointer name. For example, +## ``192.168.0.1`` to ``1.0.168.192.in-addr.arpa``. +## +## a: The IP address to convert to a reverse pointer name. +## +## Returns: The reverse pointer representation of *a*. +## +## .. bro:see:: ptr_name_to_addr to_addr +function addr_to_ptr_name%(a: addr%): string %{ - unsigned char digest[16]; - hmac_md5_val(@ARG@, digest); - return new StringVal(md5_digest_print(digest)); + return new StringVal(a->AsAddr().PtrName().c_str()); %} + %%{ -static map md5_states; - -BroString* convert_index_to_string(Val* index) +static Val* parse_port(const char* line) { - ODesc d; - index->Describe(&d); - BroString* s = new BroString(1, d.TakeBytes(), d.Len()); - s->SetUseFreeToDelete(1); - return s; + RecordVal* r = new RecordVal(ftp_port); + + int bytes[6]; + if ( line && sscanf(line, "%d,%d,%d,%d,%d,%d", + &bytes[0], &bytes[1], &bytes[2], + &bytes[3], &bytes[4], &bytes[5]) == 6 ) + { + int good = 1; + + for ( int i = 0; i < 6; ++i ) + if ( bytes[i] < 0 || bytes[i] > 255 ) + { + good = 0; + break; + } + + uint32 addr = (bytes[0] << 24) | (bytes[1] << 16) | + (bytes[2] << 8) | bytes[3]; + uint32 port = (bytes[4] << 8) | bytes[5]; + + // Since port is unsigned, no need to check for < 0. 
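As a usage sketch for the PTR-name helpers documented above (illustrative only; the values mirror the examples given in the documentation)::

	event bro_init()
		{
		local p = addr_to_ptr_name(192.168.0.1);
		print p;                     # "1.0.168.192.in-addr.arpa"
		print ptr_name_to_addr(p);   # 192.168.0.1
		}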
+ if ( port > 65535 ) + { + port = 0; + good = 0; + } + + r->Assign(0, new AddrVal(htonl(addr))); + r->Assign(1, new PortVal(port, TRANSPORT_TCP)); + r->Assign(2, new Val(good, TYPE_BOOL)); + } + else + { + r->Assign(0, new AddrVal(uint32(0))); + r->Assign(1, new PortVal(0, TRANSPORT_TCP)); + r->Assign(2, new Val(0, TYPE_BOOL)); + } + + return r; + } + +static Val* parse_eftp(const char* line) + { + RecordVal* r = new RecordVal(ftp_port); + + int net_proto = 0; // currently not used + IPAddr addr; // unspecified IPv6 address (all 128 bits zero) + int port = 0; + int good = 0; + + if ( line ) + { + while ( isspace(*line) ) // skip whitespace + ++line; + + char delimiter = *line; + char* next_delim; + + if ( *line ) + { + good = 1; + ++line; // skip delimiter + + net_proto = strtol(line, &next_delim, 10); + if ( *next_delim != delimiter ) + good = 0; + + line = next_delim; + if ( *line ) + ++line; + + if ( *line && *line != delimiter ) + { + const char* nptr = strchr(line, delimiter); + if ( nptr == NULL ) + { + nptr = line + strlen(line); + good = 0; + } + + string s(line, nptr-line); // extract IP address + IPAddr tmp(s); + // on error, "tmp" will have all 128 bits zero + if ( tmp == addr ) + good = 0; + + addr = tmp; + } + + line = strchr(line, delimiter); + + if ( line != NULL ) + { + ++line; // now the port + port = strtol(line, &next_delim, 10); + if ( *next_delim != delimiter ) + good = 0; + } + + } + + } + + r->Assign(0, new AddrVal(addr)); + r->Assign(1, new PortVal(port, TRANSPORT_TCP)); + r->Assign(2, new Val(good, TYPE_BOOL)); + + return r; } %%} -function md5_hash_init%(index: any%): bool +## Converts a string representation of the FTP PORT command to an ``ftp_port``. +## +## s: The string of the FTP PORT command, e.g., ``"10,0,0,1,4,31"``. +## +## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]`` +## +## .. bro:see:: parse_eftp_port parse_ftp_pasv parse_ftp_epsv fmt_ftp_port +function parse_ftp_port%(s: string%): ftp_port %{ - BroString* s = convert_index_to_string(index); - int status = 0; - - if ( md5_states.count(*s) < 1 ) - { - md5_state_s h; - md5_init(&h); - md5_states[*s] = h; - status = 1; - } - - delete s; - return new Val(status, TYPE_BOOL); + return parse_port(s->CheckString()); %} -function md5_hash_update%(index: any, data: string%): bool +## Converts a string representation of the FTP EPRT command to an ``ftp_port``. +## See `RFC 2428 `_. +## The format is ``EPRT``, +## where ```` is a delimiter in the ASCII range 33-126 (usually ``|``). +## +## s: The string of the FTP EPRT command, e.g., ``"|1|10.0.0.1|1055|"``. +## +## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]`` +## +## .. bro:see:: parse_ftp_port parse_ftp_pasv parse_ftp_epsv fmt_ftp_port +function parse_eftp_port%(s: string%): ftp_port %{ - BroString* s = convert_index_to_string(index); - int status = 0; - - if ( md5_states.count(*s) > 0 ) - { - md5_append(&md5_states[*s], data->Bytes(), data->Len()); - status = 1; - } - - delete s; - return new Val(status, TYPE_BOOL); + return parse_eftp(s->CheckString()); %} -function md5_hash_finish%(index: any%): string +## Converts the result of the FTP PASV command to an ``ftp_port``. +## +## str: The string containing the result of the FTP PASV command. +## +## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]`` +## +## .. 
bro:see:: parse_ftp_port parse_eftp_port parse_ftp_epsv fmt_ftp_port +function parse_ftp_pasv%(str: string%): ftp_port %{ - BroString* s = convert_index_to_string(index); - StringVal* printable_digest; + const char* s = str->CheckString(); + const char* line = strchr(s, '('); + if ( line ) + ++line; // move past '(' + else if ( (line = strstr(s, "PORT")) ) + line += 5; // Skip over + else if ( (line = strchr(s, ',')) ) + { // Look for comma-separated list. + while ( --line >= s && isdigit(*line) ) + ; // Back up over preceding digits. + ++line; // now points to first digit, or beginning of s + } - if ( md5_states.count(*s) > 0 ) + return parse_port(line); + %} + +## Converts the result of the FTP EPSV command to an ``ftp_port``. +## See `RFC 2428 `_. +## The format is `` ()``, where ```` is a +## delimiter in the ASCII range 33-126 (usually ``|``). +## +## str: The string containing the result of the FTP EPSV command. +## +## Returns: The FTP PORT, e.g., ``[h=10.0.0.1, p=1055/tcp, valid=T]`` +## +## .. bro:see:: parse_ftp_port parse_eftp_port parse_ftp_pasv fmt_ftp_port +function parse_ftp_epsv%(str: string%): ftp_port + %{ + const char* s = str->CheckString(); + const char* line = strchr(s, '('); + if ( line ) + ++line; // move past '(' + return parse_eftp(line); + %} + +## Formats an IP address and TCP port as an FTP PORT command. For example, +## ``10.0.0.1`` and ``1055/tcp`` yields ``"10,0,0,1,4,31"``. +## +## a: The IP address. +## +## p: The TCP port. +## +## Returns: The FTP PORT string. +## +## .. bro:see:: parse_ftp_port parse_eftp_port parse_ftp_pasv parse_ftp_epsv +function fmt_ftp_port%(a: addr, p: port%): string + %{ + const uint32* addr; + int len = a->AsAddr().GetBytes(&addr); + if ( len == 1 ) { - unsigned char digest[16]; - md5_finish(&md5_states[*s], digest); - md5_states.erase(*s); - printable_digest = new StringVal(md5_digest_print(digest)); + uint32 a = ntohl(addr[0]); + uint32 pn = p->Port(); + return new StringVal(fmt("%d,%d,%d,%d,%d,%d", + a >> 24, (a >> 16) & 0xff, + (a >> 8) & 0xff, a & 0xff, + pn >> 8, pn & 0xff)); } else - printable_digest = new StringVal(""); - - delete s; - return printable_digest; + { + builtin_error("conversion of non-IPv4 address in fmt_ftp_port", + @ARG@[0]); + return new StringVal(""); + } %} -# Wrappings for rand() and srand() -function rand%(max: count%): count +## Decode a NetBIOS name. See http://support.microsoft.com/kb/194203. +## +## name: The encoded NetBIOS name, e.g., ``"FEEIEFCAEOEFFEECEJEPFDCAEOEBENEF"``. +## +## Returns: The decoded NetBIOS name, e.g., ``"THE NETBIOS NAME"``. +## +## .. bro:see:: decode_netbios_name_type +function decode_netbios_name%(name: string%): string %{ - int result; - result = bro_uint_t(double(max) * double(rand()) / (RAND_MAX + 1.0)); - return new Val(result, TYPE_COUNT); + char buf[16]; + char result[16]; + const u_char* s = name->Bytes(); + int i, j; + + for ( i = 0, j = 0; i < 16; ++i ) + { + char c0 = (j < name->Len()) ? toupper(s[j++]) : 'A'; + char c1 = (j < name->Len()) ? toupper(s[j++]) : 'A'; + buf[i] = ((c0 - 'A') << 4) + (c1 - 'A'); + } + + for ( i = 0; i < 15; ++i ) + { + if ( isalnum(buf[i]) || ispunct(buf[i]) || + // \x01\x02 is seen in at least one case as the first two bytes. + // I think that any \x01 and \x02 should always be passed through. + buf[i] < 3 ) + result[i] = buf[i]; + else + break; + } + + return new StringVal(i, result); %} -function srand%(seed: count%): any +## Converts a NetBIOS name type to its corresponding numeric value. 
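The FTP helpers above can be combined to round-trip an address/port pair through the PORT representation. A minimal, illustrative sketch (field names follow the ftp_port record shown in the examples above)::

	event bro_init()
		{
		local s = fmt_ftp_port(10.0.0.1, 1055/tcp);
		print s;                     # "10,0,0,1,4,31"

		local p = parse_ftp_port(s);
		if ( p$valid )
			print p$h, p$p;      # 10.0.0.1, 1055/tcp

		# EPRT-style strings use an explicit delimiter, usually '|'.
		print parse_eftp_port("|1|10.0.0.1|1055|");
		}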
+## See http://support.microsoft.com/kb/163409. +## +## name: The NetBIOS name type. +## +## Returns: The numeric value of *name*. +## +## .. bro:see:: decode_netbios_name +function decode_netbios_name_type%(name: string%): count %{ - srand(seed); - return 0; + const u_char* s = name->Bytes(); + char return_val = ((toupper(s[30]) - 'A') << 4) + (toupper(s[31]) - 'A'); + return new Val(return_val, TYPE_COUNT); %} +## Converts a string of bytes into its hexadecimal representation. +## For example, ``"04"`` would be converted to ``"3034"``. +## +## bytestring: The string of bytes. +## +## Returns: The hexadecimal representation of *bytestring*. +## +## .. bro:see:: hexdump +function bytestring_to_hexstr%(bytestring: string%): string + %{ + bro_uint_t len = bytestring->AsString()->Len(); + const u_char* bytes = bytestring->AsString()->Bytes(); + char hexstr[(2 * len) + 1]; + + hexstr[0] = 0; + for ( bro_uint_t i = 0; i < len; ++i ) + snprintf(hexstr + (2 * i), 3, "%.2hhx", bytes[i]); + + return new StringVal(hexstr); + %} + +## Decodes a Base64-encoded string. +## +## s: The Base64-encoded string. +## +## Returns: The decoded version of *s*. +## +## .. bro:see:: decode_base64_custom function decode_base64%(s: string%): string %{ BroString* t = decode_base64(s->AsString()); @@ -1860,6 +3085,29 @@ function decode_base64%(s: string%): string } %} +## Decodes a Base64-encoded string with a custom alphabet. +## +## s: The Base64-encoded string. +## +## a: The custom alphabet. The empty string indicates the default alphabet. The +## length of *a* must be 64. For example, a custom alphabet could be +## ``"!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"``. +## +## Returns: The decoded version of *s*. +## +## .. bro:see:: decode_base64 +function decode_base64_custom%(s: string, a: string%): string + %{ + BroString* t = decode_base64(s->AsString(), a->AsString()); + if ( t ) + return new StringVal(t); + else + { + reporter->Error("error in decoding string %s", s->CheckString()); + return new StringVal(""); + } + %} + %%{ #include "DCE_RPC.h" @@ -1873,6 +3121,14 @@ typedef struct { } bro_uuid_t; %%} +## Converts a bytes representation of a UUID into its string form. For example, +## given a string of 16 bytes, it produces an output string in this format: +## ``550e8400-e29b-41d4-a716-446655440000``. +## See ``_. +## +## uuid: The 16 bytes of the UUID. +## +## Returns: The string representation of *uuid*. function uuid_to_string%(uuid: string%): string %{ if ( uuid->Len() != 16 ) @@ -1897,11 +3153,20 @@ function uuid_to_string%(uuid: string%): string return new StringVal(s); %} - -# The following functions convert strings into patterns at run-time. As the -# computed NFAs and DFAs cannot be cleanly deallocated (at least for now), -# they can only be used at initialization time. - +## Merges and compiles two regular expressions at initialization time. +## +## p1: The first pattern. +## +## p2: The second pattern. +## +## Returns: The compiled pattern of the concatenation of *p1* and *p2*. +## +## .. bro:see:: convert_for_pattern string_to_pattern +## +## .. note:: +## +## This function must be called at Bro startup time, e.g., in the event +## :bro:id:`bro_init`. function merge_pattern%(p1: pattern, p2: pattern%): pattern %{ if ( bro_start_network_time != 0.0 ) @@ -1940,6 +3205,17 @@ char* to_pat_str(int sn, const char* ss) } %%} +## Escapes a string so that it becomes a valid :bro:type:`pattern` and can be +## used with the :bro:id:`string_to_pattern`. 
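A brief sketch of the decoding helpers above; the Base64 input is an arbitrary sample, the hex example mirrors the documentation::

	event bro_init()
		{
		print decode_base64("YnJv");        # "bro"
		print bytestring_to_hexstr("04");   # "3034"
		}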
Any character from the set +## ``^$-:"\/|*+?.(){}[]`` is prefixed with a ``\``. +## +## s: The string to escape. +## +## Returns: An escaped version of *s* that has the structure of a valid +## :bro:type:`pattern`. +## +## .. bro:see:: merge_pattern string_to_pattern +## function convert_for_pattern%(s: string%): string %{ char* t = to_pat_str(s->Len(), (const char*)(s->Bytes())); @@ -1948,6 +3224,22 @@ function convert_for_pattern%(s: string%): string return ret; %} +## Converts a :bro:type:`string` into a :bro:type:`pattern`. +## +## s: The string to convert. +## +## convert: If true, *s* is first passed through the function +## :bro:id:`convert_for_pattern` to escape special characters of +## patterns. +## +## Returns: *s* as :bro:type:`pattern`. +## +## .. bro:see:: convert_for_pattern merge_pattern +## +## .. note:: +## +## This function must be called at Bro startup time, e.g., in the event +## :bro:id:`bro_init`. function string_to_pattern%(s: string, convert: bool%): pattern %{ if ( bro_start_network_time != 0.0 ) @@ -1975,354 +3267,337 @@ function string_to_pattern%(s: string, convert: bool%): pattern return new PatternVal(re); %} -# Precompile a pcap filter. -function precompile_pcap_filter%(id: PcapFilterID, s: string%): bool +## Formats a given time value according to a format string. +## +## fmt: The format string. See ``man strftime`` for the syntax. +## +## d: The time value. +## +## Returns: The time *d* formatted according to *fmt*. +function strftime%(fmt: string, d: time%) : string %{ - bool success = true; + static char buffer[128]; - loop_over_list(pkt_srcs, i) + time_t t = time_t(d); + + if ( strftime(buffer, 128, fmt->CheckString(), localtime(&t)) == 0 ) + return new StringVal(""); + + return new StringVal(buffer); + %} + + +## Parse a textual representation of a date/time value into a ``time`` type value. +## +## fmt: The format string used to parse the following *d* argument. See ``man strftime`` +## for the syntax. +## +## d: The string representing the time. +## +## Returns: The time value calculated from parsing *d* with *fmt*. +function strptime%(fmt: string, d: string%) : time + %{ + const time_t timeval = time_t(NULL); + struct tm t = *localtime(&timeval); + + if ( strptime(d->CheckString(), fmt->CheckString(), &t) == NULL ) { - pkt_srcs[i]->ClearErrorMsg(); + reporter->Warning("strptime conversion failed: fmt:%s d:%s", fmt->CheckString(), d->CheckString()); + return new Val(0.0, TYPE_TIME); + } - if ( ! pkt_srcs[i]->PrecompileFilter(id->ForceAsInt(), - s->CheckString()) ) + double ret = mktime(&t); + return new Val(ret, TYPE_TIME); + %} + + +# =========================================================================== +# +# Network Type Processing +# +# =========================================================================== + +## Masks an address down to the number of given upper bits. For example, +## ``mask_addr(1.2.3.4, 18)`` returns ``1.2.0.0``. +## +## a: The address to mask. +## +## top_bits_to_keep: The number of top bits to keep in *a*; must be greater +## than 0 and less than 33 for IPv4, or 129 for IPv6. +## +## Returns: The address *a* masked down to *top_bits_to_keep* bits. +## +## .. bro:see:: remask_addr +function mask_addr%(a: addr, top_bits_to_keep: count%): subnet + %{ + return new SubNetVal(a->AsAddr(), top_bits_to_keep); + %} + +## Takes some top bits (such as a subnet address) from one address and the other +## bits (intra-subnet part) from a second address and merges them to get a new +## address. 
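Because string_to_pattern and merge_pattern may only be called at startup, typical usage wraps them in bro_init. The sketch below is illustrative only and also exercises strptime, strftime, and mask_addr with arbitrary sample values::

	event bro_init()
		{
		# Escape metacharacters, then compile the result into a pattern.
		local p = string_to_pattern(convert_for_pattern("1.2.3.4"), F);
		print p in "addr=1.2.3.4;";          # T: the dots match literally

		# strptime()/strftime() round-trip and prefix masking.
		local t = strptime("%Y-%m-%d %H:%M:%S", "2012-10-19 15:12:56");
		print strftime("%Y-%m-%d", t);
		print mask_addr(1.2.3.4, 18);        # 1.2.0.0/18
		}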
This is useful for anonymizing
+## at subnet level while preserving serial scans.
+##
+## a1: The address to mask with *top_bits_from_a1*.
+##
+## a2: The address to take the remaining bits from.
+##
+## top_bits_from_a1: The number of top bits to keep in *a1*; must be greater
+##                   than 0 and less than 129. This value is always interpreted
+##                   relative to the IPv6 bit width (v4-mapped addresses start
+##                   at bit number 96).
+##
+## Returns: The address formed by the top *top_bits_from_a1* bits of *a1*
+##          and the remaining bits of *a2*.
+##
+## .. bro:see:: mask_addr
+function remask_addr%(a1: addr, a2: addr, top_bits_from_a1: count%): addr
+	%{
+	IPAddr addr1(a1->AsAddr());
+	addr1.Mask(top_bits_from_a1);
+	IPAddr addr2(a2->AsAddr());
+	addr2.ReverseMask(top_bits_from_a1);
+	return new AddrVal(addr1|addr2);
+	%}
+
+## Checks whether a given :bro:type:`port` has TCP as transport protocol.
+##
+## p: The :bro:type:`port` to check.
+##
+## Returns: True iff *p* is a TCP port.
+##
+## .. bro:see:: is_udp_port is_icmp_port
+function is_tcp_port%(p: port%): bool
+	%{
+	return new Val(p->IsTCP(), TYPE_BOOL);
+	%}
+
+## Checks whether a given :bro:type:`port` has UDP as transport protocol.
+##
+## p: The :bro:type:`port` to check.
+##
+## Returns: True iff *p* is a UDP port.
+##
+## .. bro:see:: is_icmp_port is_tcp_port
+function is_udp_port%(p: port%): bool
+	%{
+	return new Val(p->IsUDP(), TYPE_BOOL);
+	%}
+
+## Checks whether a given :bro:type:`port` has ICMP as transport protocol.
+##
+## p: The :bro:type:`port` to check.
+##
+## Returns: True iff *p* is an ICMP port.
+##
+## .. bro:see:: is_tcp_port is_udp_port
+function is_icmp_port%(p: port%): bool
+	%{
+	return new Val(p->IsICMP(), TYPE_BOOL);
+	%}
+
+%%{
+EnumVal* map_conn_type(TransportProto tp)
+	{
+	switch ( tp ) {
+	case TRANSPORT_UNKNOWN:
+		return new EnumVal(0, transport_proto);
+		break;
+
+	case TRANSPORT_TCP:
+		return new EnumVal(1, transport_proto);
+		break;
+
+	case TRANSPORT_UDP:
+		return new EnumVal(2, transport_proto);
+		break;
+
+	case TRANSPORT_ICMP:
+		return new EnumVal(3, transport_proto);
+		break;
+
+	default:
+		reporter->InternalError("bad connection type in map_conn_type()");
+	}
+
+	// Cannot be reached;
+	assert(false);
+	return 0; // Make compiler happy.
+	}
+%%}
+
+## Extracts the transport protocol from a connection.
+##
+## cid: The connection identifier.
+##
+## Returns: The transport protocol of the connection identified by *cid*.
+##
+## .. bro:see:: get_port_transport_proto
+##              get_orig_seq get_resp_seq
+function get_conn_transport_proto%(cid: conn_id%): transport_proto
+	%{
+	Connection* c = sessions->FindConnection(cid);
+	if ( ! c )
+		{
+		builtin_error("unknown connection id in get_conn_transport_proto()", cid);
+		return new EnumVal(0, transport_proto);
+		}
+
+	return map_conn_type(c->ConnTransport());
+	%}
+
+## Extracts the transport protocol from a :bro:type:`port`.
+##
+## p: The port.
+##
+## Returns: The transport protocol of the port *p*.
+##
+## .. bro:see:: get_conn_transport_proto
+##              get_orig_seq get_resp_seq
+function get_port_transport_proto%(p: port%): transport_proto
+	%{
+	return map_conn_type(p->PortType());
+	%}
+
+## Checks whether a connection is (still) active.
+##
+## c: The connection id to check.
+##
+## Returns: True if the connection identified by *c* exists.
+##
+## ..
bro:see:: lookup_connection +function connection_exists%(c: conn_id%): bool + %{ + if ( sessions->FindConnection(c) ) + return new Val(1, TYPE_BOOL); + else + return new Val(0, TYPE_BOOL); + %} + +## Returns the :bro:type:`connection` record for a given connection identifier. +## +## cid: The connection ID. +## +## Returns: The :bro:type:`connection` record for *cid*. If *cid* does not point +## to an existing connection, the function generates a run-time error +## and returns a dummy value. +## +## .. bro:see:: connection_exists +function lookup_connection%(cid: conn_id%): connection + %{ + Connection* conn = sessions->FindConnection(cid); + if ( conn ) + return conn->BuildConnVal(); + + builtin_error("connection ID not a known connection", cid); + + // Return a dummy connection record. + RecordVal* c = new RecordVal(connection_type); + + RecordVal* id_val = new RecordVal(conn_id); + id_val->Assign(0, new AddrVal((unsigned int) 0)); + id_val->Assign(1, new PortVal(ntohs(0), TRANSPORT_UDP)); + id_val->Assign(2, new AddrVal((unsigned int) 0)); + id_val->Assign(3, new PortVal(ntohs(0), TRANSPORT_UDP)); + c->Assign(0, id_val); + + RecordVal* orig_endp = new RecordVal(endpoint); + orig_endp->Assign(0, new Val(0, TYPE_COUNT)); + orig_endp->Assign(1, new Val(int(0), TYPE_COUNT)); + + RecordVal* resp_endp = new RecordVal(endpoint); + resp_endp->Assign(0, new Val(0, TYPE_COUNT)); + resp_endp->Assign(1, new Val(int(0), TYPE_COUNT)); + + c->Assign(1, orig_endp); + c->Assign(2, resp_endp); + + c->Assign(3, new Val(network_time, TYPE_TIME)); + c->Assign(4, new Val(0.0, TYPE_INTERVAL)); + c->Assign(5, new TableVal(string_set)); // service + c->Assign(6, new StringVal("")); // addl + c->Assign(7, new Val(0, TYPE_COUNT)); // hot + c->Assign(8, new StringVal("")); // history + + return c; + %} + +%%{ +#include "HTTP.h" + +const char* conn_id_string(Val* c) + { + Val* id = (*(c->AsRecord()))[0]; + const val_list* vl = id->AsRecord(); + + const IPAddr& orig_h = (*vl)[0]->AsAddr(); + uint32 orig_p = (*vl)[1]->AsPortVal()->Port(); + const IPAddr& resp_h = (*vl)[2]->AsAddr(); + uint32 resp_p = (*vl)[3]->AsPortVal()->Port(); + + return fmt("%s/%u -> %s/%u\n", orig_h.AsString().c_str(), orig_p, + resp_h.AsString().c_str(), resp_p); + } +%%} + +## Skips the data of the HTTP entity. +## +## c: The HTTP connection. +## +## is_orig: If true, the client data is skipped, and the server data otherwise. +## +## .. bro:see:: skip_smtp_data +function skip_http_entity_data%(c: connection, is_orig: bool%): any + %{ + AnalyzerID id = mgr.CurrentAnalyzer(); + if ( id ) + { + Analyzer* ha = c->FindAnalyzer(id); + + if ( ha ) { - reporter->Error("precompile_pcap_filter: %s", - pkt_srcs[i]->ErrorMsg()); - success = false; + if ( ha->GetTag() == AnalyzerTag::HTTP ) + static_cast(ha)->SkipEntityData(is_orig); + else + reporter->Error("non-HTTP analyzer associated with connection record"); } + else + reporter->Error("could not find analyzer for skip_http_entity_data"); + } - - return new Val(success, TYPE_BOOL); - %} - -# Install precompiled pcap filter. -function install_pcap_filter%(id: PcapFilterID%): bool - %{ - bool success = true; - - loop_over_list(pkt_srcs, i) - { - pkt_srcs[i]->ClearErrorMsg(); - - if ( ! 
pkt_srcs[i]->SetFilter(id->ForceAsInt()) ) - success = false; - } - - return new Val(success, TYPE_BOOL); - %} - -# If last pcap function failed, returns a descriptive error message -function pcap_error%(%): string - %{ - loop_over_list(pkt_srcs, i) - { - const char* err = pkt_srcs[i]->ErrorMsg(); - if ( *err ) - return new StringVal(err); - } - - return new StringVal("no error"); - %} - -# Install filter to drop packets from a source (addr/subnet) with a given -# probability (0.0-1.0) if none of the given TCP flags is set. -function install_src_addr_filter%(ip: addr, tcp_flags: count, prob: double%) : bool - %{ - sessions->GetPacketFilter()->AddSrc(ip, tcp_flags, prob); - return new Val(1, TYPE_BOOL); - %} - -function install_src_net_filter%(snet: subnet, tcp_flags: count, prob: double%) : bool - %{ - sessions->GetPacketFilter()->AddSrc(snet, tcp_flags, prob); - return new Val(1, TYPE_BOOL); - %} - -# Remove the filter for the source. -function uninstall_src_addr_filter%(ip: addr%) : bool - %{ - return new Val(sessions->GetPacketFilter()->RemoveSrc(ip), TYPE_BOOL); - %} - -function uninstall_src_net_filter%(snet: subnet%) : bool - %{ - return new Val(sessions->GetPacketFilter()->RemoveSrc(snet), TYPE_BOOL); - %} - -# Same for destination. -function install_dst_addr_filter%(ip: addr, tcp_flags: count, prob: double%) : bool - %{ - sessions->GetPacketFilter()->AddDst(ip, tcp_flags, prob); - return new Val(1, TYPE_BOOL); - %} - -function install_dst_net_filter%(snet: subnet, tcp_flags: count, prob: double%) : bool - %{ - sessions->GetPacketFilter()->AddDst(snet, tcp_flags, prob); - return new Val(1, TYPE_BOOL); - %} - -function uninstall_dst_addr_filter%(ip: addr%) : bool - %{ - return new Val(sessions->GetPacketFilter()->RemoveDst(ip), TYPE_BOOL); - %} - -function uninstall_dst_net_filter%(snet: subnet%) : bool - %{ - return new Val(sessions->GetPacketFilter()->RemoveDst(snet), TYPE_BOOL); - %} - -function checkpoint_state%(%) : bool - %{ - return new Val(persistence_serializer->WriteState(true), TYPE_BOOL); - %} - -function dump_config%(%) : bool - %{ - return new Val(persistence_serializer->WriteConfig(true), TYPE_BOOL); - %} - -function rescan_state%(%) : bool - %{ - return new Val(persistence_serializer->ReadAll(false, true), TYPE_BOOL); - %} - -function capture_events%(filename: string%) : bool - %{ - if ( ! event_serializer ) - event_serializer = new FileSerializer(); else - event_serializer->Close(); + reporter->Error("no analyzer associated with connection record"); - return new Val(event_serializer->Open( - (const char*) filename->CheckString()), TYPE_BOOL); + return 0; %} -function capture_state_updates%(filename: string%) : bool +## Unescapes all characters in a URI (decode every ``%xx`` group). +## +## URI: The URI to unescape. +## +## Returns: The unescaped URI with all ``%xx`` groups decoded. +## +## .. note:: +## +## Unescaping reserved characters may cause loss of information. RFC 2396: +## A URI is always in an "escaped" form, since escaping or unescaping a +## completed URI might change its semantics. Normally, the only time +## escape encodings can safely be made is when the URI is being created +## from its component parts. +function unescape_URI%(URI: string%): string %{ - if ( ! 
state_serializer ) - state_serializer = new FileSerializer(); - else - state_serializer->Close(); + const u_char* line = URI->Bytes(); + const u_char* const line_end = line + URI->Len(); - return new Val(state_serializer->Open( - (const char*) filename->CheckString()), TYPE_BOOL); - %} - -function connect%(ip: addr, p: port, our_class: string, retry: interval, ssl: bool%) : count - %{ - return new Val(uint32(remote_serializer->Connect(ip, p->Port(), - our_class->CheckString(), retry, ssl)), - TYPE_COUNT); - %} - -function disconnect%(p: event_peer%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->CloseConnection(id), TYPE_BOOL); - %} - -function request_remote_events%(p: event_peer, handlers: pattern%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->RequestEvents(id, handlers), - TYPE_BOOL); - %} - -function request_remote_sync%(p: event_peer, auth: bool%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->RequestSync(id, auth), TYPE_BOOL); - %} - -function request_remote_logs%(p: event_peer%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->RequestLogs(id), TYPE_BOOL); - %} - -function set_accept_state%(p: event_peer, accept: bool%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->SetAcceptState(id, accept), - TYPE_BOOL); - %} - -function set_compression_level%(p: event_peer, level: count%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->SetCompressionLevel(id, level), - TYPE_BOOL); - %} - -function listen%(ip: addr, p: port, ssl: bool %) : bool - %{ - return new Val(remote_serializer->Listen(ip, p->Port(), ssl), TYPE_BOOL); - %} - -function is_remote_event%(%) : bool - %{ - return new Val(mgr.CurrentSource() != SOURCE_LOCAL, TYPE_BOOL); - %} - -function send_state%(p: event_peer%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(persistence_serializer->SendState(id, true), TYPE_BOOL); - %} - -# Send the value of the given global ID to the peer (which might -# then install it locally). -function send_id%(p: event_peer, id: string%) : bool - %{ - RemoteSerializer::PeerID pid = p->AsRecordVal()->Lookup(0)->AsCount(); - - ID* i = global_scope()->Lookup(id->CheckString()); - if ( ! i ) - { - reporter->Error("send_id: no global id %s", id->CheckString()); - return new Val(0, TYPE_BOOL); - } - - SerialInfo info(remote_serializer); - return new Val(remote_serializer->SendID(&info, pid, *i), TYPE_BOOL); - %} - -# Gracely shut down communication. -function terminate_communication%(%) : bool - %{ - return new Val(remote_serializer->Terminate(), TYPE_BOOL); - %} - -function complete_handshake%(p: event_peer%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->CompleteHandshake(id), TYPE_BOOL); - %} - -function send_ping%(p: event_peer, seq: count%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->SendPing(id, seq), TYPE_BOOL); - %} - -function send_current_packet%(p: event_peer%) : bool - %{ - Packet pkt(""); - - if ( ! current_pktsrc || - ! 
current_pktsrc->GetCurrentPacket(&pkt.hdr, &pkt.pkt) ) - return new Val(0, TYPE_BOOL); - - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - - pkt.time = pkt.hdr->ts.tv_sec + double(pkt.hdr->ts.tv_usec) / 1e6; - pkt.hdr_size = current_pktsrc->HdrSize(); - pkt.link_type = current_pktsrc->LinkType(); - - SerialInfo info(remote_serializer); - return new Val(remote_serializer->SendPacket(&info, id, pkt), TYPE_BOOL); - %} - -function do_profiling%(%) : bool - %{ - if ( profiling_logger ) - profiling_logger->Log(); - - return new Val(1, TYPE_BOOL); - %} - -function get_event_peer%(%) : event_peer - %{ - SourceID src = mgr.CurrentSource(); - - if ( src == SOURCE_LOCAL ) - { - RecordVal* p = mgr.GetLocalPeerVal(); - Ref(p); - return p; - } - - if ( ! remote_serializer ) - reporter->InternalError("remote_serializer not initialized"); - - Val* v = remote_serializer->GetPeerVal(src); - if ( ! v ) - { - reporter->Error("peer %d does not exist anymore", int(src)); - RecordVal* p = mgr.GetLocalPeerVal(); - Ref(p); - return p; - } - - return v; - %} - -function get_local_event_peer%(%) : event_peer - %{ - RecordVal* p = mgr.GetLocalPeerVal(); - Ref(p); - return p; - %} - - -function send_capture_filter%(p: event_peer, s: string%) : bool - %{ - RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); - return new Val(remote_serializer->SendCaptureFilter(id, s->CheckString()), TYPE_BOOL); - %} - -function make_connection_persistent%(c: connection%) : bool - %{ - c->MakePersistent(); - return new Val(1, TYPE_BOOL); - %} - -function is_local_interface%(ip: addr%) : bool - %{ - static uint32* addrs; - static int len = -1; - - if ( len < 0 ) - { - char host[MAXHOSTNAMELEN]; - - strcpy(host, "localhost"); - gethostname(host, MAXHOSTNAMELEN); - host[MAXHOSTNAMELEN-1] = '\0'; - - struct hostent* ent = gethostbyname(host); - - for ( len = 0; ent->h_addr_list[len]; ++len ) - ; - - addrs = new uint32[len + 1]; - for ( int i = 0; i < len; i++ ) - addrs[i] = *(uint32*) ent->h_addr_list[i]; - - addrs[len++] = 0x0100007f; // 127.0.0.1 - } - -#ifdef BROv6 - if ( ! is_v4_addr(ip) ) - { - builtin_error("is_local_interface() only supports IPv4 addresses"); - return new Val(0, TYPE_BOOL); - } - - uint32 ip4 = to_v4_addr(ip); -#else - uint32 ip4 = ip; -#endif - - for ( int i = 0; i < len; i++ ) - if ( addrs[i] == ip4 ) - return new Val(1, TYPE_BOOL); - - return new Val(0, TYPE_BOOL); + return new StringVal(unescape_URI(line, line_end, 0)); %} +## Writes the current packet to a file. +## +## file_name: The name of the file to write the packet to. +## +## Returns: True on success. +## +## .. bro:see:: dump_packet get_current_packet send_current_packet function dump_current_packet%(file_name: string%) : bool %{ const struct pcap_pkthdr* hdr; @@ -2341,6 +3616,12 @@ function dump_current_packet%(file_name: string%) : bool return new Val(! addl_pkt_dumper->IsError(), TYPE_BOOL); %} +## Returns the currently processed PCAP packet. +## +## Returns: The currently processed packet, which is a record +## containing the timestamp, ``snaplen``, and packet data. +## +## .. bro:see:: dump_current_packet dump_packet send_current_packet function get_current_packet%(%) : pcap_packet %{ const struct pcap_pkthdr* hdr; @@ -2367,6 +3648,15 @@ function get_current_packet%(%) : pcap_packet return pkt; %} +## Writes a given packet to a file. +## +## pkt: The PCAP packet. +## +## file_name: The name of the file to write *pkt* to. +## +## Returns: True on success +## +## .. 
bro:see:: get_current_packet dump_current_packet send_current_packet function dump_packet%(pkt: pcap_packet, file_name: string%) : bool %{ struct pcap_pkthdr hdr; @@ -2386,425 +3676,6 @@ function dump_packet%(pkt: pcap_packet, file_name: string%) : bool return new Val(addl_pkt_dumper->IsError(), TYPE_BOOL); %} -# The return value is the old size of the vector. -# ### Need better type checking on the argument here and in subsequent -# vector builtins -- probably need to update "bif" code generator. -function resize%(aggr: any, newsize: count%) : count - %{ - if ( aggr->Type()->Tag() != TYPE_VECTOR ) - { - builtin_error("resize() operates on vectors"); - return 0; - } - - return new Val(aggr->AsVectorVal()->Resize(newsize), TYPE_COUNT); - %} - -# Returns true if any element is T. -function any_set%(v: any%) : bool - %{ - if ( v->Type()->Tag() != TYPE_VECTOR || - v->Type()->YieldType()->Tag() != TYPE_BOOL ) - { - builtin_error("any_set() requires vector of bool"); - return new Val(false, TYPE_BOOL); - } - - VectorVal* vv = v->AsVectorVal(); - for ( unsigned int i = 0; i < vv->Size(); ++i ) - if ( vv->Lookup(i) && vv->Lookup(i)->AsBool() ) - return new Val(true, TYPE_BOOL); - - return new Val(false, TYPE_BOOL); - %} - -# Returns true if all elements are T (missing counts as F). -function all_set%(v: any%) : bool - %{ - if ( v->Type()->Tag() != TYPE_VECTOR || - v->Type()->YieldType()->Tag() != TYPE_BOOL ) - { - builtin_error("all_set() requires vector of bool"); - return new Val(false, TYPE_BOOL); - } - - VectorVal* vv = v->AsVectorVal(); - for ( unsigned int i = 0; i < vv->Size(); ++i ) - if ( ! vv->Lookup(i) || ! vv->Lookup(i)->AsBool() ) - return new Val(false, TYPE_BOOL); - - return new Val(true, TYPE_BOOL); - %} - -# sort() takes a vector and comparison function on the elements and -# sorts the vector in place. It returns the original vector. - -%%{ -static Func* sort_function_comp = 0; -static Val** index_map = 0; // used for indirect sorting to support order() - -bool sort_function(Val* a, Val* b) - { - // Sort missing values as "high". - if ( ! a ) - return 0; - if ( ! b ) - return 1; - - val_list sort_func_args; - sort_func_args.append(a->Ref()); - sort_func_args.append(b->Ref()); - - Val* result = sort_function_comp->Call(&sort_func_args); - int int_result = result->CoerceToInt(); - Unref(result); - - sort_func_args.remove_nth(1); - sort_func_args.remove_nth(0); - - return int_result < 0; - } - -bool indirect_sort_function(int a, int b) - { - return sort_function(index_map[a], index_map[b]); - } - -bool int_sort_function (Val* a, Val* b) - { - if ( ! a ) - return 0; - if ( ! b ) - return 1; - - int ia = a->CoerceToInt(); - int ib = b->CoerceToInt(); - - return ia < ib; - } - -bool indirect_int_sort_function(int a, int b) - { - return int_sort_function(index_map[a], index_map[b]); - } -%%} - -function sort%(v: any, ...%) : any - %{ - v->Ref(); // we always return v - - if ( v->Type()->Tag() != TYPE_VECTOR ) - { - builtin_error("sort() requires vector"); - return v; - } - - BroType* elt_type = v->Type()->YieldType(); - Func* comp = 0; - - if ( @ARG@.length() > 2 ) - builtin_error("sort() called with extraneous argument"); - - if ( @ARG@.length() == 2 ) - { - Val* comp_val = @ARG@[1]; - if ( ! IsFunc(comp_val->Type()->Tag()) ) - { - builtin_error("second argument to sort() needs to be comparison function"); - return v; - } - - comp = comp_val->AsFunc(); - } - - if ( ! comp && ! 
IsIntegral(elt_type->Tag()) ) - builtin_error("comparison function required for sort() with non-integral types"); - - vector& vv = *v->AsVector(); - - if ( comp ) - { - FuncType* comp_type = comp->FType()->AsFuncType(); - if ( comp_type->YieldType()->Tag() != TYPE_INT || - ! comp_type->ArgTypes()->AllMatch(elt_type, 0) ) - { - builtin_error("invalid comparison function in call to sort()"); - return v; - } - - sort_function_comp = comp; - - sort(vv.begin(), vv.end(), sort_function); - } - else - sort(vv.begin(), vv.end(), int_sort_function); - - return v; - %} - -function order%(v: any, ...%) : index_vec - %{ - VectorVal* result_v = - new VectorVal(new VectorType(base_type(TYPE_COUNT))); - - if ( v->Type()->Tag() != TYPE_VECTOR ) - { - builtin_error("order() requires vector"); - return result_v; - } - - BroType* elt_type = v->Type()->YieldType(); - Func* comp = 0; - - if ( @ARG@.length() > 2 ) - builtin_error("order() called with extraneous argument"); - - if ( @ARG@.length() == 2 ) - { - Val* comp_val = @ARG@[1]; - if ( ! IsFunc(comp_val->Type()->Tag()) ) - { - builtin_error("second argument to order() needs to be comparison function"); - return v; - } - - comp = comp_val->AsFunc(); - } - - if ( ! comp && ! IsIntegral(elt_type->Tag()) ) - builtin_error("comparison function required for sort() with non-integral types"); - - vector& vv = *v->AsVector(); - int n = vv.size(); - - // Set up initial mapping of indices directly to corresponding - // elements. We stay zero-based until after the sorting. - vector ind_vv(n); - index_map = new Val*[n]; - int i; - for ( i = 0; i < n; ++i ) - { - ind_vv[i] = i; - index_map[i] = vv[i]; - } - - if ( comp ) - { - FuncType* comp_type = comp->FType()->AsFuncType(); - if ( comp_type->YieldType()->Tag() != TYPE_INT || - ! comp_type->ArgTypes()->AllMatch(elt_type, 0) ) - { - builtin_error("invalid comparison function in call to sort()"); - return v; - } - - sort_function_comp = comp; - - sort(ind_vv.begin(), ind_vv.end(), indirect_sort_function); - } - else - sort(ind_vv.begin(), ind_vv.end(), indirect_int_sort_function); - - delete [] index_map; - index_map = 0; - - // Now spin through ind_vv to read out the rearrangement, - // adjusting indices as we do so. - for ( i = 0; i < n; ++i ) - { - int ind = ind_vv[i]; - result_v->Assign(i, new Val(ind, TYPE_COUNT), 0); - } - - return result_v; - %} - -%%{ -// Experimental code to add support for IDMEF XML output based on -// notices. For now, we're implementing it as a builtin you can call on an -// notices record. 
- -#ifdef USE_IDMEF -extern "C" { -#include -} -#endif - -#include - -char* port_to_string(PortVal* port) - { - char buf[256]; // to hold sprintf results on port numbers - snprintf(buf, sizeof(buf), "%u", port->Port()); - return copy_string(buf); - } - -%%} - -function generate_idmef%(src_ip: addr, src_port: port, - dst_ip: addr, dst_port: port%) : bool - %{ -#ifdef USE_IDMEF - xmlNodePtr message = - newIDMEF_Message(newAttribute("version","1.0"), - newAlert(newCreateTime(NULL), - newSource( - newNode(newAddress( - newAttribute("category","ipv4-addr"), - newSimpleElement("address", - copy_string(dotted_addr(src_ip))), - NULL), NULL), - newService( - newSimpleElement("port", - port_to_string(src_port)), - NULL), NULL), - newTarget( - newNode(newAddress( - newAttribute("category","ipv4-addr"), - newSimpleElement("address", - copy_string(dotted_addr(dst_ip))), - NULL), NULL), - newService( - newSimpleElement("port", - port_to_string(dst_port)), - NULL), NULL), NULL), NULL); - - // if ( validateCurrentDoc() ) - printCurrentMessage(stderr); - return new Val(1, TYPE_BOOL); -#else - builtin_error("Bro was not configured for IDMEF support"); - return new Val(0, TYPE_BOOL); -#endif - %} - -function dump_rule_stats%(f: file%): bool - %{ - if ( rule_matcher ) - rule_matcher->DumpStats(f); - - return new Val(1, TYPE_BOOL); - %} - -function bro_is_terminating%(%): bool - %{ - return new Val(terminating, TYPE_BOOL); - %} - -function rotate_file%(f: file%): rotate_info - %{ - RecordVal* info = f->Rotate(); - if ( info ) - return info; - - // Record indicating error. - info = new RecordVal(rotate_info); - info->Assign(0, new StringVal("")); - info->Assign(1, new StringVal("")); - info->Assign(2, new Val(0, TYPE_TIME)); - info->Assign(3, new Val(0, TYPE_TIME)); - - return info; - %} - -function rotate_file_by_name%(f: string%): rotate_info - %{ - RecordVal* info = new RecordVal(rotate_info); - - bool is_pkt_dumper = false; - bool is_addl_pkt_dumper = false; - - // Special case: one of current dump files. - if ( pkt_dumper && streq(pkt_dumper->FileName(), f->CheckString()) ) - { - is_pkt_dumper = true; - pkt_dumper->Close(); - } - - if ( addl_pkt_dumper && - streq(addl_pkt_dumper->FileName(), f->CheckString()) ) - { - is_addl_pkt_dumper = true; - addl_pkt_dumper->Close(); - } - - FILE* file = rotate_file(f->CheckString(), info); - if ( ! file ) - { - // Record indicating error. - info->Assign(0, new StringVal("")); - info->Assign(1, new StringVal("")); - info->Assign(2, new Val(0, TYPE_TIME)); - info->Assign(3, new Val(0, TYPE_TIME)); - return info; - } - - fclose(file); - - if ( is_pkt_dumper ) - { - info->Assign(2, new Val(pkt_dumper->OpenTime(), TYPE_TIME)); - pkt_dumper->Open(); - } - - if ( is_addl_pkt_dumper ) - info->Assign(2, new Val(addl_pkt_dumper->OpenTime(), TYPE_TIME)); - - return info; - %} - -function calc_next_rotate%(i: interval%) : interval - %{ - const char* base_time = log_rotate_base_time ? 
- log_rotate_base_time->AsString()->CheckString() : 0; - return new Val(calc_next_rotate(i, base_time), TYPE_INTERVAL); - %} - -function file_size%(f: string%) : double - %{ - struct stat s; - - if ( stat(f->CheckString(), &s) < 0 ) - return new Val(-1.0, TYPE_DOUBLE); - - return new Val(double(s.st_size), TYPE_DOUBLE); - %} - -function strftime%(fmt: string, d: time%) : string - %{ - static char buffer[128]; - - time_t t = time_t(d); - - if ( strftime(buffer, 128, fmt->CheckString(), localtime(&t)) == 0 ) - return new StringVal(""); - - return new StringVal(buffer); - %} - -function match_signatures%(c: connection, pattern_type: int, s: string, - bol: bool, eol: bool, - from_orig: bool, clear: bool%) : bool - %{ - if ( ! rule_matcher ) - return new Val(0, TYPE_BOOL); - - c->Match((Rule::PatternType) pattern_type, s->Bytes(), s->Len(), - from_orig, bol, eol, clear); - - return new Val(1, TYPE_BOOL); - %} - -function gethostname%(%) : string - %{ - char buffer[MAXHOSTNAMELEN]; - if ( gethostname(buffer, MAXHOSTNAMELEN) < 0 ) - strcpy(buffer, ""); - - buffer[MAXHOSTNAMELEN-1] = '\0'; - return new StringVal(buffer); - %} - %%{ #include "DNS_Mgr.h" #include "Trigger.h" @@ -2870,8 +3741,15 @@ private: }; %%} -# These two functions issue DNS lookups asynchronously and delay the -# function result. Therefore, they can only be called inside a when-condition. +## Issues an asynchronous reverse DNS lookup and delays the function result. +## This function can therefore only be called inside a ``when`` condition, +## e.g., ``when ( local host = lookup_addr(10.0.0.1) ) { f(host); }``. +## +## host: The IP address to lookup. +## +## Returns: The DNS name of *host*. +## +## .. bro:see:: lookup_hostname function lookup_addr%(host: addr%) : string %{ // FIXME: It should be easy to adapt the function to synchronous @@ -2887,32 +3765,20 @@ function lookup_addr%(host: addr%) : string frame->SetDelayed(); trigger->Hold(); -#ifdef BROv6 - if ( ! is_v4_addr(host) ) - { - // FIXME: This is a temporary work-around until we get this - // fixed. We warn the user once, and always trigger a timeout. - // Ticket #355 records the problem. - static bool warned = false; - if ( ! warned ) - { - reporter->Warning("lookup_addr() only supports IPv4 addresses currently"); - warned = true; - } - - trigger->Timeout(); - return 0; - } - - dns_mgr->AsyncLookupAddr(to_v4_addr(host), + dns_mgr->AsyncLookupAddr(host->AsAddr(), new LookupHostCallback(trigger, frame->GetCall(), true)); -#else - dns_mgr->AsyncLookupAddr(host, - new LookupHostCallback(trigger, frame->GetCall(), true)); -#endif return 0; %} +## Issues an asynchronous DNS lookup and delays the function result. +## This function can therefore only be called inside a ``when`` condition, +## e.g., ``when ( local h = lookup_hostname("www.bro-ids.org") ) { f(h); }``. +## +## host: The hostname to lookup. +## +## Returns: A set of DNS A and AAAA records associated with *host*. +## +## .. bro:see:: lookup_addr function lookup_hostname%(host: string%) : addr_set %{ // FIXME: Is should be easy to adapt the function to synchronous @@ -2933,110 +3799,6 @@ function lookup_hostname%(host: string%) : addr_set return 0; %} -# Stop Bro's packet processing. -function suspend_processing%(%) : any - %{ - net_suspend_processing(); - return 0; - %} - -# Resume Bro's packet processing. -function continue_processing%(%) : any - %{ - net_continue_processing(); - return 0; - %} - -%%{ -#include "DPM.h" -%%} - -# Schedule analyzer for a future connection. 
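Both DNS lookups are asynchronous and therefore only usable inside a when condition, as noted above. An illustrative sketch (the hostname is the one from the documentation; the timeout value is arbitrary)::

	event bro_init()
		{
		when ( local name = lookup_addr(10.0.0.1) )
			{
			print fmt("10.0.0.1 resolves to %s", name);
			}

		when ( local hosts = lookup_hostname("www.bro-ids.org") )
			{
			for ( h in hosts )
				print h;
			}
		timeout 10 secs
			{
			print "lookup_hostname() timed out";
			}
		}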
-function expect_connection%(orig: addr, resp: addr, resp_p: port, - analyzer: count, tout: interval%) : bool - %{ - dpm->ExpectConnection(orig, resp, resp_p->Port(), resp_p->PortType(), - (AnalyzerTag::Tag) analyzer, tout, 0); - return new Val(1, TYPE_BOOL); - %} - -# Disables the analyzer which raised the current event (if the analyzer -# belongs to the given connection). -function disable_analyzer%(cid: conn_id, aid: count%) : bool - %{ - Connection* c = sessions->FindConnection(cid); - if ( ! c ) - { - reporter->Error("cannot find connection"); - return new Val(0, TYPE_BOOL); - } - - Analyzer* a = c->FindAnalyzer(aid); - if ( ! a ) - { - reporter->Error("connection does not have analyzer specified to disable"); - return new Val(0, TYPE_BOOL); - } - - a->Remove(); - return new Val(1, TYPE_BOOL); - %} - -# Translate analyzer type into an ASCII string. -function analyzer_name%(aid: count%) : string - %{ - return new StringVal(Analyzer::GetTagName((AnalyzerTag::Tag) aid)); - %} - -function lookup_ID%(id: string%) : any - %{ - ID* i = global_scope()->Lookup(id->CheckString()); - if ( ! i ) - return new StringVal(""); - - if ( ! i->ID_Val() ) - return new StringVal(""); - - return i->ID_Val()->Ref(); - %} - -# Stop propagating &synchronized accesses. -function suspend_state_updates%(%) : any - %{ - if ( remote_serializer ) - remote_serializer->SuspendStateUpdates(); - return 0; - %} - -# Resume propagating &synchronized accesses. -function resume_state_updates%(%) : any - %{ - if ( remote_serializer ) - remote_serializer->ResumeStateUpdates(); - return 0; - %} - -# Return ID of analyzer which raised current event, or 0 if none. -function current_analyzer%(%) : count - %{ - return new Val(mgr.CurrentAnalyzer(), TYPE_COUNT); - %} - -# Returns Bro's process id. -function getpid%(%) : count - %{ - return new Val(getpid(), TYPE_COUNT); - %} -%%{ -#include -%%} - -function syslog%(s: string%): any - %{ - reporter->Syslog("%s", s->CheckString()); - return 0; - %} - %%{ #ifdef USE_GEOIP extern "C" { @@ -3051,7 +3813,7 @@ static GeoIP* open_geoip_db(GeoIPDBTypes type) geoip = GeoIP_open_type(type, GEOIP_MEMORY_CACHE); if ( ! geoip ) - reporter->Warning("Failed to open GeoIP database: %s", + reporter->Info("Failed to open GeoIP database: %s", GeoIPDBFileName[type]); return geoip; } @@ -3059,7 +3821,14 @@ static GeoIP* open_geoip_db(GeoIPDBTypes type) #endif %%} -# Return a record with the city, region, and country of an IPv4 address. +## Performs a geo-lookup of an IP address. +## Requires Bro to be built with ``libgeoip``. +## +## a: The IP address to lookup. +## +## Returns: A record with country, region, city, latitude, and longitude. +## +## .. bro:see:: lookup_asn function lookup_location%(a: addr%) : geo_location %{ RecordVal* location = new RecordVal(geo_location); @@ -3084,13 +3853,11 @@ function lookup_location%(a: addr%) : geo_location if ( ! geoip ) builtin_error("Can't initialize GeoIP City/Country database"); else - reporter->Warning("Fell back to GeoIP Country database"); + reporter->Info("Fell back to GeoIP Country database"); } else have_city_db = true; -#ifdef BROv6 - #ifdef HAVE_GEOIP_CITY_EDITION_REV0_V6 geoip_v6 = open_geoip_db(GEOIP_CITY_EDITION_REV0_V6); if ( geoip_v6 ) @@ -3103,16 +3870,13 @@ function lookup_location%(a: addr%) : geo_location #endif if ( ! geoip_v6 ) builtin_error("Can't initialize GeoIPv6 City/Country database"); -#endif } -#ifdef BROv6 - #ifdef HAVE_GEOIP_COUNTRY_EDITION_V6 - if ( geoip_v6 && ! 
is_v4_addr(a) ) + if ( geoip_v6 && a->AsAddr().GetFamily() == IPv6 ) { geoipv6_t ga; - memcpy(&ga, a, 16); + a->AsAddr().CopyIPv6(&ga); if ( have_cityv6_db ) gir = GeoIP_record_by_ipnum_v6(geoip_v6, ga); else @@ -3121,25 +3885,16 @@ function lookup_location%(a: addr%) : geo_location else #endif - if ( geoip && is_v4_addr(a) ) + if ( geoip && a->AsAddr().GetFamily() == IPv4 ) { - uint32 addr = to_v4_addr(a); + const uint32* bytes; + a->AsAddr().GetBytes(&bytes); if ( have_city_db ) - gir = GeoIP_record_by_ipnum(geoip, ntohl(addr)); + gir = GeoIP_record_by_ipnum(geoip, ntohl(*bytes)); else - cc = GeoIP_country_code_by_ipnum(geoip, ntohl(addr)); + cc = GeoIP_country_code_by_ipnum(geoip, ntohl(*bytes)); } -#else // not BROv6 - if ( geoip ) - { - if ( have_city_db ) - gir = GeoIP_record_by_ipnum(geoip, ntohl(a)); - else - cc = GeoIP_country_code_by_ipnum(geoip, ntohl(a)); - } -#endif - if ( gir ) { if ( gir->country_code ) @@ -3187,6 +3942,14 @@ function lookup_location%(a: addr%) : geo_location return location; %} +## Performs an AS lookup of an IP address. +## Requires Bro to be built with ``libgeoip``. +## +## a: The IP address to lookup. +## +## Returns: The number of the AS that contains *a*. +## +## .. bro:see:: lookup_location function lookup_asn%(a: addr%) : count %{ #ifdef USE_GEOIP @@ -3204,28 +3967,23 @@ function lookup_asn%(a: addr%) : count if ( geoip_asn ) { -#ifdef BROv6 - // IPv6 support showed up in 1.4.5. #ifdef HAVE_GEOIP_COUNTRY_EDITION_V6 - if ( ! is_v4_addr(a) ) + if ( a->AsAddr().GetFamily() == IPv6 ) { geoipv6_t ga; - memcpy(&ga, a, 16); + a->AsAddr().CopyIPv6(&ga); gir = GeoIP_name_by_ipnum_v6(geoip_asn, ga); } else #endif - if ( is_v4_addr(a) ) + if ( a->AsAddr().GetFamily() == IPv4 ) { - uint32 addr = to_v4_addr(a); - gir = GeoIP_name_by_ipnum(geoip_asn, ntohl(addr)); + const uint32* bytes; + a->AsAddr().GetBytes(&bytes); + gir = GeoIP_name_by_ipnum(geoip_asn, ntohl(*bytes)); } - -#else // not BROv6 - gir = GeoIP_name_by_ipnum(geoip_asn, ntohl(a)); -#endif } if ( gir ) @@ -3251,185 +4009,6 @@ function lookup_asn%(a: addr%) : count return new Val(0, TYPE_COUNT); %} -# Returns true if connection has been received externally. -function is_external_connection%(c: connection%) : bool - %{ - return new Val(c && c->IsExternal(), TYPE_BOOL); - %} - -# Function equivalent of the &disable_print_hook attribute. -function disable_print_hook%(f: file%): any - %{ - f->DisablePrintHook(); - return 0; - %} - -# Function equivalent of the &raw_output attribute. -function enable_raw_output%(f: file%): any - %{ - f->EnableRawOutput(); - return 0; - %} - -%%{ -#ifdef HAVE_LIBMAGIC -extern "C" { -#include -} -#endif -%%} - -function identify_data%(data: string, return_mime: bool%): string - %{ - const char* descr = ""; - -#ifdef HAVE_LIBMAGIC - static magic_t magic_mime = 0; - static magic_t magic_descr = 0; - - magic_t* magic = return_mime ? &magic_mime : &magic_descr; - - if( ! *magic ) - { - *magic = magic_open(return_mime ? MAGIC_MIME : MAGIC_NONE); - - if ( ! 
*magic ) - { - reporter->Error("can't init libmagic: %s", magic_error(*magic)); - return new StringVal(""); - } - - if ( magic_load(*magic, 0) < 0 ) - { - reporter->Error("can't load magic file: %s", magic_error(*magic)); - magic_close(*magic); - *magic = 0; - return new StringVal(""); - } - } - - descr = magic_buffer(*magic, data->Bytes(), data->Len()); -#endif - - return new StringVal(descr); - %} - -function enable_event_group%(group: string%) : any - %{ - event_registry->EnableGroup(group->CheckString(), true); - return 0; - %} - -function disable_event_group%(group: string%) : any - %{ - event_registry->EnableGroup(group->CheckString(), false); - return 0; - %} - - -%%{ -#include -static map entropy_states; -%%} - -function find_entropy%(data: string%): entropy_test_result - %{ - double montepi, scc, ent, mean, chisq; - montepi = scc = ent = mean = chisq = 0.0; - RecordVal* ent_result = new RecordVal(entropy_test_result); - RandTest *rt = new RandTest(); - - rt->add((char*) data->Bytes(), data->Len()); - rt->end(&ent, &chisq, &mean, &montepi, &scc); - delete rt; - - ent_result->Assign(0, new Val(ent, TYPE_DOUBLE)); - ent_result->Assign(1, new Val(chisq, TYPE_DOUBLE)); - ent_result->Assign(2, new Val(mean, TYPE_DOUBLE)); - ent_result->Assign(3, new Val(montepi, TYPE_DOUBLE)); - ent_result->Assign(4, new Val(scc, TYPE_DOUBLE)); - return ent_result; - %} - -function entropy_test_init%(index: any%): bool - %{ - BroString* s = convert_index_to_string(index); - int status = 0; - - if ( entropy_states.count(*s) < 1 ) - { - entropy_states[*s] = new RandTest(); - status = 1; - } - - delete s; - return new Val(status, TYPE_BOOL); - %} - -function entropy_test_add%(index: any, data: string%): bool - %{ - BroString* s = convert_index_to_string(index); - int status = 0; - - if ( entropy_states.count(*s) > 0 ) - { - entropy_states[*s]->add((char*) data->Bytes(), data->Len()); - status = 1; - } - - delete s; - return new Val(status, TYPE_BOOL); - %} - -function entropy_test_finish%(index: any%): entropy_test_result - %{ - BroString* s = convert_index_to_string(index); - double montepi, scc, ent, mean, chisq; - montepi = scc = ent = mean = chisq = 0.0; - RecordVal* ent_result = new RecordVal(entropy_test_result); - - if ( entropy_states.count(*s) > 0 ) - { - RandTest *rt = entropy_states[*s]; - rt->end(&ent, &chisq, &mean, &montepi, &scc); - entropy_states.erase(*s); - delete rt; - } - - ent_result->Assign(0, new Val(ent, TYPE_DOUBLE)); - ent_result->Assign(1, new Val(chisq, TYPE_DOUBLE)); - ent_result->Assign(2, new Val(mean, TYPE_DOUBLE)); - ent_result->Assign(3, new Val(montepi, TYPE_DOUBLE)); - ent_result->Assign(4, new Val(scc, TYPE_DOUBLE)); - - delete s; - return ent_result; - %} - -function bro_has_ipv6%(%) : bool - %{ -#ifdef BROv6 - return new Val(1, TYPE_BOOL); -#else - return new Val(0, TYPE_BOOL); -#endif - %} - -function unique_id%(prefix: string%) : string - %{ - char tmp[20]; - uint64 uid = calculate_unique_id(UID_POOL_DEFAULT_SCRIPT); - return new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62, prefix->CheckString())); - %} - -function unique_id_from%(pool: int, prefix: string%) : string - %{ - pool += UID_POOL_CUSTOM_SCRIPT; // Make sure we don't conflict with internal pool. - - char tmp[20]; - uint64 uid = calculate_unique_id(pool); - return new StringVal(uitoa_n(uid, tmp, sizeof(tmp), 62, prefix->CheckString())); - %} %%{ #include #include @@ -3455,6 +4034,21 @@ X509* d2i_X509_(X509** px, const u_char** in, int len) %%} +## Verifies a certificate. 
+## +## der_cert: The X.509 certificate in DER format. +## +## cert_stack: Specifies a certificate chain to validate against, with index 0 +## typically being the root CA. Bro uses the Mozilla root CA list +## by default. +## +## root_certs: A list of additional root certificates that extends +## *cert_stack*. +## +## Returns: A status code of the verification which can be converted into an +## ASCII string via :bro:id:`x509_err2str`. +## +## .. bro:see:: x509_err2str function x509_verify%(der_cert: string, cert_stack: string_vec, root_certs: table_string_of_string%): count %{ X509_STORE* ctx = 0; @@ -3535,12 +4129,25 @@ function x509_verify%(der_cert: string, cert_stack: string_vec, root_certs: tabl return new Val((uint64) csc.error, TYPE_COUNT); %} +## Converts a certificate verification error code into an ASCII string. +## +## err_num: The error code. +## +## Returns: A string representation of *err_num*. +## +## .. bro:see:: x509_verify function x509_err2str%(err_num: count%): string %{ return new StringVal(X509_verify_cert_error_string(err_num)); %} -function NFS3::mode2string%(mode: count%): string +## Converts UNIX file permissions given by a mode to an ASCII string. +## +## mode: The permissions (an octal number like 0644 converted to decimal). +## +## Returns: A string representation of *mode* in the format +## ``rw[xsS]rw[xsS]rw[xtT]``. +function file_mode%(mode: count%): string %{ char str[12]; char *p = str; @@ -3626,41 +4233,1005 @@ function NFS3::mode2string%(mode: count%): string return new StringVal(str); %} -## Opens a program with popen() and writes a given string to the returned -## stream to send it to the opened process's stdin. -## program: a string naming the program to execute -## to_write: a string to pipe to the opened program's process over stdin -## Returns: F if popen'ing the program failed, else T -function piped_exec%(program: string, to_write: string%): bool +# =========================================================================== +# +# Controlling Analyzer Behavior +# +# =========================================================================== + +%%{ +#include "DPM.h" +%%} + +## Schedules an analyzer for a future connection from a given IP address and +## port. The function ignores the scheduling request if the connection did +## not occur within the specified time interval. +## +## orig: The IP address originating a connection in the future. +## +## resp: The IP address responding to a connection from *orig*. +## +## resp_p: The destination port at *resp*. +## +## analyzer: The analyzer ID. +## +## tout: The timeout interval after which to ignore the scheduling request. +## +## Returns: True (unconditionally). +## +## .. bro:see:: disable_analyzer analyzer_name +## +## .. todo:: The return value should be changed to any. +function expect_connection%(orig: addr, resp: addr, resp_p: port, + analyzer: count, tout: interval%) : any %{ - const char* prog = program->CheckString(); - - FILE* f = popen(prog, "w"); - if ( ! 
f ) - { - reporter->Error("Failed to popen %s", prog); - return new Val(0, TYPE_BOOL); - } - - const u_char* input_data = to_write->Bytes(); - int input_data_len = to_write->Len(); - - int bytes_written = fwrite(input_data, 1, input_data_len, f); - - pclose(f); - - if ( bytes_written != input_data_len ) - { - reporter->Error("Failed to write all given data to %s", prog); - return new Val(0, TYPE_BOOL); - } - + dpm->ExpectConnection(orig->AsAddr(), resp->AsAddr(), resp_p->Port(), + resp_p->PortType(), (AnalyzerTag::Tag) analyzer, tout, 0); return new Val(1, TYPE_BOOL); %} -## Enables the communication system. Note that by default, -## communication is off until explicitly enabled, and all other calls -## to communication-related BiFs' will be ignored until done so. +## Disables the analyzer which raised the current event (if the analyzer +## belongs to the given connection). +## +## cid: The connection identifier. +## +## aid: The analyzer ID. +## +## Returns: True if the connection identified by *cid* exists and has analyzer +## *aid*. +## +## .. bro:see:: expect_connection analyzer_name +function disable_analyzer%(cid: conn_id, aid: count%) : bool + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + { + reporter->Error("cannot find connection"); + return new Val(0, TYPE_BOOL); + } + + Analyzer* a = c->FindAnalyzer(aid); + if ( ! a ) + { + reporter->Error("connection does not have analyzer specified to disable"); + return new Val(0, TYPE_BOOL); + } + + a->Remove(); + return new Val(1, TYPE_BOOL); + %} + +## Translate an analyzer type to an ASCII string. +## +## aid: The analyzer ID. +## +## Returns: The analyzer *aid* as string. +## +## .. bro:see:: expect_connection disable_analyzer current_analyzer +function analyzer_name%(aid: count%) : string + %{ + return new StringVal(Analyzer::GetTagName((AnalyzerTag::Tag) aid)); + %} + +## Informs Bro that it should skip any further processing of the contents of +## a given connection. In particular, Bro will refrain from reassembling the +## TCP byte stream and from generating events relating to any analyzers that +## have been processing the connection. +## +## cid: The connection ID. +## +## Returns: False if *cid* does not point to an active connection, and true +## otherwise. +## +## .. note:: +## +## Bro will still generate connection-oriented events such as +## :bro:id:`connection_finished`. +function skip_further_processing%(cid: conn_id%): bool + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + return new Val(0, TYPE_BOOL); + + c->SetSkip(1); + return new Val(1, TYPE_BOOL); + %} + +## Controls whether packet contents belonging to a connection should be +## recorded (when ``-w`` option is provided on the command line). +## +## cid: The connection identifier. +## +## do_record: True to enable packet contents, and false to disable for the +## connection identified by *cid*. +## +## Returns: False if *cid* does not point to an active connection, and true +## otherwise. +## +## .. bro:see:: skip_further_processing +## +## .. note:: +## +## This is independent of whether Bro processes the packets of this +## connection, which is controlled separately by +## :bro:id:`skip_further_processing`. +## +## .. bro:see:: get_contents_file set_contents_file +function set_record_packets%(cid: conn_id, do_record: bool%): bool + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! 
c ) + return new Val(0, TYPE_BOOL); + + c->SetRecordPackets(do_record); + return new Val(1, TYPE_BOOL); + %} + +## Associates a file handle with a connection for writing TCP byte stream +## contents. +## +## cid: The connection ID. +## +## direction: Controls what sides of the connection to record. The argument can +## take one of the four values: +## +## - ``CONTENTS_NONE``: Stop recording the connection's content. +## - ``CONTENTS_ORIG``: Record the data sent by the connection +## originator (often the client). +## - ``CONTENTS_RESP``: Record the data sent by the connection +## responder (often the server). +## - ``CONTENTS_BOTH``: Record the data sent in both directions. +## Results in the two directions being +## intermixed in the file, in the order the +## data was seen by Bro. +## +## f: The file handle of the file to write the contents to. +## +## Returns: Returns false if *cid* does not point to an active connection, and +## true otherwise. +## +## .. note:: +## +## The data recorded to the file reflects the byte stream, not the +## contents of individual packets. Reordering and duplicates are +## removed. If any data is missing, the recording stops at the +## missing data; this can happen, e.g., due to an +## :bro:id:`ack_above_hole` event. +## +## .. bro:see:: get_contents_file set_record_packets +function set_contents_file%(cid: conn_id, direction: count, f: file%): bool + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + return new Val(0, TYPE_BOOL); + + c->GetRootAnalyzer()->SetContentsFile(direction, f); + return new Val(1, TYPE_BOOL); + %} + +## Returns the file handle of the contents file of a connection. +## +## cid: The connection ID. +## +## direction: Controls what sides of the connection to record. See +## :bro:id:`set_contents_file` for possible values. +## +## Returns: The :bro:type:`file` handle for the contents file of the +## connection identified by *cid*. If the connection exists +## but there is no contents file for *direction*, then the function +## generates an error and returns a file handle to ``stderr``. +## +## .. bro:see:: set_contents_file set_record_packets +function get_contents_file%(cid: conn_id, direction: count%): file + %{ + Connection* c = sessions->FindConnection(cid); + BroFile* f = c ? c->GetRootAnalyzer()->GetContentsFile(direction) : 0; + + if ( f ) + { + Ref(f); + return new Val(f); + } + + // Return some sort of error value. + if ( ! c ) + builtin_error("unknown connection id in get_contents_file()", cid); + else + builtin_error("no contents file for given direction"); + + return new Val(new BroFile(stderr, "-", "w")); + %} + +## Sets an individual inactivity timeout for a connection and thus +## overrides the global inactivity timeout. +## +## cid: The connection ID. +## +## t: The new inactivity timeout for the connection identified by *cid*. +## +## Returns: The previous timeout interval. +function set_inactivity_timeout%(cid: conn_id, t: interval%): interval + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + return new Val(0, TYPE_INTERVAL); + + double old_timeout = c->InactivityTimeout(); + c->SetInactivityTimeout(t); + + return new Val(old_timeout, TYPE_INTERVAL); + %} + +## Returns the state of the given login (Telnet or Rlogin) connection. +## +## cid: The connection ID. +## +## Returns: False if the connection is not active or is not tagged as a +## login analyzer. 
Otherwise the function returns the state, which can +## be one of: +## +## - ``LOGIN_STATE_AUTHENTICATE``: The connection is in its +## initial authentication dialog. +## - ``LOGIN_STATE_LOGGED_IN``: The analyzer believes the user has +## successfully authenticated. +## - ``LOGIN_STATE_SKIP``: The analyzer has skipped any further +## processing of the connection. +## - ``LOGIN_STATE_CONFUSED``: The analyzer has concluded that it +## does not correctly know the state of the connection, and/or +## the username associated with it. +## +## .. bro:see:: set_login_state +function get_login_state%(cid: conn_id%): count + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + return new Val(0, TYPE_BOOL); + + Analyzer* la = c->FindAnalyzer(AnalyzerTag::Login); + if ( ! la ) + return new Val(0, TYPE_BOOL); + + return new Val(int(static_cast(la)->LoginState()), + TYPE_COUNT); + %} + +## Sets the login state of a connection with a login analyzer. +## +## cid: The connection ID. +## +## new_state: The new state of the login analyzer. See +## :bro:id:`get_login_state` for possible values. +## +## Returns: Returns false if *cid* is not an active connection +## or is not tagged as a login analyzer, and true otherwise. +## +## .. bro:see:: get_login_state +function set_login_state%(cid: conn_id, new_state: count%): bool + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + return new Val(0, TYPE_BOOL); + + Analyzer* la = c->FindAnalyzer(AnalyzerTag::Login); + if ( ! la ) + return new Val(0, TYPE_BOOL); + + static_cast(la)->SetLoginState(login_state(new_state)); + return new Val(1, TYPE_BOOL); + %} + +%%{ +#include "TCP.h" +%%} + +## Get the originator sequence number of a TCP connection. Sequence numbers +## are absolute (i.e., they reflect the values seen directly in packet headers; +## they are not relative to the beginning of the connection). +## +## cid: The connection ID. +## +## Returns: The highest sequence number sent by a connection's originator, or 0 +## if *cid* does not point to an active TCP connection. +## +## .. bro:see:: get_resp_seq +function get_orig_seq%(cid: conn_id%): count + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + return new Val(0, TYPE_COUNT); + + if ( c->ConnTransport() != TRANSPORT_TCP ) + return new Val(0, TYPE_COUNT); + + Analyzer* tc = c->FindAnalyzer(AnalyzerTag::TCP); + if ( tc ) + return new Val(static_cast(tc)->OrigSeq(), + TYPE_COUNT); + else + { + reporter->Error("connection does not have TCP analyzer"); + return new Val(0, TYPE_COUNT); + } + %} + +## Get the responder sequence number of a TCP connection. Sequence numbers +## are absolute (i.e., they reflect the values seen directly in packet headers; +## they are not relative to the beginning of the connection). +## +## cid: The connection ID. +## +## Returns: The highest sequence number sent by a connection's responder, or 0 +## if *cid* does not point to an active TCP connection. +## +## .. bro:see:: get_orig_seq +function get_resp_seq%(cid: conn_id%): count + %{ + Connection* c = sessions->FindConnection(cid); + if ( ! c ) + return new Val(0, TYPE_COUNT); + + if ( c->ConnTransport() != TRANSPORT_TCP ) + return new Val(0, TYPE_COUNT); + + Analyzer* tc = c->FindAnalyzer(AnalyzerTag::TCP); + if ( tc ) + return new Val(static_cast(tc)->RespSeq(), + TYPE_COUNT); + else + { + reporter->Error("connection does not have TCP analyzer"); + return new Val(0, TYPE_COUNT); + } + %} + +%%{ +#include "SMTP.h" +%%} + +## Skips SMTP data until the next email in a connection. 
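+##
+## A hypothetical sketch of calling it from an SMTP event handler (the
+## trigger condition is purely illustrative)::
+##
+##     event smtp_data(c: connection, is_orig: bool, data: string)
+##         {
+##         if ( /not interesting/ in data )
+##             skip_smtp_data(c);
+##         }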
+## +## c: The SMTP connection. +## +## .. bro:see:: skip_http_entity_data +function skip_smtp_data%(c: connection%): any + %{ + Analyzer* sa = c->FindAnalyzer(AnalyzerTag::SMTP); + if ( sa ) + static_cast(sa)->SkipData(); + return 0; + %} + +## Enables all event handlers in a given group. One can tag event handlers with +## the :bro:attr:`&group` attribute to logically group them together, e.g, +## ``event foo() &group="bar"``. This function enables all event handlers that +## belong to such a group. +## +## group: The group. +## +## .. bro:see:: disable_event_group +function enable_event_group%(group: string%) : any + %{ + event_registry->EnableGroup(group->CheckString(), true); + return 0; + %} + +## Disables all event handlers in a given group. +## +## group: The group. +## +## .. bro:see:: enable_event_group +function disable_event_group%(group: string%) : any + %{ + event_registry->EnableGroup(group->CheckString(), false); + return 0; + %} + +# =========================================================================== +# +# Files and Directories +# +# =========================================================================== + +## Opens a file for writing. If a file with the same name already exists, this +## function overwrites it (as opposed to :bro:id:`open_for_append`). +## +## f: The path to the file. +## +## Returns: A :bro:type:`file` handle for subsequent operations. +## +## .. bro:see:: active_file open_for_append close write_file +## get_file_name set_buf flush_all mkdir enable_raw_output +function open%(f: string%): file + %{ + const char* file = f->CheckString(); + + if ( streq(file, "-") ) + return new Val(new BroFile(stdout, "-", "w")); + else + return new Val(new BroFile(file, "w")); + %} + +## Opens a file for writing or appending. If a file with the same name already +## exists, this function appends to it (as opposed to :bro:id:`open`). +## +## f: The path to the file. +## +## Returns: A :bro:type:`file` handle for subsequent operations. +## +## .. bro:see:: active_file open close write_file +## get_file_name set_buf flush_all mkdir enable_raw_output +function open_for_append%(f: string%): file + %{ + return new Val(new BroFile(f->CheckString(), "a")); + %} + +## Closes an open file and flushes any buffered content. +## +## f: A :bro:type:`file` handle to an open file. +## +## Returns: True on success. +## +## .. bro:see:: active_file open open_for_append write_file +## get_file_name set_buf flush_all mkdir enable_raw_output +function close%(f: file%): bool + %{ + return new Val(f->Close(), TYPE_BOOL); + %} + +## Writes data to an open file. +## +## f: A :bro:type:`file` handle to an open file. +## +## data: The data to write to *f*. +## +## Returns: True on success. +## +## .. bro:see:: active_file open open_for_append close +## get_file_name set_buf flush_all mkdir enable_raw_output +function write_file%(f: file, data: string%): bool + %{ + if ( ! f ) + return new Val(0, TYPE_BOOL); + + return new Val(f->Write((const char*) data->Bytes(), data->Len()), + TYPE_BOOL); + %} + +## Alters the buffering behavior of a file. +## +## f: A :bro:type:`file` handle to an open file. +## +## buffered: When true, *f* is fully buffered, i.e., bytes are saved in a +## buffer until the block size has been reached. When +## false, *f* is line buffered, i.e., bytes are saved up until a +## newline occurs. +## +## .. 
bro:see:: active_file open open_for_append close +## get_file_name write_file flush_all mkdir enable_raw_output +function set_buf%(f: file, buffered: bool%): any + %{ + f->SetBuf(buffered); + return new Val(0, TYPE_VOID); + %} + +## Flushes all open files to disk. +## +## Returns: True on success. +## +## .. bro:see:: active_file open open_for_append close +## get_file_name write_file set_buf mkdir enable_raw_output +function flush_all%(%): bool + %{ + return new Val(fflush(0) == 0, TYPE_BOOL); + %} + +## Creates a new directory. +## +## f: The directory name. +## +## Returns: Returns true if the operation succeeds, or false if the +## creation fails or if *f* exists already. +## +## .. bro:see:: active_file open_for_append close write_file +## get_file_name set_buf flush_all enable_raw_output +function mkdir%(f: string%): bool + %{ + const char* filename = f->CheckString(); + if ( mkdir(filename, 0777) < 0 && errno != EEXIST ) + { + builtin_error("cannot create directory", @ARG@[0]); + return new Val(0, TYPE_BOOL); + } + else + return new Val(1, TYPE_BOOL); + %} + +## Checks whether a given file is open. +## +## f: The file to check. +## +## Returns: True if *f* is an open :bro:type:`file`. +## +## .. todo:: Rename to ``is_open``. +function active_file%(f: file%): bool + %{ + return new Val(f->IsOpen(), TYPE_BOOL); + %} + +## Gets the filename associated with a file handle. +## +## f: The file handle to inquire the name for. +## +## Returns: The filename associated with *f*. +## +## .. bro:see:: open +function get_file_name%(f: file%): string + %{ + if ( ! f ) + return new StringVal(""); + + return new StringVal(f->Name()); + %} + +## Rotates a file. +## +## f: An open file handle. +## +## Returns: Rotation statistics which include the original file name, the name +## after the rotation, and the time when *f* was opened/closed. +## +## .. bro:see:: rotate_file_by_name calc_next_rotate +function rotate_file%(f: file%): rotate_info + %{ + RecordVal* info = f->Rotate(); + if ( info ) + return info; + + // Record indicating error. + info = new RecordVal(rotate_info); + info->Assign(0, new StringVal("")); + info->Assign(1, new StringVal("")); + info->Assign(2, new Val(0, TYPE_TIME)); + info->Assign(3, new Val(0, TYPE_TIME)); + + return info; + %} + +## Rotates a file identified by its name. +## +## f: The name of the file to rotate +## +## Returns: Rotation statistics which include the original file name, the name +## after the rotation, and the time when *f* was opened/closed. +## +## .. bro:see:: rotate_file calc_next_rotate +function rotate_file_by_name%(f: string%): rotate_info + %{ + RecordVal* info = new RecordVal(rotate_info); + + bool is_pkt_dumper = false; + bool is_addl_pkt_dumper = false; + + // Special case: one of current dump files. + if ( pkt_dumper && streq(pkt_dumper->FileName(), f->CheckString()) ) + { + is_pkt_dumper = true; + pkt_dumper->Close(); + } + + if ( addl_pkt_dumper && + streq(addl_pkt_dumper->FileName(), f->CheckString()) ) + { + is_addl_pkt_dumper = true; + addl_pkt_dumper->Close(); + } + + FILE* file = rotate_file(f->CheckString(), info); + if ( ! file ) + { + // Record indicating error. 
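+		// Empty file names and zero timestamps let the caller
+		// recognize that the rotation failed.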
+ info->Assign(0, new StringVal("")); + info->Assign(1, new StringVal("")); + info->Assign(2, new Val(0, TYPE_TIME)); + info->Assign(3, new Val(0, TYPE_TIME)); + return info; + } + + fclose(file); + + if ( is_pkt_dumper ) + { + info->Assign(2, new Val(pkt_dumper->OpenTime(), TYPE_TIME)); + pkt_dumper->Open(); + } + + if ( is_addl_pkt_dumper ) + info->Assign(2, new Val(addl_pkt_dumper->OpenTime(), TYPE_TIME)); + + return info; + %} + +## Calculates the duration until the next time a file is to be rotated, based +## on a given rotate interval. +## +## i: The rotate interval to base the calculation on. +## +## Returns: The duration until the next file rotation time. +## +## .. bro:see:: rotate_file rotate_file_by_name +function calc_next_rotate%(i: interval%) : interval + %{ + const char* base_time = log_rotate_base_time ? + log_rotate_base_time->AsString()->CheckString() : 0; + + double base = parse_rotate_base_time(base_time); + return new Val(calc_next_rotate(network_time, i, base), TYPE_INTERVAL); + %} + +## Returns the size of a given file. +## +## f: The name of the file whose size to lookup. +## +## Returns: The size of *f* in bytes. +function file_size%(f: string%) : double + %{ + struct stat s; + + if ( stat(f->CheckString(), &s) < 0 ) + return new Val(-1.0, TYPE_DOUBLE); + + return new Val(double(s.st_size), TYPE_DOUBLE); + %} + +## Disables sending :bro:id:`print_hook` events to remote peers for a given +## file. In a +## distributed setup, communicating Bro instances generate the event +## :bro:id:`print_hook` for each print statement and send it to the remote +## side. When disabled for a particular file, these events will not be +## propagated to other peers. +## +## f: The file to disable :bro:id:`print_hook` events for. +## +## .. bro:see:: enable_raw_output +function disable_print_hook%(f: file%): any + %{ + f->DisablePrintHook(); + return 0; + %} + +## Prevents escaping of non-ASCII characters when writing to a file. +## This function is equivalent to :bro:attr:`&raw_output`. +## +## f: The file to disable raw output for. +## +## .. bro:see:: disable_print_hook +function enable_raw_output%(f: file%): any + %{ + f->EnableRawOutput(); + return 0; + %} + +# =========================================================================== +# +# Packet Filtering +# +# =========================================================================== + +## Precompiles a PCAP filter and binds it to a given identifier. +## +## id: The PCAP identifier to reference the filter *s* later on. +## +## s: The PCAP filter. See ``man tcpdump`` for valid expressions. +## +## Returns: True if *s* is valid and precompiles successfully. +## +## .. bro:see:: install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +function precompile_pcap_filter%(id: PcapFilterID, s: string%): bool + %{ + bool success = true; + + loop_over_list(pkt_srcs, i) + { + pkt_srcs[i]->ClearErrorMsg(); + + if ( ! pkt_srcs[i]->PrecompileFilter(id->ForceAsInt(), + s->CheckString()) ) + { + reporter->Error("precompile_pcap_filter: %s", + pkt_srcs[i]->ErrorMsg()); + success = false; + } + } + + return new Val(success, TYPE_BOOL); + %} + +## Installs a PCAP filter that has been precompiled with +## :bro:id:`precompile_pcap_filter`. +## +## id: The PCAP filter id of a precompiled filter. 
+## +## Returns: True if the filter associated with *id* has been installed +## successfully. +## +## .. bro:see:: precompile_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +function install_pcap_filter%(id: PcapFilterID%): bool + %{ + bool success = true; + + loop_over_list(pkt_srcs, i) + { + pkt_srcs[i]->ClearErrorMsg(); + + if ( ! pkt_srcs[i]->SetFilter(id->ForceAsInt()) ) + success = false; + } + + return new Val(success, TYPE_BOOL); + %} + +## Returns a string representation of the last PCAP error. +## +## Returns: A descriptive error message of the PCAP function that failed. +## +## .. bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +function pcap_error%(%): string + %{ + loop_over_list(pkt_srcs, i) + { + const char* err = pkt_srcs[i]->ErrorMsg(); + if ( *err ) + return new StringVal(err); + } + + return new StringVal("no error"); + %} + +## Installs a filter to drop packets from a given IP source address with +## a certain probability if none of a given set of TCP flags are set. +## Note that for IPv6 packets with a Destination options header that has +## the Home Address option, this filters out against that home address. +## +## ip: The IP address to drop. +## +## tcp_flags: If none of these TCP flags are set, drop packets from *ip* with +## probability *prob*. +## +## prob: The probability [0.0, 1.0] used to drop packets from *ip*. +## +## Returns: True (unconditionally). +## +## .. bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +## +## .. todo:: The return value should be changed to any. +function install_src_addr_filter%(ip: addr, tcp_flags: count, prob: double%) : bool + %{ + sessions->GetPacketFilter()->AddSrc(ip->AsAddr(), tcp_flags, prob); + return new Val(1, TYPE_BOOL); + %} + +## Installs a filter to drop packets originating from a given subnet with +## a certain probability if none of a given set of TCP flags are set. +## +## snet: The subnet to drop packets from. +## +## tcp_flags: If none of these TCP flags are set, drop packets from *snet* with +## probability *prob*. +## +## prob: The probability [0.0, 1.0] used to drop packets from *snet*. +## +## Returns: True (unconditionally). +## +## .. bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +## +## .. todo:: The return value should be changed to any. +function install_src_net_filter%(snet: subnet, tcp_flags: count, prob: double%) : bool + %{ + sessions->GetPacketFilter()->AddSrc(snet, tcp_flags, prob); + return new Val(1, TYPE_BOOL); + %} + +## Removes a source address filter. +## +## ip: The IP address for which a source filter was previously installed. +## +## Returns: True on success. +## +## .. 
bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +function uninstall_src_addr_filter%(ip: addr%) : bool + %{ + return new Val(sessions->GetPacketFilter()->RemoveSrc(ip->AsAddr()), TYPE_BOOL); + %} + +## Removes a source subnet filter. +## +## snet: The subnet for which a source filter was previously installed. +## +## Returns: True on success. +## +## .. bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +function uninstall_src_net_filter%(snet: subnet%) : bool + %{ + return new Val(sessions->GetPacketFilter()->RemoveSrc(snet), TYPE_BOOL); + %} + +## Installs a filter to drop packets destined to a given IP address with +## a certain probability if none of a given set of TCP flags are set. +## Note that for IPv6 packets with a routing type header and non-zero +## segments left, this filters out against the final destination of the +## packet according to the routing extension header. +## +## ip: Drop packets to this IP address. +## +## tcp_flags: If none of these TCP flags are set, drop packets to *ip* with +## probability *prob*. +## +## prob: The probability [0.0, 1.0] used to drop packets to *ip*. +## +## Returns: True (unconditionally). +## +## .. bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +## +## .. todo:: The return value should be changed to any. +function install_dst_addr_filter%(ip: addr, tcp_flags: count, prob: double%) : bool + %{ + sessions->GetPacketFilter()->AddDst(ip->AsAddr(), tcp_flags, prob); + return new Val(1, TYPE_BOOL); + %} + +## Installs a filter to drop packets destined to a given subnet with +## a certain probability if none of a given set of TCP flags are set. +## +## snet: Drop packets to this subnet. +## +## tcp_flags: If none of these TCP flags are set, drop packets to *snet* with +## probability *prob*. +## +## prob: The probability [0.0, 1.0] used to drop packets to *snet*. +## +## Returns: True (unconditionally). +## +## .. bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## uninstall_dst_addr_filter +## uninstall_dst_net_filter +## pcap_error +## +## .. todo:: The return value should be changed to any. +function install_dst_net_filter%(snet: subnet, tcp_flags: count, prob: double%) : bool + %{ + sessions->GetPacketFilter()->AddDst(snet, tcp_flags, prob); + return new Val(1, TYPE_BOOL); + %} + +## Removes a destination address filter. +## +## ip: The IP address for which a destination filter was previously installed. +## +## Returns: True on success. +## +## .. 
bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_net_filter +## pcap_error +function uninstall_dst_addr_filter%(ip: addr%) : bool + %{ + return new Val(sessions->GetPacketFilter()->RemoveDst(ip->AsAddr()), TYPE_BOOL); + %} + +## Removes a destination subnet filter. +## +## snet: The subnet for which a destination filter was previously installed. +## +## Returns: True on success. +## +## .. bro:see:: precompile_pcap_filter +## install_pcap_filter +## install_src_addr_filter +## install_src_net_filter +## uninstall_src_addr_filter +## uninstall_src_net_filter +## install_dst_addr_filter +## install_dst_net_filter +## uninstall_dst_addr_filter +## pcap_error +function uninstall_dst_net_filter%(snet: subnet%) : bool + %{ + return new Val(sessions->GetPacketFilter()->RemoveDst(snet), TYPE_BOOL); + %} + +# =========================================================================== +# +# Communication +# +# =========================================================================== + +## Enables the communication system. By default, the communication is off until +## explicitly enabled, and all other calls to communication-related functions +## will be ignored until done so. function enable_communication%(%): any %{ if ( bro_start_network_time != 0.0 ) @@ -3678,8 +5249,562 @@ function enable_communication%(%): any return 0; %} -## Returns the Bro version string -function bro_version%(%): string +## Flushes in-memory state tagged with the :bro:attr:`&persistent` attribute +## to disk. The function writes the state to the file ``.state/state.bst`` in +## the directory where Bro was started. +## +## Returns: True on success. +## +## .. bro:see:: rescan_state +function checkpoint_state%(%) : bool %{ - return new StringVal(bro_version()); + return new Val(persistence_serializer->WriteState(true), TYPE_BOOL); %} + +## Reads persistent state and populates the in-memory data structures +## accordingly. Persistent state is read from the ``.state`` directory. +## This function is the dual to :bro:id:`checkpoint_state`. +## +## Returns: True on success. +## +## .. bro:see:: checkpoint_state +function rescan_state%(%) : bool + %{ + return new Val(persistence_serializer->ReadAll(false, true), TYPE_BOOL); + %} + +## Writes the binary event stream generated by the core to a given file. +## Use the ``-x `` command line switch to replay saved events. +## +## filename: The name of the file which stores the events. +## +## Returns: True if opening the target file succeeds. +## +## .. bro:see:: capture_state_updates +function capture_events%(filename: string%) : bool + %{ + if ( ! event_serializer ) + event_serializer = new FileSerializer(); + else + event_serializer->Close(); + + return new Val(event_serializer->Open( + (const char*) filename->CheckString()), TYPE_BOOL); + %} + +## Writes state updates generated by :bro:attr:`&synchronized` variables to a +## file. +## +## filename: The name of the file which stores the state updates. +## +## Returns: True if opening the target file succeeds. +## +## .. bro:see:: capture_events +function capture_state_updates%(filename: string%) : bool + %{ + if ( ! 
state_serializer ) + state_serializer = new FileSerializer(); + else + state_serializer->Close(); + + return new Val(state_serializer->Open( + (const char*) filename->CheckString()), TYPE_BOOL); + %} + +## Establishes a connection to a remote Bro or Broccoli instance. +## +## ip: The IP address of the remote peer. +## +## zone_id: If *ip* is a non-global IPv6 address, a particular :rfc:`4007` +## ``zone_id`` can given here. An empty string, ``""``, means +## not to add any ``zone_id``. +## +## p: The port of the remote peer. +## +## our_class: If a non-empty string, then the remote (listening) peer checks it +## against its class name in its peer table and terminates the +## connection if they don't match. +## +## retry: If the connection fails, try to reconnect with the peer after this +## time interval. +## +## ssl: If true, use SSL to encrypt the session. +## +## Returns: A locally unique ID of the new peer. +## +## .. bro:see:: disconnect +## listen +## request_remote_events +## request_remote_sync +## request_remote_logs +## request_remote_events +## set_accept_state +## set_compression_level +## send_state +## send_id +function connect%(ip: addr, zone_id: string, p: port, our_class: string, retry: interval, ssl: bool%) : count + %{ + return new Val(uint32(remote_serializer->Connect(ip->AsAddr(), + zone_id->CheckString(), p->Port(), our_class->CheckString(), + retry, ssl)), + TYPE_COUNT); + %} + +## Terminate the connection with a peer. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## Returns: True on success. +## +## .. bro:see:: connect listen +function disconnect%(p: event_peer%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->CloseConnection(id), TYPE_BOOL); + %} + +## Subscribes to all events from a remote peer whose names match a given +## pattern. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## handlers: The pattern describing the events to request from peer *p*. +## +## Returns: True on success. +## +## .. bro:see:: request_remote_sync +## request_remote_logs +## set_accept_state +function request_remote_events%(p: event_peer, handlers: pattern%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->RequestEvents(id, handlers), + TYPE_BOOL); + %} + +## Requests synchronization of IDs with a remote peer. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## auth: If true, the local instance considers its current state authoritative +## and sends it to *p* right after the handshake. +## +## Returns: True on success. +## +## .. bro:see:: request_remote_events +## request_remote_logs +## set_accept_state +function request_remote_sync%(p: event_peer, auth: bool%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->RequestSync(id, auth), TYPE_BOOL); + %} + +## Requests logs from a remote peer. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## Returns: True on success. +## +## .. bro:see:: request_remote_events +## request_remote_sync +function request_remote_logs%(p: event_peer%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->RequestLogs(id), TYPE_BOOL); + %} + +## Sets a boolean flag indicating whether Bro accepts state from a remote peer. +## +## p: The peer ID returned from :bro:id:`connect`. 
+## +## accept: True if Bro accepts state from peer *p*, or false otherwise. +## +## Returns: True on success. +## +## .. bro:see:: request_remote_events +## request_remote_sync +## set_compression_level +function set_accept_state%(p: event_peer, accept: bool%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->SetAcceptState(id, accept), + TYPE_BOOL); + %} + +## Sets the compression level of the session with a remote peer. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## level: Allowed values are in the range *[0, 9]*, where 0 is the default and +## means no compression. +## +## Returns: True on success. +## +## .. bro:see:: set_accept_state +function set_compression_level%(p: event_peer, level: count%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->SetCompressionLevel(id, level), + TYPE_BOOL); + %} + +## Listens on a given IP address and port for remote connections. +## +## ip: The IP address to bind to. +## +## p: The TCP port to listen on. +## +## ssl: If true, Bro uses SSL to encrypt the session. +## +## ipv6: If true, enable listening on IPv6 addresses. +## +## zone_id: If *ip* is a non-global IPv6 address, a particular :rfc:`4007` +## ``zone_id`` can given here. An empty string, ``""``, means +## not to add any ``zone_id``. +## +## retry_interval: If address *ip* is found to be already in use, this is +## the interval at which to automatically retry binding. +## +## Returns: True on success. +## +## .. bro:see:: connect disconnect +function listen%(ip: addr, p: port, ssl: bool, ipv6: bool, zone_id: string, retry_interval: interval%) : bool + %{ + return new Val(remote_serializer->Listen(ip->AsAddr(), p->Port(), ssl, ipv6, zone_id->CheckString(), retry_interval), TYPE_BOOL); + %} + +## Checks whether the last raised event came from a remote peer. +## +## Returns: True if the last raised event came from a remote peer. +function is_remote_event%(%) : bool + %{ + return new Val(mgr.CurrentSource() != SOURCE_LOCAL, TYPE_BOOL); + %} + +## Sends all persistent state to a remote peer. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## Returns: True on success. +## +## .. bro:see:: send_id send_ping send_current_packet send_capture_filter +function send_state%(p: event_peer%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(persistence_serializer->SendState(id, true), TYPE_BOOL); + %} + +## Sends a global identifier to a remote peer, which then might install it +## locally. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## id: The identifier to send. +## +## Returns: True on success. +## +## .. bro:see:: send_state send_ping send_current_packet send_capture_filter +function send_id%(p: event_peer, id: string%) : bool + %{ + RemoteSerializer::PeerID pid = p->AsRecordVal()->Lookup(0)->AsCount(); + + ID* i = global_scope()->Lookup(id->CheckString()); + if ( ! i ) + { + reporter->Error("send_id: no global id %s", id->CheckString()); + return new Val(0, TYPE_BOOL); + } + + SerialInfo info(remote_serializer); + return new Val(remote_serializer->SendID(&info, pid, *i), TYPE_BOOL); + %} + +## Gracefully finishes communication by first making sure that all remaining +## data from parent and child has been sent out. +## +## Returns: True if the termination process has been started successfully. 
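+##
+## A hypothetical shutdown sketch (whether and when to tear communication
+## down is policy-dependent; the triggering event here is only
+## illustrative)::
+##
+##     event remote_connection_closed(p: event_peer)
+##         {
+##         terminate_communication();
+##         }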
+function terminate_communication%(%) : bool + %{ + return new Val(remote_serializer->Terminate(), TYPE_BOOL); + %} + +## Signals a remote peer that the local Bro instance finished the initial +## handshake. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## Returns: True on success. +function complete_handshake%(p: event_peer%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->CompleteHandshake(id), TYPE_BOOL); + %} + +## Sends a ping event to a remote peer. In combination with an event handler +## for :bro:id:`remote_pong`, this function can be used to measure latency +## between two peers. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## seq: A sequence number (also included by :bro:id:`remote_pong`). +## +## Returns: True if sending the ping succeeds. +## +## .. bro:see:: send_state send_id send_current_packet send_capture_filter +function send_ping%(p: event_peer, seq: count%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->SendPing(id, seq), TYPE_BOOL); + %} + +## Sends the currently processed packet to a remote peer. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## Returns: True if sending the packet succeeds. +## +## .. bro:see:: send_id send_state send_ping send_capture_filter +## dump_packet dump_current_packet get_current_packet +function send_current_packet%(p: event_peer%) : bool + %{ + Packet pkt(""); + + if ( ! current_pktsrc || + ! current_pktsrc->GetCurrentPacket(&pkt.hdr, &pkt.pkt) ) + return new Val(0, TYPE_BOOL); + + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + + pkt.time = pkt.hdr->ts.tv_sec + double(pkt.hdr->ts.tv_usec) / 1e6; + pkt.hdr_size = current_pktsrc->HdrSize(); + pkt.link_type = current_pktsrc->LinkType(); + + SerialInfo info(remote_serializer); + return new Val(remote_serializer->SendPacket(&info, id, pkt), TYPE_BOOL); + %} + +## Returns the peer who generated the last event. +## +## Returns: The ID of the peer who generated the last event. +## +## .. bro:see:: get_local_event_peer +function get_event_peer%(%) : event_peer + %{ + SourceID src = mgr.CurrentSource(); + + if ( src == SOURCE_LOCAL ) + { + RecordVal* p = mgr.GetLocalPeerVal(); + Ref(p); + return p; + } + + if ( ! remote_serializer ) + reporter->InternalError("remote_serializer not initialized"); + + Val* v = remote_serializer->GetPeerVal(src); + if ( ! v ) + { + reporter->Error("peer %d does not exist anymore", int(src)); + RecordVal* p = mgr.GetLocalPeerVal(); + Ref(p); + return p; + } + + return v; + %} + +## Returns the local peer ID. +## +## Returns: The peer ID of the local Bro instance. +## +## .. bro:see:: get_event_peer +function get_local_event_peer%(%) : event_peer + %{ + RecordVal* p = mgr.GetLocalPeerVal(); + Ref(p); + return p; + %} + +## Sends a capture filter to a remote peer. +## +## p: The peer ID returned from :bro:id:`connect`. +## +## s: The capture filter. +## +## Returns: True if sending the packet succeeds. +## +## .. bro:see:: send_id send_state send_ping send_current_packet +function send_capture_filter%(p: event_peer, s: string%) : bool + %{ + RemoteSerializer::PeerID id = p->AsRecordVal()->Lookup(0)->AsCount(); + return new Val(remote_serializer->SendCaptureFilter(id, s->CheckString()), TYPE_BOOL); + %} + +## Stops Bro's packet processing. 
This function is used to synchronize +## distributed trace processing with communication enabled +## (*pseudo-realtime* mode). +## +## .. bro:see:: continue_processing suspend_state_updates resume_state_updates +function suspend_processing%(%) : any + %{ + net_suspend_processing(); + return 0; + %} + +## Resumes Bro's packet processing. +## +## .. bro:see:: suspend_processing suspend_state_updates resume_state_updates +function continue_processing%(%) : any + %{ + net_continue_processing(); + return 0; + %} + +## Stops propagating :bro:attr:`&synchronized` accesses. +## +## .. bro:see:: suspend_processing continue_processing resume_state_updates +function suspend_state_updates%(%) : any + %{ + if ( remote_serializer ) + remote_serializer->SuspendStateUpdates(); + return 0; + %} + +## Resumes propagating :bro:attr:`&synchronized` accesses. +## +## .. bro:see:: suspend_processing continue_processing suspend_state_updates +function resume_state_updates%(%) : any + %{ + if ( remote_serializer ) + remote_serializer->ResumeStateUpdates(); + return 0; + %} + +# =========================================================================== +# +# Internal Functions +# +# =========================================================================== + +## Manually triggers the signature engine for a given connection. +## This is an internal function. +function match_signatures%(c: connection, pattern_type: int, s: string, + bol: bool, eol: bool, + from_orig: bool, clear: bool%) : bool + %{ + if ( ! rule_matcher ) + return new Val(0, TYPE_BOOL); + + c->Match((Rule::PatternType) pattern_type, s->Bytes(), s->Len(), + from_orig, bol, eol, clear); + + return new Val(1, TYPE_BOOL); + %} + +# =========================================================================== +# +# Deprecated Functions +# +# =========================================================================== + + + +%%{ +#include "Anon.h" +%%} + +## Preserves the prefix of an IP address in anonymization. +## +## a: The address to preserve. +## +## width: The number of bits from the top that should remain intact. +## +## .. bro:see:: preserve_subnet anonymize_addr +## +## .. todo:: Currently dysfunctional. +function preserve_prefix%(a: addr, width: count%): any + %{ + AnonymizeIPAddr* ip_anon = ip_anonymizer[PREFIX_PRESERVING_A50]; + if ( ip_anon ) + { + if ( a->AsAddr().GetFamily() == IPv6 ) + builtin_error("preserve_prefix() not supported for IPv6 addresses"); + else + { + const uint32* bytes; + a->AsAddr().GetBytes(&bytes); + ip_anon->PreservePrefix(*bytes, width); + } + } + + + return 0; + %} + +## Preserves the prefix of a subnet in anonymization. +## +## a: The subnet to preserve. +## +## width: The number of bits from the top that should remain intact. +## +## .. bro:see:: preserve_prefix anonymize_addr +## +## .. todo:: Currently dysfunctional. +function preserve_subnet%(a: subnet%): any + %{ + DEBUG_MSG("%s/%d\n", a->Prefix().AsString().c_str(), a->Width()); + AnonymizeIPAddr* ip_anon = ip_anonymizer[PREFIX_PRESERVING_A50]; + if ( ip_anon ) + { + if ( a->AsSubNet().Prefix().GetFamily() == IPv6 ) + builtin_error("preserve_subnet() not supported for IPv6 addresses"); + else + { + const uint32* bytes; + a->AsSubNet().Prefix().GetBytes(&bytes); + ip_anon->PreservePrefix(*bytes, a->AsSubNet().Length()); + } + } + + return 0; + %} + +## Anonymizes an IP address. +## +## a: The address to anonymize. +## +## cl: The anonymization class, which can take on three different values: +## +## - ``ORIG_ADDR``: Tag *a* as an originator address. 
+## +## - ``RESP_ADDR``: Tag *a* as an responder address. +## +## - ``OTHER_ADDR``: Tag *a* as an arbitrary address. +## +## Returns: An anonymized version of *a*. +## +## .. bro:see:: preserve_prefix preserve_subnet +## +## .. todo:: Currently dysfunctional. +function anonymize_addr%(a: addr, cl: IPAddrAnonymizationClass%): addr + %{ + int anon_class = cl->InternalInt(); + if ( anon_class < 0 || anon_class >= NUM_ADDR_ANONYMIZATION_CLASSES ) + builtin_error("anonymize_addr(): invalid ip addr anonymization class"); + + if ( a->AsAddr().GetFamily() == IPv6 ) + { + builtin_error("anonymize_addr() not supported for IPv6 addresses"); + return 0; + } + else + { + const uint32* bytes; + a->AsAddr().GetBytes(&bytes); + return new AddrVal(anonymize_ip(*bytes, + (enum ip_addr_anonymization_class_t) anon_class)); + } + %} + diff --git a/src/bro_inet_ntop.c b/src/bro_inet_ntop.c new file mode 100644 index 0000000000..c66c1daeda --- /dev/null +++ b/src/bro_inet_ntop.c @@ -0,0 +1,189 @@ +/* Taken/adapted from FreeBSD 9.0.0 inet_ntop.c (CVS revision 1.3.16.1.2.1) */ +/* + * Copyright (c) 2004 by Internet Systems Consortium, Inc. ("ISC") + * Copyright (c) 1996-1999 by Internet Software Consortium. + * + * Permission to use, copy, modify, and distribute this software for any + * purpose with or without fee is hereby granted, provided that the above + * copyright notice and this permission notice appear in all copies. + * + * THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT + * OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. + */ + +#include "bro_inet_ntop.h" + +#include +#include +#include + +#include +#include +#include + +#include +#include +#include + +/*% + * WARNING: Don't even consider trying to compile this on a system where + * sizeof(int) < 4. sizeof(int) > 4 is fine; all the world's not a VAX. + */ + +static const char *bro_inet_ntop4(const u_char *src, char *dst, socklen_t size); +static const char *bro_inet_ntop6(const u_char *src, char *dst, socklen_t size); + +/* char * + * bro_inet_ntop(af, src, dst, size) + * convert a network format address to presentation format. + * return: + * pointer to presentation format address (`dst'), or NULL (see errno). + * author: + * Paul Vixie, 1996. + */ +const char * +bro_inet_ntop(int af, const void * __restrict src, char * __restrict dst, + socklen_t size) +{ + switch (af) { + case AF_INET: + return (bro_inet_ntop4(src, dst, size)); + case AF_INET6: + return (bro_inet_ntop6(src, dst, size)); + default: + errno = EAFNOSUPPORT; + return (NULL); + } + /* NOTREACHED */ +} + +/* const char * + * bro_inet_ntop4(src, dst, size) + * format an IPv4 address + * return: + * `dst' (as a const) + * notes: + * (1) uses no statics + * (2) takes a u_char* not an in_addr as input + * author: + * Paul Vixie, 1996. 
Modified by Jon Siwek, 2012, to replace strlcpy + */ +static const char * +bro_inet_ntop4(const u_char *src, char *dst, socklen_t size) +{ + static const char fmt[] = "%u.%u.%u.%u"; + char tmp[sizeof "255.255.255.255"]; + int l; + + l = snprintf(tmp, sizeof(tmp), fmt, src[0], src[1], src[2], src[3]); + if (l <= 0 || (socklen_t) l >= size) { + errno = ENOSPC; + return (NULL); + } + strncpy(dst, tmp, size - 1); + dst[size - 1] = 0; + return (dst); +} + +/* const char * + * bro_inet_ntop6(src, dst, size) + * convert IPv6 binary address into presentation (printable) format + * author: + * Paul Vixie, 1996. Modified by Jon Siwek, 2012, for IPv4-translated format + */ +static const char * +bro_inet_ntop6(const u_char *src, char *dst, socklen_t size) +{ + /* + * Note that int32_t and int16_t need only be "at least" large enough + * to contain a value of the specified size. On some systems, like + * Crays, there is no such thing as an integer variable with 16 bits. + * Keep this in mind if you think this function should have been coded + * to use pointer overlays. All the world's not a VAX. + */ + char tmp[sizeof "ffff:ffff:ffff:ffff:ffff:ffff:255.255.255.255"], *tp; + struct { int base, len; } best, cur; + u_int words[NS_IN6ADDRSZ / NS_INT16SZ]; + int i; + + /* + * Preprocess: + * Copy the input (bytewise) array into a wordwise array. + * Find the longest run of 0x00's in src[] for :: shorthanding. + */ + memset(words, '\0', sizeof words); + for (i = 0; i < NS_IN6ADDRSZ; i++) + words[i / 2] |= (src[i] << ((1 - (i % 2)) << 3)); + best.base = -1; + best.len = 0; + cur.base = -1; + cur.len = 0; + for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) { + if (words[i] == 0) { + if (cur.base == -1) + cur.base = i, cur.len = 1; + else + cur.len++; + } else { + if (cur.base != -1) { + if (best.base == -1 || cur.len > best.len) + best = cur; + cur.base = -1; + } + } + } + if (cur.base != -1) { + if (best.base == -1 || cur.len > best.len) + best = cur; + } + if (best.base != -1 && best.len < 2) + best.base = -1; + + /* + * Format the result. + */ + tp = tmp; + for (i = 0; i < (NS_IN6ADDRSZ / NS_INT16SZ); i++) { + /* Are we inside the best run of 0x00's? */ + if (best.base != -1 && i >= best.base && + i < (best.base + best.len)) { + if (i == best.base) + *tp++ = ':'; + continue; + } + /* Are we following an initial run of 0x00s or any real hex? */ + if (i != 0) + *tp++ = ':'; + /* Is this address an encapsulated IPv4? */ + if (i == 6 && best.base == 0 && (best.len == 6 || + (best.len == 7 && words[7] != 0x0001) || + (best.len == 5 && words[5] == 0xffff) || + (best.len == 4 && words[4] == 0xffff && words[5] == 0))) { + if (!bro_inet_ntop4(src+12, tp, sizeof tmp - (tp - tmp))) + return (NULL); + tp += strlen(tp); + break; + } + tp += sprintf(tp, "%x", words[i]); + } + /* Was it a trailing run of 0x00's? */ + if (best.base != -1 && (best.base + best.len) == + (NS_IN6ADDRSZ / NS_INT16SZ)) + *tp++ = ':'; + *tp++ = '\0'; + + /* + * Check for overflow, copy, and we're done. 
+ */ + if ((socklen_t)(tp - tmp) > size) { + errno = ENOSPC; + return (NULL); + } + strcpy(dst, tmp); + return (dst); +} diff --git a/src/bro_inet_ntop.h b/src/bro_inet_ntop.h new file mode 100644 index 0000000000..00326b092e --- /dev/null +++ b/src/bro_inet_ntop.h @@ -0,0 +1,18 @@ +#ifndef BRO_INET_NTOP_H +#define BRO_INET_NTOP_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include + +const char * +bro_inet_ntop(int af, const void * __restrict src, char * __restrict dst, + socklen_t size); + +#ifdef __cplusplus +} +#endif + +#endif diff --git a/src/const.bif b/src/const.bif index 96630e300b..7373403c11 100644 --- a/src/const.bif +++ b/src/const.bif @@ -1,8 +1,9 @@ -# Documentation and default values for these are located in policy/bro.init. +##! Declaration of various scripting-layer constants that the Bro core uses +##! internally. Documentation and default values for the scripting-layer +##! variables themselves are found in :doc:`/scripts/base/init-bare`. const ignore_keep_alive_rexmit: bool; const skip_http_data: bool; -const parse_udp_tunnels: bool; const use_conn_size_analyzer: bool; const report_gaps_for_partial: bool; @@ -10,3 +11,12 @@ const NFS3::return_data: bool; const NFS3::return_data_max: count; const NFS3::return_data_first_only: bool; +const Tunnel::max_depth: count; +const Tunnel::enable_ip: bool; +const Tunnel::enable_ayiya: bool; +const Tunnel::enable_teredo: bool; +const Tunnel::yielding_teredo_decapsulation: bool; +const Tunnel::delay_teredo_confirmation: bool; +const Tunnel::ip_tunnel_timeout: interval; + +const Threading::heartbeat_interval: interval; diff --git a/src/dhcp-analyzer.pac b/src/dhcp-analyzer.pac index a9f1c6bab0..5267075445 100644 --- a/src/dhcp-analyzer.pac +++ b/src/dhcp-analyzer.pac @@ -55,33 +55,18 @@ flow DHCP_Flow(is_orig: bool) { vector::const_iterator ptr; // Requested IP address to the server. -#ifdef BROv6 - ::uint32 req_addr[4], serv_addr[4]; - - req_addr[0] = req_addr[1] = req_addr[2] = req_addr[3] = 0; - serv_addr[0] = serv_addr[1] = serv_addr[2] = serv_addr[3] = 0; -#else - addr_type req_addr = 0, serv_addr = 0; -#endif + ::uint32 req_addr = 0, serv_addr = 0; for ( ptr = options->begin(); ptr != options->end() && ! (*ptr)->last(); ++ptr ) { switch ( (*ptr)->code() ) { case REQ_IP_OPTION: -#ifdef BROv6 - req_addr[3] = htonl((*ptr)->info()->req_addr()); -#else req_addr = htonl((*ptr)->info()->req_addr()); -#endif break; case SERV_ID_OPTION: -#ifdef BROv6 - serv_addr[3] = htonl((*ptr)->info()->serv_addr()); -#else serv_addr = htonl((*ptr)->info()->serv_addr()); -#endif break; } } @@ -91,13 +76,14 @@ flow DHCP_Flow(is_orig: bool) { case DHCPDISCOVER: BifEvent::generate_dhcp_discover(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), - dhcp_msg_val_->Ref(), req_addr); + dhcp_msg_val_->Ref(), new AddrVal(req_addr)); break; case DHCPREQUEST: BifEvent::generate_dhcp_request(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), - dhcp_msg_val_->Ref(), req_addr, serv_addr); + dhcp_msg_val_->Ref(), new AddrVal(req_addr), + new AddrVal(serv_addr)); break; case DHCPDECLINE: @@ -129,15 +115,7 @@ flow DHCP_Flow(is_orig: bool) { // RFC 1533 allows a list of router addresses. 
TableVal* router_list = 0; -#ifdef BROv6 - ::uint32 subnet_mask[4], serv_addr[4]; - - subnet_mask[0] = subnet_mask[1] = - subnet_mask[2] = subnet_mask[3] = 0; - serv_addr[0] = serv_addr[1] = serv_addr[2] = serv_addr[3] = 0; -#else - addr_type subnet_mask = 0, serv_addr = 0; -#endif + ::uint32 subnet_mask = 0, serv_addr = 0; uint32 lease = 0; @@ -146,13 +124,7 @@ flow DHCP_Flow(is_orig: bool) { { switch ( (*ptr)->code() ) { case SUBNET_OPTION: -#ifdef BROv6 - subnet_mask[0] = - subnet_mask[1] = subnet_mask[2] = 0; - subnet_mask[3] = htonl((*ptr)->info()->mask()); -#else subnet_mask = htonl((*ptr)->info()->mask()); -#endif break; case ROUTER_OPTION: @@ -170,14 +142,8 @@ flow DHCP_Flow(is_orig: bool) { vector* rlist = (*ptr)->info()->router_list(); uint32 raddr = (*rlist)[i]; -#ifdef BROv6 - ::uint32 tmp_addr[4]; - tmp_addr[0] = tmp_addr[1] = tmp_addr[2] = 0; - tmp_addr[3] = htonl(raddr); -#else ::uint32 tmp_addr; tmp_addr = htonl(raddr); -#endif // index starting from 1 Val* index = new Val(i + 1, TYPE_COUNT); router_list->Assign(index, new AddrVal(tmp_addr)); @@ -191,11 +157,7 @@ flow DHCP_Flow(is_orig: bool) { break; case SERV_ID_OPTION: -#ifdef BROv6 - serv_addr[3] = htonl((*ptr)->info()->serv_addr()); -#else serv_addr = htonl((*ptr)->info()->serv_addr()); -#endif break; } } @@ -204,15 +166,15 @@ flow DHCP_Flow(is_orig: bool) { case DHCPOFFER: BifEvent::generate_dhcp_offer(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), - dhcp_msg_val_->Ref(), subnet_mask, - router_list, lease, serv_addr); + dhcp_msg_val_->Ref(), new AddrVal(subnet_mask), + router_list, lease, new AddrVal(serv_addr)); break; case DHCPACK: BifEvent::generate_dhcp_ack(connection()->bro_analyzer(), connection()->bro_analyzer()->Conn(), - dhcp_msg_val_->Ref(), subnet_mask, - router_list, lease, serv_addr); + dhcp_msg_val_->Ref(), new AddrVal(subnet_mask), + router_list, lease, new AddrVal(serv_addr)); break; case DHCPNAK: diff --git a/src/digest.h b/src/digest.h new file mode 100644 index 0000000000..ef52ba059a --- /dev/null +++ b/src/digest.h @@ -0,0 +1,92 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +/** + * Wrapper and helper functions for MD5/SHA digest algorithms. + */ + +#ifndef bro_digest_h +#define bro_digest_h + +#include +#include + +#include "Reporter.h" + +static inline const char* digest_print(const u_char* digest, size_t n) + { + static char buf[256]; // big enough for any of md5/sha1/sha256 + for ( size_t i = 0; i < n; ++i ) + snprintf(buf + i * 2, 3, "%02x", digest[i]); + return buf; + } + +inline const char* md5_digest_print(const u_char digest[MD5_DIGEST_LENGTH]) + { + return digest_print(digest, MD5_DIGEST_LENGTH); + } + +inline const char* sha1_digest_print(const u_char digest[SHA_DIGEST_LENGTH]) + { + return digest_print(digest, SHA_DIGEST_LENGTH); + } + +inline const char* sha256_digest_print(const u_char digest[SHA256_DIGEST_LENGTH]) + { + return digest_print(digest, SHA256_DIGEST_LENGTH); + } + +inline void md5_init(MD5_CTX* c) + { + if ( ! MD5_Init(c) ) + reporter->InternalError("MD5_Init failed"); + } + +inline void md5_update(MD5_CTX* c, const void* data, unsigned long len) + { + if ( ! MD5_Update(c, data, len) ) + reporter->InternalError("MD5_Update failed"); + } + +inline void md5_final(MD5_CTX* c, u_char md[MD5_DIGEST_LENGTH]) + { + if ( ! MD5_Final(md, c) ) + reporter->InternalError("MD5_Final failed"); + } + +inline void sha1_init(SHA_CTX* c) + { + if ( ! 
SHA1_Init(c) ) + reporter->InternalError("SHA_Init failed"); + } + +inline void sha1_update(SHA_CTX* c, const void* data, unsigned long len) + { + if ( ! SHA1_Update(c, data, len) ) + reporter->InternalError("SHA_Update failed"); + } + +inline void sha1_final(SHA_CTX* c, u_char md[SHA_DIGEST_LENGTH]) + { + if ( ! SHA1_Final(md, c) ) + reporter->InternalError("SHA_Final failed"); + } + +inline void sha256_init(SHA256_CTX* c) + { + if ( ! SHA256_Init(c) ) + reporter->InternalError("SHA256_Init failed"); + } + +inline void sha256_update(SHA256_CTX* c, const void* data, unsigned long len) + { + if ( ! SHA256_Update(c, data, len) ) + reporter->InternalError("SHA256_Update failed"); + } + +inline void sha256_final(SHA256_CTX* c, u_char md[SHA256_DIGEST_LENGTH]) + { + if ( ! SHA256_Final(md, c) ) + reporter->InternalError("SHA256_Final failed"); + } + +#endif //bro_digest_h diff --git a/src/dns-analyzer.pac b/src/dns-analyzer.pac index 0c2dc1b491..e92b6ef709 100644 --- a/src/dns-analyzer.pac +++ b/src/dns-analyzer.pac @@ -216,44 +216,42 @@ flow DNS_Flow switch ( rr->rr_type() ) { case TYPE_A: + if ( dns_A_reply ) + { + ::uint32 addr = rd->type_a(); + BifEvent::generate_dns_A_reply(connection()->bro_analyzer(), + connection()->bro_analyzer()->Conn(), + dns_msg_val_->Ref(), build_dns_answer(rr), + new AddrVal(htonl(addr))); + } + break; + case TYPE_A6: - case TYPE_AAAA: - if ( ! dns_A_reply ) - break; - -#ifdef BROv6 - ::uint32 addr[4]; -#else - addr_type addr; -#endif - - if ( rr->rr_type() == TYPE_A ) + if ( dns_A6_reply ) { -#ifdef BROv6 - addr[0] = addr[1] = addr[2] = 0; - addr[3] = htonl(rd->type_a()); -#else - addr = htonl(rd->type_a()); -#endif - } - - else - { -#ifdef BROv6 - for ( int i = 0; i < 4; ++i ) + ::uint32 addr[4]; + for ( unsigned int i = 0; i < 4; ++i ) addr[i] = htonl((*rd->type_aaaa())[i]); -#else - addr = htonl((*rd->type_aaaa())[3]); -#endif - } - // For now, we treat A6 and AAAA as A's. Given the - // above fixes for BROv6, we can probably now introduce - // their own events. (It's not clear A6 is needed - - // do we actually encounter it in practice?) - BifEvent::generate_dns_A_reply(connection()->bro_analyzer(), - connection()->bro_analyzer()->Conn(), - dns_msg_val_->Ref(), build_dns_answer(rr), addr); + BifEvent::generate_dns_A6_reply(connection()->bro_analyzer(), + connection()->bro_analyzer()->Conn(), + dns_msg_val_->Ref(), build_dns_answer(rr), + new AddrVal(addr)); + } + break; + + case TYPE_AAAA: + if ( dns_AAAA_reply ) + { + ::uint32 addr[4]; + for ( unsigned int i = 0; i < 4; ++i ) + addr[i] = htonl((*rd->type_aaaa())[i]); + + BifEvent::generate_dns_AAAA_reply(connection()->bro_analyzer(), + connection()->bro_analyzer()->Conn(), + dns_msg_val_->Ref(), build_dns_answer(rr), + new AddrVal(addr)); + } break; case TYPE_NS: diff --git a/src/event.bif b/src/event.bif index d953ac78fe..0c79c6159d 100644 --- a/src/event.bif +++ b/src/event.bif @@ -1,481 +1,6701 @@ +##! The events that the C/C++ core of Bro can generate. This is mostly +##! consisting of high-level network events that protocol analyzers detect, +##! but there are also several general-utility events generated by internal +##! Bro frameworks. + +# +# Documentation conventions: +# +# - Use past tense for activity that has already occured. +# +# - List parameters with an empty line in between. +# +# - Within the description, reference other parameters of the same event +# as *arg*. +# +# - Order: +# +# - Short initial sentence (which doesn't need to be a sentence), +# starting with "Generated ..." 
+# +# - Description +# +# - Parameters +# +# - .. bro:see:: +# +# - .. note:: +# +# - .. todo:: + +## Generated at Bro initialization time. The event engine generates this +## event just before normal input processing begins. It can be used to execute +## one-time initialization code at startup. At the time a handler runs, Bro will +## have executed any global initializations and statements. +## +## .. bro:see:: bro_done +## +## .. note:: +## +## When a ``bro_init`` handler executes, Bro has not yet seen any input +## packets and therefore :bro:id:`network_time` is not initialized yet. An +## artifact of that is that any timer installed in a ``bro_init`` handler +## will fire immediately with the first packet. The standard way to work +## around that is to ignore the first time the timer fires and immediately +## reschedule. +## event bro_init%(%); + +## Generated at Bro termination time. The event engine generates this event when +## Bro is about to terminate, either due to having exhausted reading its input +## trace file(s), receiving a termination signal, or because Bro was run without +## a network input source and has finished executing any global statements. +## +## .. bro:see:: bro_init +## +## .. note:: +## +## If Bro terminates due to an invocation of :bro:id:`exit`, then this event +## is not generated. event bro_done%(%); +## Generated when an internal DNS lookup produces the same result as last time. +## Bro keeps an internal DNS cache for host names and IP addresses it has +## already resolved. This event is generated when a subsequent lookup returns +## the same result as stored in the cache. +## +## dm: A record describing the new resolver result (which matches the old one). +## +## .. bro:see:: dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified event dns_mapping_valid%(dm: dns_mapping%); + +## Generated when an internal DNS lookup got no answer even though it had +## succeeded in the past. Bro keeps an internal DNS cache for host names and IP +## addresses it has already resolved. This event is generated when a +## subsequent lookup does not produce an answer even though we have +## already stored a result in the cache. +## +## dm: A record describing the old resolver result. +## +## .. bro:see:: dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_valid event dns_mapping_unverified%(dm: dns_mapping%); + +## Generated when an internal DNS lookup succeeded but an earlier attempt +## did not. Bro keeps an internal DNS cache for host names and IP +## addresses it has already resolved. This event is generated when a subsequent +## lookup produces an answer for a query that was marked as failed in the cache. +## +## dm: A record describing the new resolver result. +## +## .. bro:see:: dns_mapping_altered dns_mapping_lost_name dns_mapping_unverified +## dns_mapping_valid event dns_mapping_new_name%(dm: dns_mapping%); + +## Generated when an internal DNS lookup returned zero answers even though it +## had succeeded in the past. Bro keeps an internal DNS cache for host names +## and IP addresses it has already resolved. This event is generated when +## on a subsequent lookup we receive an answer that is empty even +## though we have already stored a result in the cache. +## +## dm: A record describing the old resolver result. +## +## .. 
bro:see:: dns_mapping_altered dns_mapping_new_name dns_mapping_unverified +## dns_mapping_valid event dns_mapping_lost_name%(dm: dns_mapping%); -event dns_mapping_name_changed%(old_dm: dns_mapping, new_dm: dns_mapping%); + +## Generated when an internal DNS lookup produced a different result than in +## the past. Bro keeps an internal DNS cache for host names and IP addresses +## it has already resolved. This event is generated when a subsequent lookup +## returns a different answer than we have stored in the cache. +## +## dm: A record describing the new resolver result. +## +## old_addrs: Addresses that used to be part of the returned set for the query +## described by *dm*, but are not anymore. +## +## new_addrs: Addresses that were not part of the returned set for the query +## described by *dm*, but now are. +## +## .. bro:see:: dns_mapping_lost_name dns_mapping_new_name dns_mapping_unverified +## dns_mapping_valid event dns_mapping_altered%(dm: dns_mapping, old_addrs: addr_set, new_addrs: addr_set%); +## Generated for every new connection. This event is raised with the first +## packet of a previously unknown connection. Bro uses a flow-based definition +## of "connection" here that includes not only TCP sessions but also UDP and +## ICMP flows. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_reused +## connection_state_remove connection_status_update connection_timeout +## expected_connection_seen new_connection_contents partial_connection +## +## .. note:: +## +## Handling this event is potentially expensive. For example, during a SYN +## flooding attack, every spoofed SYN packet will lead to a new +## event. event new_connection%(c: connection%); + +## Generated for a connection whose tunneling has changed. This could +## be from a previously seen connection now being encapsulated in a tunnel, +## or from the outer encapsulation changing. Note that connection *c*'s +## *tunnel* field is NOT automatically/internally assigned to the new +## encapsulation value of *e* after this event is raised. If the desired +## behavior is to track the latest tunnel encapsulation per-connection, +## then a handler of this event should assign *e* to ``c$tunnel`` (which Bro's +## default scripts are doing). +## +## c: The connection whose tunnel/encapsulation changed. +## +## e: The new encapsulation. +event tunnel_changed%(c: connection, e: EncapsulatingConnVector%); + +## Generated when reassembly starts for a TCP connection. This event is raised +## at the moment when Bro's TCP analyzer enables stream reassembly for a +## connection. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_reused +## connection_state_remove connection_status_update connection_timeout +## expected_connection_seen new_connection partial_connection event new_connection_contents%(c: connection%); -event new_packet%(c: connection, p: pkt_hdr%); + +## Generated for an unsuccessful connection attempt. This event is raised when +## an originator unsuccessfully attempted to establish a connection. 
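+##
+## A minimal handler sketch (illustrative only, not part of this patch):
+##
+## .. code:: bro
+##
+##    event connection_attempt(c: connection)
+##        {
+##        print fmt("unanswered connection attempt %s -> %s:%s",
+##                  c$id$orig_h, c$id$resp_h, c$id$resp_p);
+##        }
+##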
+## "Unsuccessful" is defined as at least :bro:id:`tcp_attempt_delay` seconds +## having elapsed since the originator first sent a connection establishment +## packet to the destination without seeing a reply. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_established +## connection_external connection_finished connection_first_ACK +## connection_half_finished connection_partial_close connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_attempt%(c: connection%); + +## Generated when a SYN-ACK packet is seen in response to a SYN packet during +## a TCP handshake. The final ACK of the handshake in response to SYN-ACK may +## or may not occur later, one way to tell is to check the *history* field of +## :bro:type:`connection` to see if the originator sent an ACK, indicated by +## 'A' in the history string. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_external connection_finished connection_first_ACK +## connection_half_finished connection_partial_close connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_established%(c: connection%); + +## Generated for a new active TCP connection if Bro did not see the initial +## handshake. This event is raised when Bro has observed traffic from each +## endpoint, but the activity did not begin with the usual connection +## establishment. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_reused +## connection_state_remove connection_status_update connection_timeout +## expected_connection_seen new_connection new_connection_contents +## event partial_connection%(c: connection%); + +## Generated when a previously inactive endpoint attempts to close a TCP +## connection via a normal FIN handshake or an abort RST sequence. When the +## endpoint sent one of these packets, Bro waits +## :bro:id:`tcp_partial_close_delay` prior to generating the event, to give +## the other endpoint a chance to close the connection normally. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_partial_close%(c: connection%); + +## Generated for a TCP connection that finished normally. The event is raised +## when a regular FIN handshake from both endpoints was observed. +## +## c: The connection. +## +## .. 
bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_first_ACK +## connection_half_finished connection_partial_close connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_finished%(c: connection%); + +## Generated when one endpoint of a TCP connection attempted to gracefully close +## the connection, but the other endpoint is in the TCP_INACTIVE state. This can +## happen due to split routing, in which Bro only sees one side of a connection. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_partial_close connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_half_finished%(c: connection%); + +## Generated for a rejected TCP connection. This event is raised when an +## originator attempted to setup a TCP connection but the responder replied +## with a RST packet denying it. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection +## +## c: The connection. +## +## .. note:: +## +## If the responder does not respond at all, :bro:id:`connection_attempt` is +## raised instead. If the responder initially accepts the connection but +## aborts it later, Bro first generates :bro:id:`connection_established` +## and then :bro:id:`connection_reset`. event connection_rejected%(c: connection%); + +## Generated when an endpoint aborted a TCP connection. The event is raised +## when one endpoint of an established TCP connection aborted by sending a RST +## packet. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reused +## connection_state_remove connection_status_update connection_timeout +## expected_connection_seen new_connection new_connection_contents +## partial_connection event connection_reset%(c: connection%); + +## Generated for each still-open connection when Bro terminates. +## +## c: The connection. +## +## .. 
bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection bro_done event connection_pending%(c: connection%); + +## Generated when a connection's internal state is about to be removed from +## memory. Bro generates this event reliably once for every connection when it +## is about to delete the internal state. As such, the event is well-suited for +## script-level cleanup that needs to be performed for every connection. This +## event is generated not only for TCP sessions but also for UDP and ICMP +## flows. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_reused +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection udp_inactivity_timeout +## tcp_inactivity_timeout icmp_inactivity_timeout conn_stats event connection_state_remove%(c: connection%); + +## Generated for a SYN packet. Bro raises this event for every SYN packet seen +## by its TCP analyzer. +## +## c: The connection. +## +## pkt: Information extracted from the SYN packet. +## +## .. bro:see:: connection_EOF connection_attempt connection_established +## connection_external connection_finished connection_first_ACK +## connection_half_finished connection_partial_close connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection +## +## .. note:: +## +## This event has quite low-level semantics and can potentially be expensive +## to generate. It should only be used if one really needs the specific +## information passed into the handler via the ``pkt`` argument. If not, +## handling one of the other ``connection_*`` events is typically the +## better approach. event connection_SYN_packet%(c: connection, pkt: SYN_packet%); + +## Generated for the first ACK packet seen for a TCP connection from +## its *originator*. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_half_finished connection_partial_close connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection +## +## .. note:: +## +## This event has quite low-level semantics and should be used only rarely. event connection_first_ACK%(c: connection%); + +## Generated when a TCP connection timed out. This event is raised when +## no activity was seen for an interval of at least +## :bro:id:`tcp_connection_linger`, and either one endpoint has already +## closed the connection or one side never became active. +## +## c: The connection. +## +## .. 
bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_reused +## connection_state_remove connection_status_update expected_connection_seen +## new_connection new_connection_contents partial_connection +## +## .. note:: +## +## The precise semantics of this event can be unintuitive as it only +## covers a subset of cases where a connection times out. Often, handling +## :bro:id:`connection_state_remove` is the better option. That one will be +## generated reliably when an interval of ``tcp_inactivity_timeout`` has +## passed without any activity seen (but also for all other ways a +## connection may terminate). event connection_timeout%(c: connection%); + +## Generated when a connection 4-tuple is reused. This event is raised when Bro +## sees a new TCP session or UDP flow using a 4-tuple matching that of an +## earlier connection it still considers active. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_reused%(c: connection%); + +## Generated in regular intervals during the lifetime of a connection. The +## event is raised each ``connection_status_update_interval`` seconds +## and can be used to check conditions on a regular basis. +## +## c: The connection. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_reused +## connection_state_remove connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_status_update%(c: connection%); + +## Generated for a connection over IPv6 when one direction has changed +## the flow label that it's using. +## +## c: The connection. +## +## is_orig: True if the event is raised for the originator side. +## +## old_label: The old flow label that the endpoint was using. +## +## new_label: The new flow label that the endpoint is using. +## +## .. bro:see:: connection_established new_connection +event connection_flow_label_changed%(c: connection, is_orig: bool, old_label: count, new_label: count%); + +## Generated at the end of reassembled TCP connections. The TCP reassembler +## raised the event once for each endpoint of a connection when it finished +## reassembling the corresponding side of the communication. +## +## c: The connection. +## +## is_orig: True if the event is raised for the originator side. +## +## .. 
bro:see:: connection_SYN_packet connection_attempt connection_established +## connection_external connection_finished connection_first_ACK +## connection_half_finished connection_partial_close connection_pending +## connection_rejected connection_reset connection_reused connection_state_remove +## connection_status_update connection_timeout expected_connection_seen +## new_connection new_connection_contents partial_connection event connection_EOF%(c: connection, is_orig: bool%); + +## Generated for a new connection received from the communication subsystem. +## Remote peers can inject packets into Bro's packet loop, for example via +## :doc:`Broccoli `. The communication system +## raises this event with the first packet of a connection coming in this way. +## +## c: The connection. +## +## tag: TODO. event connection_external%(c: connection, tag: string%); + +## Generated when a connection is seen that is marked as being expected. +## The function :bro:id:`expect_connection` tells Bro to expect a particular +## connection to come up, and which analyzer to associate with it. Once the +## first packet of such a connection is indeed seen, this event is raised. +## +## c: The connection. +## +## a: The analyzer that was scheduled for the connection with the +## :bro:id:`expect_connection` call. When the event is raised, that +## analyzer will already have been activated to process the connection. The +## ``count`` is one of the ``ANALYZER_*`` constants, e.g., ``ANALYZER_HTTP``. +## +## .. bro:see:: connection_EOF connection_SYN_packet connection_attempt +## connection_established connection_external connection_finished +## connection_first_ACK connection_half_finished connection_partial_close +## connection_pending connection_rejected connection_reset connection_reused +## connection_state_remove connection_status_update connection_timeout +## new_connection new_connection_contents partial_connection +## +## .. todo:: We don't have a good way to document the automatically generated +## ``ANALYZER_*`` constants right now. event expected_connection_seen%(c: connection, a: count%); +## Generated for every packet Bro sees. This is a very low-level and expensive +## event that should be avoided when at all possible. It's usually infeasible to +## handle when processing even medium volumes of traffic in real-time. That +## said, if you work from a trace and want to do some packet-level analysis, +## it may come in handy. +## +## c: The connection the packet is part of. +## +## p: Information from the header of the packet that triggered the event. +## +## .. bro:see:: tcp_packet packet_contents +event new_packet%(c: connection, p: pkt_hdr%); + +## Generated for every IPv6 packet that contains extension headers. +## This is potentially an expensive event to handle if analysing IPv6 traffic +## that happens to utilize extension headers frequently. +## +## c: The connection the packet is part of. +## +## p: Information from the header of the packet that triggered the event. +## +## .. bro:see:: new_packet tcp_packet packet_contents esp_packet +event ipv6_ext_headers%(c: connection, p: pkt_hdr%); + +## Generated for any packets using the IPv6 Encapsulating Security Payload (ESP) +## extension header. +## +## p: Information from the header of the packet that triggered the event. +## +## .. bro:see:: new_packet tcp_packet ipv6_ext_headers +event esp_packet%(p: pkt_hdr%); + +## Generated for any packet using a Mobile IPv6 Mobility Header. 
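+##
+## A minimal handler sketch (illustrative only; it assumes the optional
+## ``ip6`` field of :bro:type:`pkt_hdr` is set for such packets):
+##
+## .. code:: bro
+##
+##    event mobile_ipv6_message(p: pkt_hdr)
+##        {
+##        if ( p?$ip6 )
+##            print fmt("Mobility Header seen: %s -> %s", p$ip6$src, p$ip6$dst);
+##        }
+##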
+## +## p: Information from the header of the packet that triggered the event. +## +## .. bro:see:: new_packet tcp_packet ipv6_ext_headers +event mobile_ipv6_message%(p: pkt_hdr%); + +## Generated for any IPv6 packet encapsulated in a Teredo tunnel. +## See :rfc:`4380` for more information about the Teredo protocol. +## +## outer: The Teredo tunnel connection. +## +## inner: The Teredo-encapsulated IPv6 packet header and transport header. +## +## .. bro:see:: teredo_authentication teredo_origin_indication teredo_bubble +## +## .. note:: Since this event may be raised on a per-packet basis, handling +## it may become particularly expensive for real-time analysis. +event teredo_packet%(outer: connection, inner: teredo_hdr%); + +## Generated for IPv6 packets encapsulated in a Teredo tunnel that +## use the Teredo authentication encapsulation method. +## See :rfc:`4380` for more information about the Teredo protocol. +## +## outer: The Teredo tunnel connection. +## +## inner: The Teredo-encapsulated IPv6 packet header and transport header. +## +## .. bro:see:: teredo_packet teredo_origin_indication teredo_bubble +## +## .. note:: Since this event may be raised on a per-packet basis, handling +## it may become particularly expensive for real-time analysis. +event teredo_authentication%(outer: connection, inner: teredo_hdr%); + +## Generated for IPv6 packets encapsulated in a Teredo tunnel that +## use the Teredo origin indication encapsulation method. +## See :rfc:`4380` for more information about the Teredo protocol. +## +## outer: The Teredo tunnel connection. +## +## inner: The Teredo-encapsulated IPv6 packet header and transport header. +## +## .. bro:see:: teredo_packet teredo_authentication teredo_bubble +## +## .. note:: Since this event may be raised on a per-packet basis, handling +## it may become particularly expensive for real-time analysis. +event teredo_origin_indication%(outer: connection, inner: teredo_hdr%); + +## Generated for Teredo bubble packets. That is, IPv6 packets encapsulated +## in a Teredo tunnel that have a Next Header value of :bro:id:`IPPROTO_NONE`. +## See :rfc:`4380` for more information about the Teredo protocol. +## +## outer: The Teredo tunnel connection. +## +## inner: The Teredo-encapsulated IPv6 packet header and transport header. +## +## .. bro:see:: teredo_packet teredo_authentication teredo_origin_indication +## +## .. note:: Since this event may be raised on a per-packet basis, handling +## it may become particularly expensive for real-time analysis. +event teredo_bubble%(outer: connection, inner: teredo_hdr%); + +## Generated for every packet that has a non-empty transport-layer payload. +## This is a very low-level and expensive event that should be avoided when +## at all possible. It's usually infeasible to handle when processing even +## medium volumes of traffic in real-time. It's even worse than +## :bro:id:`new_packet`. That said, if you work from a trace and want to +## do some packet-level analysis, it may come in handy. +## +## c: The connection the packet is part of. +## +## contents: The raw transport-layer payload. +## +## .. bro:see:: new_packet tcp_packet +event packet_contents%(c: connection, contents: string%); + +## Generated for every TCP packet. This is a very low-level and expensive event +## that should be avoided when at all possible. It's usually infeasible to +## handle when processing even medium volumes of traffic in real-time. It's +## slightly better than :bro:id:`new_packet` because it affects only TCP, but +## not much. 
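+## For example, a handler sketch (illustrative only) that reports SYN-bearing
+## segments:
+##
+## .. code:: bro
+##
+##    event tcp_packet(c: connection, is_orig: bool, flags: string, seq: count,
+##                     ack: count, len: count, payload: string)
+##        {
+##        if ( /S/ in flags )
+##            print fmt("SYN-bearing segment on %s (flags %s, len %d)",
+##                      c$uid, flags, len);
+##        }
+##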
That said, if you work from a trace and want to do some +## packet-level analysis, it may come in handy. +## +## c: The connection the packet is part of. +## +## is_orig: True if the packet was sent by the connection's originator. +## +## flags: A string with the packet's TCP flags. In the string, each character +## corresponds to one set flag, as follows: ``S`` -> SYN; ``F`` -> FIN; +## ``R`` -> RST; ``A`` -> ACK; ``P`` -> PUSH. +## +## seq: The packet's TCP sequence number. +## +## ack: The packet's ACK number. +## +## len: The length of the TCP payload, as specified in the packet header. +## +## payload: The raw TCP payload. Note that this may be shorter than *len* if +## the packet was not fully captured. +## +## .. bro:see:: new_packet packet_contents tcp_option tcp_contents tcp_rexmit +event tcp_packet%(c: connection, is_orig: bool, flags: string, seq: count, ack: count, len: count, payload: string%); + +## Generated for each option found in a TCP header. Like many of the ``tcp_*`` +## events, this is a very low-level event and potentially expensive as it may +## be raised very often. +## +## c: The connection the packet is part of. +## +## is_orig: True if the packet was sent by the connection's originator. +## +## opt: The numerical option number, as found in the TCP header. +## +## optlen: The length of the options value. +## +## .. bro:see:: tcp_packet tcp_contents tcp_rexmit +## +## .. note:: There is currently no way to get the actual option value, if any. +event tcp_option%(c: connection, is_orig: bool, opt: count, optlen: count%); + +## Generated for each chunk of reassembled TCP payload. When content delivery is +## enabled for a TCP connection (via :bro:id:`tcp_content_delivery_ports_orig`, +## :bro:id:`tcp_content_delivery_ports_resp`, +## :bro:id:`tcp_content_deliver_all_orig`, +## :bro:id:`tcp_content_deliver_all_resp`), this event is raised for each chunk +## of in-order payload reconstructed from the packet stream. Note that this +## event is potentially expensive if many connections carry significant amounts +## of data as then all that data needs to be passed on to the scripting layer. +## +## c: The connection the payload is part of. +## +## is_orig: True if the packet was sent by the connection's originator. +## +## seq: The sequence number corresponding to the first byte of the payload +## chunk. +## +## contents: The raw payload, which will be non-empty. +## +## .. bro:see:: tcp_packet tcp_option tcp_rexmit +## tcp_content_delivery_ports_orig tcp_content_delivery_ports_resp +## tcp_content_deliver_all_resp tcp_content_deliver_all_orig +## +## .. note:: +## +## The payload received by this event is the same that is also passed into +## application-layer protocol analyzers internally. Subsequent invocations of +## this event for the same connection receive non-overlapping in-order chunks +## of its TCP payload stream. It is however undefined what size each chunk +## has; while Bro passes the data on as soon as possible, specifics depend on +## network-level effects such as latency, acknowledgements, reordering, etc. +event tcp_contents%(c: connection, is_orig: bool, seq: count, contents: string%); + +## TODO. +event tcp_rexmit%(c: connection, is_orig: bool, seq: count, len: count, data_in_flight: count, window: count%); + +## Generated when Bro detects a TCP retransmission inconsistency. When +## reassembling a TCP stream, Bro buffers all payload until it sees the +## responder acking it. 
If during that time, the sender resends a chunk of +## payload but with different content than originally, this event will be +## raised. +## +## c: The connection showing the inconsistency. +## +## t1: The original payload. +## +## t2: The new payload. +## +## .. bro:see:: tcp_rexmit tcp_contents +event rexmit_inconsistency%(c: connection, t1: string, t2: string%); + +## Generated when a TCP endpoint acknowledges payload that Bro never saw. +## +## c: The connection. +## +## .. bro:see:: content_gap +## +## .. note:: +## +## Seeing an acknowledgment indicates that the responder of the connection +## says it has received the corresponding data. If Bro did not, it must have +## either missed one or more packets, or the responder's TCP stack is broken +## (which isn't unheard of). In practice, one will always see a few of these +## events in any larger volume of network traffic. If there are lots of them, +## however, that typically means that there is a problem with the monitoring +## infrastructure such as a tap dropping packets, split routing on the path, +## or reordering at the tap. +## +## This event reports similar situations as :bro:id:`content_gap`, though +## their specifics differ slightly. Often, however, both will be raised for +## the same connection if some of its data is missing. We should eventually +## merge the two. +event ack_above_hole%(c: connection%); + +## Generated when Bro detects a gap in a reassembled TCP payload stream. This +## event is raised when Bro, while reassembling a payload stream, determines +## that a chunk of payload is missing (e.g., because the responder has already +## acknowledged it, even though Bro didn't see it). +## +## c: The connection. +## +## is_orig: True if the gap is on the originator's side. +## +## seq: The sequence number where the gap starts. +## +## length: The number of bytes missing. +## +## .. bro:see:: ack_above_hole +## +## .. note:: +## +## Content gaps tend to occur occasionally for various reasons, including +## broken TCP stacks. If, however, one finds lots of them, that typically +## means that there is a problem with the monitoring infrastructure such as +## a tap dropping packets, split routing on the path, or reordering at the +## tap. +## +## This event reports similar situations as :bro:id:`ack_above_hole`, though +## their specifics differ slightly. Often, however, both will be raised for +## a connection if some of its data is missing. We should eventually merge +## the two. +event content_gap%(c: connection, is_orig: bool, seq: count, length: count%); + +## Summarizes the amount of missing TCP payload at regular intervals. +## Internally, Bro tracks (1) the number of :bro:id:`ack_above_hole` events, +## including the number of bytes missing; and (2) the total number of TCP +## acks seen, with the total volume of bytes that have been acked. This event +## reports these statistics in :bro:id:`gap_report_freq` intervals for the +## purpose of determining packet loss. +## +## dt: The time that has passed since the last ``gap_report`` interval. +## +## info: The gap statistics. +## +## .. bro:see:: content_gap ack_above_hole +## +## .. note:: +## +## Bro comes with a script :doc:`/scripts/policy/misc/capture-loss` that uses +## this event to estimate packet loss and report when a predefined threshold +## is exceeded. +event gap_report%(dt: interval, info: gap_info%); + + +## Generated when a protocol analyzer confirms that a connection is indeed +## using that protocol. 
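+##
+## A handler sketch (illustrative only; it assumes the ``analyzer_name``
+## built-in function for mapping the ``ANALYZER_*`` constant to a string):
+##
+## .. code:: bro
+##
+##    event protocol_confirmation(c: connection, atype: count, aid: count)
+##        {
+##        print fmt("%s confirmed on connection %s", analyzer_name(atype), c$uid);
+##        }
+##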
Bro's dynamic protocol detection heuristically activates +## analyzers as soon as it believes a connection *could* be using a particular +## protocol. It is then left to the corresponding analyzer to verify whether +## that is indeed the case; if so, this event will be generated. +## +## c: The connection. +## +## atype: The type of the analyzer confirming that its protocol is in +## use. The value is one of the ``ANALYZER_*`` constants. For example, +## ``ANALYZER_HTTP`` means the HTTP analyzers determined that it's indeed +## parsing an HTTP connection. +## +## aid: A unique integer ID identifying the specific *instance* of the +## analyzer *atype* that is analyzing the connection ``c``. The ID can +## be used to reference the analyzer when using builtin functions like +## :bro:id:`disable_analyzer`. +## +## .. bro:see:: protocol_violation +## +## .. note:: +## +## Bro's default scripts use this event to determine the ``service`` column +## of :bro:type:`Conn::Info`: once confirmed, the protocol will be listed +## there (and thus in ``conn.log``). event protocol_confirmation%(c: connection, atype: count, aid: count%); + +## Generated when a protocol analyzer determines that a connection it is parsing +## is not conforming to the protocol it expects. Bro's dynamic protocol +## detection heuristically activates analyzers as soon as it believes a +## connection *could* be using a particular protocol. It is then left to the +## corresponding analyzer to verify whether that is indeed the case; if not, +## the analyzer will trigger this event. +## +## c: The connection. +## +## atype: The type of the analyzer confirming that its protocol is in +## use. The value is one of the ``ANALYZER_*`` constants. For example, +## ``ANALYZER_HTTP`` means the HTTP analyzers determined that it's indeed +## parsing an HTTP connection. +## +## aid: A unique integer ID identifying the specific *instance* of the +## analyzer *atype* that is analyzing the connection ``c``. The ID can +## be used to reference the analyzer when using builtin functions like +## :bro:id:`disable_analyzer`. +## +## reason: TODO. +## +## .. bro:see:: protocol_confirmation +## +## .. note:: +## +## Bro's default scripts use this event to disable an analyzer via +## :bro:id:`disable_analyzer` if it's parsing the wrong protocol. That's +## however a script-level decision and not done automatically by the event +## engine. event protocol_violation%(c: connection, atype: count, aid: count, reason: string%); -event packet_contents%(c: connection, contents: string%); -event tcp_packet%(c: connection, is_orig: bool, flags: string, seq: count, ack: count, len: count, payload: string%); -event tcp_option%(c: connection, is_orig: bool, opt: count, optlen: count%); -event tcp_contents%(c: connection, is_orig: bool, seq: count, contents: string%); -event tcp_rexmit%(c: connection, is_orig: bool, seq: count, len: count, data_in_flight: count, window: count%); +## Generated for each packet sent by a UDP flow's originator. This a potentially +## expensive event due to the volume of UDP traffic and should be used with +## care. +## +## u: The connection record for the corresponding UDP flow. +## +## .. 
bro:see:: udp_contents udp_reply udp_session_done event udp_request%(u: connection%); -event udp_reply%(u: connection%); -event udp_contents%(u: connection, is_orig: bool, contents: string%); -event udp_session_done%(u: connection%); -event icmp_sent%(c: connection, icmp: icmp_conn%); -event icmp_echo_request%(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string%); -event icmp_echo_reply%(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string%); -event icmp_unreachable%(c: connection, icmp: icmp_conn, code: count, context: icmp_context%); -event icmp_time_exceeded%(c: connection, icmp: icmp_conn, code: count, context: icmp_context%); -event icmp_redirect%(c: connection, icmp: icmp_conn, a: addr%); -event conn_stats%(c: connection, os: endpoint_stats, rs: endpoint_stats%); -event conn_weird%(name: string, c: connection, addl: string%); -event flow_weird%(name: string, src: addr, dst: addr%); -event net_weird%(name: string%); -event load_sample%(samples: load_sample_info, CPU: interval, dmem: int%); -event rexmit_inconsistency%(c: connection, t1: string, t2: string%); -event ack_above_hole%(c: connection%); -event content_gap%(c: connection, is_orig: bool, seq: count, length: count%); -event gap_report%(dt: interval, info: gap_info%); -event inconsistent_option%(c: connection%); -event bad_option%(c: connection%); -event bad_option_termination%(c: connection%); +## Generated for each packet sent by a UDP flow's responder. This a potentially +## expensive event due to the volume of UDP traffic and should be used with +## care. +## +## u: The connection record for the corresponding UDP flow. +## +## .. bro:see:: udp_contents udp_request udp_session_done +event udp_reply%(u: connection%); + +## Generated for UDP packets to pass on their payload. As the number of UDP +## packets can be very large, this event is normally raised only for those on +## ports configured in :bro:id:`udp_content_delivery_ports_orig` (for packets +## sent by the flow's originator) or :bro:id:`udp_content_delivery_ports_resp` +## (for packets sent by the flow's responder). However, delivery can be enabled +## for all UDP request and reply packets by setting +## :bro:id:`udp_content_deliver_all_orig` or +## :bro:id:`udp_content_deliver_all_resp`, respectively. Note that this +## event is also raised for all matching UDP packets, including empty ones. +## +## u: The connection record for the corresponding UDP flow. +## +## is_orig: True if the event is raised for the originator side. +## +## contents: TODO. +## +## .. bro:see:: udp_reply udp_request udp_session_done +## udp_content_deliver_all_orig udp_content_deliver_all_resp +## udp_content_delivery_ports_orig udp_content_delivery_ports_resp +event udp_contents%(u: connection, is_orig: bool, contents: string%); + +## Generated when a UDP session for a supported protocol has finished. Some of +## Bro's application-layer UDP analyzers flag the end of a session by raising +## this event. Currently, the analyzers for DNS, NTP, Netbios, Syslog, AYIYA, +## and Teredo support this. +## +## u: The connection record for the corresponding UDP flow. +## +## .. bro:see:: udp_contents udp_reply udp_request +event udp_session_done%(u: connection%); + +## Generated for all ICMP messages that are not handled separately with +## dedicated ICMP events. Bro's ICMP analyzer handles a number of ICMP messages +## directly with dedicated events. This event acts as a fallback for those it +## doesn't. 
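+##
+## A minimal handler sketch (illustrative only):
+##
+## .. code:: bro
+##
+##    event icmp_sent(c: connection, icmp: icmp_conn)
+##        {
+##        print fmt("unhandled ICMP message %s -> %s (type %d, code %d)",
+##                  icmp$orig_h, icmp$resp_h, icmp$itype, icmp$icode);
+##        }
+##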
+## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard +## connection record *c*. +## +## .. bro:see:: icmp_error_message +event icmp_sent%(c: connection, icmp: icmp_conn%); + +## Generated for ICMP *echo request* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard +## connection record *c*. +## +## id: The *echo request* identifier. +## +## seq: The *echo request* sequence number. +## +## payload: The message-specific data of the packet payload, i.e., everything +## after the first 8 bytes of the ICMP header. +## +## .. bro:see:: icmp_echo_reply +event icmp_echo_request%(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string%); + +## Generated for ICMP *echo reply* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## id: The *echo reply* identifier. +## +## seq: The *echo reply* sequence number. +## +## payload: The message-specific data of the packet payload, i.e., everything +## after the first 8 bytes of the ICMP header. +## +## .. bro:see:: icmp_echo_request +event icmp_echo_reply%(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string%); + +## Generated for all ICMPv6 error messages that are not handled +## separately with dedicated events. Bro's ICMP analyzer handles a number +## of ICMP error messages directly with dedicated events. This event acts +## as a fallback for those it doesn't. +## +## See `Wikipedia +## `__ for more +## information about the ICMPv6 protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard +## connection record *c*. +## +## code: The ICMP code of the error message. +## +## context: A record with specifics of the original packet that the message +## refers to. +## +## .. bro:see:: icmp_unreachable icmp_packet_too_big +## icmp_time_exceeded icmp_parameter_problem +event icmp_error_message%(c: connection, icmp: icmp_conn, code: count, context: icmp_context%); + +## Generated for ICMP *destination unreachable* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## code: The ICMP code of the *unreachable* message. +## +## context: A record with specifics of the original packet that the message +## refers to. *Unreachable* messages should include the original IP +## header from the packet that triggered them, and Bro parses that +## into the *context* structure. Note that if the *unreachable* +## includes only a partial IP header for some reason, no +## fields of *context* will be filled out. +## +## .. bro:see:: icmp_error_message icmp_packet_too_big +## icmp_time_exceeded icmp_parameter_problem +event icmp_unreachable%(c: connection, icmp: icmp_conn, code: count, context: icmp_context%); + +## Generated for ICMPv6 *packet too big* messages. 
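+##
+## A minimal handler sketch (illustrative only):
+##
+## .. code:: bro
+##
+##    event icmp_packet_too_big(c: connection, icmp: icmp_conn, code: count,
+##                              context: icmp_context)
+##        {
+##        print fmt("ICMPv6 'packet too big' %s -> %s (code %d)",
+##                  icmp$orig_h, icmp$resp_h, code);
+##        }
+##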
+## +## See `Wikipedia +## `__ for more +## information about the ICMPv6 protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## code: The ICMP code of the *too big* message. +## +## context: A record with specifics of the original packet that the message +## refers to. *Too big* messages should include the original IP header +## from the packet that triggered them, and Bro parses that into +## the *context* structure. Note that if the *too big* includes only +## a partial IP header for some reason, no fields of *context* will +## be filled out. +## +## .. bro:see:: icmp_error_message icmp_unreachable +## icmp_time_exceeded icmp_parameter_problem +event icmp_packet_too_big%(c: connection, icmp: icmp_conn, code: count, context: icmp_context%); + +## Generated for ICMP *time exceeded* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## code: The ICMP code of the *exceeded* message. +## +## context: A record with specifics of the original packet that the message +## refers to. *Unreachable* messages should include the original IP +## header from the packet that triggered them, and Bro parses that +## into the *context* structure. Note that if the *exceeded* includes +## only a partial IP header for some reason, no fields of *context* +## will be filled out. +## +## .. bro:see:: icmp_error_message icmp_unreachable icmp_packet_too_big +## icmp_parameter_problem +event icmp_time_exceeded%(c: connection, icmp: icmp_conn, code: count, context: icmp_context%); + +## Generated for ICMPv6 *parameter problem* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMPv6 protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## code: The ICMP code of the *parameter problem* message. +## +## context: A record with specifics of the original packet that the message +## refers to. *Parameter problem* messages should include the original +## IP header from the packet that triggered them, and Bro parses that +## into the *context* structure. Note that if the *parameter problem* +## includes only a partial IP header for some reason, no fields +## of *context* will be filled out. +## +## .. bro:see:: icmp_error_message icmp_unreachable icmp_packet_too_big +## icmp_time_exceeded +event icmp_parameter_problem%(c: connection, icmp: icmp_conn, code: count, context: icmp_context%); + +## Generated for ICMP *router solicitation* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## options: Any Neighbor Discovery options included with message (:rfc:`4861`). +## +## .. bro:see:: icmp_router_advertisement +## icmp_neighbor_solicitation icmp_neighbor_advertisement icmp_redirect +event icmp_router_solicitation%(c: connection, icmp: icmp_conn, options: icmp6_nd_options%); + +## Generated for ICMP *router advertisement* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. 
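+##
+## A minimal handler sketch (illustrative only; all parameters are described
+## below):
+##
+## .. code:: bro
+##
+##    event icmp_router_advertisement(c: connection, icmp: icmp_conn,
+##                                    cur_hop_limit: count, managed: bool,
+##                                    other: bool, home_agent: bool, pref: count,
+##                                    proxy: bool, rsv: count,
+##                                    router_lifetime: interval,
+##                                    reachable_time: interval,
+##                                    retrans_timer: interval,
+##                                    options: icmp6_nd_options)
+##        {
+##        print fmt("router advertisement from %s, default-router lifetime %s",
+##                  icmp$orig_h, router_lifetime);
+##        }
+##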
+## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## cur_hop_limit: The default value that should be placed in Hop Count field +## for outgoing IP packets. +## +## managed: Managed address configuration flag, :rfc:`4861`. +## +## other: Other stateful configuration flag, :rfc:`4861`. +## +## home_agent: Mobile IPv6 home agent flag, :rfc:`3775`. +## +## pref: Router selection preferences, :rfc:`4191`. +## +## proxy: Neighbor discovery proxy flag, :rfc:`4389`. +## +## rsv: Remaining two reserved bits of router advertisement flags. +## +## router_lifetime: How long this router should be used as a default router. +## +## reachable_time: How long a neighbor should be considered reachable. +## +## retrans_timer: How long a host should wait before retransmitting. +## +## options: Any Neighbor Discovery options included with message (:rfc:`4861`). +## +## .. bro:see:: icmp_router_solicitation +## icmp_neighbor_solicitation icmp_neighbor_advertisement icmp_redirect +event icmp_router_advertisement%(c: connection, icmp: icmp_conn, cur_hop_limit: count, managed: bool, other: bool, home_agent: bool, pref: count, proxy: bool, rsv: count, router_lifetime: interval, reachable_time: interval, retrans_timer: interval, options: icmp6_nd_options%); + +## Generated for ICMP *neighbor solicitation* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## tgt: The IP address of the target of the solicitation. +## +## options: Any Neighbor Discovery options included with message (:rfc:`4861`). +## +## .. bro:see:: icmp_router_solicitation icmp_router_advertisement +## icmp_neighbor_advertisement icmp_redirect +event icmp_neighbor_solicitation%(c: connection, icmp: icmp_conn, tgt: addr, options: icmp6_nd_options%); + +## Generated for ICMP *neighbor advertisement* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## router: Flag indicating the sender is a router. +## +## solicited: Flag indicating advertisement is in response to a solicitation. +## +## override: Flag indicating advertisement should override existing caches. +## +## tgt: the Target Address in the soliciting message or the address whose +## link-layer address has changed for unsolicited adverts. +## +## options: Any Neighbor Discovery options included with message (:rfc:`4861`). +## +## .. bro:see:: icmp_router_solicitation icmp_router_advertisement +## icmp_neighbor_solicitation icmp_redirect +event icmp_neighbor_advertisement%(c: connection, icmp: icmp_conn, router: bool, solicited: bool, override: bool, tgt: addr, options: icmp6_nd_options%); + +## Generated for ICMP *redirect* messages. +## +## See `Wikipedia +## `__ for more +## information about the ICMP protocol. +## +## c: The connection record for the corresponding ICMP flow. +## +## icmp: Additional ICMP-specific information augmenting the standard connection +## record *c*. +## +## tgt: The address that is supposed to be a better first hop to use for +## ICMP Destination Address. 
+## +## dest: The address of the destination which is redirected to the target. +## +## options: Any Neighbor Discovery options included with message (:rfc:`4861`). +## +## .. bro:see:: icmp_router_solicitation icmp_router_advertisement +## icmp_neighbor_solicitation icmp_neighbor_advertisement +event icmp_redirect%(c: connection, icmp: icmp_conn, tgt: addr, dest: addr, options: icmp6_nd_options%); + +## Generated when a TCP connection terminated, passing on statistics about the +## two endpoints. This event is always generated when Bro flushes the internal +## connection state, independent of how a connection terminates. +## +## c: The connection. +## +## os: Statistics for the originator endpoint. +## +## rs: Statistics for the responder endpoint. +## +## .. bro:see:: connection_state_remove +event conn_stats%(c: connection, os: endpoint_stats, rs: endpoint_stats%); + +## Generated for unexpected activity related to a specific connection. When +## Bro's packet analysis encounters activity that does not conform to a +## protocol's specification, it raises one of the ``*_weird`` events to report +## that. This event is raised if the activity is tied directly to a specific +## connection. +## +## name: A unique name for the specific type of "weird" situation. Bro's default +## scripts use this name in filtering policies that specify which +## "weirds" are worth reporting. +## +## c: The corresponding connection. +## +## addl: Optional additional context further describing the situation. +## +## .. bro:see:: flow_weird net_weird +## +## .. note:: "Weird" activity is much more common in real-world network traffic +## than one would intuitively expect. While in principle, any protocol +## violation could be an attack attempt, it's much more likely that an +## endpoint's implementation interprets an RFC quite liberally. +event conn_weird%(name: string, c: connection, addl: string%); + +## Generated for unexpected activity related to a pair of hosts, but independent +## of a specific connection. When Bro's packet analysis encounters activity +## that does not conform to a protocol's specification, it raises one of +## the ``*_weird`` events to report that. This event is raised if the activity +## is related to a pair of hosts, yet not to a specific connection between +## them. +## +## name: A unique name for the specific type of "weird" situation. Bro's default +## scripts use this name in filtering policies that specify which +## "weirds" are worth reporting. +## +## src: The source address corresponding to the activity. +## +## dst: The destination address corresponding to the activity. +## +## .. bro:see:: conn_weird net_weird +## +## .. note:: "Weird" activity is much more common in real-world network traffic +## than one would intuitively expect. While in principle, any protocol +## violation could be an attack attempt, it's much more likely that an +## endpoint's implementation interprets an RFC quite liberally. +event flow_weird%(name: string, src: addr, dst: addr%); + +## Generated for unexpected activity that is not tied to a specific connection +## or pair of hosts. When Bro's packet analysis encounters activity that +## does not conform to a protocol's specification, it raises one of the +## ``*_weird`` events to report that. This event is raised if the activity is +## not tied directly to a specific connection or pair of hosts. +## +## name: A unique name for the specific type of "weird" situation. 
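
For the ``conn_weird`` event above, a handler can be as small as the following sketch (not part of this change); the connection's standard ``uid`` field and the printed format are the only assumptions beyond the documented parameters::

    event conn_weird(name: string, c: connection, addl: string)
        {
        # Real deployments typically match `name` against a policy table instead of printing.
        print fmt("weird '%s' on connection %s (%s)", name, c$uid, addl);
        }
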
Bro's default +## scripts use this name in filtering policies that specify which +## "weirds" are worth reporting. +## +## .. bro:see:: flow_weird +## +## .. note:: "Weird" activity is much more common in real-world network traffic +## than one would intuitively expect. While in principle, any protocol +## violation could be an attack attempt, it's much more likely that an +## endpoint's implementation interprets an RFC quite liberally. +event net_weird%(name: string%); + +## Generated regularly for the purpose of profiling Bro's processing. This event +## is raised for every :bro:id:`load_sample_freq` packet. For these packets, +## Bro records script-level functions executed during their processing as well +## as further internal locations. By sampling the processing in this form, one +## can understand where Bro spends its time. +## +## samples: A set with functions and locations seen during the processing of +## the sampled packet. +## +## CPU: The CPU time spent on processing the sampled packet. +## +## dmem: The difference in memory usage caused by processing the sampled packet. +event load_sample%(samples: load_sample_info, CPU: interval, dmem: int%); + +## Generated for ARP requests. +## +## See `Wikipedia `__ +## for more information about the ARP protocol. +## +## mac_src: The request's source MAC address. +## +## mac_dst: The request's destination MAC address. +## +## SPA: The sender protocol address. +## +## SHA: The sender hardware address. +## +## TPA: The target protocol address. +## +## THA: The target hardware address. +## +## .. bro:see:: arp_reply bad_arp event arp_request%(mac_src: string, mac_dst: string, SPA: addr, SHA: string, TPA: addr, THA: string%); + +## Generated for ARP replies. +## +## See `Wikipedia `__ +## for more information about the ARP protocol. +## +## mac_src: The reply's source MAC address. +## +## mac_dst: The reply's destination MAC address. +## +## SPA: The sender protocol address. +## +## SHA: The sender hardware address. +## +## TPA: The target protocol address. +## +## THA: The target hardware address. +## +## .. bro:see:: arp_request bad_arp event arp_reply%(mac_src: string, mac_dst: string, SPA: addr, SHA: string, TPA: addr, THA: string%); + +## Generated for ARP packets that Bro cannot interpret. Examples are packets +## with non-standard hardware address formats or hardware addresses that do not +## match the originator of the packet. +## +## SPA: The sender protocol address. +## +## SHA: The sender hardware address. +## +## TPA: The target protocol address. +## +## THA: The target hardware address. +## +## explanation: A short description of why the ARP packet is considered "bad". +## +## .. bro:see:: arp_reply arp_request +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event bad_arp%(SPA: addr, SHA: string, TPA: addr, THA: string, explanation: string%); +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. 
bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_have bittorrent_peer_interested bittorrent_peer_keep_alive +## bittorrent_peer_not_interested bittorrent_peer_piece bittorrent_peer_port +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_handshake%(c: connection, is_orig: bool, reserved: string, info_hash: string, peer_id: string%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_not_interested bittorrent_peer_piece bittorrent_peer_port +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_keep_alive%(c: connection, is_orig: bool%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_unknown bittorrent_peer_weird event bittorrent_peer_choke%(c: connection, is_orig: bool%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request +## bittorrent_peer_unknown bittorrent_peer_weird event bittorrent_peer_unchoke%(c: connection, is_orig: bool%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_keep_alive +## bittorrent_peer_not_interested bittorrent_peer_piece bittorrent_peer_port +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_interested%(c: connection, is_orig: bool%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_piece bittorrent_peer_port +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_not_interested%(c: connection, is_orig: bool%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_interested bittorrent_peer_keep_alive +## bittorrent_peer_not_interested bittorrent_peer_piece bittorrent_peer_port +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_have%(c: connection, is_orig: bool, piece_index: count%); + +## TODO. 
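
A sketch (not part of this change) of a handler for the peer handshake event above, using only the documented parameters plus the connection's ``uid`` field; note that ``info_hash`` and ``peer_id`` arrive as raw byte strings::

    event bittorrent_peer_handshake(c: connection, is_orig: bool, reserved: string,
                                    info_hash: string, peer_id: string)
        {
        # info_hash and peer_id carry the wire encoding and may contain non-printable bytes.
        print fmt("BitTorrent handshake on %s (from originator: %s)", c$uid, is_orig);
        }
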
+## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_cancel bittorrent_peer_choke bittorrent_peer_handshake +## bittorrent_peer_have bittorrent_peer_interested bittorrent_peer_keep_alive +## bittorrent_peer_not_interested bittorrent_peer_piece bittorrent_peer_port +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_bitfield%(c: connection, is_orig: bool, bitfield: string%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_request%(c: connection, is_orig: bool, index: count, begin: count, length: count%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_port +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_piece%(c: connection, is_orig: bool, index: count, begin: count, piece_length: count%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_unknown bittorrent_peer_weird event bittorrent_peer_cancel%(c: connection, is_orig: bool, index: count, begin: count, length: count%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_request bittorrent_peer_unchoke bittorrent_peer_unknown +## bittorrent_peer_weird event bittorrent_peer_port%(c: connection, is_orig: bool, listen_port: port%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_weird event bittorrent_peer_unknown%(c: connection, is_orig: bool, message_id: count, data: string%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. 
bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_unknown event bittorrent_peer_weird%(c: connection, is_orig: bool, msg: string%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_unknown bittorrent_peer_weird event bt_tracker_request%(c: connection, uri: string, headers: bt_tracker_headers%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_unknown bittorrent_peer_weird event bt_tracker_response%(c: connection, status: count, headers: bt_tracker_headers, peers: bittorrent_peer_set, benc: bittorrent_benc_dir%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_unknown bittorrent_peer_weird event bt_tracker_response_not_ok%(c: connection, status: count, headers: bt_tracker_headers%); + +## TODO. +## +## See `Wikipedia `__ for +## more information about the BitTorrent protocol. +## +## .. bro:see:: bittorrent_peer_bitfield bittorrent_peer_cancel bittorrent_peer_choke +## bittorrent_peer_handshake bittorrent_peer_have bittorrent_peer_interested +## bittorrent_peer_keep_alive bittorrent_peer_not_interested bittorrent_peer_piece +## bittorrent_peer_port bittorrent_peer_request bittorrent_peer_unchoke +## bittorrent_peer_unknown bittorrent_peer_weird event bt_tracker_weird%(c: connection, is_orig: bool, msg: string%); +## Generated for Finger requests. +## +## See `Wikipedia `__ for more +## information about the Finger protocol. +## +## c: The connection. +## +## full: True if verbose information is requested (``/W`` switch). +## +## username: The request's user name. +## +## hostname: The request's host name. +## +## .. bro:see:: finger_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event finger_request%(c: connection, full: bool, username: string, hostname: string%); + +## Generated for Finger replies. +## +## See `Wikipedia `__ for more +## information about the Finger protocol. 
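
Assuming the Finger analyzer has been enabled as described in the todo above, a request handler needs nothing beyond the documented parameters; a sketch (not part of this change)::

    event finger_request(c: connection, full: bool, username: string, hostname: string)
        {
        print fmt("finger query for user '%s' at host '%s' (verbose: %s)",
                  username, hostname, full);
        }
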
+## +## c: The connection. +## +## reply_line: The reply as returned by the server +## +## .. bro:see:: finger_request +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event finger_reply%(c: connection, reply_line: string%); + +## TODO. +## +## See `Wikipedia `__ for more +## information about the Gnutella protocol. +## +## .. bro:see:: gnutella_binary_msg gnutella_establish gnutella_http_notify +## gnutella_not_establish gnutella_partial_binary_msg gnutella_signature_found +## +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event gnutella_text_msg%(c: connection, orig: bool, headers: string%); + +## TODO. +## +## See `Wikipedia `__ for more +## information about the Gnutella protocol. +## +## .. bro:see:: gnutella_establish gnutella_http_notify gnutella_not_establish +## gnutella_partial_binary_msg gnutella_signature_found gnutella_text_msg +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event gnutella_binary_msg%(c: connection, orig: bool, msg_type: count, ttl: count, hops: count, msg_len: count, payload: string, payload_len: count, trunc: bool, complete: bool%); + +## TODO. +## +## See `Wikipedia `__ for more +## information about the Gnutella protocol. +## +## .. bro:see:: gnutella_binary_msg gnutella_establish gnutella_http_notify +## gnutella_not_establish gnutella_signature_found gnutella_text_msg +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event gnutella_partial_binary_msg%(c: connection, orig: bool, msg: string, len: count%); + +## TODO. +## +## See `Wikipedia `__ for more +## information about the Gnutella protocol. +## +## .. bro:see:: gnutella_binary_msg gnutella_http_notify gnutella_not_establish +## gnutella_partial_binary_msg gnutella_signature_found gnutella_text_msg +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event gnutella_establish%(c: connection%); + +## TODO. +## +## See `Wikipedia `__ for more +## information about the Gnutella protocol. +## +## .. bro:see:: gnutella_binary_msg gnutella_establish gnutella_http_notify +## gnutella_partial_binary_msg gnutella_signature_found gnutella_text_msg +## +## .. 
todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event gnutella_not_establish%(c: connection%); + +## TODO. +## +## See `Wikipedia `__ for more +## information about the Gnutella protocol. +## +## .. bro:see:: gnutella_binary_msg gnutella_establish gnutella_not_establish +## gnutella_partial_binary_msg gnutella_signature_found gnutella_text_msg +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event gnutella_http_notify%(c: connection%); +## Generated for Ident requests. +## +## See `Wikipedia `__ for more +## information about the Ident protocol. +## +## c: The connection. +## +## lport: The request's local port. +## +## rport: The request's remote port. +## +## .. bro:see:: ident_error ident_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event ident_request%(c: connection, lport: port, rport: port%); + +## Generated for Ident replies. +## +## See `Wikipedia `__ for more +## information about the Ident protocol. +## +## c: The connection. +## +## lport: The corresponding request's local port. +## +## rport: The corresponding request's remote port. +## +## user_id: The user id returned by the reply. +## +## system: The operating system returned by the reply. +## +## .. bro:see:: ident_error ident_request +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event ident_reply%(c: connection, lport: port, rport: port, user_id: string, system: string%); + +## Generated for Ident error replies. +## +## See `Wikipedia `__ for more +## information about the Ident protocol. +## +## c: The connection. +## +## lport: The corresponding request's local port. +## +## rport: The corresponding request's remote port. +## +## line: The error description returned by the reply. +## +## .. bro:see:: ident_reply ident_request +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event ident_error%(c: connection, lport: port, rport: port, line: string%); +## Generated for Telnet/Rlogin login failures. The *login* analyzer inspects +## Telnet/Rlogin sessions to heuristically extract username and password +## information as well as the text returned by the login server. This event is +## raised if a login attempt appears to have been unsuccessful. +## +## c: The connection. +## +## user: The user name tried. 
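
For the Ident events documented above, a reply handler is equally small; the sketch below (not part of this change) assumes only the documented parameters and the connection's ``uid`` field::

    event ident_reply(c: connection, lport: port, rport: port, user_id: string, system: string)
        {
        print fmt("ident on %s: %s/%s is user '%s' (%s)", c$uid, lport, rport, user_id, system);
        }
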
+## +## client_user: For Telnet connections, this is an empty string, but for Rlogin +## connections, it is the client name passed in the initial authentication +## information (to check against .rhosts). +## +## password: The password tried. +## +## line: The line of text that led the analyzer to conclude that the +## authentication had failed. +## +## .. bro:see:: login_confused login_confused_text login_display login_input_line +## login_output_line login_prompt login_success login_terminal direct_login_prompts +## get_login_state login_failure_msgs login_non_failure_msgs login_prompts login_success_msgs +## login_timeouts set_login_state +## +## .. note:: The login analyzer depends on a set of script-level variables that +## need to be configured with patterns identifying login attempts. This +## configuration has not yet been ported over from Bro 1.5 to Bro 2.x, and +## the analyzer is therefore not directly usable at the moment. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_failure%(c: connection, user: string, client_user: string, password: string, line: string%); + +## Generated for successful Telnet/Rlogin logins. The *login* analyzer inspects +## Telnet/Rlogin sessions to heuristically extract username and password +## information as well as the text returned by the login server. This event is +## raised if a login attempt appears to have been successful. +## +## c: The connection. +## +## user: The user name used. +## +## client_user: For Telnet connections, this is an empty string, but for Rlogin +## connections, it is the client name passed in the initial authentication +## information (to check against .rhosts). +## +## password: The password used. +## +## line: The line of text that led the analyzer to conclude that the +## authentication had succeeded. +## +## .. bro:see:: login_confused login_confused_text login_display login_failure +## login_input_line login_output_line login_prompt login_terminal +## direct_login_prompts get_login_state login_failure_msgs login_non_failure_msgs +## login_prompts login_success_msgs login_timeouts set_login_state +## +## .. note:: The login analyzer depends on a set of script-level variables that +## need to be configured with patterns identifying login attempts. This +## configuration has not yet been ported over from Bro 1.5 to Bro 2.x, and +## the analyzer is therefore not directly usable at the moment. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_success%(c: connection, user: string, client_user: string, password: string, line: string%); + +## Generated for lines of input on Telnet/Rlogin sessions. The line will have +## control characters (such as in-band Telnet options) removed. +## +## c: The connection. +## +## line: The input line. +## +## .. bro:see:: login_confused login_confused_text login_display login_failure +## login_output_line login_prompt login_success login_terminal rsh_request +## +## .. 
todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_input_line%(c: connection, line: string%); + +## Generated for lines of output on Telnet/Rlogin sessions. The line will have +## control characters (such as in-band Telnet options) removed. +## +## c: The connection. +## +## line: The output line. +## +## .. bro:see:: login_confused login_confused_text login_display login_failure +## login_input_line login_prompt login_success login_terminal rsh_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_output_line%(c: connection, line: string%); + +## Generated when tracking of Telnet/Rlogin authentication failed. As Bro's +## *login* analyzer uses a number of heuristics to extract authentication +## information, it may become confused. If it can no longer correctly track +## the authentication dialog, it raises this event. +## +## c: The connection. +## +## msg: Gives the particular problem the heuristics detected (for example, +## ``multiple_login_prompts`` means that the engine saw several login +## prompts in a row, without the type-ahead from the client side presumed +## necessary to cause them). +## +## line: The line of text that caused the heuristics to conclude they were +## confused. +## +## .. bro:see:: login_confused_text login_display login_failure login_input_line login_output_line +## login_prompt login_success login_terminal direct_login_prompts get_login_state +## login_failure_msgs login_non_failure_msgs login_prompts login_success_msgs +## login_timeouts set_login_state +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_confused%(c: connection, msg: string, line: string%); + +## Generated after getting confused while tracking a Telnet/Rlogin +## authentication dialog. The *login* analyzer generates this event for every +## line of user input after it has reported :bro:id:`login_confused` for a +## connection. +## +## c: The connection. +## +## line: The line the user typed. +## +## .. bro:see:: login_confused login_display login_failure login_input_line +## login_output_line login_prompt login_success login_terminal direct_login_prompts +## get_login_state login_failure_msgs login_non_failure_msgs login_prompts +## login_success_msgs login_timeouts set_login_state +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_confused_text%(c: connection, line: string%); + +## Generated for clients transmitting a terminal type in a Telnet session.
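
Assuming the pattern variables mentioned in the notes above have been configured, a minimal consumer of the login events might look like the following sketch (not part of this change)::

    event login_failure(c: connection, user: string, client_user: string,
                        password: string, line: string)
        {
        # `line` is the text that convinced the heuristics the attempt failed.
        print fmt("login failure for user '%s' on %s: %s", user, c$uid, line);
        }
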
This +## information is extracted out of environment variables sent as Telnet options. +## +## c: The connection. +## +## terminal: The TERM value transmitted. +## +## .. bro:see:: login_confused login_confused_text login_display login_failure +## login_input_line login_output_line login_prompt login_success +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_terminal%(c: connection, terminal: string%); + +## Generated for clients transmitting an X11 DISPLAY in a Telnet session. This +## information is extracted out of environment variables sent as Telnet options. +## +## c: The connection. +## +## display: The DISPLAY transmitted. +## +## .. bro:see:: login_confused login_confused_text login_failure login_input_line +## login_output_line login_prompt login_success login_terminal +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event login_display%(c: connection, display: string%); -event login_prompt%(c: connection, prompt: string%); -event rsh_request%(c: connection, client_user: string, server_user: string, line: string, new_session: bool%); -event rsh_reply%(c: connection, client_user: string, server_user: string, line: string%); -event excessive_line%(c: connection%); + +## Generated when a Telnet authentication has been successful. The Telnet +## protocol includes options for negotiating authentication. When such an +## option is sent from client to server and the server replies that it accepts +## the authentication, then the event engine generates this event. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## name: The authenticated name. +## +## c: The connection. +## +## .. bro:see:: authentication_rejected authentication_skipped login_success +## +## .. note:: This event inspects the corresponding Telnet option +## while :bro:id:`login_success` heuristically determines success by watching +## session data. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event authentication_accepted%(name: string, c: connection%); + +## Generated when a Telnet authentication has been unsuccessful. The Telnet +## protocol includes options for negotiating authentication. When such an option +## is sent from client to server and the server replies that it did not accept +## the authentication, then the event engine generates this event. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## name: The attempted authentication name. +## +## c: The connection. +## +## .. bro:see:: authentication_accepted authentication_skipped login_failure +## +## .. note:: This event inspects the corresponding Telnet option +## while :bro:id:`login_failure` heuristically determines failure by watching +## session data. +## +## ..
todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event authentication_rejected%(name: string, c: connection%); + +## Generated for Telnet/Rlogin sessions when a pattern match indicates +## that no authentication is performed. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## c: The connection. +## +## .. bro:see:: authentication_accepted authentication_rejected direct_login_prompts +## get_login_state login_failure_msgs login_non_failure_msgs login_prompts +## login_success_msgs login_timeouts set_login_state +## +## .. note:: The login analyzer depends on a set of script-level variables that +## need to be configured with patterns identifying activity. This +## configuration has not yet been ported over from Bro 1.5 to Bro 2.x, and +## the analyzer is therefore not directly usable at the moment. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event authentication_skipped%(c: connection%); + +## Generated for clients transmitting a terminal prompt in a Telnet session. +## This information is extracted out of environment variables sent as Telnet +## options. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## c: The connection. +## +## prompt: The TTYPROMPT transmitted. +## +## .. bro:see:: login_confused login_confused_text login_display login_failure +## login_input_line login_output_line login_success login_terminal +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. +event login_prompt%(c: connection, prompt: string%); + +## Generated for Telnet sessions when encryption is activated. The Telnet +## protocol includes options for negotiating encryption. When such a series of +## options is successfully negotiated, the event engine generates this event. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## c: The connection. +## +## .. bro:see:: authentication_accepted authentication_rejected authentication_skipped +## login_confused login_confused_text login_display login_failure login_input_line +## login_output_line login_prompt login_success login_terminal event activating_encryption%(c: connection%); +## Generated for an inconsistent Telnet option. Telnet options are specified +## by the client and server stating which options they are willing to +## support vs. which they are not, and then instructing one another which in +## fact they should or should not use for the current connection. If the event +## engine sees a peer violate either what the other peer has instructed it to +## do, or what it itself offered in terms of options in the past, then the +## engine generates this event. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## c: The connection. +## +## .. 
bro:see:: bad_option bad_option_termination authentication_accepted +## authentication_rejected authentication_skipped login_confused +## login_confused_text login_display login_failure login_input_line +## login_output_line login_prompt login_success login_terminal +event inconsistent_option%(c: connection%); + +## Generated for an ill-formed or unrecognized Telnet option. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## c: The connection. +## +## .. bro:see:: inconsistent_option bad_option_termination authentication_accepted +## authentication_rejected authentication_skipped login_confused +## login_confused_text login_display login_failure login_input_line +## login_output_line login_prompt login_success login_terminal +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. +event bad_option%(c: connection%); + +## Generated for a Telnet option that's incorrectly terminated. +## +## See `Wikipedia `__ for more information +## about the Telnet protocol. +## +## c: The connection. +## +## .. bro:see:: inconsistent_option bad_option authentication_accepted +## authentication_rejected authentication_skipped login_confused +## login_confused_text login_display login_failure login_input_line +## login_output_line login_prompt login_success login_terminal +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. +event bad_option_termination%(c: connection%); + +## Generated for client side commands on an RSH connection. +## +## See `RFC 1258 `__ for more information +## about the Rlogin/Rsh protocol. +## +## c: The connection. +## +## client_user: The client-side user name as sent in the initial protocol +## handshake. +## +## server_user: The server-side user name as sent in the initial protocol +## handshake. +## +## line: The command line sent in the request. +## +## new_session: True if this is the first command of the Rsh session. +## +## .. bro:see:: rsh_reply login_confused login_confused_text login_display +## login_failure login_input_line login_output_line login_prompt login_success +## login_terminal +## +## .. note:: For historical reasons, these events are separate from the +## ``login_`` events. Ideally, they would all be handled uniquely. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. +event rsh_request%(c: connection, client_user: string, server_user: string, line: string, new_session: bool%); + +## Generated for client side commands on an RSH connection. +## +## See `RFC 1258 `__ for more information +## about the Rlogin/Rsh protocol. +## +## c: The connection. +## +## client_user: The client-side user name as sent in the initial protocol +## handshake. +## +## server_user: The server-side user name as sent in the initial protocol +## handshake. 
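
A sketch (not part of this change) of a handler for the ``rsh_request`` event above, using only its documented parameters::

    event rsh_request(c: connection, client_user: string, server_user: string,
                      line: string, new_session: bool)
        {
        if ( new_session )
            print fmt("rsh session: %s -> %s runs '%s'", client_user, server_user, line);
        }
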
+## +## line: The command line sent in the request. +## +## .. bro:see:: rsh_request login_confused login_confused_text login_display +## login_failure login_input_line login_output_line login_prompt login_success +## login_terminal +## +## .. note:: For historical reasons, these events are separate from the +## ``login_`` events. Ideally, they would all be handled uniquely. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. +event rsh_reply%(c: connection, client_user: string, server_user: string, line: string%); + +## Generated for client-side FTP commands. +## +## See `Wikipedia `__ for +## more information about the FTP protocol. +## +## c: The connection. +## +## command: The FTP command issued by the client (without any arguments). +## +## arg: The arguments going with the command. +## +## .. bro:see:: ftp_reply fmt_ftp_port parse_eftp_port +## parse_ftp_epsv parse_ftp_pasv parse_ftp_port event ftp_request%(c: connection, command: string, arg: string%) &group="ftp"; + +## Generated for server-side FTP replies. +## +## See `Wikipedia `__ for +## more information about the FTP protocol. +## +## c: The connection. +## +## code: The numerical response code the server responded with. +## +## msg: The textual message of the response. +## +## cont_resp: True if the reply line is tagged as being continued to the next +## line. If so, further events will be raised and a handler may want +## to reassemble the pieces before processing the response any +## further. +## +## .. bro:see:: ftp_request fmt_ftp_port parse_eftp_port +## parse_ftp_epsv parse_ftp_pasv parse_ftp_port event ftp_reply%(c: connection, code: count, msg: string, cont_resp: bool%) &group="ftp"; +## Generated for client-side SMTP commands. +## +## See `Wikipedia `__ +## for more information about the SMTP protocol. +## +## c: The connection. +## +## is_orig: True if the sender of the command is the originator of the TCP +## connection. Note that this is not redundant: the SMTP ``TURN`` command +## allows client and server to flip roles on established SMTP sessions, +## and hence a "request" might still come from the TCP-level responder. +## In practice, however, that will rarely happen as TURN is considered +## insecure and rarely used. +## +## command: The request's command, without any arguments. +## +## arg: The request command's arguments. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_end_entity mime_entity_data mime_event mime_one_header mime_segment_data +## smtp_data smtp_reply +## +## .. note:: Bro does not support the newer ETRN extension yet. event smtp_request%(c: connection, is_orig: bool, command: string, arg: string%) &group="smtp"; + +## Generated for server-side SMTP commands. +## +## See `Wikipedia `__ +## for more information about the SMTP protocol. +## +## c: The connection. +## +## is_orig: True if the sender of the command is the originator of the TCP +## connection. Note that this is not redundant: the SMTP ``TURN`` command +## allows client and server to flip roles on established SMTP sessions, +## and hence a "reply" might still come from the TCP-level originator. In +## practice, however, that will rarely happen as TURN is considered +## insecure and rarely used. 
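
For the FTP events above, a minimal handler sketch (not part of this change); keying on the ``USER`` verb is just an example::

    event ftp_request(c: connection, command: string, arg: string)
        {
        if ( command == "USER" )
            print fmt("FTP login attempt as '%s' on %s", arg, c$uid);
        }
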
+## +## code: The reply's numerical code. +## +## cmd: TODO. +## +## msg: The reply's textual description. +## +## cont_resp: True if the reply line is tagged as being continued to the next +## line. If so, further events will be raised and a handler may want to +## reassemble the pieces before processing the response any further. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_end_entity mime_entity_data mime_event mime_one_header mime_segment_data +## smtp_data smtp_request +## +## .. note:: Bro doesn't support the newer ETRN extension yet. event smtp_reply%(c: connection, is_orig: bool, code: count, cmd: string, msg: string, cont_resp: bool%) &group="smtp"; + +## Generated for DATA transmitted on SMTP sessions. This event is raised for +## subsequent chunks of raw data following the ``DATA`` SMTP command until the +## corresponding end marker ``.`` is seen. A handler may want to reassemble +## the pieces as they come in if stream-analysis is required. +## +## See `Wikipedia `__ +## for more information about the SMTP protocol. +## +## c: The connection. +## +## is_orig: True if the sender of the data is the originator of the TCP +## connection. +## +## data: The raw data. Note that the size of each chunk is undefined and +## depends on specifics of the underlying TCP connection. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_end_entity mime_entity_data mime_event mime_one_header mime_segment_data +## smtp_reply smtp_request skip_smtp_data +## +## .. note:: This event receives the unprocessed raw data. There is a separate +## set of ``mime_*`` events that strip out the outer MIME-layer of emails and +## provide structured access to their content. event smtp_data%(c: connection, is_orig: bool, data: string%) &group="smtp"; + +## Generated for unexpected activity on SMTP sessions. The SMTP analyzer tracks +## the state of SMTP sessions and reports commands and other activity with this +## event that it sees even though it would not expect so at the current point +## of the communication. +## +## See `Wikipedia `__ +## for more information about the SMTP protocol. +## +## c: The connection. +## +## is_orig: True if the sender of the unexpected activity is the originator of +## the TCP connection. +## +## msg: A descriptive message of what was unexpected. +## +## detail: The actual SMTP line triggering the event. +## +## .. bro:see:: smtp_data smtp_request smtp_reply event smtp_unexpected%(c: connection, is_orig: bool, msg: string, detail: string%) &group="smtp"; +## Generated when starting to parse an email MIME entity. MIME is a +## protocol-independent data format for encoding text and files, along with +## corresponding metadata, for transmission. Bro raises this event when it +## begins parsing a MIME entity extracted from an email protocol. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## .. bro:see:: mime_all_data mime_all_headers mime_content_hash mime_end_entity +## mime_entity_data mime_event mime_one_header mime_segment_data smtp_data +## http_begin_entity +## +## .. note:: Bro also extracts MIME entities from HTTP sessions. For those, +## however, it raises :bro:id:`http_begin_entity` instead. event mime_begin_entity%(c: connection%); -event mime_next_entity%(c: connection%); + +## Generated when finishing parsing an email MIME entity. 
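
A handler for the SMTP reply event above that reacts to permanent negative completion codes; the 5xx range check and output are illustrative assumptions (not part of this change)::

    event smtp_reply(c: connection, is_orig: bool, code: count, cmd: string,
                     msg: string, cont_resp: bool)
        {
        # Only act once the (possibly continued) reply line is complete.
        if ( ! cont_resp && code >= 500 && code < 600 )
            print fmt("SMTP error %d to %s on %s: %s", code, cmd, c$uid, msg);
        }
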
MIME is a +## protocol-independent data format for encoding text and files, along with +## corresponding metadata, for transmission. Bro raises this event when it +## finished parsing a MIME entity extracted from an email protocol. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_entity_data mime_event mime_one_header mime_segment_data smtp_data +## http_end_entity +## +## .. note:: Bro also extracts MIME entities from HTTP sessions. For those, +## however, it raises :bro:id:`http_end_entity` instead. event mime_end_entity%(c: connection%); + +## Generated for individual MIME headers extracted from email MIME +## entities. MIME is a protocol-independent data format for encoding text and +## files, along with corresponding metadata, for transmission. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## h: The parsed MIME header. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_end_entity mime_entity_data mime_event mime_segment_data +## http_header http_all_headers +## +## .. note:: Bro also extracts MIME headers from HTTP sessions. For those, +## however, it raises :bro:id:`http_header` instead. event mime_one_header%(c: connection, h: mime_header_rec%); + +## Generated for MIME headers extracted from email MIME entities, passing all +## headers at once. MIME is a protocol-independent data format for encoding +## text and files, along with corresponding metadata, for transmission. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## hlist: A *table* containing all headers extracted from the current entity. +## The table is indexed by the position of the header (1 for the first, +## 2 for the second, etc.). +## +## .. bro:see:: mime_all_data mime_begin_entity mime_content_hash mime_end_entity +## mime_entity_data mime_event mime_one_header mime_segment_data +## http_header http_all_headers +## +## .. note:: Bro also extracts MIME headers from HTTP sessions. For those, +## however, it raises :bro:id:`http_header` instead. event mime_all_headers%(c: connection, hlist: mime_header_list%); + +## Generated for chunks of decoded MIME data from email MIME entities. MIME +## is a protocol-independent data format for encoding text and files, along with +## corresponding metadata, for transmission. As Bro parses the data of an +## entity, it raises a sequence of these events, each coming as soon as a new +## chunk of data is available. In contrast, there is also +## :bro:id:`mime_entity_data`, which passes all of an entities data at once +## in a single block. While the latter is more convenient to handle, +## ``mime_segment_data`` is more efficient as Bro does not need to buffer +## the data. Thus, if possible, this event should be preferred. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## length: The length of *data*. +## +## data: The raw data of one segment of the current entity. +## +## .. 
bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_end_entity mime_entity_data mime_event mime_one_header http_entity_data +## mime_segment_length mime_segment_overlap_length +## +## .. note:: Bro also extracts MIME data from HTTP sessions. For those, +## however, it raises :bro:id:`http_entity_data` (sic!) instead. event mime_segment_data%(c: connection, length: count, data: string%); + +## Generated for data decoded from an email MIME entity. This event delivers +## the complete content of a single MIME entity. In contrast, there is also +## :bro:id:`mime_segment_data`, which passes on a sequence of data chunks as +## they come in. While ``mime_entity_data`` is more convenient to handle, +## ``mime_segment_data`` is more efficient as Bro does not need to buffer the +## data. Thus, if possible, the latter should be preferred. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## length: The length of *data*. +## +## data: The raw data of the complete entity. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_end_entity mime_event mime_one_header mime_segment_data +## +## .. note:: While Bro also decodes MIME entities extracted from HTTP +## sessions, there's no corresponding event for that currently. event mime_entity_data%(c: connection, length: count, data: string%); + +## Generated for passing on all data decoded from a single email MIME +## message. If an email message has more than one MIME entity, this event +## combines all their data into a single value for analysis. Note that because +## of the potentially significant buffering necessary, using this event can be +## expensive. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## length: The length of *data*. +## +## data: The raw data of all MIME entities concatenated. +## +## .. bro:see:: mime_all_headers mime_begin_entity mime_content_hash mime_end_entity +## mime_entity_data mime_event mime_one_header mime_segment_data +## +## .. note:: While Bro also decodes MIME entities extracted from HTTP +## sessions, there's no corresponding event for that currently. event mime_all_data%(c: connection, length: count, data: string%); + +## Generated for errors found when decoding email MIME entities. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. +## +## c: The connection. +## +## event_type: A string describing the general category of the problem found +## (e.g., ``illegal format``). +## +## detail: Further more detailed description of the error. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_content_hash +## mime_end_entity mime_entity_data mime_one_header mime_segment_data http_event +## +## .. note:: Bro also extracts MIME headers from HTTP sessions. For those, +## however, it raises :bro:id:`http_event` instead. event mime_event%(c: connection, event_type: string, detail: string%); + +## Generated for decoded MIME entities extracted from email messages, passing on +## their MD5 checksums. Bro computes the MD5 over the complete decoded data of +## each MIME entity. +## +## Bro's MIME analyzer for emails currently supports SMTP and POP3. See +## `Wikipedia `__ for more information +## about MIME. 
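
A sketch (not part of this change) of a consumer for the per-header MIME event above; ``to_upper`` only makes the name comparison case-insensitive, and the field names follow ``mime_header_rec``::

    event mime_one_header(c: connection, h: mime_header_rec)
        {
        if ( to_upper(h$name) == "CONTENT-TYPE" )
            print fmt("MIME entity on %s has Content-Type %s", c$uid, h$value);
        }
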
+## +## c: The connection. +## +## content_len: The length of the entity being hashed. +## +## hash_value: The MD5 hash. +## +## .. bro:see:: mime_all_data mime_all_headers mime_begin_entity mime_end_entity +## mime_entity_data mime_event mime_one_header mime_segment_data +## +## .. note:: While Bro also decodes MIME entities extracted from HTTP +## sessions, there's no corresponding event for that currently. event mime_content_hash%(c: connection, content_len: count, hash_value: string%); -# Generated for each RPC request / reply *pair* (if there is no reply, the event -# will be generated on timeout). +## Generated for RPC request/reply *pairs*. The RPC analyzer associates request +## and reply by their transaction identifiers and raises this event once both +## have been seen. If there's not a reply, this event will still be generated +## eventually on timeout. In that case, *status* will be set to +## :bro:enum:`RPC_TIMEOUT`. +## +## See `Wikipedia `__ for more information +## about the ONC RPC protocol. +## +## c: The connection. +## +## prog: The remote program to call. +## +## ver: The version of the remote program to call. +## +## proc: The procedure of the remote program to call. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. +## +## start_time: The time when the *call* was seen. +## +## call_len: The size of the *call_body* PDU. +## +## reply_len: The size of the *reply_body* PDU. +## +## .. bro:see:: rpc_call rpc_reply dce_rpc_bind dce_rpc_message dce_rpc_request +## dce_rpc_response rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event rpc_dialogue%(c: connection, prog: count, ver: count, proc: count, status: rpc_status, start_time: time, call_len: count, reply_len: count%); -# Generated for each (correctly formed) RPC_CALL message received. + +## Generated for RPC *call* messages. +## +## See `Wikipedia `__ for more information +## about the ONC RPC protocol. +## +## c: The connection. +## +## xid: The transaction identifier allowing to match requests with replies. +## +## prog: The remote program to call. +## +## ver: The version of the remote program to call. +## +## proc: The procedure of the remote program to call. +## +## call_len: The size of the *call_body* PDU. +## +## .. bro:see:: rpc_dialogue rpc_reply dce_rpc_bind dce_rpc_message dce_rpc_request +## dce_rpc_response rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event rpc_call%(c: connection, xid: count, prog: count, ver: count, proc: count, call_len: count%); -# Generated for each (correctly formed) RPC_REPLY message received. + +## Generated for RPC *reply* messages. +## +## See `Wikipedia `__ for more information +## about the ONC RPC protocol. +## +## c: The connection. +## +## xid: The transaction identifier allowing to match requests with replies. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. 
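
For the ONC RPC events above, the transaction identifier is what lets a script correlate a call with its reply; a minimal sketch using only the documented parameters (not part of this change)::

    event rpc_call(c: connection, xid: count, prog: count, ver: count, proc: count, call_len: count)
        {
        print fmt("RPC call xid=%d prog=%d ver=%d proc=%d (%d bytes)",
                  xid, prog, ver, proc, call_len);
        }
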
+## +## reply_len: The size of the *reply_body* PDU. +## +## .. bro:see:: rpc_call rpc_dialogue dce_rpc_bind dce_rpc_message dce_rpc_request +## dce_rpc_response rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event rpc_reply%(c: connection, xid: count, status: rpc_status, reply_len: count%); +## Generated for Portmapper requests of type *null*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit +## pm_request_dump pm_request_getport pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_request_null%(r: connection%); + +## Generated for Portmapper request/reply dialogues of type *set*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## m: The argument to the request. +## +## success: True if the request was successful, according to the corresponding +## reply. If no reply was seen, this will be false once the request +## times out. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit +## pm_request_dump pm_request_getport pm_request_null pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_request_set%(r: connection, m: pm_mapping, success: bool%); + +## Generated for Portmapper request/reply dialogues of type *unset*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## m: The argument to the request. +## +## success: True if the request was successful, according to the corresponding +## reply. If no reply was seen, this will be false once the request +## times out. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit +## pm_request_dump pm_request_getport pm_request_null pm_request_set rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. 
event pm_request_unset%(r: connection, m: pm_mapping, success: bool%); + +## Generated for Portmapper request/reply dialogues of type *getport*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## pr: The argument to the request. +## +## p: The port returned by the server. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit +## pm_request_dump pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_request_getport%(r: connection, pr: pm_port_request, p: port%); + +## Generated for Portmapper request/reply dialogues of type *dump*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## m: The mappings returned by the server. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_request_dump%(r: connection, m: pm_mappings%); + +## Generated for Portmapper request/reply dialogues of type *callit*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## call: The argument to the request. +## +## p: The port value returned by the call. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_attempt_unset pm_bad_port pm_request_dump +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_request_callit%(r: connection, call: pm_callit_request, p: port%); + +## Generated for failed Portmapper requests of type *null*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. +## +## .. 
bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit pm_request_dump +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_attempt_null%(r: connection, status: rpc_status%); + +## Generated for failed Portmapper requests of type *set*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. +## +## m: The argument to the original request. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_unset pm_bad_port pm_request_callit pm_request_dump +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_attempt_set%(r: connection, status: rpc_status, m: pm_mapping%); + +## Generated for failed Portmapper requests of type *unset*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. +## +## m: The argument to the original request. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_bad_port pm_request_callit pm_request_dump +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_attempt_unset%(r: connection, status: rpc_status, m: pm_mapping%); + +## Generated for failed Portmapper requests of type *getport*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. +## +## pr: The argument to the original request. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_null +## pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit pm_request_dump +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. 
todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_attempt_getport%(r: connection, status: rpc_status, pr: pm_port_request%); + +## Generated for failed Portmapper requests of type *dump*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_getport pm_attempt_null +## pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit pm_request_dump +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_attempt_dump%(r: connection, status: rpc_status%); + +## Generated for failed Portmapper requests of type *callit*. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## status: The status of the reply, which should be one of the index values of +## :bro:id:`RPC_status`. +## +## call: The argument to the original request. +## +## .. bro:see:: epm_map_response pm_attempt_dump pm_attempt_getport pm_attempt_null +## pm_attempt_set pm_attempt_unset pm_bad_port pm_request_callit pm_request_dump +## pm_request_getport pm_request_null pm_request_set pm_request_unset rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pm_attempt_callit%(r: connection, status: rpc_status, call: pm_callit_request%); + +## Generated for Portmapper requests or replies that include an invalid port +## number. Since ports are represented by unsigned 4-byte integers, they can +## stray outside the allowed range of 0--65535 by being >= 65536. If so, this +## event is generated. +## +## Portmapper is a service running on top of RPC. See `Wikipedia +## `__ for more information about the +## service. +## +## r: The RPC connection. +## +## bad_p: The invalid port value. +## +## .. bro:see:: epm_map_response pm_attempt_callit pm_attempt_dump pm_attempt_getport +## pm_attempt_null pm_attempt_set pm_attempt_unset pm_request_callit +## pm_request_dump pm_request_getport pm_request_null pm_request_set +## pm_request_unset rpc_call rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. 
event pm_bad_port%(r: connection, bad_p: count%); -# Events for the NFS analyzer. An event is generated if we have received a -# Call (request) / Response pair (or in case of a time out). info$rpc_stat and -# info$nfs_stat show whether the request was successful. The request record is -# always filled out, however, the reply record(s) might not be set or might only -# be partially set. See the comments for the record types in bro.init to see which -# reply fields are set when. +## Generated for NFSv3 request/reply dialogues of type *null*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_read nfs_proc_readdir nfs_proc_readlink +## nfs_proc_remove nfs_proc_rmdir nfs_proc_write nfs_reply_status rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_null%(c: connection, info: NFS3::info_t%); -event nfs_proc_not_implemented%(c: connection, info: NFS3::info_t, proc: NFS3::proc_t%); +## Generated for NFSv3 request/reply dialogues of type *getattr*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## fh: TODO. +## +## attrs: The attributes returned in the reply. The values may not be valid if +## the request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_remove nfs_proc_rmdir nfs_proc_write nfs_reply_status +## rpc_call rpc_dialogue rpc_reply file_mode +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_getattr%(c: connection, info: NFS3::info_t, fh: string, attrs: NFS3::fattr_t%); + +## Generated for NFSv3 request/reply dialogues of type *lookup*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: The arguments passed in the request. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. 
bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_remove nfs_proc_rmdir nfs_proc_write nfs_reply_status +## rpc_call rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_lookup%(c: connection, info: NFS3::info_t, req: NFS3::diropargs_t, rep: NFS3::lookup_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *read*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: The arguments passed in the request. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_remove nfs_proc_rmdir +## nfs_proc_write nfs_reply_status rpc_call rpc_dialogue rpc_reply +## NFS3::return_data NFS3::return_data_first_only NFS3::return_data_max +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_read%(c: connection, info: NFS3::info_t, req: NFS3::readargs_t, rep: NFS3::read_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *readlink*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## fh: The file handle passed in the request. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_remove nfs_proc_rmdir nfs_proc_write nfs_reply_status rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_readlink%(c: connection, info: NFS3::info_t, fh: string, rep: NFS3::readlink_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *write*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. 
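A minimal sketch of an nfs_proc_read handler follows. It assumes the NFS analyzer has been enabled as described in the todo entries, and it uses the info$rpc_stat field mentioned in the old comment above together with the offset and size fields of NFS3::readargs_t; the printed output is illustrative only.

    event nfs_proc_read(c: connection, info: NFS3::info_t, req: NFS3::readargs_t, rep: NFS3::read_reply_t)
        {
        # rpc_stat reports whether the request/reply dialogue succeeded at the RPC layer.
        print fmt("NFS read of %d bytes at offset %d (rpc_stat=%s)", req$size, req$offset, info$rpc_stat);
        }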
See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: TODO. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_remove nfs_proc_rmdir nfs_reply_status rpc_call +## rpc_dialogue rpc_reply NFS3::return_data NFS3::return_data_first_only +## NFS3::return_data_max +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_write%(c: connection, info: NFS3::info_t, req: NFS3::writeargs_t, rep: NFS3::write_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *create*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: TODO. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_remove nfs_proc_rmdir nfs_proc_write nfs_reply_status +## rpc_call rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_create%(c: connection, info: NFS3::info_t, req: NFS3::diropargs_t, rep: NFS3::newobj_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *mkdir*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: TODO. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_remove nfs_proc_rmdir nfs_proc_write nfs_reply_status +## rpc_call rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. 
event nfs_proc_mkdir%(c: connection, info: NFS3::info_t, req: NFS3::diropargs_t, rep: NFS3::newobj_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *remove*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: TODO. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_rmdir nfs_proc_write nfs_reply_status rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_remove%(c: connection, info: NFS3::info_t, req: NFS3::diropargs_t, rep: NFS3::delobj_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *rmdir*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: TODO. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_remove nfs_proc_write nfs_reply_status rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_rmdir%(c: connection, info: NFS3::info_t, req: NFS3::diropargs_t, rep: NFS3::delobj_reply_t%); + +## Generated for NFSv3 request/reply dialogues of type *readdir*. The event is +## generated once we have either seen both the request and its corresponding +## reply, or an unanswered request has timed out. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## req: TODO. +## +## rep: The response returned in the reply. The values may not be valid if the +## request was unsuccessful. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readlink +## nfs_proc_remove nfs_proc_rmdir nfs_proc_write nfs_reply_status rpc_call +## rpc_dialogue rpc_reply +## +## .. 
todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_proc_readdir%(c: connection, info: NFS3::info_t, req: NFS3::readdirargs_t, rep: NFS3::readdir_reply_t%); -# Generated for each NFS reply message we receive, giving just gives the status. +## Generated for NFSv3 request/reply dialogues of a type that Bro's NFSv3 +## analyzer does not implement. +## +## NFS is a service running on top of RPC. See `Wikipedia +## `__ for more +## information about the service. +## +## c: The RPC connection. +## +## info: Reports the status of the dialogue, along with some meta information. +## +## proc: The procedure called that Bro does not implement. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_null nfs_proc_read nfs_proc_readdir nfs_proc_readlink nfs_proc_remove +## nfs_proc_rmdir nfs_proc_write nfs_reply_status rpc_call rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. +event nfs_proc_not_implemented%(c: connection, info: NFS3::info_t, proc: NFS3::proc_t%); + +## Generated for each NFSv3 reply message received, reporting just the +## status included. +## +## n: The connection. +## +## info: Reports the status included in the reply. +## +## .. bro:see:: nfs_proc_create nfs_proc_getattr nfs_proc_lookup nfs_proc_mkdir +## nfs_proc_not_implemented nfs_proc_null nfs_proc_read nfs_proc_readdir +## nfs_proc_readlink nfs_proc_remove nfs_proc_rmdir nfs_proc_write rpc_call +## rpc_dialogue rpc_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event nfs_reply_status%(n: connection, info: NFS3::info_t%); +## Generated for all NTP messages. Different from many other of Bro's events, +## this one is generated for both client-side and server-side messages. +## +## See `Wikipedia `__ for +## more information about the NTP protocol. +## +## u: The connection record describing the corresponding UDP flow. +## +## msg: The parsed NTP message. +## +## excess: The raw bytes of any optional parts of the NTP packet. Bro does not +## further parse any optional fields. +## +## .. bro:see:: ntp_session_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event ntp_message%(u: connection, msg: ntp_msg, excess: string%); +## Generated for all NetBIOS SSN and DGM messages. Bro's NetBIOS analyzer +## processes the NetBIOS session service running on TCP port 139, and (despite +## its name!) the NetBIOS datagram service on UDP port 138. +## +## See `Wikipedia `__ for more information +## about NetBIOS. 
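Since ntp_message fires for client- and server-side packets alike, a handler sees both directions of an NTP exchange. A minimal sketch, assuming the NTP analyzer has been enabled per the todo entry (illustrative output only):

    event ntp_message(u: connection, msg: ntp_msg, excess: string)
        {
        # |excess| is the number of unparsed optional bytes in the packet.
        print fmt("NTP message %s:%s <-> %s:%s, %d excess bytes", u$id$orig_h, u$id$orig_p, u$id$resp_h, u$id$resp_p, |excess|);
        }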
`RFC 1002 `__ describes +## the packet format for NetBIOS over TCP/IP, which Bro parses. +## +## c: The connection, which may be TCP or UDP, depending on the type of the +## NetBIOS session. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## msg_type: The general type of message, as defined in Section 4.3.1 of +## `RFC 1002 `__. +## +## data_len: The length of the message's payload. +## +## .. bro:see:: netbios_session_accepted netbios_session_keepalive +## netbios_session_raw_message netbios_session_rejected netbios_session_request +## netbios_session_ret_arg_resp decode_netbios_name decode_netbios_name_type +## +## .. note:: These days, NetBIOS is primarily used as a transport mechanism for +## `SMB/CIFS `__. Bro's +## SMB analyzer parses both SMB-over-NetBIOS and SMB-over-TCP on port 445. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event netbios_session_message%(c: connection, is_orig: bool, msg_type: count, data_len: count%); + +## Generated for NetBIOS messages of type *session request*. Bro's NetBIOS +## analyzer processes the NetBIOS session service running on TCP port 139, and +## (despite its name!) the NetBIOS datagram service on UDP port 138. +## +## See `Wikipedia `__ for more information +## about NetBIOS. `RFC 1002 `__ describes +## the packet format for NetBIOS over TCP/IP, which Bro parses. +## +## c: The connection, which may be TCP or UDP, depending on the type of the +## NetBIOS session. +## +## msg: The raw payload of the message sent, excluding the common NetBIOS +## header. +## +## .. bro:see:: netbios_session_accepted netbios_session_keepalive +## netbios_session_message netbios_session_raw_message netbios_session_rejected +## netbios_session_ret_arg_resp decode_netbios_name decode_netbios_name_type +## +## .. note:: These days, NetBIOS is primarily used as a transport mechanism for +## `SMB/CIFS `__. Bro's +## SMB analyzer parses both SMB-over-NetBIOS and SMB-over-TCP on port 445. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event netbios_session_request%(c: connection, msg: string%); + +## Generated for NetBIOS messages of type *positive session response*. Bro's +## NetBIOS analyzer processes the NetBIOS session service running on TCP port +## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138. +## +## See `Wikipedia `__ for more information +## about NetBIOS. `RFC 1002 `__ describes +## the packet format for NetBIOS over TCP/IP, which Bro parses. +## +## c: The connection, which may be TCP or UDP, depending on the type of the +## NetBIOS session. +## +## msg: The raw payload of the message sent, excluding the common NetBIOS +## header. +## +## .. bro:see:: netbios_session_keepalive netbios_session_message +## netbios_session_raw_message netbios_session_rejected netbios_session_request +## netbios_session_ret_arg_resp decode_netbios_name decode_netbios_name_type +## +## .. note:: These days, NetBIOS is primarily used as a transport mechanism for +## `SMB/CIFS `__. 
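Assuming the NetBIOS analyzer has been enabled as described in the todo entries, a minimal netbios_session_request handler might look as follows; the printed message is illustrative only.

    event netbios_session_request(c: connection, msg: string)
        {
        # decode_netbios_name() (see the see-also list above) can turn the encoded
        # payload into a readable NetBIOS name.
        print fmt("NetBIOS session request from %s, %d payload bytes", c$id$orig_h, |msg|);
        }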
Bro's +## SMB analyzer parses both SMB-over-NetBIOS and SMB-over-TCP on port 445. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event netbios_session_accepted%(c: connection, msg: string%); + +## Generated for NetBIOS messages of type *negative session response*. Bro's +## NetBIOS analyzer processes the NetBIOS session service running on TCP port +## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138. +## +## See `Wikipedia `__ for more information +## about NetBIOS. `RFC 1002 `__ describes +## the packet format for NetBIOS over TCP/IP, which Bro parses. +## +## c: The connection, which may be TCP or UDP, depending on the type of the +## NetBIOS session. +## +## msg: The raw payload of the message sent, excluding the common NetBIOS +## header. +## +## .. bro:see:: netbios_session_accepted netbios_session_keepalive +## netbios_session_message netbios_session_raw_message netbios_session_request +## netbios_session_ret_arg_resp decode_netbios_name decode_netbios_name_type +## +## .. note:: These days, NetBIOS is primarily used as a transport mechanism for +## `SMB/CIFS `__. Bro's +## SMB analyzer parses both SMB-over-NetBIOS and SMB-over-TCP on port 445. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event netbios_session_rejected%(c: connection, msg: string%); + +## Generated for NetBIOS messages of type *session message* that are not +## carrying an SMB payload. +## +## NetBIOS analyzer processes the NetBIOS session service running on TCP port +## 139, and (despite its name!) the NetBIOS datagram service on UDP port 138. +## +## See `Wikipedia `__ for more information +## about NetBIOS. `RFC 1002 `__ describes +## the packet format for NetBIOS over TCP/IP, which Bro parses. +## +## c: The connection, which may be TCP or UDP, depending on the type of the +## NetBIOS session. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## msg: The raw payload of the message sent, excluding the common NetBIOS +## header (i.e., the ``user_data``). +## +## .. bro:see:: netbios_session_accepted netbios_session_keepalive +## netbios_session_message netbios_session_rejected netbios_session_request +## netbios_session_ret_arg_resp decode_netbios_name decode_netbios_name_type +## +## .. note:: These days, NetBIOS is primarily used as a transport mechanism for +## `SMB/CIFS `__. Bro's +## SMB analyzer parses both SMB-over-NetBIOS and SMB-over-TCP on port 445. +## +## .. todo:: This is an oddly named event. In fact, it's probably an odd event +## to have to begin with. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event netbios_session_raw_message%(c: connection, is_orig: bool, msg: string%); + +## Generated for NetBIOS messages of type *retarget response*. 
Bro's NetBIOS +## analyzer processes the NetBIOS session service running on TCP port 139, and +## (despite its name!) the NetBIOS datagram service on UDP port 138. +## +## See `Wikipedia `__ for more information +## about NetBIOS. `RFC 1002 `__ describes +## the packet format for NetBIOS over TCP/IP, which Bro parses. +## +## c: The connection, which may be TCP or UDP, depending on the type of the +## NetBIOS session. +## +## msg: The raw payload of the message sent, excluding the common NetBIOS +## header. +## +## .. bro:see:: netbios_session_accepted netbios_session_keepalive +## netbios_session_message netbios_session_raw_message netbios_session_rejected +## netbios_session_request decode_netbios_name decode_netbios_name_type +## +## .. note:: These days, NetBIOS is primarily used as a transport mechanism for +## `SMB/CIFS `__. Bro's +## SMB analyzer parses both SMB-over-NetBIOS and SMB-over-TCP on port 445. +## +## .. todo:: This is an oddly named event. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event netbios_session_ret_arg_resp%(c: connection, msg: string%); + +## Generated for NetBIOS messages of type *keep-alive*. Bro's NetBIOS analyzer +## processes the NetBIOS session service running on TCP port 139, and (despite +## its name!) the NetBIOS datagram service on UDP port 138. +## +## See `Wikipedia `__ for more information +## about NetBIOS. `RFC 1002 `__ describes +## the packet format for NetBIOS over TCP/IP, which Bro parses. +## +## c: The connection, which may be TCP or UDP, depending on the type of the +## NetBIOS session. +## +## msg: The raw payload of the message sent, excluding the common NetBIOS +## header. +## +## .. bro:see:: netbios_session_accepted netbios_session_message +## netbios_session_raw_message netbios_session_rejected netbios_session_request +## netbios_session_ret_arg_resp decode_netbios_name decode_netbios_name_type +## +## .. note:: These days, NetBIOS is primarily used as a transport mechanism for +## `SMB/CIFS `__. Bro's +## SMB analyzer parses both SMB-over-NetBIOS and SMB-over-TCP on port 445. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event netbios_session_keepalive%(c: connection, msg: string%); +## Generated for all SMB/CIFS messages. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## is_orig: True if the message was sent by the originator of the underlying +## transport-level connection. +## +## cmd: A string mnemonic of the SMB command code. +## +## body_length: The length of the SMB message body, i.e. the data starting after +## the SMB header. +## +## body: The raw SMB message body, i.e., the data starting after the SMB header. +## +## .. 
bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot +## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction smb_com_transaction2 +## smb_com_tree_connect_andx smb_com_tree_disconnect smb_com_write_andx smb_error +## smb_get_dfs_referral +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_message%(c: connection, hdr: smb_hdr, is_orig: bool, cmd: string, body_length: count, body: string%); + +## Generated for SMB/CIFS messages of type *tree connect andx*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## path: The ``path`` attribute specified in the message. +## +## service: The ``service`` attribute specified in the message. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot +## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction smb_com_transaction2 +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_tree_connect_andx%(c: connection, hdr: smb_hdr, path: string, service: string%); + +## Generated for SMB/CIFS messages of type *tree disconnect*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot +## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction smb_com_transaction2 +## smb_com_tree_connect_andx smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_tree_disconnect%(c: connection, hdr: smb_hdr%); + +## Generated for SMB/CIFS messages of type *nt create andx*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. 
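A minimal sketch of an smb_message handler, again assuming the SMB analyzer has been enabled per the todo entries; the output format is illustrative only.

    event smb_message(c: connection, hdr: smb_hdr, is_orig: bool, cmd: string, body_length: count, body: string)
        {
        # cmd is the mnemonic command string, body_length the size of the data after the header.
        print fmt("SMB %s: command %s, %d body bytes", is_orig ? "request" : "response", cmd, body_length);
        }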
+## +## name: The ``name`` attribute specified in the message. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_read_andx +## smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_nt_create_andx%(c: connection, hdr: smb_hdr, name: string%); + +## Generated for SMB/CIFS messages of type *nt transaction*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## trans: The parsed transaction header. +## +## data: The raw transaction data. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_pipe +## smb_com_trans_rap smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_transaction%(c: connection, hdr: smb_hdr, trans: smb_trans, data: smb_trans_data, is_orig: bool%); + +## Generated for SMB/CIFS messages of type *nt transaction 2*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## trans: The parsed transaction header. +## +## data: The raw transaction data. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_pipe +## smb_com_trans_rap smb_com_transaction smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_transaction2%(c: connection, hdr: smb_hdr, trans: smb_trans, data: smb_trans_data, is_orig: bool%); + +## Generated for SMB/CIFS messages of type *transaction mailslot*. 
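For the transaction-style messages, a handler receives the parsed transaction header and the raw transaction data. A minimal sketch for smb_com_transaction, with illustrative output only:

    event smb_com_transaction(c: connection, hdr: smb_hdr, trans: smb_trans, data: smb_trans_data, is_orig: bool)
        {
        print fmt("SMB transaction %s on %s:%s", is_orig ? "request" : "response", c$id$resp_h, c$id$resp_p);
        }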
+## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## trans: The parsed transaction header. +## +## data: The raw transaction data. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_trans_mailslot%(c: connection, hdr: smb_hdr, trans: smb_trans, data: smb_trans_data, is_orig: bool%); + +## Generated for SMB/CIFS messages of type *transaction rap*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## trans: The parsed transaction header. +## +## data: The raw transaction data. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot +## smb_com_trans_pipe smb_com_transaction smb_com_transaction2 +## smb_com_tree_connect_andx smb_com_tree_disconnect smb_com_write_andx smb_error +## smb_get_dfs_referral smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_trans_rap%(c: connection, hdr: smb_hdr, trans: smb_trans, data: smb_trans_data, is_orig: bool%); + +## Generated for SMB/CIFS messages of type *transaction pipe*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## trans: The parsed transaction header. +## +## data: The raw transaction data. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. 
todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_trans_pipe%(c: connection, hdr: smb_hdr, trans: smb_trans, data: smb_trans_data, is_orig: bool%); + +## Generated for SMB/CIFS messages of type *read andx*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## data: Always empty. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_read_andx%(c: connection, hdr: smb_hdr, data: string%); + +## Generated for SMB/CIFS messages of type *write andx*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## data: Always empty. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot +## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction smb_com_transaction2 +## smb_com_tree_connect_andx smb_com_tree_disconnect smb_error +## smb_get_dfs_referral smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_write_andx%(c: connection, hdr: smb_hdr, data: string%); + +## Generated for SMB/CIFS messages of type *get dfs referral*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## max_referral_level: The ``max_referral_level`` attribute specified in the +## message. +## +## file_name: The ``file_name`` attribute specified in the message. +## +## ..
bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot +## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction smb_com_transaction2 +## smb_com_tree_connect_andx smb_com_tree_disconnect smb_com_write_andx smb_error +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_get_dfs_referral%(c: connection, hdr: smb_hdr, max_referral_level: count, file_name: string%); + +## Generated for SMB/CIFS messages of type *negotiate*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate_response smb_com_nt_create_andx smb_com_read_andx smb_com_setup_andx +## smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap smb_com_transaction +## smb_com_transaction2 smb_com_tree_connect_andx smb_com_tree_disconnect +## smb_com_write_andx smb_error smb_get_dfs_referral smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_negotiate%(c: connection, hdr: smb_hdr%); + +## Generated for SMB/CIFS messages of type *negotiate response*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## dialect_index: The ``dialect`` indicated in the message. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_nt_create_andx smb_com_read_andx smb_com_setup_andx +## smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap smb_com_transaction +## smb_com_transaction2 smb_com_tree_connect_andx smb_com_tree_disconnect +## smb_com_write_andx smb_error smb_get_dfs_referral smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_negotiate_response%(c: connection, hdr: smb_hdr, dialect_index: count%); + +## Generated for SMB/CIFS messages of type *setup andx*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## .. 
bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_setup_andx%(c: connection, hdr: smb_hdr%); + +## Generated for SMB/CIFS messages of type *generic andx*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## .. bro:see:: smb_com_close smb_com_logoff_andx smb_com_negotiate +## smb_com_negotiate_response smb_com_nt_create_andx smb_com_read_andx +## smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_generic_andx%(c: connection, hdr: smb_hdr%); + +## Generated for SMB/CIFS messages of type *close*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## .. bro:see:: smb_com_generic_andx smb_com_logoff_andx smb_com_negotiate +## smb_com_negotiate_response smb_com_nt_create_andx smb_com_read_andx +## smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_close%(c: connection, hdr: smb_hdr%); + +## Generated for SMB/CIFS messages of type *logoff andx*. +## +## See `Wikipedia `__ for +## more information about the SMB/CIFS protocol. Bro's SMB/CIFS analyzer parses +## both SMB-over-NetBIOS on ports 138/139 and SMB-over-TCP on port 445. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## .. 
bro:see:: smb_com_close smb_com_generic_andx smb_com_negotiate +## smb_com_negotiate_response smb_com_nt_create_andx smb_com_read_andx +## smb_com_setup_andx smb_com_trans_mailslot smb_com_trans_pipe smb_com_trans_rap +## smb_com_transaction smb_com_transaction2 smb_com_tree_connect_andx +## smb_com_tree_disconnect smb_com_write_andx smb_error smb_get_dfs_referral +## smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_com_logoff_andx%(c: connection, hdr: smb_hdr%); + +## Generated for SMB/CIFS messages that indicate an error. This event is +## triggered by an SMB header including a status that signals an error. +## +## c: The connection. +## +## hdr: The parsed header of the SMB message. +## +## cmd: The SMB command code. +## +## cmd_str: A string mnemonic of the SMB command code. +## +## data: The raw SMB message body, i.e., the data starting after the SMB header. +## +## .. bro:see:: smb_com_close smb_com_generic_andx smb_com_logoff_andx +## smb_com_negotiate smb_com_negotiate_response smb_com_nt_create_andx +## smb_com_read_andx smb_com_setup_andx smb_com_trans_mailslot +## smb_com_trans_pipe smb_com_trans_rap smb_com_transaction smb_com_transaction2 +## smb_com_tree_connect_andx smb_com_tree_disconnect smb_com_write_andx +## smb_get_dfs_referral smb_message +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event smb_error%(c: connection, hdr: smb_hdr, cmd: count, cmd_str: string, data: string%); +## Generated for all DNS messages. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## is_orig: True if the message was sent by the originator of the connection. +## +## msg: The parsed DNS message header. +## +## len: The length of the message's raw representation (i.e., the DNS payload). +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_query_reply dns_rejected +## dns_request non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth event dns_message%(c: connection, is_orig: bool, msg: dns_msg, len: count%) &group="dns"; + +## Generated for DNS requests. For requests with multiple queries, this event +## is raised once for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## query: The queried name. 
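## .. note:: A minimal handler sketch for this event; the printed message is
##    only a placeholder action::
##
##       event dns_request(c: connection, msg: dns_msg, query: string,
##                         qtype: count, qclass: count)
##           {
##           print fmt("DNS query %s (qtype %d) from %s", query, qtype, c$id$orig_h);
##           }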
+## +## qtype: The queried resource record type. +## +## qclass: The queried resource record class. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth event dns_request%(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count%) &group="dns"; -event dns_full_request%(%) &group="dns"; + +## Generated for DNS replies that reject a query. This event is raised if a DNS +## reply either indicates failure via its status code or does not pass on any +## answers to a query. Note that all of the event's parameters are parsed out of +## the reply; there's no stateful correlation with the query. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## query: The queried name. +## +## qtype: The queried resource record type. +## +## qclass: The queried resource record class. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_request non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth event dns_rejected%(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count%) &group="dns"; -event dns_A_reply%(c: connection, msg: dns_msg, ans: dns_answer, a: addr%) &group="dns"; -event dns_AAAA_reply%(c: connection, msg: dns_msg, ans: dns_answer, a: addr, astr: string%) &group="dns"; -event dns_NS_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string%) &group="dns"; -event dns_CNAME_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string%) &group="dns"; -event dns_PTR_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string%) &group="dns"; -event dns_SOA_reply%(c: connection, msg: dns_msg, ans: dns_answer, soa: dns_soa%) &group="dns"; -event dns_WKS_reply%(c: connection, msg: dns_msg, ans: dns_answer%) &group="dns"; -event dns_HINFO_reply%(c: connection, msg: dns_msg, ans: dns_answer%) &group="dns"; -event dns_MX_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string, preference: count%) &group="dns"; -event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, str: string%) &group="dns"; -event dns_SRV_reply%(c: connection, msg: dns_msg, ans: dns_answer%) &group="dns"; -event dns_EDNS%(c: connection, msg: dns_msg, ans: dns_answer%) &group="dns"; -event dns_EDNS_addl%(c: connection, msg: dns_msg, ans: dns_edns_additional%) &group="dns"; -event dns_TSIG_addl%(c: connection, msg: dns_msg, ans: dns_tsig_additional%) &group="dns"; - -# Generated at the end of processing a DNS packet. 
-event dns_end%(c: connection, msg: dns_msg%) &group="dns"; +## Generated for DNS replies with an *ok* status code but no question section. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## query: The queried name. +## +## qtype: The queried resource record type. +## +## qclass: The queried resource record class. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_rejected +## dns_request non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth event dns_query_reply%(c: connection, msg: dns_msg, query: string, qtype: count, qclass: count%) &group="dns"; -# Generated when a port 53 UDP message cannot be parsed as a DNS request. +## Generated when the DNS analyzer processes what seems to be a non-DNS packet. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The raw DNS payload. +## +## .. note:: This event is deprecated and superseded by Bro's dynamic protocol +## detection framework. event non_dns_request%(c: connection, msg: string%) &group="dns"; +## Generated for DNS replies of type *A*. For replies with multiple answers, an +## individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## a: The address returned by the reply. +## +## .. bro:see:: dns_AAAA_reply dns_A6_reply dns_CNAME_reply dns_EDNS_addl dns_HINFO_reply +## dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply +## dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_A_reply%(c: connection, msg: dns_msg, ans: dns_answer, a: addr%) &group="dns"; + +## Generated for DNS replies of type *AAAA*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## a: The address returned by the reply. +## +## .. 
bro:see:: dns_A_reply dns_A6_reply dns_CNAME_reply dns_EDNS_addl dns_HINFO_reply dns_MX_reply +## dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply dns_TSIG_addl +## dns_TXT_reply dns_WKS_reply dns_end dns_full_request dns_mapping_altered +## dns_mapping_lost_name dns_mapping_new_name dns_mapping_unverified +## dns_mapping_valid dns_message dns_query_reply dns_rejected dns_request +## non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_AAAA_reply%(c: connection, msg: dns_msg, ans: dns_answer, a: addr%) &group="dns"; + +## Generated for DNS replies of type *A6*. For replies with multiple answers, an +## individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## a: The address returned by the reply. +## +## .. bro:see:: dns_A_reply dns_AAAA_reply dns_CNAME_reply dns_EDNS_addl dns_HINFO_reply dns_MX_reply +## dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply dns_TSIG_addl +## dns_TXT_reply dns_WKS_reply dns_end dns_full_request dns_mapping_altered +## dns_mapping_lost_name dns_mapping_new_name dns_mapping_unverified +## dns_mapping_valid dns_message dns_query_reply dns_rejected dns_request +## non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_A6_reply%(c: connection, msg: dns_msg, ans: dns_answer, a: addr%) &group="dns"; + +## Generated for DNS replies of type *NS*. For replies with multiple answers, an +## individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## name: The name returned by the reply. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply +## dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_NS_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string%) &group="dns"; + +## Generated for DNS replies of type *CNAME*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## name: The name returned by the reply. +## +## .. 
bro:see:: dns_AAAA_reply dns_A_reply dns_EDNS_addl dns_HINFO_reply dns_MX_reply +## dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply dns_TSIG_addl +## dns_TXT_reply dns_WKS_reply dns_end dns_full_request dns_mapping_altered +## dns_mapping_lost_name dns_mapping_new_name dns_mapping_unverified +## dns_mapping_valid dns_message dns_query_reply dns_rejected dns_request +## non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_CNAME_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string%) &group="dns"; + +## Generated for DNS replies of type *PTR*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## name: The name returned by the reply. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_SOA_reply dns_SRV_reply +## dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_PTR_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string%) &group="dns"; + +## Generated for DNS replies of type *SOA*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## soa: The parsed SOA value. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SRV_reply +## dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_SOA_reply%(c: connection, msg: dns_msg, ans: dns_answer, soa: dns_soa%) &group="dns"; + +## Generated for DNS replies of type *WKS*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## ..
bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_WKS_reply%(c: connection, msg: dns_msg, ans: dns_answer%) &group="dns"; + +## Generated for DNS replies of type *HINFO*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl dns_MX_reply +## dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply dns_TSIG_addl +## dns_TXT_reply dns_WKS_reply dns_end dns_full_request dns_mapping_altered +## dns_mapping_lost_name dns_mapping_new_name dns_mapping_unverified +## dns_mapping_valid dns_message dns_query_reply dns_rejected dns_request +## non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_HINFO_reply%(c: connection, msg: dns_msg, ans: dns_answer%) &group="dns"; + +## Generated for DNS replies of type *MX*. For replies with multiple answers, an +## individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## name: The name returned by the reply. +## +## preference: The preference for *name* specified by the reply. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply +## dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_MX_reply%(c: connection, msg: dns_msg, ans: dns_answer, name: string, preference: count%) &group="dns"; + +## Generated for DNS replies of type *TXT*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## str: The textual information returned by the reply. +## +## .. 
bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_TXT_reply%(c: connection, msg: dns_msg, ans: dns_answer, str: string%) &group="dns"; + +## Generated for DNS replies of type *SRV*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The type-independent part of the parsed answer record. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_SRV_reply%(c: connection, msg: dns_msg, ans: dns_answer%) &group="dns"; + +## Generated for DNS replies of type *EDNS*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The parsed EDNS reply. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_HINFO_reply dns_MX_reply +## dns_NS_reply dns_PTR_reply dns_SOA_reply dns_SRV_reply dns_TSIG_addl +## dns_TXT_reply dns_WKS_reply dns_end dns_full_request dns_mapping_altered +## dns_mapping_lost_name dns_mapping_new_name dns_mapping_unverified +## dns_mapping_valid dns_message dns_query_reply dns_rejected dns_request +## non_dns_request dns_max_queries dns_session_timeout dns_skip_addl +## dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_EDNS_addl%(c: connection, msg: dns_msg, ans: dns_edns_additional%) &group="dns"; + +## Generated for DNS replies of type *TSIG*. For replies with multiple answers, +## an individual event of the corresponding type is raised for each. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## ans: The parsed TSIG reply. +## +## .. 
bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TXT_reply dns_WKS_reply dns_end dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_TSIG_addl%(c: connection, msg: dns_msg, ans: dns_tsig_additional%) &group="dns"; + +## Generated at the end of processing a DNS packet. This event is the last +## ``dns_*`` event that will be raised for a DNS query/reply and signals that +## all resource records have been passed on. +## +## See `Wikipedia `__ for more +## information about the DNS protocol. Bro analyzes both UDP and TCP DNS +## sessions. +## +## c: The connection, which may be UDP or TCP depending on the type of the +## transport-layer session being analyzed. +## +## msg: The parsed DNS message header. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_full_request +## dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +event dns_end%(c: connection, msg: dns_msg%) &group="dns"; + +## Generated for DHCP messages of type *discover*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## req_addr: The specific address requested by the client. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request dns_max_queries dns_session_timeout +## dns_skip_addl dns_skip_all_addl dns_skip_all_auth dns_skip_auth +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_discover%(c: connection, msg: dhcp_msg, req_addr: addr%); + +## Generated for DHCP messages of type *offer*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## mask: The subnet mask specified by the message. +## +## router: The list of routers specified by the message. +## +## lease: The lease interval specified by the message.
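## .. note:: A minimal handler sketch for this event; printing the offering
##    server and netmask is only a placeholder action::
##
##       event dhcp_offer(c: connection, msg: dhcp_msg, mask: addr,
##                        router: dhcp_router_list, lease: interval, serv_addr: addr)
##           {
##           print fmt("DHCP offer from %s (netmask %s)", serv_addr, mask);
##           }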
+## +## serv_addr: The server address specified by the message. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_offer%(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr%); + +## Generated for DHCP messages of type *request*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## req_addr: The client address specified by the message. +## +## serv_addr: The server address specified by the message. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_request%(c: connection, msg: dhcp_msg, req_addr: addr, serv_addr: addr%); + +## Generated for DHCP messages of type *decline*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. 
todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_decline%(c: connection, msg: dhcp_msg%); + +## Generated for DHCP messages of type *acknowledgment*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## mask: The subnet mask specified by the message. +## +## router: The list of routers specified by the message. +## +## lease: The lease interval specified by the message. +## +## serv_addr: The server address specified by the message. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_ack%(c: connection, msg: dhcp_msg, mask: addr, router: dhcp_router_list, lease: interval, serv_addr: addr%); + +## Generated for DHCP messages of type *negative acknowledgment*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_nak%(c: connection, msg: dhcp_msg%); + +## Generated for DHCP messages of type *release*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## ..
bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_release%(c: connection, msg: dhcp_msg%); + +## Generated for DHCP messages of type *inform*. +## +## See `Wikipedia +## `__ for +## more information about the DHCP protocol. +## +## c: The connection record describing the underlying UDP flow. +## +## msg: The parsed type-independent part of the DHCP message. +## +## .. bro:see:: dns_AAAA_reply dns_A_reply dns_CNAME_reply dns_EDNS_addl +## dns_HINFO_reply dns_MX_reply dns_NS_reply dns_PTR_reply dns_SOA_reply +## dns_SRV_reply dns_TSIG_addl dns_TXT_reply dns_WKS_reply dns_end +## dns_full_request dns_mapping_altered dns_mapping_lost_name dns_mapping_new_name +## dns_mapping_unverified dns_mapping_valid dns_message dns_query_reply +## dns_rejected dns_request non_dns_request +## +## .. note:: Bro does not support broadcast packets (as used by the DHCP +## protocol). It treats broadcast addresses just like any other and +## associates packets into transport-level flows in the same way as usual. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dhcp_inform%(c: connection, msg: dhcp_msg%); +## Generated for HTTP requests. Bro supports persistent and pipelined HTTP +## sessions and raises corresponding events as it parses client/server +## dialogues. This event is generated as soon as a request's initial line has +## been parsed, and before any :bro:id:`http_header` events are raised. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## method: The HTTP method extracted from the request (e.g., ``GET``, ``POST``). +## +## original_URI: The unprocessed URI as specified in the request. +## +## unescaped_URI: The URI with all percent-encodings decoded. +## +## version: The version number specified in the request (e.g., ``1.1``). +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity +## http_entity_data http_event http_header http_message_done http_reply http_stats +## truncate_http_URI event http_request%(c: connection, method: string, original_URI: string, unescaped_URI: string, version: string%) &group="http-request"; + +## Generated for HTTP replies. Bro supports persistent and pipelined HTTP +## sessions and raises corresponding events as it parses client/server +## dialogues. 
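## .. note:: A minimal handler sketch for this event; the status-code check and
##    the printed text are arbitrary placeholder choices::
##
##       event http_reply(c: connection, version: string, code: count, reason: string)
##           {
##           if ( code >= 500 )
##               print fmt("HTTP %d (%s) from %s", code, reason, c$id$resp_h);
##           }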
This event is generated as soon as a reply's initial line has +## been parsed, and before any :bro:id:`http_header` events are raised. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## version: The version number specified in the reply (e.g., ``1.1``). +## +## code: The numerical response code returned by the server. +## +## reason: The textual description returned by the server along with *code*. +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity +## http_entity_data http_event http_header http_message_done http_request +## http_stats event http_reply%(c: connection, version: string, code: count, reason: string%) &group="http-reply"; + +## Generated for HTTP headers. Bro supports persistent and pipelined HTTP +## sessions and raises corresponding events as it parses client/server +## dialogues. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## is_orig: True if the header was sent by the originator of the TCP connection. +## +## name: The name of the header. +## +## value: The value of the header. +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity +## http_entity_data http_event http_message_done http_reply http_request +## http_stats +## +## .. note:: This event is also raised for headers found in nested body +## entities. event http_header%(c: connection, is_orig: bool, name: string, value: string%) &group="http-header"; + +## Generated for HTTP headers, passing on all headers of an HTTP message at +## once. Bro supports persistent and pipelined HTTP sessions and raises +## corresponding events as it parses client/server dialogues. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## is_orig: True if the header was sent by the originator of the TCP connection. +## +## hlist: A *table* containing all headers extracted from the current entity. +## The table is indexed by the position of the header (1 for the first, +## 2 for the second, etc.). +## +## .. bro:see:: http_begin_entity http_content_type http_end_entity http_entity_data +## http_event http_header http_message_done http_reply http_request http_stats +## +## .. note:: This event is also raised for headers found in nested body +## entities. event http_all_headers%(c: connection, is_orig: bool, hlist: mime_header_list%) &group="http-header"; + +## Generated when starting to parse an HTTP body entity. This event is generated +## at least once for each non-empty (client or server) HTTP body; and +## potentially more than once if the body contains further nested MIME +## entities. Bro raises this event just before it starts parsing each entity's +## content. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## is_orig: True if the entity was sent by the originator of the TCP +## connection. +## +## .. bro:see:: http_all_headers http_content_type http_end_entity http_entity_data +## http_event http_header http_message_done http_reply http_request http_stats +## mime_begin_entity event http_begin_entity%(c: connection, is_orig: bool%) &group="http-body"; + +## Generated when finishing parsing an HTTP body entity. This event is generated +## at least once for each non-empty (client or server) HTTP body; and +## potentially more than once if the body contains further nested MIME +## entities. 
Bro raises this event at the point when it has finished parsing an +## entity's content. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## is_orig: True if the entity was sent by the originator of the TCP +## connection. +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_entity_data +## http_event http_header http_message_done http_reply http_request +## http_stats mime_end_entity event http_end_entity%(c: connection, is_orig: bool%) &group="http-body"; -event http_content_type%(c: connection, is_orig: bool, ty: string, subty: string%) &group="http-body"; + +## Generated when parsing an HTTP body entity, passing on the data. This event +## can potentially be raised many times for each entity, each time passing a +## chunk of the data of not further defined size. +## +## A common idiom for using this event is to first *reassemble* the data +## at the scripting layer by concatenating it to a successively growing +## string; and only perform further content analysis once the corresponding +## :bro:id:`http_end_entity` event has been raised. Note, however, that doing so +## can be quite expensive for HTTP transfers. At the very least, one should +## impose an upper size limit on how much data is being buffered. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## is_orig: True if the entity was sent by the originator of the TCP +## connection. +## +## length: The length of *data*. +## +## data: One chunk of raw entity data. +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity +## http_event http_header http_message_done http_reply http_request http_stats +## mime_entity_data http_entity_data_delivery_size skip_http_data event http_entity_data%(c: connection, is_orig: bool, length: count, data: string%) &group="http-body"; + +## Generated for reporting an HTTP body's content type. This event is +## generated at the end of parsing an HTTP header, passing on the MIME +## type as specified by the ``Content-Type`` header. If that header is +## missing, this event is still raised with a default value of ``text/plain``. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## is_orig: True if the entity was sent by the originator of the TCP +## connection. +## +## ty: The main type. +## +## subty: The subtype. +## +## .. bro:see:: http_all_headers http_begin_entity http_end_entity http_entity_data +## http_event http_header http_message_done http_reply http_request http_stats +## +## .. note:: This event is also raised for headers found in nested body +## entities. +event http_content_type%(c: connection, is_orig: bool, ty: string, subty: string%) &group="http-body"; + +## Generated once at the end of parsing an HTTP message. Bro supports persistent +## and pipelined HTTP sessions and raises corresponding events as it parses +## client/server dialogues. A "message" is one top-level HTTP entity, such as a +## complete request or reply. Each message can have further nested sub-entities +## inside. This event is raised once all sub-entities belonging to a top-level +## message have been processed (and their corresponding ``http_entity_*`` events +## generated). +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## is_orig: True if the entity was sent by the originator of the TCP +## connection.
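## .. note:: A rough sketch of the buffering idiom described above for
##    :bro:id:`http_entity_data`, with the analysis deferred until the message
##    is complete; the table name and the 1 MB cap are arbitrary choices::
##
##       global bodies: table[conn_id] of string &default="";
##
##       event http_entity_data(c: connection, is_orig: bool, length: count, data: string)
##           {
##           # Buffer only up to 1 MB per connection to bound memory usage.
##           if ( |bodies[c$id]| < 1048576 )
##               bodies[c$id] = cat(bodies[c$id], data);
##           }
##
##       event http_message_done(c: connection, is_orig: bool, stat: http_message_stat)
##           {
##           if ( c$id in bodies )
##               {
##               # ... analyze the reassembled body here ...
##               delete bodies[c$id];
##               }
##           }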
+## +## stat: Further meta information about the message. +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity +## http_entity_data http_event http_header http_reply http_request http_stats event http_message_done%(c: connection, is_orig: bool, stat: http_message_stat%) &group="http-body"; + +## Generated for errors found when decoding HTTP requests or replies. +## +## See `Wikipedia `__ +## for more information about the HTTP protocol. +## +## c: The connection. +## +## event_type: A string describing the general category of the problem found +## (e.g., ``illegal format``). +## +## detail: Further more detailed description of the error. +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity +## http_entity_data http_header http_message_done http_reply http_request +## http_stats mime_event event http_event%(c: connection, event_type: string, detail: string%); + +## Generated at the end of an HTTP session to report statistics about it. This +## event is raised after all of an HTTP session's requests and replies have been +## fully processed. +## +## c: The connection. +## +## stats: Statistics summarizing HTTP-level properties of the finished +## connection. +## +## .. bro:see:: http_all_headers http_begin_entity http_content_type http_end_entity +## http_entity_data http_event http_header http_message_done http_reply +## http_request event http_stats%(c: connection, stats: http_stats_rec%); +## Generated when seeing an SSH client's version identification. The SSH +## protocol starts with a clear-text handshake message that reports client and +## server protocol/software versions. This event provides access to what the +## client sent. +## +## +## See `Wikipedia `__ for more +## information about the SSH protocol. +## +## c: The connection. +## +## version: The version string the client sent (e.g., `SSH-2.0-libssh-0.11`). +## +## .. bro:see:: ssh_server_version +## +## .. note:: As everything after the initial version handshake proceeds +## encrypted, Bro cannot further analyze SSH sessions. event ssh_client_version%(c: connection, version: string%); + +## Generated when seeing an SSH server's version identification. The SSH +## protocol starts with a clear-text handshake message that reports client and +## server protocol/software versions. This event provides access to what the +## server sent. +## +## See `Wikipedia `__ for more +## information about the SSH protocol. +## +## c: The connection. +## +## version: The version string the server sent (e.g., +## ``SSH-1.99-OpenSSH_3.9p1``). +## +## .. bro:see:: ssh_client_version +## +## .. note:: As everything coming after the initial version handshake proceeds +## encrypted, Bro cannot further analyze SSH sessions. event ssh_server_version%(c: connection, version: string%); +## Generated for an SSL/TLS client's initial *hello* message. SSL/TLS sessions +## start with an unencrypted handshake, and Bro extracts as much information out +## of that as it can. This event provides access to the initial information +## sent by the client. +## +## See `Wikipedia `__ for +## more information about the SSL/TLS protocol. +## +## c: The connection. +## +## version: The protocol version as extracted from the client's message. The +## values are standardized as part of the SSL/TLS protocol. The +## :bro:id:`SSL::version_strings` table maps them to descriptive names. +## +## possible_ts: The current time as sent by the client. 
Note that SSL/TLS does +## not require clocks to be set correctly, so treat with care. +## +## session_id: The session ID sent by the client (if any). +## +## ciphers: The list of ciphers the client offered to use. The values are +## standardized as part of the SSL/TLS protocol. The +## :bro:id:`SSL::cipher_desc` table maps them to descriptive names. +## +## .. bro:see:: ssl_alert ssl_established ssl_extension ssl_server_hello +## ssl_session_ticket_handshake x509_certificate x509_error x509_extension event ssl_client_hello%(c: connection, version: count, possible_ts: time, session_id: string, ciphers: count_set%); + +## Generated for an SSL/TLS server's initial *hello* message. SSL/TLS sessions +## start with an unencrypted handshake, and Bro extracts as much information out +## of that as it can. This event provides access to the initial information +## sent by the server. +## +## See `Wikipedia `__ for +## more information about the SSL/TLS protocol. +## +## c: The connection. +## +## version: The protocol version as extracted from the server's message. +## The values are standardized as part of the SSL/TLS protocol. The +## :bro:id:`SSL::version_strings` table maps them to descriptive names. +## +## possible_ts: The current time as sent by the server. Note that SSL/TLS does +## not require clocks to be set correctly, so treat with care. +## +## session_id: The session ID as sent back by the server (if any). +## +## cipher: The cipher chosen by the server. The values are standardized as part +## of the SSL/TLS protocol. The :bro:id:`SSL::cipher_desc` table maps +## them to descriptive names. +## +## comp_method: The compression method chosen by the server. The values are +## standardized as part of the SSL/TLS protocol. +## +## .. bro:see:: ssl_alert ssl_client_hello ssl_established ssl_extension +## ssl_session_ticket_handshake x509_certificate x509_error x509_extension event ssl_server_hello%(c: connection, version: count, possible_ts: time, session_id: string, cipher: count, comp_method: count%); -event ssl_extension%(c: connection, code: count, val: string%); + +## Generated for SSL/TLS extensions seen in an initial handshake. SSL/TLS +## sessions start with an unencrypted handshake, and Bro extracts as much +## information out of that as it can. This event provides access to any +## extensions either side sends as part of an extended *hello* message. +## +## c: The connection. +## +## is_orig: True if event is raised for originator side of the connection. +## +## code: The numerical code of the extension. The values are standardized as +## part of the SSL/TLS protocol. The :bro:id:`SSL::extensions` table maps +## them to descriptive names. +## +## val: The raw extension value that was sent in the message. +## +## .. bro:see:: ssl_alert ssl_client_hello ssl_established ssl_server_hello +## ssl_session_ticket_handshake x509_certificate x509_error x509_extension +event ssl_extension%(c: connection, is_orig: bool, code: count, val: string%); + +## Generated at the end of an SSL/TLS handshake. SSL/TLS sessions start with +## an unencrypted handshake, and Bro extracts as much information out of that +## as it can. This event signals the time when an SSL/TLS connection has finished the +## handshake and its endpoints consider it as fully established. Typically, +## everything from now on will be encrypted. +## +## See `Wikipedia `__ for +## more information about the SSL/TLS protocol. +## +## c: The connection. +## +## ..
bro:see:: ssl_alert ssl_client_hello ssl_extension ssl_server_hello +## ssl_session_ticket_handshake x509_certificate x509_error x509_extension event ssl_established%(c: connection%); -event ssl_alert%(c: connection, level: count, desc: count%); -event x509_certificate%(c: connection, cert: X509, is_server: bool, chain_idx: count, chain_len: count, der_cert: string%); -event x509_extension%(c: connection, data: string%); -event x509_error%(c: connection, err: count%); +## Generated for SSL/TLS alert records. SSL/TLS sessions start with an +## unencrypted handshake, and Bro extracts as much information out of that as +## it can. If during that handshake, an endpoint encounters a fatal error, it +## sends an *alert* record, that in turn triggers this event. After an *alert*, +## any endpoint may close the connection immediately. +## +## See `Wikipedia `__ for +## more information about the SSL/TLS protocol. +## +## c: The connection. +## +## is_orig: True if event is raised for originator side of the connection. +## +## level: The severity level, as sent in the *alert*. The values are defined as +## part of the SSL/TLS protocol. +## +## desc: A numerical value identifying the cause of the *alert*. The values are +## defined as part of the SSL/TLS protocol. +## +## .. bro:see:: ssl_client_hello ssl_established ssl_extension ssl_server_hello +## ssl_session_ticket_handshake x509_certificate x509_error x509_extension +event ssl_alert%(c: connection, is_orig: bool, level: count, desc: count%); -event stp_create_endp%(c: connection, e: int, is_orig: bool%); -event stp_resume_endp%(e: int%); -event stp_correlate_pair%(e1: int, e2: int%); -event stp_remove_pair%(e1: int, e2: int%); -event stp_remove_endp%(e: int%); +## Generated for SSL/TLS handshake messages that are a part of the +## stateless-server session resumption mechanism. SSL/TLS sessions start with +## an unencrypted handshake, and Bro extracts as much information out of that +## as it can. This event is raised when an SSL/TLS server passes a session +## ticket to the client that can later be used for resuming the session. The +## mechanism is described in :rfc:`4507` +## +## See `Wikipedia `__ for +## more information about the SSL/TLS protocol. +## +## c: The connection. +## +## ticket_lifetime_hint: A hint from the server about how long the ticket +## should be stored by the client. +## +## ticket: The raw ticket data. +## +## .. bro:see:: ssl_client_hello ssl_established ssl_extension ssl_server_hello +## x509_certificate x509_error x509_extension ssl_alert +event ssl_session_ticket_handshake%(c: connection, ticket_lifetime_hint: count, ticket: string%); +## Generated for X509 certificates seen in SSL/TLS connections. During the +## initial SSL/TLS handshake, certificates are exchanged in the clear. Bro +## raises this event for each certificate seen (including both a site's primary +## cert, and further certs sent as part of the validation chain). +## +## See `Wikipedia `__ for more information +## about the X.509 format. +## +## c: The connection. +## +## is_orig: True if event is raised for originator side of the connection. +## +## cert: The parsed certificate. +## +## chain_idx: The index in the validation chain that this cert has. Index zero +## indicates an endpoint's primary cert, while higher indices +## indicate the place in the validation chain (which has length +## *chain_len*). +## +## chain_len: The total length of the validation chain that this cert is part +## of. 
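## .. note:: A minimal handler sketch for this event that looks only at the
##    server's end-entity certificate; the printed text is a placeholder::
##
##       event x509_certificate(c: connection, is_orig: bool, cert: X509,
##                              chain_idx: count, chain_len: count, der_cert: string)
##           {
##           if ( ! is_orig && chain_idx == 0 )
##               print fmt("server certificate %s from %s", cert$subject, c$id$resp_h);
##           }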
+## +## der_cert: The complete cert encoded in `DER +## `__ +## format. +## +## .. bro:see:: ssl_alert ssl_client_hello ssl_established ssl_extension +## ssl_server_hello x509_error x509_extension x509_verify +event x509_certificate%(c: connection, is_orig: bool, cert: X509, chain_idx: count, chain_len: count, der_cert: string%); + +## Generated for X509 extensions seen in a certificate. +## +## See `Wikipedia `__ for more information +## about the X.509 format. +## +## c: The connection. +## +## is_orig: True if event is raised for originator side of the connection. +## +## data: The raw data associated with the extension. +## +## .. bro:see:: ssl_alert ssl_client_hello ssl_established ssl_extension +## ssl_server_hello x509_certificate x509_error x509_verify +event x509_extension%(c: connection, is_orig: bool, data: string%); + +## Generated when errors occur during parsing an X509 certificate. +## +## See `Wikipedia `__ for more information +## about the X.509 format. +## +## c: The connection. +## +## is_orig: True if event is raised for originator side of the connection. +## +## err: An error code describing what went wrong. :bro:id:`SSL::x509_errors` +## maps error codes to a textual description. +## +## .. bro:see:: ssl_alert ssl_client_hello ssl_established ssl_extension +## ssl_server_hello x509_certificate x509_extension x509_err2str x509_verify +event x509_error%(c: connection, is_orig: bool, err: count%); + +## TODO. +## +## .. bro:see:: rpc_call rpc_dialogue rpc_reply dce_rpc_bind dce_rpc_request +## dce_rpc_response rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dce_rpc_message%(c: connection, is_orig: bool, ptype: dce_rpc_ptype, msg: string%); + +## TODO. +## +## .. bro:see:: rpc_call rpc_dialogue rpc_reply dce_rpc_message dce_rpc_request +## dce_rpc_response rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dce_rpc_bind%(c: connection, uuid: string%); + +## TODO. +## +## .. bro:see:: rpc_call rpc_dialogue rpc_reply dce_rpc_bind dce_rpc_message +## dce_rpc_response rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dce_rpc_request%(c: connection, opnum: count, stub: string%); + +## TODO. +## +## .. bro:see:: rpc_call rpc_dialogue rpc_reply dce_rpc_bind dce_rpc_message +## dce_rpc_request rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event dce_rpc_response%(c: connection, opnum: count, stub: string%); -# DCE/RPC endpoint mapper events. +## TODO. +## +## .. 
bro:see:: rpc_call rpc_dialogue rpc_reply dce_rpc_bind dce_rpc_message +## dce_rpc_request dce_rpc_response rpc_timeout +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event epm_map_response%(c: connection, uuid: string, p: port, h: addr%); -# "length" is the length of body (not including the frame header) +## Generated for NCP requests (Netware Core Protocol). +## +## See `Wikipedia `__ for +## more information about the NCP protocol. +## +## c: The connection. +## +## frame_type: The frame type, as specified by the protocol. +## +## length: The length of the request body, excluding the frame header. +## +## func: The requested function, as specified by the protocol. +## +## .. bro:see:: ncp_reply +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event ncp_request%(c: connection, frame_type: count, length: count, func: count%); + +## Generated for NCP replies (Netware Core Protocol). +## +## See `Wikipedia `__ for +## more information about the NCP protocol. +## +## c: The connection. +## +## frame_type: The frame type, as specified by the protocol. +## +## length: The length of the request body, excluding the frame header. +## +## req_frame: The frame type from the corresponding request. +## +## req_func: The function code from the corresponding request. +## +## completion_code: The reply's completion code, as specified by the protocol. +## +## .. bro:see:: ncp_request +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event ncp_reply%(c: connection, frame_type: count, length: count, req_frame: count, req_func: count, completion_code: count%); -event interconn_stats%(c: connection, os: interconn_endp_stats, rs: interconn_endp_stats%); -event interconn_remove_conn%(c: connection%); - -event backdoor_stats%(c: connection, os: backdoor_endp_stats, rs: backdoor_endp_stats%); -event backdoor_remove_conn%(c: connection%); -event ssh_signature_found%(c: connection, is_orig: bool%); -event telnet_signature_found%(c: connection, is_orig: bool, len: count%); -event rlogin_signature_found%(c: connection, is_orig: bool, num_null: count, len: count%); -event root_backdoor_signature_found%(c: connection%); -event ftp_signature_found%(c: connection%); -event napster_signature_found%(c: connection%); -event gnutella_signature_found%(c: connection%); -event kazaa_signature_found%(c: connection%); -event http_signature_found%(c: connection%); -event http_proxy_signature_found%(c: connection%); -event smtp_signature_found%(c: connection%); -event irc_signature_found%(c: connection%); -event gaobot_signature_found%(c: connection%); - +## Generated for client-side commands on POP3 connections. +## +## See `Wikipedia `__ for more information +## about the POP3 protocol. +## +## c: The connection. 
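The ``.. todo::`` notes in this block all point at the same enabling step: analyzers without a ported Bro 2.x script only generate their events once an entry is added to :bro:see:`dpd_config` (or a DPD payload signature is provided). A minimal sketch of such a redef, using the NCP analyzer as an example; the analyzer tag and port are illustrative assumptions::

    # Assumed tag/port; adjust to the analyzer actually being enabled.
    redef dpd_config += { [ANALYZER_NCP] = [$ports = set(524/tcp)] };

    event ncp_request(c: connection, frame_type: count, length: count, func: count)
        {
        print fmt("NCP request, function %d, to %s", func, c$id$resp_h);
        }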
+## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## command: The command sent. +## +## arg: The argument to the command. +## +## .. bro:see:: pop3_data pop3_login_failure pop3_login_success pop3_reply +## pop3_terminate pop3_unexpected +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pop3_request%(c: connection, is_orig: bool, command: string, arg: string%); + +## Generated for server-side replies to commands on POP3 connections. +## +## See `Wikipedia `__ for more information +## about the POP3 protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## cmd: The success indicator sent by the server. This corresponds to the +## first token on the line sent, and should be either ``OK`` or ``ERR``. +## +## msg: The textual description the server sent along with *cmd*. +## +## .. bro:see:: pop3_data pop3_login_failure pop3_login_success pop3_request +## pop3_terminate pop3_unexpected +## +## .. todo:: This event is receiving odd parameters, should unify. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pop3_reply%(c: connection, is_orig: bool, cmd: string, msg: string%); + +## Generated for server-side multi-line responses on POP3 connections. POP3 +## connections use multi-line responses to send bulk data, such as the actual +## mails. This event is generated once for each line that's part of such a +## response. +## +## See `Wikipedia `__ for more information +## about the POP3 protocol. +## +## c: The connection. +## +## is_orig: True if the data was sent by the originator of the TCP connection. +## +## data: The data sent. +## +## .. bro:see:: pop3_login_failure pop3_login_success pop3_reply pop3_request +## pop3_terminate pop3_unexpected +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pop3_data%(c: connection, is_orig: bool, data: string%); + +## Generated for errors encountered on POP3 sessions. If the POP3 analyzer +## finds state transitions that do not conform to the protocol specification, +## or other situations it can't handle, it raises this event. +## +## See `Wikipedia `__ for more information +## about the POP3 protocol. +## +## c: The connection. +## +## is_orig: True if the data was sent by the originator of the TCP connection. +## +## msg: A textual description of the situation. +## +## detail: The input that triggered the event. +## +## .. bro:see:: pop3_data pop3_login_failure pop3_login_success pop3_reply pop3_request +## pop3_terminate +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. 
To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pop3_unexpected%(c: connection, is_orig: bool, msg: string, detail: string%); + +## Generated when a POP3 connection goes encrypted. While POP3 is by default a +## clear-text protocol, extensions exist to switch to encryption. This event is +## generated if that happens and the analyzer then stops processing the +## connection. +## +## See `Wikipedia `__ for more information +## about the POP3 protocol. +## +## c: The connection. +## +## is_orig: Always false. +## +## msg: A descriptive message why processing was stopped. +## +## .. bro:see:: pop3_data pop3_login_failure pop3_login_success pop3_reply pop3_request +## pop3_unexpected +## +## .. note:: Currently, only the ``STARTLS`` command is recognized and +## triggers this. +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pop3_terminate%(c: connection, is_orig: bool, msg: string%); + +## Generated for successful authentications on POP3 connections. +## +## See `Wikipedia `__ for more information +## about the POP3 protocol. +## +## c: The connection. +## +## is_orig: Always false. +## +## user: The user name used for authentication. The event is only generated if +## a non-empty user name was used. +## +## password: The password used for authentication. +## +## .. bro:see:: pop3_data pop3_login_failure pop3_reply pop3_request pop3_terminate +## pop3_unexpected +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pop3_login_success%(c: connection, is_orig: bool, user: string, password: string%); + +## Generated for unsuccessful authentications on POP3 connections. +## +## See `Wikipedia `__ for more information +## about the POP3 protocol. +## +## c: The connection. +## +## is_orig: Always false. +## +## user: The user name attempted for authentication. The event is only +## generated if a non-empty user name was used. +## +## password: The password attempted for authentication. +## +## .. bro:see:: pop3_data pop3_login_success pop3_reply pop3_request pop3_terminate +## pop3_unexpected +## +## .. todo:: Bro's current default configuration does not activate the protocol +## analyzer that generates this event; the corresponding script has not yet +## been ported to Bro 2.x. To still enable this event, one needs to add a +## corresponding entry to :bro:see:`dpd_config` or a DPD payload signature. event pop3_login_failure%(c: connection, is_orig: bool, user: string, password: string%); + +## Generated for all client-side IRC commands. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: Always true. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## command: The command. +## +## arguments: The arguments for the command. +## +## .. 
bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +## +## .. note:: This event is generated only for messages that originate +## at the client-side. Commands coming in from remote trigger +## the :bro:id:`irc_message` event instead. event irc_request%(c: connection, is_orig: bool, prefix: string, command: string, arguments: string%); + +## Generated for all IRC replies. IRC replies are sent in response to a +## request and come with a reply code. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the reply. IRC uses the prefix to +## indicate the true origin of a message. +## +## code: The reply code, as specified by the protocol. +## +## params: The reply's parameters. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message event irc_reply%(c: connection, is_orig: bool, prefix: string, code: count, params: string%); + +## Generated for IRC commands forwarded from the server to the client. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: Always false. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## command: The command. +## +## message: TODO. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +## +## .. note:: +## +## This event is generated only for messages that are forwarded by the server +## to the client. Commands coming from client trigger the +## :bro:id:`irc_request` event instead. event irc_message%(c: connection, is_orig: bool, prefix: string, command: string, message: string%); + +## Generated for IRC messages of type *quit*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## nick: The nickname coming with the message. +## +## message: The text included with the message. +## +## .. 
bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message event irc_quit_message%(c: connection, is_orig: bool, nick: string, message: string%); + +## Generated for IRC messages of type *privmsg*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## source: The source of the private communication. +## +## target: The target of the private communication. +## +## message: The text of communication. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message event irc_privmsg_message%(c: connection, is_orig: bool, source: string, target: string, message: string%); + +## Generated for IRC messages of type *notice*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## source: The source of the private communication. +## +## target: The target of the private communication. +## +## message: The text of communication. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_notice_message%(c: connection, is_orig: bool, source: string, target: string, message: string%); + +## Generated for IRC messages of type *squery*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## source: The source of the private communication. +## +## target: The target of the private communication. +## +## message: The text of communication. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message event irc_squery_message%(c: connection, is_orig: bool, source: string, target: string, message: string%); + +## Generated for IRC messages of type *join*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. 
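A minimal handler sketch for the IRC message events documented here, printing private messages sent by clients (the output format is illustrative only)::

    event irc_privmsg_message(c: connection, is_orig: bool, source: string,
                              target: string, message: string)
        {
        if ( is_orig )
            print fmt("IRC privmsg %s -> %s: %s", source, target, message);
        }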
+## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## info_list: The user information coming with the command. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_join_message%(c: connection, is_orig: bool, info_list: irc_join_list%); + +## Generated for IRC messages of type *part*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## nick: The nickname coming with the message. +## +## chans: The set of channels affected. +## +## message: The text coming with the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_password_message event irc_part_message%(c: connection, is_orig: bool, nick: string, chans: string_set, message: string%); + +## Generated for IRC messages of type *nick*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## who: The user changing its nickname. +## +## newnick: The new nickname. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_nick_message%(c: connection, is_orig: bool, who: string, newnick: string%); + +## Generated when a server rejects an IRC nickname. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invite_message irc_join_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_invalid_nick%(c: connection, is_orig: bool%); + +## Generated for an IRC reply of type *luserclient*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## users: The number of users as returned in the reply. +## +## services: The number of services as returned in the reply. +## +## servers: The number of servers as returned in the reply. +## +## .. 
bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_network_info%(c: connection, is_orig: bool, users: count, services: count, servers: count%); + +## Generated for an IRC reply of type *luserme*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## users: The number of users as returned in the reply. +## +## services: The number of services as returned in the reply. +## +## servers: The number of servers as returned in the reply. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message event irc_server_info%(c: connection, is_orig: bool, users: count, services: count, servers: count%); + +## Generated for an IRC reply of type *luserchannels*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## chans: The number of channels as returned in the reply. +## +## .. bro:see:: irc_channel_topic irc_dcc_message irc_error_message irc_global_users +## irc_invalid_nick irc_invite_message irc_join_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_channel_info%(c: connection, is_orig: bool, chans: count%); + +## Generated for an IRC reply of type *whoreply*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## target_nick: The target nickname. +## +## channel: The channel. +## +## user: The user. +## +## host: The host. +## +## server: The server. +## +## nick: The nickname. +## +## params: The parameters. +## +## hops: The hop count. +## +## real_name: The real name. +## +## .. 
bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message event irc_who_line%(c: connection, is_orig: bool, target_nick: string, channel: string, user: string, host: string, server: string, nick: string, params: string, hops: count, real_name: string%); -event irc_who_message%(c: connection, is_orig: bool, mask: string, oper: bool%); -event irc_whois_message%(c: connection, is_orig: bool, server: string, users: string%); -event irc_whois_user_line%(c: connection, is_orig: bool, nick: string, - user: string, host: string, real_name: string%); -event irc_whois_operator_line%(c: connection, is_orig: bool, nick: string%); -event irc_whois_channel_line%(c: connection, is_orig: bool, nick: string, - chans: string_set%); -event irc_oper_message%(c: connection, is_orig: bool, user: string, password: string%); -event irc_oper_response%(c: connection, is_orig: bool, got_oper: bool%); -event irc_kick_message%(c: connection, is_orig: bool, prefix: string, - chans: string, users: string, comment: string%); -event irc_error_message%(c: connection, is_orig: bool, prefix: string, message: string%); -event irc_invite_message%(c: connection, is_orig: bool, prefix: string, - nickname: string, channel: string%); -event irc_mode_message%(c: connection, is_orig: bool, prefix: string, params: string%); -event irc_squit_message%(c: connection, is_orig: bool, prefix: string, - server: string, message: string%); + + +## Generated for an IRC reply of type *namereply*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## c_type: The channel type. +## +## channel: The channel. +## +## users: The set of users. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_names_info%(c: connection, is_orig: bool, c_type: string, channel: string, users: string_set%); + +## Generated for an IRC reply of type *whoisoperator*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## nick: The nickname specified in the reply. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +event irc_whois_operator_line%(c: connection, is_orig: bool, nick: string%); + +## Generated for an IRC reply of type *whoischannels*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## nick: The nickname specified in the reply. 
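Replies such as *namereply* deliver whole sets of values; a minimal sketch showing how a handler can walk the ``users`` set passed to :bro:id:`irc_names_info` (output format is illustrative only)::

    event irc_names_info(c: connection, is_orig: bool, c_type: string,
                         channel: string, users: string_set)
        {
        for ( nick in users )
            print fmt("%s is present in IRC channel %s", nick, channel);
        }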
+## +## chans: The set of channels returned. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +event irc_whois_channel_line%(c: connection, is_orig: bool, nick: string, + chans: string_set%); + +## Generated for an IRC reply of type *whoisuser*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## nick: The nickname specified in the reply. +## +## user: The user name specified in the reply. +## +## host: The host name specified in the reply. +## +## real_name: The real name specified in the reply. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +event irc_whois_user_line%(c: connection, is_orig: bool, nick: string, + user: string, host: string, real_name: string%); + +## Generated for IRC replies of type *youreoper* and *nooperhost*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## got_oper: True if the *oper* command was executed successfully +## (*youreoper*) and false otherwise (*nooperhost*). +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_part_message +## irc_password_message +event irc_oper_response%(c: connection, is_orig: bool, got_oper: bool%); + +## Generated for an IRC reply of type *globalusers*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## msg: The message coming with the reply. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_invalid_nick irc_invite_message irc_join_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message +event irc_global_users%(c: connection, is_orig: bool, prefix: string, msg: string%); + +## Generated for an IRC reply of type *topic*. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## channel: The channel name specified in the reply. +## +## topic: The topic specified in the reply. +## +## ..
bro:see:: irc_channel_info irc_dcc_message irc_error_message irc_global_users +## irc_invalid_nick irc_invite_message irc_join_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message +event irc_channel_topic%(c: connection, is_orig: bool, channel: string, topic: string%); + +## Generated for IRC messages of type *who*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## mask: The mask specified in the message. +## +## oper: True if the operator flag was set. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +event irc_who_message%(c: connection, is_orig: bool, mask: string, oper: bool%); + +## Generated for IRC messages of type *whois*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## server: TODO. +## +## users: TODO. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +event irc_whois_message%(c: connection, is_orig: bool, server: string, users: string%); + +## Generated for IRC messages of type *oper*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## user: The user specified in the message. +## +## password: The password specified in the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_response irc_part_message +## irc_password_message +event irc_oper_message%(c: connection, is_orig: bool, user: string, password: string%); + +## Generated for IRC messages of type *kick*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## chans: The channels specified in the message. +## +## users: The users specified in the message. 
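Handlers can also keep state across events; a minimal sketch that records the most recently seen topic per channel from :bro:id:`irc_channel_topic` (the table name is an illustrative assumption)::

    global channel_topics: table[string] of string;

    event irc_channel_topic(c: connection, is_orig: bool, channel: string, topic: string)
        {
        channel_topics[channel] = topic;
        }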
+## +## comment: The comment specified in the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message +event irc_kick_message%(c: connection, is_orig: bool, prefix: string, + chans: string, users: string, comment: string%); + +## Generated for IRC messages of type *error*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## message: The textual description specified in the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_global_users +## irc_invalid_nick irc_invite_message irc_join_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message +event irc_error_message%(c: connection, is_orig: bool, prefix: string, message: string%); + +## Generated for IRC messages of type *invite*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## nickname: The nickname specified in the message. +## +## channel: The channel specified in the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_join_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message +event irc_invite_message%(c: connection, is_orig: bool, prefix: string, + nickname: string, channel: string%); + +## Generated for IRC messages of type *mode*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## params: The parameters coming with the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message +event irc_mode_message%(c: connection, is_orig: bool, prefix: string, params: string%); + +## Generated for IRC messages of type *squit*. 
This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## server: The server specified in the message. +## +## message: The textual description specified in the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message +event irc_squit_message%(c: connection, is_orig: bool, prefix: string, + server: string, message: string%); + +## Generated for IRC messages of type *dcc*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## prefix: The optional prefix coming with the command. IRC uses the prefix to +## indicate the true origin of a message. +## +## target: The target specified in the message. +## +## dcc_type: The DCC type specified in the message. +## +## argument: The argument specified in the message. +## +## address: The address specified in the message. +## +## dest_port: The destination port specified in the message. +## +## size: The size specified in the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_error_message irc_global_users +## irc_invalid_nick irc_invite_message irc_join_message irc_kick_message +## irc_message irc_mode_message irc_names_info irc_network_info irc_nick_message +## irc_notice_message irc_oper_message irc_oper_response irc_part_message +## irc_password_message event irc_dcc_message%(c: connection, is_orig: bool, prefix: string, target: string, dcc_type: string, argument: string, address: addr, dest_port: count, size: count%); -event irc_global_users%(c: connection, is_orig: bool, prefix: string, msg: string%); + +## Generated for IRC messages of type *user*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## user: The user specified in the message. +## +## host: The host name specified in the message. +## +## server: The server name specified in the message. +## +## real_name: The real name specified in the message. +## +## .. 
bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message irc_password_message event irc_user_message%(c: connection, is_orig: bool, user: string, host: string, server: string, real_name: string%); -event irc_channel_topic%(c: connection, is_orig: bool, channel: string, topic: string%); + +## Generated for IRC messages of type *password*. This event is generated for +## messages coming from both the client and the server. +## +## See `Wikipedia `__ for more +## information about the IRC protocol. +## +## c: The connection. +## +## is_orig: True if the command was sent by the originator of the TCP +## connection. +## +## password: The password specified in the message. +## +## .. bro:see:: irc_channel_info irc_channel_topic irc_dcc_message irc_error_message +## irc_global_users irc_invalid_nick irc_invite_message irc_join_message +## irc_kick_message irc_message irc_mode_message irc_names_info irc_network_info +## irc_nick_message irc_notice_message irc_oper_message irc_oper_response +## irc_part_message event irc_password_message%(c: connection, is_orig: bool, password: string%); +## TODO. +## event file_transferred%(c: connection, prefix: string, descr: string, mime_type: string%); -event file_virus%(c: connection, virname: string%); +## Generated for monitored Syslog messages. +## +## See `Wikipedia `__ for more +## information about the Syslog protocol. +## +## c: The connection record for the underlying transport-layer session/flow. +## +## facility: The "facility" included in the message. +## +## severity: The "severity" included in the message. +## +## msg: The message logged. +## +## .. note:: Bro currently parses only UDP syslog traffic. Support for TCP +## syslog will be added soon. event syslog_message%(c: connection, facility: count, severity: count, msg: string%); +## Generated when a signature matches. Bro's signature engine provides +## high-performance pattern matching separately from the normal script +## processing. If a signature with an ``event`` action matches, this event is +## raised. +## +## See the :doc:`user manual ` for more information about Bro's +## signature engine. +## +## state: Context about the match, including which signatures triggered the +## event and the connection for which the match was found. +## +## msg: The message passed to the ``event`` signature action. +## +## data: The last chunk of input that triggered the match. Note that the +## specifics here are not well-defined as Bro does not buffer any input. +## If a match is split across packet boundaries, only the last chunk +## triggering the match will be passed on to the event. event signature_match%(state: signature_state, msg: string, data: string%); -# Generated if a handler finds an identification of the software -# used on a system. +## Generated when a SOCKS request is analyzed. +## +## c: The parent connection of the proxy. +## +## version: The version of SOCKS this message used. +## +## request_type: The type of the request. +## +## sa: Address that the tunneled traffic should be sent to. +## +## p: The destination port for the proxied traffic. +## +## user: Username given for the SOCKS connection. This is not yet implemented +## for SOCKSv5. 
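The :bro:id:`signature_match` event documented above fires only for signatures whose body carries an ``event`` action. A minimal handler sketch, assuming the ``sig_id`` and ``conn`` fields of ``signature_state`` (output format is illustrative only)::

    event signature_match(state: signature_state, msg: string, data: string)
        {
        print fmt("signature %s matched for %s: %s",
                  state$sig_id, state$conn$id$orig_h, msg);
        }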
+event socks_request%(c: connection, version: count, request_type: count, sa: SOCKS::Address, p: port, user: string%); + +## Generated when a SOCKS reply is analyzed. +## +## c: The parent connection of the proxy. +## +## version: The version of SOCKS this message used. +## +## reply: The status reply from the server. +## +## sa: The address that the server sent the traffic to. +## +## p: The destination port for the proxied traffic. +event socks_reply%(c: connection, version: count, reply: count, sa: SOCKS::Address, p: port%); + +## Generated when a protocol analyzer finds an identification of a software +## used on a system. This is a protocol-independent event that is fed by +## different analyzers. For example, the HTTP analyzer reports user-agent and +## server software by raising this event, assuming it can parse it (if not, +## :bro:id:`software_parse_error` will be generated instead). +## +## c: The connection. +## +## host: The host running the reported software. +## +## s: A description of the software found. +## +## descr: The raw (unparsed) software identification string as extracted from +## the protocol. +## +## .. bro:see:: software_parse_error software_unparsed_version_found OS_version_found event software_version_found%(c: connection, host: addr, s: software, descr: string%); -# Generated if a handler finds a version but cannot parse it. +## Generated when a protocol analyzer finds an identification of a software +## used on a system but cannot parse it. This is a protocol-independent event +## that is fed by different analyzers. For example, the HTTP analyzer reports +## user-agent and server software by raising this event if it cannot parse them +## directly (if it can :bro:id:`software_version_found` will be generated +## instead). +## +## c: The connection. +## +## host: The host running the reported software. +## +## descr: The raw (unparsed) software identification string as extracted from +## the protocol. +## +## .. bro:see:: software_version_found software_unparsed_version_found +## OS_version_found event software_parse_error%(c: connection, host: addr, descr: string%); -# Generated once for each raw (unparsed) software identification. +## Generated when a protocol analyzer finds an identification of a software +## used on a system. This is a protocol-independent event that is fed by +## different analyzers. For example, the HTTP analyzer reports user-agent and +## server software by raising this event. Different from +## :bro:id:`software_version_found` and :bro:id:`software_parse_error`, this +## event is always raised, independent of whether Bro can parse the version +## string. +## +## c: The connection. +## +## host: The host running the reported software. +## +## str: The software identification string as extracted from the protocol. +## +## .. bro:see:: software_parse_error software_version_found OS_version_found event software_unparsed_version_found%(c: connection, host: addr, str: string%); -# Generated when an operating system has been fingerprinted. +## Generated when an operating system has been fingerprinted. Bro uses `p0f +## `__ to fingerprint endpoints passively, +## and it raises this event for each system identified. The p0f fingerprints are +## defined by :bro:id:`passive_fingerprint_file`. +## +## TODO. +## +## .. 
bro:see:: passive_fingerprint_file software_parse_error +## software_version_found software_unparsed_version_found +## generate_OS_version_event event OS_version_found%(c: connection, host: addr, OS: OS_version%); -# Generated when an IP address gets mapped for the first time. -event anonymization_mapping%(orig: addr, mapped: addr%); - -# Generated when a connection to a remote Bro has been established. +## Generated when a connection to a remote Bro has been established. This event +## is intended primarily for use by Bro's communication framework, but it can +## also trigger additional code if helpful. +## +## p: A record describing the peer. +## +## .. bro:see:: remote_capture_filter remote_connection_closed remote_connection_error +## remote_connection_handshake_done remote_event_registered remote_log remote_pong +## remote_state_access_performed remote_state_inconsistency print_hook event remote_connection_established%(p: event_peer%); -# Generated when a connection to a remote Bro has been closed. +## Generated when a connection to a remote Bro has been closed. This event is +## intended primarily for use by Bro's communication framework, but it can +## also trigger additional code if helpful. +## +## p: A record describing the peer. +## +## .. bro:see:: remote_capture_filter remote_connection_error +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_log remote_pong remote_state_access_performed +## remote_state_inconsistency print_hook event remote_connection_closed%(p: event_peer%); -# Generated when a remote connection's handshake has been completed. +## Generated when a remote connection's initial handshake has been completed. +## This event is intended primarily for use by Bro's communication framework, +## but it can also trigger additional code if helpful. +## +## p: A record describing the peer. +## +## .. bro:see:: remote_capture_filter remote_connection_closed remote_connection_error +## remote_connection_established remote_event_registered remote_log remote_pong +## remote_state_access_performed remote_state_inconsistency print_hook event remote_connection_handshake_done%(p: event_peer%); -# Generated for each event registered by a remote peer. +## Generated for each event registered by a remote peer. This event is intended +## primarily for use by Bro's communication framework, but it can also trigger +## additional code if helpful. +## +## p: A record describing the peer. +## +## name: TODO. +## +## .. bro:see:: remote_capture_filter remote_connection_closed +## remote_connection_error remote_connection_established +## remote_connection_handshake_done remote_log remote_pong +## remote_state_access_performed remote_state_inconsistency print_hook event remote_event_registered%(p: event_peer, name: string%); -# Generated when a connection to a remote Bro causes some error. +## Generated when a connection to a remote Bro encountered an error. This event +## is intended primarily for use by Bro's communication framework, but it can +## also trigger additional code if helpful. +## +## p: A record describing the peer. +## +## reason: A textual description of the error. +## +## .. 
bro:see:: remote_capture_filter remote_connection_closed +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_log remote_pong remote_state_access_performed +## remote_state_inconsistency print_hook event remote_connection_error%(p: event_peer, reason: string%); -# Generated when a remote peer sends us some capture filter. + + +## Generated when a remote peer sent us a capture filter. While this event is +## intended primarily for use by Bro's communication framework, it can also +## trigger additional code if helpful. +## +## p: A record describing the peer. +## +## filter: The filter string sent by the peer. +## +## .. bro:see:: remote_connection_closed remote_connection_error +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_log remote_pong remote_state_access_performed +## remote_state_inconsistency print_hook event remote_capture_filter%(p: event_peer, filter: string%); -# Generated after a call to send_state() when all data has been successfully -# sent to the remote side. +## Generated after a call to :bro:id:`send_state` when all data has been +## successfully sent to the remote side. While this event is +## intended primarily for use by Bro's communication framework, it can also +## trigger additional code if helpful. +## +## p: A record describing the remote peer. +## +## .. bro:see:: remote_capture_filter remote_connection_closed +## remote_connection_error remote_connection_established +## remote_connection_handshake_done remote_event_registered remote_log remote_pong +## remote_state_access_performed remote_state_inconsistency print_hook event finished_send_state%(p: event_peer%); -# Generated if state synchronization detects an inconsistency. +## Generated if state synchronization detects an inconsistency. While this +## event is intended primarily for use by Bro's communication framework, it can +## also trigger additional code if helpful. This event is only raised if +## :bro:id:`remote_check_sync_consistency` is false. +## +## operation: The textual description of the state operation performed. +## +## id: The name of the Bro script identifier that was operated on. +## +## expected_old: A textual representation of the value of *id* that was +## expected to be found before the operation was carried out. +## +## real_old: A textual representation of the value of *id* that was actually +## found before the operation was carried out. The difference between +## *real_old* and *expected_old* is the inconsistency being reported. +## +## .. bro:see:: remote_capture_filter remote_connection_closed +## remote_connection_error remote_connection_established +## remote_connection_handshake_done remote_event_registered remote_log remote_pong +## remote_state_access_performed print_hook remote_check_sync_consistency event remote_state_inconsistency%(operation: string, id: string, expected_old: string, real_old: string%); -# Generated for communication log message. +## Generated for communication log messages. While this event is +## intended primarily for use by Bro's communication framework, it can also +## trigger additional code if helpful. +## +## level: The log level, which is either :bro:id:`REMOTE_LOG_INFO` or +## :bro:id:`REMOTE_LOG_ERROR`. +## +## src: The component of the communication system that logged the message. 
+## Currently, this will be one of :bro:id:`REMOTE_SRC_CHILD` (Bro's +## child process), :bro:id:`REMOTE_SRC_PARENT` (Bro's main process), or +## :bro:id:`REMOTE_SRC_SCRIPT` (the script level). +## +## msg: The message logged. +## +## .. bro:see:: remote_capture_filter remote_connection_closed remote_connection_error +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_pong remote_state_access_performed +## remote_state_inconsistency print_hook remote_log_peer event remote_log%(level: count, src: count, msg: string%); -# Generated when a remote peer has answered to our ping. +## Generated for communication log messages. While this event is +## intended primarily for use by Bro's communication framework, it can also +## trigger additional code if helpful. This event is equivalent to +## :bro:see:`remote_log` except the message is with respect to a certain peer. +## +## p: A record describing the remote peer. +## +## level: The log level, which is either :bro:id:`REMOTE_LOG_INFO` or +## :bro:id:`REMOTE_LOG_ERROR`. +## +## src: The component of the communication system that logged the message. +## Currently, this will be one of :bro:id:`REMOTE_SRC_CHILD` (Bro's +## child process), :bro:id:`REMOTE_SRC_PARENT` (Bro's main process), or +## :bro:id:`REMOTE_SRC_SCRIPT` (the script level). +## +## msg: The message logged. +## +## .. bro:see:: remote_capture_filter remote_connection_closed remote_connection_error +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_pong remote_state_access_performed +## remote_state_inconsistency print_hook remote_log +event remote_log_peer%(p: event_peer, level: count, src: count, msg: string%); + +## Generated when a remote peer has answered to our ping. This event is part of +## Bro's infrastructure for measuring communication latency. One can send a ping +## by calling :bro:id:`send_ping` and when a corresponding reply is received, +## this event will be raised. +## +## p: The peer sending us the pong. +## +## seq: The sequence number passed to the original :bro:id:`send_ping` call. +## The number is sent back by the peer in its response. +## +## d1: The time interval between sending the ping and receiving the pong. This +## is the latency of the complete path. +## +## d2: The time interval between sending out the ping to the network and its +## reception at the peer. This is the network latency. +## +## d3: The time interval between when the peer's child process received the +## ping and when its parent process sent the pong. This is the +## processing latency at the peer. +## +## .. bro:see:: remote_capture_filter remote_connection_closed remote_connection_error +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_log remote_state_access_performed +## remote_state_inconsistency print_hook event remote_pong%(p: event_peer, seq: count, d1: interval, d2: interval, d3: interval%); -# Generated each time a remote state access has been replayed locally -# (primarily for debugging). +## Generated each time a remote state access has been replayed locally. This +## event is primarily intended for debugging. +## +## id: The name of the Bro script variable that's being operated on. +## +## v: The new value of the variable. +## +## .. 
bro:see:: remote_capture_filter remote_connection_closed remote_connection_error +## remote_connection_established remote_connection_handshake_done +## remote_event_registered remote_log remote_pong remote_state_inconsistency +## print_hook event remote_state_access_performed%(id: string, v: any%); -# Generated each time profiling_file is updated. "expensive" means that -# this event corresponds to heavier-weight profiling as indicated by the -# expensive_profiling_multiple variable. +## Generated each time Bro's internal profiling log is updated. The file is +## defined by :bro:id:`profiling_file`, and its update frequency by +## :bro:id:`profiling_interval` and :bro:id:`expensive_profiling_multiple`. +## +## f: The profiling file. +## +## expensive: True if this event corresponds to heavier-weight profiling as +## indicated by the :bro:id:`expensive_profiling_multiple` variable. +## +## .. bro:see:: profiling_interval expensive_profiling_multiple event profiling_update%(f: file, expensive: bool%); +## Generated each time Bro's script interpreter opens a file. This event is +## triggered only for files opened via :bro:id:`open`, and in particular not for +## normal log files as created by log writers. +## +## f: The opened file. event file_opened%(f: file%); -# Each print statement generates an event. -event print_hook%(f:file, s: string%); - -# Generated for &rotate_interval. -event rotate_interval%(f: file%); - -# Generated for &rotate_size. -event rotate_size%(f: file%); - +## Generated for a received NetFlow v5 header. Bro's NetFlow processor raises +## this event whenever it either receives a NetFlow header on the port it's +## listening on, or reads one from a trace file. +## +## h: The parsed NetFlow header. +## +## .. bro:see:: netflow_v5_record event netflow_v5_header%(h: nf_v5_header%); + +## Generated for a received NetFlow v5 record. Bro's NetFlow processor raises +## this event whenever it either receives a NetFlow record on the port it's +## listening on, or reads one from a trace file. +## +## r: The parsed NetFlow record. +## +## .. bro:see:: netflow_v5_record event netflow_v5_record%(r: nf_v5_record%); -# Different types of reporter messages. These won't be called -# recursively. +## Raised for informational messages reported via Bro's reporter framework. Such +## messages may be generated internally by the event engine and also by other +## scripts calling :bro:id:`Reporter::info`. +## +## t: The time the message was passed to the reporter. +## +## msg: The message itself. +## +## location: A (potentially empty) string describing a location associated with +## the message. +## +## .. bro:see:: reporter_warning reporter_error Reporter::info Reporter::warning +## Reporter::error +## +## .. note:: Bro will not call reporter events recursively. If the handler of +## any reporter event triggers a new reporter message itself, the output +## will go to ``stderr`` instead. event reporter_info%(t: time, msg: string, location: string%) &error_handler; + +## Raised for warnings reported via Bro's reporter framework. Such messages may +## be generated internally by the event engine and also by other scripts calling +## :bro:id:`Reporter::warning`. +## +## t: The time the warning was passed to the reporter. +## +## msg: The warning message. +## +## location: A (potentially empty) string describing a location associated with +## the warning. +## +## .. bro:see:: reporter_info reporter_error Reporter::info Reporter::warning +## Reporter::error +## +## .. 
note:: Bro will not call reporter events recursively. If the handler of +## any reporter event triggers a new reporter message itself, the output +## will go to ``stderr`` instead. event reporter_warning%(t: time, msg: string, location: string%) &error_handler; + +## Raised for errors reported via Bro's reporter framework. Such messages may +## be generated internally by the event engine and also by other scripts calling +## :bro:id:`Reporter::error`. +## +## t: The time the error was passed to the reporter. +## +## msg: The error message. +## +## location: A (potentially empty) string describing a location associated with +## the error. +## +## .. bro:see:: reporter_info reporter_warning Reporter::info Reporter::warning +## Reporter::error +## +## .. note:: Bro will not call reporter events recursively. If the handler of +## any reporter event triggers a new reporter message itself, the output +## will go to ``stderr`` instead. event reporter_error%(t: time, msg: string, location: string%) &error_handler; -# Raised for each policy script loaded. +## Raised for each policy script loaded by the script interpreter. +## +## path: The full path to the script loaded. +## +## level: The "nesting level": zero for a top-level Bro script and incremented +## recursively for each ``@load``. event bro_script_loaded%(path: string, level: count%); + +## Deprecated. Will be removed. +event stp_create_endp%(c: connection, e: int, is_orig: bool%); + +# ##### Internal events. Not further documented. + +## Event internal to the stepping stone detector. +event stp_resume_endp%(e: int%); + +## Event internal to the stepping stone detector. +event stp_correlate_pair%(e1: int, e2: int%); + +## Event internal to the stepping stone detector. +event stp_remove_pair%(e1: int, e2: int%); + +## Event internal to the stepping stone detector. +event stp_remove_endp%(e: int%); + +# ##### Deprecated events. Proposed for removal. + +## Deprecated. Will be removed. +event interconn_stats%(c: connection, os: interconn_endp_stats, rs: interconn_endp_stats%); + +## Deprecated. Will be removed. +event interconn_remove_conn%(c: connection%); + +## Deprecated. Will be removed. +event backdoor_stats%(c: connection, os: backdoor_endp_stats, rs: backdoor_endp_stats%); + +## Deprecated. Will be removed. +event backdoor_remove_conn%(c: connection%); + +## Deprecated. Will be removed. +event ssh_signature_found%(c: connection, is_orig: bool%); + +## Deprecated. Will be removed. +event telnet_signature_found%(c: connection, is_orig: bool, len: count%); + +## Deprecated. Will be removed. +event rlogin_signature_found%(c: connection, is_orig: bool, num_null: count, len: count%); + +## Deprecated. Will be removed. +event root_backdoor_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event ftp_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event napster_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event gnutella_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event kazaa_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event http_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event http_proxy_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event smtp_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event irc_signature_found%(c: connection%); + +## Deprecated. Will be removed. +event gaobot_signature_found%(c: connection%); + +## Deprecated. Will be removed. 
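The remote_pong documentation above describes Bro's built-in latency measurement. A hedged sketch of using it, assuming send_ping() takes the peer record and the sequence number that remote_pong echoes back:

    event remote_connection_handshake_done(p: event_peer)
        {
        # Assumption: send_ping(p, seq) queues a ping to this peer.
        send_ping(p, 0);
        }

    event remote_pong(p: event_peer, seq: count, d1: interval, d2: interval, d3: interval)
        {
        print fmt("pong #%d from %s: total=%s, network=%s, peer processing=%s",
                  seq, p$host, d1, d2, d3);
        }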
+## +## .. todo:: Unclear what this event is for; it's never raised. We should just +## remove it. +event dns_full_request%(%) &group="dns"; + +## Deprecated. Will be removed. +event anonymization_mapping%(orig: addr, mapped: addr%); + +## Deprecated. Will be removed. +event rotate_interval%(f: file%); + +## Deprecated. Will be removed. +event rotate_size%(f: file%); + +## Deprecated. Will be removed. +event print_hook%(f:file, s: string%); diff --git a/src/input.bif b/src/input.bif new file mode 100644 index 0000000000..199b665fa6 --- /dev/null +++ b/src/input.bif @@ -0,0 +1,59 @@ +# functions and types for the input framework + +module Input; + +%%{ +#include "input/Manager.h" +#include "NetVar.h" +%%} + +type TableDescription: record; +type EventDescription: record; + +function Input::__create_table_stream%(description: Input::TableDescription%) : bool + %{ + bool res = input_mgr->CreateTableStream(description->AsRecordVal()); + return new Val(res, TYPE_BOOL); + %} + +function Input::__create_event_stream%(description: Input::EventDescription%) : bool + %{ + bool res = input_mgr->CreateEventStream(description->AsRecordVal()); + return new Val(res, TYPE_BOOL); + %} + +function Input::__remove_stream%(id: string%) : bool + %{ + bool res = input_mgr->RemoveStream(id->AsString()->CheckString()); + return new Val(res, TYPE_BOOL); + %} + +function Input::__force_update%(id: string%) : bool + %{ + bool res = input_mgr->ForceUpdate(id->AsString()->CheckString()); + return new Val(res, TYPE_BOOL); + %} + +# Options for the input framework + +const accept_unsupported_types: bool; + +# Options for Ascii Reader + +module InputAscii; + +const separator: string; +const set_separator: string; +const empty_field: string; +const unset_field: string; + +module InputRaw; +const record_separator: string; + +module InputBenchmark; +const factor: double; +const spread: count; +const autospread: double; +const addfactor: count; +const stopspreadat: count; +const timedspread: double; diff --git a/src/input/Manager.cc b/src/input/Manager.cc new file mode 100644 index 0000000000..43ac63200f --- /dev/null +++ b/src/input/Manager.cc @@ -0,0 +1,2155 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include + +#include "Manager.h" +#include "ReaderFrontend.h" +#include "ReaderBackend.h" +#include "readers/Ascii.h" +#include "readers/Raw.h" +#include "readers/Benchmark.h" + +#include "Event.h" +#include "EventHandler.h" +#include "NetVar.h" +#include "Net.h" + + +#include "CompHash.h" + +#include "../threading/SerialTypes.h" + +using namespace input; +using threading::Value; +using threading::Field; + +struct ReaderDefinition { + bro_int_t type; // The reader type. + const char *name; // Descriptive name for error messages. + bool (*init)(); // Optional one-time initializing function. + ReaderBackend* (*factory)(ReaderFrontend* frontend); // Factory function for creating instances. +}; + +ReaderDefinition input_readers[] = { + { BifEnum::Input::READER_ASCII, "Ascii", 0, reader::Ascii::Instantiate }, + { BifEnum::Input::READER_RAW, "Raw", 0, reader::Raw::Instantiate }, + { BifEnum::Input::READER_BENCHMARK, "Benchmark", 0, reader::Benchmark::Instantiate }, + + // End marker + { BifEnum::Input::READER_DEFAULT, "None", 0, (ReaderBackend* (*)(ReaderFrontend* frontend))0 } +}; + +/** + * InputHashes are used as Dictionaries to store the value and index hashes + * for all lines currently stored in a table. 
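The input.bif functions above are thin wrappers around the Manager implemented in this file; scripts normally reach them through the Input framework's wrapper functions. A minimal sketch of creating a table stream from a script, assuming the standard Input::add_table wrapper and the Ascii reader's tab-separated, "#fields"-headed format:

    @load base/frameworks/input

    type Idx: record {
        ip: addr;
    };

    type Val: record {
        owner: string;
    };

    global hosts: table[addr] of Val = table();

    event bro_init()
        {
        # hosts.data (tab-separated):
        #   #fields ip      owner
        #   10.0.0.1        ops
        Input::add_table([$source="hosts.data", $name="hosts",
                          $idx=Idx, $val=Val, $destination=hosts]);
        }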
Index hash is stored as + * HashKey*, because it is thrown into other Bro functions that need the + * complex structure of it. For everything we do (with values), we just take + * the hash_t value and compare it directly with "==" + */ +struct InputHash { + hash_t valhash; + HashKey* idxkey; + ~InputHash(); +}; + +InputHash::~InputHash() + { + delete idxkey; + } + +static void input_hash_delete_func(void* val) + { + InputHash* h = (InputHash*) val; + delete h; + } + +declare(PDict, InputHash); + +/** + * Base stuff that every stream can do. + */ +class Manager::Stream { +public: + string name; + ReaderBackend::ReaderInfo* info; + bool removed; + + StreamType stream_type; // to distinguish between event and table streams + + EnumVal* type; + ReaderFrontend* reader; + TableVal* config; + + RecordVal* description; + + Stream(); + virtual ~Stream(); +}; + +Manager::Stream::Stream() + { + type = 0; + reader = 0; + description = 0; + removed = false; + } + +Manager::Stream::~Stream() + { + if ( type ) + Unref(type); + + if ( description ) + Unref(description); + + if ( config ) + Unref(config); + + if ( reader ) + delete(reader); + } + +class Manager::TableStream: public Manager::Stream { +public: + + unsigned int num_idx_fields; + unsigned int num_val_fields; + bool want_record; + EventHandlerPtr table_event; + + TableVal* tab; + RecordType* rtype; + RecordType* itype; + + PDict(InputHash)* currDict; + PDict(InputHash)* lastDict; + + Func* pred; + + EventHandlerPtr event; + + TableStream(); + ~TableStream(); +}; + +class Manager::EventStream: public Manager::Stream { +public: + EventHandlerPtr event; + + RecordType* fields; + unsigned int num_fields; + + bool want_record; + EventStream(); + ~EventStream(); +}; + +Manager::TableStream::TableStream() : Manager::Stream::Stream() + { + stream_type = TABLE_STREAM; + + tab = 0; + itype = 0; + rtype = 0; + + currDict = 0; + lastDict = 0; + + pred = 0; + } + +Manager::EventStream::EventStream() : Manager::Stream::Stream() + { + fields = 0; + stream_type = EVENT_STREAM; + } + +Manager::EventStream::~EventStream() + { + if ( fields ) + Unref(fields); + } + +Manager::TableStream::~TableStream() + { + if ( tab ) + Unref(tab); + + if ( itype ) + Unref(itype); + + if ( rtype ) // can be 0 for sets + Unref(rtype); + + if ( currDict != 0 ) + { + currDict->Clear(); + delete currDict; + } + + if ( lastDict != 0 ) + { + lastDict->Clear();; + delete lastDict; + } + } + +Manager::Manager() + { + end_of_data = internal_handler("Input::end_of_data"); + } + +Manager::~Manager() + { + for ( map::iterator s = readers.begin(); s != readers.end(); ++s ) + { + delete s->second; + delete s->first; + } + + } + +ReaderBackend* Manager::CreateBackend(ReaderFrontend* frontend, bro_int_t type) + { + ReaderDefinition* ir = input_readers; + + while ( true ) + { + if ( ir->type == BifEnum::Input::READER_DEFAULT ) + { + reporter->Error("The reader that was requested was not found and could not be initialized."); + return 0; + } + + if ( ir->type != type ) + { + // no, didn't find the right one... + ++ir; + continue; + } + + + // call init function of writer if presnt + if ( ir->init ) + { + if ( (*ir->init)() ) + { + //clear it to be not called again + ir->init = 0; + } + + else { + // ohok. init failed, kill factory for all eternity + ir->factory = 0; + DBG_LOG(DBG_LOGGING, "Failed to init input class %s", ir->name); + return 0; + } + + } + + if ( ! ir->factory ) + // no factory? + return 0; + + // all done. break. 
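The reader registry at the top of this file maps the reader enum onto the Ascii, Raw, and Benchmark backends, and CreateStream() further below extracts the mode and the per-stream config table from the description record. At the script level these surface as optional fields of the same description; a sketch reusing Idx, Val and hosts from the previous example (the config key is a placeholder, not a real reader option):

    Input::add_table([$source="hosts.data", $name="hosts",
                      $idx=Idx, $val=Val, $destination=hosts,
                      $reader=Input::READER_ASCII,   # or READER_RAW, READER_BENCHMARK
                      $mode=Input::REREAD,           # MANUAL, REREAD or STREAM
                      # Arbitrary string key/value pairs handed to the reader backend:
                      $config=table(["placeholder_option"] = "42")]);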
+ break; + } + + assert(ir->factory); + + ReaderBackend* backend = (*ir->factory)(frontend); + assert(backend); + + return backend; + } + +// Create a new input reader object to be used at whomevers leisure lateron. +bool Manager::CreateStream(Stream* info, RecordVal* description) + { + ReaderDefinition* ir = input_readers; + + RecordType* rtype = description->Type()->AsRecordType(); + if ( ! ( same_type(rtype, BifType::Record::Input::TableDescription, 0) + || same_type(rtype, BifType::Record::Input::EventDescription, 0) ) ) + { + reporter->Error("Streamdescription argument not of right type for new input stream"); + return false; + } + + Val* name_val = description->LookupWithDefault(rtype->FieldOffset("name")); + string name = name_val->AsString()->CheckString(); + Unref(name_val); + + Stream *i = FindStream(name); + if ( i != 0 ) + { + reporter->Error("Trying create already existing input stream %s", + name.c_str()); + return false; + } + + EnumVal* reader = description->LookupWithDefault(rtype->FieldOffset("reader"))->AsEnumVal(); + + // get the source ... + Val* sourceval = description->LookupWithDefault(rtype->FieldOffset("source")); + assert ( sourceval != 0 ); + const BroString* bsource = sourceval->AsString(); + string source((const char*) bsource->Bytes(), bsource->Len()); + Unref(sourceval); + + ReaderBackend::ReaderInfo* rinfo = new ReaderBackend::ReaderInfo(); + rinfo->source = copy_string(source.c_str()); + + EnumVal* mode = description->LookupWithDefault(rtype->FieldOffset("mode"))->AsEnumVal(); + switch ( mode->InternalInt() ) + { + case 0: + rinfo->mode = MODE_MANUAL; + break; + + case 1: + rinfo->mode = MODE_REREAD; + break; + + case 2: + rinfo->mode = MODE_STREAM; + break; + + default: + reporter->InternalError("unknown reader mode"); + } + + Unref(mode); + + Val* config = description->LookupWithDefault(rtype->FieldOffset("config")); + info->config = config->AsTableVal(); // ref'd by LookupWithDefault + + { + // create config mapping in ReaderInfo. Has to be done before the construction of reader_obj. + HashKey* k; + IterCookie* c = info->config->AsTable()->InitForIteration(); + + TableEntryVal* v; + while ( (v = info->config->AsTable()->NextEntry(k, c)) ) + { + ListVal* index = info->config->RecoverIndex(k); + string key = index->Index(0)->AsString()->CheckString(); + string value = v->Value()->AsString()->CheckString(); + rinfo->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str()))); + Unref(index); + delete k; + } + + } + + + ReaderFrontend* reader_obj = new ReaderFrontend(*rinfo, reader); + assert(reader_obj); + + info->reader = reader_obj; + info->type = reader->AsEnumVal(); // ref'd by lookupwithdefault + info->name = name; + info->info = rinfo; + + Ref(description); + info->description = description; + + + DBG_LOG(DBG_INPUT, "Successfully created new input stream %s", + name.c_str()); + + return true; + } + +bool Manager::CreateEventStream(RecordVal* fval) + { + RecordType* rtype = fval->Type()->AsRecordType(); + if ( ! 
same_type(rtype, BifType::Record::Input::EventDescription, 0) ) + { + reporter->Error("EventDescription argument not of right type"); + return false; + } + + EventStream* stream = new EventStream(); + { + bool res = CreateStream(stream, fval); + if ( res == false ) + { + delete stream; + return false; + } + } + + + RecordType *fields = fval->LookupWithDefault(rtype->FieldOffset("fields"))->AsType()->AsTypeType()->Type()->AsRecordType(); + + Val *want_record = fval->LookupWithDefault(rtype->FieldOffset("want_record")); + + Val* event_val = fval->LookupWithDefault(rtype->FieldOffset("ev")); + Func* event = event_val->AsFunc(); + Unref(event_val); + + FuncType* etype = event->FType()->AsFuncType(); + + bool allow_file_func = false; + + if ( ! etype->IsEvent() ) + { + reporter->Error("stream event is a function, not an event"); + return false; + } + + const type_list* args = etype->ArgTypes()->Types(); + + if ( args->length() < 2 ) + { + reporter->Error("event takes not enough arguments"); + return false; + } + + if ( ! same_type((*args)[1], BifType::Enum::Input::Event, 0) ) + { + reporter->Error("events second attribute must be of type Input::Event"); + return false; + } + + if ( ! same_type((*args)[0], BifType::Record::Input::EventDescription, 0) ) + { + reporter->Error("events first attribute must be of type Input::EventDescription"); + return false; + } + + if ( want_record->InternalInt() == 0 ) + { + if ( args->length() != fields->NumFields() + 2 ) + { + reporter->Error("event has wrong number of arguments"); + return false; + } + + for ( int i = 0; i < fields->NumFields(); i++ ) + { + if ( !same_type((*args)[i+2], fields->FieldType(i) ) ) + { + reporter->Error("Incompatible type for event"); + return false; + } + } + + } + + else if ( want_record->InternalInt() == 1 ) + { + if ( args->length() != 3 ) + { + reporter->Error("event has wrong number of arguments"); + return false; + } + + if ( ! same_type((*args)[2], fields ) ) + { + ODesc desc1; + ODesc desc2; + (*args)[2]->Describe(&desc1); + fields->Describe(&desc2); + reporter->Error("Incompatible type '%s':%s for event, which needs type '%s':%s\n", + type_name((*args)[2]->Tag()), desc1.Description(), + type_name(fields->Tag()), desc2.Description()); + return false; + } + + allow_file_func = BifConst::Input::accept_unsupported_types; + + } + + else + assert(false); + + + vector fieldsV; // vector, because UnrollRecordType needs it + + bool status = (! UnrollRecordType(&fieldsV, fields, "", allow_file_func)); + + if ( status ) + { + reporter->Error("Problem unrolling"); + return false; + } + + Field** logf = new Field*[fieldsV.size()]; + for ( unsigned int i = 0; i < fieldsV.size(); i++ ) + logf[i] = fieldsV[i]; + + Unref(fields); // ref'd by lookupwithdefault + stream->num_fields = fieldsV.size(); + stream->fields = fields->Ref()->AsRecordType(); + stream->event = event_registry->Lookup(event->GetID()->Name()); + stream->want_record = ( want_record->InternalInt() == 1 ); + Unref(want_record); // ref'd by lookupwithdefault + + assert(stream->reader); + + stream->reader->Init(stream->num_fields, logf ); + + readers[stream->reader] = stream; + + DBG_LOG(DBG_INPUT, "Successfully created event stream %s", + stream->name.c_str()); + + return true; +} + +bool Manager::CreateTableStream(RecordVal* fval) + { + RecordType* rtype = fval->Type()->AsRecordType(); + if ( ! 
same_type(rtype, BifType::Record::Input::TableDescription, 0) ) + { + reporter->Error("TableDescription argument not of right type"); + return false; + } + + TableStream* stream = new TableStream(); + { + bool res = CreateStream(stream, fval); + if ( res == false ) + { + delete stream; + return false; + } + } + + Val* pred = fval->LookupWithDefault(rtype->FieldOffset("pred")); + + RecordType *idx = fval->LookupWithDefault(rtype->FieldOffset("idx"))->AsType()->AsTypeType()->Type()->AsRecordType(); + RecordType *val = 0; + + if ( fval->LookupWithDefault(rtype->FieldOffset("val")) != 0 ) + { + val = fval->LookupWithDefault(rtype->FieldOffset("val"))->AsType()->AsTypeType()->Type()->AsRecordType(); + Unref(val); // The lookupwithdefault in the if-clause ref'ed val. + } + + TableVal *dst = fval->LookupWithDefault(rtype->FieldOffset("destination"))->AsTableVal(); + + // check if index fields match table description + int num = idx->NumFields(); + const type_list* tl = dst->Type()->AsTableType()->IndexTypes(); + + loop_over_list(*tl, j) + { + if ( j >= num ) + { + reporter->Error("Table type has more indexes than index definition"); + return false; + } + + if ( ! same_type(idx->FieldType(j), (*tl)[j]) ) + { + reporter->Error("Table type does not match index type"); + return false; + } + } + + if ( num != j ) + { + reporter->Error("Table has less elements than index definition"); + return false; + } + + Val *want_record = fval->LookupWithDefault(rtype->FieldOffset("want_record")); + + Val* event_val = fval->LookupWithDefault(rtype->FieldOffset("ev")); + Func* event = event_val ? event_val->AsFunc() : 0; + Unref(event_val); + + if ( event ) + { + FuncType* etype = event->FType()->AsFuncType(); + + if ( ! etype->IsEvent() ) + { + reporter->Error("stream event is a function, not an event"); + return false; + } + + const type_list* args = etype->ArgTypes()->Types(); + + if ( args->length() != 4 ) + { + reporter->Error("Table event must take 4 arguments"); + return false; + } + + if ( ! same_type((*args)[0], BifType::Record::Input::TableDescription, 0) ) + { + reporter->Error("table events first attribute must be of type Input::TableDescription"); + return false; + } + + if ( ! same_type((*args)[1], BifType::Enum::Input::Event, 0) ) + { + reporter->Error("table events second attribute must be of type Input::Event"); + return false; + } + + if ( ! same_type((*args)[2], idx) ) + { + reporter->Error("table events index attributes do not match"); + return false; + } + + if ( want_record->InternalInt() == 1 && ! same_type((*args)[3], val) ) + { + reporter->Error("table events value attributes do not match"); + return false; + } + else if ( want_record->InternalInt() == 0 + && !same_type((*args)[3], val->FieldType(0) ) ) + { + reporter->Error("table events value attribute does not match"); + return false; + } + + assert(want_record->InternalInt() == 1 || want_record->InternalInt() == 0); + + } + + vector fieldsV; // vector, because we don't know the length beforehands + + bool status = (! UnrollRecordType(&fieldsV, idx, "", false)); + + int idxfields = fieldsV.size(); + + if ( val ) // if we are not a set + status = status || ! UnrollRecordType(&fieldsV, val, "", BifConst::Input::accept_unsupported_types); + + int valfields = fieldsV.size() - idxfields; + + if ( ! 
val ) + assert(valfields == 0); + + if ( status ) + { + reporter->Error("Problem unrolling"); + return false; + } + + Field** fields = new Field*[fieldsV.size()]; + for ( unsigned int i = 0; i < fieldsV.size(); i++ ) + fields[i] = fieldsV[i]; + + stream->pred = pred ? pred->AsFunc() : 0; + stream->num_idx_fields = idxfields; + stream->num_val_fields = valfields; + stream->tab = dst->AsTableVal(); + stream->rtype = val ? val->AsRecordType() : 0; + stream->itype = idx->AsRecordType(); + stream->event = event ? event_registry->Lookup(event->GetID()->Name()) : 0; + stream->currDict = new PDict(InputHash); + stream->currDict->SetDeleteFunc(input_hash_delete_func); + stream->lastDict = new PDict(InputHash); + stream->lastDict->SetDeleteFunc(input_hash_delete_func); + stream->want_record = ( want_record->InternalInt() == 1 ); + + Unref(want_record); // ref'd by lookupwithdefault + Unref(pred); + + if ( valfields > 1 ) + { + if ( ! stream->want_record ) + { + reporter->Error("Stream %s does not want a record (want_record=F), but has more then one value field. Aborting", stream->name.c_str()); + delete stream; + return false; + } + } + + + assert(stream->reader); + stream->reader->Init(fieldsV.size(), fields ); + + readers[stream->reader] = stream; + + DBG_LOG(DBG_INPUT, "Successfully created table stream %s", + stream->name.c_str()); + + return true; + } + + +bool Manager::IsCompatibleType(BroType* t, bool atomic_only) + { + if ( ! t ) + return false; + + switch ( t->Tag() ) { + case TYPE_BOOL: + case TYPE_INT: + case TYPE_COUNT: + case TYPE_COUNTER: + case TYPE_PORT: + case TYPE_SUBNET: + case TYPE_ADDR: + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + case TYPE_ENUM: + case TYPE_STRING: + return true; + + case TYPE_RECORD: + return ! atomic_only; + + case TYPE_TABLE: + { + if ( atomic_only ) + return false; + + if ( ! t->IsSet() ) + return false; + + return IsCompatibleType(t->AsSetType()->Indices()->PureType(), true); + } + + case TYPE_VECTOR: + { + if ( atomic_only ) + return false; + + return IsCompatibleType(t->AsVectorType()->YieldType(), true); + } + + default: + return false; + } + + return false; + } + + +bool Manager::RemoveStream(Stream *i) + { + if ( i == 0 ) + return false; // not found + + if ( i->removed ) + { + reporter->Warning("Stream %s is already queued for removal. Ignoring remove.", i->name.c_str()); + return true; + } + + i->removed = true; + + DBG_LOG(DBG_INPUT, "Successfully queued removal of stream %s", + i->name.c_str()); + + return true; + } + +bool Manager::RemoveStream(ReaderFrontend* frontend) + { + return RemoveStream(FindStream(frontend)); + } + + +bool Manager::RemoveStream(const string &name) + { + return RemoveStream(FindStream(name)); + } + + +bool Manager::RemoveStreamContinuation(ReaderFrontend* reader) + { + Stream *i = FindStream(reader); + + if ( i == 0 ) + { + reporter->Error("Stream not found in RemoveStreamContinuation"); + return false; + } + +#ifdef DEBUG + DBG_LOG(DBG_INPUT, "Successfully executed removal of stream %s", + i->name.c_str()); +#endif + + readers.erase(reader); + delete(i); + + return true; + } + +bool Manager::UnrollRecordType(vector *fields, const RecordType *rec, + const string& nameprepend, bool allow_file_func) + { + for ( int i = 0; i < rec->NumFields(); i++ ) + { + + if ( ! IsCompatibleType(rec->FieldType(i)) ) + { + // If the field is a file or a function type + // and it is optional, we accept it nevertheless. 
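CreateTableStream() above pins down the shape of a table stream's optional event and predicate: the event receives the stream description, an Input::Event value, the index record and the value, while the predicate receives the Input::Event value, index and value and returns a bool. A sketch of conforming handlers, reusing the Idx, Val and hosts definitions from earlier:

    function hosts_pred(typ: Input::Event, left: Idx, right: Val): bool
        {
        # Returning F rejects the change (or, for removals, keeps the entry).
        return right$owner != "ignore";
        }

    event hosts_update(desc: Input::TableDescription, tpe: Input::Event,
                       left: Idx, right: Val)
        {
        if ( tpe == Input::EVENT_NEW )
            print fmt("new host %s (%s)", left$ip, right$owner);
        else if ( tpe == Input::EVENT_CHANGED )
            print fmt("host %s changed", left$ip);
        else if ( tpe == Input::EVENT_REMOVED )
            print fmt("host %s removed", left$ip);
        }

    event bro_init()
        {
        Input::add_table([$source="hosts.data", $name="hosts",
                          $idx=Idx, $val=Val, $destination=hosts,
                          $mode=Input::REREAD,
                          $ev=hosts_update, $pred=hosts_pred]);
        }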
+ // This allows importing logfiles containing this + // stuff that we actually cannot read :) + if ( allow_file_func ) + { + if ( ( rec->FieldType(i)->Tag() == TYPE_FILE || + rec->FieldType(i)->Tag() == TYPE_FUNC ) && + rec->FieldDecl(i)->FindAttr(ATTR_OPTIONAL) ) + { + reporter->Info("Encountered incompatible type \"%s\" in table definition for ReaderFrontend. Ignoring field.", type_name(rec->FieldType(i)->Tag())); + continue; + } + } + + reporter->Error("Incompatible type \"%s\" in table definition for ReaderFrontend", type_name(rec->FieldType(i)->Tag())); + return false; + } + + if ( rec->FieldType(i)->Tag() == TYPE_RECORD ) + { + string prep = nameprepend + rec->FieldName(i) + "."; + + if ( !UnrollRecordType(fields, rec->FieldType(i)->AsRecordType(), prep, allow_file_func) ) + { + return false; + } + + } + + else + { + string name = nameprepend + rec->FieldName(i); + const char* secondary = 0; + TypeTag ty = rec->FieldType(i)->Tag(); + TypeTag st = TYPE_VOID; + bool optional = false; + + if ( ty == TYPE_TABLE ) + st = rec->FieldType(i)->AsSetType()->Indices()->PureType()->Tag(); + + else if ( ty == TYPE_VECTOR ) + st = rec->FieldType(i)->AsVectorType()->YieldType()->Tag(); + + else if ( ty == TYPE_PORT && + rec->FieldDecl(i)->FindAttr(ATTR_TYPE_COLUMN) ) + { + // we have an annotation for the second column + + Val* c = rec->FieldDecl(i)->FindAttr(ATTR_TYPE_COLUMN)->AttrExpr()->Eval(0); + + assert(c); + assert(c->Type()->Tag() == TYPE_STRING); + + secondary = c->AsStringVal()->AsString()->CheckString(); + } + + if ( rec->FieldDecl(i)->FindAttr(ATTR_OPTIONAL ) ) + optional = true; + + Field* field = new Field(name.c_str(), secondary, ty, st, optional); + fields->push_back(field); + } + } + + return true; + } + +bool Manager::ForceUpdate(const string &name) + { + Stream *i = FindStream(name); + if ( i == 0 ) + { + reporter->Error("Stream %s not found", name.c_str()); + return false; + } + + if ( i->removed ) + { + reporter->Error("Stream %s is already queued for removal. 
Ignoring force update.", name.c_str()); + return false; + } + + i->reader->Update(); + +#ifdef DEBUG + DBG_LOG(DBG_INPUT, "Forcing update of stream %s", name.c_str()); +#endif + + return true; // update is async :( +} + + +Val* Manager::RecordValToIndexVal(RecordVal *r) + { + Val* idxval; + + RecordType *type = r->Type()->AsRecordType(); + + int num_fields = type->NumFields(); + + if ( num_fields == 1 && type->FieldDecl(0)->type->Tag() != TYPE_RECORD ) + idxval = r->LookupWithDefault(0); + + else + { + ListVal *l = new ListVal(TYPE_ANY); + for ( int j = 0 ; j < num_fields; j++ ) + l->Append(r->LookupWithDefault(j)); + + idxval = l; + } + + + return idxval; + } + + +Val* Manager::ValueToIndexVal(int num_fields, const RecordType *type, const Value* const *vals) + { + Val* idxval; + int position = 0; + + + if ( num_fields == 1 && type->FieldType(0)->Tag() != TYPE_RECORD ) + { + idxval = ValueToVal(vals[0], type->FieldType(0)); + position = 1; + } + else + { + ListVal *l = new ListVal(TYPE_ANY); + for ( int j = 0 ; j < type->NumFields(); j++ ) + { + if ( type->FieldType(j)->Tag() == TYPE_RECORD ) + l->Append(ValueToRecordVal(vals, + type->FieldType(j)->AsRecordType(), &position)); + else + { + l->Append(ValueToVal(vals[position], type->FieldType(j))); + position++; + } + } + idxval = l; + } + + assert ( position == num_fields ); + + return idxval; + } + + +void Manager::SendEntry(ReaderFrontend* reader, Value* *vals) + { + Stream *i = FindStream(reader); + if ( i == 0 ) + { + reporter->InternalError("Unknown reader in SendEntry"); + return; + } + + int readFields = 0; + + if ( i->stream_type == TABLE_STREAM ) + readFields = SendEntryTable(i, vals); + + else if ( i->stream_type == EVENT_STREAM ) + { + EnumVal *type = new EnumVal(BifEnum::Input::EVENT_NEW, BifType::Enum::Input::Event); + readFields = SendEventStreamEvent(i, type, vals); + } + + else + assert(false); + + for ( int i = 0; i < readFields; i++ ) + delete vals[i]; + + delete [] vals; + } + +int Manager::SendEntryTable(Stream* i, const Value* const *vals) + { + bool updated = false; + + assert(i); + + assert(i->stream_type == TABLE_STREAM); + TableStream* stream = (TableStream*) i; + + HashKey* idxhash = HashValues(stream->num_idx_fields, vals); + + if ( idxhash == 0 ) + { + reporter->Error("Could not hash line. Ignoring"); + return stream->num_val_fields + stream->num_idx_fields; + } + + hash_t valhash = 0; + if ( stream->num_val_fields > 0 ) + { + HashKey* valhashkey = HashValues(stream->num_val_fields, vals+stream->num_idx_fields); + if ( valhashkey == 0 ) + { + // empty line. index, but no values. + // hence we also have no hash value... + } + else + { + valhash = valhashkey->Hash(); + delete(valhashkey); + } + } + + InputHash *h = stream->lastDict->Lookup(idxhash); + if ( h != 0 ) + { + // seen before + if ( stream->num_val_fields == 0 || h->valhash == valhash ) + { + // ok, exact duplicate, move entry to new dicrionary and do nothing else. 
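UnrollRecordType() above flattens nested records into dot-separated field names and records a secondary column for ports annotated with a type column. A hedged sketch of what that looks like for an Ascii source, assuming the script-level attribute is spelled &type_column and that the reader matches the flattened names against its #fields header:

    type Service: record {
        name: string;
        p: port &type_column="proto";   # port read from "svc.p", protocol from "proto"
    };

    type Val: record {
        svc: Service;                   # unrolled into the columns "svc.name" and "svc.p"
        owner: string;
    };

    # services.data (tab-separated; "host" is the index column of a separate Idx record):
    #   #fields host    svc.name    svc.p   proto   owner
    #   10.0.0.1        www         80      tcp     ops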
+ stream->lastDict->Remove(idxhash); + stream->currDict->Insert(idxhash, h); + delete idxhash; + return stream->num_val_fields + stream->num_idx_fields; + } + + else + { + assert( stream->num_val_fields > 0 ); + // entry was updated in some way + stream->lastDict->Remove(idxhash); + // keep h for predicates + updated = true; + + } + } + + Val* valval; + RecordVal* predidx = 0; + + int position = stream->num_idx_fields; + + if ( stream->num_val_fields == 0 ) + valval = 0; + + else if ( stream->num_val_fields == 1 && !stream->want_record ) + valval = ValueToVal(vals[position], stream->rtype->FieldType(0)); + + else + valval = ValueToRecordVal(vals, stream->rtype, &position); + + + // call stream first to determine if we really add / change the entry + if ( stream->pred ) + { + EnumVal* ev; + int startpos = 0; + predidx = ValueToRecordVal(vals, stream->itype, &startpos); + + if ( updated ) + ev = new EnumVal(BifEnum::Input::EVENT_CHANGED, BifType::Enum::Input::Event); + else + ev = new EnumVal(BifEnum::Input::EVENT_NEW, BifType::Enum::Input::Event); + + bool result; + if ( stream->num_val_fields > 0 ) // we have values + result = CallPred(stream->pred, 3, ev, predidx->Ref(), valval->Ref()); + else // no values + result = CallPred(stream->pred, 2, ev, predidx->Ref()); + + if ( result == false ) + { + Unref(predidx); + Unref(valval); + + if ( ! updated ) + { + // just quit and delete everything we created. + delete idxhash; + delete h; + return stream->num_val_fields + stream->num_idx_fields; + } + + else + { + // keep old one + stream->currDict->Insert(idxhash, h); + delete idxhash; + return stream->num_val_fields + stream->num_idx_fields; + } + } + } + + // now we don't need h anymore - if we are here, the entry is updated and a new h is created. + if ( h ) + { + delete h; + h = 0; + } + + + Val* idxval; + if ( predidx != 0 ) + { + idxval = RecordValToIndexVal(predidx); + // I think there is an unref missing here. But if I insert is, it crashes :) + } + else + idxval = ValueToIndexVal(stream->num_idx_fields, stream->itype, vals); + + Val* oldval = 0; + if ( updated == true ) + { + assert(stream->num_val_fields > 0); + // in that case, we need the old value to send the event (if we send an event). + oldval = stream->tab->Lookup(idxval, false); + } + + assert(idxval); + HashKey* k = stream->tab->ComputeHash(idxval); + if ( ! k ) + reporter->InternalError("could not hash"); + + InputHash* ih = new InputHash(); + ih->idxkey = new HashKey(k->Key(), k->Size(), k->Hash()); + ih->valhash = valhash; + + if ( stream->event && updated ) + Ref(oldval); // otherwise it is no longer accessible after the assignment + + stream->tab->Assign(idxval, k, valval); + Unref(idxval); // asssign does not consume idxval. + + if ( predidx != 0 ) + Unref(predidx); + + stream->currDict->Insert(idxhash, ih); + delete idxhash; + + if ( stream->event ) + { + EnumVal* ev; + int startpos = 0; + Val* predidx = ValueToRecordVal(vals, stream->itype, &startpos); + + if ( updated ) + { // in case of update send back the old value. 
+ assert ( stream->num_val_fields > 0 ); + ev = new EnumVal(BifEnum::Input::EVENT_CHANGED, BifType::Enum::Input::Event); + assert ( oldval != 0 ); + SendEvent(stream->event, 4, stream->description->Ref(), ev, predidx, oldval); + } + + else + { + ev = new EnumVal(BifEnum::Input::EVENT_NEW, BifType::Enum::Input::Event); + if ( stream->num_val_fields == 0 ) + { + Ref(stream->description); + SendEvent(stream->event, 3, stream->description->Ref(), ev, predidx); + } + else + SendEvent(stream->event, 4, stream->description->Ref(), ev, predidx, valval->Ref()); + + } + } + + return stream->num_val_fields + stream->num_idx_fields; + } + +void Manager::EndCurrentSend(ReaderFrontend* reader) + { + Stream *i = FindStream(reader); + + if ( i == 0 ) + { + reporter->InternalError("Unknown reader in EndCurrentSend"); + return; + } + +#ifdef DEBUG + DBG_LOG(DBG_INPUT, "Got EndCurrentSend stream %s", i->name.c_str()); +#endif + + if ( i->stream_type == EVENT_STREAM ) + { + // just signal the end of the data source + SendEndOfData(i); + return; + } + + assert(i->stream_type == TABLE_STREAM); + TableStream* stream = (TableStream*) i; + + // lastdict contains all deleted entries and should be empty apart from that + IterCookie *c = stream->lastDict->InitForIteration(); + stream->lastDict->MakeRobustCookie(c); + InputHash* ih; + HashKey *lastDictIdxKey; + + while ( ( ih = stream->lastDict->NextEntry(lastDictIdxKey, c) ) ) + { + ListVal * idx = 0; + Val *val = 0; + + Val* predidx = 0; + EnumVal* ev = 0; + int startpos = 0; + + if ( stream->pred || stream->event ) + { + idx = stream->tab->RecoverIndex(ih->idxkey); + assert(idx != 0); + val = stream->tab->Lookup(idx); + assert(val != 0); + predidx = ListValToRecordVal(idx, stream->itype, &startpos); + Unref(idx); + ev = new EnumVal(BifEnum::Input::EVENT_REMOVED, BifType::Enum::Input::Event); + } + + if ( stream->pred ) + { + // ask predicate, if we want to expire this element... + + Ref(ev); + Ref(predidx); + Ref(val); + + bool result = CallPred(stream->pred, 3, ev, predidx, val); + + if ( result == false ) + { + // Keep it. Hence - we quit and simply go to the next entry of lastDict + // ah well - and we have to add the entry to currDict... + Unref(predidx); + Unref(ev); + stream->currDict->Insert(lastDictIdxKey, stream->lastDict->RemoveEntry(lastDictIdxKey)); + delete lastDictIdxKey; + continue; + } + } + + if ( stream->event ) + { + Ref(predidx); + Ref(val); + Ref(ev); + SendEvent(stream->event, 4, stream->description->Ref(), ev, predidx, val); + } + + if ( predidx ) // if we have a stream or an event... + Unref(predidx); + + if ( ev ) + Unref(ev); + + Unref(stream->tab->Delete(ih->idxkey)); + stream->lastDict->Remove(lastDictIdxKey); // delete in next line + delete lastDictIdxKey; + delete(ih); + } + + stream->lastDict->Clear(); // should be empt. buti- well... who knows... 
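EndCurrentSend() above is where entries that disappeared from a re-read source are expired: each leftover entry is offered to the predicate with Input::EVENT_REMOVED, and returning F keeps it in the Bro-side table. A short variant of the earlier predicate that exploits this:

    function keep_removed_pred(typ: Input::Event, left: Idx, right: Val): bool
        {
        # Never drop entries just because they vanished from the file;
        # accept all additions and changes.
        if ( typ == Input::EVENT_REMOVED )
            return F;

        return T;
        }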
+ delete(stream->lastDict); + + stream->lastDict = stream->currDict; + stream->currDict = new PDict(InputHash); + stream->currDict->SetDeleteFunc(input_hash_delete_func); + +#ifdef DEBUG + DBG_LOG(DBG_INPUT, "EndCurrentSend complete for stream %s", + i->name.c_str()); +#endif + + SendEndOfData(i); + } + +void Manager::SendEndOfData(ReaderFrontend* reader) + { + Stream *i = FindStream(reader); + + if ( i == 0 ) + { + reporter->InternalError("Unknown reader in SendEndOfData"); + return; + } + + SendEndOfData(i); + } + +void Manager::SendEndOfData(const Stream *i) + { + SendEvent(end_of_data, 2, new StringVal(i->name.c_str()), new StringVal(i->info->source)); + } + +void Manager::Put(ReaderFrontend* reader, Value* *vals) + { + Stream *i = FindStream(reader); + if ( i == 0 ) + { + reporter->InternalError("Unknown reader in Put"); + return; + } + + int readFields = 0; + + if ( i->stream_type == TABLE_STREAM ) + readFields = PutTable(i, vals); + + else if ( i->stream_type == EVENT_STREAM ) + { + EnumVal *type = new EnumVal(BifEnum::Input::EVENT_NEW, BifType::Enum::Input::Event); + readFields = SendEventStreamEvent(i, type, vals); + } + + else + assert(false); + + for ( int i = 0; i < readFields; i++ ) + delete vals[i]; + + delete [] vals; + } + +int Manager::SendEventStreamEvent(Stream* i, EnumVal* type, const Value* const *vals) + { + assert(i); + + assert(i->stream_type == EVENT_STREAM); + EventStream* stream = (EventStream*) i; + + Val *val; + list out_vals; + Ref(stream->description); + out_vals.push_back(stream->description); + // no tracking, send everything with a new event... + out_vals.push_back(type); + + int position = 0; + + if ( stream->want_record ) + { + RecordVal * r = ValueToRecordVal(vals, stream->fields, &position); + out_vals.push_back(r); + } + + else + { + for ( int j = 0; j < stream->fields->NumFields(); j++) + { + Val* val = 0; + + if ( stream->fields->FieldType(j)->Tag() == TYPE_RECORD ) + val = ValueToRecordVal(vals, + stream->fields->FieldType(j)->AsRecordType(), + &position); + + else + { + val = ValueToVal(vals[position], stream->fields->FieldType(j)); + position++; + } + + out_vals.push_back(val); + } + } + + SendEvent(stream->event, out_vals); + + return stream->fields->NumFields(); + } + +int Manager::PutTable(Stream* i, const Value* const *vals) + { + assert(i); + + assert(i->stream_type == TABLE_STREAM); + TableStream* stream = (TableStream*) i; + + Val* idxval = ValueToIndexVal(stream->num_idx_fields, stream->itype, vals); + Val* valval; + + int position = stream->num_idx_fields; + + if ( stream->num_val_fields == 0 ) + valval = 0; + + else if ( stream->num_val_fields == 1 && stream->want_record == 0 ) + valval = ValueToVal(vals[position], stream->rtype->FieldType(0)); + else + valval = ValueToRecordVal(vals, stream->rtype, &position); + + // if we have a subscribed event, we need to figure out, if this is an update or not + // same for predicates + if ( stream->pred || stream->event ) + { + bool updated = false; + Val* oldval = 0; + + if ( stream->num_val_fields > 0 ) + { + // in that case, we need the old value to send the event (if we send an event). 
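SendEventStreamEvent() above shows how an event stream delivers each input record: the handler always receives the stream description and an Input::Event value, followed either by one argument per field or by a single record, matching the checks in CreateEventStream(). A sketch using expanded fields:

    type Entry: record {
        ts: time;
        msg: string;
    };

    event input_line(desc: Input::EventDescription, tpe: Input::Event,
                     ts: time, msg: string)
        {
        print fmt("%s: %s", ts, msg);
        }

    event bro_init()
        {
        Input::add_event([$source="entries.log", $name="entries",
                          $fields=Entry, $ev=input_line, $want_record=F]);
        }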
+ oldval = stream->tab->Lookup(idxval, false); + } + + if ( oldval != 0 ) + { + // it is an update + updated = true; + Ref(oldval); // have to do that, otherwise it may disappear in assign + } + + + // predicate if we want the update or not + if ( stream->pred ) + { + EnumVal* ev; + int startpos = 0; + Val* predidx = ValueToRecordVal(vals, stream->itype, &startpos); + Ref(valval); + + if ( updated ) + ev = new EnumVal(BifEnum::Input::EVENT_CHANGED, + BifType::Enum::Input::Event); + else + ev = new EnumVal(BifEnum::Input::EVENT_NEW, + BifType::Enum::Input::Event); + + bool result; + if ( stream->num_val_fields > 0 ) // we have values + result = CallPred(stream->pred, 3, ev, predidx, valval); + else // no values + result = CallPred(stream->pred, 2, ev, predidx); + + if ( result == false ) + { + // do nothing + Unref(idxval); + Unref(valval); + Unref(oldval); + return stream->num_val_fields + stream->num_idx_fields; + } + + } + + stream->tab->Assign(idxval, valval); + + if ( stream->event ) + { + EnumVal* ev; + int startpos = 0; + Val* predidx = ValueToRecordVal(vals, stream->itype, &startpos); + + if ( updated ) + { + // in case of update send back the old value. + assert ( stream->num_val_fields > 0 ); + ev = new EnumVal(BifEnum::Input::EVENT_CHANGED, + BifType::Enum::Input::Event); + assert ( oldval != 0 ); + SendEvent(stream->event, 4, stream->description->Ref(), + ev, predidx, oldval); + } + else + { + ev = new EnumVal(BifEnum::Input::EVENT_NEW, + BifType::Enum::Input::Event); + if ( stream->num_val_fields == 0 ) + SendEvent(stream->event, 4, stream->description->Ref(), + ev, predidx); + else + SendEvent(stream->event, 4, stream->description->Ref(), + ev, predidx, valval->Ref()); + } + + } + + } + + else // no predicates or other stuff + stream->tab->Assign(idxval, valval); + + Unref(idxval); // not consumed by assign + + return stream->num_idx_fields + stream->num_val_fields; + } + +// Todo:: perhaps throw some kind of clear-event? +void Manager::Clear(ReaderFrontend* reader) + { + Stream *i = FindStream(reader); + if ( i == 0 ) + { + reporter->InternalError("Unknown reader in Clear"); + return; + } + +#ifdef DEBUG + DBG_LOG(DBG_INPUT, "Got Clear for stream %s", + i->name.c_str()); +#endif + + assert(i->stream_type == TABLE_STREAM); + TableStream* stream = (TableStream*) i; + + stream->tab->RemoveAll(); + } + +// put interface: delete old entry from table. +bool Manager::Delete(ReaderFrontend* reader, Value* *vals) + { + Stream *i = FindStream(reader); + if ( i == 0 ) + { + reporter->InternalError("Unknown reader in Delete"); + return false; + } + + bool success = false; + int readVals = 0; + + if ( i->stream_type == TABLE_STREAM ) + { + TableStream* stream = (TableStream*) i; + Val* idxval = ValueToIndexVal(stream->num_idx_fields, stream->itype, vals); + assert(idxval != 0); + readVals = stream->num_idx_fields + stream->num_val_fields; + bool streamresult = true; + + if ( stream->pred || stream->event ) + { + Val *val = stream->tab->Lookup(idxval); + + if ( stream->pred ) + { + Ref(val); + EnumVal *ev = new EnumVal(BifEnum::Input::EVENT_REMOVED, BifType::Enum::Input::Event); + int startpos = 0; + Val* predidx = ValueToRecordVal(vals, stream->itype, &startpos); + + streamresult = CallPred(stream->pred, 3, ev, predidx, val); + + if ( streamresult == false ) + { + // keep it. 
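SendEndOfData() above raises Input::end_of_data with the stream's name and source whenever a read completes. A handler can use it, for example, to tear down a one-shot stream, assuming the standard Input::remove wrapper for __remove_stream:

    event Input::end_of_data(name: string, source: string)
        {
        print fmt("done reading %s (from %s)", name, source);

        if ( name == "hosts" )
            Input::remove(name);
        }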
+ Unref(idxval); + success = true; + } + + } + + // only if stream = true -> no streaming + if ( streamresult && stream->event ) + { + Ref(idxval); + assert(val != 0); + Ref(val); + EnumVal *ev = new EnumVal(BifEnum::Input::EVENT_REMOVED, BifType::Enum::Input::Event); + SendEvent(stream->event, 4, stream->description->Ref(), ev, idxval, val); + } + } + + // only if stream = true -> no streaming + if ( streamresult ) + { + Val* retptr = stream->tab->Delete(idxval); + success = ( retptr != 0 ); + if ( ! success ) + reporter->Error("Internal error while deleting values from input table"); + else + Unref(retptr); + } + + } + + else if ( i->stream_type == EVENT_STREAM ) + { + EnumVal *type = new EnumVal(BifEnum::Input::EVENT_REMOVED, BifType::Enum::Input::Event); + readVals = SendEventStreamEvent(i, type, vals); + success = true; + } + + else + { + assert(false); + return false; + } + + for ( int i = 0; i < readVals; i++ ) + delete vals[i]; + + delete [] vals; + + return success; + } + +bool Manager::CallPred(Func* pred_func, const int numvals, ...) + { + bool result = false; + val_list vl(numvals); + + va_list lP; + va_start(lP, numvals); + for ( int i = 0; i < numvals; i++ ) + vl.append( va_arg(lP, Val*) ); + + va_end(lP); + + Val* v = pred_func->Call(&vl); + if ( v ) + { + result = v->AsBool(); + Unref(v); + } + + return result; + } + +bool Manager::SendEvent(const string& name, const int num_vals, Value* *vals) + { + EventHandler* handler = event_registry->Lookup(name.c_str()); + if ( handler == 0 ) + { + reporter->Error("Event %s not found", name.c_str()); + return false; + } + + RecordType *type = handler->FType()->Args(); + int num_event_vals = type->NumFields(); + if ( num_vals != num_event_vals ) + { + reporter->Error("Wrong number of values for event %s", name.c_str()); + return false; + } + + val_list* vl = new val_list; + for ( int i = 0; i < num_vals; i++) + vl->append(ValueToVal(vals[i], type->FieldType(i))); + + mgr.Dispatch(new Event(handler, vl)); + + for ( int i = 0; i < num_vals; i++ ) + delete vals[i]; + + delete [] vals; + + return true; +} + +void Manager::SendEvent(EventHandlerPtr ev, const int numvals, ...) + { + val_list* vl = new val_list; + + va_list lP; + va_start(lP, numvals); + for ( int i = 0; i < numvals; i++ ) + vl->append( va_arg(lP, Val*) ); + + va_end(lP); + + mgr.QueueEvent(ev, vl, SOURCE_LOCAL); + } + +void Manager::SendEvent(EventHandlerPtr ev, list events) + { + val_list* vl = new val_list; + + for ( list::iterator i = events.begin(); i != events.end(); i++ ) + { + vl->append( *i ); + } + + mgr.QueueEvent(ev, vl, SOURCE_LOCAL); + } + +// Convert a bro list value to a bro record value. 
+// I / we could think about moving this functionality to val.cc +RecordVal* Manager::ListValToRecordVal(ListVal* list, RecordType *request_type, int* position) + { + assert(position != 0 ); // we need the pointer to point to data; + + if ( request_type->Tag() != TYPE_RECORD ) + { + reporter->InternalError("ListValToRecordVal called on non-record-value."); + return 0; + } + + RecordVal* rec = new RecordVal(request_type->AsRecordType()); + + assert(list != 0); + int maxpos = list->Length(); + + for ( int i = 0; i < request_type->NumFields(); i++ ) + { + assert ( (*position) <= maxpos ); + + Val* fieldVal = 0; + if ( request_type->FieldType(i)->Tag() == TYPE_RECORD ) + fieldVal = ListValToRecordVal(list, request_type->FieldType(i)->AsRecordType(), position); + else + { + fieldVal = list->Index(*position); + (*position)++; + } + + rec->Assign(i, fieldVal->Ref()); + } + + return rec; + } + +// Convert a threading value to a record value +RecordVal* Manager::ValueToRecordVal(const Value* const *vals, + RecordType *request_type, int* position) + { + assert(position != 0); // we need the pointer to point to data. + + if ( request_type->Tag() != TYPE_RECORD ) + { + reporter->InternalError("ValueToRecordVal called on non-record-value."); + return 0; + } + + RecordVal* rec = new RecordVal(request_type->AsRecordType()); + for ( int i = 0; i < request_type->NumFields(); i++ ) + { + Val* fieldVal = 0; + if ( request_type->FieldType(i)->Tag() == TYPE_RECORD ) + fieldVal = ValueToRecordVal(vals, request_type->FieldType(i)->AsRecordType(), position); + else if ( request_type->FieldType(i)->Tag() == TYPE_FILE || + request_type->FieldType(i)->Tag() == TYPE_FUNC ) + { + // If those two unsupported types are encountered here, they have + // been let through by the type checking. + // That means that they are optional & the user agreed to ignore + // them and has been warned by reporter. + // Hence -> assign null to the field, done. + + // Better check that it really is optional. Uou never know. 
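ValueToRecordVal() above leaves file- and function-typed fields unset, and together with the earlier UnrollRecordType() check such fields are only tolerated when they are &optional and the option declared in input.bif is enabled. A hedged sketch, assuming the script-level name Input::accept_unsupported_types mirrors that BiF constant and is redefinable:

    redef Input::accept_unsupported_types = T;

    type Val: record {
        msg: string;
        f: file &optional;   # unreadable type: skipped on input and left unset
    };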
+ assert(request_type->FieldDecl(i)->FindAttr(ATTR_OPTIONAL)); + } + else + { + fieldVal = ValueToVal(vals[*position], request_type->FieldType(i)); + (*position)++; + } + + rec->Assign(i, fieldVal); + } + + return rec; + } + +// Count the length of the values used to create a correct length buffer for +// hashing later +int Manager::GetValueLength(const Value* val) { + assert( val->present ); // presence has to be checked elsewhere + int length = 0; + + switch (val->type) { + case TYPE_BOOL: + case TYPE_INT: + length += sizeof(val->val.int_val); + break; + + case TYPE_COUNT: + case TYPE_COUNTER: + length += sizeof(val->val.uint_val); + break; + + case TYPE_PORT: + length += sizeof(val->val.port_val.port); + length += sizeof(val->val.port_val.proto); + break; + + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + length += sizeof(val->val.double_val); + break; + + case TYPE_STRING: + case TYPE_ENUM: + { + length += val->val.string_val.length + 1; + break; + } + + case TYPE_ADDR: + { + switch ( val->val.addr_val.family ) { + case IPv4: + length += sizeof(val->val.addr_val.in.in4); + break; + case IPv6: + length += sizeof(val->val.addr_val.in.in6); + break; + default: + assert(false); + } + } + break; + + case TYPE_SUBNET: + { + switch ( val->val.subnet_val.prefix.family ) { + case IPv4: + length += sizeof(val->val.subnet_val.prefix.in.in4)+ + sizeof(val->val.subnet_val.length); + break; + case IPv6: + length += sizeof(val->val.subnet_val.prefix.in.in6)+ + sizeof(val->val.subnet_val.length); + break; + default: + assert(false); + } + } + break; + + case TYPE_TABLE: + { + for ( int i = 0; i < val->val.set_val.size; i++ ) + length += GetValueLength(val->val.set_val.vals[i]); + break; + } + + case TYPE_VECTOR: + { + int j = val->val.vector_val.size; + for ( int i = 0; i < j; i++ ) + length += GetValueLength(val->val.vector_val.vals[i]); + break; + } + + default: + reporter->InternalError("unsupported type %d for GetValueLength", val->type); + } + + return length; + +} + +// Given a threading::value, copy the raw data bytes into *data and return how many bytes were copied. +// Used for hashing the values for lookup in the bro table +int Manager::CopyValue(char *data, const int startpos, const Value* val) + { + assert( val->present ); // presence has to be checked elsewhere + + switch ( val->type ) { + case TYPE_BOOL: + case TYPE_INT: + memcpy(data+startpos, (const void*) &(val->val.int_val), sizeof(val->val.int_val)); + return sizeof(val->val.int_val); + + case TYPE_COUNT: + case TYPE_COUNTER: + memcpy(data+startpos, (const void*) &(val->val.uint_val), sizeof(val->val.uint_val)); + return sizeof(val->val.uint_val); + + case TYPE_PORT: + { + int length = 0; + memcpy(data+startpos, (const void*) &(val->val.port_val.port), + sizeof(val->val.port_val.port)); + length += sizeof(val->val.port_val.port); + memcpy(data+startpos+length, (const void*) &(val->val.port_val.proto), + sizeof(val->val.port_val.proto)); + length += sizeof(val->val.port_val.proto); + return length; + } + + + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + memcpy(data+startpos, (const void*) &(val->val.double_val), + sizeof(val->val.double_val)); + return sizeof(val->val.double_val); + + case TYPE_STRING: + case TYPE_ENUM: + { + memcpy(data+startpos, val->val.string_val.data, val->val.string_val.length); + // Add a \0 to the end. To be able to hash zero-length + // strings and differentiate from !present. 
+ memset(data + startpos + val->val.string_val.length, 0, 1); + return val->val.string_val.length + 1; + } + + case TYPE_ADDR: + { + int length = 0; + switch ( val->val.addr_val.family ) { + case IPv4: + length = sizeof(val->val.addr_val.in.in4); + memcpy(data + startpos, (const char*) &(val->val.addr_val.in.in4), length); + break; + + case IPv6: + length = sizeof(val->val.addr_val.in.in6); + memcpy(data + startpos, (const char*) &(val->val.addr_val.in.in6), length); + break; + + default: + assert(false); + } + + return length; + } + + case TYPE_SUBNET: + { + int length = 0; + switch ( val->val.subnet_val.prefix.family ) { + case IPv4: + length = sizeof(val->val.addr_val.in.in4); + memcpy(data + startpos, + (const char*) &(val->val.subnet_val.prefix.in.in4), length); + break; + + case IPv6: + length = sizeof(val->val.addr_val.in.in6); + memcpy(data + startpos, + (const char*) &(val->val.subnet_val.prefix.in.in4), length); + break; + + default: + assert(false); + } + + int lengthlength = sizeof(val->val.subnet_val.length); + memcpy(data + startpos + length , + (const char*) &(val->val.subnet_val.length), lengthlength); + length += lengthlength; + + return length; + } + + case TYPE_TABLE: + { + int length = 0; + int j = val->val.set_val.size; + for ( int i = 0; i < j; i++ ) + length += CopyValue(data, startpos+length, val->val.set_val.vals[i]); + + return length; + } + + case TYPE_VECTOR: + { + int length = 0; + int j = val->val.vector_val.size; + for ( int i = 0; i < j; i++ ) + length += CopyValue(data, startpos+length, val->val.vector_val.vals[i]); + + return length; + } + + default: + reporter->InternalError("unsupported type %d for CopyValue", val->type); + return 0; + } + + assert(false); + return 0; + } + +// Hash num_elements threading values and return the HashKey for them. At least one of the vals has to be ->present. +HashKey* Manager::HashValues(const int num_elements, const Value* const *vals) + { + int length = 0; + + for ( int i = 0; i < num_elements; i++ ) + { + const Value* val = vals[i]; + if ( val->present ) + length += GetValueLength(val); + + // And in any case add 1 for the end-of-field-identifier. + length++; + } + + assert ( length >= num_elements ); + + if ( length == num_elements ) + return NULL; + + int position = 0; + char *data = (char*) malloc(length); + if ( data == 0 ) + reporter->InternalError("Could not malloc?"); + + for ( int i = 0; i < num_elements; i++ ) + { + const Value* val = vals[i]; + if ( val->present ) + position += CopyValue(data, position, val); + + memset(data + position, 1, 1); // Add end-of-field-marker. Does not really matter which value it is, + // it just has to be... something. 
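The hashing helpers above and the value conversion that follows both handle sets and vectors of atomic types, so such collections can appear as input columns. A sketch, assuming the Ascii reader splits collection elements on InputAscii::set_separator, which is a comma by default:

    type Val: record {
        tags:  set[string];
        ports: vector of count;
    };

    # A matching Ascii line (tab-separated fields, comma-separated elements):
    #   #fields host        tags        ports
    #   10.0.0.1            web,ssl     80,443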
+ + position++; + + } + + HashKey *key = new HashKey(data, length); + delete data; + + assert(position == length); + return key; + } + +// convert threading value to Bro value +Val* Manager::ValueToVal(const Value* val, BroType* request_type) + { + + if ( request_type->Tag() != TYPE_ANY && request_type->Tag() != val->type ) + { + reporter->InternalError("Typetags don't match: %d vs %d", request_type->Tag(), val->type); + return 0; + } + + if ( !val->present ) + return 0; // unset field + + switch ( val->type ) { + case TYPE_BOOL: + case TYPE_INT: + return new Val(val->val.int_val, val->type); + break; + + case TYPE_COUNT: + case TYPE_COUNTER: + return new Val(val->val.uint_val, val->type); + + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + return new Val(val->val.double_val, val->type); + + case TYPE_STRING: + { + BroString *s = new BroString((const u_char*)val->val.string_val.data, val->val.string_val.length, 1); + return new StringVal(s); + } + + case TYPE_PORT: + return new PortVal(val->val.port_val.port, val->val.port_val.proto); + + case TYPE_ADDR: + { + IPAddr* addr = 0; + switch ( val->val.addr_val.family ) { + case IPv4: + addr = new IPAddr(val->val.addr_val.in.in4); + break; + + case IPv6: + addr = new IPAddr(val->val.addr_val.in.in6); + break; + + default: + assert(false); + } + + AddrVal* addrval = new AddrVal(*addr); + delete addr; + return addrval; + } + + case TYPE_SUBNET: + { + IPAddr* addr = 0; + switch ( val->val.subnet_val.prefix.family ) { + case IPv4: + addr = new IPAddr(val->val.subnet_val.prefix.in.in4); + break; + + case IPv6: + addr = new IPAddr(val->val.subnet_val.prefix.in.in6); + break; + + default: + assert(false); + } + + SubNetVal* subnetval = new SubNetVal(*addr, val->val.subnet_val.length); + delete addr; + return subnetval; + } + + case TYPE_TABLE: + { + // all entries have to have the same type... + BroType* type = request_type->AsTableType()->Indices()->PureType(); + TypeList* set_index = new TypeList(type->Ref()); + set_index->Append(type->Ref()); + SetType* s = new SetType(set_index, 0); + TableVal* t = new TableVal(s); + for ( int i = 0; i < val->val.set_val.size; i++ ) + { + Val* assignval = ValueToVal( val->val.set_val.vals[i], type ); + t->Assign(assignval, 0); + Unref(assignval); // idex is not consumed by assign. + } + + Unref(s); + return t; + } + + case TYPE_VECTOR: + { + // all entries have to have the same type... + BroType* type = request_type->AsVectorType()->YieldType(); + VectorType* vt = new VectorType(type->Ref()); + VectorVal* v = new VectorVal(vt); + for ( int i = 0; i < val->val.vector_val.size; i++ ) + v->Assign(i, ValueToVal( val->val.set_val.vals[i], type ), 0); + + Unref(vt); + return v; + } + + case TYPE_ENUM: { + // well, this is kind of stupid, because EnumType just mangles the module name and the var name together again... + // but well + string module = extract_module_name(val->val.string_val.data); + string var = extract_var_name(val->val.string_val.data); + bro_int_t index = request_type->AsEnumType()->Lookup(module, var.c_str()); + if ( index == -1 ) + reporter->InternalError("Value not found in enum mappimg. 
Module: %s, var: %s", + module.c_str(), var.c_str()); + + return new EnumVal(index, request_type->Ref()->AsEnumType() ); + } + + + default: + reporter->InternalError("unsupported type for input_read"); + } + + assert(false); + return NULL; + } + +Manager::Stream* Manager::FindStream(const string &name) + { + for ( map::iterator s = readers.begin(); s != readers.end(); ++s ) + { + if ( (*s).second->name == name ) + return (*s).second; + } + + return 0; + } + +Manager::Stream* Manager::FindStream(ReaderFrontend* reader) + { + map::iterator s = readers.find(reader); + if ( s != readers.end() ) + return s->second; + + return 0; + } diff --git a/src/input/Manager.h b/src/input/Manager.h new file mode 100644 index 0000000000..633b20f8ed --- /dev/null +++ b/src/input/Manager.h @@ -0,0 +1,218 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// Class for managing input streams. + +#ifndef INPUT_MANAGER_H +#define INPUT_MANAGER_H + +#include "BroString.h" +#include "EventHandler.h" +#include "RemoteSerializer.h" +#include "Val.h" + +#include + +namespace input { + +class ReaderFrontend; +class ReaderBackend; + +/** + * Singleton class for managing input streams. + */ +class Manager { +public: + /** + * Constructor. + */ + Manager(); + + /** + * Destructor. + */ + ~Manager(); + + /** + * Creates a new input stream which will write the data from the data + * source into a table. + * + * @param description A record of script type \c + * Input:StreamDescription. + * + * This method corresponds directly to the internal BiF defined in + * input.bif, which just forwards here. + */ + bool CreateTableStream(RecordVal* description); + + /** + * Creates a new input stream which sends events for read input data. + * + * @param description A record of script type \c + * Input:StreamDescription. + * + * This method corresponds directly to the internal BiF defined in + * input.bif, which just forwards here. + */ + bool CreateEventStream(RecordVal* description); + + /** + * Force update on a input stream. Forces a re-read of the whole + * input source. Usually used when an input stream is opened in + * managed mode. Otherwise, this can be used to trigger a input + * source check before a heartbeat message arrives. May be ignored by + * the reader. + * + * @param id The enum value corresponding the input stream. + * + * This method corresponds directly to the internal BiF defined in + * input.bif, which just forwards here. + */ + bool ForceUpdate(const string &id); + + /** + * Deletes an existing input stream. + * + * @param id The name of the input stream to be removed. + * + * This method corresponds directly to the internal BiF defined in + * input.bif, which just forwards here. + */ + bool RemoveStream(const string &id); + +protected: + friend class ReaderFrontend; + friend class PutMessage; + friend class DeleteMessage; + friend class ClearMessage; + friend class SendEventMessage; + friend class SendEntryMessage; + friend class EndCurrentSendMessage; + friend class ReaderClosedMessage; + friend class DisableMessage; + friend class EndOfDataMessage; + + // For readers to write to input stream in direct mode (reporting + // new/deleted values directly). Functions take ownership of + // threading::Value fields. + void Put(ReaderFrontend* reader, threading::Value* *vals); + void Clear(ReaderFrontend* reader); + bool Delete(ReaderFrontend* reader, threading::Value* *vals); + // Trigger sending the End-of-Data event when the input source has + // finished reading. 
Just use in direct mode. + void SendEndOfData(ReaderFrontend* reader); + + // For readers to write to input stream in indirect mode (manager is + // monitoring new/deleted values) Functions take ownership of + // threading::Value fields. + void SendEntry(ReaderFrontend* reader, threading::Value* *vals); + void EndCurrentSend(ReaderFrontend* reader); + + // Allows readers to directly send Bro events. The num_vals and vals + // must be the same the named event expects. Takes ownership of + // threading::Value fields. + bool SendEvent(const string& name, const int num_vals, threading::Value* *vals); + + // Instantiates a new ReaderBackend of the given type (note that + // doing so creates a new thread!). + ReaderBackend* CreateBackend(ReaderFrontend* frontend, bro_int_t type); + + // Function called from the ReaderBackend to notify the manager that + // a stream has been removed or a stream has been closed. Used to + // prevent race conditions where data for a specific stream is still + // in the queue when the RemoveStream directive is executed by the + // main thread. This makes sure all data that has ben queued for a + // stream is still received. + bool RemoveStreamContinuation(ReaderFrontend* reader); + + /** + * Deletes an existing input stream. + * + * @param frontend pointer to the frontend of the input stream to be removed. + * + * This method is used by the reader backends to remove a reader when it fails + * for some reason. + */ + bool RemoveStream(ReaderFrontend* frontend); + +private: + class Stream; + class TableStream; + class EventStream; + + // Actual RemoveStream implementation -- the function's public and + // protected definitions are wrappers around this function. + bool RemoveStream(Stream* i); + + bool CreateStream(Stream*, RecordVal* description); + + // SendEntry implementation for Table stream. + int SendEntryTable(Stream* i, const threading::Value* const *vals); + + // Put implementation for Table stream. + int PutTable(Stream* i, const threading::Value* const *vals); + + // SendEntry and Put implementation for Event stream. + int SendEventStreamEvent(Stream* i, EnumVal* type, const threading::Value* const *vals); + + // Checks that a Bro type can be used for data reading. The + // equivalend in threading cannot be used, because we have support + // different types from the log framework + bool IsCompatibleType(BroType* t, bool atomic_only=false); + // Check if a record is made up of compatible types and return a list + // of all fields that are in the record in order. Recursively unrolls + // records + bool UnrollRecordType(vector *fields, const RecordType *rec, const string& nameprepend, bool allow_file_func); + + // Send events + void SendEvent(EventHandlerPtr ev, const int numvals, ...); + void SendEvent(EventHandlerPtr ev, list events); + + // Implementation of SendEndOfData (send end_of_data event). + void SendEndOfData(const Stream *i); + + // Call predicate function and return result. + bool CallPred(Func* pred_func, const int numvals, ...); + + // Get a hashkey for a set of threading::Values. + HashKey* HashValues(const int num_elements, const threading::Value* const *vals); + + // Get the memory used by a specific value. + int GetValueLength(const threading::Value* val); + + // Copies the raw data in a specific threading::Value to position + // startpos. + int CopyValue(char *data, const int startpos, const threading::Value* val); + + // Convert Threading::Value to an internal Bro Type (works also with + // Records). 
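+	// A non-present value comes back as a null pointer, which the record
+	// conversion code then leaves as an unset (&optional) field.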
+ Val* ValueToVal(const threading::Value* val, BroType* request_type); + + // Convert Threading::Value to an internal Bro List type. + Val* ValueToIndexVal(int num_fields, const RecordType* type, const threading::Value* const *vals); + + // Converts a threading::value to a record type. Mostly used by + // ValueToVal. + RecordVal* ValueToRecordVal(const threading::Value* const *vals, RecordType *request_type, int* position); + + Val* RecordValToIndexVal(RecordVal *r); + + // Converts a Bro ListVal to a RecordVal given the record type. + RecordVal* ListValToRecordVal(ListVal* list, RecordType *request_type, int* position); + + Stream* FindStream(const string &name); + Stream* FindStream(ReaderFrontend* reader); + + enum StreamType { TABLE_STREAM, EVENT_STREAM }; + + map readers; + + EventHandlerPtr end_of_data; +}; + + +} + +extern input::Manager* input_mgr; + + +#endif /* INPUT_MANAGER_H */ diff --git a/src/input/ReaderBackend.cc b/src/input/ReaderBackend.cc new file mode 100644 index 0000000000..74f5306271 --- /dev/null +++ b/src/input/ReaderBackend.cc @@ -0,0 +1,330 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include "ReaderBackend.h" +#include "ReaderFrontend.h" +#include "Manager.h" + +using threading::Value; +using threading::Field; + +namespace input { + +class PutMessage : public threading::OutputMessage { +public: + PutMessage(ReaderFrontend* reader, Value* *val) + : threading::OutputMessage("Put", reader), + val(val) {} + + virtual bool Process() + { + input_mgr->Put(Object(), val); + return true; + } + +private: + Value* *val; +}; + +class DeleteMessage : public threading::OutputMessage { +public: + DeleteMessage(ReaderFrontend* reader, Value* *val) + : threading::OutputMessage("Delete", reader), + val(val) {} + + virtual bool Process() + { + return input_mgr->Delete(Object(), val); + } + +private: + Value* *val; +}; + +class ClearMessage : public threading::OutputMessage { +public: + ClearMessage(ReaderFrontend* reader) + : threading::OutputMessage("Clear", reader) {} + + virtual bool Process() + { + input_mgr->Clear(Object()); + return true; + } + +private: +}; + +class SendEventMessage : public threading::OutputMessage { +public: + SendEventMessage(ReaderFrontend* reader, const char* name, const int num_vals, Value* *val) + : threading::OutputMessage("SendEvent", reader), + name(copy_string(name)), num_vals(num_vals), val(val) {} + + virtual ~SendEventMessage() { delete [] name; } + + virtual bool Process() + { + bool success = input_mgr->SendEvent(name, num_vals, val); + + if ( ! success ) + reporter->Error("SendEvent for event %s failed", name); + + return true; // We do not want to die if sendEvent fails because the event did not return. 
+ } + +private: + const char* name; + const int num_vals; + Value* *val; +}; + +class SendEntryMessage : public threading::OutputMessage { +public: + SendEntryMessage(ReaderFrontend* reader, Value* *val) + : threading::OutputMessage("SendEntry", reader), + val(val) { } + + virtual bool Process() + { + input_mgr->SendEntry(Object(), val); + return true; + } + +private: + Value* *val; +}; + +class EndCurrentSendMessage : public threading::OutputMessage { +public: + EndCurrentSendMessage(ReaderFrontend* reader) + : threading::OutputMessage("EndCurrentSend", reader) {} + + virtual bool Process() + { + input_mgr->EndCurrentSend(Object()); + return true; + } + +private: +}; + +class EndOfDataMessage : public threading::OutputMessage { +public: + EndOfDataMessage(ReaderFrontend* reader) + : threading::OutputMessage("EndOfData", reader) {} + + virtual bool Process() + { + input_mgr->SendEndOfData(Object()); + return true; + } + +private: +}; + +class ReaderClosedMessage : public threading::OutputMessage { +public: + ReaderClosedMessage(ReaderFrontend* reader) + : threading::OutputMessage("ReaderClosed", reader) {} + + virtual bool Process() + { + Object()->SetDisable(); + return input_mgr->RemoveStreamContinuation(Object()); + } + +private: +}; + + +class DisableMessage : public threading::OutputMessage +{ +public: + DisableMessage(ReaderFrontend* writer) + : threading::OutputMessage("Disable", writer) {} + + virtual bool Process() + { + Object()->SetDisable(); + // And - because we do not need disabled objects any more - + // there is no way to re-enable them, so simply delete them. + // This avoids the problem of having to periodically check if + // there are any disabled readers out there. As soon as a + // reader disables itself, it deletes itself. + input_mgr->RemoveStream(Object()); + return true; + } +}; + +using namespace logging; + +ReaderBackend::ReaderBackend(ReaderFrontend* arg_frontend) : MsgThread() + { + disabled = true; // disabled will be set correcty in init. + frontend = arg_frontend; + info = new ReaderInfo(frontend->Info()); + + SetName(frontend->Name()); + } + +ReaderBackend::~ReaderBackend() + { + delete info; + } + +void ReaderBackend::Put(Value* *val) + { + SendOut(new PutMessage(frontend, val)); + } + +void ReaderBackend::Delete(Value* *val) + { + SendOut(new DeleteMessage(frontend, val)); + } + +void ReaderBackend::Clear() + { + SendOut(new ClearMessage(frontend)); + } + +void ReaderBackend::SendEvent(const char* name, const int num_vals, Value* *vals) + { + SendOut(new SendEventMessage(frontend, name, num_vals, vals)); + } + +void ReaderBackend::EndCurrentSend() + { + SendOut(new EndCurrentSendMessage(frontend)); + } + +void ReaderBackend::EndOfData() + { + SendOut(new EndOfDataMessage(frontend)); + } + +void ReaderBackend::SendEntry(Value* *vals) + { + SendOut(new SendEntryMessage(frontend, vals)); + } + +bool ReaderBackend::Init(const int arg_num_fields, + const threading::Field* const* arg_fields) + { + if ( Failed() ) + return true; + + num_fields = arg_num_fields; + fields = arg_fields; + + // disable if DoInit returns error. + int success = DoInit(*info, arg_num_fields, arg_fields); + + if ( ! success ) + { + Error("Init failed"); + DisableFrontend(); + } + + disabled = !success; + + return success; + } + +bool ReaderBackend::OnFinish(double network_time) + { + if ( ! Failed() ) + DoClose(); + + disabled = true; // frontend disables itself when it gets the Close-message. 
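+	// Tell the manager on the main thread that this reader is done; because
+	// this message is queued behind any data messages still pending for the
+	// stream, RemoveStreamContinuation() only runs once those have been
+	// processed.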
+ SendOut(new ReaderClosedMessage(frontend)); + + if ( fields != 0 ) + { + for ( unsigned int i = 0; i < num_fields; i++ ) + delete(fields[i]); + + delete [] (fields); + fields = 0; + } + + return true; + } + +bool ReaderBackend::Update() + { + if ( disabled ) + return false; + + if ( Failed() ) + return true; + + bool success = DoUpdate(); + if ( ! success ) + DisableFrontend(); + + return success; + } + +void ReaderBackend::DisableFrontend() + { + // We also set disabled here, because there still may be other + // messages queued and we will dutifully ignore these from now. + disabled = true; + SendOut(new DisableMessage(frontend)); + } + +bool ReaderBackend::OnHeartbeat(double network_time, double current_time) + { + if ( Failed() ) + return true; + + return DoHeartbeat(network_time, current_time); + } + +TransportProto ReaderBackend::StringToProto(const string &proto) + { + if ( proto == "unknown" ) + return TRANSPORT_UNKNOWN; + else if ( proto == "tcp" ) + return TRANSPORT_TCP; + else if ( proto == "udp" ) + return TRANSPORT_UDP; + else if ( proto == "icmp" ) + return TRANSPORT_ICMP; + + Error(Fmt("Tried to parse invalid/unknown protocol: %s", proto.c_str())); + + return TRANSPORT_UNKNOWN; + } + + +// More or less verbose copy from IPAddr.cc -- which uses reporter. +Value::addr_t ReaderBackend::StringToAddr(const string &s) + { + Value::addr_t val; + + if ( s.find(':') == std::string::npos ) // IPv4. + { + val.family = IPv4; + + if ( inet_aton(s.c_str(), &(val.in.in4)) <= 0 ) + { + Error(Fmt("Bad address: %s", s.c_str())); + memset(&val.in.in4.s_addr, 0, sizeof(val.in.in4.s_addr)); + } + } + + else + { + val.family = IPv6; + if ( inet_pton(AF_INET6, s.c_str(), val.in.in6.s6_addr) <=0 ) + { + Error(Fmt("Bad address: %s", s.c_str())); + memset(val.in.in6.s6_addr, 0, sizeof(val.in.in6.s6_addr)); + } + } + + return val; + } + +} diff --git a/src/input/ReaderBackend.h b/src/input/ReaderBackend.h new file mode 100644 index 0000000000..9fd6c06aa3 --- /dev/null +++ b/src/input/ReaderBackend.h @@ -0,0 +1,348 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef INPUT_READERBACKEND_H +#define INPUT_READERBACKEND_H + +#include "BroString.h" + +#include "threading/SerialTypes.h" +#include "threading/MsgThread.h" + +namespace input { + +/** + * The modes a reader can be in. + */ +enum ReaderMode { + /** + * Manual refresh reader mode. The reader will read the file once, + * and send all read data back to the manager. After that, no automatic + * refresh should happen. Manual refreshes can be triggered from the + * scripting layer using force_update. + */ + MODE_MANUAL, + + /** + * Automatic rereading mode. The reader should monitor the + * data source for changes continually. When the data source changes, + * either the whole file has to be resent using the SendEntry/EndCurrentSend functions. + */ + MODE_REREAD, + + /** + * Streaming reading mode. The reader should monitor the data source + * for new appended data. When new data is appended is has to be sent + * using the Put api functions. + */ + MODE_STREAM, + + /** Internal dummy mode for initialization. */ + MODE_NONE +}; + +class ReaderFrontend; + +/** + * Base class for reader implementation. When the input:Manager creates a new + * input stream, it instantiates a ReaderFrontend. That then in turn creates + * a ReaderBackend of the right type. The frontend then forwards messages + * over the backend as its methods are called. 
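+ *
+ * As a rough sketch of the flow for a table stream: the script layer calls
+ * the corresponding BiF, Manager::CreateTableStream() builds a
+ * ReaderFrontend, the frontend obtains a backend via Manager::CreateBackend()
+ * and starts its thread, and the backend's DoInit()/DoUpdate() results travel
+ * back to the manager as SendEntry/EndCurrentSend messages.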
+ * + * All methods must be called only from the corresponding child thread (the + * constructor is the one exception.) + */ +class ReaderBackend : public threading::MsgThread { +public: + /** + * Constructor. + * + * @param frontend The frontend reader that created this backend. The + * *only* purpose of this value is to be passed back via messages as + * an argument to callbacks. One must not otherwise access the + * frontend, it's running in a different thread. + */ + ReaderBackend(ReaderFrontend* frontend); + + /** + * Destructor. + */ + virtual ~ReaderBackend(); + + /** + * A struct passing information to the reader at initialization time. + */ + struct ReaderInfo + { + // Structure takes ownership of the strings. + typedef std::map config_map; + + /** + * A string left to the interpretation of the reader + * implementation; it corresponds to the value configured on + * the script-level for the logging filter. + * + * Structure takes ownership of the string. + */ + const char* source; + + /** + * A map of key/value pairs corresponding to the relevant + * filter's "config" table. + */ + config_map config; + + /** + * The opening mode for the input source. + */ + ReaderMode mode; + + ReaderInfo() + { + source = 0; + mode = MODE_NONE; + } + + ReaderInfo(const ReaderInfo& other) + { + source = other.source ? copy_string(other.source) : 0; + mode = other.mode; + + for ( config_map::const_iterator i = other.config.begin(); i != other.config.end(); i++ ) + config.insert(std::make_pair(copy_string(i->first), copy_string(i->second))); + } + + ~ReaderInfo() + { + delete [] source; + + for ( config_map::iterator i = config.begin(); i != config.end(); i++ ) + { + delete [] i->first; + delete [] i->second; + } + } + + private: + const ReaderInfo& operator=(const ReaderInfo& other); // Disable. + }; + + /** + * One-time initialization of the reader to define the input source. + * + * @param @param info Meta information for the writer. + * + * @param num_fields Number of fields contained in \a fields. + * + * @param fields The types and names of the fields to be retrieved + * from the input source. + * + * @param config A string map containing additional configuration options + * for the reader. + * + * @return False if an error occured. + */ + bool Init(int num_fields, const threading::Field* const* fields); + + /** + * Force trigger an update of the input stream. The action that will + * be taken depends on the current read mode and the individual input + * backend. + * + * An backend can choose to ignore this. + * + * @return False if an error occured. + */ + bool Update(); + + /** + * Disables the frontend that has instantiated this backend. Once + * disabled, the frontend will not send any further message over. + */ + void DisableFrontend(); + + /** + * Returns the log fields as passed into the constructor. + */ + const threading::Field* const * Fields() const { return fields; } + + /** + * Returns the additional reader information into the constructor. + */ + const ReaderInfo& Info() const { return *info; } + + /** + * Returns the number of log fields as passed into the constructor. + */ + int NumFields() const { return num_fields; } + + // Overridden from MsgThread. + virtual bool OnHeartbeat(double network_time, double current_time); + virtual bool OnFinish(double network_time); + +protected: + // Methods that have to be overwritten by the individual readers + + /** + * Reader-specific intialization method. 
Note that data may only be + * read from the input source after the Init() function has been + * called. + * + * A reader implementation must override this method. If it returns + * false, it will be assumed that a fatal error has occured that + * prevents the reader from further operation; it will then be + * disabled and eventually deleted. When returning false, an + * implementation should also call Error() to indicate what happened. + * + * Arguments are the same as Init(). + * + * Note that derived classes don't need to store the values passed in + * here if other methods need them to; the \a ReaderBackend class + * provides accessor methods to get them later, and they are passed + * in here only for convinience. + */ + virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields) = 0; + + /** + * Reader-specific method implementing input finalization at + * termination. + * + * A reader implementation must override this method but it can just + * ignore calls if an input source can't actually be closed. + * + * After the method is called, the writer will be deleted. If an + * error occurs during shutdown, an implementation should also call + * Error() to indicate what happened. + */ + virtual void DoClose() = 0; + + /** + * Reader-specific method implementing the forced update trigger. + * + * A reader implementation must override this method but it can just + * ignore calls if a forced update does not fit the input source or + * the current input reading mode. + * + * If it returns false, it will be assumed that a fatal error has + * occured that prevents the reader from further operation; it will + * then be disabled and eventually deleted. When returning false, an + * implementation should also call Error to indicate what happened. + */ + virtual bool DoUpdate() = 0; + + /** + * Triggered by regular heartbeat messages from the main thread. + */ + virtual bool DoHeartbeat(double network_time, double current_time) = 0; + + /** + * Method allowing a reader to send a specified Bro event. Vals must + * match the values expected by the bro event. + * + * @param name name of the bro event to send + * + * @param num_vals number of entries in \a vals + * + * @param vals the values to be given to the event + */ + void SendEvent(const char* name, const int num_vals, threading::Value* *vals); + + // Content-sending-functions (simple mode). Include table-specific + // functionality that simply is not used if we have no table. + + /** + * Method allowing a reader to send a list of values read from a + * specific stream back to the manager in simple mode. + * + * If the stream is a table stream, the values are inserted into the + * table; if it is an event stream, the event is raised. + * + * @param val Array of threading::Values expected by the stream. The + * array must have exactly NumEntries() elements. + */ + void Put(threading::Value** val); + + /** + * Method allowing a reader to delete a specific value from a Bro + * table. + * + * If the receiving stream is an event stream, only a removed event + * is raised. + * + * @param val Array of threading::Values expected by the stream. The + * array must have exactly NumEntries() elements. + */ + void Delete(threading::Value** val); + + /** + * Method allowing a reader to clear a Bro table. + * + * If the receiving stream is an event stream, this is ignored. + * + */ + void Clear(); + + /** + * Method telling the manager that we finished reading the current + * data source. 
Will trigger an end_of_data event. + * + * Note: When using SendEntry as the tracking mode this is triggered + * automatically by EndCurrentSend(). Only use if not using the + * tracking mode. Otherwise the event will be sent twice. + */ + void EndOfData(); + + // Content-sending-functions (tracking mode): Only changed lines are propagated. + + /** + * Method allowing a reader to send a list of values read from + * specific stream back to the manager in tracking mode. + * + * If the stream is a table stream, the values are inserted into the + * table; if it is an event stream, the event is raised. + * + * @param val Array of threading::Values expected by the stream. The + * array must have exactly NumEntries() elements. + */ + void SendEntry(threading::Value** vals); + + /** + * Method telling the manager, that the current list of entries sent + * by SendEntry is finished. + * + * For table streams, all entries that were not updated since the + * last EndCurrentSend will be deleted, because they are no longer + * present in the input source + */ + void EndCurrentSend(); + + /** + * Convert a string into a TransportProto. This is just a utility + * function for Readers. + * + * @param proto the transport protocol + */ + TransportProto StringToProto(const string &proto); + + /** + * Convert a string into a Value::addr_t. This is just a utility + * function for Readers. + * + * @param addr containing an ipv4 or ipv6 address + */ + threading::Value::addr_t StringToAddr(const string &addr); + +private: + // Frontend that instantiated us. This object must not be accessed + // from this class, it's running in a different thread! + ReaderFrontend* frontend; + + ReaderInfo* info; + unsigned int num_fields; + const threading::Field* const * fields; // raw mapping + + bool disabled; +}; + +} + +#endif /* INPUT_READERBACKEND_H */ diff --git a/src/input/ReaderFrontend.cc b/src/input/ReaderFrontend.cc new file mode 100644 index 0000000000..a8528c002d --- /dev/null +++ b/src/input/ReaderFrontend.cc @@ -0,0 +1,94 @@ +// See the file "COPYING" in the main distribution directory for copyright. 
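+//
+// The frontend methods that talk to the backend (Init(), Update()) wrap
+// their arguments into a threading::InputMessage subclass and queue it to
+// the backend thread; the message's Process() then runs the corresponding
+// ReaderBackend method inside that thread.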
+ +#include "Manager.h" +#include "ReaderFrontend.h" +#include "ReaderBackend.h" + +#include "threading/MsgThread.h" + +namespace input { + +class InitMessage : public threading::InputMessage +{ +public: + InitMessage(ReaderBackend* backend, + const int num_fields, const threading::Field* const* fields) + : threading::InputMessage("Init", backend), + num_fields(num_fields), fields(fields) { } + + virtual bool Process() + { + return Object()->Init(num_fields, fields); + } + +private: + const int num_fields; + const threading::Field* const* fields; +}; + +class UpdateMessage : public threading::InputMessage +{ +public: + UpdateMessage(ReaderBackend* backend) + : threading::InputMessage("Update", backend) + { } + + virtual bool Process() { return Object()->Update(); } +}; + +ReaderFrontend::ReaderFrontend(const ReaderBackend::ReaderInfo& arg_info, EnumVal* type) + { + disabled = initialized = false; + info = new ReaderBackend::ReaderInfo(arg_info); + + const char* t = type->Type()->AsEnumType()->Lookup(type->InternalInt()); + name = copy_string(fmt("%s/%s", arg_info.source, t)); + + backend = input_mgr->CreateBackend(this, type->InternalInt()); + assert(backend); + backend->Start(); + } + +ReaderFrontend::~ReaderFrontend() + { + delete [] name; + delete info; + } + +void ReaderFrontend::Init(const int arg_num_fields, + const threading::Field* const* arg_fields) + { + if ( disabled ) + return; + + if ( initialized ) + reporter->InternalError("reader initialize twice"); + + num_fields = arg_num_fields; + fields = arg_fields; + initialized = true; + + backend->SendIn(new InitMessage(backend, num_fields, fields)); + } + +void ReaderFrontend::Update() + { + if ( disabled ) + return; + + if ( ! initialized ) + { + reporter->Error("Tried to call update on uninitialized reader"); + return; + } + + backend->SendIn(new UpdateMessage(backend)); + } + +const char* ReaderFrontend::Name() const + { + return name; + } + +} + diff --git a/src/input/ReaderFrontend.h b/src/input/ReaderFrontend.h new file mode 100644 index 0000000000..a93f7703ac --- /dev/null +++ b/src/input/ReaderFrontend.h @@ -0,0 +1,141 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef INPUT_READERFRONTEND_H +#define INPUT_READERFRONTEND_H + +#include "ReaderBackend.h" +#include "threading/MsgThread.h" +#include "threading/SerialTypes.h" + +#include "Val.h" + +namespace input { + +class Manager; + +/** + * Bridge class between the input::Manager and backend input threads. The + * Manager instantiates one \a ReaderFrontend for each open input stream. + * Each frontend in turns instantiates a ReaderBackend-derived class + * internally that's specific to the particular input format. That backend + * spawns a new thread, and it receives messages from the frontend that + * correspond to method called by the manager. + */ +class ReaderFrontend { +public: + /** + * Constructor. + * + * info: The meta information struct for the writer. + * + * type: The backend writer type, with the value corresponding to the + * script-level \c Input::Reader enum (e.g., \a READER_ASCII). The + * frontend will internally instantiate a ReaderBackend of the + * corresponding type. + * + * Frontends must only be instantiated by the main thread. + */ + ReaderFrontend(const ReaderBackend::ReaderInfo& info, EnumVal* type); + + /** + * Destructor. + * + * Frontends must only be destroyed by the main thread. + */ + virtual ~ReaderFrontend(); + + /** + * Initializes the reader. 
+ * + * This method generates a message to the backend reader and triggers + * the corresponding message there. If the backend method fails, it + * sends a message back that will asynchronously call Disable(). + * + * See ReaderBackend::Init() for arguments. + * + * This method must only be called from the main thread. + */ + void Init(const int arg_num_fields, const threading::Field* const* fields); + + /** + * Force an update of the current input source. Actual action depends + * on the opening mode and on the input source. + * + * This method generates a message to the backend reader and triggers + * the corresponding message there. + * + * This method must only be called from the main thread. + */ + void Update(); + + /** + * Finalizes reading from this stream. + * + * This method generates a message to the backend reader and triggers + * the corresponding message there. This method must only be called + * from the main thread. + */ + void Close(); + + /** + * Disables the reader frontend. From now on, all method calls that + * would normally send message over to the backend, turn into no-ops. + * Note though that it does not stop the backend itself, use Finish() + * to do that as well (this method is primarily for use as callback + * when the backend wants to disable the frontend). + * + * Disabled frontends will eventually be discarded by the + * input::Manager. + * + * This method must only be called from the main thread. + */ + void SetDisable() { disabled = true; } + + /** + * Returns true if the reader frontend has been disabled with + * SetDisable(). + */ + bool Disabled() { return disabled; } + + /** + * Returns a descriptive name for the reader, including the type of + * the backend and the source used. + * + * This method is safe to call from any thread. + */ + const char* Name() const; + + /** + * Returns the additional reader information passed into the constructor. + */ + const ReaderBackend::ReaderInfo& Info() const { assert(info); return *info; } + + /** + * Returns the number of log fields as passed into the constructor. + */ + int NumFields() const { return num_fields; } + + /** + * Returns the log fields as passed into the constructor. + */ + const threading::Field* const * Fields() const { return fields; } + +protected: + friend class Manager; + +private: + ReaderBackend* backend; // The backend we have instanatiated. + ReaderBackend::ReaderInfo* info; // Meta information. + const threading::Field* const* fields; // The input fields. + int num_fields; // Information as passed to Init(). + bool disabled; // True if disabled. + bool initialized; // True if initialized. + const char* name; // Descriptive name. +}; + +} + + +#endif /* INPUT_READERFRONTEND_H */ + + diff --git a/src/input/fdstream.h b/src/input/fdstream.h new file mode 100644 index 0000000000..cda767dd52 --- /dev/null +++ b/src/input/fdstream.h @@ -0,0 +1,189 @@ +/* The following code declares classes to read from and write to + * file descriptore or file handles. + * + * See + * http://www.josuttis.com/cppcode + * for details and the latest version. + * + * - open: + * - integrating BUFSIZ on some systems? + * - optimized reading of multiple characters + * - stream for reading AND writing + * - i18n + * + * (C) Copyright Nicolai M. Josuttis 2001. + * Permission to copy, use, modify, sell and distribute this software + * is granted provided this copyright notice appears in all copies. 
+ * This software is provided "as is" without express or implied + * warranty, and with no claim as to its suitability for any purpose. + * + * Version: Jul 28, 2002 + * History: + * Jul 28, 2002: bugfix memcpy() => memmove() + * fdinbuf::underflow(): cast for return statements + * Aug 05, 2001: first public version + */ +#ifndef BOOST_FDSTREAM_HPP +#define BOOST_FDSTREAM_HPP + +#include +#include +#include +// for EOF: +#include +// for memmove(): +#include + + + +// low-level read and write functions +#ifdef _MSC_VER +# include +#else +# include +# include +//extern "C" { +// int write (int fd, const char* buf, int num); +// int read (int fd, char* buf, int num); +//} +#endif + + +// BEGIN namespace BOOST +namespace boost { + + +/************************************************************ + * fdostream + * - a stream that writes on a file descriptor + ************************************************************/ + + +class fdoutbuf : public std::streambuf { + protected: + int fd; // file descriptor + public: + // constructor + fdoutbuf (int _fd) : fd(_fd) { + } + protected: + // write one character + virtual int_type overflow (int_type c) { + if (c != EOF) { + char z = c; + if (write (fd, &z, 1) != 1) { + return EOF; + } + } + return c; + } + // write multiple characters + virtual + std::streamsize xsputn (const char* s, + std::streamsize num) { + return write(fd,s,num); + } +}; + +class fdostream : public std::ostream { + protected: + fdoutbuf buf; + public: + fdostream (int fd) : std::ostream(0), buf(fd) { + rdbuf(&buf); + } +}; + + +/************************************************************ + * fdistream + * - a stream that reads on a file descriptor + ************************************************************/ + +class fdinbuf : public std::streambuf { + protected: + int fd; // file descriptor + protected: + /* data buffer: + * - at most, pbSize characters in putback area plus + * - at most, bufSize characters in ordinary read buffer + */ + static const int pbSize = 4; // size of putback area + static const int bufSize = 1024; // size of the data buffer + char buffer[bufSize+pbSize]; // data buffer + + public: + /* constructor + * - initialize file descriptor + * - initialize empty data buffer + * - no putback area + * => force underflow() + */ + fdinbuf (int _fd) : fd(_fd) { + setg (buffer+pbSize, // beginning of putback area + buffer+pbSize, // read position + buffer+pbSize); // end position + } + + protected: + // insert new characters into the buffer + virtual int_type underflow () { +#ifndef _MSC_VER + using std::memmove; +#endif + + // is read position before end of buffer? 
+ if (gptr() < egptr()) { + return traits_type::to_int_type(*gptr()); + } + + /* process size of putback area + * - use number of characters read + * - but at most size of putback area + */ + int numPutback; + numPutback = gptr() - eback(); + if (numPutback > pbSize) { + numPutback = pbSize; + } + + /* copy up to pbSize characters previously read into + * the putback area + */ + memmove (buffer+(pbSize-numPutback), gptr()-numPutback, + numPutback); + + // read at most bufSize new characters + int num; + num = read (fd, buffer+pbSize, bufSize); + if ( num == EAGAIN ) { + return 0; + } + if (num <= 0) { + // ERROR or EOF + return EOF; + } + + // reset buffer pointers + setg (buffer+(pbSize-numPutback), // beginning of putback area + buffer+pbSize, // read position + buffer+pbSize+num); // end of buffer + + // return next character + return traits_type::to_int_type(*gptr()); + } +}; + +class fdistream : public std::istream { + protected: + fdinbuf buf; + public: + fdistream (int fd) : std::istream(0), buf(fd) { + rdbuf(&buf); + } +}; + + +} // END namespace boost + +#endif /*BOOST_FDSTREAM_HPP*/ diff --git a/src/input/readers/Ascii.cc b/src/input/readers/Ascii.cc new file mode 100644 index 0000000000..e9cba27205 --- /dev/null +++ b/src/input/readers/Ascii.cc @@ -0,0 +1,615 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include "Ascii.h" +#include "NetVar.h" + +#include +#include + +#include "../../threading/SerialTypes.h" + +#include +#include +#include +#include + +using namespace input::reader; +using threading::Value; +using threading::Field; + +FieldMapping::FieldMapping(const string& arg_name, const TypeTag& arg_type, int arg_position) + : name(arg_name), type(arg_type) + { + position = arg_position; + secondary_position = -1; + present = true; + } + +FieldMapping::FieldMapping(const string& arg_name, const TypeTag& arg_type, + const TypeTag& arg_subtype, int arg_position) + : name(arg_name), type(arg_type), subtype(arg_subtype) + { + position = arg_position; + secondary_position = -1; + present = true; + } + +FieldMapping::FieldMapping(const FieldMapping& arg) + : name(arg.name), type(arg.type), subtype(arg.subtype), present(arg.present) + { + position = arg.position; + secondary_position = arg.secondary_position; + } + +FieldMapping FieldMapping::subType() + { + return FieldMapping(name, subtype, position); + } + +Ascii::Ascii(ReaderFrontend *frontend) : ReaderBackend(frontend) + { + file = 0; + + separator.assign( (const char*) BifConst::InputAscii::separator->Bytes(), + BifConst::InputAscii::separator->Len()); + + if ( separator.size() != 1 ) + Error("separator length has to be 1. Separator will be truncated."); + + set_separator.assign( (const char*) BifConst::InputAscii::set_separator->Bytes(), + BifConst::InputAscii::set_separator->Len()); + + if ( set_separator.size() != 1 ) + Error("set_separator length has to be 1. Separator will be truncated."); + + empty_field.assign( (const char*) BifConst::InputAscii::empty_field->Bytes(), + BifConst::InputAscii::empty_field->Len()); + + unset_field.assign( (const char*) BifConst::InputAscii::unset_field->Bytes(), + BifConst::InputAscii::unset_field->Len()); +} + +Ascii::~Ascii() + { + DoClose(); + } + +void Ascii::DoClose() + { + if ( file != 0 ) + { + file->close(); + delete(file); + file = 0; + } + } + +bool Ascii::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields) + { + mtime = 0; + + file = new ifstream(info.source); + if ( ! 
file->is_open() ) + { + Error(Fmt("Init: cannot open %s", info.source)); + delete(file); + file = 0; + return false; + } + + if ( ReadHeader(false) == false ) + { + Error(Fmt("Init: cannot open %s; headers are incorrect", info.source)); + file->close(); + delete(file); + file = 0; + return false; + } + + DoUpdate(); + + return true; + } + + +bool Ascii::ReadHeader(bool useCached) + { + // try to read the header line... + string line; + map ifields; + + if ( ! useCached ) + { + if ( ! GetLine(line) ) + { + Error("could not read first line"); + return false; + } + + headerline = line; + } + + else + line = headerline; + + // construct list of field names. + istringstream splitstream(line); + int pos=0; + while ( splitstream ) + { + string s; + if ( ! getline(splitstream, s, separator[0])) + break; + + ifields[s] = pos; + pos++; + } + + // printf("Updating fields from description %s\n", line.c_str()); + columnMap.clear(); + + for ( int i = 0; i < NumFields(); i++ ) + { + const Field* field = Fields()[i]; + + map::iterator fit = ifields.find(field->name); + if ( fit == ifields.end() ) + { + if ( field->optional ) + { + // we do not really need this field. mark it as not present and always send an undef back. + FieldMapping f(field->name, field->type, field->subtype, -1); + f.present = false; + columnMap.push_back(f); + continue; + } + + Error(Fmt("Did not find requested field %s in input data file %s.", + field->name, Info().source)); + return false; + } + + + FieldMapping f(field->name, field->type, field->subtype, ifields[field->name]); + + if ( field->secondary_name && strlen(field->secondary_name) != 0 ) + { + map::iterator fit2 = ifields.find(field->secondary_name); + if ( fit2 == ifields.end() ) + { + Error(Fmt("Could not find requested port type field %s in input data file.", + field->secondary_name)); + return false; + } + + f.secondary_position = ifields[field->secondary_name]; + } + + columnMap.push_back(f); + } + + + // well, that seems to have worked... + return true; + } + +bool Ascii::GetLine(string& str) + { + while ( getline(*file, str) ) + { + if ( str[0] != '#' ) + return true; + + if ( ( str.length() > 8 ) && ( str.compare(0,7, "#fields") == 0 ) && ( str[7] == separator[0] ) ) + { + str = str.substr(8); + return true; + } + } + + return false; + } + +bool Ascii::CheckNumberError(const string& s, const char * end) + { + // Do this check first, before executing s.c_str() or similar. + // otherwise the value to which *end is pointing at the moment might + // be gone ... + bool endnotnull = (*end != '\0'); + + if ( s.length() == 0 ) + { + Error("Got empty string for number field"); + return true; + } + + if ( end == s.c_str() ) { + Error(Fmt("String '%s' contained no parseable number", s.c_str())); + return true; + } + + if ( endnotnull ) + Warning(Fmt("Number '%s' contained non-numeric trailing characters. Ignored trailing characters '%s'", s.c_str(), end)); + + if ( errno == EINVAL ) + { + Error(Fmt("String '%s' could not be converted to a number", s.c_str())); + return true; + } + + else if ( errno == ERANGE ) + { + Error(Fmt("Number '%s' out of supported range.", s.c_str())); + return true; + } + + return false; + } + + +Value* Ascii::EntryToVal(string s, FieldMapping field) + { + if ( s.compare(unset_field) == 0 ) // field is not set... 
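+		// (i.e. the text matched the configured unset_field placeholder; the
+		// value is reported back as not present rather than as an empty
+		// string.)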
+ return new Value(field.type, false); + + Value* val = new Value(field.type, true); + char* end = 0; + errno = 0; + + switch ( field.type ) { + case TYPE_ENUM: + case TYPE_STRING: + s = get_unescaped_string(s); + val->val.string_val.length = s.size(); + val->val.string_val.data = copy_string(s.c_str()); + break; + + case TYPE_BOOL: + if ( s == "T" ) + val->val.int_val = 1; + else if ( s == "F" ) + val->val.int_val = 0; + else + { + Error(Fmt("Field: %s Invalid value for boolean: %s", + field.name.c_str(), s.c_str())); + return 0; + } + break; + + case TYPE_INT: + val->val.int_val = strtoll(s.c_str(), &end, 10); + if ( CheckNumberError(s, end) ) + return 0; + break; + + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + val->val.double_val = strtod(s.c_str(), &end); + if ( CheckNumberError(s, end) ) + return 0; + break; + + case TYPE_COUNT: + case TYPE_COUNTER: + val->val.uint_val = strtoull(s.c_str(), &end, 10); + if ( CheckNumberError(s, end) ) + return 0; + break; + + case TYPE_PORT: + val->val.port_val.port = strtoull(s.c_str(), &end, 10); + if ( CheckNumberError(s, end) ) + return 0; + + val->val.port_val.proto = TRANSPORT_UNKNOWN; + break; + + case TYPE_SUBNET: + { + s = get_unescaped_string(s); + size_t pos = s.find("/"); + if ( pos == s.npos ) + { + Error(Fmt("Invalid value for subnet: %s", s.c_str())); + return 0; + } + + uint8_t width = (uint8_t) strtol(s.substr(pos+1).c_str(), &end, 10); + + if ( CheckNumberError(s, end) ) + return 0; + + string addr = s.substr(0, pos); + + val->val.subnet_val.prefix = StringToAddr(addr); + val->val.subnet_val.length = width; + break; + } + + case TYPE_ADDR: + s = get_unescaped_string(s); + val->val.addr_val = StringToAddr(s); + break; + + case TYPE_TABLE: + case TYPE_VECTOR: + // First - common initialization + // Then - initialization for table. + // Then - initialization for vector. + // Then - common stuff + { + // how many entries do we have... + unsigned int length = 1; + for ( unsigned int i = 0; i < s.size(); i++ ) + { + if ( s[i] == set_separator[0] ) + length++; + } + + unsigned int pos = 0; + + if ( s.compare(empty_field) == 0 ) + length = 0; + + Value** lvals = new Value* [length]; + + if ( field.type == TYPE_TABLE ) + { + val->val.set_val.vals = lvals; + val->val.set_val.size = length; + } + + else if ( field.type == TYPE_VECTOR ) + { + val->val.vector_val.vals = lvals; + val->val.vector_val.size = length; + } + + else + assert(false); + + if ( length == 0 ) + break; //empty + + istringstream splitstream(s); + while ( splitstream ) + { + string element; + + if ( ! getline(splitstream, element, set_separator[0]) ) + break; + + if ( pos >= length ) + { + Error(Fmt("Internal error while parsing set. pos %d >= length %d." + " Element: %s", pos, length, element.c_str())); + break; + } + + Value* newval = EntryToVal(element, field.subType()); + if ( newval == 0 ) + { + Error("Error while reading set"); + return 0; + } + + lvals[pos] = newval; + + pos++; + } + + // Test if the string ends with a set_separator... or if the + // complete string is empty. In either of these cases we have + // to push an empty val on top of it. 
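+		// (For example, with set_separator "," the input "a,b," was counted
+		// as three elements above, but the getline() loop only produced "a"
+		// and "b"; the trailing empty element is appended here.)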
+ if ( s.empty() || *s.rbegin() == set_separator[0] ) + { + lvals[pos] = EntryToVal("", field.subType()); + if ( lvals[pos] == 0 ) + { + Error("Error while trying to add empty set element"); + return 0; + } + + pos++; + } + + if ( pos != length ) + { + Error(Fmt("Internal error while parsing set: did not find all elements: %s", s.c_str())); + return 0; + } + + break; + } + + default: + Error(Fmt("unsupported field format %d for %s", field.type, + field.name.c_str())); + return 0; + } + + return val; + } + +// read the entire file and send appropriate thingies back to InputMgr +bool Ascii::DoUpdate() + { + switch ( Info().mode ) { + case MODE_REREAD: + { + // check if the file has changed + struct stat sb; + if ( stat(Info().source, &sb) == -1 ) + { + Error(Fmt("Could not get stat for %s", Info().source)); + return false; + } + + if ( sb.st_mtime <= mtime ) // no change + return true; + + mtime = sb.st_mtime; + // file changed. reread. + + // fallthrough + } + + case MODE_MANUAL: + case MODE_STREAM: + { + // dirty, fix me. (well, apparently after trying seeking, etc + // - this is not that bad) + if ( file && file->is_open() ) + { + if ( Info().mode == MODE_STREAM ) + { + file->clear(); // remove end of file evil bits + if ( !ReadHeader(true) ) + return false; // header reading failed + + break; + } + + file->close(); + delete file; + file = 0; + } + + file = new ifstream(Info().source); + if ( ! file->is_open() ) + { + Error(Fmt("cannot open %s", Info().source)); + return false; + } + + if ( ReadHeader(false) == false ) + { + return false; + } + + break; + } + + default: + assert(false); + + } + + string line; + while ( GetLine(line ) ) + { + // split on tabs + bool error = false; + istringstream splitstream(line); + + map stringfields; + int pos = 0; + while ( splitstream ) + { + string s; + if ( ! getline(splitstream, s, separator[0]) ) + break; + + stringfields[pos] = s; + pos++; + } + + pos--; // for easy comparisons of max element. + + Value** fields = new Value*[NumFields()]; + + int fpos = 0; + for ( vector::iterator fit = columnMap.begin(); + fit != columnMap.end(); + fit++ ) + { + + if ( ! fit->present ) + { + // add non-present field + fields[fpos] = new Value((*fit).type, false); + fpos++; + continue; + } + + assert(fit->position >= 0 ); + + if ( (*fit).position > pos || (*fit).secondary_position > pos ) + { + Error(Fmt("Not enough fields in line %s. Found %d fields, want positions %d and %d", + line.c_str(), pos, (*fit).position, (*fit).secondary_position)); + return false; + } + + Value* val = EntryToVal(stringfields[(*fit).position], *fit); + if ( val == 0 ) + { + Error(Fmt("Could not convert line '%s' to Val. Ignoring line.", line.c_str())); + error = true; + break; + } + + if ( (*fit).secondary_position != -1 ) + { + // we have a port definition :) + assert(val->type == TYPE_PORT ); + // Error(Fmt("Got type %d != PORT with secondary position!", val->type)); + + val->val.port_val.proto = StringToProto(stringfields[(*fit).secondary_position]); + } + + fields[fpos] = val; + + fpos++; + } + + if ( error ) + { + // Encountered non-fatal error, ignoring line. But + // first, delete all successfully read fields and the + // array structure. 
+ + for ( int i = 0; i < fpos; i++ ) + delete fields[fpos]; + + delete [] fields; + continue; + } + + //printf("fpos: %d, second.num_fields: %d\n", fpos, (*it).second.num_fields); + assert ( fpos == NumFields() ); + + if ( Info().mode == MODE_STREAM ) + Put(fields); + else + SendEntry(fields); + } + + if ( Info().mode != MODE_STREAM ) + EndCurrentSend(); + + return true; + } + +bool Ascii::DoHeartbeat(double network_time, double current_time) +{ + switch ( Info().mode ) { + case MODE_MANUAL: + // yay, we do nothing :) + break; + + case MODE_REREAD: + case MODE_STREAM: + Update(); // call update and not DoUpdate, because update + // checks disabled. + break; + + default: + assert(false); + } + + return true; + } + diff --git a/src/input/readers/Ascii.h b/src/input/readers/Ascii.h new file mode 100644 index 0000000000..6e693fc74b --- /dev/null +++ b/src/input/readers/Ascii.h @@ -0,0 +1,73 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef INPUT_READERS_ASCII_H +#define INPUT_READERS_ASCII_H + +#include +#include + +#include "../ReaderBackend.h" + +namespace input { namespace reader { + +// Description for input field mapping. +struct FieldMapping { + string name; + TypeTag type; + TypeTag subtype; // internal type for sets and vectors + int position; + int secondary_position; // for ports: pos of the second field + bool present; + + FieldMapping(const string& arg_name, const TypeTag& arg_type, int arg_position); + FieldMapping(const string& arg_name, const TypeTag& arg_type, const TypeTag& arg_subtype, int arg_position); + FieldMapping(const FieldMapping& arg); + FieldMapping() { position = -1; secondary_position = -1; } + + FieldMapping subType(); +}; + +/** + * Reader for structured ASCII files. + */ +class Ascii : public ReaderBackend { +public: + Ascii(ReaderFrontend* frontend); + ~Ascii(); + + static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Ascii(frontend); } + +protected: + virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields); + virtual void DoClose(); + virtual bool DoUpdate(); + virtual bool DoHeartbeat(double network_time, double current_time); + +private: + + bool ReadHeader(bool useCached); + bool GetLine(string& str); + threading::Value* EntryToVal(string s, FieldMapping type); + bool CheckNumberError(const string& s, const char * end); + + ifstream* file; + time_t mtime; + + // map columns in the file to columns to send back to the manager + vector columnMap; + + // keep a copy of the headerline to determine field locations when stream descriptions change + string headerline; + + // options set from the script-level. + string separator; + string set_separator; + string empty_field; + string unset_field; +}; + + +} +} + +#endif /* INPUT_READERS_ASCII_H */ diff --git a/src/input/readers/Benchmark.cc b/src/input/readers/Benchmark.cc new file mode 100644 index 0000000000..b8cec0f14d --- /dev/null +++ b/src/input/readers/Benchmark.cc @@ -0,0 +1,266 @@ +// See the file "COPYING" in the main distribution directory for copyright. 
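+//
+// The benchmark reader treats its "source" string as a line count: each
+// heartbeat it generates roughly that many rows of random values (scaled by
+// the heartbeat interval) and pushes them through the regular
+// Put()/SendEntry() paths, so the input framework itself is what gets
+// measured.
+//
+// Minimal sketch of script-level use -- the record type and event name are
+// placeholders defined by whatever test drives the reader; only the reader
+// enum and the numeric source string follow this file:
+//
+//   type BenchLine: record { i: int; s: string; };
+//   Input::add_event([$source="100000", $reader=Input::READER_BENCHMARK,
+//                     $mode=Input::STREAM, $name="bench",
+//                     $fields=BenchLine, $ev=bench_line]);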
+ +#include "Benchmark.h" +#include "NetVar.h" + +#include "../../threading/SerialTypes.h" + +#include +#include +#include + +#include "../../threading/Manager.h" + +using namespace input::reader; +using threading::Value; +using threading::Field; + +Benchmark::Benchmark(ReaderFrontend *frontend) : ReaderBackend(frontend) + { + multiplication_factor = double(BifConst::InputBenchmark::factor); + autospread = double(BifConst::InputBenchmark::autospread); + spread = int(BifConst::InputBenchmark::spread); + add = int(BifConst::InputBenchmark::addfactor); + autospread_time = 0; + stopspreadat = int(BifConst::InputBenchmark::stopspreadat); + timedspread = double(BifConst::InputBenchmark::timedspread); + heartbeat_interval = double(BifConst::Threading::heartbeat_interval); + } + +Benchmark::~Benchmark() + { + DoClose(); + } + +void Benchmark::DoClose() + { + } + +bool Benchmark::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields) + { + num_lines = atoi(info.source); + + if ( autospread != 0.0 ) + autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) ); + + heartbeatstarttime = CurrTime(); + DoUpdate(); + + return true; + } + +string Benchmark::RandomString(const int len) + { + string s(len, ' '); + + static const char values[] = + "0123456789!@#$%^&*()-_=+{}[]\\|" + "ABCDEFGHIJKLMNOPQRSTUVWXYZ" + "abcdefghijklmnopqrstuvwxyz"; + + for (int i = 0; i < len; ++i) + s[i] = values[random() / (RAND_MAX / sizeof(values))]; + + return s; + } + +double Benchmark::CurrTime() + { + struct timeval tv; + assert ( gettimeofday(&tv, 0) >= 0 ); + + return double(tv.tv_sec) + double(tv.tv_usec) / 1e6; + } + + +// read the entire file and send appropriate thingies back to InputMgr +bool Benchmark::DoUpdate() + { + int linestosend = num_lines * heartbeat_interval; + for ( int i = 0; i < linestosend; i++ ) + { + Value** field = new Value*[NumFields()]; + for (int j = 0; j < NumFields(); j++ ) + field[j] = EntryToVal(Fields()[j]->type, Fields()[j]->subtype); + + if ( Info().mode == MODE_STREAM ) + // do not do tracking, spread out elements over the second that we have... + Put(field); + else + SendEntry(field); + + if ( stopspreadat == 0 || num_lines < stopspreadat ) + { + if ( spread != 0 ) + usleep(spread); + + if ( autospread_time != 0 ) + usleep( autospread_time ); + } + + if ( timedspread != 0.0 ) + { + double diff; + do + diff = CurrTime() - heartbeatstarttime; + while ( diff/heartbeat_interval < i/(linestosend + + (linestosend * timedspread) ) ); + } + + } + + if ( Info().mode != MODE_STREAM ) + EndCurrentSend(); + + return true; +} + +threading::Value* Benchmark::EntryToVal(TypeTag type, TypeTag subtype) + { + Value* val = new Value(type, true); + + // basically construct something random from the fields that we want. + + switch ( type ) { + case TYPE_ENUM: + assert(false); // no enums, please. + + case TYPE_STRING: + { + string rnd = RandomString(10); + val->val.string_val.data = copy_string(rnd.c_str()); + val->val.string_val.length = rnd.size(); + break; + } + + case TYPE_BOOL: + val->val.int_val = 1; // we never lie. 
+ break; + + case TYPE_INT: + val->val.int_val = random(); + break; + + case TYPE_TIME: + val->val.double_val = CurrTime(); + break; + + case TYPE_DOUBLE: + case TYPE_INTERVAL: + val->val.double_val = random(); + break; + + case TYPE_COUNT: + case TYPE_COUNTER: + val->val.uint_val = random(); + break; + + case TYPE_PORT: + val->val.port_val.port = random() / (RAND_MAX / 60000); + val->val.port_val.proto = TRANSPORT_UNKNOWN; + break; + + case TYPE_SUBNET: + { + val->val.subnet_val.prefix = StringToAddr("192.168.17.1"); + val->val.subnet_val.length = 16; + } + break; + + case TYPE_ADDR: + val->val.addr_val = StringToAddr("192.168.17.1"); + break; + + case TYPE_TABLE: + case TYPE_VECTOR: + // First - common initialization + // Then - initialization for table. + // Then - initialization for vector. + // Then - common stuff + { + // how many entries do we have... + unsigned int length = random() / (RAND_MAX / 15); + + Value** lvals = new Value* [length]; + + if ( type == TYPE_TABLE ) + { + val->val.set_val.vals = lvals; + val->val.set_val.size = length; + } + else if ( type == TYPE_VECTOR ) + { + val->val.vector_val.vals = lvals; + val->val.vector_val.size = length; + } + else + assert(false); + + if ( length == 0 ) + break; //empty + + for ( unsigned int pos = 0; pos < length; pos++ ) + { + Value* newval = EntryToVal(subtype, TYPE_ENUM); + if ( newval == 0 ) + { + Error("Error while reading set"); + return 0; + } + lvals[pos] = newval; + } + + break; + } + + + default: + Error(Fmt("unsupported field format %d", type)); + return 0; + } + + return val; + + } + + +bool Benchmark::DoHeartbeat(double network_time, double current_time) +{ + num_lines = (int) ( (double) num_lines*multiplication_factor); + num_lines += add; + heartbeatstarttime = CurrTime(); + + switch ( Info().mode ) { + case MODE_MANUAL: + // yay, we do nothing :) + break; + + case MODE_REREAD: + case MODE_STREAM: + if ( multiplication_factor != 1 || add != 0 ) + { + // we have to document at what time we changed the factor to what value. + Value** v = new Value*[2]; + v[0] = new Value(TYPE_COUNT, true); + v[0]->val.uint_val = num_lines; + v[1] = new Value(TYPE_TIME, true); + v[1]->val.double_val = CurrTime(); + + SendEvent("lines_changed", 2, v); + } + + if ( autospread != 0.0 ) + // because executing this in every loop is apparently too expensive. + autospread_time = (int) ( (double) 1000000 / (autospread * (double) num_lines) ); + + Update(); // call update and not DoUpdate, because update actually checks disabled. + + SendEvent("HeartbeatDone", 0, 0); + break; + + default: + assert(false); + } + + return true; +} diff --git a/src/input/readers/Benchmark.h b/src/input/readers/Benchmark.h new file mode 100644 index 0000000000..bab564b12a --- /dev/null +++ b/src/input/readers/Benchmark.h @@ -0,0 +1,47 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef INPUT_READERS_BENCHMARK_H +#define INPUT_READERS_BENCHMARK_H + +#include "../ReaderBackend.h" + +namespace input { namespace reader { + +/** + * A benchmark reader to measure performance of the input framework. 
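 *
 * The stream's source string is parsed as a line count: every update
 * generates num_lines * heartbeat_interval entries filled with random
 * values of the requested field types. The script-level InputBenchmark
 * constants control pacing and growth: "spread" sleeps that many
 * microseconds after each entry, "autospread" sleeps
 * 1000000 / (autospread * num_lines) microseconds per entry (e.g.
 * autospread 1.0 with a source of "1000" gives 1000 us per entry),
 * "stopspreadat" turns those sleeps off once num_lines reaches it,
 * "timedspread" busy-waits to pace entries across the heartbeat
 * interval, and "factor" / "addfactor" grow num_lines on every
 * heartbeat (changes are reported via the "lines_changed" event).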
+ */ +class Benchmark : public ReaderBackend { +public: + Benchmark(ReaderFrontend* frontend); + ~Benchmark(); + + static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Benchmark(frontend); } + +protected: + virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields); + virtual void DoClose(); + virtual bool DoUpdate(); + virtual bool DoHeartbeat(double network_time, double current_time); + +private: + double CurrTime(); + string RandomString(const int len); + threading::Value* EntryToVal(TypeTag Type, TypeTag subtype); + + int num_lines; + double multiplication_factor; + int spread; + double autospread; + int autospread_time; + int add; + int stopspreadat; + double heartbeatstarttime; + double timedspread; + double heartbeat_interval; +}; + + +} +} + +#endif /* INPUT_READERS_BENCHMARK_H */ diff --git a/src/input/readers/Raw.cc b/src/input/readers/Raw.cc new file mode 100644 index 0000000000..ac96e5c0f5 --- /dev/null +++ b/src/input/readers/Raw.cc @@ -0,0 +1,278 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include "Raw.h" +#include "NetVar.h" + +#include +#include + +#include "../../threading/SerialTypes.h" +#include "../fdstream.h" + +#include +#include +#include +#include +#include + +using namespace input::reader; +using threading::Value; +using threading::Field; + +Raw::Raw(ReaderFrontend *frontend) : ReaderBackend(frontend) + { + file = 0; + in = 0; + + separator.assign( (const char*) BifConst::InputRaw::record_separator->Bytes(), + BifConst::InputRaw::record_separator->Len()); + + if ( separator.size() != 1 ) + Error("separator length has to be 1. Separator will be truncated."); + } + +Raw::~Raw() + { + DoClose(); + } + +void Raw::DoClose() + { + if ( file != 0 ) + CloseInput(); + } + +bool Raw::OpenInput() + { + if ( execute ) + { + file = popen(fname.c_str(), "r"); + if ( file == NULL ) + { + Error(Fmt("Could not execute command %s", fname.c_str())); + return false; + } + } + else + { + file = fopen(fname.c_str(), "r"); + if ( file == NULL ) + { + Error(Fmt("Init: cannot open %s", fname.c_str())); + return false; + } + } + + // This is defined in input/fdstream.h + in = new boost::fdistream(fileno(file)); + + if ( execute && Info().mode == MODE_STREAM ) + fcntl(fileno(file), F_SETFL, O_NONBLOCK); + + return true; + } + +bool Raw::CloseInput() + { + if ( file == NULL ) + { + InternalError(Fmt("Trying to close closed file for stream %s", fname.c_str())); + return false; + } +#ifdef DEBUG + Debug(DBG_INPUT, "Raw reader starting close"); +#endif + + delete in; + + if ( execute ) + pclose(file); + else + fclose(file); + + in = NULL; + file = NULL; + +#ifdef DEBUG + Debug(DBG_INPUT, "Raw reader finished close"); +#endif + + return true; + } + +bool Raw::DoInit(const ReaderInfo& info, int num_fields, const Field* const* fields) + { + fname = info.source; + mtime = 0; + execute = false; + firstrun = true; + bool result; + + if ( ! info.source || strlen(info.source) == 0 ) + { + Error("No source path provided"); + return false; + } + + if ( num_fields != 1 ) + { + Error("Filter for raw reader contains more than one field. " + "Filters for the raw reader may only contain exactly one string field. 
" + "Filter ignored."); + return false; + } + + if ( fields[0]->type != TYPE_STRING ) + { + Error("Filter for raw reader contains a field that is not of type string."); + return false; + } + + // do Initialization + string source = string(info.source); + char last = info.source[source.length() - 1]; + if ( last == '|' ) + { + execute = true; + fname = source.substr(0, fname.length() - 1); + + if ( (info.mode != MODE_MANUAL) ) + { + Error(Fmt("Unsupported read mode %d for source %s in execution mode", + info.mode, fname.c_str())); + return false; + } + + result = OpenInput(); + + } + else + { + execute = false; + result = OpenInput(); + } + + if ( result == false ) + return result; + +#ifdef DEBUG + Debug(DBG_INPUT, "Raw reader created, will perform first update"); +#endif + + // after initialization - do update + DoUpdate(); + +#ifdef DEBUG + Debug(DBG_INPUT, "First update went through"); +#endif + return true; + } + + +bool Raw::GetLine(string& str) + { + if ( in->peek() == std::iostream::traits_type::eof() ) + return false; + + if ( in->eofbit == true || in->failbit == true ) + return false; + + return getline(*in, str, separator[0]); + } + +// read the entire file and send appropriate thingies back to InputMgr +bool Raw::DoUpdate() + { + if ( firstrun ) + firstrun = false; + + else + { + switch ( Info().mode ) { + case MODE_REREAD: + { + // check if the file has changed + struct stat sb; + if ( stat(fname.c_str(), &sb) == -1 ) + { + Error(Fmt("Could not get stat for %s", fname.c_str())); + return false; + } + + if ( sb.st_mtime <= mtime ) + // no change + return true; + + mtime = sb.st_mtime; + // file changed. reread. + // + // fallthrough + } + + case MODE_MANUAL: + case MODE_STREAM: + if ( Info().mode == MODE_STREAM && file != NULL && in != NULL ) + { + //fpurge(file); + in->clear(); // remove end of file evil bits + break; + } + + CloseInput(); + if ( ! OpenInput() ) + return false; + + break; + + default: + assert(false); + } + } + + string line; + while ( GetLine(line) ) + { + assert (NumFields() == 1); + + Value** fields = new Value*[1]; + + // filter has exactly one text field. convert to it. + Value* val = new Value(TYPE_STRING, true); + val->val.string_val.data = copy_string(line.c_str()); + val->val.string_val.length = line.size(); + fields[0] = val; + + Put(fields); + } + +#ifdef DEBUG + Debug(DBG_INPUT, "DoUpdate finished successfully"); +#endif + + return true; + } + +bool Raw::DoHeartbeat(double network_time, double current_time) + { + switch ( Info().mode ) { + case MODE_MANUAL: + // yay, we do nothing :) + break; + + case MODE_REREAD: + case MODE_STREAM: +#ifdef DEBUG + Debug(DBG_INPUT, "Starting Heartbeat update"); +#endif + Update(); // call update and not DoUpdate, because update + // checks disabled. +#ifdef DEBUG + Debug(DBG_INPUT, "Finished with heartbeat update"); +#endif + break; + default: + assert(false); + } + + return true; + } diff --git a/src/input/readers/Raw.h b/src/input/readers/Raw.h new file mode 100644 index 0000000000..48912b70a7 --- /dev/null +++ b/src/input/readers/Raw.h @@ -0,0 +1,49 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef INPUT_READERS_RAW_H +#define INPUT_READERS_RAW_H + +#include +#include + +#include "../ReaderBackend.h" + +namespace input { namespace reader { + +/** + * A reader that returns a file (or the output of a command) as a single + * blob. 
+ */ +class Raw : public ReaderBackend { +public: + Raw(ReaderFrontend* frontend); + ~Raw(); + + static ReaderBackend* Instantiate(ReaderFrontend* frontend) { return new Raw(frontend); } + +protected: + virtual bool DoInit(const ReaderInfo& info, int arg_num_fields, const threading::Field* const* fields); + virtual void DoClose(); + virtual bool DoUpdate(); + virtual bool DoHeartbeat(double network_time, double current_time); + +private: + bool OpenInput(); + bool CloseInput(); + bool GetLine(string& str); + + string fname; // Source with a potential "|" removed. + istream* in; + FILE* file; + bool execute; + bool firstrun; + time_t mtime; + + // options set from the script-level. + string separator; +}; + +} +} + +#endif /* INPUT_READERS_RAW_H */ diff --git a/src/logging.bif b/src/logging.bif index 501eb899d9..f5d3e8e3e6 100644 --- a/src/logging.bif +++ b/src/logging.bif @@ -1,10 +1,11 @@ -# Internal functions and types used by the logging framework. +##! Internal functions and types used by the logging framework. module Log; %%{ -#include "LogMgr.h" #include "NetVar.h" + +#include "logging/Manager.h" %%} type Filter: record; @@ -64,10 +65,39 @@ function Log::__flush%(id: Log::ID%): bool module LogAscii; const output_to_stdout: bool; -const include_header: bool; -const header_prefix: string; +const include_meta: bool; +const meta_prefix: string; const separator: string; const set_separator: string; const empty_field: string; const unset_field: string; +# Options for the DataSeries writer. + +module LogDataSeries; + +const compression: string; +const extent_size: count; +const dump_schema: bool; +const use_integer_for_time: bool; +const num_threads: count; + +# Options for the ElasticSearch writer. + +module LogElasticSearch; + +const cluster_name: string; +const server_host: string; +const server_port: count; +const index_prefix: string; +const type_prefix: string; +const transfer_timeout: interval; +const max_batch_size: count; +const max_batch_interval: interval; +const max_byte_size: count; + +# Options for the None writer. + +module LogNone; + +const debug: bool; diff --git a/src/LogMgr.cc b/src/logging/Manager.cc similarity index 63% rename from src/LogMgr.cc rename to src/logging/Manager.cc index f78a2a19e0..4c6d2e92fd 100644 --- a/src/LogMgr.cc +++ b/src/logging/Manager.cc @@ -2,33 +2,58 @@ #include -#include "LogMgr.h" -#include "Event.h" -#include "EventHandler.h" -#include "NetVar.h" -#include "Net.h" +#include "../Event.h" +#include "../EventHandler.h" +#include "../NetVar.h" +#include "../Net.h" +#include "../Type.h" -#include "LogWriterAscii.h" -#include "LogWriterNone.h" +#include "threading/Manager.h" +#include "threading/SerialTypes.h" + +#include "Manager.h" +#include "WriterFrontend.h" +#include "WriterBackend.h" + +#include "writers/Ascii.h" +#include "writers/None.h" + +#ifdef USE_ELASTICSEARCH +#include "writers/ElasticSearch.h" +#endif + +#ifdef USE_DATASERIES +#include "writers/DataSeries.h" +#endif + +using namespace logging; // Structure describing a log writer type. -struct LogWriterDefinition { +struct WriterDefinition { bro_int_t type; // The type. const char *name; // Descriptive name for error messages. bool (*init)(); // An optional one-time initialization function. - LogWriter* (*factory)(); // A factory function creating instances. + WriterBackend* (*factory)(WriterFrontend* frontend); // A factory function creating instances. }; // Static table defining all availabel log writers. 
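// A hypothetical example of registering an additional writer backend in
// this table (the Foo writer, its header, and the WRITER_FOO enum value
// are invented for illustration):
//
//     #ifdef USE_FOO
//         { BifEnum::Log::WRITER_FOO, "Foo", 0, writer::Foo::Instantiate },
//     #endif
//
// together with a matching #include "writers/Foo.h" above and a
// corresponding value in the Log::Writer enum.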
-LogWriterDefinition log_writers[] = { - { BifEnum::Log::WRITER_NONE, "None", 0, LogWriterNone::Instantiate }, - { BifEnum::Log::WRITER_ASCII, "Ascii", 0, LogWriterAscii::Instantiate }, +WriterDefinition log_writers[] = { + { BifEnum::Log::WRITER_NONE, "None", 0, writer::None::Instantiate }, + { BifEnum::Log::WRITER_ASCII, "Ascii", 0, writer::Ascii::Instantiate }, + +#ifdef USE_ELASTICSEARCH + { BifEnum::Log::WRITER_ELASTICSEARCH, "ElasticSearch", 0, writer::ElasticSearch::Instantiate }, +#endif + +#ifdef USE_DATASERIES + { BifEnum::Log::WRITER_DATASERIES, "DataSeries", 0, writer::DataSeries::Instantiate }, +#endif // End marker, don't touch. - { BifEnum::Log::WRITER_DEFAULT, "None", 0, (LogWriter* (*)())0 } + { BifEnum::Log::WRITER_DEFAULT, "None", 0, (WriterBackend* (*)(WriterFrontend* frontend))0 } }; -struct LogMgr::Filter { +struct Manager::Filter { string name; EnumVal* id; Func* pred; @@ -36,13 +61,14 @@ struct LogMgr::Filter { string path; Val* path_val; EnumVal* writer; + TableVal* config; bool local; bool remote; double interval; Func* postprocessor; int num_fields; - LogField** fields; + threading::Field** fields; // Vector indexed by field number. Each element is a list of record // indices defining a path leading to the value across potential @@ -52,16 +78,19 @@ struct LogMgr::Filter { ~Filter(); }; -struct LogMgr::WriterInfo { +struct Manager::WriterInfo { EnumVal* type; double open_time; Timer* rotation_timer; double interval; Func* postprocessor; - LogWriter* writer; + WriterFrontend* writer; + WriterBackend::WriterInfo* info; + bool from_remote; + string instantiating_filter; }; -struct LogMgr::Stream { +struct Manager::Stream { EnumVal* id; bool enabled; string name; @@ -78,315 +107,7 @@ struct LogMgr::Stream { ~Stream(); }; -bool LogField::Read(SerializationFormat* fmt) - { - int t; - - bool success = (fmt->Read(&name, "name") && fmt->Read(&t, "type")); - type = (TypeTag) t; - - return success; - } - -bool LogField::Write(SerializationFormat* fmt) const - { - return (fmt->Write(name, "name") && fmt->Write((int)type, "type")); - } - -LogVal::~LogVal() - { - if ( (type == TYPE_ENUM || type == TYPE_STRING || type == TYPE_FILE || type == TYPE_FUNC) - && present ) - delete val.string_val; - - if ( type == TYPE_TABLE && present ) - { - for ( int i = 0; i < val.set_val.size; i++ ) - delete val.set_val.vals[i]; - - delete [] val.set_val.vals; - } - - if ( type == TYPE_VECTOR && present ) - { - for ( int i = 0; i < val.vector_val.size; i++ ) - delete val.vector_val.vals[i]; - - delete [] val.vector_val.vals; - } - } - -bool LogVal::IsCompatibleType(BroType* t, bool atomic_only) - { - if ( ! t ) - return false; - - switch ( t->Tag() ) { - case TYPE_BOOL: - case TYPE_INT: - case TYPE_COUNT: - case TYPE_COUNTER: - case TYPE_PORT: - case TYPE_SUBNET: - case TYPE_ADDR: - case TYPE_DOUBLE: - case TYPE_TIME: - case TYPE_INTERVAL: - case TYPE_ENUM: - case TYPE_STRING: - case TYPE_FILE: - case TYPE_FUNC: - return true; - - case TYPE_RECORD: - return ! atomic_only; - - case TYPE_TABLE: - { - if ( atomic_only ) - return false; - - if ( ! t->IsSet() ) - return false; - - return IsCompatibleType(t->AsSetType()->Indices()->PureType()); - } - - case TYPE_VECTOR: - { - if ( atomic_only ) - return false; - - return IsCompatibleType(t->AsVectorType()->YieldType()); - } - - default: - return false; - } - - return false; - } - -bool LogVal::Read(SerializationFormat* fmt) - { - int ty; - - if ( ! 
(fmt->Read(&ty, "type") && fmt->Read(&present, "present")) ) - return false; - - type = (TypeTag)(ty); - - if ( ! present ) - return true; - - switch ( type ) { - case TYPE_BOOL: - case TYPE_INT: - return fmt->Read(&val.int_val, "int"); - - case TYPE_COUNT: - case TYPE_COUNTER: - case TYPE_PORT: - return fmt->Read(&val.uint_val, "uint"); - - case TYPE_SUBNET: - { - uint32 net[4]; - if ( ! (fmt->Read(&net[0], "net0") && - fmt->Read(&net[1], "net1") && - fmt->Read(&net[2], "net2") && - fmt->Read(&net[3], "net3") && - fmt->Read(&val.subnet_val.width, "width")) ) - return false; - -#ifdef BROv6 - val.subnet_val.net[0] = net[0]; - val.subnet_val.net[1] = net[1]; - val.subnet_val.net[2] = net[2]; - val.subnet_val.net[3] = net[3]; -#else - val.subnet_val.net = net[0]; -#endif - return true; - } - - case TYPE_ADDR: - { - uint32 addr[4]; - if ( ! (fmt->Read(&addr[0], "addr0") && - fmt->Read(&addr[1], "addr1") && - fmt->Read(&addr[2], "addr2") && - fmt->Read(&addr[3], "addr3")) ) - return false; - - val.addr_val[0] = addr[0]; -#ifdef BROv6 - val.addr_val[1] = addr[1]; - val.addr_val[2] = addr[2]; - val.addr_val[3] = addr[3]; -#endif - return true; - } - - case TYPE_DOUBLE: - case TYPE_TIME: - case TYPE_INTERVAL: - return fmt->Read(&val.double_val, "double"); - - case TYPE_ENUM: - case TYPE_STRING: - case TYPE_FILE: - case TYPE_FUNC: - { - val.string_val = new string; - return fmt->Read(val.string_val, "string"); - } - - case TYPE_TABLE: - { - if ( ! fmt->Read(&val.set_val.size, "set_size") ) - return false; - - val.set_val.vals = new LogVal* [val.set_val.size]; - - for ( int i = 0; i < val.set_val.size; ++i ) - { - val.set_val.vals[i] = new LogVal; - - if ( ! val.set_val.vals[i]->Read(fmt) ) - return false; - } - - return true; - } - - case TYPE_VECTOR: - { - if ( ! fmt->Read(&val.vector_val.size, "vector_size") ) - return false; - - val.vector_val.vals = new LogVal* [val.vector_val.size]; - - for ( int i = 0; i < val.vector_val.size; ++i ) - { - val.vector_val.vals[i] = new LogVal; - - if ( ! val.vector_val.vals[i]->Read(fmt) ) - return false; - } - - return true; - } - - default: - reporter->InternalError("unsupported type %s in LogVal::Write", type_name(type)); - } - - return false; - } - -bool LogVal::Write(SerializationFormat* fmt) const - { - if ( ! (fmt->Write((int)type, "type") && - fmt->Write(present, "present")) ) - return false; - - if ( ! 
present ) - return true; - - switch ( type ) { - case TYPE_BOOL: - case TYPE_INT: - return fmt->Write(val.int_val, "int"); - - case TYPE_COUNT: - case TYPE_COUNTER: - case TYPE_PORT: - return fmt->Write(val.uint_val, "uint"); - - case TYPE_SUBNET: - { - uint32 net[4]; -#ifdef BROv6 - net[0] = val.subnet_val.net[0]; - net[1] = val.subnet_val.net[1]; - net[2] = val.subnet_val.net[2]; - net[3] = val.subnet_val.net[3]; -#else - net[0] = val.subnet_val.net; - net[1] = net[2] = net[3] = 0; -#endif - return fmt->Write(net[0], "net0") && - fmt->Write(net[1], "net1") && - fmt->Write(net[2], "net2") && - fmt->Write(net[3], "net3") && - fmt->Write(val.subnet_val.width, "width"); - } - - case TYPE_ADDR: - { - uint32 addr[4]; - addr[0] = val.addr_val[0]; -#ifdef BROv6 - addr[1] = val.addr_val[1]; - addr[2] = val.addr_val[2]; - addr[3] = val.addr_val[3]; -#else - addr[1] = addr[2] = addr[3] = 0; -#endif - return fmt->Write(addr[0], "addr0") && - fmt->Write(addr[1], "addr1") && - fmt->Write(addr[2], "addr2") && - fmt->Write(addr[3], "addr3"); - } - - case TYPE_DOUBLE: - case TYPE_TIME: - case TYPE_INTERVAL: - return fmt->Write(val.double_val, "double"); - - case TYPE_ENUM: - case TYPE_STRING: - case TYPE_FILE: - case TYPE_FUNC: - return fmt->Write(*val.string_val, "string"); - - case TYPE_TABLE: - { - if ( ! fmt->Write(val.set_val.size, "set_size") ) - return false; - - for ( int i = 0; i < val.set_val.size; ++i ) - { - if ( ! val.set_val.vals[i]->Write(fmt) ) - return false; - } - - return true; - } - - case TYPE_VECTOR: - { - if ( ! fmt->Write(val.vector_val.size, "vector_size") ) - return false; - - for ( int i = 0; i < val.vector_val.size; ++i ) - { - if ( ! val.vector_val.vals[i]->Write(fmt) ) - return false; - } - - return true; - } - - default: - reporter->InternalError("unsupported type %s in LogVal::REad", type_name(type)); - } - - return false; - } - -LogMgr::Filter::~Filter() +Manager::Filter::~Filter() { for ( int i = 0; i < num_fields; ++i ) delete fields[i]; @@ -396,7 +117,7 @@ LogMgr::Filter::~Filter() Unref(path_val); } -LogMgr::Stream::~Stream() +Manager::Stream::~Stream() { Unref(columns); @@ -404,14 +125,12 @@ LogMgr::Stream::~Stream() { WriterInfo* winfo = i->second; - if ( ! winfo ) - continue; - if ( winfo->rotation_timer ) timer_mgr->Cancel(winfo->rotation_timer); Unref(winfo->type); delete winfo->writer; + delete winfo->info; delete winfo; } @@ -419,17 +138,81 @@ LogMgr::Stream::~Stream() delete *f; } -LogMgr::LogMgr() +Manager::Manager() { + rotations_pending = 0; } -LogMgr::~LogMgr() +Manager::~Manager() { for ( vector::iterator s = streams.begin(); s != streams.end(); ++s ) delete *s; } -LogMgr::Stream* LogMgr::FindStream(EnumVal* id) +list Manager::SupportedFormats() + { + list formats; + + for ( WriterDefinition* ld = log_writers; ld->type != BifEnum::Log::WRITER_DEFAULT; ++ld ) + formats.push_back(ld->name); + + return formats; + } + +WriterBackend* Manager::CreateBackend(WriterFrontend* frontend, bro_int_t type) + { + WriterDefinition* ld = log_writers; + + while ( true ) + { + if ( ld->type == BifEnum::Log::WRITER_DEFAULT ) + { + reporter->Error("unknown writer type requested"); + return 0; + } + + if ( ld->type != type ) + { + // Not the right one. + ++ld; + continue; + } + + // If the writer has an init function, call it. + if ( ld->init ) + { + if ( (*ld->init)() ) + // Clear the init function so that we won't + // call it again later. + ld->init = 0; + else + { + // Init failed, disable by deleting factory + // function. 
+ ld->factory = 0; + + reporter->Error("initialization of writer %s failed", ld->name); + return 0; + } + } + + if ( ! ld->factory ) + // Oops, we can't instantiate this guy. + return 0; + + // All done. + break; + } + + assert(ld->factory); + + WriterBackend* backend = (*ld->factory)(frontend); + assert(backend); + + return backend; + } + +Manager::Stream* Manager::FindStream(EnumVal* id) { unsigned int idx = id->AsEnum(); @@ -439,7 +222,7 @@ LogMgr::Stream* LogMgr::FindStream(EnumVal* id) return streams[idx]; } -LogMgr::WriterInfo* LogMgr::FindWriter(LogWriter* writer) +Manager::WriterInfo* Manager::FindWriter(WriterFrontend* writer) { for ( vector::iterator s = streams.begin(); s != streams.end(); ++s ) { @@ -450,7 +233,7 @@ LogMgr::WriterInfo* LogMgr::FindWriter(LogWriter* writer) { WriterInfo* winfo = i->second; - if ( winfo && winfo->writer == writer ) + if ( winfo->writer == writer ) return winfo; } } @@ -458,14 +241,38 @@ LogMgr::WriterInfo* LogMgr::FindWriter(LogWriter* writer) return 0; } -void LogMgr::RemoveDisabledWriters(Stream* stream) +bool Manager::CompareFields(const Filter* filter, const WriterFrontend* writer) + { + if ( filter->num_fields != writer->NumFields() ) + return false; + + for ( int i = 0; i < filter->num_fields; ++ i) + if ( filter->fields[i]->type != writer->Fields()[i]->type ) + return false; + + return true; + } + +bool Manager::CheckFilterWriterConflict(const WriterInfo* winfo, const Filter* filter) + { + if ( winfo->from_remote ) + // If the writer was instantiated as a result of remote logging, then + // a filter and writer are only compatible if field types match + return ! CompareFields(filter, winfo->writer); + else + // If the writer was instantiated locally, it is bound to one filter + return winfo->instantiating_filter != filter->name; + } + +void Manager::RemoveDisabledWriters(Stream* stream) { list disabled; for ( Stream::WriterMap::iterator j = stream->writers.begin(); j != stream->writers.end(); j++ ) { - if ( j->second && j->second->writer->Disabled() ) + if ( j->second->writer->Disabled() ) { + j->second->writer->Stop(); delete j->second; disabled.push_back(j->first); } @@ -475,7 +282,7 @@ void LogMgr::RemoveDisabledWriters(Stream* stream) stream->writers.erase(*j); } -bool LogMgr::CreateStream(EnumVal* id, RecordVal* sval) +bool Manager::CreateStream(EnumVal* id, RecordVal* sval) { RecordType* rtype = sval->Type()->AsRecordType(); @@ -495,7 +302,7 @@ bool LogMgr::CreateStream(EnumVal* id, RecordVal* sval) if ( ! (columns->FieldDecl(i)->FindAttr(ATTR_LOG)) ) continue; - if ( ! LogVal::IsCompatibleType(columns->FieldType(i)) ) + if ( ! threading::Value::IsCompatibleType(columns->FieldType(i)) ) { reporter->Error("type of field '%s' is not support for logging output", columns->FieldName(i)); @@ -567,7 +374,7 @@ bool LogMgr::CreateStream(EnumVal* id, RecordVal* sval) return true; } -bool LogMgr::EnableStream(EnumVal* id) +bool Manager::EnableStream(EnumVal* id) { Stream* stream = FindStream(id); @@ -583,7 +390,7 @@ bool LogMgr::EnableStream(EnumVal* id) return true; } -bool LogMgr::DisableStream(EnumVal* id) +bool Manager::DisableStream(EnumVal* id) { Stream* stream = FindStream(id); @@ -600,7 +407,7 @@ bool LogMgr::DisableStream(EnumVal* id) } // Helper for recursive record field unrolling. 
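// It walks the stream's record type, recursing into nested records, takes
// the filter's include/exclude tables into account, and for every column
// that is kept appends a threading::Field (carrying the element type for
// sets/vectors and the &optional flag) to filter->fields while recording
// the column's index path in filter->indices.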
-bool LogMgr::TraverseRecord(Stream* stream, Filter* filter, RecordType* rt, +bool Manager::TraverseRecord(Stream* stream, Filter* filter, RecordType* rt, TableVal* include, TableVal* exclude, string path, list indices) { for ( int i = 0; i < rt->NumFields(); ++i ) @@ -694,9 +501,9 @@ bool LogMgr::TraverseRecord(Stream* stream, Filter* filter, RecordType* rt, filter->indices.push_back(new_indices); - filter->fields = (LogField**) + filter->fields = (threading::Field**) realloc(filter->fields, - sizeof(LogField) * ++filter->num_fields); + sizeof(threading::Field) * ++filter->num_fields); if ( ! filter->fields ) { @@ -704,16 +511,23 @@ bool LogMgr::TraverseRecord(Stream* stream, Filter* filter, RecordType* rt, return false; } - LogField* field = new LogField(); - field->name = new_path; - field->type = t->Tag(); - filter->fields[filter->num_fields - 1] = field; + TypeTag st = TYPE_VOID; + + if ( t->Tag() == TYPE_TABLE ) + st = t->AsSetType()->Indices()->PureType()->Tag(); + + else if ( t->Tag() == TYPE_VECTOR ) + st = t->AsVectorType()->YieldType()->Tag(); + + bool optional = rt->FieldDecl(i)->FindAttr(ATTR_OPTIONAL); + + filter->fields[filter->num_fields - 1] = new threading::Field(new_path.c_str(), 0, t->Tag(), st, optional); } return true; } -bool LogMgr::AddFilter(EnumVal* id, RecordVal* fval) +bool Manager::AddFilter(EnumVal* id, RecordVal* fval) { RecordType* rtype = fval->Type()->AsRecordType(); @@ -740,6 +554,7 @@ bool LogMgr::AddFilter(EnumVal* id, RecordVal* fval) Val* log_remote = fval->LookupWithDefault(rtype->FieldOffset("log_remote")); Val* interv = fval->LookupWithDefault(rtype->FieldOffset("interv")); Val* postprocessor = fval->LookupWithDefault(rtype->FieldOffset("postprocessor")); + Val* config = fval->LookupWithDefault(rtype->FieldOffset("config")); Filter* filter = new Filter; filter->name = name->AsString()->CheckString(); @@ -751,6 +566,7 @@ bool LogMgr::AddFilter(EnumVal* id, RecordVal* fval) filter->remote = log_remote->AsBool(); filter->interval = interv->AsInterval(); filter->postprocessor = postprocessor ? postprocessor->AsFunc() : 0; + filter->config = config->Ref()->AsTableVal(); Unref(name); Unref(pred); @@ -759,6 +575,7 @@ bool LogMgr::AddFilter(EnumVal* id, RecordVal* fval) Unref(log_remote); Unref(interv); Unref(postprocessor); + Unref(config); // Build the list of fields that the filter wants included, including // potentially rolling out fields. @@ -809,21 +626,21 @@ bool LogMgr::AddFilter(EnumVal* id, RecordVal* fval) for ( int i = 0; i < filter->num_fields; i++ ) { - LogField* field = filter->fields[i]; + threading::Field* field = filter->fields[i]; DBG_LOG(DBG_LOGGING, " field %10s: %s", - field->name.c_str(), type_name(field->type)); + field->name, type_name(field->type)); } #endif return true; } -bool LogMgr::RemoveFilter(EnumVal* id, StringVal* name) +bool Manager::RemoveFilter(EnumVal* id, StringVal* name) { return RemoveFilter(id, name->AsString()->CheckString()); } -bool LogMgr::RemoveFilter(EnumVal* id, string name) +bool Manager::RemoveFilter(EnumVal* id, string name) { Stream* stream = FindStream(id); if ( ! 
stream ) @@ -850,7 +667,7 @@ bool LogMgr::RemoveFilter(EnumVal* id, string name) return true; } -bool LogMgr::Write(EnumVal* id, RecordVal* columns) +bool Manager::Write(EnumVal* id, RecordVal* columns) { bool error = false; @@ -893,16 +710,13 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns) int result = 1; - try + Val* v = filter->pred->Call(&vl); + if ( v ) { - Val* v = filter->pred->Call(&vl); result = v->AsBool(); Unref(v); } - catch ( InterpreterException& e ) - { /* Already reported. */ } - if ( ! result ) continue; } @@ -914,11 +728,11 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns) Val* path_arg; if ( filter->path_val ) - path_arg = filter->path_val; + path_arg = filter->path_val->Ref(); else path_arg = new StringVal(""); - vl.append(path_arg->Ref()); + vl.append(path_arg); Val* rec_arg; BroType* rt = filter->path_func->FType()->Args()->FieldType("rec"); @@ -933,15 +747,10 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns) Val* v = 0; - try - { - v = filter->path_func->Call(&vl); - } + v = filter->path_func->Call(&vl); - catch ( InterpreterException& e ) - { + if ( ! v ) return false; - } if ( ! v->Type()->Tag() == TYPE_STRING ) { @@ -952,7 +761,6 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns) if ( ! filter->path_val ) { - Unref(path_arg); filter->path = v->AsString()->CheckString(); filter->path_val = v->Ref(); } @@ -966,87 +774,96 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns) #endif } - // See if we already have a writer for this path. - Stream::WriterMap::iterator w = - stream->writers.find(Stream::WriterPathPair(filter->writer->AsEnum(), path)); + Stream::WriterPathPair wpp(filter->writer->AsEnum(), path); - LogWriter* writer = 0; + // See if we already have a writer for this path. + Stream::WriterMap::iterator w = stream->writers.find(wpp); + + if ( w != stream->writers.end() && + CheckFilterWriterConflict(w->second, filter) ) + { + // Auto-correct path due to conflict over the writer/path pairs. + string instantiator = w->second->instantiating_filter; + string new_path; + unsigned int i = 2; + + do { + char num[32]; + snprintf(num, sizeof(num), "-%u", i++); + new_path = path + num; + wpp.second = new_path; + w = stream->writers.find(wpp); + } while ( w != stream->writers.end() && + CheckFilterWriterConflict(w->second, filter) ); + + Unref(filter->path_val); + filter->path_val = new StringVal(new_path.c_str()); + + reporter->Warning("Write using filter '%s' on path '%s' changed to" + " use new path '%s' to avoid conflict with filter '%s'", + filter->name.c_str(), path.c_str(), new_path.c_str(), + instantiator.c_str()); + + path = filter->path = filter->path_val->AsString()->CheckString(); + } + + WriterFrontend* writer = 0; if ( w != stream->writers.end() ) + { // We know this writer already. - writer = w->second ? w->second->writer : 0; + writer = w->second->writer; + } else { // No, need to create one. - // Copy the fields for LogWriter::Init() as it will take - // ownership. - LogField** arg_fields = new LogField*[filter->num_fields]; + // Copy the fields for WriterFrontend::Init() as it + // will take ownership. 
+ threading::Field** arg_fields = new threading::Field*[filter->num_fields]; for ( int j = 0; j < filter->num_fields; ++j ) - arg_fields[j] = new LogField(*filter->fields[j]); + arg_fields[j] = new threading::Field(*filter->fields[j]); - if ( filter->remote ) - remote_serializer->SendLogCreateWriter(stream->id, - filter->writer, - path, - filter->num_fields, - arg_fields); + WriterBackend::WriterInfo* info = new WriterBackend::WriterInfo; + info->path = copy_string(path.c_str()); + info->network_time = network_time; - if ( filter->local ) + HashKey* k; + IterCookie* c = filter->config->AsTable()->InitForIteration(); + + TableEntryVal* v; + while ( (v = filter->config->AsTable()->NextEntry(k, c)) ) { - writer = CreateWriter(stream->id, filter->writer, - path, filter->num_fields, - arg_fields); - - if ( ! writer ) - { - Unref(columns); - return false; - } + ListVal* index = filter->config->RecoverIndex(k); + string key = index->Index(0)->AsString()->CheckString(); + string value = v->Value()->AsString()->CheckString(); + info->config.insert(std::make_pair(copy_string(key.c_str()), copy_string(value.c_str()))); + Unref(index); + delete k; } - else + + // CreateWriter() will set the other fields in info. + + writer = CreateWriter(stream->id, filter->writer, + info, filter->num_fields, arg_fields, filter->local, + filter->remote, false, filter->name); + + if ( ! writer ) { - // Insert a null pointer into the map to make - // sure we don't try creating it again. - stream->writers.insert(Stream::WriterMap::value_type( - Stream::WriterPathPair(filter->writer->AsEnum(), path), 0)); - - for( int i = 0; i < filter->num_fields; ++i) - delete arg_fields[i]; - - delete [] arg_fields; + Unref(columns); + return false; } } // Alright, can do the write now. - if ( filter->local || filter->remote ) - { - LogVal** vals = RecordToFilterVals(stream, filter, columns); - - if ( filter->remote ) - remote_serializer->SendLogWrite(stream->id, - filter->writer, - path, - filter->num_fields, - vals); - - if ( filter->local ) - { - assert(writer); - - // Write takes ownership of vals. - if ( ! writer->Write(filter->num_fields, vals) ) - error = true; - } - - else - DeleteVals(filter->num_fields, vals); - - } + threading::Value** vals = RecordToFilterVals(stream, filter, columns); + // Write takes ownership of vals. + assert(writer); + writer->Write(filter->num_fields, vals); #ifdef DEBUG DBG_LOG(DBG_LOGGING, "Wrote record to filter '%s' on stream '%s'", @@ -1062,15 +879,15 @@ bool LogMgr::Write(EnumVal* id, RecordVal* columns) return true; } -LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) +threading::Value* Manager::ValToLogVal(Val* val, BroType* ty) { if ( ! ty ) ty = val->Type(); if ( ! 
val ) - return new LogVal(ty->Tag(), false); + return new threading::Value(ty->Tag(), false); - LogVal* lval = new LogVal(ty->Tag()); + threading::Value* lval = new threading::Value(ty->Tag()); switch ( lval->type ) { case TYPE_BOOL: @@ -1083,7 +900,18 @@ LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) const char* s = val->Type()->AsEnumType()->Lookup(val->InternalInt()); - lval->val.string_val = new string(s); + if ( s ) + { + lval->val.string_val.data = copy_string(s); + lval->val.string_val.length = strlen(s); + } + + else + { + val->Type()->Error("enum type does not contain value", val); + lval->val.string_val.data = copy_string(""); + lval->val.string_val.length = 0; + } break; } @@ -1093,23 +921,17 @@ LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) break; case TYPE_PORT: - lval->val.uint_val = val->AsPortVal()->Port(); + lval->val.port_val.port = val->AsPortVal()->Port(); + lval->val.port_val.proto = val->AsPortVal()->PortType(); break; case TYPE_SUBNET: - lval->val.subnet_val = *val->AsSubNet(); + val->AsSubNet().ConvertToThreadingValue(&lval->val.subnet_val); break; case TYPE_ADDR: - { - addr_type t = val->AsAddr(); -#ifdef BROv6 - copy_addr(t, lval->val.addr_val); -#else - copy_addr(&t, lval->val.addr_val); -#endif + val->AsAddr().ConvertToThreadingValue(&lval->val.addr_val); break; - } case TYPE_DOUBLE: case TYPE_TIME: @@ -1120,15 +942,20 @@ LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) case TYPE_STRING: { const BroString* s = val->AsString(); - lval->val.string_val = - new string((const char*) s->Bytes(), s->Len()); + char* buf = new char[s->Len()]; + memcpy(buf, s->Bytes(), s->Len()); + + lval->val.string_val.data = buf; + lval->val.string_val.length = s->Len(); break; } case TYPE_FILE: { const BroFile* f = val->AsFile(); - lval->val.string_val = new string(f->Name()); + string s = f->Name(); + lval->val.string_val.data = copy_string(s.c_str()); + lval->val.string_val.length = s.size(); break; } @@ -1137,7 +964,9 @@ LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) ODesc d; const Func* f = val->AsFunc(); f->Describe(&d); - lval->val.string_val = new string(d.Description()); + const char* s = d.Description(); + lval->val.string_val.data = copy_string(s); + lval->val.string_val.length = strlen(s); break; } @@ -1150,7 +979,7 @@ LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) set = new ListVal(TYPE_INT); lval->val.set_val.size = set->Length(); - lval->val.set_val.vals = new LogVal* [lval->val.set_val.size]; + lval->val.set_val.vals = new threading::Value* [lval->val.set_val.size]; for ( int i = 0; i < lval->val.set_val.size; i++ ) lval->val.set_val.vals[i] = ValToLogVal(set->Index(i)); @@ -1164,7 +993,7 @@ LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) VectorVal* vec = val->AsVectorVal(); lval->val.vector_val.size = vec->Size(); lval->val.vector_val.vals = - new LogVal* [lval->val.vector_val.size]; + new threading::Value* [lval->val.vector_val.size]; for ( int i = 0; i < lval->val.vector_val.size; i++ ) { @@ -1183,10 +1012,10 @@ LogVal* LogMgr::ValToLogVal(Val* val, BroType* ty) return lval; } -LogVal** LogMgr::RecordToFilterVals(Stream* stream, Filter* filter, +threading::Value** Manager::RecordToFilterVals(Stream* stream, Filter* filter, RecordVal* columns) { - LogVal** vals = new LogVal*[filter->num_fields]; + threading::Value** vals = new threading::Value*[filter->num_fields]; for ( int i = 0; i < filter->num_fields; ++i ) { @@ -1205,7 +1034,7 @@ LogVal** LogMgr::RecordToFilterVals(Stream* stream, Filter* filter, if ( ! 
val ) { // Value, or any of its parents, is not set. - vals[i] = new LogVal(filter->fields[i]->type, false); + vals[i] = new threading::Value(filter->fields[i]->type, false); break; } } @@ -1217,81 +1046,34 @@ LogVal** LogMgr::RecordToFilterVals(Stream* stream, Filter* filter, return vals; } -LogWriter* LogMgr::CreateWriter(EnumVal* id, EnumVal* writer, string path, - int num_fields, LogField** fields) +WriterFrontend* Manager::CreateWriter(EnumVal* id, EnumVal* writer, WriterBackend::WriterInfo* info, + int num_fields, const threading::Field* const* fields, bool local, bool remote, bool from_remote, + const string& instantiating_filter) { Stream* stream = FindStream(id); if ( ! stream ) // Don't know this stream. - return false; + return 0; Stream::WriterMap::iterator w = - stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), path)); + stream->writers.find(Stream::WriterPathPair(writer->AsEnum(), info->path)); - if ( w != stream->writers.end() && w->second ) + if ( w != stream->writers.end() ) // If we already have a writer for this. That's fine, we just // return it. return w->second->writer; - // Need to instantiate a new writer. - - LogWriterDefinition* ld = log_writers; - - while ( true ) - { - if ( ld->type == BifEnum::Log::WRITER_DEFAULT ) - { - reporter->Error("unknow writer when creating writer"); - return 0; - } - - if ( ld->type == writer->AsEnum() ) - break; - - if ( ! ld->factory ) - // Oops, we can't instantiate this guy. - return 0; - - // If the writer has an init function, call it. - if ( ld->init ) - { - if ( (*ld->init)() ) - // Clear the init function so that we won't - // call it again later. - ld->init = 0; - else - // Init failed, disable by deleting factory - // function. - ld->factory = 0; - - DBG_LOG(DBG_LOGGING, "failed to init writer class %s", - ld->name); - - return false; - } - - ++ld; - } - - assert(ld->factory); - LogWriter* writer_obj = (*ld->factory)(); - - if ( ! writer_obj->Init(path, num_fields, fields) ) - { - DBG_LOG(DBG_LOGGING, "failed to init instance of writer %s", - ld->name); - - return 0; - } - WriterInfo* winfo = new WriterInfo; winfo->type = writer->Ref()->AsEnumVal(); - winfo->writer = writer_obj; + winfo->writer = 0; winfo->open_time = network_time; winfo->rotation_timer = 0; winfo->interval = 0; winfo->postprocessor = 0; + winfo->info = info; + winfo->from_remote = from_remote; + winfo->instantiating_filter = instantiating_filter; // Search for a corresponding filter for the writer/path pair and use its // rotation settings. If no matching filter is found, fall back on @@ -1303,7 +1085,7 @@ LogWriter* LogMgr::CreateWriter(EnumVal* id, EnumVal* writer, string path, { Filter* f = *it; if ( f->writer->AsEnum() == writer->AsEnum() && - f->path == winfo->writer->Path() ) + f->path == info->path ) { found_filter_match = true; winfo->interval = f->interval; @@ -1319,25 +1101,37 @@ LogWriter* LogMgr::CreateWriter(EnumVal* id, EnumVal* writer, string path, winfo->interval = id->ID_Val()->AsInterval(); } - InstallRotationTimer(winfo); - stream->writers.insert( - Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), path), + Stream::WriterMap::value_type(Stream::WriterPathPair(writer->AsEnum(), info->path), winfo)); - return writer_obj; + // Still need to set the WriterInfo's rotation parameters, which we + // computed above. + const char* base_time = log_rotate_base_time ? 
+ log_rotate_base_time->AsString()->CheckString() : 0; + + winfo->info->rotation_interval = winfo->interval; + winfo->info->rotation_base = parse_rotate_base_time(base_time); + + winfo->writer = new WriterFrontend(*winfo->info, id, writer, local, remote); + winfo->writer->Init(num_fields, fields); + + InstallRotationTimer(winfo); + + return winfo->writer; } -void LogMgr::DeleteVals(int num_fields, LogVal** vals) +void Manager::DeleteVals(int num_fields, threading::Value** vals) { + // Note this code is duplicated in WriterBackend::DeleteVals(). for ( int i = 0; i < num_fields; i++ ) delete vals[i]; delete [] vals; } -bool LogMgr::Write(EnumVal* id, EnumVal* writer, string path, int num_fields, - LogVal** vals) +bool Manager::Write(EnumVal* id, EnumVal* writer, string path, int num_fields, + threading::Value** vals) { Stream* stream = FindStream(id); @@ -1347,7 +1141,7 @@ bool LogMgr::Write(EnumVal* id, EnumVal* writer, string path, int num_fields, #ifdef DEBUG ODesc desc; id->Describe(&desc); - DBG_LOG(DBG_LOGGING, "unknown stream %s in LogMgr::Write()", + DBG_LOG(DBG_LOGGING, "unknown stream %s in Manager::Write()", desc.Description()); #endif DeleteVals(num_fields, vals); @@ -1369,23 +1163,23 @@ bool LogMgr::Write(EnumVal* id, EnumVal* writer, string path, int num_fields, #ifdef DEBUG ODesc desc; id->Describe(&desc); - DBG_LOG(DBG_LOGGING, "unknown writer %s in LogMgr::Write()", + DBG_LOG(DBG_LOGGING, "unknown writer %s in Manager::Write()", desc.Description()); #endif DeleteVals(num_fields, vals); return false; } - bool success = (w->second ? w->second->writer->Write(num_fields, vals) : true); + w->second->writer->Write(num_fields, vals); DBG_LOG(DBG_LOGGING, - "Wrote pre-filtered record to path '%s' on stream '%s' [%s]", - path.c_str(), stream->name.c_str(), (success ? "ok" : "error")); + "Wrote pre-filtered record to path '%s' on stream '%s'", + path.c_str(), stream->name.c_str()); - return success; + return true; } -void LogMgr::SendAllWritersTo(RemoteSerializer::PeerID peer) +void Manager::SendAllWritersTo(RemoteSerializer::PeerID peer) { for ( vector::iterator s = streams.begin(); s != streams.end(); ++s ) { @@ -1397,22 +1191,19 @@ void LogMgr::SendAllWritersTo(RemoteSerializer::PeerID peer) for ( Stream::WriterMap::iterator i = stream->writers.begin(); i != stream->writers.end(); i++ ) { - if ( ! i->second ) - continue; - - LogWriter* writer = i->second->writer; + WriterFrontend* writer = i->second->writer; EnumVal writer_val(i->first.first, BifType::Enum::Log::Writer); remote_serializer->SendLogCreateWriter(peer, (*s)->id, &writer_val, - i->first.second, + *i->second->info, writer->NumFields(), writer->Fields()); } } } -bool LogMgr::SetBuf(EnumVal* id, bool enabled) +bool Manager::SetBuf(EnumVal* id, bool enabled) { Stream* stream = FindStream(id); if ( ! stream ) @@ -1420,17 +1211,14 @@ bool LogMgr::SetBuf(EnumVal* id, bool enabled) for ( Stream::WriterMap::iterator i = stream->writers.begin(); i != stream->writers.end(); i++ ) - { - if ( i->second ) - i->second->writer->SetBuf(enabled); - } + i->second->writer->SetBuf(enabled); RemoveDisabledWriters(stream); return true; } -bool LogMgr::Flush(EnumVal* id) +bool Manager::Flush(EnumVal* id) { Stream* stream = FindStream(id); if ( ! 
stream ) @@ -1441,26 +1229,39 @@ bool LogMgr::Flush(EnumVal* id) for ( Stream::WriterMap::iterator i = stream->writers.begin(); i != stream->writers.end(); i++ ) - { - if ( i->second ) - i->second->writer->Flush(); - } + i->second->writer->Flush(network_time); RemoveDisabledWriters(stream); return true; } -void LogMgr::Error(LogWriter* writer, const char* msg) +void Manager::Terminate() { - reporter->Error("error with writer for %s: %s", - writer->Path().c_str(), msg); + // Make sure we process all the pending rotations. + + while ( rotations_pending > 0 ) + { + thread_mgr->ForceProcessing(); // A blatant layering violation ... + usleep(1000); + } + + if ( rotations_pending < 0 ) + reporter->InternalError("Negative pending log rotations: %d", rotations_pending); + + for ( vector::iterator s = streams.begin(); s != streams.end(); ++s ) + { + if ( ! *s ) + continue; + + Flush((*s)->id); + } } // Timer which on dispatching rotates the filter. class RotationTimer : public Timer { public: - RotationTimer(double t, LogMgr::WriterInfo* arg_winfo, bool arg_rotate) + RotationTimer(double t, Manager::WriterInfo* arg_winfo, bool arg_rotate) : Timer(t, TIMER_ROTATE) { winfo = arg_winfo; @@ -1472,7 +1273,7 @@ public: void Dispatch(double t, int is_expire); protected: - LogMgr::WriterInfo* winfo; + Manager::WriterInfo* winfo; bool rotate; }; @@ -1496,7 +1297,7 @@ void RotationTimer::Dispatch(double t, int is_expire) } } -void LogMgr::InstallRotationTimer(WriterInfo* winfo) +void Manager::InstallRotationTimer(WriterInfo* winfo) { if ( terminating ) return; @@ -1524,8 +1325,9 @@ void LogMgr::InstallRotationTimer(WriterInfo* winfo) const char* base_time = log_rotate_base_time ? log_rotate_base_time->AsString()->CheckString() : 0; + double base = parse_rotate_base_time(base_time); double delta_t = - calc_next_rotate(rotation_interval, base_time); + calc_next_rotate(network_time, rotation_interval, base); winfo->rotation_timer = new RotationTimer(network_time + delta_t, winfo, true); @@ -1534,14 +1336,14 @@ void LogMgr::InstallRotationTimer(WriterInfo* winfo) timer_mgr->Add(winfo->rotation_timer); DBG_LOG(DBG_LOGGING, "Scheduled rotation timer for %s to %.6f", - winfo->writer->Path().c_str(), winfo->rotation_timer->Time()); + winfo->writer->Name(), winfo->rotation_timer->Time()); } } -void LogMgr::Rotate(WriterInfo* winfo) +void Manager::Rotate(WriterInfo* winfo) { DBG_LOG(DBG_LOGGING, "Rotating %s at %.6f", - winfo->writer->Path().c_str(), network_time); + winfo->writer->Name(), network_time); // Build a temporary path for the writer to move the file to. struct tm tm; @@ -1552,17 +1354,29 @@ void LogMgr::Rotate(WriterInfo* winfo) localtime_r(&teatime, &tm); strftime(buf, sizeof(buf), date_fmt, &tm); - string tmp = string(fmt("%s-%s", winfo->writer->Path().c_str(), buf)); - // Trigger the rotation. + const char* tmp = fmt("%s-%s", winfo->writer->Info().path, buf); winfo->writer->Rotate(tmp, winfo->open_time, network_time, terminating); + + ++rotations_pending; } -bool LogMgr::FinishedRotation(LogWriter* writer, string new_name, string old_name, - double open, double close, bool terminating) +bool Manager::FinishedRotation(WriterFrontend* writer, const char* new_name, const char* old_name, + double open, double close, bool success, bool terminating) { + assert(writer); + + --rotations_pending; + + if ( ! 
success ) + { + DBG_LOG(DBG_LOGGING, "Non-successful rotating writer '%s', file '%s' at %.6f,", + writer->Name(), filename, network_time); + return true; + } + DBG_LOG(DBG_LOGGING, "Finished rotating %s at %.6f, new name %s", - writer->Path().c_str(), network_time, new_name.c_str()); + writer->Name(), network_time, new_name); WriterInfo* winfo = FindWriter(writer); if ( ! winfo ) @@ -1571,8 +1385,8 @@ bool LogMgr::FinishedRotation(LogWriter* writer, string new_name, string old_nam // Create the RotationInfo record. RecordVal* info = new RecordVal(BifType::Record::Log::RotationInfo); info->Assign(0, winfo->type->Ref()); - info->Assign(1, new StringVal(new_name.c_str())); - info->Assign(2, new StringVal(winfo->writer->Path().c_str())); + info->Assign(1, new StringVal(new_name)); + info->Assign(2, new StringVal(winfo->writer->Info().path)); info->Assign(3, new Val(open, TYPE_TIME)); info->Assign(4, new Val(close, TYPE_TIME)); info->Assign(5, new Val(terminating, TYPE_BOOL)); @@ -1593,16 +1407,12 @@ bool LogMgr::FinishedRotation(LogWriter* writer, string new_name, string old_nam int result = 0; - try + Val* v = func->Call(&vl); + if ( v ) { - Val* v = func->Call(&vl); result = v->AsBool(); Unref(v); } - catch ( InterpreterException& e ) - { /* Already reported. */ } - return result; } - diff --git a/src/logging/Manager.h b/src/logging/Manager.h new file mode 100644 index 0000000000..90ad944bc6 --- /dev/null +++ b/src/logging/Manager.h @@ -0,0 +1,214 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// A class managing log writers and filters. + +#ifndef LOGGING_MANAGER_H +#define LOGGING_MANAGER_H + +#include "../Val.h" +#include "../EventHandler.h" +#include "../RemoteSerializer.h" + +#include "WriterBackend.h" + +class SerializationFormat; +class RemoteSerializer; +class RotationTimer; + +namespace logging { + +class WriterFrontend; +class RotationFinishedMessage; + +/** + * Singleton class for managing log streams. + */ +class Manager { +public: + /** + * Constructor. + */ + Manager(); + + /** + * Destructor. + */ + ~Manager(); + + /** + * Creates a new log stream. + * + * @param id The enum value corresponding the log stream. + * + * @param stream A record of script type \c Log::Stream. + * + * This method corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool CreateStream(EnumVal* id, RecordVal* stream); + + /** + * Enables a log log stream. + * + * @param id The enum value corresponding the log stream. + * + * This method corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool EnableStream(EnumVal* id); + + /** + * Disables a log stream. + * + * @param id The enum value corresponding the log stream. + * + * This methods corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool DisableStream(EnumVal* id); + + /** + * Adds a filter to a log stream. + * + * @param id The enum value corresponding the log stream. + * + * @param filter A record of script type \c Log::Filter. + * + * This methods corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool AddFilter(EnumVal* id, RecordVal* filter); + + /** + * Removes a filter from a log stream. + * + * @param id The enum value corresponding the log stream. + * + * @param name The name of the filter to remove. 
+ * + * This methods corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool RemoveFilter(EnumVal* id, StringVal* name); + + /** + * Removes a filter from a log stream. + * + * @param id The enum value corresponding the log stream. + * + * @param name The name of the filter to remove. + * + * This methods corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool RemoveFilter(EnumVal* id, string name); + + /** + * Write a record to a log stream. + * + * @param id The enum value corresponding the log stream. + * + * @param colums A record of the type defined for the stream's + * columns. + * + * This methods corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool Write(EnumVal* id, RecordVal* columns); + + /** + * Sets log streams buffering state. This adjusts all associated + * writers to the new state. + * + * @param id The enum value corresponding the log stream. + * + * @param enabled False to disable buffering (default is enabled). + * + * This methods corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool SetBuf(EnumVal* id, bool enabled); + + /** + * Flushes a log stream. This flushed all associated writers. + * + * @param id The enum value corresponding the log stream. + * + * This methods corresponds directly to the internal BiF defined in + * logging.bif, which just forwards here. + */ + bool Flush(EnumVal* id); + + /** + * Prepares the log manager to terminate. This will flush all log + * stream. + */ + void Terminate(); + + /** + * Returns a list of supported output formats. + */ + static list SupportedFormats(); + +protected: + friend class WriterFrontend; + friend class RotationFinishedMessage; + friend class RotationFailedMessage; + friend class ::RemoteSerializer; + friend class ::RotationTimer; + + // Instantiates a new WriterBackend of the given type (note that + // doing so creates a new thread!). + WriterBackend* CreateBackend(WriterFrontend* frontend, bro_int_t type); + + //// Function also used by the RemoteSerializer. + + // Takes ownership of fields and info. + WriterFrontend* CreateWriter(EnumVal* id, EnumVal* writer, WriterBackend::WriterInfo* info, + int num_fields, const threading::Field* const* fields, + bool local, bool remote, bool from_remote, const string& instantiating_filter=""); + + // Takes ownership of values.. + bool Write(EnumVal* id, EnumVal* writer, string path, + int num_fields, threading::Value** vals); + + // Announces all instantiated writers to peer. + void SendAllWritersTo(RemoteSerializer::PeerID peer); + + // Signals that a file has been rotated. + bool FinishedRotation(WriterFrontend* writer, const char* new_name, const char* old_name, + double open, double close, bool success, bool terminating); + + // Deletes the values as passed into Write(). 
+ void DeleteVals(int num_fields, threading::Value** vals); + +private: + struct Filter; + struct Stream; + struct WriterInfo; + + bool TraverseRecord(Stream* stream, Filter* filter, RecordType* rt, + TableVal* include, TableVal* exclude, string path, list indices); + + threading::Value** RecordToFilterVals(Stream* stream, Filter* filter, + RecordVal* columns); + + threading::Value* ValToLogVal(Val* val, BroType* ty = 0); + Stream* FindStream(EnumVal* id); + void RemoveDisabledWriters(Stream* stream); + void InstallRotationTimer(WriterInfo* winfo); + void Rotate(WriterInfo* info); + Filter* FindFilter(EnumVal* id, StringVal* filter); + WriterInfo* FindWriter(WriterFrontend* writer); + bool CompareFields(const Filter* filter, const WriterFrontend* writer); + bool CheckFilterWriterConflict(const WriterInfo* winfo, const Filter* filter); + + vector streams; // Indexed by stream enum. + int rotations_pending; // Number of rotations not yet finished. +}; + +} + +extern logging::Manager* log_mgr; + +#endif diff --git a/src/logging/WriterBackend.cc b/src/logging/WriterBackend.cc new file mode 100644 index 0000000000..47fdec27ef --- /dev/null +++ b/src/logging/WriterBackend.cc @@ -0,0 +1,373 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include "util.h" +#include "bro_inet_ntop.h" +#include "threading/SerialTypes.h" + +#include "Manager.h" +#include "WriterBackend.h" +#include "WriterFrontend.h" + +// Messages sent from backend to frontend (i.e., "OutputMessages"). + +using threading::Value; +using threading::Field; + +namespace logging { + +class RotationFinishedMessage : public threading::OutputMessage +{ +public: + RotationFinishedMessage(WriterFrontend* writer, const char* new_name, const char* old_name, + double open, double close, bool success, bool terminating) + : threading::OutputMessage("RotationFinished", writer), + new_name(copy_string(new_name)), old_name(copy_string(old_name)), open(open), + close(close), success(success), terminating(terminating) { } + + virtual ~RotationFinishedMessage() + { + delete [] new_name; + delete [] old_name; + } + + virtual bool Process() + { + return log_mgr->FinishedRotation(Object(), new_name, old_name, open, close, success, terminating); + } + +private: + const char* new_name; + const char* old_name; + double open; + double close; + bool success; + bool terminating; +}; + +class FlushWriteBufferMessage : public threading::OutputMessage +{ +public: + FlushWriteBufferMessage(WriterFrontend* writer) + : threading::OutputMessage("FlushWriteBuffer", writer) {} + + virtual bool Process() { Object()->FlushWriteBuffer(); return true; } +}; + +class DisableMessage : public threading::OutputMessage +{ +public: + DisableMessage(WriterFrontend* writer) + : threading::OutputMessage("Disable", writer) {} + + virtual bool Process() { Object()->SetDisable(); return true; } +}; + +} + +// Backend methods. + +using namespace logging; + +bool WriterBackend::WriterInfo::Read(SerializationFormat* fmt) + { + int size; + + string tmp_path; + + if ( ! (fmt->Read(&tmp_path, "path") && + fmt->Read(&rotation_base, "rotation_base") && + fmt->Read(&rotation_interval, "rotation_interval") && + fmt->Read(&network_time, "network_time") && + fmt->Read(&size, "config_size")) ) + return false; + + path = copy_string(tmp_path.c_str()); + + config.clear(); + + while ( size ) + { + string value; + string key; + + if ( ! 
(fmt->Read(&value, "config-value") && fmt->Read(&value, "config-key")) ) + return false; + + config.insert(std::make_pair(copy_string(value.c_str()), copy_string(key.c_str()))); + } + + return true; + } + + +bool WriterBackend::WriterInfo::Write(SerializationFormat* fmt) const + { + int size = config.size(); + + if ( ! (fmt->Write(path, "path") && + fmt->Write(rotation_base, "rotation_base") && + fmt->Write(rotation_interval, "rotation_interval") && + fmt->Write(network_time, "network_time") && + fmt->Write(size, "config_size")) ) + return false; + + for ( config_map::const_iterator i = config.begin(); i != config.end(); ++i ) + { + if ( ! (fmt->Write(i->first, "config-value") && fmt->Write(i->second, "config-key")) ) + return false; + } + + return true; + } + +WriterBackend::WriterBackend(WriterFrontend* arg_frontend) : MsgThread() + { + num_fields = 0; + fields = 0; + buffering = true; + frontend = arg_frontend; + info = new WriterInfo(frontend->Info()); + rotation_counter = 0; + + SetName(frontend->Name()); + } + +WriterBackend::~WriterBackend() + { + if ( fields ) + { + for(int i = 0; i < num_fields; ++i) + delete fields[i]; + + delete [] fields; + } + + delete info; + } + +void WriterBackend::DeleteVals(int num_writes, Value*** vals) + { + for ( int j = 0; j < num_writes; ++j ) + { + // Note this code is duplicated in Manager::DeleteVals(). + for ( int i = 0; i < num_fields; i++ ) + delete vals[j][i]; + + delete [] vals[j]; + } + + delete [] vals; + } + +bool WriterBackend::FinishedRotation(const char* new_name, const char* old_name, + double open, double close, bool terminating) + { + --rotation_counter; + SendOut(new RotationFinishedMessage(frontend, new_name, old_name, open, close, true, terminating)); + return true; + } + +bool WriterBackend::FinishedRotation() + { + --rotation_counter; + SendOut(new RotationFinishedMessage(frontend, 0, 0, 0, 0, false, false)); + return true; + } + +void WriterBackend::DisableFrontend() + { + SendOut(new DisableMessage(frontend)); + } + +bool WriterBackend::Init(int arg_num_fields, const Field* const* arg_fields) + { + num_fields = arg_num_fields; + fields = arg_fields; + + if ( Failed() ) + return true; + + if ( ! DoInit(*info, arg_num_fields, arg_fields) ) + { + DisableFrontend(); + return false; + } + + return true; + } + +bool WriterBackend::Write(int arg_num_fields, int num_writes, Value*** vals) + { + // Double-check that the arguments match. If we get this from remote, + // something might be mixed up. + if ( num_fields != arg_num_fields ) + { + +#ifdef DEBUG + const char* msg = Fmt("Number of fields don't match in WriterBackend::Write() (%d vs. %d)", + arg_num_fields, num_fields); + Debug(DBG_LOGGING, msg); +#endif + + DeleteVals(num_writes, vals); + DisableFrontend(); + return false; + } + + // Double-check all the types match. + for ( int j = 0; j < num_writes; j++ ) + { + for ( int i = 0; i < num_fields; ++i ) + { + if ( vals[j][i]->type != fields[i]->type ) + { +#ifdef DEBUG + const char* msg = Fmt("Field type doesn't match in WriterBackend::Write() (%d vs. %d)", + vals[j][i]->type, fields[i]->type); + Debug(DBG_LOGGING, msg); +#endif + DisableFrontend(); + DeleteVals(num_writes, vals); + return false; + } + } + } + + bool success = true; + + if ( ! Failed() ) + { + for ( int j = 0; j < num_writes; j++ ) + { + success = DoWrite(num_fields, fields, vals[j]); + + if ( ! success ) + break; + } + } + + DeleteVals(num_writes, vals); + + if ( ! 
success ) + DisableFrontend(); + + return success; + } + +bool WriterBackend::SetBuf(bool enabled) + { + if ( enabled == buffering ) + // No change. + return true; + + if ( Failed() ) + return true; + + buffering = enabled; + + if ( ! DoSetBuf(enabled) ) + { + DisableFrontend(); + return false; + } + + return true; + } + +bool WriterBackend::Rotate(const char* rotated_path, double open, + double close, bool terminating) + { + if ( Failed() ) + return true; + + rotation_counter = 1; + + if ( ! DoRotate(rotated_path, open, close, terminating) ) + { + DisableFrontend(); + return false; + } + + // Insurance against broken writers. + if ( rotation_counter > 0 ) + InternalError(Fmt("writer %s did not call FinishedRotation() in DoRotation()", Name())); + + if ( rotation_counter < 0 ) + InternalError(Fmt("writer %s called FinishedRotation() more than once in DoRotation()", Name())); + + return true; + } + +bool WriterBackend::Flush(double network_time) + { + if ( Failed() ) + return true; + + if ( ! DoFlush(network_time) ) + { + DisableFrontend(); + return false; + } + + return true; + } + +bool WriterBackend::OnFinish(double network_time) + { + if ( Failed() ) + return true; + + return DoFinish(network_time); + } + +bool WriterBackend::OnHeartbeat(double network_time, double current_time) + { + if ( Failed() ) + return true; + + SendOut(new FlushWriteBufferMessage(frontend)); + return DoHeartbeat(network_time, current_time); + } + +string WriterBackend::Render(const threading::Value::addr_t& addr) const + { + if ( addr.family == IPv4 ) + { + char s[INET_ADDRSTRLEN]; + + if ( ! bro_inet_ntop(AF_INET, &addr.in.in4, s, INET_ADDRSTRLEN) ) + return ""; + else + return s; + } + else + { + char s[INET6_ADDRSTRLEN]; + + if ( ! bro_inet_ntop(AF_INET6, &addr.in.in6, s, INET6_ADDRSTRLEN) ) + return ""; + else + return s; + } + } + +string WriterBackend::Render(const threading::Value::subnet_t& subnet) const + { + char l[16]; + + if ( subnet.prefix.family == IPv4 ) + modp_uitoa10(subnet.length - 96, l); + else + modp_uitoa10(subnet.length, l); + + string s = Render(subnet.prefix) + "/" + l; + + return s; + } + +string WriterBackend::Render(double d) const + { + char buf[256]; + modp_dtoa(d, buf, 6); + return buf; + } diff --git a/src/logging/WriterBackend.h b/src/logging/WriterBackend.h new file mode 100644 index 0000000000..89185619c4 --- /dev/null +++ b/src/logging/WriterBackend.h @@ -0,0 +1,426 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// Bridge class between main process and writer threads. + +#ifndef LOGGING_WRITERBACKEND_H +#define LOGGING_WRITERBACKEND_H + +#include "threading/MsgThread.h" + +class RemoteSerializer; + +namespace logging { + +class WriterFrontend; + +/** + * Base class for writer implementation. When the logging::Manager creates a + * new logging filter, it instantiates a WriterFrontend. That then in turn + * creates a WriterBackend of the right type. The frontend then forwards + * messages over the backend as its methods are called. + * + * All of this methods must be called only from the corresponding child + * thread (the constructor and destructor are the exceptions.) + */ +class WriterBackend : public threading::MsgThread +{ +public: + /** + * Constructor. + * + * @param frontend The frontend writer that created this backend. The + * *only* purpose of this value is to be passed back via messages as + * a argument to callbacks. One must not otherwise access the + * frontend, it's running in a different thread. 
+ * + * @param name A descriptive name for writer's type (e.g., \c Ascii). + * + */ + WriterBackend(WriterFrontend* frontend); + + /** + * Destructor. + */ + virtual ~WriterBackend(); + + /** + * A struct passing information to the writer at initialization time. + */ + struct WriterInfo + { + // Structure takes ownership of these strings. + typedef std::map config_map; + + /** + * A string left to the interpretation of the writer + * implementation; it corresponds to the 'path' value configured + * on the script-level for the logging filter. + * + * Structure takes ownership of string. + */ + const char* path; + + /** + * The rotation interval as configured for this writer. + */ + double rotation_interval; + + /** + * The parsed value of log_rotate_base_time in seconds. + */ + double rotation_base; + + /** + * The network time when the writer is created. + */ + double network_time; + + /** + * A map of key/value pairs corresponding to the relevant + * filter's "config" table. + */ + config_map config; + + WriterInfo() : path(0), rotation_interval(0.0), rotation_base(0.0), + network_time(0.0) + { + } + + WriterInfo(const WriterInfo& other) + { + path = other.path ? copy_string(other.path) : 0; + rotation_interval = other.rotation_interval; + rotation_base = other.rotation_base; + network_time = other.network_time; + + for ( config_map::const_iterator i = other.config.begin(); i != other.config.end(); i++ ) + config.insert(std::make_pair(copy_string(i->first), copy_string(i->second))); + } + + ~WriterInfo() + { + delete [] path; + + for ( config_map::iterator i = config.begin(); i != config.end(); i++ ) + { + delete [] i->first; + delete [] i->second; + } + } + + private: + const WriterInfo& operator=(const WriterInfo& other); // Disable. + + friend class ::RemoteSerializer; + + // Note, these need to be adapted when changing the struct's + // fields. They serialize/deserialize the struct. + bool Read(SerializationFormat* fmt); + bool Write(SerializationFormat* fmt) const; + }; + + /** + * One-time initialization of the writer to define the logged fields. + * + * @param num_fields + * + * @param fields An array of size \a num_fields with the log fields. + * The methods takes ownership of the array. + * + * @param frontend_name The name of the front-end writer implementation. + * + * @return False if an error occured. + */ + bool Init(int num_fields, const threading::Field* const* fields); + + /** + * Writes one log entry. + * + * @param num_fields: The number of log fields for this stream. The + * value must match what was passed to Init(). + * + * @param An array of size \a num_fields with the log values. Their + * types musst match with the field passed to Init(). The method + * takes ownership of \a vals.. + * + * Returns false if an error occured, in which case the writer must + * not be used any further. + * + * @return False if an error occured. + */ + bool Write(int num_fields, int num_writes, threading::Value*** vals); + + /** + * Sets the buffering status for the writer, assuming the writer + * supports that. (If not, it will be ignored). + * + * @param enabled False if buffering is to be disabled (by default + * it's on). + * + * @return False if an error occured. + */ + bool SetBuf(bool enabled); + + /** + * Flushes any currently buffered output, assuming the writer + * supports that. (If not, it will be ignored). + * + * @param network_time The network time when the flush was triggered. + * + * @return False if an error occured. 
+ */ + bool Flush(double network_time); + + /** + * Triggers rotation, if the writer supports that. (If not, it will + * be ignored). + * + * @return False if an error occured. + */ + bool Rotate(const char* rotated_path, double open, double close, bool terminating); + + /** + * Disables the frontend that has instantiated this backend. Once + * disabled,the frontend will not send any further message over. + * + * TODO: Do we still need this method (and the corresponding message)? + */ + void DisableFrontend(); + + /** + * Returns the additional writer information passed into the constructor. + */ + const WriterInfo& Info() const { return *info; } + + /** + * Returns the number of log fields as passed into the constructor. + */ + int NumFields() const { return num_fields; } + + /** + * Returns the log fields as passed into the constructor. + */ + const threading::Field* const * Fields() const { return fields; } + + /** + * Returns the current buffering state. + * + * @return True if buffering is enabled. + */ + bool IsBuf() { return buffering; } + + /** + * Signals that a file has been successfully rotated and any + * potential post-processor can now run. + * + * Most of the parameters should be passed through from DoRotate(). + * + * Note: Exactly one of the two FinishedRotation() methods must be + * called by a writer's implementation of DoRotate() once rotation + * has finished. + * + * @param new_name The filename of the rotated file. + * + * @param old_name The filename of the original file. + * + * @param open: The timestamp when the original file was opened. + * + * @param close: The timestamp when the origina file was closed. + * + * @param terminating: True if the original rotation request occured + * due to the main Bro process shutting down. + */ + bool FinishedRotation(const char* new_name, const char* old_name, + double open, double close, bool terminating); + + /** + * Signals that a file rotation request has been processed, but no + * further post-processing needs to be performed (either because + * there was an error, or there was nothing to rotate to begin with + * with this writer). + * + * Note: Exactly one of the two FinishedRotation() methods must be + * called by a writer's implementation of DoRotate() once rotation + * has finished. + * + * @param new_name The filename of the rotated file. + * + * @param old_name The filename of the original file. + * + * @param open: The timestamp when the original file was opened. + * + * @param close: The timestamp when the origina file was closed. + * + * @param terminating: True if the original rotation request occured + * due to the main Bro process shutting down. + */ + bool FinishedRotation(); + + /** Helper method to render an IP address as a string. + * + * @param addr The address. + * + * @return An ASCII representation of the address. + */ + string Render(const threading::Value::addr_t& addr) const; + + /** Helper method to render an subnet value as a string. + * + * @param addr The address. + * + * @return An ASCII representation of the address. + */ + string Render(const threading::Value::subnet_t& subnet) const; + + /** Helper method to render a double in Bro's standard precision. + * + * @param d The double. + * + * @return An ASCII representation of the double. + */ + string Render(double d) const; + + // Overridden from MsgThread. 
+ virtual bool OnHeartbeat(double network_time, double current_time); + virtual bool OnFinish(double network_time); + +protected: + friend class FinishMessage; + + /** + * Writer-specific intialization method. + * + * A writer implementation must override this method. If it returns + * false, it will be assumed that a fatal error has occured that + * prevents the writer from further operation; it will then be + * disabled and eventually deleted. When returning false, an + * implementation should also call Error() to indicate what happened. + */ + virtual bool DoInit(const WriterInfo& info, int num_fields, + const threading::Field* const* fields) = 0; + + /** + * Writer-specific output method implementing recording of fone log + * entry. + * + * A writer implementation must override this method. If it returns + * false, it will be assumed that a fatal error has occured that + * prevents the writer from further operation; it will then be + * disabled and eventually deleted. When returning false, an + * implementation should also call Error() to indicate what happened. + */ + virtual bool DoWrite(int num_fields, const threading::Field* const* fields, + threading::Value** vals) = 0; + + /** + * Writer-specific method implementing a change of fthe buffering + * state. If buffering is disabled, the writer should attempt to + * write out information as quickly as possible even if doing so may + * have a performance impact. If enabled (which is the default), it + * may buffer data as helpful and write it out later in a way + * optimized for performance. The current buffering state can be + * queried via IsBuf(). + * + * A writer implementation must override this method but it can just + * ignore calls if buffering doesn't align with its semantics. + * + * If the method returns false, it will be assumed that a fatal error + * has occured that prevents the writer from further operation; it + * will then be disabled and eventually deleted. When returning + * false, an implementation should also call Error() to indicate what + * happened. + */ + virtual bool DoSetBuf(bool enabled) = 0; + + /** + * Writer-specific method implementing flushing of its output. + * + * A writer implementation must override this method but it can just + * ignore calls if flushing doesn't align with its semantics. + * + * If the method returns false, it will be assumed that a fatal error + * has occured that prevents the writer from further operation; it + * will then be disabled and eventually deleted. When returning + * false, an implementation should also call Error() to indicate what + * happened. + * + * @param network_time The network time when the flush was triggered. + */ + virtual bool DoFlush(double network_time) = 0; + + /** + * Writer-specific method implementing log rotation. Most directly + * this only applies to writers writing into files, which should then + * close the current file and open a new one. However, a writer may + * also trigger other apppropiate actions if semantics are similar. + * Once rotation has finished, the implementation *must* call + * FinishedRotation() to signal the log manager that potential + * postprocessors can now run. + * + * A writer implementation must override this method but it can just + * ignore calls if flushing doesn't align with its semantics. It + * still needs to call FinishedRotation() though. 
+ * + * If the method returns false, it will be assumed that a fatal error + * has occured that prevents the writer from further operation; it + * will then be disabled and eventually deleted. When returning + * false, an implementation should also call Error() to indicate what + * happened. + * + * @param rotate_path Reflects the path to where the rotated output + * is to be moved, with specifics depending on the writer. It should + * generally be interpreted in a way consistent with that of \c path + * as passed into DoInit(). As an example, for file-based output, \c + * rotate_path could be the original filename extended with a + * timestamp indicating the time of the rotation. + * + * @param open The network time when the *current* file was opened. + * + * @param close The network time when the *current* file was closed. + * + * @param terminating Indicates whether the rotation request occurs + * due the main Bro prcoess terminating (and not because we've + * reached a regularly scheduled time for rotation). + */ + virtual bool DoRotate(const char* rotated_path, double open, double close, + bool terminating) = 0; + + /** + * Writer-specific method called just before the threading system is + * going to shutdown. It is assumed that once this messages returns, + * the thread can be safely terminated. + * + * @param network_time The network time when the finish is triggered. + */ + virtual bool DoFinish(double network_time) = 0; + /** + * Triggered by regular heartbeat messages from the main thread. + * + * This method can be overridden. Default implementation does + * nothing. + */ + virtual bool DoHeartbeat(double network_time, double current_time) = 0; + +private: + /** + * Deletes the values as passed into Write(). + */ + void DeleteVals(int num_writes, threading::Value*** vals); + + // Frontend that instantiated us. This object must not be access from + // this class, it's running in a different thread! + WriterFrontend* frontend; + + const WriterInfo* info; // Meta information. + int num_fields; // Number of log fields. + const threading::Field* const* fields; // Log fields. + bool buffering; // True if buffering is enabled. + + int rotation_counter; // Tracks FinishedRotation() calls. +}; + + +} + +#endif + diff --git a/src/logging/WriterFrontend.cc b/src/logging/WriterFrontend.cc new file mode 100644 index 0000000000..a97f48c1ed --- /dev/null +++ b/src/logging/WriterFrontend.cc @@ -0,0 +1,262 @@ + +#include "Net.h" +#include "threading/SerialTypes.h" + +#include "Manager.h" +#include "WriterFrontend.h" +#include "WriterBackend.h" + +using threading::Value; +using threading::Field; + +namespace logging { + +// Messages sent from frontend to backend (i.e., "InputMessages"). 
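+//
+// For example, a frontend call such as SetBuf(false) is not executed
+// directly; it is queued to the backend thread roughly as (sketch only,
+// using the message classes defined below):
+//
+//     backend->SendIn(new SetBufMessage(backend, false));
+//
+// The backend thread later runs SetBufMessage::Process(), which in turn
+// calls WriterBackend::SetBuf(false).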
+ +class InitMessage : public threading::InputMessage +{ +public: + InitMessage(WriterBackend* backend, const int num_fields, const Field* const* fields) + : threading::InputMessage("Init", backend), + num_fields(num_fields), fields(fields) + {} + + + virtual bool Process() { return Object()->Init(num_fields, fields); } + +private: + const int num_fields; + const Field * const* fields; +}; + +class RotateMessage : public threading::InputMessage +{ +public: + RotateMessage(WriterBackend* backend, WriterFrontend* frontend, const char* rotated_path, const double open, + const double close, const bool terminating) + : threading::InputMessage("Rotate", backend), + frontend(frontend), + rotated_path(copy_string(rotated_path)), open(open), + close(close), terminating(terminating) { } + + virtual ~RotateMessage() { delete [] rotated_path; } + + virtual bool Process() { return Object()->Rotate(rotated_path, open, close, terminating); } + +private: + WriterFrontend* frontend; + const char* rotated_path; + const double open; + const double close; + const bool terminating; +}; + +class WriteMessage : public threading::InputMessage +{ +public: + WriteMessage(WriterBackend* backend, int num_fields, int num_writes, Value*** vals) + : threading::InputMessage("Write", backend), + num_fields(num_fields), num_writes(num_writes), vals(vals) {} + + virtual bool Process() { return Object()->Write(num_fields, num_writes, vals); } + +private: + int num_fields; + int num_writes; + Value ***vals; +}; + +class SetBufMessage : public threading::InputMessage +{ +public: + SetBufMessage(WriterBackend* backend, const bool enabled) + : threading::InputMessage("SetBuf", backend), + enabled(enabled) { } + + virtual bool Process() { return Object()->SetBuf(enabled); } + +private: + const bool enabled; +}; + +class FlushMessage : public threading::InputMessage +{ +public: + FlushMessage(WriterBackend* backend, double network_time) + : threading::InputMessage("Flush", backend), + network_time(network_time) {} + + virtual bool Process() { return Object()->Flush(network_time); } +private: + double network_time; +}; + +} + +// Frontend methods. 
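+//
+// A rough sketch of the call sequence the logging::Manager drives for one
+// filter (hypothetical variable names; WriterInfo setup and error handling
+// omitted):
+//
+//     WriterFrontend* fe = new WriterFrontend(info, stream_val, writer_val,
+//                                             true /* local */, false /* remote */);
+//     fe->Init(num_fields, fields);    // forwards the field layout to the backend
+//     fe->Write(num_fields, vals);     // once per log entry; may be buffered
+//     fe->Rotate(rotated_path, open_t, close_t, false);
+//     fe->Stop();                      // flush pending writes and disable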
+ +using namespace logging; + +WriterFrontend::WriterFrontend(const WriterBackend::WriterInfo& arg_info, EnumVal* arg_stream, EnumVal* arg_writer, bool arg_local, bool arg_remote) + { + stream = arg_stream; + writer = arg_writer; + Ref(stream); + Ref(writer); + + disabled = initialized = false; + buf = true; + local = arg_local; + remote = arg_remote; + write_buffer = 0; + write_buffer_pos = 0; + info = new WriterBackend::WriterInfo(arg_info); + + const char* w = arg_writer->Type()->AsEnumType()->Lookup(arg_writer->InternalInt()); + name = copy_string(fmt("%s/%s", arg_info.path, w)); + + if ( local ) + { + backend = log_mgr->CreateBackend(this, writer->AsEnum()); + + if ( backend ) + backend->Start(); + } + + else + backend = 0; + } + +WriterFrontend::~WriterFrontend() + { + Unref(stream); + Unref(writer); + delete info; + } + +void WriterFrontend::Stop() + { + FlushWriteBuffer(); + SetDisable(); + } + +void WriterFrontend::Init(int arg_num_fields, const Field* const * arg_fields) + { + if ( disabled ) + return; + + if ( initialized ) + reporter->InternalError("writer initialize twice"); + + num_fields = arg_num_fields; + fields = arg_fields; + + initialized = true; + + if ( backend ) + backend->SendIn(new InitMessage(backend, arg_num_fields, arg_fields)); + + if ( remote ) + remote_serializer->SendLogCreateWriter(stream, + writer, + *info, + arg_num_fields, + arg_fields); + + } + +void WriterFrontend::Write(int num_fields, Value** vals) + { + if ( disabled ) + return; + + if ( remote ) + remote_serializer->SendLogWrite(stream, + writer, + info->path, + num_fields, + vals); + + if ( ! backend ) + { + DeleteVals(vals); + return; + } + + if ( ! write_buffer ) + { + // Need new buffer. + write_buffer = new Value**[WRITER_BUFFER_SIZE]; + write_buffer_pos = 0; + } + + write_buffer[write_buffer_pos++] = vals; + + if ( write_buffer_pos >= WRITER_BUFFER_SIZE || ! buf || terminating ) + // Buffer full (or no bufferin desired or termiating). + FlushWriteBuffer(); + + } + +void WriterFrontend::FlushWriteBuffer() + { + if ( ! write_buffer_pos ) + // Nothing to do. + return; + + if ( backend ) + backend->SendIn(new WriteMessage(backend, num_fields, write_buffer_pos, write_buffer)); + + // Clear buffer (no delete, we pass ownership to child thread.) + write_buffer = 0; + write_buffer_pos = 0; + } + +void WriterFrontend::SetBuf(bool enabled) + { + if ( disabled ) + return; + + buf = enabled; + + if ( backend ) + backend->SendIn(new SetBufMessage(backend, enabled)); + + if ( ! buf ) + // Make sure no longer buffer any still queued data. + FlushWriteBuffer(); + } + +void WriterFrontend::Flush(double network_time) + { + if ( disabled ) + return; + + FlushWriteBuffer(); + + if ( backend ) + backend->SendIn(new FlushMessage(backend, network_time)); + } + +void WriterFrontend::Rotate(const char* rotated_path, double open, double close, bool terminating) + { + if ( disabled ) + return; + + FlushWriteBuffer(); + + if ( backend ) + backend->SendIn(new RotateMessage(backend, this, rotated_path, open, close, terminating)); + else + // Still signal log manager that we're done. + log_mgr->FinishedRotation(this, 0, 0, 0, 0, false, terminating); + } + +void WriterFrontend::DeleteVals(Value** vals) + { + // Note this code is duplicated in Manager::DeleteVals(). 
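+	// The frontend owns the values once Write() has been called with them,
+	// so both the individual threading::Value objects and the array itself
+	// are freed here.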
+ for ( int i = 0; i < num_fields; i++ ) + delete vals[i]; + + delete [] vals; + } diff --git a/src/logging/WriterFrontend.h b/src/logging/WriterFrontend.h new file mode 100644 index 0000000000..a4a8dcd415 --- /dev/null +++ b/src/logging/WriterFrontend.h @@ -0,0 +1,230 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#ifndef LOGGING_WRITERFRONTEND_H +#define LOGGING_WRITERFRONTEND_H + +#include "WriterBackend.h" + +#include "threading/MsgThread.h" + +namespace logging { + +class Manager; + +/** + * Bridge class between the logging::Manager and backend writer threads. The + * Manager instantiates one \a WriterFrontend for each open logging filter. + * Each frontend in turns instantiates a WriterBackend-derived class + * internally that's specific to the particular output format. That backend + * runs in a new thread, and it receives messages from the frontend that + * correspond to method called by the manager. + * + */ +class WriterFrontend { +public: + /** + * Constructor. + * + * stream: The logging stream. + * + * writer: The backend writer type, with the value corresponding to the + * script-level \c Log::Writer enum (e.g., \a WRITER_ASCII). The + * frontend will internally instantiate a WriterBackend of the + * corresponding type. + * + * info: The meta information struct for the writer. + * + * writer_name: A descriptive name for the writer's type. + * + * local: If true, the writer will instantiate a local backend. + * + * remote: If true, the writer will forward all data to remote + * clients. + * + * Frontends must only be instantiated by the main thread. + */ + WriterFrontend(const WriterBackend::WriterInfo& info, EnumVal* stream, EnumVal* writer, bool local, bool remote); + + /** + * Destructor. + * + * Frontends must only be destroyed by the main thread. + */ + virtual ~WriterFrontend(); + + /** + * Stops all output to this writer. Calling this methods disables all + * message forwarding to the backend. + * + * This method must only be called from the main thread. + */ + void Stop(); + + /** + * Initializes the writer. + * + * This method generates a message to the backend writer and triggers + * the corresponding message there. If the backend method fails, it + * sends a message back that will asynchronously call Disable(). + * + * See WriterBackend::Init() for arguments. The method takes + * ownership of \a fields. + * + * This method must only be called from the main thread. + */ + void Init(int num_fields, const threading::Field* const* fields); + + /** + * Write out a record. + * + * This method generates a message to the backend writer and triggers + * the corresponding message there. If the backend method fails, it + * sends a message back that will asynchronously call Disable(). + * + * As an optimization, if buffering is enabled (which is the default) + * this method may buffer several writes and send them over to the + * backend in bulk with a single message. An explicit bulk write of + * all currently buffered data can be triggered with + * FlushWriteBuffer(). The backend writer triggers this with a + * message at every heartbeat. + * + * See WriterBackend::Writer() for arguments (except that this method + * takes only a single record, not an array). The method takes + * ownership of \a vals. + * + * This method must only be called from the main thread. + */ + void Write(int num_fields, threading::Value** vals); + + /** + * Sets the buffering state. 
+ * + * This method generates a message to the backend writer and triggers + * the corresponding message there. If the backend method fails, it + * sends a message back that will asynchronously call Disable(). + * + * See WriterBackend::SetBuf() for arguments. + * + * This method must only be called from the main thread. + */ + void SetBuf(bool enabled); + + /** + * Flushes the output. + * + * This method generates a message to the backend writer and triggers + * the corresponding message there. In addition, it also triggers + * FlushWriteBuffer(). If the backend method fails, it sends a + * message back that will asynchronously call Disable(). + * + * This method must only be called from the main thread. + * + * @param network_time The network time when the flush was triggered. + */ + void Flush(double network_time); + + /** + * Triggers log rotation. + * + * This method generates a message to the backend writer and triggers + * the corresponding message there. If the backend method fails, it + * sends a message back that will asynchronously call Disable(). + * + * See WriterBackend::Rotate() for arguments. + * + * This method must only be called from the main thread. + */ + void Rotate(const char* rotated_path, double open, double close, bool terminating); + + /** + * Finalizes writing to this tream. + * + * This method generates a message to the backend writer and triggers + * the corresponding message there. If the backend method fails, it + * sends a message back that will asynchronously call Disable(). + * + * This method must only be called from the main thread. + * + * @param network_time The network time when the finish was triggered. + */ + void Finish(double network_time); + + /** + * Explicitly triggers a transfer of all potentially buffered Write() + * operations over to the backend. + * + * This method must only be called from the main thread. + */ + void FlushWriteBuffer(); + + /** + * Disables the writer frontend. From now on, all method calls that + * would normally send message over to the backend, turn into no-ops. + * Note though that it does not stop the backend itself, use Stop() + * to do thast as well (this method is primarily for use as callback + * when the backend wants to disable the frontend). + * + * Disabled frontend will eventually be discarded by the + * logging::Manager. + * + * This method must only be called from the main thread. + */ + void SetDisable() { disabled = true; } + + /** + * Returns true if the writer frontend has been disabled with SetDisable(). + */ + bool Disabled() { return disabled; } + + /** + * Returns the additional writer information as passed into the constructor. + */ + const WriterBackend::WriterInfo& Info() const { return *info; } + + /** + * Returns the number of log fields as passed into the constructor. + */ + int NumFields() const { return num_fields; } + + /** + * Returns a descriptive name for the writer, including the type of + * the backend and the path used. + * + * This method is safe to call from any thread. + */ + const char* Name() const { return name; } + + /** + * Returns the log fields as passed into the constructor. + */ + const threading::Field* const * Fields() const { return fields; } + +protected: + friend class Manager; + + void DeleteVals(threading::Value** vals); + + EnumVal* stream; + EnumVal* writer; + + WriterBackend* backend; // The backend we have instanatiated. + bool disabled; // True if disabled. + bool initialized; // True if initialized. + bool buf; // True if buffering is enabled (default). 
+ bool local; // True if logging locally. + bool remote; // True if loggin remotely. + + const char* name; // Descriptive name of the + WriterBackend::WriterInfo* info; // The writer information. + int num_fields; // The number of log fields. + const threading::Field* const* fields; // The log fields. + + // Buffer for bulk writes. + static const int WRITER_BUFFER_SIZE = 1000; + int write_buffer_pos; // Position of next write in buffer. + threading::Value*** write_buffer; // Buffer of size WRITER_BUFFER_SIZE. +}; + +} + +#endif diff --git a/src/logging/writers/Ascii.cc b/src/logging/writers/Ascii.cc new file mode 100644 index 0000000000..11b322f5a3 --- /dev/null +++ b/src/logging/writers/Ascii.cc @@ -0,0 +1,440 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include +#include +#include +#include + +#include "NetVar.h" +#include "threading/SerialTypes.h" + +#include "Ascii.h" + +using namespace logging; +using namespace writer; +using threading::Value; +using threading::Field; + +Ascii::Ascii(WriterFrontend* frontend) : WriterBackend(frontend) + { + fd = 0; + ascii_done = false; + + output_to_stdout = BifConst::LogAscii::output_to_stdout; + include_meta = BifConst::LogAscii::include_meta; + + separator_len = BifConst::LogAscii::separator->Len(); + separator = new char[separator_len]; + memcpy(separator, BifConst::LogAscii::separator->Bytes(), + separator_len); + + set_separator_len = BifConst::LogAscii::set_separator->Len(); + set_separator = new char[set_separator_len]; + memcpy(set_separator, BifConst::LogAscii::set_separator->Bytes(), + set_separator_len); + + empty_field_len = BifConst::LogAscii::empty_field->Len(); + empty_field = new char[empty_field_len]; + memcpy(empty_field, BifConst::LogAscii::empty_field->Bytes(), + empty_field_len); + + unset_field_len = BifConst::LogAscii::unset_field->Len(); + unset_field = new char[unset_field_len]; + memcpy(unset_field, BifConst::LogAscii::unset_field->Bytes(), + unset_field_len); + + meta_prefix_len = BifConst::LogAscii::meta_prefix->Len(); + meta_prefix = new char[meta_prefix_len]; + memcpy(meta_prefix, BifConst::LogAscii::meta_prefix->Bytes(), + meta_prefix_len); + + desc.EnableEscaping(); + desc.AddEscapeSequence(separator, separator_len); + } + +Ascii::~Ascii() + { + if ( ! ascii_done ) + { + fprintf(stderr, "internal error: finish missing\n"); + abort(); + } + + delete [] separator; + delete [] set_separator; + delete [] empty_field; + delete [] unset_field; + delete [] meta_prefix; + } + +bool Ascii::WriteHeaderField(const string& key, const string& val) + { + string str = string(meta_prefix, meta_prefix_len) + + key + string(separator, separator_len) + val + "\n"; + + return safe_write(fd, str.c_str(), str.length()); + } + +void Ascii::CloseFile(double t) + { + if ( ! fd ) + return; + + if ( include_meta ) + WriteHeaderField("close", Timestamp(0)); + + safe_close(fd); + fd = 0; + } + +bool Ascii::DoInit(const WriterInfo& info, int num_fields, const Field* const * fields) + { + assert(! fd); + + string path = info.path; + + if ( output_to_stdout ) + path = "/dev/stdout"; + + fname = IsSpecial(path) ? path : path + "." + LogExt(); + + fd = open(fname.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0666); + + if ( fd < 0 ) + { + Error(Fmt("cannot open %s: %s", fname.c_str(), + Strerror(errno))); + fd = 0; + return false; + } + + if ( include_meta ) + { + string names; + string types; + + string str = string(meta_prefix, meta_prefix_len) + + "separator " // Always use space as separator here. 
+ + get_escaped_string(string(separator, separator_len), false) + + "\n"; + + if ( ! safe_write(fd, str.c_str(), str.length()) ) + goto write_error; + + if ( ! (WriteHeaderField("set_separator", get_escaped_string( + string(set_separator, set_separator_len), false)) && + WriteHeaderField("empty_field", get_escaped_string( + string(empty_field, empty_field_len), false)) && + WriteHeaderField("unset_field", get_escaped_string( + string(unset_field, unset_field_len), false)) && + WriteHeaderField("path", get_escaped_string(path, false)) && + WriteHeaderField("open", Timestamp(0))) ) + goto write_error; + + for ( int i = 0; i < num_fields; ++i ) + { + if ( i > 0 ) + { + names += string(separator, separator_len); + types += string(separator, separator_len); + } + + names += string(fields[i]->name); + types += fields[i]->TypeName().c_str(); + } + + if ( ! (WriteHeaderField("fields", names) + && WriteHeaderField("types", types)) ) + goto write_error; + } + + return true; + +write_error: + Error(Fmt("error writing to %s: %s", fname.c_str(), Strerror(errno))); + return false; + } + +bool Ascii::DoFlush(double network_time) + { + fsync(fd); + return true; + } + +bool Ascii::DoFinish(double network_time) + { + if ( ascii_done ) + { + fprintf(stderr, "internal error: duplicate finish\n"); + abort(); + } + + ascii_done = true; + + CloseFile(network_time); + + return true; + } + + +bool Ascii::DoWriteOne(ODesc* desc, Value* val, const Field* field) + { + if ( ! val->present ) + { + desc->AddN(unset_field, unset_field_len); + return true; + } + + switch ( val->type ) { + + case TYPE_BOOL: + desc->Add(val->val.int_val ? "T" : "F"); + break; + + case TYPE_INT: + desc->Add(val->val.int_val); + break; + + case TYPE_COUNT: + case TYPE_COUNTER: + desc->Add(val->val.uint_val); + break; + + case TYPE_PORT: + desc->Add(val->val.port_val.port); + break; + + case TYPE_SUBNET: + desc->Add(Render(val->val.subnet_val)); + break; + + case TYPE_ADDR: + desc->Add(Render(val->val.addr_val)); + break; + + case TYPE_DOUBLE: + // Rendering via Add() truncates trailing 0s after the + // decimal point. The difference with TIME/INTERVAL is mainly + // to keep the log format consistent. + desc->Add(val->val.double_val); + break; + + case TYPE_INTERVAL: + case TYPE_TIME: + // Rendering via Render() keeps trailing 0s after the decimal + // point. The difference with DOUBLEis mainly to keep the log + // format consistent. + desc->Add(Render(val->val.double_val)); + break; + + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_FUNC: + { + int size = val->val.string_val.length; + const char* data = val->val.string_val.data; + + if ( ! size ) + { + desc->AddN(empty_field, empty_field_len); + break; + } + + if ( size == unset_field_len && memcmp(data, unset_field, size) == 0 ) + { + // The value we'd write out would match exactly the + // place-holder we use for unset optional fields. We + // escape the first character so that the output + // won't be ambigious. + static const char hex_chars[] = "0123456789abcdef"; + char hex[6] = "\\x00"; + hex[2] = hex_chars[((*data) & 0xf0) >> 4]; + hex[3] = hex_chars[(*data) & 0x0f]; + desc->AddRaw(hex, 4); + + ++data; + --size; + } + + if ( size ) + desc->AddN(data, size); + + break; + } + + case TYPE_TABLE: + { + if ( ! 
val->val.set_val.size ) + { + desc->AddN(empty_field, empty_field_len); + break; + } + + desc->AddEscapeSequence(set_separator, set_separator_len); + for ( int j = 0; j < val->val.set_val.size; j++ ) + { + if ( j > 0 ) + desc->AddRaw(set_separator, set_separator_len); + + if ( ! DoWriteOne(desc, val->val.set_val.vals[j], field) ) + { + desc->RemoveEscapeSequence(set_separator, set_separator_len); + return false; + } + } + desc->RemoveEscapeSequence(set_separator, set_separator_len); + + break; + } + + case TYPE_VECTOR: + { + if ( ! val->val.vector_val.size ) + { + desc->AddN(empty_field, empty_field_len); + break; + } + + desc->AddEscapeSequence(set_separator, set_separator_len); + for ( int j = 0; j < val->val.vector_val.size; j++ ) + { + if ( j > 0 ) + desc->AddRaw(set_separator, set_separator_len); + + if ( ! DoWriteOne(desc, val->val.vector_val.vals[j], field) ) + { + desc->RemoveEscapeSequence(set_separator, set_separator_len); + return false; + } + } + desc->RemoveEscapeSequence(set_separator, set_separator_len); + + break; + } + + default: + Error(Fmt("unsupported field format %d for %s", val->type, field->name)); + return false; + } + + return true; + } + +bool Ascii::DoWrite(int num_fields, const Field* const * fields, + Value** vals) + { + if ( ! fd ) + DoInit(Info(), NumFields(), Fields()); + + desc.Clear(); + + for ( int i = 0; i < num_fields; i++ ) + { + if ( i > 0 ) + desc.AddRaw(separator, separator_len); + + if ( ! DoWriteOne(&desc, vals[i], fields[i]) ) + return false; + } + + desc.AddRaw("\n", 1); + + const char* bytes = (const char*)desc.Bytes(); + int len = desc.Len(); + + if ( strncmp(bytes, meta_prefix, meta_prefix_len) == 0 ) + { + // It would so escape the first character. + char buf[16]; + snprintf(buf, sizeof(buf), "\\x%02x", bytes[0]); + + if ( ! safe_write(fd, buf, strlen(buf)) ) + goto write_error; + + ++bytes; + --len; + } + + if ( ! safe_write(fd, bytes, len) ) + goto write_error; + + if ( ! IsBuf() ) + fsync(fd); + + return true; + +write_error: + Error(Fmt("error writing to %s: %s", fname.c_str(), Strerror(errno))); + return false; + } + +bool Ascii::DoRotate(const char* rotated_path, double open, double close, bool terminating) + { + // Don't rotate special files or if there's not one currently open. + if ( ! fd || IsSpecial(Info().path) ) + { + FinishedRotation(); + return true; + } + + CloseFile(close); + + string nname = string(rotated_path) + "." + LogExt(); + rename(fname.c_str(), nname.c_str()); + + if ( ! FinishedRotation(nname.c_str(), fname.c_str(), open, close, terminating) ) + { + Error(Fmt("error rotating %s to %s", fname.c_str(), nname.c_str())); + return false; + } + + return true; + } + +bool Ascii::DoSetBuf(bool enabled) + { + // Nothing to do. + return true; + } + +bool Ascii::DoHeartbeat(double network_time, double current_time) + { + // Nothing to do. + return true; + } + +string Ascii::LogExt() + { + const char* ext = getenv("BRO_LOG_SUFFIX"); + if ( ! ext ) + ext = "log"; + + return ext; + } + +string Ascii::Timestamp(double t) + { + time_t teatime = time_t(t); + + if ( ! teatime ) + { + // Use wall clock. 
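+		// (A t of zero means the caller wants the current real time; the
+		// "open" and "close" header fields written via WriteHeaderField()
+		// use this.)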
+ struct timeval tv; + if ( gettimeofday(&tv, 0) < 0 ) + Error("gettimeofday failed"); + else + teatime = tv.tv_sec; + } + + struct tm tmbuf; + struct tm* tm = localtime_r(&teatime, &tmbuf); + + char tmp[128]; + const char* const date_fmt = "%Y-%m-%d-%H-%M-%S"; + strftime(tmp, sizeof(tmp), date_fmt, tm); + + return tmp; + } + + diff --git a/src/logging/writers/Ascii.h b/src/logging/writers/Ascii.h new file mode 100644 index 0000000000..cf0190aa80 --- /dev/null +++ b/src/logging/writers/Ascii.h @@ -0,0 +1,69 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// Log writer for delimiter-separated ASCII logs. + +#ifndef LOGGING_WRITER_ASCII_H +#define LOGGING_WRITER_ASCII_H + +#include "../WriterBackend.h" + +namespace logging { namespace writer { + +class Ascii : public WriterBackend { +public: + Ascii(WriterFrontend* frontend); + ~Ascii(); + + static WriterBackend* Instantiate(WriterFrontend* frontend) + { return new Ascii(frontend); } + static string LogExt(); + +protected: + virtual bool DoInit(const WriterInfo& info, int num_fields, + const threading::Field* const* fields); + virtual bool DoWrite(int num_fields, const threading::Field* const* fields, + threading::Value** vals); + virtual bool DoSetBuf(bool enabled); + virtual bool DoRotate(const char* rotated_path, double open, + double close, bool terminating); + virtual bool DoFlush(double network_time); + virtual bool DoFinish(double network_time); + virtual bool DoHeartbeat(double network_time, double current_time); + +private: + bool IsSpecial(string path) { return path.find("/dev/") == 0; } + bool DoWriteOne(ODesc* desc, threading::Value* val, const threading::Field* field); + bool WriteHeaderField(const string& key, const string& value); + void CloseFile(double t); + string Timestamp(double t); // Uses current time if t is zero. + + int fd; + string fname; + ODesc desc; + bool ascii_done; + + // Options set from the script-level. + bool output_to_stdout; + bool include_meta; + + char* separator; + int separator_len; + + char* set_separator; + int set_separator_len; + + char* empty_field; + int empty_field_len; + + char* unset_field; + int unset_field_len; + + char* meta_prefix; + int meta_prefix_len; +}; + +} +} + + +#endif diff --git a/src/logging/writers/DataSeries.cc b/src/logging/writers/DataSeries.cc new file mode 100644 index 0000000000..bc5a82ec54 --- /dev/null +++ b/src/logging/writers/DataSeries.cc @@ -0,0 +1,445 @@ +// See the file "COPYING" in the main distribution directory for copyright. + +#include "config.h" + +#ifdef USE_DATASERIES + +#include +#include +#include + +#include + +#include "NetVar.h" +#include "threading/SerialTypes.h" + +#include "DataSeries.h" + +using namespace logging; +using namespace writer; + +std::string DataSeries::LogValueToString(threading::Value *val) + { + // In some cases, no value is attached. If this is the case, return + // an empty string. + if( ! val->present ) + return ""; + + switch(val->type) { + case TYPE_BOOL: + return (val->val.int_val ? "true" : "false"); + + case TYPE_INT: + { + std::ostringstream ostr; + ostr << val->val.int_val; + return ostr.str(); + } + + case TYPE_COUNT: + case TYPE_COUNTER: + case TYPE_PORT: + { + std::ostringstream ostr; + ostr << val->val.uint_val; + return ostr.str(); + } + + case TYPE_SUBNET: + return Render(val->val.subnet_val); + + case TYPE_ADDR: + return Render(val->val.addr_val); + + // Note: These two cases are relatively special. 
We need to convert + // these values into their integer equivalents to maximize precision. + // At the moment, there won't be a noticeable effect (Bro uses the + // double format everywhere internally, so we've already lost the + // precision we'd gain here), but timestamps may eventually switch to + // this representation within Bro. + // + // In the near-term, this *should* lead to better pack_relative (and + // thus smaller output files). + case TYPE_TIME: + case TYPE_INTERVAL: + if ( ds_use_integer_for_time ) + { + std::ostringstream ostr; + ostr << (uint64_t)(DataSeries::TIME_SCALE * val->val.double_val); + return ostr.str(); + } + else + return Render(val->val.double_val); + + case TYPE_DOUBLE: + return Render(val->val.double_val); + + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_FUNC: + if ( ! val->val.string_val.length ) + return ""; + + return string(val->val.string_val.data, val->val.string_val.length); + + case TYPE_TABLE: + { + if ( ! val->val.set_val.size ) + return ""; + + string tmpString = ""; + + for ( int j = 0; j < val->val.set_val.size; j++ ) + { + if ( j > 0 ) + tmpString += ds_set_separator; + + tmpString += LogValueToString(val->val.set_val.vals[j]); + } + + return tmpString; + } + + case TYPE_VECTOR: + { + if ( ! val->val.vector_val.size ) + return ""; + + string tmpString = ""; + + for ( int j = 0; j < val->val.vector_val.size; j++ ) + { + if ( j > 0 ) + tmpString += ds_set_separator; + + tmpString += LogValueToString(val->val.vector_val.vals[j]); + } + + return tmpString; + } + + default: + InternalError(Fmt("unknown type %s in DataSeries::LogValueToString", type_name(val->type))); + return "cannot be reached"; + } +} + +string DataSeries::GetDSFieldType(const threading::Field *field) +{ + switch(field->type) { + case TYPE_BOOL: + return "bool"; + + case TYPE_COUNT: + case TYPE_COUNTER: + case TYPE_PORT: + case TYPE_INT: + return "int64"; + + case TYPE_DOUBLE: + return "double"; + + case TYPE_TIME: + case TYPE_INTERVAL: + return ds_use_integer_for_time ? "int64" : "double"; + + case TYPE_SUBNET: + case TYPE_ADDR: + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_TABLE: + case TYPE_VECTOR: + case TYPE_FUNC: + return "variable32"; + + default: + InternalError(Fmt("unknown type %s in DataSeries::GetDSFieldType", type_name(field->type))); + return "cannot be reached"; + } +} + +string DataSeries::BuildDSSchemaFromFieldTypes(const vector& vals, string sTitle) + { + if( ! sTitle.size() ) + sTitle = "GenericBroStream"; + + string xmlschema = "\n"; + + for( size_t i = 0; i < vals.size(); ++i ) + { + xmlschema += "\t\n"; + } + + xmlschema += "\n"; + + for( size_t i = 0; i < vals.size(); ++i ) + { + xmlschema += "\n"; + } + + return xmlschema; +} + +std::string DataSeries::GetDSOptionsForType(const threading::Field *field) +{ + switch( field->type ) { + case TYPE_TIME: + case TYPE_INTERVAL: + { + std::string s; + s += "pack_relative=\"" + std::string(field->name) + "\""; + + if ( ! 
ds_use_integer_for_time ) + s += " pack_scale=\"1e-6\" print_format=\"%.6f\" pack_scale_warn=\"no\""; + else + s += string(" units=\"") + TIME_UNIT() + "\" epoch=\"unix\""; + + return s; + } + + case TYPE_SUBNET: + case TYPE_ADDR: + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_TABLE: + case TYPE_VECTOR: + return "pack_unique=\"yes\""; + + default: + return ""; + } +} + +DataSeries::DataSeries(WriterFrontend* frontend) : WriterBackend(frontend) +{ + ds_compression = string((const char *)BifConst::LogDataSeries::compression->Bytes(), + BifConst::LogDataSeries::compression->Len()); + ds_dump_schema = BifConst::LogDataSeries::dump_schema; + ds_extent_size = BifConst::LogDataSeries::extent_size; + ds_num_threads = BifConst::LogDataSeries::num_threads; + ds_use_integer_for_time = BifConst::LogDataSeries::use_integer_for_time; + ds_set_separator = ","; +} + +DataSeries::~DataSeries() +{ +} + +bool DataSeries::OpenLog(string path) + { + log_file = new DataSeriesSink(path + ".ds", compress_type); + log_file->writeExtentLibrary(log_types); + + for( size_t i = 0; i < schema_list.size(); ++i ) + { + string fn = schema_list[i].field_name; + GeneralField* gf = 0; +#ifdef USE_PERFTOOLS_DEBUG + { + // GeneralField isn't cleaning up some results of xml parsing, reported + // here: https://github.com/dataseries/DataSeries/issues/1 + // Ignore for now to make leak tests pass. There's confidence that + // we do clean up the GeneralField* since the ExtentSeries dtor for + // member log_series would trigger an assert if dynamically allocated + // fields aren't deleted beforehand. + HeapLeakChecker::Disabler disabler; +#endif + gf = GeneralField::create(log_series, fn); +#ifdef USE_PERFTOOLS_DEBUG + } +#endif + extents.insert(std::make_pair(fn, gf)); + } + + if ( ds_extent_size < ROW_MIN ) + { + Warning(Fmt("%d is not a valid value for 'rows'. Using min of %d instead", (int)ds_extent_size, (int)ROW_MIN)); + ds_extent_size = ROW_MIN; + } + + else if( ds_extent_size > ROW_MAX ) + { + Warning(Fmt("%d is not a valid value for 'rows'. Using max of %d instead", (int)ds_extent_size, (int)ROW_MAX)); + ds_extent_size = ROW_MAX; + } + + log_output = new OutputModule(*log_file, log_series, log_type, ds_extent_size); + + return true; + } + +bool DataSeries::DoInit(const WriterInfo& info, int num_fields, const threading::Field* const * fields) + { + // We first construct an XML schema thing (and, if ds_dump_schema is + // set, dump it to path + ".ds.xml"). Assuming that goes well, we + // use that schema to build our output logfile and prepare it to be + // written to. + + // Note: compressor count must be set *BEFORE* DataSeriesSink is + // instantiated. + if( ds_num_threads < THREAD_MIN && ds_num_threads != 0 ) + { + Warning(Fmt("%d is too few threads! Using %d instead", (int)ds_num_threads, (int)THREAD_MIN)); + ds_num_threads = THREAD_MIN; + } + + if( ds_num_threads > THREAD_MAX ) + { + Warning(Fmt("%d is too many threads! 
Dropping back to %d", (int)ds_num_threads, (int)THREAD_MAX)); + ds_num_threads = THREAD_MAX; + } + + if( ds_num_threads > 0 ) + DataSeriesSink::setCompressorCount(ds_num_threads); + + for ( int i = 0; i < num_fields; i++ ) + { + const threading::Field* field = fields[i]; + SchemaValue val; + val.ds_type = GetDSFieldType(field); + val.field_name = string(field->name); + val.field_options = GetDSOptionsForType(field); + val.bro_type = field->TypeName(); + schema_list.push_back(val); + } + + string schema = BuildDSSchemaFromFieldTypes(schema_list, info.path); + + if( ds_dump_schema ) + { + string name = string(info.path) + ".ds.xml"; + FILE* pFile = fopen(name.c_str(), "wb" ); + + if( pFile ) + { + fwrite(schema.c_str(), 1, schema.length(), pFile); + fclose(pFile); + } + + else + Error(Fmt("cannot dump schema: %s", Strerror(errno))); + } + + compress_type = Extent::compress_all; + + if( ds_compression == "lzf" ) + compress_type = Extent::compress_lzf; + + else if( ds_compression == "lzo" ) + compress_type = Extent::compress_lzo; + + else if( ds_compression == "gz" ) + compress_type = Extent::compress_gz; + + else if( ds_compression == "bz2" ) + compress_type = Extent::compress_bz2; + + else if( ds_compression == "none" ) + compress_type = Extent::compress_none; + + else if( ds_compression == "any" ) + compress_type = Extent::compress_all; + + else + Warning(Fmt("%s is not a valid compression type. Valid types are: 'lzf', 'lzo', 'gz', 'bz2', 'none', 'any'. Defaulting to 'any'", ds_compression.c_str())); + + log_type = log_types.registerTypePtr(schema); + log_series.setType(log_type); + + return OpenLog(info.path); + } + +bool DataSeries::DoFlush(double network_time) +{ + // Flushing is handled by DataSeries automatically, so this function + // doesn't do anything. + return true; +} + +void DataSeries::CloseLog() + { + for( ExtentIterator iter = extents.begin(); iter != extents.end(); ++iter ) + delete iter->second; + + extents.clear(); + + // Don't delete the file before you delete the output, or bad things + // will happen. + delete log_output; + delete log_file; + + log_output = 0; + log_file = 0; + } + +bool DataSeries::DoFinish(double network_time) +{ + CloseLog(); + return true; +} + +bool DataSeries::DoWrite(int num_fields, const threading::Field* const * fields, + threading::Value** vals) +{ + log_output->newRecord(); + + for( size_t i = 0; i < (size_t)num_fields; ++i ) + { + ExtentIterator iter = extents.find(fields[i]->name); + assert(iter != extents.end()); + + if( iter != extents.end() ) + { + GeneralField *cField = iter->second; + + if( vals[i]->present ) + cField->set(LogValueToString(vals[i])); + } + } + + return true; +} + +bool DataSeries::DoRotate(const char* rotated_path, double open, double close, bool terminating) +{ + // Note that if DS files are rotated too often, the aggregate log + // size will be (much) larger. + CloseLog(); + + string dsname = string(Info().path) + ".ds"; + string nname = string(rotated_path) + ".ds"; + rename(dsname.c_str(), nname.c_str()); + + if ( ! FinishedRotation(nname.c_str(), dsname.c_str(), open, close, terminating) ) + { + Error(Fmt("error rotating %s to %s", dsname.c_str(), nname.c_str())); + return false; + } + + return OpenLog(Info().path); +} + +bool DataSeries::DoSetBuf(bool enabled) +{ + // DataSeries is *always* buffered to some degree. This option is ignored. 
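+	// (A writer that did honor it would typically force data out on every
+	// DoWrite() when buffering is disabled, as the Ascii writer does by
+	// calling fsync() whenever IsBuf() is false.)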
+ return true; +} + +bool DataSeries::DoHeartbeat(double network_time, double current_time) +{ + return true; +} + +#endif /* USE_DATASERIES */ diff --git a/src/logging/writers/DataSeries.h b/src/logging/writers/DataSeries.h new file mode 100644 index 0000000000..9773c7ce1b --- /dev/null +++ b/src/logging/writers/DataSeries.h @@ -0,0 +1,125 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// A binary log writer producing DataSeries output. See doc/data-series.rst +// for more information. + +#ifndef LOGGING_WRITER_DATA_SERIES_H +#define LOGGING_WRITER_DATA_SERIES_H + +#include +#include +#include +#include + +#include "../WriterBackend.h" + +namespace logging { namespace writer { + +class DataSeries : public WriterBackend { +public: + DataSeries(WriterFrontend* frontend); + ~DataSeries(); + + static WriterBackend* Instantiate(WriterFrontend* frontend) + { return new DataSeries(frontend); } + +protected: + // Overidden from WriterBackend. + + virtual bool DoInit(const WriterInfo& info, int num_fields, + const threading::Field* const * fields); + + virtual bool DoWrite(int num_fields, const threading::Field* const* fields, + threading::Value** vals); + virtual bool DoSetBuf(bool enabled); + virtual bool DoRotate(const char* rotated_path, double open, + double close, bool terminating); + virtual bool DoFlush(double network_time); + virtual bool DoFinish(double network_time); + virtual bool DoHeartbeat(double network_time, double current_time); + +private: + static const size_t ROW_MIN = 2048; // Minimum extent size. + static const size_t ROW_MAX = (1024 * 1024 * 100); // Maximum extent size. + static const size_t THREAD_MIN = 1; // Minimum number of compression threads that DataSeries may spawn. + static const size_t THREAD_MAX = 128; // Maximum number of compression threads that DataSeries may spawn. + static const size_t TIME_SCALE = 1000000; // Fixed-point multiplier for time values when converted to integers. + const char* TIME_UNIT() { return "microseconds"; } // DS name for time resolution when converted to integers. Must match TIME_SCALE. + + struct SchemaValue + { + string ds_type; + string bro_type; + string field_name; + string field_options; + }; + + /** + * Turns a log value into a std::string. Uses an ostringstream to do the + * heavy lifting, but still need to switch on the type to know which value + * in the union to give to the string string for processing. + * + * @param val The value we wish to convert to a string + * @return the string value of val + */ + std::string LogValueToString(threading::Value *val); + + /** + * Takes a field type and converts it to a relevant DataSeries type. + * + * @param field We extract the type from this and convert it into a relevant DS type. + * @return String representation of type that DataSeries can understand. + */ + string GetDSFieldType(const threading::Field *field); + + /** + * Are there any options we should put into the XML schema? + * + * @param field We extract the type from this and return any options that make sense for that type. + * @return Options that can be added directly to the XML (e.g. "pack_relative=\"yes\"") + */ + std::string GetDSOptionsForType(const threading::Field *field); + + /** + * Takes a list of types, a list of names, and a title, and uses it to construct a valid DataSeries XML schema + * thing, which is then returned as a std::string + * + * @param opts std::vector of strings containing a list of options to be appended to each field (e.g. 
"pack_relative=yes") + * @param sTitle Name of this schema. Ideally, these schemas would be aggregated and re-used. + */ + string BuildDSSchemaFromFieldTypes(const vector& vals, string sTitle); + + /** Closes the currently open file. */ + void CloseLog(); + + /** Opens a new file. */ + bool OpenLog(string path); + + typedef std::map ExtentMap; + typedef ExtentMap::iterator ExtentIterator; + + // Internal DataSeries structures we need to keep track of. + vector schema_list; + ExtentTypeLibrary log_types; + ExtentType::Ptr log_type; + ExtentSeries log_series; + ExtentMap extents; + int compress_type; + + DataSeriesSink* log_file; + OutputModule* log_output; + + // Options set from the script-level. + uint64 ds_extent_size; + uint64 ds_num_threads; + string ds_compression; + bool ds_dump_schema; + bool ds_use_integer_for_time; + string ds_set_separator; +}; + +} +} + +#endif + diff --git a/src/logging/writers/ElasticSearch.cc b/src/logging/writers/ElasticSearch.cc new file mode 100644 index 0000000000..ae825ac997 --- /dev/null +++ b/src/logging/writers/ElasticSearch.cc @@ -0,0 +1,425 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// This is experimental code that is not yet ready for production usage. +// + + +#include "config.h" + +#ifdef USE_ELASTICSEARCH + +#include "util.h" // Needs to come first for stdint.h + +#include +#include + +#include "BroString.h" +#include "NetVar.h" +#include "threading/SerialTypes.h" + +#include +#include + +#include "ElasticSearch.h" + +using namespace logging; +using namespace writer; +using threading::Value; +using threading::Field; + +ElasticSearch::ElasticSearch(WriterFrontend* frontend) : WriterBackend(frontend) + { + cluster_name_len = BifConst::LogElasticSearch::cluster_name->Len(); + cluster_name = new char[cluster_name_len + 1]; + memcpy(cluster_name, BifConst::LogElasticSearch::cluster_name->Bytes(), cluster_name_len); + cluster_name[cluster_name_len] = 0; + + index_prefix = string((const char*) BifConst::LogElasticSearch::index_prefix->Bytes(), BifConst::LogElasticSearch::index_prefix->Len()); + + es_server = string(Fmt("http://%s:%d", BifConst::LogElasticSearch::server_host->Bytes(), + (int) BifConst::LogElasticSearch::server_port)); + bulk_url = string(Fmt("%s/_bulk", es_server.c_str())); + + http_headers = curl_slist_append(NULL, "Content-Type: text/json; charset=utf-8"); + buffer.Clear(); + counter = 0; + current_index = string(); + prev_index = string(); + last_send = current_time(); + failing = false; + + transfer_timeout = static_cast(BifConst::LogElasticSearch::transfer_timeout); + + curl_handle = HTTPSetup(); +} + +ElasticSearch::~ElasticSearch() + { + delete [] cluster_name; + } + +bool ElasticSearch::DoInit(const WriterInfo& info, int num_fields, const threading::Field* const* fields) + { + return true; + } + +bool ElasticSearch::DoFlush(double network_time) + { + BatchIndex(); + return true; + } + +bool ElasticSearch::DoFinish(double network_time) + { + BatchIndex(); + curl_slist_free_all(http_headers); + curl_easy_cleanup(curl_handle); + return true; + } + +bool ElasticSearch::BatchIndex() + { + curl_easy_reset(curl_handle); + curl_easy_setopt(curl_handle, CURLOPT_URL, bulk_url.c_str()); + curl_easy_setopt(curl_handle, CURLOPT_POST, 1); + curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t)buffer.Len()); + curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDS, buffer.Bytes()); + failing = ! 
HTTPSend(curl_handle); + + // We are currently throwing the data out regardless of if the send failed. Fire and forget! + buffer.Clear(); + counter = 0; + last_send = current_time(); + + return true; + } + +bool ElasticSearch::AddValueToBuffer(ODesc* b, Value* val) + { + switch ( val->type ) + { + // ES treats 0 as false and any other value as true so bool types go here. + case TYPE_BOOL: + case TYPE_INT: + b->Add(val->val.int_val); + break; + + case TYPE_COUNT: + case TYPE_COUNTER: + { + // ElasticSearch doesn't seem to support unsigned 64bit ints. + if ( val->val.uint_val >= INT64_MAX ) + { + Error(Fmt("count value too large: %" PRIu64, val->val.uint_val)); + b->AddRaw("null", 4); + } + else + b->Add(val->val.uint_val); + break; + } + + case TYPE_PORT: + b->Add(val->val.port_val.port); + break; + + case TYPE_SUBNET: + b->AddRaw("\"", 1); + b->Add(Render(val->val.subnet_val)); + b->AddRaw("\"", 1); + break; + + case TYPE_ADDR: + b->AddRaw("\"", 1); + b->Add(Render(val->val.addr_val)); + b->AddRaw("\"", 1); + break; + + case TYPE_DOUBLE: + case TYPE_INTERVAL: + b->Add(val->val.double_val); + break; + + case TYPE_TIME: + { + // ElasticSearch uses milliseconds for timestamps and json only + // supports signed ints (uints can be too large). + uint64_t ts = (uint64_t) (val->val.double_val * 1000); + if ( ts >= INT64_MAX ) + { + Error(Fmt("time value too large: %" PRIu64, ts)); + b->AddRaw("null", 4); + } + else + b->Add(ts); + break; + } + + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_FUNC: + { + b->AddRaw("\"", 1); + for ( int i = 0; i < val->val.string_val.length; ++i ) + { + char c = val->val.string_val.data[i]; + // 2byte Unicode escape special characters. + if ( c < 32 || c > 126 || c == '\n' || c == '"' || c == '\'' || c == '\\' || c == '&' ) + { + static const char hex_chars[] = "0123456789abcdef"; + b->AddRaw("\\u00", 4); + b->AddRaw(&hex_chars[(c & 0xf0) >> 4], 1); + b->AddRaw(&hex_chars[c & 0x0f], 1); + } + else + b->AddRaw(&c, 1); + } + b->AddRaw("\"", 1); + break; + } + + case TYPE_TABLE: + { + b->AddRaw("[", 1); + for ( int j = 0; j < val->val.set_val.size; j++ ) + { + if ( j > 0 ) + b->AddRaw(",", 1); + AddValueToBuffer(b, val->val.set_val.vals[j]); + } + b->AddRaw("]", 1); + break; + } + + case TYPE_VECTOR: + { + b->AddRaw("[", 1); + for ( int j = 0; j < val->val.vector_val.size; j++ ) + { + if ( j > 0 ) + b->AddRaw(",", 1); + AddValueToBuffer(b, val->val.vector_val.vals[j]); + } + b->AddRaw("]", 1); + break; + } + + default: + return false; + } + return true; + } + +bool ElasticSearch::AddFieldToBuffer(ODesc *b, Value* val, const Field* field) + { + if ( ! 
val->present ) + return false; + + b->AddRaw("\"", 1); + b->Add(field->name); + b->AddRaw("\":", 2); + AddValueToBuffer(b, val); + return true; + } + +bool ElasticSearch::DoWrite(int num_fields, const Field* const * fields, + Value** vals) + { + if ( current_index.empty() ) + UpdateIndex(network_time, Info().rotation_interval, Info().rotation_base); + + // Our action line looks like: + buffer.AddRaw("{\"index\":{\"_index\":\"", 20); + buffer.Add(current_index); + buffer.AddRaw("\",\"_type\":\"", 11); + buffer.Add(Info().path); + buffer.AddRaw("\"}}\n", 4); + + buffer.AddRaw("{", 1); + for ( int i = 0; i < num_fields; i++ ) + { + if ( i > 0 && buffer.Bytes()[buffer.Len()] != ',' && vals[i]->present ) + buffer.AddRaw(",", 1); + AddFieldToBuffer(&buffer, vals[i], fields[i]); + } + buffer.AddRaw("}\n", 2); + + counter++; + if ( counter >= BifConst::LogElasticSearch::max_batch_size || + uint(buffer.Len()) >= BifConst::LogElasticSearch::max_byte_size ) + BatchIndex(); + + return true; + } + +bool ElasticSearch::UpdateIndex(double now, double rinterval, double rbase) + { + if ( rinterval == 0 ) + { + // if logs aren't being rotated, don't use a rotation oriented index name. + current_index = index_prefix; + } + else + { + double nr = calc_next_rotate(now, rinterval, rbase); + double interval_beginning = now - (rinterval - nr); + + struct tm tm; + char buf[128]; + time_t teatime = (time_t)interval_beginning; + localtime_r(&teatime, &tm); + strftime(buf, sizeof(buf), "%Y%m%d%H%M", &tm); + + prev_index = current_index; + current_index = index_prefix + "-" + buf; + + // Send some metadata about this index. + buffer.AddRaw("{\"index\":{\"_index\":\"@", 21); + buffer.Add(index_prefix); + buffer.AddRaw("-meta\",\"_type\":\"index\",\"_id\":\"", 30); + buffer.Add(current_index); + buffer.AddRaw("-", 1); + buffer.Add(Info().rotation_base); + buffer.AddRaw("-", 1); + buffer.Add(Info().rotation_interval); + buffer.AddRaw("\"}}\n{\"name\":\"", 13); + buffer.Add(current_index); + buffer.AddRaw("\",\"start\":", 10); + buffer.Add(interval_beginning); + buffer.AddRaw(",\"end\":", 7); + buffer.Add(interval_beginning+rinterval); + buffer.AddRaw("}\n", 2); + } + + //printf("%s - prev:%s current:%s\n", Info().path.c_str(), prev_index.c_str(), current_index.c_str()); + return true; + } + + +bool ElasticSearch::DoRotate(const char* rotated_path, double open, double close, bool terminating) + { + // Update the currently used index to the new rotation interval. + UpdateIndex(close, Info().rotation_interval, Info().rotation_base); + + // Only do this stuff if there was a previous index. + if ( ! prev_index.empty() ) + { + // FIXME: I think this section is taking too long and causing the thread to die. + + // Compress the previous index + //curl_easy_reset(curl_handle); + //curl_easy_setopt(curl_handle, CURLOPT_URL, Fmt("%s/%s/_settings", es_server.c_str(), prev_index.c_str())); + //curl_easy_setopt(curl_handle, CURLOPT_CUSTOMREQUEST, "PUT"); + //curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDS, "{\"index\":{\"store.compress.stored\":\"true\"}}"); + //curl_easy_setopt(curl_handle, CURLOPT_POSTFIELDSIZE_LARGE, (curl_off_t) 42); + //HTTPSend(curl_handle); + + // Optimize the previous index. + // TODO: make this into variables. + //curl_easy_reset(curl_handle); + //curl_easy_setopt(curl_handle, CURLOPT_URL, Fmt("%s/%s/_optimize?max_num_segments=1&wait_for_merge=false", es_server.c_str(), prev_index.c_str())); + //HTTPSend(curl_handle); + } + + if ( ! 
FinishedRotation(current_index.c_str(), prev_index.c_str(), open, close, terminating) ) + Error(Fmt("error rotating %s to %s", prev_index.c_str(), current_index.c_str())); + + return true; + } + +bool ElasticSearch::DoSetBuf(bool enabled) + { + // Nothing to do. + return true; + } + +bool ElasticSearch::DoHeartbeat(double network_time, double current_time) + { + if ( last_send > 0 && buffer.Len() > 0 && + current_time-last_send > BifConst::LogElasticSearch::max_batch_interval ) + { + BatchIndex(); + } + + return true; + } + + +CURL* ElasticSearch::HTTPSetup() + { + CURL* handle = curl_easy_init(); + if ( ! handle ) + { + Error("cURL did not initialize correctly."); + return 0; + } + + return handle; + } + +size_t ElasticSearch::HTTPReceive(void* ptr, int size, int nmemb, void* userdata) + { + //TODO: Do some verification on the result? + return size; + } + +bool ElasticSearch::HTTPSend(CURL *handle) + { + curl_easy_setopt(handle, CURLOPT_HTTPHEADER, http_headers); + curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, &logging::writer::ElasticSearch::HTTPReceive); // This gets called with the result. + // HTTP 1.1 likes to use chunked encoded transfers, which aren't good for speed. + // The best (only?) way to disable that is to just use HTTP 1.0 + curl_easy_setopt(handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0); + + // Some timeout options. These will need more attention later. + curl_easy_setopt(handle, CURLOPT_NOSIGNAL, 1); + curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT, transfer_timeout); + curl_easy_setopt(handle, CURLOPT_TIMEOUT, transfer_timeout); + curl_easy_setopt(handle, CURLOPT_DNS_CACHE_TIMEOUT, 60*60); + + CURLcode return_code = curl_easy_perform(handle); + + switch ( return_code ) + { + case CURLE_COULDNT_CONNECT: + case CURLE_COULDNT_RESOLVE_HOST: + case CURLE_WRITE_ERROR: + case CURLE_RECV_ERROR: + { + if ( ! failing ) + Error(Fmt("ElasticSearch server may not be accessible.")); + + break; + } + + case CURLE_OPERATION_TIMEDOUT: + { + if ( ! failing ) + Warning(Fmt("HTTP operation with elasticsearch server timed out at %" PRIu64 " msecs.", transfer_timeout)); + + break; + } + + case CURLE_OK: + { + uint http_code = 0; + curl_easy_getinfo(curl_handle, CURLINFO_RESPONSE_CODE, &http_code); + if ( http_code == 200 ) + // Hopefully everything goes through here. + return true; + else if ( ! failing ) + Error(Fmt("Received a non-successful status code back from ElasticSearch server, check the elasticsearch server log.")); + + break; + } + + default: + { + break; + } + } + // The "successful" return happens above + return false; + } + +#endif diff --git a/src/logging/writers/ElasticSearch.h b/src/logging/writers/ElasticSearch.h new file mode 100644 index 0000000000..fef0a00ffd --- /dev/null +++ b/src/logging/writers/ElasticSearch.h @@ -0,0 +1,81 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// Log writer for writing to an ElasticSearch database +// +// This is experimental code that is not yet ready for production usage. +// + +#ifndef LOGGING_WRITER_ELASTICSEARCH_H +#define LOGGING_WRITER_ELASTICSEARCH_H + +#include +#include "../WriterBackend.h" + +namespace logging { namespace writer { + +class ElasticSearch : public WriterBackend { +public: + ElasticSearch(WriterFrontend* frontend); + ~ElasticSearch(); + + static WriterBackend* Instantiate(WriterFrontend* frontend) + { return new ElasticSearch(frontend); } + static string LogExt(); + +protected: + // Overidden from WriterBackend. 
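+	// (Note: DoWrite() only buffers records; BatchIndex() ships the buffer to
+	// the server's _bulk endpoint from DoFlush(), DoHeartbeat() and, once the
+	// batch thresholds are hit, from DoWrite() itself. See ElasticSearch.cc.)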
+ + virtual bool DoInit(const WriterInfo& info, int num_fields, + const threading::Field* const* fields); + + virtual bool DoWrite(int num_fields, const threading::Field* const* fields, + threading::Value** vals); + virtual bool DoSetBuf(bool enabled); + virtual bool DoRotate(const char* rotated_path, double open, + double close, bool terminating); + virtual bool DoFlush(double network_time); + virtual bool DoFinish(double network_time); + virtual bool DoHeartbeat(double network_time, double current_time); + +private: + bool AddFieldToBuffer(ODesc *b, threading::Value* val, const threading::Field* field); + bool AddValueToBuffer(ODesc *b, threading::Value* val); + bool BatchIndex(); + bool SendMappings(); + bool UpdateIndex(double now, double rinterval, double rbase); + + CURL* HTTPSetup(); + size_t HTTPReceive(void* ptr, int size, int nmemb, void* userdata); + bool HTTPSend(CURL *handle); + + // Buffers, etc. + ODesc buffer; + uint64 counter; + double last_send; + string current_index; + string prev_index; + + CURL* curl_handle; + + // From scripts + char* cluster_name; + int cluster_name_len; + + string es_server; + string bulk_url; + + struct curl_slist *http_headers; + + string path; + string index_prefix; + long transfer_timeout; + bool failing; + + uint64 batch_size; +}; + +} +} + + +#endif diff --git a/src/logging/writers/None.cc b/src/logging/writers/None.cc new file mode 100644 index 0000000000..9b91b82199 --- /dev/null +++ b/src/logging/writers/None.cc @@ -0,0 +1,56 @@ + +#include + +#include "None.h" +#include "NetVar.h" + +using namespace logging; +using namespace writer; + +bool None::DoInit(const WriterInfo& info, int num_fields, + const threading::Field* const * fields) + { + if ( BifConst::LogNone::debug ) + { + std::cout << "[logging::writer::None]" << std::endl; + std::cout << " path=" << info.path << std::endl; + std::cout << " rotation_interval=" << info.rotation_interval << std::endl; + std::cout << " rotation_base=" << info.rotation_base << std::endl; + + // Output the config sorted by keys. + + std::vector > keys; + + for ( WriterInfo::config_map::const_iterator i = info.config.begin(); i != info.config.end(); i++ ) + keys.push_back(std::make_pair(i->first, i->second)); + + std::sort(keys.begin(), keys.end()); + + for ( std::vector >::const_iterator i = keys.begin(); i != keys.end(); i++ ) + std::cout << " config[" << (*i).first << "] = " << (*i).second << std::endl; + + for ( int i = 0; i < num_fields; i++ ) + { + const threading::Field* field = fields[i]; + std::cout << " field " << field->name << ": " + << type_name(field->type) << std::endl; + } + + std::cout << std::endl; + } + + return true; + } + +bool None::DoRotate(const char* rotated_path, double open, double close, bool terminating) + { + if ( ! FinishedRotation("/dev/null", Info().path, open, close, terminating)) + { + Error(Fmt("error rotating %s", Info().path)); + return false; + } + + return true; + } + + diff --git a/src/logging/writers/None.h b/src/logging/writers/None.h new file mode 100644 index 0000000000..2a6f71a06a --- /dev/null +++ b/src/logging/writers/None.h @@ -0,0 +1,37 @@ +// See the file "COPYING" in the main distribution directory for copyright. +// +// Dummy log writer that just discards everything (but still pretends to rotate). 
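+//
+// A minimal usage sketch (assuming the stock script-level names shipped with
+// the logging framework; adjust if they differ in your tree):
+//
+//   redef Log::default_writer = Log::WRITER_NONE;
+//   redef LogNone::debug = T;   # print writer parameters to stdout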
+ +#ifndef LOGGING_WRITER_NONE_H +#define LOGGING_WRITER_NONE_H + +#include "../WriterBackend.h" + +namespace logging { namespace writer { + +class None : public WriterBackend { +public: + None(WriterFrontend* frontend) : WriterBackend(frontend) {} + ~None() {}; + + static WriterBackend* Instantiate(WriterFrontend* frontend) + { return new None(frontend); } + +protected: + virtual bool DoInit(const WriterInfo& info, int num_fields, + const threading::Field* const * fields); + + virtual bool DoWrite(int num_fields, const threading::Field* const* fields, + threading::Value** vals) { return true; } + virtual bool DoSetBuf(bool enabled) { return true; } + virtual bool DoRotate(const char* rotated_path, double open, + double close, bool terminating); + virtual bool DoFlush(double network_time) { return true; } + virtual bool DoFinish(double network_time) { return true; } + virtual bool DoHeartbeat(double network_time, double current_time) { return true; } +}; + +} +} + +#endif diff --git a/src/main.cc b/src/main.cc index dfa46c3050..5999186240 100644 --- a/src/main.cc +++ b/src/main.cc @@ -12,12 +12,18 @@ #include #endif +#ifdef USE_CURL +#include +#endif + #ifdef USE_IDMEF extern "C" { #include } #endif +#include + extern "C" void OPENSSL_add_all_algorithms_conf(void); #include "bsd-getopt-long.h" @@ -29,7 +35,6 @@ extern "C" void OPENSSL_add_all_algorithms_conf(void); #include "Event.h" #include "File.h" #include "Reporter.h" -#include "LogMgr.h" #include "Net.h" #include "NetVar.h" #include "Var.h" @@ -44,12 +49,19 @@ extern "C" void OPENSSL_add_all_algorithms_conf(void); #include "PersistenceSerializer.h" #include "EventRegistry.h" #include "Stats.h" -#include "ConnCompressor.h" #include "DPM.h" #include "BroDoc.h" +#include "Brofiler.h" + +#include "threading/Manager.h" +#include "input/Manager.h" +#include "logging/Manager.h" +#include "logging/writers/Ascii.h" #include "binpac_bro.h" +Brofiler brofiler; + #ifndef HAVE_STRSEP extern "C" { char* strsep(char**, const char*); @@ -60,7 +72,7 @@ extern "C" { #include "setsignal.h" }; -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG HeapLeakChecker* heap_checker = 0; int perftools_leaks = 0; int perftools_profile = 0; @@ -71,7 +83,9 @@ char* writefile = 0; name_list prefixes; DNS_Mgr* dns_mgr; TimerMgr* timer_mgr; -LogMgr* log_mgr; +logging::Manager* log_mgr = 0; +threading::Manager* thread_mgr = 0; +input::Manager* input_mgr = 0; Stmt* stmts; EventHandlerPtr net_done = 0; RuleMatcher* rule_matcher = 0; @@ -91,12 +105,11 @@ int do_notice_analysis = 0; int rule_bench = 0; int generate_documentation = 0; SecondaryPath* secondary_path = 0; -ConnCompressor* conn_compressor = 0; extern char version[]; char* command_line_policy = 0; vector params; char* proc_status_file = 0; -int snaplen = 65535; // really want "capture entire packet" +int snaplen = 0; // this gets set from the scripting-layer's value int FLAGS_use_binpac = false; @@ -144,7 +157,6 @@ void usage() fprintf(stderr, " -g|--dump-config | dump current config into .state dir\n"); fprintf(stderr, " -h|--help|-? 
| command line help\n"); fprintf(stderr, " -i|--iface | read from given interface\n"); - fprintf(stderr, " -l|--snaplen | number of bytes per packet to capture from interfaces (default 65535)\n"); fprintf(stderr, " -p|--prefix | add given prefix to policy file resolution\n"); fprintf(stderr, " -r|--readfile | read from given tcpdump file\n"); fprintf(stderr, " -y|--flowfile [=] | read from given flow file\n"); @@ -173,7 +185,7 @@ void usage() fprintf(stderr, " -W|--watchdog | activate watchdog timer\n"); fprintf(stderr, " -Z|--doc-scripts | generate documentation for all loaded scripts\n"); -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG fprintf(stderr, " -m|--mem-leaks | show leaks [perftools]\n"); fprintf(stderr, " -M|--mem-profile | record heap [perftools]\n"); #endif @@ -194,6 +206,29 @@ void usage() fprintf(stderr, " $BRO_PREFIXES | prefix list (%s)\n", bro_prefixes()); fprintf(stderr, " $BRO_DNS_FAKE | disable DNS lookups (%s)\n", bro_dns_fake()); fprintf(stderr, " $BRO_SEED_FILE | file to load seeds from (not set)\n"); + fprintf(stderr, " $BRO_LOG_SUFFIX | ASCII log file extension (.%s)\n", logging::writer::Ascii::LogExt().c_str()); + fprintf(stderr, " $BRO_PROFILER_FILE | Output file for script execution statistics (not set)\n"); + + fprintf(stderr, "\n"); + fprintf(stderr, " Supported log formats: "); + + bool first = true; + list fmts = logging::Manager::SupportedFormats(); + + for ( list::const_iterator i = fmts.begin(); i != fmts.end(); ++i ) + { + if ( *i == "None" ) + // Skip, it's uninteresting. + continue; + + if ( ! first ) + fprintf(stderr, ","); + + fprintf(stderr, "%s", (*i).c_str()); + first = false; + } + + fprintf(stderr, "\n"); exit(1); } @@ -238,7 +273,7 @@ void done_with_network() net_finish(1); -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG if ( perftools_profile ) { @@ -260,6 +295,8 @@ void terminate_bro() terminating = true; + brofiler.WriteStats(); + EventHandlerPtr bro_done = internal_handler("bro_done"); if ( bro_done ) mgr.QueueEvent(bro_done, new val_list); @@ -280,6 +317,13 @@ void terminate_bro() if ( remote_serializer ) remote_serializer->LogStats(); + mgr.Drain(); + + log_mgr->Terminate(); + thread_mgr->Terminate(); + + mgr.Drain(); + delete timer_mgr; delete dns_mgr; delete persistence_serializer; @@ -288,11 +332,13 @@ void terminate_bro() delete state_serializer; delete event_registry; delete secondary_path; - delete conn_compressor; delete remote_serializer; delete dpm; delete log_mgr; + delete thread_mgr; delete reporter; + + reporter = 0; } void termination_signal() @@ -320,6 +366,7 @@ RETSIGTYPE sig_handler(int signo) { set_processing_status("TERMINATING", "sig_handler"); signal_val = signo; + return RETSIGVAL; } @@ -335,6 +382,10 @@ static void bro_new_handler() int main(int argc, char** argv) { + std::set_new_handler(bro_new_handler); + + brofiler.ReadStats(); + bro_argc = argc; bro_argv = new char* [argc]; @@ -370,7 +421,6 @@ int main(int argc, char** argv) {"filter", required_argument, 0, 'f'}, {"help", no_argument, 0, 'h'}, {"iface", required_argument, 0, 'i'}, - {"snaplen", required_argument, 0, 'l'}, {"doc-scripts", no_argument, 0, 'Z'}, {"prefix", required_argument, 0, 'p'}, {"readfile", required_argument, 0, 'r'}, @@ -405,7 +455,7 @@ int main(int argc, char** argv) #ifdef USE_IDMEF {"idmef-dtd", required_argument, 0, 'n'}, #endif -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG {"mem-leaks", no_argument, 0, 'm'}, {"mem-profile", no_argument, 0, 'M'}, #endif @@ -447,7 +497,7 @@ int main(int argc, char** argv) safe_strncpy(opts, 
"B:D:e:f:I:i:K:l:n:p:R:r:s:T:t:U:w:x:X:y:Y:z:CFGLOPSWbdghvZ", sizeof(opts)); -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG strncat(opts, "mM", 2); #endif @@ -479,10 +529,6 @@ int main(int argc, char** argv) interfaces.append(optarg); break; - case 'l': - snaplen = atoi(optarg); - break; - case 'p': prefixes.append(optarg); break; @@ -555,8 +601,7 @@ int main(int argc, char** argv) break; case 'K': - hash_md5(strlen(optarg), (const u_char*) optarg, - shared_hmac_md5_key); + MD5((const u_char*) optarg, strlen(optarg), shared_hmac_md5_key); hmac_key_set = 1; break; @@ -607,7 +652,7 @@ int main(int argc, char** argv) exit(0); break; -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG case 'm': perftools_leaks = 1; break; @@ -657,7 +702,9 @@ int main(int argc, char** argv) set_processing_status("INITIALIZING", "main"); bro_start_time = current_time(true); + reporter = new Reporter(); + thread_mgr = new threading::Manager(); #ifdef DEBUG if ( debug_streams ) @@ -673,6 +720,10 @@ int main(int argc, char** argv) SSL_library_init(); SSL_load_error_strings(); +#ifdef USE_CURL + curl_global_init(CURL_GLOBAL_ALL); +#endif + // FIXME: On systems that don't provide /dev/urandom, OpenSSL doesn't // seed the PRNG. We should do this here (but at least Linux, FreeBSD // and Solaris provide /dev/urandom). @@ -723,7 +774,8 @@ int main(int argc, char** argv) persistence_serializer = new PersistenceSerializer(); remote_serializer = new RemoteSerializer(); event_registry = new EventRegistry(); - log_mgr = new LogMgr(); + log_mgr = new logging::Manager(); + input_mgr = new input::Manager(); if ( events_file ) event_player = new EventPlayer(events_file); @@ -741,14 +793,14 @@ int main(int argc, char** argv) // nevertheless reported; see perftools docs), thus // we suppress some messages here. -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG { HeapLeakChecker::Disabler disabler; #endif yyparse(); -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG } #endif @@ -794,6 +846,10 @@ int main(int argc, char** argv) if ( *s ) rule_files.append(s); + // Append signature files defined in @load-sigs + for ( size_t i = 0; i < sig_files.size(); ++i ) + rule_files.append(copy_string(sig_files[i].c_str())); + if ( rule_files.length() > 0 ) { rule_matcher = new RuleMatcher(RE_level); @@ -809,8 +865,6 @@ int main(int argc, char** argv) delete [] script_rule_files; - conn_compressor = new ConnCompressor(); - if ( g_policy_debug ) // ### Add support for debug command file. dbg_init_debugger(0); @@ -831,12 +885,14 @@ int main(int argc, char** argv) } } + snaplen = internal_val("snaplen")->AsCount(); + // Initialize the secondary path, if it's needed. secondary_path = new SecondaryPath(); if ( dns_type != DNS_PRIME ) net_init(interfaces, read_files, netflows, flow_files, - writefile, "tcp or udp or icmp", + writefile, "", secondary_path->Filter(), do_watchdog); BroFile::SetDefaultRotation(log_rotate_interval, log_max_size); @@ -995,12 +1051,14 @@ int main(int argc, char** argv) have_pending_timers = ! 
reading_traces && timer_mgr->Size() > 0; + io_sources.Register(thread_mgr, true); + if ( io_sources.Size() > 0 || have_pending_timers ) { if ( profiling_logger ) profiling_logger->Log(); -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG if ( perftools_leaks ) heap_checker = new HeapLeakChecker("net_run"); @@ -1016,6 +1074,10 @@ int main(int argc, char** argv) done_with_network(); net_delete(); +#ifdef USE_CURL + curl_global_cleanup(); +#endif + terminate_bro(); // Close files after net_delete(), because net_delete() diff --git a/src/md5.c b/src/md5.c deleted file mode 100644 index 888993b9c4..0000000000 --- a/src/md5.c +++ /dev/null @@ -1,380 +0,0 @@ -/* - Copyright (C) 1999, 2000, 2002 Aladdin Enterprises. All rights reserved. - - This software is provided 'as-is', without any express or implied - warranty. In no event will the authors be held liable for any damages - arising from the use of this software. - - Permission is granted to anyone to use this software for any purpose, - including commercial applications, and to alter it and redistribute it - freely, subject to the following restrictions: - - 1. The origin of this software must not be misrepresented; you must not - claim that you wrote the original software. If you use this software - in a product, an acknowledgment in the product documentation would be - appreciated but is not required. - 2. Altered source versions must be plainly marked as such, and must not be - misrepresented as being the original software. - 3. This notice may not be removed or altered from any source distribution. - - L. Peter Deutsch - ghost@aladdin.com - - */ -/* - Independent implementation of MD5 (RFC 1321). - - This code implements the MD5 Algorithm defined in RFC 1321, whose - text is available at - http://www.ietf.org/rfc/rfc1321.txt - The code is derived from the text of the RFC, including the test suite - (section A.5) but excluding the rest of Appendix A. It does not include - any code or documentation that is identified in the RFC as being - copyrighted. - - The original and principal author of md5.c is L. Peter Deutsch - . Other authors are noted in the change history - that follows (in reverse chronological order): - - 2002-04-13 lpd Clarified derivation from RFC 1321; now handles byte order - either statically or dynamically; added missing #include - in library. - 2002-03-11 lpd Corrected argument list for main(), and added int return - type, in test program and T value program. - 2002-02-21 lpd Added missing #include in test program. - 2000-07-03 lpd Patched to eliminate warnings about "constant is - unsigned in ANSI C, signed in traditional"; made test program - self-checking. - 1999-11-04 lpd Edited comments slightly for automatic TOC extraction. - 1999-10-18 lpd Fixed typo in header comment (ansi2knr rather than md5). - 1999-05-03 lpd Original version. - */ - -#include "md5.h" -#include - -#undef BYTE_ORDER /* 1 = big-endian, -1 = little-endian, 0 = unknown */ -#ifdef ARCH_IS_BIG_ENDIAN -# define BYTE_ORDER (ARCH_IS_BIG_ENDIAN ? 
1 : -1) -#else -# define BYTE_ORDER 0 -#endif - -#define T_MASK ((md5_word_t)~0) -#define T1 /* 0xd76aa478 */ (T_MASK ^ 0x28955b87) -#define T2 /* 0xe8c7b756 */ (T_MASK ^ 0x173848a9) -#define T3 0x242070db -#define T4 /* 0xc1bdceee */ (T_MASK ^ 0x3e423111) -#define T5 /* 0xf57c0faf */ (T_MASK ^ 0x0a83f050) -#define T6 0x4787c62a -#define T7 /* 0xa8304613 */ (T_MASK ^ 0x57cfb9ec) -#define T8 /* 0xfd469501 */ (T_MASK ^ 0x02b96afe) -#define T9 0x698098d8 -#define T10 /* 0x8b44f7af */ (T_MASK ^ 0x74bb0850) -#define T11 /* 0xffff5bb1 */ (T_MASK ^ 0x0000a44e) -#define T12 /* 0x895cd7be */ (T_MASK ^ 0x76a32841) -#define T13 0x6b901122 -#define T14 /* 0xfd987193 */ (T_MASK ^ 0x02678e6c) -#define T15 /* 0xa679438e */ (T_MASK ^ 0x5986bc71) -#define T16 0x49b40821 -#define T17 /* 0xf61e2562 */ (T_MASK ^ 0x09e1da9d) -#define T18 /* 0xc040b340 */ (T_MASK ^ 0x3fbf4cbf) -#define T19 0x265e5a51 -#define T20 /* 0xe9b6c7aa */ (T_MASK ^ 0x16493855) -#define T21 /* 0xd62f105d */ (T_MASK ^ 0x29d0efa2) -#define T22 0x02441453 -#define T23 /* 0xd8a1e681 */ (T_MASK ^ 0x275e197e) -#define T24 /* 0xe7d3fbc8 */ (T_MASK ^ 0x182c0437) -#define T25 0x21e1cde6 -#define T26 /* 0xc33707d6 */ (T_MASK ^ 0x3cc8f829) -#define T27 /* 0xf4d50d87 */ (T_MASK ^ 0x0b2af278) -#define T28 0x455a14ed -#define T29 /* 0xa9e3e905 */ (T_MASK ^ 0x561c16fa) -#define T30 /* 0xfcefa3f8 */ (T_MASK ^ 0x03105c07) -#define T31 0x676f02d9 -#define T32 /* 0x8d2a4c8a */ (T_MASK ^ 0x72d5b375) -#define T33 /* 0xfffa3942 */ (T_MASK ^ 0x0005c6bd) -#define T34 /* 0x8771f681 */ (T_MASK ^ 0x788e097e) -#define T35 0x6d9d6122 -#define T36 /* 0xfde5380c */ (T_MASK ^ 0x021ac7f3) -#define T37 /* 0xa4beea44 */ (T_MASK ^ 0x5b4115bb) -#define T38 0x4bdecfa9 -#define T39 /* 0xf6bb4b60 */ (T_MASK ^ 0x0944b49f) -#define T40 /* 0xbebfbc70 */ (T_MASK ^ 0x4140438f) -#define T41 0x289b7ec6 -#define T42 /* 0xeaa127fa */ (T_MASK ^ 0x155ed805) -#define T43 /* 0xd4ef3085 */ (T_MASK ^ 0x2b10cf7a) -#define T44 0x04881d05 -#define T45 /* 0xd9d4d039 */ (T_MASK ^ 0x262b2fc6) -#define T46 /* 0xe6db99e5 */ (T_MASK ^ 0x1924661a) -#define T47 0x1fa27cf8 -#define T48 /* 0xc4ac5665 */ (T_MASK ^ 0x3b53a99a) -#define T49 /* 0xf4292244 */ (T_MASK ^ 0x0bd6ddbb) -#define T50 0x432aff97 -#define T51 /* 0xab9423a7 */ (T_MASK ^ 0x546bdc58) -#define T52 /* 0xfc93a039 */ (T_MASK ^ 0x036c5fc6) -#define T53 0x655b59c3 -#define T54 /* 0x8f0ccc92 */ (T_MASK ^ 0x70f3336d) -#define T55 /* 0xffeff47d */ (T_MASK ^ 0x00100b82) -#define T56 /* 0x85845dd1 */ (T_MASK ^ 0x7a7ba22e) -#define T57 0x6fa87e4f -#define T58 /* 0xfe2ce6e0 */ (T_MASK ^ 0x01d3191f) -#define T59 /* 0xa3014314 */ (T_MASK ^ 0x5cfebceb) -#define T60 0x4e0811a1 -#define T61 /* 0xf7537e82 */ (T_MASK ^ 0x08ac817d) -#define T62 /* 0xbd3af235 */ (T_MASK ^ 0x42c50dca) -#define T63 0x2ad7d2bb -#define T64 /* 0xeb86d391 */ (T_MASK ^ 0x14792c6e) - - -static void -md5_process(md5_state_t *pms, const md5_byte_t *data /*[64]*/) -{ - md5_word_t - a = pms->abcd[0], b = pms->abcd[1], - c = pms->abcd[2], d = pms->abcd[3]; - md5_word_t t; -#if BYTE_ORDER > 0 - /* Define storage only for big-endian CPUs. */ - md5_word_t X[16]; -#else - /* Define storage for little-endian or both types of CPUs. */ - md5_word_t xbuf[16]; - const md5_word_t *X; -#endif - - { -#if BYTE_ORDER == 0 - /* - * Determine dynamically whether this is a big-endian or - * little-endian machine, since we can use a more efficient - * algorithm on the latter. 
- */ - static const int w = 1; - - if (*((const md5_byte_t *)&w)) /* dynamic little-endian */ -#endif -#if BYTE_ORDER <= 0 /* little-endian */ - { - /* - * On little-endian machines, we can process properly aligned - * data without copying it. - */ - if (!((data - (const md5_byte_t *)0) & 3)) { - /* data are properly aligned */ - X = (const md5_word_t *)data; - } else { - /* not aligned */ - memcpy(xbuf, data, 64); - X = xbuf; - } - } -#endif -#if BYTE_ORDER == 0 - else /* dynamic big-endian */ -#endif -#if BYTE_ORDER >= 0 /* big-endian */ - { - /* - * On big-endian machines, we must arrange the bytes in the - * right order. - */ - const md5_byte_t *xp = data; - int i; - -# if BYTE_ORDER == 0 - X = xbuf; /* (dynamic only) */ -# else -# define xbuf X /* (static only) */ -# endif - for (i = 0; i < 16; ++i, xp += 4) - xbuf[i] = xp[0] + (xp[1] << 8) + (xp[2] << 16) + (xp[3] << 24); - } -#endif - } - -#define ROTATE_LEFT(x, n) (((x) << (n)) | ((x) >> (32 - (n)))) - - /* Round 1. */ - /* Let [abcd k s i] denote the operation - a = b + ((a + F(b,c,d) + X[k] + T[i]) <<< s). */ -#define F(x, y, z) (((x) & (y)) | (~(x) & (z))) -#define SET(a, b, c, d, k, s, Ti)\ - t = a + F(b,c,d) + X[k] + Ti;\ - a = ROTATE_LEFT(t, s) + b - /* Do the following 16 operations. */ - SET(a, b, c, d, 0, 7, T1); - SET(d, a, b, c, 1, 12, T2); - SET(c, d, a, b, 2, 17, T3); - SET(b, c, d, a, 3, 22, T4); - SET(a, b, c, d, 4, 7, T5); - SET(d, a, b, c, 5, 12, T6); - SET(c, d, a, b, 6, 17, T7); - SET(b, c, d, a, 7, 22, T8); - SET(a, b, c, d, 8, 7, T9); - SET(d, a, b, c, 9, 12, T10); - SET(c, d, a, b, 10, 17, T11); - SET(b, c, d, a, 11, 22, T12); - SET(a, b, c, d, 12, 7, T13); - SET(d, a, b, c, 13, 12, T14); - SET(c, d, a, b, 14, 17, T15); - SET(b, c, d, a, 15, 22, T16); -#undef SET - - /* Round 2. */ - /* Let [abcd k s i] denote the operation - a = b + ((a + G(b,c,d) + X[k] + T[i]) <<< s). */ -#define G(x, y, z) (((x) & (z)) | ((y) & ~(z))) -#define SET(a, b, c, d, k, s, Ti)\ - t = a + G(b,c,d) + X[k] + Ti;\ - a = ROTATE_LEFT(t, s) + b - /* Do the following 16 operations. */ - SET(a, b, c, d, 1, 5, T17); - SET(d, a, b, c, 6, 9, T18); - SET(c, d, a, b, 11, 14, T19); - SET(b, c, d, a, 0, 20, T20); - SET(a, b, c, d, 5, 5, T21); - SET(d, a, b, c, 10, 9, T22); - SET(c, d, a, b, 15, 14, T23); - SET(b, c, d, a, 4, 20, T24); - SET(a, b, c, d, 9, 5, T25); - SET(d, a, b, c, 14, 9, T26); - SET(c, d, a, b, 3, 14, T27); - SET(b, c, d, a, 8, 20, T28); - SET(a, b, c, d, 13, 5, T29); - SET(d, a, b, c, 2, 9, T30); - SET(c, d, a, b, 7, 14, T31); - SET(b, c, d, a, 12, 20, T32); -#undef SET - - /* Round 3. */ - /* Let [abcd k s t] denote the operation - a = b + ((a + H(b,c,d) + X[k] + T[i]) <<< s). */ -#define H(x, y, z) ((x) ^ (y) ^ (z)) -#define SET(a, b, c, d, k, s, Ti)\ - t = a + H(b,c,d) + X[k] + Ti;\ - a = ROTATE_LEFT(t, s) + b - /* Do the following 16 operations. */ - SET(a, b, c, d, 5, 4, T33); - SET(d, a, b, c, 8, 11, T34); - SET(c, d, a, b, 11, 16, T35); - SET(b, c, d, a, 14, 23, T36); - SET(a, b, c, d, 1, 4, T37); - SET(d, a, b, c, 4, 11, T38); - SET(c, d, a, b, 7, 16, T39); - SET(b, c, d, a, 10, 23, T40); - SET(a, b, c, d, 13, 4, T41); - SET(d, a, b, c, 0, 11, T42); - SET(c, d, a, b, 3, 16, T43); - SET(b, c, d, a, 6, 23, T44); - SET(a, b, c, d, 9, 4, T45); - SET(d, a, b, c, 12, 11, T46); - SET(c, d, a, b, 15, 16, T47); - SET(b, c, d, a, 2, 23, T48); -#undef SET - - /* Round 4. */ - /* Let [abcd k s t] denote the operation - a = b + ((a + I(b,c,d) + X[k] + T[i]) <<< s). 
*/ -#define I(x, y, z) ((y) ^ ((x) | ~(z))) -#define SET(a, b, c, d, k, s, Ti)\ - t = a + I(b,c,d) + X[k] + Ti;\ - a = ROTATE_LEFT(t, s) + b - /* Do the following 16 operations. */ - SET(a, b, c, d, 0, 6, T49); - SET(d, a, b, c, 7, 10, T50); - SET(c, d, a, b, 14, 15, T51); - SET(b, c, d, a, 5, 21, T52); - SET(a, b, c, d, 12, 6, T53); - SET(d, a, b, c, 3, 10, T54); - SET(c, d, a, b, 10, 15, T55); - SET(b, c, d, a, 1, 21, T56); - SET(a, b, c, d, 8, 6, T57); - SET(d, a, b, c, 15, 10, T58); - SET(c, d, a, b, 6, 15, T59); - SET(b, c, d, a, 13, 21, T60); - SET(a, b, c, d, 4, 6, T61); - SET(d, a, b, c, 11, 10, T62); - SET(c, d, a, b, 2, 15, T63); - SET(b, c, d, a, 9, 21, T64); -#undef SET - - /* Then perform the following additions. (That is increment each - of the four registers by the value it had before this block - was started.) */ - pms->abcd[0] += a; - pms->abcd[1] += b; - pms->abcd[2] += c; - pms->abcd[3] += d; -} - -void -md5_init(md5_state_t *pms) -{ - pms->count[0] = pms->count[1] = 0; - pms->abcd[0] = 0x67452301; - pms->abcd[1] = /*0xefcdab89*/ T_MASK ^ 0x10325476; - pms->abcd[2] = /*0x98badcfe*/ T_MASK ^ 0x67452301; - pms->abcd[3] = 0x10325476; -} - -void -md5_append(md5_state_t *pms, const md5_byte_t *data, int nbytes) -{ - const md5_byte_t *p = data; - int left = nbytes; - int offset = (pms->count[0] >> 3) & 63; - md5_word_t nbits = (md5_word_t)(nbytes << 3); - - if (nbytes <= 0) - return; - - /* Update the message length. */ - pms->count[1] += nbytes >> 29; - pms->count[0] += nbits; - if (pms->count[0] < nbits) - pms->count[1]++; - - /* Process an initial partial block. */ - if (offset) { - int copy = (offset + nbytes > 64 ? 64 - offset : nbytes); - - memcpy(pms->buf + offset, p, copy); - if (offset + copy < 64) - return; - p += copy; - left -= copy; - md5_process(pms, pms->buf); - } - - /* Process full blocks. */ - for (; left >= 64; p += 64, left -= 64) - md5_process(pms, p); - - /* Process a final partial block. */ - if (left) - memcpy(pms->buf, p, left); -} - -void -md5_finish(md5_state_t *pms, md5_byte_t digest[16]) -{ - static const md5_byte_t pad[64] = { - 0x80, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 - }; - md5_byte_t data[8]; - int i; - - /* Save the length before padding. */ - for (i = 0; i < 8; ++i) - data[i] = (md5_byte_t)(pms->count[i >> 2] >> ((i & 3) << 3)); - /* Pad to 56 bytes mod 64. */ - md5_append(pms, pad, ((55 - (pms->count[0] >> 3)) & 63) + 1); - /* Append the length. */ - md5_append(pms, data, 8); - for (i = 0; i < 16; ++i) - digest[i] = (md5_byte_t)(pms->abcd[i >> 2] >> ((i & 3) << 3)); -} diff --git a/src/md5.h b/src/md5.h deleted file mode 100644 index 2806b5b9b5..0000000000 --- a/src/md5.h +++ /dev/null @@ -1,90 +0,0 @@ -/* - Copyright (C) 1999, 2002 Aladdin Enterprises. All rights reserved. - - This software is provided 'as-is', without any express or implied - warranty. In no event will the authors be held liable for any damages - arising from the use of this software. - - Permission is granted to anyone to use this software for any purpose, - including commercial applications, and to alter it and redistribute it - freely, subject to the following restrictions: - - 1. The origin of this software must not be misrepresented; you must not - claim that you wrote the original software. 
If you use this software - in a product, an acknowledgment in the product documentation would be - appreciated but is not required. - 2. Altered source versions must be plainly marked as such, and must not be - misrepresented as being the original software. - 3. This notice may not be removed or altered from any source distribution. - - L. Peter Deutsch - ghost@aladdin.com - - */ -/* - Independent implementation of MD5 (RFC 1321). - - This code implements the MD5 Algorithm defined in RFC 1321, whose - text is available at - http://www.ietf.org/rfc/rfc1321.txt - The code is derived from the text of the RFC, including the test suite - (section A.5) but excluding the rest of Appendix A. It does not include - any code or documentation that is identified in the RFC as being - copyrighted. - - The original and principal author of md5.h is L. Peter Deutsch - . Other authors are noted in the change history - that follows (in reverse chronological order): - - 2002-04-13 lpd Removed support for non-ANSI compilers; removed - references to Ghostscript; clarified derivation from RFC 1321; - now handles byte order either statically or dynamically. - 1999-11-04 lpd Edited comments slightly for automatic TOC extraction. - 1999-10-18 lpd Fixed typo in header comment (ansi2knr rather than md5); - added conditionalization for C++ compilation from Martin - Purschke . - 1999-05-03 lpd Original version. - */ - -#ifndef md5_INCLUDED -# define md5_INCLUDED - -/* - * This package supports both compile-time and run-time determination of CPU - * byte order. If ARCH_IS_BIG_ENDIAN is defined as 0, the code will be - * compiled to run only on little-endian CPUs; if ARCH_IS_BIG_ENDIAN is - * defined as non-zero, the code will be compiled to run only on big-endian - * CPUs; if ARCH_IS_BIG_ENDIAN is not defined, the code will be compiled to - * run on either big- or little-endian CPUs, but will run slightly less - * efficiently on either one than if ARCH_IS_BIG_ENDIAN is defined. - */ - -typedef unsigned char md5_byte_t; /* 8-bit byte */ -typedef unsigned int md5_word_t; /* 32-bit word */ - -/* Define the state of the MD5 Algorithm. */ -typedef struct md5_state_s { - md5_word_t count[2]; /* message length in bits, lsw first */ - md5_word_t abcd[4]; /* digest buffer */ - md5_byte_t buf[64]; /* accumulate block */ -} md5_state_t; - -#ifdef __cplusplus -extern "C" -{ -#endif - -/* Initialize the algorithm. */ -void md5_init(md5_state_t *pms); - -/* Append a string to the message. */ -void md5_append(md5_state_t *pms, const md5_byte_t *data, int nbytes); - -/* Finish the message and return the digest. */ -void md5_finish(md5_state_t *pms, md5_byte_t digest[16]); - -#ifdef __cplusplus -} /* end extern "C" */ -#endif - -#endif /* md5_INCLUDED */ diff --git a/src/modp_numtoa.c b/src/modp_numtoa.c index 6deb8a70ed..2024f7c55b 100644 --- a/src/modp_numtoa.c +++ b/src/modp_numtoa.c @@ -56,7 +56,7 @@ void modp_uitoa10(uint32_t value, char* str) void modp_litoa10(int64_t value, char* str) { char* wstr=str; - unsigned long uvalue = (value < 0) ? -value : value; + uint64_t uvalue = (value < 0) ? -value : value; // Conversion. Number is reversed. 
do *wstr++ = (char)(48 + (uvalue % 10)); while(uvalue /= 10); diff --git a/src/nb_dns.c b/src/nb_dns.c index 475ba3b8b0..3051be9bc2 100644 --- a/src/nb_dns.c +++ b/src/nb_dns.c @@ -124,7 +124,7 @@ nb_dns_init(char *errstr) nd->s = -1; /* XXX should be able to init static hostent struct some other way */ - (void)gethostbyname("localhost."); + (void)gethostbyname("localhost"); if ((_res.options & RES_INIT) == 0 && res_init() == -1) { snprintf(errstr, NB_DNS_ERRSIZE, "res_init() failed"); @@ -186,7 +186,7 @@ _nb_dns_cmpsockaddr(register struct sockaddr *sa1, #endif static const char serr[] = "answer from wrong nameserver (%d)"; - if (sa1->sa_family != sa1->sa_family) { + if (sa1->sa_family != sa2->sa_family) { snprintf(errstr, NB_DNS_ERRSIZE, serr, 1); return (-1); } @@ -381,7 +381,7 @@ nb_dns_addr_request2(register struct nb_dns_info *nd, char *addrp, size -= i; cp += i; } - snprintf(cp, size, "ip6.int"); + snprintf(cp, size, "ip6.arpa"); break; #endif diff --git a/src/net_util.cc b/src/net_util.cc index c0bacc98b2..d91cf02de9 100644 --- a/src/net_util.cc +++ b/src/net_util.cc @@ -2,17 +2,17 @@ #include "config.h" -#ifdef BROv6 #include #include #include #include -#endif #include "Reporter.h" #include "net_util.h" +#include "IPAddr.h" +#include "IP.h" // - adapted from tcpdump // Returns the ones-complement checksum of a chunk of b short-aligned bytes. @@ -32,80 +32,13 @@ int ones_complement_checksum(const void* p, int b, uint32 sum) return sum; } -int tcp_checksum(const struct ip* ip, const struct tcphdr* tp, int len) +int ones_complement_checksum(const IPAddr& a, uint32 sum) { - // ### Note, this is only correct for IPv4. This routine is only - // used by the connection compressor (which we turn off for IPv6 - // traffic). - - int tcp_len = tp->th_off * 4 + len; - uint32 sum; - - if ( len % 2 == 1 ) - // Add in pad byte. - sum = htons(((const u_char*) tp)[tcp_len - 1] << 8); - else - sum = 0; - - sum = ones_complement_checksum((void*) &ip->ip_src.s_addr, 4, sum); - sum = ones_complement_checksum((void*) &ip->ip_dst.s_addr, 4, sum); - - uint32 addl_pseudo = - (htons(IPPROTO_TCP) << 16) | htons((unsigned short) tcp_len); - - sum = ones_complement_checksum((void*) &addl_pseudo, 4, sum); - sum = ones_complement_checksum((void*) tp, tcp_len, sum); - - return sum; + const uint32* bytes; + int len = a.GetBytes(&bytes); + return ones_complement_checksum(bytes, len*4, sum); } -int udp_checksum(const struct ip* ip, const struct udphdr* up, int len) - { - uint32 sum; - - if ( len % 2 == 1 ) - // Add in pad byte. - sum = htons(((const u_char*) up)[len - 1] << 8); - else - sum = 0; - - sum = ones_complement_checksum((void*) &ip->ip_src.s_addr, 4, sum); - sum = ones_complement_checksum((void*) &ip->ip_dst.s_addr, 4, sum); - - uint32 addl_pseudo = - (htons(IPPROTO_UDP) << 16) | htons((unsigned short) len); - - sum = ones_complement_checksum((void*) &addl_pseudo, 4, sum); - sum = ones_complement_checksum((void*) up, len, sum); - - return sum; - } - -#ifdef BROv6 -int udp6_checksum(const struct ip6_hdr* ip6, const struct udphdr* up, int len) - { - uint32 sum; - - if ( len % 2 == 1 ) - // Add in pad byte. 
- sum = htons(((const u_char*) up)[len - 1] << 8); - else - sum = 0; - - sum = ones_complement_checksum((void*) ip6->ip6_src.s6_addr, 16, sum); - sum = ones_complement_checksum((void*) ip6->ip6_dst.s6_addr, 16, sum); - - uint32 l = htonl(len); - sum = ones_complement_checksum((void*) &l, 4, sum); - - uint32 addl_pseudo = htons(IPPROTO_UDP); - sum = ones_complement_checksum((void*) &addl_pseudo, 4, sum); - sum = ones_complement_checksum((void*) up, len, sum); - - return sum; - } -#endif - int icmp_checksum(const struct icmp* icmpp, int len) { uint32 sum; @@ -121,6 +54,58 @@ int icmp_checksum(const struct icmp* icmpp, int len) return sum; } +#ifdef ENABLE_MOBILE_IPV6 +int mobility_header_checksum(const IP_Hdr* ip) + { + const ip6_mobility* mh = ip->MobilityHeader(); + + if ( ! mh ) return 0; + + uint32 sum = 0; + uint8 mh_len = 8 + 8 * mh->ip6mob_len; + + if ( mh_len % 2 == 1 ) + reporter->Weird(ip->SrcAddr(), ip->DstAddr(), "odd_mobility_hdr_len"); + + sum = ones_complement_checksum(ip->SrcAddr(), sum); + sum = ones_complement_checksum(ip->DstAddr(), sum); + // Note, for IPv6, strictly speaking the protocol and length fields are + // 32 bits rather than 16 bits. But because the upper bits are all zero, + // we get the same checksum either way. + sum += htons(IPPROTO_MOBILITY); + sum += htons(mh_len); + sum = ones_complement_checksum(mh, mh_len, sum); + + return sum; + } +#endif + +int icmp6_checksum(const struct icmp* icmpp, const IP_Hdr* ip, int len) + { + // ICMP6 uses the same checksum function as ICMP4 but a different + // pseudo-header over which it is computed. + uint32 sum; + + if ( len % 2 == 1 ) + // Add in pad byte. + sum = htons(((const u_char*) icmpp)[len - 1] << 8); + else + sum = 0; + + // Pseudo-header as for UDP over IPv6 above. + sum = ones_complement_checksum(ip->SrcAddr(), sum); + sum = ones_complement_checksum(ip->DstAddr(), sum); + uint32 l = htonl(len); + sum = ones_complement_checksum((void*) &l, 4, sum); + + uint32 addl_pseudo = htons(IPPROTO_ICMPV6); + sum = ones_complement_checksum((void*) &addl_pseudo, 4, sum); + + sum = ones_complement_checksum((void*) icmpp, len, sum); + + return sum; + } + #define CLASS_A 0x00000000 #define CLASS_B 0x80000000 @@ -143,223 +128,24 @@ char addr_to_class(uint32 addr) return 'A'; } -uint32 addr_to_net(uint32 addr) +const char* fmt_conn_id(const IPAddr& src_addr, uint32 src_port, + const IPAddr& dst_addr, uint32 dst_port) { - if ( CHECK_CLASS(addr, CLASS_D) ) - ; // class D's are left alone ### - else if ( CHECK_CLASS(addr, CLASS_C) ) - addr = addr & 0xffffff00; - else if ( CHECK_CLASS(addr, CLASS_B) ) - addr = addr & 0xffff0000; - else - addr = addr & 0xff000000; + static char buffer[512]; - return addr; - } + safe_snprintf(buffer, sizeof(buffer), "%s:%d > %s:%d", + string(src_addr).c_str(), src_port, + string(dst_addr).c_str(), dst_port); -const char* dotted_addr(uint32 addr, int alternative) - { - addr = ntohl(addr); - const char* fmt = alternative ? 
"%d,%d.%d.%d" : "%d.%d.%d.%d"; - - static char buf[32]; - snprintf(buf, sizeof(buf), fmt, - addr >> 24, (addr >> 16) & 0xff, - (addr >> 8) & 0xff, addr & 0xff); - - return buf; - } - -const char* dotted_addr(const uint32* addr, int alternative) - { -#ifdef BROv6 - if ( is_v4_addr(addr) ) - return dotted_addr(addr[3], alternative); - - static char buf[256]; - - if ( inet_ntop(AF_INET6, addr, buf, sizeof buf) == NULL ) - return ""; - - return buf; - -#else - return dotted_addr(to_v4_addr(addr), alternative); -#endif - } - -const char* dotted_net(uint32 addr) - { - addr = ntohl(addr); - - static char buf[32]; - - if ( CHECK_CLASS(addr, CLASS_D) ) - sprintf(buf, "%d.%d.%d.%d", - addr >> 24, (addr >> 16) & 0xff, - (addr >> 8) & 0xff, addr & 0xff); - - else if ( CHECK_CLASS(addr, CLASS_C) ) - sprintf(buf, "%d.%d.%d", - addr >> 24, (addr >> 16) & 0xff, (addr >> 8) & 0xff); - - else - // Same for class A's and B's. - sprintf(buf, "%d.%d", addr >> 24, (addr >> 16) & 0xff); - - return buf; - } - -#ifdef BROv6 -const char* dotted_net6(const uint32* addr) - { - if ( is_v4_addr(addr) ) - return dotted_net(to_v4_addr(addr)); - else - // ### this isn't right, but net's should go away eventually ... - return dotted_addr(addr); - } -#endif - -uint32 dotted_to_addr(const char* addr_text) - { - int addr[4]; - - if ( sscanf(addr_text, - "%d.%d.%d.%d", addr+0, addr+1, addr+2, addr+3) != 4 ) - { - reporter->Error("bad dotted address: %s", addr_text ); - return 0; - } - - if ( addr[0] < 0 || addr[1] < 0 || addr[2] < 0 || addr[3] < 0 || - addr[0] > 255 || addr[1] > 255 || addr[2] > 255 || addr[3] > 255 ) - { - reporter->Error("bad dotted address: %s", addr_text); - return 0; - } - - uint32 a = (addr[0] << 24) | (addr[1] << 16) | (addr[2] << 8) | addr[3]; - - // ### perhaps do gethostbyaddr here? - - return uint32(htonl(a)); - } - -#ifdef BROv6 -uint32* dotted_to_addr6(const char* addr_text) - { - uint32* addr = new uint32[4]; - if ( inet_pton(AF_INET6, addr_text, addr) <= 0 ) - { - reporter->Error("bad IPv6 address: %s", addr_text ); - addr[0] = addr[1] = addr[2] = addr[3] = 0; - } - - return addr; - } - -#endif - -#ifdef BROv6 -int is_v4_addr(const uint32 addr[4]) - { - return addr[0] == 0 && addr[1] == 0 && addr[2] == 0; - } -#endif - -uint32 to_v4_addr(const uint32* addr) - { -#ifdef BROv6 - if ( ! is_v4_addr(addr) ) - reporter->InternalError("conversion of non-IPv4 address to IPv4 address"); - return addr[3]; -#else - return addr[0]; -#endif - } - -uint32 mask_addr(uint32 a, uint32 top_bits_to_keep) - { - if ( top_bits_to_keep > 32 ) - { - reporter->Error("bad address mask value %d", top_bits_to_keep); - return a; - } - - if ( top_bits_to_keep == 0 ) - // The shifts below don't have any effect with 0, i.e., - // 1 << 32 does not yield 0; either due to compiler - // misoptimization or language semantics. - return 0; - - uint32 addr = ntohl(a); - - int shift = 32 - top_bits_to_keep; - addr >>= shift; - addr <<= shift; - - return htonl(addr); - } - -const uint32* mask_addr(const uint32* a, uint32 top_bits_to_keep) - { -#ifdef BROv6 - static uint32 addr[4]; - - addr[0] = a[0]; - addr[1] = a[1]; - addr[2] = a[2]; - addr[3] = a[3]; - - // This is a bit dicey: if it's a v4 address, then we interpret - // the mask as being with respect to 32 bits total, even though - // strictly speaking, the v4 address comprises the least-significant - // bits out of 128, rather than the most significant. However, - // we only do this if the mask itself is consistent for a 32-bit - // address. 
- uint32 max_bits = (is_v4_addr(a) && top_bits_to_keep <= 32) ? 32 : 128; - - if ( top_bits_to_keep == 0 || top_bits_to_keep > max_bits ) - { - reporter->Error("bad address mask value %s", top_bits_to_keep); - return addr; - } - - int word = 3; // start zeroing out with word #3 - int bits_to_chop = max_bits - top_bits_to_keep; // bits to discard - while ( bits_to_chop >= 32 ) - { // there's an entire word to discard - addr[word] = 0; - --word; // move on to next, more significant word - bits_to_chop -= 32; // we just go rid of 32 bits - } - - // All that's left to work with now is the word pointed to by "word". - uint32 addr32 = ntohl(addr[word]); - addr32 >>= bits_to_chop; - addr32 <<= bits_to_chop; - addr[word] = htonl(addr32); - - return addr; -#else - return a; -#endif + return buffer; } const char* fmt_conn_id(const uint32* src_addr, uint32 src_port, const uint32* dst_addr, uint32 dst_port) { - char addr1[128], addr2[128]; - static char buffer[512]; - - strcpy(addr1, dotted_addr(src_addr)); - strcpy(addr2, dotted_addr(dst_addr)); - - safe_snprintf(buffer, sizeof(buffer), "%s:%d > %s:%d", - addr1, src_port, addr2, dst_port); - - return buffer; + IPAddr src(IPv6, src_addr, IPAddr::Network); + IPAddr dst(IPv6, dst_addr, IPAddr::Network); + return fmt_conn_id(src, src_port, dst, dst_port); } uint32 extract_uint32(const u_char* data) diff --git a/src/net_util.h b/src/net_util.h index 28abf4dbcb..733d946564 100644 --- a/src/net_util.h +++ b/src/net_util.h @@ -5,6 +5,13 @@ #include "config.h" +// Define first. +typedef enum { + TRANSPORT_UNKNOWN, TRANSPORT_TCP, TRANSPORT_UDP, TRANSPORT_ICMP, +} TransportProto; + +typedef enum { IPv4, IPv6 } IPFamily; + #include #include @@ -24,37 +31,82 @@ #ifdef HAVE_NETINET_IP6_H #include -#else -struct ip6_hdr { - uint16 ip6_plen; - uint8 ip6_nxt; - uint8 ip6_hlim; + +#ifndef HAVE_IP6_OPT +struct ip6_opt { + uint8 ip6o_type; + uint8 ip6o_len; }; -#endif +#endif // HAVE_IP6_OPT -#include "util.h" - -#ifdef BROv6 -typedef uint32* addr_type; // a pointer to 4 uint32's -typedef const uint32* const_addr_type; -#define NUM_ADDR_WORDS 4 - -typedef struct { - uint32 net[4]; - uint32 width; -} subnet_type; +#ifndef HAVE_IP6_EXT +struct ip6_ext { + uint8 ip6e_nxt; + uint8 ip6e_len; +}; +#endif // HAVE_IP6_EXT #else -typedef uint32 addr_type; -typedef const uint32 const_addr_type; -#define NUM_ADDR_WORDS 1 -typedef struct { - uint32 net; - uint32 width; -} subnet_type; +struct ip6_hdr { + union { + struct ip6_hdrctl { + uint32 ip6_un1_flow; /* 4 bits version, 8 bits TC, 20 bits + flow-ID */ + uint16 ip6_un1_plen; /* payload length */ + uint8 ip6_un1_nxt; /* next header */ + uint8 ip6_un1_hlim; /* hop limit */ + } ip6_un1; + uint8 ip6_un2_vfc; /* 4 bits version, top 4 bits tclass */ + } ip6_ctlun; + struct in6_addr ip6_src; /* source address */ + struct in6_addr ip6_dst; /* destination address */ +}; -#endif +#define ip6_vfc ip6_ctlun.ip6_un2_vfc +#define ip6_flow ip6_ctlun.ip6_un1.ip6_un1_flow +#define ip6_plen ip6_ctlun.ip6_un1.ip6_un1_plen +#define ip6_nxt ip6_ctlun.ip6_un1.ip6_un1_nxt +#define ip6_hlim ip6_ctlun.ip6_un1.ip6_un1_hlim +#define ip6_hops ip6_ctlun.ip6_un1.ip6_un1_hlim + +struct ip6_opt { + uint8 ip6o_type; + uint8 ip6o_len; +}; + +struct ip6_ext { + uint8 ip6e_nxt; + uint8 ip6e_len; +}; + +struct ip6_frag { + uint8 ip6f_nxt; /* next header */ + uint8 ip6f_reserved; /* reserved field */ + uint16 ip6f_offlg; /* offset, reserved, and flag */ + uint32 ip6f_ident; /* identification */ +}; + +struct ip6_hbh { + uint8 ip6h_nxt; /* next header */ + uint8 
ip6h_len; /* length in units of 8 octets */ + /* followed by options */ +}; + +struct ip6_dest { + uint8 ip6d_nxt; /* next header */ + uint8 ip6d_len; /* length in units of 8 octets */ + /* followed by options */ +}; + +struct ip6_rthdr { + uint8 ip6r_nxt; /* next header */ + uint8 ip6r_len; /* length in units of 8 octets */ + uint8 ip6r_type; /* routing type */ + uint8 ip6r_segleft; /* segments left */ + /* followed by routing type specific data */ +}; +#endif // HAVE_NETINET_IP6_H // For Solaris. #if !defined(TCPOPT_WINDOW) && defined(TCPOPT_WSCALE) @@ -81,83 +133,29 @@ inline int seq_delta(uint32 a, uint32 b) return int(a-b); } +class IPAddr; +class IP_Hdr; + // Returns the ones-complement checksum of a chunk of b short-aligned bytes. extern int ones_complement_checksum(const void* p, int b, uint32 sum); -extern int tcp_checksum(const struct ip* ip, const struct tcphdr* tp, int len); -extern int udp_checksum(const struct ip* ip, const struct udphdr* up, int len); -#ifdef BROv6 -extern int udp6_checksum(const struct ip6_hdr* ip, const struct udphdr* up, - int len); -#endif +extern int ones_complement_checksum(const IPAddr& a, uint32 sum); + +extern int icmp6_checksum(const struct icmp* icmpp, const IP_Hdr* ip, int len); extern int icmp_checksum(const struct icmp* icmpp, int len); -// Given an address in host order, returns its "classical network prefix", -// also in host order. -extern uint32 addr_to_net(uint32 addr); +#ifdef ENABLE_MOBILE_IPV6 +extern int mobility_header_checksum(const IP_Hdr* ip); +#endif + // Returns 'A', 'B', 'C' or 'D' extern char addr_to_class(uint32 addr); -// Returns a pointer to static storage giving the ASCII dotted representation -// of the given address, which should be passed in network order. -extern const char* dotted_addr(uint32 addr, int alternative=0); -extern const char* dotted_addr(const uint32* addr, int alternative=0); - -// Same, but for the network prefix. -extern const char* dotted_net(uint32 addr); -extern const char* dotted_net6(const uint32* addr); - -// Given an ASCII dotted representation, returns the corresponding address -// in network order. -extern uint32 dotted_to_addr(const char* addr_text); -extern uint32* dotted_to_addr6(const char* addr_text); - -extern int is_v4_addr(const uint32 addr[4]); -extern uint32 to_v4_addr(const uint32* addr); - -extern uint32 mask_addr(uint32 a, uint32 top_bits_to_keep); -extern const uint32* mask_addr(const uint32* a, uint32 top_bits_to_keep); - +extern const char* fmt_conn_id(const IPAddr& src_addr, uint32 src_port, + const IPAddr& dst_addr, uint32 dst_port); extern const char* fmt_conn_id(const uint32* src_addr, uint32 src_port, const uint32* dst_addr, uint32 dst_port); -inline void copy_addr(const uint32* src_a, uint32* dst_a) - { -#ifdef BROv6 - dst_a[0] = src_a[0]; - dst_a[1] = src_a[1]; - dst_a[2] = src_a[2]; - dst_a[3] = src_a[3]; -#else - dst_a[0] = src_a[0]; -#endif - } - -inline int addr_eq(const uint32* a1, const uint32* a2) - { -#ifdef BROv6 - return a1[0] == a2[0] && - a1[1] == a2[1] && - a1[2] == a2[2] && - a1[3] == a2[3]; -#else - return a1[0] == a2[0]; -#endif - } - -inline int subnet_eq(const subnet_type* s1, const subnet_type* s2) - { -#ifdef BROv6 - return s1->net[0] == s2->net[0] && - s1->net[1] == s2->net[1] && - s1->net[2] == s2->net[2] && - s1->net[3] == s2->net[3] && - s1->width == s2->width; -#else - return s1->net == s2->net && s1->width == s2->width; -#endif - } - // Read 4 bytes from data and return in network order. 
extern uint32 extract_uint32(const u_char* data); @@ -170,6 +168,8 @@ extern uint32 extract_uint32(const u_char* data); inline double ntohd(double d) { return d; } inline double htond(double d) { return d; } +inline uint64 ntohll(uint64 i) { return i; } +inline uint64 htonll(uint64 i) { return i; } #else @@ -195,6 +195,24 @@ inline double ntohd(double d) inline double htond(double d) { return ntohd(d); } +inline uint64 ntohll(uint64 i) + { + u_char c; + union { + uint64 i; + u_char c[8]; + } x; + + x.i = i; + c = x.c[0]; x.c[0] = x.c[7]; x.c[7] = c; + c = x.c[1]; x.c[1] = x.c[6]; x.c[6] = c; + c = x.c[2]; x.c[2] = x.c[5]; x.c[5] = c; + c = x.c[3]; x.c[3] = x.c[4]; x.c[4] = c; + return x.i; + } + +inline uint64 htonll(uint64 i) { return ntohll(i); } + #endif #endif diff --git a/src/parse.y b/src/parse.y index 495931aae0..c1f6ddd96e 100644 --- a/src/parse.y +++ b/src/parse.y @@ -2,7 +2,7 @@ // See the file "COPYING" in the main distribution directory for copyright. %} -%expect 88 +%expect 87 %token TOK_ADD TOK_ADD_TO TOK_ADDR TOK_ANY %token TOK_ATENDIF TOK_ATELSE TOK_ATIF TOK_ATIFDEF TOK_ATIFNDEF @@ -10,11 +10,11 @@ %token TOK_CONSTANT TOK_COPY TOK_COUNT TOK_COUNTER TOK_DEFAULT TOK_DELETE %token TOK_DOUBLE TOK_ELSE TOK_ENUM TOK_EVENT TOK_EXPORT TOK_FILE TOK_FOR %token TOK_FUNCTION TOK_GLOBAL TOK_ID TOK_IF TOK_INT -%token TOK_INTERVAL TOK_LIST TOK_LOCAL TOK_MODULE TOK_MATCH +%token TOK_INTERVAL TOK_LIST TOK_LOCAL TOK_MODULE %token TOK_NEXT TOK_OF TOK_PATTERN TOK_PATTERN_TEXT %token TOK_PORT TOK_PRINT TOK_RECORD TOK_REDEF %token TOK_REMOVE_FROM TOK_RETURN TOK_SCHEDULE TOK_SET -%token TOK_STRING TOK_SUBNET TOK_SWITCH TOK_TABLE TOK_THIS +%token TOK_STRING TOK_SUBNET TOK_SWITCH TOK_TABLE %token TOK_TIME TOK_TIMEOUT TOK_TIMER TOK_TYPE TOK_UNION TOK_VECTOR TOK_WHEN %token TOK_ATTR_ADD_FUNC TOK_ATTR_ATTR TOK_ATTR_ENCRYPT TOK_ATTR_DEFAULT @@ -22,16 +22,19 @@ %token TOK_ATTR_ROTATE_SIZE TOK_ATTR_DEL_FUNC TOK_ATTR_EXPIRE_FUNC %token TOK_ATTR_EXPIRE_CREATE TOK_ATTR_EXPIRE_READ TOK_ATTR_EXPIRE_WRITE %token TOK_ATTR_PERSISTENT TOK_ATTR_SYNCHRONIZED -%token TOK_ATTR_DISABLE_PRINT_HOOK TOK_ATTR_RAW_OUTPUT TOK_ATTR_MERGEABLE +%token TOK_ATTR_RAW_OUTPUT TOK_ATTR_MERGEABLE %token TOK_ATTR_PRIORITY TOK_ATTR_GROUP TOK_ATTR_LOG TOK_ATTR_ERROR_HANDLER +%token TOK_ATTR_TYPE_COLUMN %token TOK_DEBUG %token TOK_DOC TOK_POST_DOC +%token TOK_NO_TEST + %left ',' '|' %right '=' TOK_ADD_TO TOK_REMOVE_FROM -%right '?' ':' TOK_USING +%right '?' ':' %left TOK_OR %left TOK_AND %nonassoc '<' '>' TOK_LE TOK_GE TOK_EQ TOK_NE @@ -42,6 +45,7 @@ %right '!' %left '$' '[' ']' '(' ')' TOK_HAS_FIELD TOK_HAS_ATTR +%type opt_no_test opt_no_test_block %type TOK_ID TOK_PATTERN_TEXT single_pattern TOK_DOC TOK_POST_DOC %type opt_doc_list opt_post_doc_list %type local_id global_id def_global_id event_id global_or_event_id resolve_id begin_func @@ -53,7 +57,7 @@ %type expr init anonymous_function %type event %type stmt stmt_list func_body for_head -%type type opt_type refined_type enum_body +%type type opt_type enum_body %type func_hdr func_params %type type_list %type type_decl formal_args_decl @@ -80,10 +84,12 @@ #include "Reporter.h" #include "BroDoc.h" #include "BroDocObj.h" +#include "Brofiler.h" #include #include +extern Brofiler brofiler; extern BroDoc* current_reST_doc; extern int generate_documentation; extern std::list* reST_doc_comments; @@ -107,13 +113,13 @@ bool is_export = false; // true if in an export {} block * (obviously not reentrant). 
*/ extern Expr* g_curr_debug_expr; +extern bool in_debug; +extern const char* g_curr_debug_error; #define YYLTYPE yyltype -Expr* bro_this = 0; int in_init = 0; int in_record = 0; -bool in_debug = false; bool resolving_global_ID = false; bool defining_global_ID = false; @@ -194,6 +200,7 @@ static std::list* concat_opt_docs (std::list* pre, %} %union { + bool b; char* str; std::list* str_l; ID* id; @@ -243,7 +250,6 @@ bro: TOK_DEBUG { in_debug = true; } expr { g_curr_debug_expr = $3; - in_debug = false; } ; @@ -498,12 +504,6 @@ expr: $$ = new VectorConstructorExpr($3); } - | TOK_MATCH expr TOK_USING expr - { - set_location(@1, @4); - $$ = new RecordMatchExpr($2, $4); - } - | expr '(' opt_expr_list ')' { set_location(@1, @4); @@ -583,12 +583,6 @@ expr: $$ = new ConstExpr(new PatternVal($1)); } - | TOK_THIS - { - set_location(@1); - $$ = bro_this->Ref(); - } - | '|' expr '|' { set_location(@1, @3); @@ -1104,7 +1098,7 @@ decl: } } - | TOK_TYPE global_id ':' refined_type opt_attr ';' + | TOK_TYPE global_id ':' type opt_attr ';' { add_type($2, $4, $5, 0); @@ -1134,7 +1128,7 @@ decl: } } - | TOK_EVENT event_id ':' refined_type opt_attr ';' + | TOK_EVENT event_id ':' type_list opt_attr ';' { add_type($2, $4, $5, 1); @@ -1220,13 +1214,6 @@ func_params: { $$ = new FuncType($2, base_type(TYPE_VOID), 0); } ; -refined_type: - type_list '{' type_decl_list '}' - { $$ = refine_type($1, $3); } - | type_list - { $$ = refine_type($1, 0); } - ; - opt_type: ':' type { $$ = $2; } @@ -1303,8 +1290,6 @@ attr: { $$ = new Attr(ATTR_ENCRYPT); } | TOK_ATTR_ENCRYPT '=' expr { $$ = new Attr(ATTR_ENCRYPT, $3); } - | TOK_ATTR_DISABLE_PRINT_HOOK - { $$ = new Attr(ATTR_DISABLE_PRINT_HOOK); } | TOK_ATTR_RAW_OUTPUT { $$ = new Attr(ATTR_RAW_OUTPUT); } | TOK_ATTR_MERGEABLE @@ -1313,6 +1298,8 @@ attr: { $$ = new Attr(ATTR_PRIORITY, $3); } | TOK_ATTR_GROUP '=' expr { $$ = new Attr(ATTR_GROUP, $3); } + | TOK_ATTR_TYPE_COLUMN '=' expr + { $$ = new Attr(ATTR_TYPE_COLUMN, $3); } | TOK_ATTR_LOG { $$ = new Attr(ATTR_LOG); } | TOK_ATTR_ERROR_HANDLER @@ -1320,22 +1307,28 @@ attr: ; stmt: - '{' stmt_list '}' + '{' opt_no_test_block stmt_list '}' { - set_location(@1, @3); - $$ = $2; + set_location(@1, @4); + $$ = $3; + if ( $2 ) + brofiler.DecIgnoreDepth(); } - | TOK_PRINT expr_list ';' + | TOK_PRINT expr_list ';' opt_no_test { set_location(@1, @3); $$ = new PrintStmt($2); + if ( ! $4 ) + brofiler.AddStmt($$); } - | TOK_EVENT event ';' + | TOK_EVENT event ';' opt_no_test { set_location(@1, @3); $$ = new EventStmt($2); + if ( ! $4 ) + brofiler.AddStmt($$); } | TOK_IF '(' expr ')' stmt @@ -1357,54 +1350,72 @@ stmt: } | for_head stmt - { $1->AsForStmt()->AddBody($2); } + { + $1->AsForStmt()->AddBody($2); + } - | TOK_NEXT ';' + | TOK_NEXT ';' opt_no_test { set_location(@1, @2); $$ = new NextStmt; + if ( ! $3 ) + brofiler.AddStmt($$); } - | TOK_BREAK ';' + | TOK_BREAK ';' opt_no_test { set_location(@1, @2); $$ = new BreakStmt; + if ( ! $3 ) + brofiler.AddStmt($$); } - | TOK_RETURN ';' + | TOK_RETURN ';' opt_no_test { set_location(@1, @2); $$ = new ReturnStmt(0); + if ( ! $3 ) + brofiler.AddStmt($$); } - | TOK_RETURN expr ';' + | TOK_RETURN expr ';' opt_no_test { set_location(@1, @2); $$ = new ReturnStmt($2); + if ( ! $4 ) + brofiler.AddStmt($$); } - | TOK_ADD expr ';' + | TOK_ADD expr ';' opt_no_test { set_location(@1, @3); $$ = new AddStmt($2); + if ( ! $4 ) + brofiler.AddStmt($$); } - | TOK_DELETE expr ';' + | TOK_DELETE expr ';' opt_no_test { set_location(@1, @3); $$ = new DelStmt($2); + if ( ! 
$4 ) + brofiler.AddStmt($$); } - | TOK_LOCAL local_id opt_type init_class opt_init opt_attr ';' + | TOK_LOCAL local_id opt_type init_class opt_init opt_attr ';' opt_no_test { set_location(@1, @7); $$ = add_local($2, $3, $4, $5, $6, VAR_REGULAR); + if ( ! $8 ) + brofiler.AddStmt($$); } - | TOK_CONST local_id opt_type init_class opt_init opt_attr ';' + | TOK_CONST local_id opt_type init_class opt_init opt_attr ';' opt_no_test { set_location(@1, @6); $$ = add_local($2, $3, $4, $5, $6, VAR_CONST); + if ( ! $8 ) + brofiler.AddStmt($$); } | TOK_WHEN '(' expr ')' stmt @@ -1413,10 +1424,12 @@ stmt: $$ = new WhenStmt($3, $5, 0, 0, false); } - | TOK_WHEN '(' expr ')' stmt TOK_TIMEOUT expr '{' stmt_list '}' + | TOK_WHEN '(' expr ')' stmt TOK_TIMEOUT expr '{' opt_no_test_block stmt_list '}' { - set_location(@3, @8); - $$ = new WhenStmt($3, $5, $9, $7, false); + set_location(@3, @9); + $$ = new WhenStmt($3, $5, $10, $7, false); + if ( $9 ) + brofiler.DecIgnoreDepth(); } @@ -1426,16 +1439,20 @@ stmt: $$ = new WhenStmt($4, $6, 0, 0, true); } - | TOK_RETURN TOK_WHEN '(' expr ')' stmt TOK_TIMEOUT expr '{' stmt_list '}' + | TOK_RETURN TOK_WHEN '(' expr ')' stmt TOK_TIMEOUT expr '{' opt_no_test_block stmt_list '}' { - set_location(@4, @9); - $$ = new WhenStmt($4, $6, $10, $8, true); + set_location(@4, @10); + $$ = new WhenStmt($4, $6, $11, $8, true); + if ( $10 ) + brofiler.DecIgnoreDepth(); } - | expr ';' + | expr ';' opt_no_test { set_location(@1, @2); $$ = new ExprStmt($1); + if ( ! $3 ) + brofiler.AddStmt($$); } | ';' @@ -1633,6 +1650,18 @@ opt_doc_list: { $$ = 0; } ; +opt_no_test: + TOK_NO_TEST + { $$ = true; } + | + { $$ = false; } + +opt_no_test_block: + TOK_NO_TEST + { $$ = true; brofiler.IncIgnoreDepth(); } + | + { $$ = false; } + %% int yyerror(const char msg[]) @@ -1650,6 +1679,9 @@ int yyerror(const char msg[]) strcat(msgbuf, "\nDocumentation mode is enabled: " "remember to check syntax of ## style comments\n"); + if ( in_debug ) + g_curr_debug_error = copy_string(msg); + reporter->Error("%s", msgbuf); return 0; diff --git a/src/patricia.c b/src/patricia.c index ef9c008f4b..1dbc795ab7 100644 --- a/src/patricia.c +++ b/src/patricia.c @@ -115,16 +115,12 @@ local_inet_pton (int af, const char *src, void *dst) } } #ifdef NT -#ifdef HAVE_IPV6 else if (af == AF_INET6) { struct in6_addr Address; return (inet6_addr(src, &Address)); } -#endif /* HAVE_IPV6 */ -#endif /* NT */ -#ifndef NT +#else else { - errno = EAFNOSUPPORT; return -1; } @@ -160,10 +156,8 @@ my_inet_pton (int af, const char *src, void *dst) } memcpy (dst, xp, 4); return (1); -#ifdef HAVE_IPV6 } else if (af == AF_INET6) { return (local_inet_pton (af, src, dst)); -#endif /* HAVE_IPV6 */ } else { #ifndef NT errno = EAFNOSUPPORT; @@ -217,7 +211,6 @@ prefix_toa2x (prefix_t *prefix, char *buff, int with_len) } return (buff); } -#ifdef HAVE_IPV6 else if (prefix->family == AF_INET6) { char *r; r = (char *) inet_ntop (AF_INET6, &prefix->add.sin6, buff, 48 /* a guess value */ ); @@ -227,7 +220,6 @@ prefix_toa2x (prefix_t *prefix, char *buff, int with_len) } return (buff); } -#endif /* HAVE_IPV6 */ else return (NULL); } @@ -255,17 +247,15 @@ New_Prefix2 (int family, void *dest, int bitlen, prefix_t *prefix) int dynamic_allocated = 0; int default_bitlen = 32; -#ifdef HAVE_IPV6 if (family == AF_INET6) { default_bitlen = 128; if (prefix == NULL) { - prefix = calloc(1, sizeof (prefix6_t)); + prefix = calloc(1, sizeof (prefix_t)); dynamic_allocated++; } memcpy (&prefix->add.sin6, dest, 16); } else -#endif /* HAVE_IPV6 */ if (family == AF_INET) { if 
(prefix == NULL) { #ifndef NT @@ -308,9 +298,7 @@ ascii2prefix (int family, char *string) u_long bitlen, maxbitlen = 0; char *cp; struct in_addr sin; -#ifdef HAVE_IPV6 struct in6_addr sin6; -#endif /* HAVE_IPV6 */ int result; char save[MAXLINE]; @@ -320,19 +308,15 @@ ascii2prefix (int family, char *string) /* easy way to handle both families */ if (family == 0) { family = AF_INET; -#ifdef HAVE_IPV6 if (strchr (string, ':')) family = AF_INET6; -#endif /* HAVE_IPV6 */ } if (family == AF_INET) { maxbitlen = 32; } -#ifdef HAVE_IPV6 else if (family == AF_INET6) { maxbitlen = 128; } -#endif /* HAVE_IPV6 */ if ((cp = strchr (string, '/')) != NULL) { bitlen = atol (cp + 1); @@ -355,7 +339,6 @@ ascii2prefix (int family, char *string) return (New_Prefix (AF_INET, &sin, bitlen)); } -#ifdef HAVE_IPV6 else if (family == AF_INET6) { // Get rid of this with next IPv6 upgrade #if defined(NT) && !defined(HAVE_INET_NTOP) @@ -367,7 +350,6 @@ ascii2prefix (int family, char *string) #endif /* NT */ return (New_Prefix (AF_INET6, &sin6, bitlen)); } -#endif /* HAVE_IPV6 */ else return (NULL); } diff --git a/src/patricia.h b/src/patricia.h index 4bc2f9b81f..dc67226362 100644 --- a/src/patricia.h +++ b/src/patricia.h @@ -52,13 +52,6 @@ #include - -#ifdef BROv6 -#ifndef HAVE_IPV6 -#define HAVE_IPV6 -#endif -#endif - /* typedef unsigned int u_int; */ typedef void (*void_fn_t)(); /* { from defs.h */ @@ -88,9 +81,7 @@ typedef struct _prefix_t { int ref_count; /* reference count */ union { struct in_addr sin; -#ifdef HAVE_IPV6 struct in6_addr sin6; -#endif /* IPV6 */ } add; } prefix_t; diff --git a/src/reporter.bif b/src/reporter.bif index 6b481eeb79..c0f83205c3 100644 --- a/src/reporter.bif +++ b/src/reporter.bif @@ -1,3 +1,11 @@ +##! The reporter built-in functions allow for the scripting layer to +##! generate messages of varying severity. If no event handlers +##! exist for reporter messages, the messages are output to stderr. +##! If event handlers do exist, it's assumed they take care of determining +##! how/where to output the messages. +##! +##! See :doc:`/scripts/base/frameworks/reporter/main` for a convenient +##! reporter message logging framework. module Reporter; @@ -5,6 +13,13 @@ module Reporter; #include "NetVar.h" %%} +## Generates an informational message. +## +## msg: The informational message to report. +## +## Returns: Always true. +## +## .. bro:see:: reporter_info function Reporter::info%(msg: string%): bool %{ reporter->PushLocation(frame->GetCall()->GetLocationInfo()); @@ -13,6 +28,13 @@ function Reporter::info%(msg: string%): bool return new Val(1, TYPE_BOOL); %} +## Generates a message that warns of a potential problem. +## +## msg: The warning message to report. +## +## Returns: Always true. +## +## .. bro:see:: reporter_warning function Reporter::warning%(msg: string%): bool %{ reporter->PushLocation(frame->GetCall()->GetLocationInfo()); @@ -21,6 +43,14 @@ function Reporter::warning%(msg: string%): bool return new Val(1, TYPE_BOOL); %} +## Generates a non-fatal error indicative of a definite problem that should +## be addressed. Program execution does not terminate. +## +## msg: The error message to report. +## +## Returns: Always true. +## +## .. bro:see:: reporter_error function Reporter::error%(msg: string%): bool %{ reporter->PushLocation(frame->GetCall()->GetLocationInfo()); @@ -29,6 +59,11 @@ function Reporter::error%(msg: string%): bool return new Val(1, TYPE_BOOL); %} +## Generates a fatal error on stderr and terminates program execution. +## +## msg: The error message to report. 
+## +## Returns: Always true. function Reporter::fatal%(msg: string%): bool %{ reporter->PushLocation(frame->GetCall()->GetLocationInfo()); diff --git a/src/rule-parse.y b/src/rule-parse.y index c8770c3e22..47346eb7b9 100644 --- a/src/rule-parse.y +++ b/src/rule-parse.y @@ -1,13 +1,30 @@ %{ #include +#include +#include +#include "config.h" #include "RuleMatcher.h" #include "Reporter.h" +#include "IPAddr.h" +#include "net_util.h" extern void begin_PS(); extern void end_PS(); Rule* current_rule = 0; const char* current_rule_file = 0; + +static uint8_t mask_to_len(uint32_t mask) + { + if ( mask == 0xffffffff ) + return 32; + + uint32_t x = ~mask + 1; + uint8_t len; + for ( len = 0; len < 32 && (! (x & (1 << len))); ++len ); + + return len; + } %} %token TOK_COMP @@ -21,6 +38,7 @@ const char* current_rule_file = 0; %token TOK_IDENT %token TOK_INT %token TOK_IP +%token TOK_IP6 %token TOK_IP_OPTIONS %token TOK_IP_OPTION_SYM %token TOK_IP_PROTO @@ -49,7 +67,9 @@ const char* current_rule_file = 0; %type hdr_expr %type range rangeopt %type value_list +%type prefix_value_list %type TOK_IP value +%type TOK_IP6 prefix_value %type TOK_PROT %type TOK_PATTERN_TYPE @@ -57,6 +77,8 @@ const char* current_rule_file = 0; Rule* rule; RuleHdrTest* hdr_test; maskedvalue_list* vallist; + vector* prefix_val_list; + IPPrefix* prefixval; bool bl; int val; @@ -91,11 +113,11 @@ rule_attr_list: ; rule_attr: - TOK_DST_IP TOK_COMP value_list + TOK_DST_IP TOK_COMP prefix_value_list { current_rule->AddHdrTest(new RuleHdrTest( - RuleHdrTest::IP, 16, 4, - (RuleHdrTest::Comp) $2, $3)); + RuleHdrTest::IPDst, + (RuleHdrTest::Comp) $2, *($3))); } | TOK_DST_PORT TOK_COMP value_list @@ -123,10 +145,14 @@ rule_attr: { int proto = 0; switch ( $3 ) { - case RuleHdrTest::ICMP: proto = 1; break; + case RuleHdrTest::ICMP: proto = IPPROTO_ICMP; break; + case RuleHdrTest::ICMPv6: proto = IPPROTO_ICMPV6; break; + // signature matching against outer packet headers of IP-in-IP + // tunneling not supported, so do a no-op there case RuleHdrTest::IP: proto = 0; break; - case RuleHdrTest::TCP: proto = 6; break; - case RuleHdrTest::UDP: proto = 17; break; + case RuleHdrTest::IPv6: proto = 0; break; + case RuleHdrTest::TCP: proto = IPPROTO_TCP; break; + case RuleHdrTest::UDP: proto = IPPROTO_UDP; break; default: rules_error("internal_error: unknown protocol"); } @@ -140,16 +166,20 @@ rule_attr: val->mask = 0xffffffff; vallist->append(val); + // offset & size params are dummies, actual next proto value in + // header is retrieved dynamically via IP_Hdr::NextProto() current_rule->AddHdrTest(new RuleHdrTest( - RuleHdrTest::IP, 9, 1, + RuleHdrTest::NEXT, 0, 0, (RuleHdrTest::Comp) $2, vallist)); } } | TOK_IP_PROTO TOK_COMP value_list { + // offset & size params are dummies, actual next proto value in + // header is retrieved dynamically via IP_Hdr::NextProto() current_rule->AddHdrTest(new RuleHdrTest( - RuleHdrTest::IP, 9, 1, + RuleHdrTest::NEXT, 0, 0, (RuleHdrTest::Comp) $2, $3)); } @@ -193,11 +223,11 @@ rule_attr: | TOK_SAME_IP { current_rule->AddCondition(new RuleConditionSameIP()); } - | TOK_SRC_IP TOK_COMP value_list + | TOK_SRC_IP TOK_COMP prefix_value_list { current_rule->AddHdrTest(new RuleHdrTest( - RuleHdrTest::IP, 12, 4, - (RuleHdrTest::Comp) $2, $3)); + RuleHdrTest::IPSrc, + (RuleHdrTest::Comp) $2, *($3))); } | TOK_SRC_PORT TOK_COMP value_list @@ -254,6 +284,38 @@ value_list: } ; +prefix_value_list: + prefix_value_list ',' prefix_value + { + $$ = $1; + $$->push_back(*($3)); + } + | prefix_value_list ',' TOK_IDENT + { + $$ = $1; + 
id_to_maskedvallist($3, 0, $1); + } + | prefix_value + { + $$ = new vector(); + $$->push_back(*($1)); + } + | TOK_IDENT + { + $$ = new vector(); + id_to_maskedvallist($1, 0, $$); + } + ; + +prefix_value: + TOK_IP + { + $$ = new IPPrefix(IPAddr(IPv4, &($1.val), IPAddr::Host), + mask_to_len($1.mask)); + } + | TOK_IP6 + ; + value: TOK_INT { $$.val = $1; $$.mask = 0xffffffff; } diff --git a/src/rule-scan.l b/src/rule-scan.l index 1ba9bed1de..d516a98e89 100644 --- a/src/rule-scan.l +++ b/src/rule-scan.l @@ -1,24 +1,38 @@ %{ -typedef unsigned int uint32; - #include +#include #include #include #include #include #include "RuleMatcher.h" +#include "IPAddr.h" +#include "util.h" #include "rule-parse.h" int rules_line_number = 0; + +static string extract_ipv6(string s) + { + if ( s.substr(0, 3) == "[0x" ) + s = s.substr(3, s.find("]") - 3); + else + s = s.substr(1, s.find("]") - 1); + + return s; + } %} %x PS +OWS [ \t]* WS [ \t]+ D [0-9]+ H [0-9a-fA-F]+ +HEX {H} STRING \"([^\n\"]|\\\")*\" -ID [0-9a-zA-Z_-]+ +ID ([0-9a-zA-Z_-]+::)*[0-9a-zA-Z_-]+ +IP6 ("["({HEX}:){7}{HEX}"]")|("["0x{HEX}({HEX}|:)*"::"({HEX}|:)*"]")|("["({HEX}|:)*"::"({HEX}|:)*"]")|("["({HEX}|:)*"::"({HEX}|:)*({D}"."){3}{D}"]") RE \/(\\\/)?([^/]|[^\\]\\\/)*\/ META \.[^ \t]+{WS}[^\n]+ PID ([0-9a-zA-Z_-]|"::")+ @@ -34,6 +48,19 @@ PID ([0-9a-zA-Z_-]|"::")+ \n ++rules_line_number; } +{IP6} { + rules_lval.prefixval = new IPPrefix(IPAddr(extract_ipv6(yytext)), 128); + return TOK_IP6; + } + +{IP6}{OWS}"/"{OWS}{D} { + char* l = strchr(yytext, '/'); + *l++ = '\0'; + int len = atoi(l); + rules_lval.prefixval = new IPPrefix(IPAddr(extract_ipv6(yytext)), len); + return TOK_IP6; + } + [!\]\[{}&:,] return rules_text[0]; "<=" { rules_lval.val = RuleHdrTest::LE; return TOK_COMP; } @@ -45,7 +72,9 @@ PID ([0-9a-zA-Z_-]|"::")+ "!=" { rules_lval.val = RuleHdrTest::NE; return TOK_COMP; } ip { rules_lval.val = RuleHdrTest::IP; return TOK_PROT; } +ip6 { rules_lval.val = RuleHdrTest::IPv6; return TOK_PROT; } icmp { rules_lval.val = RuleHdrTest::ICMP; return TOK_PROT; } +icmp6 { rules_lval.val = RuleHdrTest::ICMPv6; return TOK_PROT; } tcp { rules_lval.val = RuleHdrTest::TCP; return TOK_PROT; } udp { rules_lval.val = RuleHdrTest::UDP; return TOK_PROT; } @@ -123,7 +152,7 @@ http { rules_lval.val = Rule::HTTP_REQUEST; return TOK_PATTERN_TYPE; } ftp { rules_lval.val = Rule::FTP; return TOK_PATTERN_TYPE; } finger { rules_lval.val = Rule::FINGER; return TOK_PATTERN_TYPE; } -{D}("."{D}){3}"/"{D} { +{D}("."{D}){3}{OWS}"/"{OWS}{D} { char* s = strchr(yytext, '/'); *s++ = '\0'; diff --git a/src/scan.l b/src/scan.l index 7ebd7894e1..6c87766781 100644 --- a/src/scan.l +++ b/src/scan.l @@ -167,7 +167,7 @@ ESCSEQ (\\([^\n]|[0-7]+|x[[:xdigit:]]+)) return TOK_POST_DOC; } -##{OWS}{ID}:.* { +##{OWS}{ID}:{WS}.* { const char* id_start = skip_whitespace(yytext + 2); yylval.str = copy_string(canon_doc_func_param(id_start).c_str()); return TOK_DOC; @@ -181,7 +181,7 @@ ESCSEQ (\\([^\n]|[0-7]+|x[[:xdigit:]]+)) } } -##{OWS}{ID}:.* { +##{OWS}{ID}:{WS}.* { if ( generate_documentation ) { // Comment is documenting either a function parameter or return type, @@ -201,6 +201,11 @@ ESCSEQ (\\([^\n]|[0-7]+|x[[:xdigit:]]+)) } } +##<.* { + if ( generate_documentation && BroDocObj::last ) + BroDocObj::last->AddDocString(canon_doc_comment(yytext + 3)); +} + ##.* { if ( generate_documentation && (yytext[2] != '#') ) { @@ -211,6 +216,8 @@ ESCSEQ (\\([^\n]|[0-7]+|x[[:xdigit:]]+)) } } +#{OWS}@no-test.* return TOK_NO_TEST; + #.* /* eat comments */ {WS} /* eat whitespace */ @@ -221,6 +228,24 @@ ESCSEQ 
(\\([^\n]|[0-7]+|x[[:xdigit:]]+)) ++yylloc.last_line; } + /* IPv6 literal constant patterns */ +"["({HEX}:){7}{HEX}"]" { + string s(yytext+1); + RET_CONST(new AddrVal(s.erase(s.size()-1))) +} +"["0x{HEX}({HEX}|:)*"::"({HEX}|:)*"]" { + string s(yytext+3); + RET_CONST(new AddrVal(s.erase(s.size()-1))) +} +"["({HEX}|:)*"::"({HEX}|:)*"]" { + string s(yytext+1); + RET_CONST(new AddrVal(s.erase(s.size()-1))) +} +"["({HEX}|:)*"::"({HEX}|:)*({D}"."){3}{D}"]" { + string s(yytext+1); + RET_CONST(new AddrVal(s.erase(s.size()-1))) +} + [!%*/+\-,:;<=>?()\[\]{}~$|] return yytext[0]; "--" return TOK_DECR; @@ -266,7 +291,6 @@ int return TOK_INT; interval return TOK_INTERVAL; list return TOK_LIST; local return TOK_LOCAL; -match return TOK_MATCH; module return TOK_MODULE; next return TOK_NEXT; of return TOK_OF; @@ -282,13 +306,11 @@ string return TOK_STRING; subnet return TOK_SUBNET; switch return TOK_SWITCH; table return TOK_TABLE; -this return TOK_THIS; time return TOK_TIME; timeout return TOK_TIMEOUT; timer return TOK_TIMER; type return TOK_TYPE; union return TOK_UNION; -using return TOK_USING; vector return TOK_VECTOR; when return TOK_WHEN; @@ -297,7 +319,6 @@ when return TOK_WHEN; &create_expire return TOK_ATTR_EXPIRE_CREATE; &default return TOK_ATTR_DEFAULT; &delete_func return TOK_ATTR_DEL_FUNC; -&disable_print_hook return TOK_ATTR_DISABLE_PRINT_HOOK; &raw_output return TOK_ATTR_RAW_OUTPUT; &encrypt return TOK_ATTR_ENCRYPT; &error_handler return TOK_ATTR_ERROR_HANDLER; @@ -308,6 +329,7 @@ when return TOK_WHEN; &optional return TOK_ATTR_OPTIONAL; &persistent return TOK_ATTR_PERSISTENT; &priority return TOK_ATTR_PRIORITY; +&type_column return TOK_ATTR_TYPE_COLUMN; &read_expire return TOK_ATTR_EXPIRE_READ; &redef return TOK_ATTR_REDEF; &rotate_interval return TOK_ATTR_ROTATE_INTERVAL; @@ -334,6 +356,22 @@ when return TOK_WHEN; (void) load_files(new_file); } +@load-sigs{WS}{FILE} { + const char* new_sig_file = skip_whitespace(yytext + 10); + const char* full_filename = 0; + FILE* f = search_for_file(new_sig_file, "sig", &full_filename, false, 0); + + if ( f ) + { + sig_files.push_back(full_filename); + fclose(f); + delete [] full_filename; + } + else + reporter->Error("failed to find file associated with @load-sigs %s", + new_sig_file); + } + @unload{WS}{FILE} { // Skip "@unload". const char* new_file = skip_whitespace(yytext + 7); @@ -397,9 +435,7 @@ F RET_CONST(new Val(false, TYPE_BOOL)) } {D} { - // TODO: check if we can use strtoull instead of atol, - // and similarly for {HEX}. - RET_CONST(new Val(static_cast(atol(yytext)), + RET_CONST(new Val(static_cast(strtoull(yytext, (char**) NULL, 10)), TYPE_COUNT)) } {FLOAT} RET_CONST(new Val(atof(yytext), TYPE_DOUBLE)) @@ -441,17 +477,6 @@ F RET_CONST(new Val(false, TYPE_BOOL)) RET_CONST(new PortVal(p, TRANSPORT_UNKNOWN)) } -({D}"."){3}{D} RET_CONST(new AddrVal(yytext)) - -({HEX}:){7}{HEX} RET_CONST(new AddrVal(yytext)) - -0x{HEX}({HEX}|:)*"::"({HEX}|:)* RET_CONST(new AddrVal(yytext+2)) -(({D}|:)({HEX}|:)*)?"::"({HEX}|:)* RET_CONST(new AddrVal(yytext)) - -"0x"{HEX}+ RET_CONST(new Val(static_cast(strtol(yytext, 0, 16)), TYPE_COUNT)) - -{H}("."{H})+ RET_CONST(dns_mgr->LookupHost(yytext)) - {FLOAT}{OWS}day(s?) RET_CONST(new IntervalVal(atof(yytext),Days)) {FLOAT}{OWS}hr(s?) RET_CONST(new IntervalVal(atof(yytext),Hours)) {FLOAT}{OWS}min(s?) RET_CONST(new IntervalVal(atof(yytext),Minutes)) @@ -459,6 +484,12 @@ F RET_CONST(new Val(false, TYPE_BOOL)) {FLOAT}{OWS}msec(s?) RET_CONST(new IntervalVal(atof(yytext),Milliseconds)) {FLOAT}{OWS}usec(s?) 
RET_CONST(new IntervalVal(atof(yytext),Microseconds)) +({D}"."){3}{D} RET_CONST(new AddrVal(yytext)) + +"0x"{HEX}+ RET_CONST(new Val(static_cast(strtoull(yytext, 0, 16)), TYPE_COUNT)) + +{H}("."{H})+ RET_CONST(dns_mgr->LookupHost(yytext)) + \"([^\\\n\"]|{ESCSEQ})*\" { const char* text = yytext; int len = strlen(text) + 1; diff --git a/src/socks-analyzer.pac b/src/socks-analyzer.pac new file mode 100644 index 0000000000..7ce364670b --- /dev/null +++ b/src/socks-analyzer.pac @@ -0,0 +1,170 @@ + +%header{ +StringVal* array_to_string(vector *a); +%} + +%code{ +StringVal* array_to_string(vector *a) + { + int len = a->size(); + char tmp[len]; + char *s = tmp; + for ( vector::iterator i = a->begin(); i != a->end(); *s++ = *i++ ); + + while ( len > 0 && tmp[len-1] == '\0' ) + --len; + + return new StringVal(len, tmp); + } +%} + +refine connection SOCKS_Conn += { + + function socks4_request(request: SOCKS4_Request): bool + %{ + RecordVal* sa = new RecordVal(socks_address); + sa->Assign(0, new AddrVal(htonl(${request.addr}))); + if ( ${request.v4a} ) + sa->Assign(1, array_to_string(${request.name})); + + BifEvent::generate_socks_request(bro_analyzer(), + bro_analyzer()->Conn(), + 4, + ${request.command}, + sa, + new PortVal(${request.port} | TCP_PORT_MASK), + array_to_string(${request.user})); + + static_cast(bro_analyzer())->EndpointDone(true); + + return true; + %} + + function socks4_reply(reply: SOCKS4_Reply): bool + %{ + RecordVal* sa = new RecordVal(socks_address); + sa->Assign(0, new AddrVal(htonl(${reply.addr}))); + + BifEvent::generate_socks_reply(bro_analyzer(), + bro_analyzer()->Conn(), + 4, + ${reply.status}, + sa, + new PortVal(${reply.port} | TCP_PORT_MASK)); + + bro_analyzer()->ProtocolConfirmation(); + static_cast(bro_analyzer())->EndpointDone(false); + return true; + %} + + function socks5_request(request: SOCKS5_Request): bool + %{ + if ( ${request.reserved} != 0 ) + { + bro_analyzer()->ProtocolViolation(fmt("invalid value in reserved field: %d", ${request.reserved})); + return false; + } + + RecordVal* sa = new RecordVal(socks_address); + + // This is dumb and there must be a better way (checking for presence of a field)... + switch ( ${request.remote_name.addr_type} ) + { + case 1: + sa->Assign(0, new AddrVal(htonl(${request.remote_name.ipv4}))); + break; + + case 3: + sa->Assign(1, new StringVal(${request.remote_name.domain_name.name}.length(), + (const char*) ${request.remote_name.domain_name.name}.data())); + break; + + case 4: + sa->Assign(0, new AddrVal(IPAddr(IPv6, (const uint32_t*) ${request.remote_name.ipv6}, IPAddr::Network))); + break; + + default: + bro_analyzer()->ProtocolViolation(fmt("invalid SOCKSv5 addr type: %d", ${request.remote_name.addr_type})); + return false; + break; + } + + BifEvent::generate_socks_request(bro_analyzer(), + bro_analyzer()->Conn(), + 5, + ${request.command}, + sa, + new PortVal(${request.port} | TCP_PORT_MASK), + new StringVal("")); + + static_cast(bro_analyzer())->EndpointDone(true); + + return true; + %} + + function socks5_reply(reply: SOCKS5_Reply): bool + %{ + RecordVal* sa = new RecordVal(socks_address); + + // This is dumb and there must be a better way (checking for presence of a field)... 
+ switch ( ${reply.bound.addr_type} ) + { + case 1: + sa->Assign(0, new AddrVal(htonl(${reply.bound.ipv4}))); + break; + + case 3: + sa->Assign(1, new StringVal(${reply.bound.domain_name.name}.length(), + (const char*) ${reply.bound.domain_name.name}.data())); + break; + + case 4: + sa->Assign(0, new AddrVal(IPAddr(IPv6, (const uint32_t*) ${reply.bound.ipv6}, IPAddr::Network))); + break; + + default: + bro_analyzer()->ProtocolViolation(fmt("invalid SOCKSv5 addr type: %d", ${reply.bound.addr_type})); + return false; + break; + } + + BifEvent::generate_socks_reply(bro_analyzer(), + bro_analyzer()->Conn(), + 5, + ${reply.reply}, + sa, + new PortVal(${reply.port} | TCP_PORT_MASK)); + + bro_analyzer()->ProtocolConfirmation(); + static_cast(bro_analyzer())->EndpointDone(false); + return true; + %} + + function version_error(version: uint8): bool + %{ + bro_analyzer()->ProtocolViolation(fmt("unsupported/unknown SOCKS version %d", version)); + return true; + %} + + +}; + +refine typeattr SOCKS_Version_Error += &let { + proc: bool = $context.connection.version_error(version); +}; + +refine typeattr SOCKS4_Request += &let { + proc: bool = $context.connection.socks4_request(this); +}; + +refine typeattr SOCKS4_Reply += &let { + proc: bool = $context.connection.socks4_reply(this); +}; + +refine typeattr SOCKS5_Request += &let { + proc: bool = $context.connection.socks5_request(this); +}; + +refine typeattr SOCKS5_Reply += &let { + proc: bool = $context.connection.socks5_reply(this); +}; diff --git a/src/socks-protocol.pac b/src/socks-protocol.pac new file mode 100644 index 0000000000..05ca4bc861 --- /dev/null +++ b/src/socks-protocol.pac @@ -0,0 +1,119 @@ + +type SOCKS_Version(is_orig: bool) = record { + version: uint8; + msg: case version of { + 4 -> socks4_msg: SOCKS4_Message(is_orig); + 5 -> socks5_msg: SOCKS5_Message(is_orig); + default -> socks_msg_fail: SOCKS_Version_Error(version); + }; +}; + +type SOCKS_Version_Error(version: uint8) = record { + nothing: empty; +}; + +# SOCKS5 Implementation +type SOCKS5_Message(is_orig: bool) = case $context.connection.v5_past_authentication() of { + true -> msg: SOCKS5_Real_Message(is_orig); + false -> auth: SOCKS5_Auth_Negotiation(is_orig); +}; + +type SOCKS5_Auth_Negotiation(is_orig: bool) = case is_orig of { + true -> req: SOCKS5_Auth_Negotiation_Request; + false -> rep: SOCKS5_Auth_Negotiation_Reply; +}; + +type SOCKS5_Auth_Negotiation_Request = record { + method_count: uint8; + methods: uint8[method_count]; +}; + +type SOCKS5_Auth_Negotiation_Reply = record { + selected_auth_method: uint8; +} &let { + past_auth = $context.connection.set_v5_past_authentication(); +}; + +type SOCKS5_Real_Message(is_orig: bool) = case is_orig of { + true -> request: SOCKS5_Request; + false -> reply: SOCKS5_Reply; +}; + +type Domain_Name = record { + len: uint8; + name: bytestring &length=len; +} &byteorder = bigendian; + +type SOCKS5_Address = record { + addr_type: uint8; + addr: case addr_type of { + 1 -> ipv4: uint32; + 3 -> domain_name: Domain_Name; + 4 -> ipv6: uint32[4]; + default -> err: bytestring &restofdata &transient; + }; +} &byteorder = bigendian; + +type SOCKS5_Request = record { + command: uint8; + reserved: uint8; + remote_name: SOCKS5_Address; + port: uint16; +} &byteorder = bigendian; + +type SOCKS5_Reply = record { + reply: uint8; + reserved: uint8; + bound: SOCKS5_Address; + port: uint16; +} &byteorder = bigendian; + + +# SOCKS4 Implementation +type SOCKS4_Message(is_orig: bool) = case is_orig of { + true -> request: SOCKS4_Request; + false -> reply: 
SOCKS4_Reply; +}; + +type SOCKS4_Request = record { + command: uint8; + port: uint16; + addr: uint32; + user: uint8[] &until($element == 0); + host: case v4a of { + true -> name: uint8[] &until($element == 0); # v4a + false -> empty: uint8[] &length=0; + } &requires(v4a); +} &byteorder = bigendian &let { + v4a: bool = (addr <= 0x000000ff); +}; + +type SOCKS4_Reply = record { + zero: uint8; + status: uint8; + port: uint16; + addr: uint32; +} &byteorder = bigendian; + + +refine connection SOCKS_Conn += { + %member{ + bool v5_authenticated_; + %} + + %init{ + v5_authenticated_ = false; + %} + + function v5_past_authentication(): bool + %{ + return v5_authenticated_; + %} + + function set_v5_past_authentication(): bool + %{ + v5_authenticated_ = true; + return true; + %} +}; + diff --git a/src/socks.pac b/src/socks.pac new file mode 100644 index 0000000000..15d3580674 --- /dev/null +++ b/src/socks.pac @@ -0,0 +1,24 @@ +%include binpac.pac +%include bro.pac + +%extern{ +#include "SOCKS.h" +%} + +analyzer SOCKS withcontext { + connection: SOCKS_Conn; + flow: SOCKS_Flow; +}; + +connection SOCKS_Conn(bro_analyzer: BroAnalyzer) { + upflow = SOCKS_Flow(true); + downflow = SOCKS_Flow(false); +}; + +%include socks-protocol.pac + +flow SOCKS_Flow(is_orig: bool) { + datagram = SOCKS_Version(is_orig) withcontext(connection, this); +}; + +%include socks-analyzer.pac \ No newline at end of file diff --git a/src/ssl-analyzer.pac b/src/ssl-analyzer.pac index 6471d9c4a4..3d9564eaab 100644 --- a/src/ssl-analyzer.pac +++ b/src/ssl-analyzer.pac @@ -22,11 +22,18 @@ } }; + string orig_label(bool is_orig); void free_X509(void *); X509* d2i_X509_binpac(X509** px, const uint8** in, int len); + string handshake_type_label(int type); %} %code{ +string orig_label(bool is_orig) + { + return string(is_orig ? "originator" :"responder"); + } + void free_X509(void* cert) { X509_free((X509*) cert); @@ -40,6 +47,27 @@ return d2i_X509(px, (u_char**) in, len); #endif } + + string handshake_type_label(int type) + { + switch ( type ) { + case HELLO_REQUEST: return string("HELLO_REQUEST"); + case CLIENT_HELLO: return string("CLIENT_HELLO"); + case SERVER_HELLO: return string("SERVER_HELLO"); + case SESSION_TICKET: return string("SESSION_TICKET"); + case CERTIFICATE: return string("CERTIFICATE"); + case SERVER_KEY_EXCHANGE: return string("SERVER_KEY_EXCHANGE"); + case CERTIFICATE_REQUEST: return string("CERTIFICATE_REQUEST"); + case SERVER_HELLO_DONE: return string("SERVER_HELLO_DONE"); + case CERTIFICATE_VERIFY: return string("CERTIFICATE_VERIFY"); + case CLIENT_KEY_EXCHANGE: return string("CLIENT_KEY_EXCHANGE"); + case FINISHED: return string("FINISHED"); + case CERTIFICATE_URL: return string("CERTIFICATE_URL"); + case CERTIFICATE_STATUS: return string("CERTIFICATE_STATUS"); + default: return string(fmt("UNKNOWN (%d)", type)); + } + } + %} @@ -65,6 +93,7 @@ function version_ok(vers : uint16) : bool case SSLv30: case TLSv10: case TLSv11: + case TLSv12: return true; default: @@ -82,15 +111,15 @@ refine connection SSL_Conn += { eof=0; %} - %eof{ - if ( ! eof && - state_ != STATE_CONN_ESTABLISHED && - state_ != STATE_TRACK_LOST && - state_ != STATE_INITIAL ) - bro_analyzer()->ProtocolViolation(fmt("unexpected end of connection in state %s", - state_label(state_).c_str())); - ++eof; - %} + #%eof{ + # if ( ! 
eof && + # state_ != STATE_CONN_ESTABLISHED && + # state_ != STATE_TRACK_LOST && + # state_ != STATE_INITIAL ) + # bro_analyzer()->ProtocolViolation(fmt("unexpected end of connection in state %s", + # state_label(state_).c_str())); + # ++eof; + #%} %cleanup{ %} @@ -117,28 +146,23 @@ refine connection SSL_Conn += { function proc_alert(rec: SSLRecord, level : int, desc : int) : bool %{ BifEvent::generate_ssl_alert(bro_analyzer(), bro_analyzer()->Conn(), - level, desc); + ${rec.is_orig}, level, desc); return true; %} function proc_client_hello(rec: SSLRecord, version : uint16, ts : double, session_id : uint8[], - cipher_suites16 : uint16[], + cipher_suites16 : uint16[], cipher_suites24 : uint24[]) : bool %{ - if ( state_ == STATE_TRACK_LOST ) - bro_analyzer()->ProtocolViolation(fmt("unexpected client hello message from %s in state %s", - orig_label(${rec.is_orig}).c_str(), - state_label(old_state_).c_str())); - if ( ! version_ok(version) ) bro_analyzer()->ProtocolViolation(fmt("unsupported client SSL version 0x%04x", version)); if ( ssl_client_hello ) { vector* cipher_suites = new vector(); - if ( cipher_suites16 ) + if ( cipher_suites16 ) std::copy(cipher_suites16->begin(), cipher_suites16->end(), std::back_inserter(*cipher_suites)); else std::transform(cipher_suites24->begin(), cipher_suites24->end(), std::back_inserter(*cipher_suites), to_int()); @@ -150,15 +174,15 @@ refine connection SSL_Conn += { cipher_set->Assign(ciph, 0); Unref(ciph); } - + BifEvent::generate_ssl_client_hello(bro_analyzer(), bro_analyzer()->Conn(), version, ts, to_string_val(session_id), cipher_set); - + delete cipher_suites; } - + return true; %} @@ -169,11 +193,6 @@ refine connection SSL_Conn += { cipher_suites24 : uint24[], comp_method : uint8) : bool %{ - if ( state_ == STATE_TRACK_LOST ) - bro_analyzer()->ProtocolViolation(fmt("unexpected server hello message from %s in state %s", - orig_label(${rec.is_orig}).c_str(), - state_label(old_state_).c_str())); - if ( ! version_ok(version) ) bro_analyzer()->ProtocolViolation(fmt("unsupported server SSL version 0x%04x", version)); else @@ -187,42 +206,49 @@ refine connection SSL_Conn += { std::copy(cipher_suites16->begin(), cipher_suites16->end(), std::back_inserter(*ciphers)); else std::transform(cipher_suites24->begin(), cipher_suites24->end(), std::back_inserter(*ciphers), to_int()); - + BifEvent::generate_ssl_server_hello(bro_analyzer(), bro_analyzer()->Conn(), version, ts, to_string_val(session_id), ciphers->size()==0 ? 
0 : ciphers->at(0), comp_method); - + delete ciphers; } - + return true; %} - function proc_ssl_extension(type: int, data: bytestring) : bool + function proc_session_ticket_handshake(rec: SessionTicketHandshake, is_orig: bool): bool + %{ + if ( ssl_session_ticket_handshake ) + { + BifEvent::generate_ssl_session_ticket_handshake(bro_analyzer(), + bro_analyzer()->Conn(), + ${rec.ticket_lifetime_hint}, + new StringVal(${rec.data}.length(), (const char*) ${rec.data}.data())); + } + return true; + %} + + function proc_ssl_extension(rec: SSLRecord, type: int, data: bytestring) : bool %{ if ( ssl_extension ) BifEvent::generate_ssl_extension(bro_analyzer(), - bro_analyzer()->Conn(), type, + bro_analyzer()->Conn(), ${rec.is_orig}, type, new StringVal(data.length(), (const char*) data.data())); return true; %} function proc_certificate(rec: SSLRecord, certificates : bytestring[]) : bool %{ - if ( state_ == STATE_TRACK_LOST ) - bro_analyzer()->ProtocolViolation(fmt("unexpected certificate message from %s in state %s", - orig_label(${rec.is_orig}).c_str(), - state_label(old_state_).c_str())); - if ( certificates->size() == 0 ) return true; if ( x509_certificate ) { STACK_OF(X509)* untrusted_certs = 0; - + for ( unsigned int i = 0; i < certificates->size(); ++i ) { const bytestring& cert = (*certificates)[i]; @@ -231,7 +257,7 @@ refine connection SSL_Conn += { if ( ! pTemp ) { BifEvent::generate_x509_error(bro_analyzer(), bro_analyzer()->Conn(), - ERR_get_error()); + ${rec.is_orig}, ERR_get_error()); return false; } @@ -257,12 +283,13 @@ refine connection SSL_Conn += { StringVal* der_cert = new StringVal(cert.length(), (const char*) cert.data()); BifEvent::generate_x509_certificate(bro_analyzer(), bro_analyzer()->Conn(), + ${rec.is_orig}, pX509Cert, - ! ${rec.is_orig}, i, certificates->size(), der_cert); // Are there any X509 extensions? + //printf("Number of x509 extensions: %d\n", X509_get_ext_count(pTemp)); if ( x509_extension && X509_get_ext_count(pTemp) > 0 ) { int num_ext = X509_get_ext_count(pTemp); @@ -277,14 +304,14 @@ refine connection SSL_Conn += { ASN1_STRING *pString = X509_EXTENSION_get_data(ex); length = ASN1_STRING_to_UTF8(&pBuffer, pString); //i2t_ASN1_OBJECT(&pBuffer, length, obj) - + // printf("extension length: %d\n", length); // -1 indicates an error. 
- if ( length < 0 ) - continue; - - StringVal* value = new StringVal(length, (char*)pBuffer); - BifEvent::generate_x509_extension(bro_analyzer(), - bro_analyzer()->Conn(), value); + if ( length >= 0 ) + { + StringVal* value = new StringVal(length, (char*)pBuffer); + BifEvent::generate_x509_extension(bro_analyzer(), + bro_analyzer()->Conn(), ${rec.is_orig}, value); + } OPENSSL_free(pBuffer); } } @@ -343,6 +370,7 @@ refine connection SSL_Conn += { handshake_type_label(${hs.msg_type}).c_str(), orig_label(is_orig).c_str(), state_label(old_state_).c_str())); + return true; %} @@ -436,6 +464,10 @@ refine typeattr Handshake += &let { proc : bool = $context.connection.proc_handshake(this, rec.is_orig); }; +refine typeattr SessionTicketHandshake += &let { + proc : bool = $context.connection.proc_session_ticket_handshake(this, rec.is_orig); +} + refine typeattr UnknownRecord += &let { proc : bool = $context.connection.proc_unknown_record(rec); }; @@ -445,5 +477,5 @@ refine typeattr CiphertextRecord += &let { } refine typeattr SSLExtension += &let { - proc : bool = $context.connection.proc_ssl_extension(type, data); + proc : bool = $context.connection.proc_ssl_extension(rec, type, data); }; diff --git a/src/ssl-defs.pac b/src/ssl-defs.pac index 31d90338f5..4f715bbddd 100644 --- a/src/ssl-defs.pac +++ b/src/ssl-defs.pac @@ -17,39 +17,11 @@ enum ContentType { UNKNOWN_OR_V2_ENCRYPTED = 400 }; -%code{ - string* record_type_label(int type) - { - switch ( type ) { - case CHANGE_CIPHER_SPEC: - return new string("CHANGE_CIPHER_SPEC"); - case ALERT: - return new string("ALERT"); - case HANDSHAKE: - return new string("HANDSHAKE"); - case APPLICATION_DATA: - return new string("APPLICATION_DATA"); - case V2_ERROR: - return new string("V2_ERROR"); - case V2_CLIENT_HELLO: - return new string("V2_CLIENT_HELLO"); - case V2_CLIENT_MASTER_KEY: - return new string("V2_CLIENT_MASTER_KEY"); - case V2_SERVER_HELLO: - return new string("V2_SERVER_HELLO"); - case UNKNOWN_OR_V2_ENCRYPTED: - return new string("UNKNOWN_OR_V2_ENCRYPTED"); - - default: - return new string(fmt("UNEXPECTED (%d)", type)); - } - } -%} - enum SSLVersions { UNKNOWN_VERSION = 0x0000, SSLv20 = 0x0002, SSLv30 = 0x0300, TLSv10 = 0x0301, - TLSv11 = 0x0302 + TLSv11 = 0x0302, + TLSv12 = 0x0303 }; diff --git a/src/ssl-protocol.pac b/src/ssl-protocol.pac index f60d73b27e..0019478518 100644 --- a/src/ssl-protocol.pac +++ b/src/ssl-protocol.pac @@ -22,9 +22,7 @@ type uint24 = record { }; string state_label(int state_nr); - string orig_label(bool is_orig); double get_time_from_asn1(const ASN1_TIME * atime); - string handshake_type_label(int type); %} extern type to_int; @@ -35,7 +33,7 @@ type SSLRecord(is_orig: bool) = record { head2 : uint8; head3 : uint8; head4 : uint8; - rec : RecordText(this, is_orig)[] &length=length, &requires(content_type); + rec : RecordText(this)[] &length=length, &requires(content_type); } &length = length+5, &byteorder=bigendian, &let { version : int = @@ -54,25 +52,18 @@ type SSLRecord(is_orig: bool) = record { }; }; -type RecordText(rec: SSLRecord, is_orig: bool) = case $context.connection.state() of { +type RecordText(rec: SSLRecord) = case $context.connection.state() of { STATE_ABBREV_SERVER_ENCRYPTED, STATE_CLIENT_ENCRYPTED, STATE_COMM_ENCRYPTED, STATE_CONN_ESTABLISHED - -> ciphertext : CiphertextRecord(rec, is_orig); + -> ciphertext : CiphertextRecord(rec); default - -> plaintext : PlaintextRecord(rec, is_orig); + -> plaintext : PlaintextRecord(rec); }; -type PossibleEncryptedHandshake(rec: SSLRecord, is_orig: bool) = case 
$context.connection.state() of { - # Deal with encrypted handshakes before the server cipher spec change. - STATE_CLIENT_FINISHED, STATE_CLIENT_ENCRYPTED - -> ct : CiphertextRecord(rec, is_orig); - default -> hs : Handshake(rec); -}; - -type PlaintextRecord(rec: SSLRecord, is_orig: bool) = case rec.content_type of { +type PlaintextRecord(rec: SSLRecord) = case rec.content_type of { CHANGE_CIPHER_SPEC -> ch_cipher : ChangeCipherSpec(rec); ALERT -> alert : Alert(rec); - HANDSHAKE -> handshake : PossibleEncryptedHandshake(rec, is_orig); + HANDSHAKE -> handshake : Handshake(rec); APPLICATION_DATA -> app_data : ApplicationData(rec); V2_ERROR -> v2_error : V2Error(rec); V2_CLIENT_HELLO -> v2_client_hello : V2ClientHello(rec); @@ -81,7 +72,7 @@ type PlaintextRecord(rec: SSLRecord, is_orig: bool) = case rec.content_type of { default -> unknown_record : UnknownRecord(rec); }; -type SSLExtension = record { +type SSLExtension(rec: SSLRecord) = record { type: uint16; data_len: uint16; data: bytestring &length=data_len; @@ -156,10 +147,6 @@ enum AnalyzerState { } } - string orig_label(bool is_orig) - { - return string(is_orig ? "originator" :"responder"); - } double get_time_from_asn1(const ASN1_TIME * atime) { @@ -265,41 +252,21 @@ enum AnalyzerState { ###################################################################### enum HandshakeType { - HELLO_REQUEST = 0, - CLIENT_HELLO = 1, - SERVER_HELLO = 2, - CERTIFICATE = 11, - SERVER_KEY_EXCHANGE = 12, - CERTIFICATE_REQUEST = 13, - SERVER_HELLO_DONE = 14, - CERTIFICATE_VERIFY = 15, - CLIENT_KEY_EXCHANGE = 16, - FINISHED = 20, - CERTIFICATE_URL = 21, # RFC 3546 - CERTIFICATE_STATUS = 22, # RFC 3546 + HELLO_REQUEST = 0, + CLIENT_HELLO = 1, + SERVER_HELLO = 2, + SESSION_TICKET = 4, # RFC 5077 + CERTIFICATE = 11, + SERVER_KEY_EXCHANGE = 12, + CERTIFICATE_REQUEST = 13, + SERVER_HELLO_DONE = 14, + CERTIFICATE_VERIFY = 15, + CLIENT_KEY_EXCHANGE = 16, + FINISHED = 20, + CERTIFICATE_URL = 21, # RFC 3546 + CERTIFICATE_STATUS = 22, # RFC 3546 }; -%code{ - string handshake_type_label(int type) - { - switch ( type ) { - case HELLO_REQUEST: return string("HELLO_REQUEST"); - case CLIENT_HELLO: return string("CLIENT_HELLO"); - case SERVER_HELLO: return string("SERVER_HELLO"); - case CERTIFICATE: return string("CERTIFICATE"); - case SERVER_KEY_EXCHANGE: return string("SERVER_KEY_EXCHANGE"); - case CERTIFICATE_REQUEST: return string("CERTIFICATE_REQUEST"); - case SERVER_HELLO_DONE: return string("SERVER_HELLO_DONE"); - case CERTIFICATE_VERIFY: return string("CERTIFICATE_VERIFY"); - case CLIENT_KEY_EXCHANGE: return string("CLIENT_KEY_EXCHANGE"); - case FINISHED: return string("FINISHED"); - case CERTIFICATE_URL: return string("CERTIFICATE_URL"); - case CERTIFICATE_STATUS: return string("CERTIFICATE_STATUS"); - default: return string(fmt("UNKNOWN (%d)", type)); - } - } -%} - ###################################################################### # V3 Change Cipher Spec Protocol (7.1.) @@ -389,7 +356,7 @@ type ClientHello(rec: SSLRecord) = record { # This weirdness is to deal with the possible existence or absence # of the following fields. 
ext_len: uint16[] &until($element == 0 || $element != 0); - extensions : SSLExtension[] &until($input.length() == 0); + extensions : SSLExtension(rec)[] &until($input.length() == 0); } &let { state_changed : bool = $context.connection.transition(STATE_INITIAL, @@ -435,6 +402,10 @@ type ServerHello(rec: SSLRecord) = record { session_id : uint8[session_len]; cipher_suite : uint16[1]; compression_method : uint8; + # This weirdness is to deal with the possible existence or absence + # of the following fields. + ext_len: uint16[] &until($element == 0 || $element != 0); + extensions : SSLExtension(rec)[] &until($input.length() == 0); } &let { state_changed : bool = $context.connection.transition(STATE_CLIENT_HELLO_RCVD, @@ -457,8 +428,7 @@ type V2ServerHello(rec: SSLRecord) = record { cert_data : bytestring &length = cert_len; ciphers : uint24[ciph_len/3]; conn_id_data : bytestring &length = conn_id_len; -} #&length = 8 + cert_len + ciph_len + conn_id_len, -&let { +} &let { state_changed : bool = (session_id_hit > 0 ? $context.connection.transition(STATE_CLIENT_HELLO_RCVD, @@ -608,7 +578,7 @@ type CertificateVerify(rec: SSLRecord) = record { ###################################################################### # The finished messages are always sent after encryption is in effect, -# so we will not be able to read those message. +# so we will not be able to read those messages. type Finished(rec: SSLRecord) = record { cont : bytestring &restofdata &transient; } &let { @@ -620,13 +590,17 @@ type Finished(rec: SSLRecord) = record { $context.connection.lost_track(); }; +type SessionTicketHandshake(rec: SSLRecord) = record { + ticket_lifetime_hint: uint32; + data: bytestring &restofdata; +}; ###################################################################### # V3 Handshake Protocol (7.) 
###################################################################### type UnknownHandshake(hs: Handshake, is_orig: bool) = record { - cont : bytestring &restofdata &transient; + data : bytestring &restofdata &transient; } &let { state_changed : bool = $context.connection.lost_track(); }; @@ -636,19 +610,20 @@ type Handshake(rec: SSLRecord) = record { length : uint24; body : case msg_type of { - HELLO_REQUEST -> hello_request : HelloRequest(rec); - CLIENT_HELLO -> client_hello : ClientHello(rec); - SERVER_HELLO -> server_hello : ServerHello(rec); - CERTIFICATE -> certificate : Certificate(rec); - SERVER_KEY_EXCHANGE -> server_key_exchange : ServerKeyExchange(rec); - CERTIFICATE_REQUEST -> certificate_request : CertificateRequest(rec); - SERVER_HELLO_DONE -> server_hello_done : ServerHelloDone(rec); - CERTIFICATE_VERIFY -> certificate_verify : CertificateVerify(rec); - CLIENT_KEY_EXCHANGE -> client_key_exchange : ClientKeyExchange(rec); - FINISHED -> finished : Finished(rec); - CERTIFICATE_URL -> certificate_url : bytestring &restofdata &transient; - CERTIFICATE_STATUS -> certificate_status : bytestring &restofdata &transient; - default -> unknown_handshake : UnknownHandshake(this, rec.is_orig); + HELLO_REQUEST -> hello_request : HelloRequest(rec); + CLIENT_HELLO -> client_hello : ClientHello(rec); + SERVER_HELLO -> server_hello : ServerHello(rec); + SESSION_TICKET -> session_ticket : SessionTicketHandshake(rec); + CERTIFICATE -> certificate : Certificate(rec); + SERVER_KEY_EXCHANGE -> server_key_exchange : ServerKeyExchange(rec); + CERTIFICATE_REQUEST -> certificate_request : CertificateRequest(rec); + SERVER_HELLO_DONE -> server_hello_done : ServerHelloDone(rec); + CERTIFICATE_VERIFY -> certificate_verify : CertificateVerify(rec); + CLIENT_KEY_EXCHANGE -> client_key_exchange : ClientKeyExchange(rec); + FINISHED -> finished : Finished(rec); + CERTIFICATE_URL -> certificate_url : bytestring &restofdata &transient; + CERTIFICATE_STATUS -> certificate_status : bytestring &restofdata &transient; + default -> unknown_handshake : UnknownHandshake(this, rec.is_orig); } &length = to_int()(length); }; @@ -663,7 +638,7 @@ type UnknownRecord(rec: SSLRecord) = record { state_changed : bool = $context.connection.lost_track(); }; -type CiphertextRecord(rec: SSLRecord, is_orig: bool) = record { +type CiphertextRecord(rec: SSLRecord) = record { cont : bytestring &restofdata &transient; } &let { state_changed : bool = diff --git a/src/strings.bif b/src/strings.bif index 7e9885e296..dc5e064dc6 100644 --- a/src/strings.bif +++ b/src/strings.bif @@ -1,4 +1,5 @@ -# Definitions of Bro built-in functions related to strings. +##! Definitions of built-in functions related to string processing and +##! manipulation. %%{ // C segment @@ -10,6 +11,14 @@ using namespace std; %%} +## Concatenates all arguments into a single string. The function takes a +## variable number of arguments of type string and stitches them together. +## +## Returns: The concatenation of all (string) arguments. +## +## .. bro:see:: cat cat_sep cat_string_array cat_string_array_n +## fmt +## join_string_vec join_string_array function string_cat%(...%): string %{ int n = 0; @@ -73,18 +82,53 @@ BroString* cat_string_array_n(TableVal* tbl, int start, int end) } %%} +## Concatenates all elements in an array of strings. +## +## a: The :bro:type:`string_array` (``table[count] of string``). +## +## Returns: The concatenation of all elements in *a*. +## +## .. 
bro:see:: cat cat_sep string_cat cat_string_array_n +## fmt +## join_string_vec join_string_array function cat_string_array%(a: string_array%): string %{ TableVal* tbl = a->AsTableVal(); return new StringVal(cat_string_array_n(tbl, 1, a->AsTable()->Length())); %} +## Concatenates a specific range of elements in an array of strings. +## +## a: The :bro:type:`string_array` (``table[count] of string``). +## +## start: The array index of the first element of the range. +## +## end: The array index of the last element of the range. +## +## Returns: The concatenation of the range *[start, end]* in *a*. +## +## .. bro:see:: cat string_cat cat_string_array +## fmt +## join_string_vec join_string_array function cat_string_array_n%(a: string_array, start: count, end: count%): string %{ TableVal* tbl = a->AsTableVal(); return new StringVal(cat_string_array_n(tbl, start, end)); %} +## Joins all values in the given array of strings with a separator placed +## between each element. +## +## sep: The separator to place between each element. +## +## a: The :bro:type:`string_array` (``table[count] of string``). +## +## Returns: The concatenation of all elements in *a*, with *sep* placed +## between each element. +## +## .. bro:see:: cat cat_sep string_cat cat_string_array cat_string_array_n +## fmt +## join_string_vec function join_string_array%(sep: string, a: string_array%): string %{ vector vs; @@ -108,6 +152,45 @@ function join_string_array%(sep: string, a: string_array%): string return new StringVal(concatenate(vs)); %} +## Joins all values in the given vector of strings with a separator placed +## between each element. +## +## sep: The separator to place between each element. +## +## vec: The :bro:type:`string_vec` (``vector of string``). +## +## Returns: The concatenation of all elements in *vec*, with *sep* placed +## between each element. +## +## .. bro:see:: cat cat_sep string_cat cat_string_array cat_string_array_n +## fmt +## join_string_array +function join_string_vec%(vec: string_vec, sep: string%): string + %{ + ODesc d; + VectorVal *v = vec->AsVectorVal(); + + for ( unsigned i = 0; i < v->Size(); ++i ) + { + if ( i > 0 ) + d.Add(sep->CheckString(), 0); + + v->Lookup(i)->Describe(&d); + } + + BroString* s = new BroString(1, d.TakeBytes(), d.Len()); + s->SetUseFreeToDelete(true); + + return new StringVal(s); + %} + +## Sorts an array of strings. +## +## a: The :bro:type:`string_array` (``table[count] of string``). +## +## Returns: A sorted copy of *a*. +## +## .. bro:see:: sort function sort_string_array%(a: string_array%): string_array %{ TableVal* tbl = a->AsTableVal(); @@ -129,31 +212,29 @@ function sort_string_array%(a: string_array%): string_array } // sort(vs.begin(), vs.end(), Bstr_cmp); - TableVal* b = new TableVal(internal_type("string_array")->AsTableType()); + TableVal* b = new TableVal(string_array); vs_to_string_array(vs, b, 1, n); return b; %} - -function join_string_vec%(vec: string_vec, sep: string%): string - %{ - ODesc d; - VectorVal *v = vec->AsVectorVal(); - - for ( unsigned i = 0; i < v->Size(); ++i ) - { - if ( i > 0 ) - d.Add(sep->CheckString(), 0); - - v->Lookup(i+1)->Describe(&d); - } - - BroString* s = new BroString(1, d.TakeBytes(), d.Len()); - s->SetUseFreeToDelete(true); - - return new StringVal(s); - %} - +## Returns an edited version of a string that applies a special +## "backspace character" (usually ``\x08`` for backspace or ``\x7f`` for DEL). +## For example, ``edit("hello there", "e")`` returns ``"llo t"``. +## +## arg_s: The string to edit. 
+## +## arg_edit_char: A string of exactly one character that represents the +## "backspace character". If it is longer than one character Bro +## generates a run-time error and uses the first character in +## the string. +## +## Returns: An edited version of *arg_s* where *arg_edit_char* triggers the +## deletion of the last character. +## +## .. bro:see:: clean +## to_string_literal +## escape_string +## strip function edit%(arg_s: string, arg_edit_char: string%): string %{ if ( arg_edit_char->Len() != 1 ) @@ -184,11 +265,28 @@ function edit%(arg_s: string, arg_edit_char: string%): string return new StringVal(new BroString(1, byte_vec(new_s), ind)); %} +## Returns the number of characters (bytes) in the given string. The +## length computation includes any embedded NULs, and also a trailing NUL, +## if any (which is why the function isn't called ``strlen``; to remind +## the user that Bro strings can include NULs). +## +## s: The string to compute the length for. +## +## Returns: The number of characters in *s*. function byte_len%(s: string%): count %{ return new Val(s->Len(), TYPE_COUNT); %} +## Get a substring from a string, given a starting position and length. +## +## s: The string to obtain a substring from. +## +## start: The starting position of the substring in *s* +## +## n: The number of characters to extract, beginning at *start*. +## +## Returns: A substring of *s* of length *n* from position *start*. function sub_bytes%(s: string, start: count, n: int%): string %{ if ( start > 0 ) @@ -213,15 +311,9 @@ static int match_prefix(int s_len, const char* s, int t_len, const char* t) return 1; } -Val* do_split(StringVal* str_val, RE_Matcher* re, TableVal* other_sep, - int incl_sep, int max_num_sep) +Val* do_split(StringVal* str_val, RE_Matcher* re, int incl_sep, int max_num_sep) { - TableVal* a = new TableVal(internal_type("string_array")->AsTableType()); - ListVal* other_strings = 0; - - if ( other_sep && other_sep->Size() > 0 ) - other_strings = other_sep->ConvertToPureList(); - + TableVal* a = new TableVal(string_array); const u_char* s = str_val->Bytes(); int n = str_val->Len(); const u_char* end_of_s = s + n; @@ -275,9 +367,6 @@ Val* do_split(StringVal* str_val, RE_Matcher* re, TableVal* other_sep, reporter->InternalError("RegMatch in split goes beyond the string"); } - if ( other_strings ) - delete other_strings; - return a; } @@ -368,65 +457,151 @@ Val* do_sub(StringVal* str_val, RE_Matcher* re, StringVal* repl, int do_all) } %%} -# Similar to split in awk. - +## Splits a string into an array of strings according to a pattern. +## +## str: The string to split. +## +## re: The pattern describing the element separator in *str*. +## +## Returns: An array of strings where each element corresponds to a substring +## in *str* separated by *re*. +## +## .. bro:see:: split1 split_all split_n str_split +## +## .. note:: The returned table starts at index 1. Note that conceptually the +## return value is meant to be a vector and this might change in the +## future. +## function split%(str: string, re: pattern%): string_array %{ - return do_split(str, re, 0, 0, 0); + return do_split(str, re, 0, 0); %} -# split1(str, pattern, include_separator): table[count] of string -# -# Same as split, except that str is only split (if possible) at the -# earliest position and an array of two strings is returned. -# An array of one string is returned when str cannot be splitted. - +## Splits a string *once* into a two-element array of strings according to a +## pattern. 
This function is the same as :bro:id:`split`, but *str* is only +## split once (if possible) at the earliest position and an array of two strings +## is returned. +## +## str: The string to split. +## +## re: The pattern describing the separator to split *str* in two pieces. +## +## Returns: An array of strings with two elements in which the first represents +## the substring in *str* up to the first occurence of *re*, and the +## second everything after *re*. An array of one string is returned +## when *s* cannot be split. +## +## .. bro:see:: split split_all split_n str_split function split1%(str: string, re: pattern%): string_array %{ - return do_split(str, re, 0, 0, 1); + return do_split(str, re, 0, 1); %} -# Same as split, except that the array returned by split_all also -# includes parts of string that match the pattern in the array. - -# For example, split_all("a-b--cd", /(\-)+/) returns {"a", "-", "b", -# "--", "cd"}: odd-indexed elements do not match the pattern -# and even-indexed ones do. - +## Splits a string into an array of strings according to a pattern. This +## function is the same as :bro:id:`split`, except that the separators are +## returned as well. For example, ``split_all("a-b--cd", /(\-)+/)`` returns +## ``{"a", "-", "b", "--", "cd"}``: odd-indexed elements do not match the +## pattern and even-indexed ones do. +## +## str: The string to split. +## +## re: The pattern describing the element separator in *str*. +## +## Returns: An array of strings where each two successive elements correspond +## to a substring in *str* of the part not matching *re* (odd-indexed) +## and the part that matches *re* (even-indexed). +## +## .. bro:see:: split split1 split_n str_split function split_all%(str: string, re: pattern%): string_array %{ - return do_split(str, re, 0, 1, 0); + return do_split(str, re, 1, 0); %} +## Splits a string a given number of times into an array of strings according +## to a pattern. This function is similar to :bro:id:`split1` and +## :bro:id:`split_all`, but with customizable behavior with respect to +## including separators in the result and the number of times to split. +## +## str: The string to split. +## +## re: The pattern describing the element separator in *str*. +## +## incl_sep: A flag indicating whether to include the separator matches in the +## result (as in :bro:id:`split_all`). +## +## max_num_sep: The number of times to split *str*. +## +## Returns: An array of strings where, if *incl_sep* is true, each two +## successive elements correspond to a substring in *str* of the part +## not matching *re* (odd-indexed) and the part that matches *re* +## (even-indexed). +## +## .. bro:see:: split split1 split_all str_split function split_n%(str: string, re: pattern, incl_sep: bool, max_num_sep: count%): string_array %{ - return do_split(str, re, 0, incl_sep, max_num_sep); - %} - -function split_complete%(str: string, - re: pattern, other: string_set, - incl_sep: bool, max_num_sep: count%): string_array - %{ - return do_split(str, re, other->AsTableVal(), incl_sep, max_num_sep); + return do_split(str, re, incl_sep, max_num_sep); %} +## Substitutes a given replacement string for the first occurrence of a pattern +## in a given string. +## +## str: The string to perform the substitution in. +## +## re: The pattern being replaced with *repl*. +## +## repl: The string that replaces *re*. +## +## Returns: A copy of *str* with the first occurence of *re* replaced with +## *repl*. +## +## .. 
bro:see:: gsub subst_string function sub%(str: string, re: pattern, repl: string%): string %{ return do_sub(str, re, repl, 0); %} +## Substitutes a given replacement string for all occurrences of a pattern +## in a given string. +## +## str: The string to perform the substitution in. +## +## re: The pattern being replaced with *repl*. +## +## repl: The string that replaces *re*. +## +## Returns: A copy of *str* with all occurrences of *re* replaced with *repl*. +## +## .. bro:see:: sub subst_string function gsub%(str: string, re: pattern, repl: string%): string %{ return do_sub(str, re, repl, 1); %} + +## Lexicographically compares two strings. +## +## s1: The first string. +## +## s2: The second string. +## +## Returns: An integer greater than, equal to, or less than 0 according as +## *s1* is greater than, equal to, or less than *s2*. function strcmp%(s1: string, s2: string%): int %{ return new Val(Bstr_cmp(s1->AsString(), s2->AsString()), TYPE_INT); %} -# Returns 0 if $little is not found in $big. +## Locates the first occurrence of one string in another. +## +## big: The string to look in. +## +## little: The (smaller) string to find inside *big*. +## +## Returns: The location of *little* in *big*, or 0 if *little* is not found in +## *big*. +## +## .. bro:see:: find_all find_last function strstr%(big: string, little: string%): count %{ return new Val( @@ -434,8 +609,17 @@ function strstr%(big: string, little: string%): count TYPE_COUNT); %} -# Substitute each (non-overlapping) appearance of $from in $s to $to, -# and return the resulting string. +## Substitutes each (non-overlapping) appearance of a string in another. +## +## s: The string in which to perform the substitution. +## +## from: The string to look for which is replaced with *to*. +## +## to: The string that replaces all occurrences of *from* in *s*. +## +## Returns: A copy of *s* where each occurrence of *from* is replaced with *to*. +## +## .. bro:see:: sub gsub function subst_string%(s: string, from: string, to: string%): string %{ const int little_len = from->Len(); @@ -478,6 +662,15 @@ function subst_string%(s: string, from: string, to: string%): string return new StringVal(concatenate(vs)); %} +## Replaces all uppercase letters in a string with their lowercase counterpart. +## +## str: The string to convert to lowercase letters. +## +## Returns: A copy of the given string with the uppercase letters (as indicated +## by ``isascii`` and ``isupper``) folded to lowercase +## (via ``tolower``). +## +## .. bro:see:: to_upper is_ascii function to_lower%(str: string%): string %{ const u_char* s = str->Bytes(); @@ -498,6 +691,15 @@ function to_lower%(str: string%): string return new StringVal(new BroString(1, lower_s, n)); %} +## Replaces all lowercase letters in a string with their uppercase counterpart. +## +## str: The string to convert to uppercase letters. +## +## Returns: A copy of the given string with the lowercase letters (as indicated +## by ``isascii`` and ``islower``) folded to uppercase +## (via ``toupper``). +## +## .. bro:see:: to_lower is_ascii function to_upper%(str: string%): string %{ const u_char* s = str->Bytes(); @@ -518,18 +720,54 @@ function to_upper%(str: string%): string return new StringVal(new BroString(1, upper_s, n)); %} +## Replaces non-printable characters in a string with escaped sequences. 
The +## mappings are: +## +## - ``NUL`` to ``\0`` +## - ``DEL`` to ``^?`` +## - values <= 26 to ``^[A-Z]`` +## - values not in *[32, 126]* to ``%XX`` +## +## If the string does not yet have a trailing NUL, one is added. +## +## str: The string to escape. +## +## Returns: The escaped string. +## +## .. bro:see:: to_string_literal escape_string function clean%(str: string%): string %{ char* s = str->AsString()->Render(); return new StringVal(new BroString(1, byte_vec(s), strlen(s))); %} +## Replaces non-printable characters in a string with escaped sequences. The +## mappings are: +## +## - ``NUL`` to ``\0`` +## - ``DEL`` to ``^?`` +## - values <= 26 to ``^[A-Z]`` +## - values not in *[32, 126]* to ``%XX`` +## +## str: The string to escape. +## +## Returns: The escaped string. +## +## .. bro:see:: clean escape_string function to_string_literal%(str: string%): string %{ char* s = str->AsString()->Render(BroString::BRO_STRING_LITERAL); return new StringVal(new BroString(1, byte_vec(s), strlen(s))); %} +## Determines whether a given string contains only ASCII characters. +## +## str: The string to examine. +## +## Returns: False if any byte value of *str* is greater than 127, and true +## otherwise. +## +## .. bro:see:: to_upper to_lower function is_ascii%(str: string%): bool %{ int n = str->Len(); @@ -542,7 +780,14 @@ function is_ascii%(str: string%): bool return new Val(1, TYPE_BOOL); %} -# Make printable version of string. +## Creates a printable version of a string. This function is the same as +## :bro:id:`clean` except that non-printable characters are removed. +## +## s: The string to escape. +## +## Returns: The escaped string. +## +## .. bro:see:: clean to_string_literal function escape_string%(s: string%): string %{ char* escstr = s->AsString()->Render(); @@ -551,7 +796,12 @@ function escape_string%(s: string%): string return val; %} -# Returns an ASCII hexadecimal representation of a string. +## Returns an ASCII hexadecimal representation of a string. +## +## s: The string to convert to hex. +## +## Returns: A copy of *s* where each byte is replaced with the corresponding +## hex nibble. function string_to_ascii_hex%(s: string%): string %{ char* x = new char[s->Len() * 2 + 1]; @@ -563,8 +813,17 @@ function string_to_ascii_hex%(s: string%): string return new StringVal(new BroString(1, (u_char*) x, s->Len() * 2)); %} -function str_smith_waterman%(s1: string, s2: string, params: sw_params%) -: sw_substring_vec +## Uses the Smith-Waterman algorithm to find similar/overlapping substrings. +## See `Wikipedia `_. +## +## s1: The first string. +## +## s2: The second string. +## +## params: Parameters for the Smith-Waterman algorithm. +## +## Returns: The result of the Smith-Waterman algorithm calculation. +function str_smith_waterman%(s1: string, s2: string, params: sw_params%) : sw_substring_vec %{ SWParams sw_params(params->AsRecordVal()->Lookup(0)->AsCount(), SWVariant(params->AsRecordVal()->Lookup(1)->AsCount())); @@ -578,6 +837,16 @@ function str_smith_waterman%(s1: string, s2: string, params: sw_params%) return result; %} +## Splits a string into substrings with the help of an index vector of cutting +## points. +## +## s: The string to split. +## +## idx: The index vector (``vector of count``) with the cutting points. +## +## Returns: A vector of strings. +## +## .. 
bro:see:: split split1 split_all split_n function str_split%(s: string, idx: index_vec%): string_vec %{ vector* idx_v = idx->AsVector(); @@ -588,8 +857,8 @@ function str_split%(s: string, idx: index_vec%): string_vec indices[i] = (*idx_v)[i]->AsCount(); BroString::Vec* result = s->AsString()->Split(indices); - VectorVal* result_v = - new VectorVal(new VectorType(base_type(TYPE_STRING))); + VectorVal* result_v = new VectorVal( + internal_type("string_vec")->AsVectorType()); if ( result ) { @@ -606,6 +875,13 @@ function str_split%(s: string, idx: index_vec%): string_vec return result_v; %} +## Strips whitespace at both ends of a string. +## +## str: The string to strip the whitespace from. +## +## Returns: A copy of *str* with leading and trailing whitespace removed. +## +## .. bro:see:: sub gsub function strip%(str: string%): string %{ const u_char* s = str->Bytes(); @@ -629,6 +905,14 @@ function strip%(str: string%): string return new StringVal(new BroString(sp, (e - sp + 1), 1)); %} +## Generates a string of a given size and fills it with repetitions of a source +## string. +## +## len: The length of the output string. +## +## source: The string to concatenate repeatedly until *len* has been reached. +## +## Returns: A string of length *len* filled with *source*. function string_fill%(len: int, source: string%): string %{ const u_char* src = source->Bytes(); @@ -643,10 +927,15 @@ function string_fill%(len: int, source: string%): string return new StringVal(new BroString(1, byte_vec(dst), len)); %} -# Takes a string and escapes characters that would allow execution of commands -# at the shell level. Must be used before including strings in system() or -# similar calls. -# +## Takes a string and escapes characters that would allow execution of +## commands at the shell level. Must be used before including strings in +## :bro:id:`system` or similar calls. +## +## source: The string to escape. +## +## Returns: A shell-escaped version of *source*. +## +## .. bro:see:: system function str_shell_escape%(source: string%): string %{ unsigned j = 0; @@ -675,11 +964,18 @@ function str_shell_escape%(source: string%): string return new StringVal(new BroString(1, dst, j)); %} -# Returns all occurrences of the given pattern in the given string (an empty -# empty set if none). +## Finds all occurrences of a pattern in a string. +## +## str: The string to inspect. +## +## re: The pattern to look for in *str*. +## +## Returns: The set of strings in *str* that match *re*, or the empty set. +## +## .. bro:see: find_last strstr function find_all%(str: string, re: pattern%) : string_set %{ - TableVal* a = new TableVal(internal_type("string_set")->AsTableType()); + TableVal* a = new TableVal(string_set); const u_char* s = str->Bytes(); const u_char* e = s + str->Len(); @@ -697,11 +993,18 @@ function find_all%(str: string, re: pattern%) : string_set return a; %} -# Returns the last occurrence of the given pattern in the given string. -# If not found, returns an empty string. Note that this function returns -# the match that starts at the largest index in the string, which is -# not necessarily the longest match. For example, a pattern of /.*/ -# will return the final character in the string. +## Finds the last occurrence of a pattern in a string. This function returns +## the match that starts at the largest index in the string, which is not +## necessarily the longest match. For example, a pattern of ``/.*/`` will +## return the final character in the string. +## +## str: The string to inspect. 
+## +## re: The pattern to look for in *str*. +## +## Returns: The last string in *str* that matches *re*, or the empty string. +## +## .. bro:see: find_all strstr function find_last%(str: string, re: pattern%) : string %{ const u_char* s = str->Bytes(); @@ -717,10 +1020,16 @@ function find_last%(str: string, re: pattern%) : string return new StringVal(""); %} -# Returns a hex dump for given input data. The hex dump renders -# 16 bytes per line, with hex on the left and ASCII (where printable) -# on the right. Based on Netdude's hex editor code. -# +## Returns a hex dump for given input data. The hex dump renders 16 bytes per +## line, with hex on the left and ASCII (where printable) +## on the right. +## +## data_str: The string to dump in hex format. +## +## .. bro:see:: string_to_ascii_hex bytestring_to_hexstr +## +## .. note:: Based on Netdude's hex editor code. +## function hexdump%(data_str: string%) : string %{ diff --git a/src/threading/BasicThread.cc b/src/threading/BasicThread.cc new file mode 100644 index 0000000000..c708bb79ef --- /dev/null +++ b/src/threading/BasicThread.cc @@ -0,0 +1,208 @@ + +#include +#include + +#include "config.h" +#include "BasicThread.h" +#include "Manager.h" + +#ifdef HAVE_LINUX +#include +#endif + +using namespace threading; + +static const int STD_FMT_BUF_LEN = 2048; + +uint64_t BasicThread::thread_counter = 0; + +BasicThread::BasicThread() + { + started = false; + terminating = false; + killed = false; + pthread = 0; + + buf_len = STD_FMT_BUF_LEN; + buf = (char*) malloc(buf_len); + + strerr_buffer = 0; + + name = copy_string(fmt("thread-%" PRIu64, ++thread_counter)); + + thread_mgr->AddThread(this); + } + +BasicThread::~BasicThread() + { + if ( buf ) + free(buf); + + delete [] name; + delete [] strerr_buffer; + } + +void BasicThread::SetName(const char* arg_name) + { + delete [] name; + name = copy_string(arg_name); + } + +void BasicThread::SetOSName(const char* arg_name) + { + +#ifdef HAVE_LINUX + prctl(PR_SET_NAME, arg_name, 0, 0, 0); +#endif + +#ifdef __APPLE__ + pthread_setname_np(arg_name); +#endif + +#ifdef FREEBSD + pthread_set_name_np(pthread_self(), arg_name, arg_name); +#endif + } + +const char* BasicThread::Fmt(const char* format, ...) + { + if ( buf_len > 10 * STD_FMT_BUF_LEN ) + { + // Shrink back to normal. + buf = (char*) safe_realloc(buf, STD_FMT_BUF_LEN); + buf_len = STD_FMT_BUF_LEN; + } + + va_list al; + va_start(al, format); + int n = safe_vsnprintf(buf, buf_len, format, al); + va_end(al); + + if ( (unsigned int) n >= buf_len ) + { // Not enough room, grow the buffer. + buf_len = n + 32; + buf = (char*) safe_realloc(buf, buf_len); + + // Is it portable to restart? + va_start(al, format); + n = safe_vsnprintf(buf, buf_len, format, al); + va_end(al); + } + + return buf; + } + +const char* BasicThread::Strerror(int err) + { + if ( ! strerr_buffer ) + strerr_buffer = new char[256]; + + strerror_r(err, strerr_buffer, 256); + return strerr_buffer; + } + +void BasicThread::Start() + { + if ( started ) + return; + + started = true; + + int err = pthread_create(&pthread, 0, BasicThread::launcher, this); + if ( err != 0 ) + reporter->FatalError("Cannot create thread %s: %s", name, Strerror(err)); + + DBG_LOG(DBG_THREADING, "Started thread %s", name); + + OnStart(); + } + +void BasicThread::PrepareStop() + { + if ( ! started ) + return; + + if ( terminating ) + return; + + DBG_LOG(DBG_THREADING, "Preparing thread %s to terminate ...", name); + + OnPrepareStop(); + } + +void BasicThread::Stop() + { + if ( ! 
started ) + return; + + if ( terminating ) + return; + + DBG_LOG(DBG_THREADING, "Signaling thread %s to terminate ...", name); + + OnStop(); + + terminating = true; + } + +void BasicThread::Join() + { + if ( ! started ) + return; + + assert(terminating); + + DBG_LOG(DBG_THREADING, "Joining thread %s ...", name); + + if ( pthread && pthread_join(pthread, 0) != 0 ) + reporter->FatalError("Failure joining thread %s", name); + + DBG_LOG(DBG_THREADING, "Joined with thread %s", name); + + pthread = 0; + } + +void BasicThread::Kill() + { + // We don't *really* kill the thread here because that leads to race + // conditions. Instead we set a flag that parts of the the code need + // to check and get out of any loops they might be in. + terminating = true; + killed = true; + OnKill(); + } + +void BasicThread::Done() + { + DBG_LOG(DBG_THREADING, "Thread %s has finished", name); + + terminating = true; + killed = true; + } + +void* BasicThread::launcher(void *arg) + { + BasicThread* thread = (BasicThread *)arg; + + // Block signals in thread. We handle signals only in the main + // process. + sigset_t mask_set; + sigfillset(&mask_set); + + // Unblock the signals where according to POSIX the result is undefined if they are blocked + // in a thread and received by that thread. If those are not unblocked, threads will just + // hang when they crash without the user being notified. + sigdelset(&mask_set, SIGFPE); + sigdelset(&mask_set, SIGILL); + sigdelset(&mask_set, SIGSEGV); + sigdelset(&mask_set, SIGBUS); + int res = pthread_sigmask(SIG_BLOCK, &mask_set, 0); + assert(res == 0); + + // Run thread's main function. + thread->Run(); + + thread->Done(); + + return 0; + } diff --git a/src/threading/BasicThread.h b/src/threading/BasicThread.h new file mode 100644 index 0000000000..e17324e948 --- /dev/null +++ b/src/threading/BasicThread.h @@ -0,0 +1,218 @@ + +#ifndef THREADING_BASICTHREAD_H +#define THREADING_BASICTHREAD_H + +#include +#include + +#include "util.h" + +using namespace std; + +namespace threading { + +class Manager; + +/** + * Base class for all threads. + * + * This class encapsulates all the OS-level thread handling. All thread + * instances are automatically added to the threading::Manager for management. The + * manager also takes care of deleting them (which must not be done + * manually). + */ +class BasicThread +{ +public: + /** + * Creates a new thread object. Instantiating the object does however + * not yet start the actual OS thread, that requires calling Start(). + * + * Only Bro's main thread may create new thread instances. + * + * @param name A descriptive name for thread the thread. This may + * show up in messages to the user. + */ + BasicThread(); + + /** + * Returns a descriptive name for the thread. If not set via + * SetName(). If not set, a default name is choosen automatically. + * + * This method is safe to call from any thread. + */ + const char* Name() const { return name; } + + /** + * Sets a descriptive name for the thread. This should be a string + * that's useful in output presented to the user and uniquely + * identifies the thread. + * + * This method must be called only from main thread at initialization + * time. + */ + void SetName(const char* name); + + /** + * Set the name shown by the OS as the thread's description. Not + * supported on all OSs. + * + * Must be called only from the child thread. + */ + void SetOSName(const char* name); + + /** + * Starts the thread. Calling this methods will spawn a new OS thread + * executing Run(). 
Note that one can't restart a thread after a + * Stop(), doing so will be ignored. + * + * Only Bro's main thread must call this method. + */ + void Start(); + + /** + * Signals the thread to prepare for stopping. This must be called + * before Stop() and allows the thread to trigger shutting down + * without yet blocking for doing so. + * + * Calling this method has no effect if Start() hasn't been executed + * yet. + * + * Only Bro's main thread must call this method. + */ + void PrepareStop(); + + /** + * Signals the thread to stop. The method lets Terminating() now + * return true. It does however not force the thread to terminate. + * It's up to the Run() method to to query Terminating() and exit + * eventually. + * + * Calling this method has no effect if Start() hasn't been executed + * yet. + * + * Only Bro's main thread must call this method. + */ + void Stop(); + + /** + * Returns true if Stop() has been called. + * + * This method is safe to call from any thread. + */ + bool Terminating() const { return terminating; } + + /** + * Returns true if Kill() has been called. + * + * This method is safe to call from any thread. + */ + bool Killed() const { return killed; } + + /** + * A version of fmt() that the thread can safely use. + * + * This is safe to call from Run() but must not be used from any + * other thread than the current one. + */ + const char* Fmt(const char* format, ...); + + /** + * A version of strerror() that the thread can safely use. This is + * essentially a wrapper around strerror_r(). Note that it keeps a + * single buffer per thread internally so the result remains valid + * only until the next call. + */ + const char* Strerror(int err); + +protected: + friend class Manager; + + /** + * Entry point for the thread. This must be overridden by derived + * classes and will execute in a separate thread once Start() is + * called. The thread will not terminate before this method finishes. + * An implementation should regularly check Terminating() to see if + * exiting has been requested. + */ + virtual void Run() = 0; + + /** + * Executed with Start(). This is a hook into starting the thread. It + * will be called from Bro's main thread after the OS thread has been + * started. + */ + virtual void OnStart() {} + + /** + * Executed with PrepareStop() (and before OnStop()). This is a hook + * into preparing the thread for stopping. It will be called from + * Bro's main thread before the thread has been signaled to stop. + */ + virtual void OnPrepareStop() {} + + /** + * Executed with Stop() (and after OnPrepareStop()). This is a hook + * into stopping the thread. It will be called from Bro's main thread + * after the thread has been signaled to stop. + */ + virtual void OnStop() {} + + /** + * Executed with Kill(). This is a hook into killing the thread. + */ + virtual void OnKill() {} + + /** + * Destructor. This will be called by the manager. + * + * Only Bro's main thread may delete thread instances. + * + */ + virtual ~BasicThread(); + + /** + * Waits until the thread's Run() method has finished and then joins + * it. This is called from the threading::Manager. + */ + void Join(); + + /** + * Kills the thread immediately. One still needs to call Join() + * afterwards. + * + * This is called from the threading::Manager and safe to execute + * during a signal handler. + */ + void Kill(); + + /** Called by child thread's launcher when it's done processing. */ + void Done(); + +private: + // pthread entry function. 
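+	// Runs in the new pthread: it blocks most signals, invokes Run(), and
+	// then calls Done() when Run() returns (see launcher() in BasicThread.cc).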
+ static void* launcher(void *arg); + + const char* name; + pthread_t pthread; + bool started; // Set to to true once running. + bool terminating; // Set to to true to signal termination. + bool killed; // Set to true once forcefully killed. + + // Used as a semaphore to tell the pthread thread when it may + // terminate. + pthread_mutex_t terminate; + + // For implementing Fmt(). + char* buf; + unsigned int buf_len; + + // For implementating Strerror(). + char* strerr_buffer; + + static uint64_t thread_counter; +}; + +} + +#endif diff --git a/src/threading/Manager.cc b/src/threading/Manager.cc new file mode 100644 index 0000000000..cfc44596e1 --- /dev/null +++ b/src/threading/Manager.cc @@ -0,0 +1,171 @@ + +#include "Manager.h" +#include "NetVar.h" + +using namespace threading; + +Manager::Manager() + { + DBG_LOG(DBG_THREADING, "Creating thread manager ..."); + + did_process = true; + next_beat = 0; + terminating = false; + idle = true; + } + +Manager::~Manager() + { + if ( all_threads.size() ) + Terminate(); + } + +void Manager::Terminate() + { + DBG_LOG(DBG_THREADING, "Terminating thread manager ..."); + + terminating = true; + + // First process remaining thread output for the message threads. + do Process(); while ( did_process ); + + // Signal all to stop. + + for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ ) + (*i)->PrepareStop(); + + for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ ) + (*i)->Stop(); + + // Then join them all. + for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ ) + { + (*i)->Join(); + delete *i; + } + + all_threads.clear(); + msg_threads.clear(); + + idle = true; + closed = true; + terminating = false; + } + +void Manager::AddThread(BasicThread* thread) + { + DBG_LOG(DBG_THREADING, "Adding thread %s ...", thread->Name()); + all_threads.push_back(thread); + idle = false; + } + +void Manager::AddMsgThread(MsgThread* thread) + { + DBG_LOG(DBG_THREADING, "%s is a MsgThread ...", thread->Name()); + msg_threads.push_back(thread); + } + +void Manager::GetFds(int* read, int* write, int* except) + { + } + +double Manager::NextTimestamp(double* network_time) + { +// fprintf(stderr, "N %.6f %.6f did_process=%d next_next=%.6f\n", ::network_time, timer_mgr->Time(), (int)did_process, next_beat); + + if ( ::network_time && (did_process || ::network_time > next_beat || ! next_beat) ) + // If we had something to process last time (or out heartbeat + // is due or not set yet), we want to check for more asap. + return timer_mgr->Time(); + + for ( msg_thread_list::iterator i = msg_threads.begin(); i != msg_threads.end(); i++ ) + { + MsgThread* t = *i; + + if ( (*i)->MightHaveOut() && ! t->Killed() ) + return timer_mgr->Time(); + } + + return -1.0; + } + +void Manager::KillThreads() + { + DBG_LOG(DBG_THREADING, "Killing threads ..."); + + for ( all_thread_list::iterator i = all_threads.begin(); i != all_threads.end(); i++ ) + (*i)->Kill(); + } + +void Manager::KillThread(BasicThread* thread) + { + DBG_LOG(DBG_THREADING, "Killing thread %s ...", thread->Name()); + thread->Kill(); + } + +void Manager::Process() + { + bool do_beat = false; + + if ( network_time && (network_time > next_beat || ! 
next_beat) ) + { + do_beat = true; + next_beat = ::network_time + BifConst::Threading::heartbeat_interval; + } + + did_process = false; + + for ( msg_thread_list::iterator i = msg_threads.begin(); i != msg_threads.end(); i++ ) + { + MsgThread* t = *i; + + if ( do_beat ) + t->Heartbeat(); + + while ( t->HasOut() && ! t->Killed() ) + { + Message* msg = t->RetrieveOut(); + + if ( ! msg ) + { + assert(t->Killed()); + break; + } + + if ( msg->Process() ) + { + if ( network_time ) + did_process = true; + } + + else + { + reporter->Error("%s failed, terminating thread", msg->Name()); + t->Stop(); + } + + delete msg; + } + } + +// fprintf(stderr, "P %.6f %.6f do_beat=%d did_process=%d next_next=%.6f\n", network_time, timer_mgr->Time(), do_beat, (int)did_process, next_beat); + } + +const threading::Manager::msg_stats_list& threading::Manager::GetMsgThreadStats() + { + stats.clear(); + + for ( msg_thread_list::iterator i = msg_threads.begin(); i != msg_threads.end(); i++ ) + { + MsgThread* t = *i; + + MsgThread::Stats s; + t->GetStats(&s); + + stats.push_back(std::make_pair(t->Name(),s)); + } + + return stats; + } + + diff --git a/src/threading/Manager.h b/src/threading/Manager.h new file mode 100644 index 0000000000..b46a06a46e --- /dev/null +++ b/src/threading/Manager.h @@ -0,0 +1,151 @@ + +#ifndef THREADING_MANAGER_H +#define THREADING_MANAGER_H + +#include + +#include "IOSource.h" + +#include "BasicThread.h" +#include "MsgThread.h" + +namespace threading { + +/** + * The thread manager coordinates all child threads. Once a BasicThread is + * instantitated, it gets addedd to the manager, which will delete it later + * once it has terminated. + * + * In addition to basic threads, the manager also provides additional + * functionality specific to MsgThread instances. In particular, it polls + * their outgoing message queue on a regular basis and feeds data sent into + * the rest of Bro. It also triggers the regular heartbeats. + */ +class Manager : public IOSource +{ +public: + /** + * Constructor. Only a single instance of the manager must be + * created. + */ + Manager(); + + /** + * Destructir. + */ + ~Manager(); + + /** + * Terminates the manager's processor. The method signals all threads + * to terminates and wait for them to do so. It then joins them and + * returns to the caller. Afterwards, no more thread instances may be + * created. + */ + void Terminate(); + + /** + * Returns True if we are currently in Terminate() waiting for + * threads to exit. + */ + bool Terminating() const { return terminating; } + + typedef std::list > msg_stats_list; + + /** + * Returns statistics from all current MsgThread instances. + * + * @return A list of statistics, with one entry for each MsgThread. + * Each entry is a tuple of thread name and statistics. The list + * reference remains valid until the next call to this method (or + * termination of the manager). + */ + const msg_stats_list& GetMsgThreadStats(); + + /** + * Returns the number of currently active threads. This counts all + * threads that are not yet joined, includingt any potentially in + * Terminating() state. + */ + int NumThreads() const { return all_threads.size(); } + + /** Manually triggers processing of any thread input. This can be useful + * if the main thread is waiting for a specific message from a child. + * Usually, though, one should avoid using it. + */ + void ForceProcessing() { Process(); } + + /** + * Signals a specific threads to terminate immediately. 
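+	 * (The thread is not joined here; as with BasicThread::Kill(), joining
+	 * still happens later, in Terminate().)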
+ */ + void KillThread(BasicThread* thread); + + /** + * Signals all threads to terminate immediately. + */ + void KillThreads(); + +protected: + friend class BasicThread; + friend class MsgThread; + + /** + * Registers a new basic thread with the manager. This is + * automatically called by the thread's constructor. + * + * @param thread The thread. + */ + void AddThread(BasicThread* thread); + + /** + * Registers a new message thread with the manager. This is + * automatically called by the thread's constructor. This must be + * called \a in \a addition to AddThread(BasicThread* thread). The + * MsgThread constructor makes sure to do so. + * + * @param thread The thread. + */ + void AddMsgThread(MsgThread* thread); + + /** + * Part of the IOSource interface. + */ + virtual void GetFds(int* read, int* write, int* except); + + /** + * Part of the IOSource interface. + */ + virtual double NextTimestamp(double* network_time); + + /** + * Part of the IOSource interface. + */ + virtual void Process(); + + /** + * Part of the IOSource interface. + */ + virtual const char* Tag() { return "threading::Manager"; } + +private: + typedef std::list all_thread_list; + all_thread_list all_threads; + + typedef std::list msg_thread_list; + msg_thread_list msg_threads; + + bool did_process; // True if the last Process() found some work to do. + double next_beat; // Timestamp when the next heartbeat will be sent. + bool terminating; // True if we are in Terminate(). + + msg_stats_list stats; +}; + +} + +/** + * A singleton instance of the thread manager. All methods must only be + * called from Bro's main thread. + */ +extern threading::Manager* thread_mgr; + +#endif diff --git a/src/threading/MsgThread.cc b/src/threading/MsgThread.cc new file mode 100644 index 0000000000..6c63c5a287 --- /dev/null +++ b/src/threading/MsgThread.cc @@ -0,0 +1,390 @@ + +#include "DebugLogger.h" + +#include "MsgThread.h" +#include "Manager.h" + +#include +#include + +using namespace threading; + +namespace threading { + +////// Messages. + +// Signals child thread to shutdown operation. +class FinishMessage : public InputMessage +{ +public: + FinishMessage(MsgThread* thread, double network_time) : InputMessage("Finish", thread), + network_time(network_time) { } + + virtual bool Process() { + bool result = Object()->OnFinish(network_time); + Object()->Finished(); + return result; + } + +private: + double network_time; +}; + +/// Sends a heartbeat to the child thread. +class HeartbeatMessage : public InputMessage +{ +public: + HeartbeatMessage(MsgThread* thread, double arg_network_time, double arg_current_time) + : InputMessage("Heartbeat", thread) + { network_time = arg_network_time; current_time = arg_current_time; } + + virtual bool Process() { + Object()->HeartbeatInChild(); + return Object()->OnHeartbeat(network_time, current_time); + } + +private: + double network_time; + double current_time; +}; + +// A message from the child to be passed on to the Reporter. +class ReporterMessage : public OutputMessage +{ +public: + enum Type { + INFO, WARNING, ERROR, FATAL_ERROR, FATAL_ERROR_WITH_CORE, + INTERNAL_WARNING, INTERNAL_ERROR + }; + + ReporterMessage(Type arg_type, MsgThread* thread, const char* arg_msg) + : OutputMessage("ReporterMessage", thread) + { type = arg_type; msg = copy_string(arg_msg); } + + ~ReporterMessage() { delete [] msg; } + + virtual bool Process(); + +private: + const char* msg; + Type type; +}; + +// A message from the the child to the main process, requesting suicide. 
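+// MsgThread::Run() sends this when a message's Process() fails; on the main
+// thread, Process() here reacts by calling Manager::KillThread().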
+class KillMeMessage : public OutputMessage +{ +public: + KillMeMessage(MsgThread* thread) + : OutputMessage("ReporterMessage", thread) {} + + virtual bool Process() { thread_mgr->KillThread(Object()); return true; } +}; + +#ifdef DEBUG +// A debug message from the child to be passed on to the DebugLogger. +class DebugMessage : public OutputMessage +{ +public: + DebugMessage(DebugStream arg_stream, MsgThread* thread, const char* arg_msg) + : OutputMessage("DebugMessage", thread) + { stream = arg_stream; msg = copy_string(arg_msg); } + + virtual ~DebugMessage() { delete [] msg; } + + virtual bool Process() + { + debug_logger.Log(stream, "%s: %s", Object()->Name(), msg); + return true; + } +private: + const char* msg; + DebugStream stream; +}; +#endif + +} + +////// Methods. + +Message::~Message() + { + delete [] name; + } + +bool ReporterMessage::Process() + { + switch ( type ) { + + case INFO: + reporter->Info("%s: %s", Object()->Name(), msg); + break; + + case WARNING: + reporter->Warning("%s: %s", Object()->Name(), msg); + break; + + case ERROR: + reporter->Error("%s: %s", Object()->Name(), msg); + break; + + case FATAL_ERROR: + reporter->FatalError("%s: %s", Object()->Name(), msg); + break; + + case FATAL_ERROR_WITH_CORE: + reporter->FatalErrorWithCore("%s: %s", Object()->Name(), msg); + break; + + case INTERNAL_WARNING: + reporter->InternalWarning("%s: %s", Object()->Name(), msg); + break; + + case INTERNAL_ERROR : + reporter->InternalError("%s: %s", Object()->Name(), msg); + break; + + default: + reporter->InternalError("unknown ReporterMessage type %d", type); + } + + return true; + } + +MsgThread::MsgThread() : BasicThread(), queue_in(this, 0), queue_out(0, this) + { + cnt_sent_in = cnt_sent_out = 0; + finished = false; + failed = false; + thread_mgr->AddMsgThread(this); + } + +// Set by Bro's main signal handler. +extern int signal_val; + +void MsgThread::OnPrepareStop() + { + if ( finished || Killed() ) + return; + + // Signal thread to terminate and wait until it has acknowledged. + SendIn(new FinishMessage(this, network_time), true); + } + +void MsgThread::OnStop() + { + int signal_count = 0; + int old_signal_val = signal_val; + signal_val = 0; + + int cnt = 0; + uint64_t last_size = 0; + uint64_t cur_size = 0; + + while ( ! (finished || Killed() ) ) + { + // Terminate if we get another kill signal. + if ( signal_val == SIGTERM || signal_val == SIGINT ) + { + ++signal_count; + + if ( signal_count == 1 ) + { + // Abort all threads here so that we won't hang next + // on another one. + fprintf(stderr, "received signal while waiting for thread %s, aborting all ...\n", Name()); + thread_mgr->KillThreads(); + } + else + { + // More than one signal. Abort processing + // right away. on another one. + fprintf(stderr, "received another signal while waiting for thread %s, aborting processing\n", Name()); + exit(1); + } + + signal_val = 0; + } + + queue_in.WakeUp(); + + usleep(1000); + } + + signal_val = old_signal_val; + } + +void MsgThread::OnKill() + { + // Send a message to unblock the reader if its currently waiting for + // input. This is just an optimization to make it terminate more + // quickly, even without the message it will eventually time out. 
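+	// (Queue::Get() waits in pthread_cond_timedwait() with a roughly
+	// five-second timeout, which is what bounds the worst case here.)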
+ queue_in.WakeUp(); + } + +void MsgThread::Heartbeat() + { + SendIn(new HeartbeatMessage(this, network_time, current_time())); + } + +void MsgThread::HeartbeatInChild() + { + string n = Fmt("bro: %s (%" PRIu64 "/%" PRIu64 ")", Name(), + cnt_sent_in - queue_in.Size(), + cnt_sent_out - queue_out.Size()); + + SetOSName(n.c_str()); + } + +void MsgThread::Finished() + { + // This is thread-safe "enough", we're the only one ever writing + // there. + finished = true; + } + +void MsgThread::Info(const char* msg) + { + SendOut(new ReporterMessage(ReporterMessage::INFO, this, msg)); + } + +void MsgThread::Warning(const char* msg) + { + SendOut(new ReporterMessage(ReporterMessage::WARNING, this, msg)); + } + +void MsgThread::Error(const char* msg) + { + SendOut(new ReporterMessage(ReporterMessage::ERROR, this, msg)); + } + +void MsgThread::FatalError(const char* msg) + { + SendOut(new ReporterMessage(ReporterMessage::FATAL_ERROR, this, msg)); + } + +void MsgThread::FatalErrorWithCore(const char* msg) + { + SendOut(new ReporterMessage(ReporterMessage::FATAL_ERROR_WITH_CORE, this, msg)); + } + +void MsgThread::InternalWarning(const char* msg) + { + SendOut(new ReporterMessage(ReporterMessage::INTERNAL_WARNING, this, msg)); + } + +void MsgThread::InternalError(const char* msg) + { + // This one aborts immediately. + fprintf(stderr, "internal error in thread: %s\n", msg); + abort(); + } + +#ifdef DEBUG + +void MsgThread::Debug(DebugStream stream, const char* msg) + { + SendOut(new DebugMessage(stream, this, msg)); + } + +#endif + +void MsgThread::SendIn(BasicInputMessage* msg, bool force) + { + if ( Terminating() && ! force ) + { + delete msg; + return; + } + + DBG_LOG(DBG_THREADING, "Sending '%s' to %s ...", msg->Name(), Name()); + + queue_in.Put(msg); + ++cnt_sent_in; + } + + +void MsgThread::SendOut(BasicOutputMessage* msg, bool force) + { + if ( Terminating() && ! force ) + { + delete msg; + return; + } + + queue_out.Put(msg); + + ++cnt_sent_out; + } + +BasicOutputMessage* MsgThread::RetrieveOut() + { + BasicOutputMessage* msg = queue_out.Get(); + if ( ! msg ) + return 0; + + DBG_LOG(DBG_THREADING, "Retrieved '%s' from %s", msg->Name(), Name()); + + return msg; + } + +BasicInputMessage* MsgThread::RetrieveIn() + { + BasicInputMessage* msg = queue_in.Get(); + + if ( ! msg ) + return 0; + +#ifdef DEBUG + string s = Fmt("Retrieved '%s' in %s", msg->Name(), Name()); + Debug(DBG_THREADING, s.c_str()); +#endif + + return msg; + } + +void MsgThread::Run() + { + while ( ! (finished || Killed() ) ) + { + BasicInputMessage* msg = RetrieveIn(); + + if ( ! msg ) + continue; + + bool result = msg->Process(); + + delete msg; + + if ( ! result ) + { + Error("terminating thread"); + + // This will eventually kill this thread, but only + // after all other outgoing messages (in particular + // error messages have been processed by then main + // thread). + SendOut(new KillMeMessage(this)); + failed = true; + } + } + + // In case we haven't send the finish method yet, do it now. Reading + // global network_time here should be fine, it isn't changing + // anymore. + if ( ! finished && ! 
Killed() ) + { + OnFinish(network_time); + Finished(); + } + } + +void MsgThread::GetStats(Stats* stats) + { + stats->sent_in = cnt_sent_in; + stats->sent_out = cnt_sent_out; + stats->pending_in = queue_in.Size(); + stats->pending_out = queue_out.Size(); + queue_in.GetStats(&stats->queue_in_stats); + queue_out.GetStats(&stats->queue_out_stats); + } + diff --git a/src/threading/MsgThread.h b/src/threading/MsgThread.h new file mode 100644 index 0000000000..e3e7c8500f --- /dev/null +++ b/src/threading/MsgThread.h @@ -0,0 +1,434 @@ + +#ifndef THREADING_MSGTHREAD_H +#define THREADING_MSGTHREAD_H + +#include + +#include "DebugLogger.h" + +#include "BasicThread.h" +#include "Queue.h" + +namespace threading { + +class BasicInputMessage; +class BasicOutputMessage; +class HeartbeatMessage; + +/** + * A specialized thread that provides bi-directional message passing between + * Bro's main thread and the child thread. Messages are instances of + * BasicInputMessage and BasicOutputMessage for message sent \a to the child + * thread and received \a from the child thread, respectively. + * + * The thread's Run() method implements main loop that processes incoming + * messages until Terminating() indicates that execution should stop. Once + * that happens, the thread stops accepting any new messages, finishes + * processes all remaining ones still in the queue, and then exits. + */ +class MsgThread : public BasicThread +{ +public: + /** + * Constructor. It automatically registers the thread with the + * threading::Manager. + * + * Only Bro's main thread may instantiate a new thread. + */ + MsgThread(); + + /** + * Sends a message to the child thread. The message will be proceesed + * once the thread has retrieved it from its incoming queue. + * + * Only the main thread may call this method. + * + * @param msg The message. + */ + void SendIn(BasicInputMessage* msg) { return SendIn(msg, false); } + + /** + * Sends a message from the child thread to the main thread. + * + * Only the child thread may call this method. + * + * @param msg The mesasge. + */ + void SendOut(BasicOutputMessage* msg) { return SendOut(msg, false); } + + /** + * Reports an informational message from the child thread. The main + * thread will pass this to the Reporter once received. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void Info(const char* msg); + + /** + * Reports a warning from the child thread that may indicate a + * problem. The main thread will pass this to the Reporter once + * received. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void Warning(const char* msg); + + /** + * Reports a non-fatal error from the child thread. The main thread + * will pass this to the Reporter once received. Processing proceeds + * normally after the error has been reported. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void Error(const char* msg); + + /** + * Reports a fatal error from the child thread. The main thread will + * pass this to the Reporter once received. Bro will terminate after + * the message has been reported. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void FatalError(const char* msg); + + /** + * Reports a fatal error from the child thread. 
The main thread will + * pass this to the Reporter once received. Bro will terminate with a + * core dump after the message has been reported. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void FatalErrorWithCore(const char* msg); + + /** + * Reports a potential internal problem from the child thread. The + * main thread will pass this to the Reporter once received. Bro will + * continue normally. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void InternalWarning(const char* msg); + + /** + * Reports an internal program error from the child thread. The main + * thread will pass this to the Reporter once received. Bro will + * terminate with a core dump after the message has been reported. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void InternalError(const char* msg); + +#ifdef DEBUG + /** + * Records a debug message for the given stream from the child + * thread. The main thread will pass this to the DebugLogger once + * received. + * + * Only the child thread may call this method. + * + * @param msg The message. It will be prefixed with the thread's name. + */ + void Debug(DebugStream stream, const char* msg); +#endif + + /** + * Statistics about inter-thread communication. + */ + struct Stats + { + uint64_t sent_in; //! Number of messages sent to the child thread. + uint64_t sent_out; //! Number of messages sent from the child thread to the main thread + uint64_t pending_in; //! Number of messages sent to the child but not yet processed. + uint64_t pending_out; //! Number of messages sent from the child but not yet processed by the main thread. + + /// Statistics from our queues. + Queue::Stats queue_in_stats; + Queue::Stats queue_out_stats; + }; + + /** + * Returns statistics about the inter-thread communication. + * + * @param stats A pointer to a structure that will be filled with + * current numbers. + */ + void GetStats(Stats* stats); + +protected: + friend class Manager; + friend class HeartbeatMessage; + friend class FinishMessage; + friend class FinishedMessage; + + /** + * Pops a message sent by the child from the child-to-main queue. + * + * This is method is called regularly by the threading::Manager. + * + * @return The message, wth ownership passed to caller. Returns null + * if the queue is empty. + */ + BasicOutputMessage* RetrieveOut(); + + /** + * Triggers a heartbeat message being sent to the client thread. + * + * This is method is called regularly by the threading::Manager. + * + * Can be overriden in derived classed to hook into the heart beat + * sending, but must call the parent implementation. Note that this + * method is always called by the main thread and must not access + * data of the child thread directly. Implement OnHeartbeat() if you + * want to do something on the child-side. + */ + virtual void Heartbeat(); + + /** Internal heartbeat processing. Called from child. + */ + void HeartbeatInChild(); + + /** Returns true if a child command has reported a failure. In that case, we'll + * be in the process of killing this thread and no further activity + * should carried out. To be called only from this child thread. + */ + bool Failed() const { return failed; } + + /** + * Regulatly triggered for execution in the child thread. 
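+	 * The main thread triggers it roughly once per
+	 * BifConst::Threading::heartbeat_interval via a HeartbeatMessage.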
+ * + * network_time: The network_time when the heartbeat was trigger by + * the main thread. + * + * current_time: Wall clock when the heartbeat was trigger by the + * main thread. + */ + virtual bool OnHeartbeat(double network_time, double current_time) = 0; + + /** Triggered for execution in the child thread just before shutting threads down. + * The child thread should finish its operations. + */ + virtual bool OnFinish(double network_time) = 0; + + /** + * Overriden from BasicThread. + * + */ + virtual void Run(); + virtual void OnStop(); + virtual void OnPrepareStop(); + virtual void OnKill(); + +private: + /** + * Pops a message sent by the main thread from the main-to-chold + * queue. + * + * Must only be called by the child thread. + * + * @return The message, wth ownership passed to caller. Returns null + * if the queue is empty. + */ + BasicInputMessage* RetrieveIn(); + + /** + * Queues a message for the child. + * + * Must only be called by the main thread. + * + * @param msg The message. + * + * @param force: If true, the message will be queued even when we're already + * Terminating(). Normally, the message would be discarded in that + * case. + */ + void SendIn(BasicInputMessage* msg, bool force); + + /** + * Queues a message for the main thread. + * + * Must only be called by the child thread. + * + * @param msg The message. + * + * @param force: If true, the message will be queued even when we're already + * Terminating(). Normally, the message would be discarded in that + * case. + */ + void SendOut(BasicOutputMessage* msg, bool force); + + /** + * Returns true if there's at least one message pending for the child + * thread. + */ + bool HasIn() { return queue_in.Ready(); } + + /** + * Returns true if there's at least one message pending for the main + * thread. + */ + bool HasOut() { return queue_out.Ready(); } + + /** + * Returns true if there might be at least one message pending for + * the main thread. This function may occasionally return a value not + * indicating the actual state, but won't do so very often. + */ + bool MightHaveOut() { return queue_out.MaybeReady(); } + + /** Flags that the child process has finished processing. Called from child. + */ + void Finished(); + + Queue queue_in; + Queue queue_out; + + uint64_t cnt_sent_in; // Counts message sent to child. + uint64_t cnt_sent_out; // Counts message sent by child. + + bool finished; // Set to true by Finished message. + bool failed; // Set to true when a command failed. +}; + +/** + * Base class for all message between Bro's main process and a MsgThread. + */ +class Message +{ +public: + /** + * Destructor. + */ + virtual ~Message(); + + /** + * Returns a descriptive name for the message's general type. This is + * what's passed into the constructor and used mainly for debugging + * purposes. + */ + const char* Name() const { return name; } + + /** + * Callback that must be overriden for processing a message. + */ + virtual bool Process() = 0; // Thread will be terminated if returngin false. + +protected: + /** + * Constructor. + * + * @param arg_name A descriptive name for the type of message. Used + * mainly for debugging purposes. + */ + Message(const char* arg_name) + { name = copy_string(arg_name); } + +private: + const char* name; +}; + +/** + * Base class for messages sent from Bro's main thread to a child MsgThread. + */ +class BasicInputMessage : public Message +{ +protected: + /** + * Constructor. + * + * @param name A descriptive name for the type of message. 
Used + * mainly for debugging purposes. + */ + BasicInputMessage(const char* name) : Message(name) {} +}; + +/** + * Base class for messages sent from a child MsgThread to Bro's main thread. + */ +class BasicOutputMessage : public Message +{ +protected: + /** + * Constructor. + * + * @param name A descriptive name for the type of message. Used + * mainly for debugging purposes. + */ + BasicOutputMessage(const char* name) : Message(name) {} +}; + +/** + * A paremeterized InputMessage that stores a pointer to an argument object. + * Normally, the objects will be used from the Process() callback. + */ +template +class InputMessage : public BasicInputMessage +{ +public: + /** + * Returns the objects passed to the constructor. + */ + O* Object() const { return object; } + +protected: + /** + * Constructor. + * + * @param name: A descriptive name for the type of message. Used + * mainly for debugging purposes. + * + * @param arg_object: An object to store with the message. + */ + InputMessage(const char* name, O* arg_object) : BasicInputMessage(name) + { object = arg_object; } + +private: + O* object; +}; + +/** + * A paremeterized OututMessage that stores a pointer to an argument object. + * Normally, the objects will be used from the Process() callback. + */ +template +class OutputMessage : public BasicOutputMessage +{ +public: + /** + * Returns the objects passed to the constructor. + */ + O* Object() const { return object; } + +protected: + /** + * Constructor. + * + * @param name A descriptive name for the type of message. Used + * mainly for debugging purposes. + * + * @param arg_object An object to store with the message. + */ + OutputMessage(const char* name, O* arg_object) : BasicOutputMessage(name) + { object = arg_object; } + +private: + O* object; +}; + +} + + +#endif diff --git a/src/threading/Queue.h b/src/threading/Queue.h new file mode 100644 index 0000000000..0ddcda29f7 --- /dev/null +++ b/src/threading/Queue.h @@ -0,0 +1,268 @@ +#ifndef THREADING_QUEUE_H +#define THREADING_QUEUE_H + +#include +#include +#include +#include +#include + +#include "Reporter.h" +#include "BasicThread.h" + +#undef Queue // Defined elsewhere unfortunately. + +namespace threading { + +/** + * A thread-safe single-reader single-writer queue. + * + * The implementation uses multiple queues and reads/writes in rotary fashion + * in an attempt to limit contention. + * + * All Queue instances must be instantiated by Bro's main thread. + * + * TODO: Unclear how critical performance is for this qeueue. We could like;y + * optimize it further if helpful. + */ +template +class Queue +{ +public: + /** + * Constructor. + * + * reader, writer: The corresponding threads. This is for checking + * whether they have terminated so that we can abort I/O opeations. + * Can be left null for the main thread. + */ + Queue(BasicThread* arg_reader, BasicThread* arg_writer); + + /** + * Destructor. + */ + ~Queue(); + + /** + * Retrieves one element. This may block for a little while of no + * input is available and eventually return with a null element if + * nothing shows up. + */ + T Get(); + + /** + * Queues one element. + */ + void Put(T data); + + /** + * Returns true if the next Get() operation will succeed. + */ + bool Ready(); + + /** + * Returns true if the next Get() operation might succeed. + * This function may occasionally return a value not + * indicating the actual state, but won't do so very often. 
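+	 * (It compares read_ptr and write_ptr without taking any lock, which is
+	 * why the answer is only approximate.)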
+ */ + bool MaybeReady() { return ( ( read_ptr - write_ptr) != 0 ); } + + /** Wake up the reader if it's currently blocked for input. This is + primarily to give it a chance to check termination quickly. + **/ + void WakeUp(); + + /** + * Returns the number of queued items not yet retrieved. + */ + uint64_t Size(); + + /** + * Statistics about inter-thread communication. + */ + struct Stats + { + uint64_t num_reads; //! Number of messages read from the queue. + uint64_t num_writes; //! Number of messages written to the queue. + }; + + /** + * Returns statistics about the queue's usage. + * + * @param stats A pointer to a structure that will be filled with + * current numbers. */ + void GetStats(Stats* stats); + +private: + static const int NUM_QUEUES = 8; + + pthread_mutex_t mutex[NUM_QUEUES]; // Mutex protected shared accesses. + pthread_cond_t has_data[NUM_QUEUES]; // Signals when data becomes available + std::queue messages[NUM_QUEUES]; // Actually holds the queued messages + + int read_ptr; // Where the next operation will read from + int write_ptr; // Where the next operation will write to + + BasicThread* reader; + BasicThread* writer; + + // Statistics. + uint64_t num_reads; + uint64_t num_writes; +}; + +inline static void safe_lock(pthread_mutex_t* mutex) + { + if ( pthread_mutex_lock(mutex) != 0 ) + reporter->FatalErrorWithCore("cannot lock mutex"); + } + +inline static void safe_unlock(pthread_mutex_t* mutex) + { + if ( pthread_mutex_unlock(mutex) != 0 ) + reporter->FatalErrorWithCore("cannot unlock mutex"); + } + +template +inline Queue::Queue(BasicThread* arg_reader, BasicThread* arg_writer) + { + read_ptr = 0; + write_ptr = 0; + num_reads = num_writes = 0; + reader = arg_reader; + writer = arg_writer; + + for( int i = 0; i < NUM_QUEUES; ++i ) + { + if ( pthread_cond_init(&has_data[i], 0) != 0 ) + reporter->FatalError("cannot init queue condition variable"); + + if ( pthread_mutex_init(&mutex[i], 0) != 0 ) + reporter->FatalError("cannot init queue mutex"); + } + } + +template +inline Queue::~Queue() + { + for( int i = 0; i < NUM_QUEUES; ++i ) + { + pthread_cond_destroy(&has_data[i]); + pthread_mutex_destroy(&mutex[i]); + } + } + +template +inline T Queue::Get() + { + if ( (reader && reader->Killed()) || (writer && writer->Killed()) ) + return 0; + + safe_lock(&mutex[read_ptr]); + + int old_read_ptr = read_ptr; + + if ( messages[read_ptr].empty() ) + { + struct timespec ts; + ts.tv_sec = time(0) + 5; + ts.tv_nsec = 0; + + pthread_cond_timedwait(&has_data[read_ptr], &mutex[read_ptr], &ts); + safe_unlock(&mutex[read_ptr]); + return 0; + } + + T data = messages[read_ptr].front(); + messages[read_ptr].pop(); + + read_ptr = (read_ptr + 1) % NUM_QUEUES; + ++num_reads; + + safe_unlock(&mutex[old_read_ptr]); + + return data; + } + +template +inline void Queue::Put(T data) + { + safe_lock(&mutex[write_ptr]); + + int old_write_ptr = write_ptr; + + bool need_signal = messages[write_ptr].empty(); + + messages[write_ptr].push(data); + + if ( need_signal ) + pthread_cond_signal(&has_data[write_ptr]); + + write_ptr = (write_ptr + 1) % NUM_QUEUES; + ++num_writes; + + safe_unlock(&mutex[old_write_ptr]); + } + + +template +inline bool Queue::Ready() + { + safe_lock(&mutex[read_ptr]); + + bool ret = (messages[read_ptr].size()); + + safe_unlock(&mutex[read_ptr]); + + return ret; + } + +template +inline uint64_t Queue::Size() + { + // Need to lock all queues. 
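+	// (Holding all NUM_QUEUES locks briefly blocks both the reader and the
+	// writer while the per-queue sizes are summed.)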
+ for ( int i = 0; i < NUM_QUEUES; i++ ) + safe_lock(&mutex[i]); + + uint64_t size = 0; + + for ( int i = 0; i < NUM_QUEUES; i++ ) + size += messages[i].size(); + + for ( int i = 0; i < NUM_QUEUES; i++ ) + safe_unlock(&mutex[i]); + + return size; + } + +template +inline void Queue::GetStats(Stats* stats) + { + // To be safe, we lock all queues. That's probably unnecessary, but + // doesn't really hurt. + for ( int i = 0; i < NUM_QUEUES; i++ ) + safe_lock(&mutex[i]); + + stats->num_reads = num_reads; + stats->num_writes = num_writes; + + for ( int i = 0; i < NUM_QUEUES; i++ ) + safe_unlock(&mutex[i]); + } + +template +inline void Queue::WakeUp() + { + for ( int i = 0; i < NUM_QUEUES; i++ ) + { + safe_lock(&mutex[i]); + pthread_cond_signal(&has_data[i]); + safe_unlock(&mutex[i]); + } + } + +} + + +#endif + diff --git a/src/threading/SerialTypes.cc b/src/threading/SerialTypes.cc new file mode 100644 index 0000000000..c0e26ccb32 --- /dev/null +++ b/src/threading/SerialTypes.cc @@ -0,0 +1,406 @@ +// See the file "COPYING" in the main distribution directory for copyright. + + +#include "SerialTypes.h" +#include "../RemoteSerializer.h" + + +using namespace threading; + +bool Field::Read(SerializationFormat* fmt) + { + int t; + int st; + string tmp_name; + bool have_2nd; + + if ( ! fmt->Read(&have_2nd, "have_2nd") ) + return false; + + if ( have_2nd ) + { + string tmp_secondary_name; + if ( ! fmt->Read(&tmp_secondary_name, "secondary_name") ) + return false; + + secondary_name = copy_string(tmp_secondary_name.c_str()); + } + else + secondary_name = 0; + + bool success = (fmt->Read(&tmp_name, "name") + && fmt->Read(&t, "type") + && fmt->Read(&st, "subtype") + && fmt->Read(&optional, "optional")); + + if ( ! success ) + return false; + + name = copy_string(tmp_name.c_str()); + + type = (TypeTag) t; + subtype = (TypeTag) st; + + return true; + } + +bool Field::Write(SerializationFormat* fmt) const + { + assert(name); + + if ( secondary_name ) + { + if ( ! (fmt->Write(true, "have_2nd") + && fmt->Write(secondary_name, "secondary_name")) ) + return false; + } + else + if ( ! fmt->Write(false, "have_2nd") ) + return false; + + return (fmt->Write(name, "name") + && fmt->Write((int)type, "type") + && fmt->Write((int)subtype, "subtype") + && fmt->Write(optional, "optional")); + } + +string Field::TypeName() const + { + string n = type_name(type); + + if ( (type == TYPE_TABLE) || (type == TYPE_VECTOR) ) + { + n += "["; + n += type_name(subtype); + n += "]"; + } + + return n; + } + +Value::~Value() + { + if ( (type == TYPE_ENUM || type == TYPE_STRING || type == TYPE_FILE || type == TYPE_FUNC) + && present ) + delete [] val.string_val.data; + + if ( type == TYPE_TABLE && present ) + { + for ( int i = 0; i < val.set_val.size; i++ ) + delete val.set_val.vals[i]; + + delete [] val.set_val.vals; + } + + if ( type == TYPE_VECTOR && present ) + { + for ( int i = 0; i < val.vector_val.size; i++ ) + delete val.vector_val.vals[i]; + + delete [] val.vector_val.vals; + } + } + +bool Value::IsCompatibleType(BroType* t, bool atomic_only) + { + if ( ! t ) + return false; + + switch ( t->Tag() ) { + case TYPE_BOOL: + case TYPE_INT: + case TYPE_COUNT: + case TYPE_COUNTER: + case TYPE_PORT: + case TYPE_SUBNET: + case TYPE_ADDR: + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_FUNC: + return true; + + case TYPE_RECORD: + return ! atomic_only; + + case TYPE_TABLE: + { + if ( atomic_only ) + return false; + + if ( ! 
t->IsSet() ) + return false; + + return IsCompatibleType(t->AsSetType()->Indices()->PureType(), true); + } + + case TYPE_VECTOR: + { + if ( atomic_only ) + return false; + + return IsCompatibleType(t->AsVectorType()->YieldType(), true); + } + + default: + return false; + } + + return false; + } + +bool Value::Read(SerializationFormat* fmt) + { + int ty; + + if ( ! (fmt->Read(&ty, "type") && fmt->Read(&present, "present")) ) + return false; + + type = (TypeTag)(ty); + + if ( ! present ) + return true; + + switch ( type ) { + case TYPE_BOOL: + case TYPE_INT: + return fmt->Read(&val.int_val, "int"); + + case TYPE_COUNT: + case TYPE_COUNTER: + return fmt->Read(&val.uint_val, "uint"); + + case TYPE_PORT: { + int proto; + if ( ! (fmt->Read(&val.port_val.port, "port") && fmt->Read(&proto, "proto") ) ) { + return false; + } + + switch ( proto ) { + case 0: + val.port_val.proto = TRANSPORT_UNKNOWN; + break; + case 1: + val.port_val.proto = TRANSPORT_TCP; + break; + case 2: + val.port_val.proto = TRANSPORT_UDP; + break; + case 3: + val.port_val.proto = TRANSPORT_ICMP; + break; + default: + return false; + } + + return true; + } + + case TYPE_ADDR: + { + char family; + + if ( ! fmt->Read(&family, "addr-family") ) + return false; + + switch ( family ) { + case 4: + val.addr_val.family = IPv4; + return fmt->Read(&val.addr_val.in.in4, "addr-in4"); + + case 6: + val.addr_val.family = IPv6; + return fmt->Read(&val.addr_val.in.in6, "addr-in6"); + + } + + // Can't be reached. + abort(); + } + + case TYPE_SUBNET: + { + char length; + char family; + + if ( ! (fmt->Read(&length, "subnet-len") && fmt->Read(&family, "subnet-family")) ) + return false; + + switch ( family ) { + case 4: + val.subnet_val.length = (uint8_t)length; + val.subnet_val.prefix.family = IPv4; + return fmt->Read(&val.subnet_val.prefix.in.in4, "subnet-in4"); + + case 6: + val.subnet_val.length = (uint8_t)length; + val.subnet_val.prefix.family = IPv6; + return fmt->Read(&val.subnet_val.prefix.in.in6, "subnet-in6"); + + } + + // Can't be reached. + abort(); + } + + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + return fmt->Read(&val.double_val, "double"); + + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_FUNC: + return fmt->Read(&val.string_val.data, &val.string_val.length, "string"); + + case TYPE_TABLE: + { + if ( ! fmt->Read(&val.set_val.size, "set_size") ) + return false; + + val.set_val.vals = new Value* [val.set_val.size]; + + for ( int i = 0; i < val.set_val.size; ++i ) + { + val.set_val.vals[i] = new Value; + + if ( ! val.set_val.vals[i]->Read(fmt) ) + return false; + } + + return true; + } + + case TYPE_VECTOR: + { + if ( ! fmt->Read(&val.vector_val.size, "vector_size") ) + return false; + + val.vector_val.vals = new Value* [val.vector_val.size]; + + for ( int i = 0; i < val.vector_val.size; ++i ) + { + val.vector_val.vals[i] = new Value; + + if ( ! val.vector_val.vals[i]->Read(fmt) ) + return false; + } + + return true; + } + + default: + reporter->InternalError("unsupported type %s in Value::Write", type_name(type)); + } + + return false; + } + +bool Value::Write(SerializationFormat* fmt) const + { + if ( ! (fmt->Write((int)type, "type") && + fmt->Write(present, "present")) ) + return false; + + if ( ! 
present ) + return true; + + switch ( type ) { + case TYPE_BOOL: + case TYPE_INT: + return fmt->Write(val.int_val, "int"); + + case TYPE_COUNT: + case TYPE_COUNTER: + return fmt->Write(val.uint_val, "uint"); + + case TYPE_PORT: + return fmt->Write(val.port_val.port, "port") && fmt->Write(val.port_val.proto, "proto"); + + case TYPE_ADDR: + { + switch ( val.addr_val.family ) { + case IPv4: + return fmt->Write((char)4, "addr-family") + && fmt->Write(val.addr_val.in.in4, "addr-in4"); + + case IPv6: + return fmt->Write((char)6, "addr-family") + && fmt->Write(val.addr_val.in.in6, "addr-in6"); + break; + } + + // Can't be reached. + abort(); + } + + case TYPE_SUBNET: + { + if ( ! fmt->Write((char)val.subnet_val.length, "subnet-length") ) + return false; + + switch ( val.subnet_val.prefix.family ) { + case IPv4: + return fmt->Write((char)4, "subnet-family") + && fmt->Write(val.subnet_val.prefix.in.in4, "subnet-in4"); + + case IPv6: + return fmt->Write((char)6, "subnet-family") + && fmt->Write(val.subnet_val.prefix.in.in6, "subnet-in6"); + break; + } + + // Can't be reached. + abort(); + } + + case TYPE_DOUBLE: + case TYPE_TIME: + case TYPE_INTERVAL: + return fmt->Write(val.double_val, "double"); + + case TYPE_ENUM: + case TYPE_STRING: + case TYPE_FILE: + case TYPE_FUNC: + return fmt->Write(val.string_val.data, val.string_val.length, "string"); + + case TYPE_TABLE: + { + if ( ! fmt->Write(val.set_val.size, "set_size") ) + return false; + + for ( int i = 0; i < val.set_val.size; ++i ) + { + if ( ! val.set_val.vals[i]->Write(fmt) ) + return false; + } + + return true; + } + + case TYPE_VECTOR: + { + if ( ! fmt->Write(val.vector_val.size, "vector_size") ) + return false; + + for ( int i = 0; i < val.vector_val.size; ++i ) + { + if ( ! val.vector_val.vals[i]->Write(fmt) ) + return false; + } + + return true; + } + + default: + reporter->InternalError("unsupported type %s in Value::REad", type_name(type)); + } + + return false; + } + diff --git a/src/threading/SerialTypes.h b/src/threading/SerialTypes.h new file mode 100644 index 0000000000..60aee2411e --- /dev/null +++ b/src/threading/SerialTypes.h @@ -0,0 +1,178 @@ + +#ifndef THREADING_SERIALIZATIONTYPES_H +#define THREADING_SERIALIZATIONTYPES_H + +#include +#include +#include + +#include "Type.h" +#include "net_util.h" + +using namespace std; + +class SerializationFormat; +class RemoteSerializer; + +namespace threading { + +/** + * Definition of a log file, i.e., one column of a log stream. + */ +struct Field { + const char* name; //! Name of the field. + //! Needed by input framework. Port fields have two names (one for the + //! port, one for the type), and this specifies the secondary name. + const char* secondary_name; + TypeTag type; //! Type of the field. + TypeTag subtype; //! Inner type for sets. + bool optional; //! True if field is optional. + + /** + * Constructor. + */ + Field(const char* name, const char* secondary_name, TypeTag type, TypeTag subtype, bool optional) + : name(name ? copy_string(name) : 0), + secondary_name(secondary_name ? copy_string(secondary_name) : 0), + type(type), subtype(subtype), optional(optional) { } + + /** + * Copy constructor. + */ + Field(const Field& other) + : name(other.name ? copy_string(other.name) : 0), + secondary_name(other.secondary_name ? copy_string(other.secondary_name) : 0), + type(other.type), subtype(other.subtype), optional(other.optional) { } + + ~Field() + { + delete [] name; + delete [] secondary_name; + } + + /** + * Unserializes a field. 
+ * + * @param fmt The serialization format to use. The format handles + * low-level I/O. + * + * @return False if an error occurred. + */ + bool Read(SerializationFormat* fmt); + + /** + * Serializes a field. + * + * @param fmt The serialization format to use. The format handles + * low-level I/O. + * + * @return False if an error occurred. + */ + bool Write(SerializationFormat* fmt) const; + + /** + * Returns a textual description of the field's type. This method is + * thread-safe. + */ + string TypeName() const; + +private: + friend class ::RemoteSerializer; + + // Force usage of constructor above. + Field() {}; +}; + +/** + * Definition of a log value, i.e., an entry logged by a stream. + * + * This struct essentially represents a serialization of a Val instance (for + * those Vals supported). + */ +struct Value { + TypeTag type; //! The type of the value. + bool present; //! False for optional record fields that are not set. + + struct set_t { bro_int_t size; Value** vals; }; + typedef set_t vec_t; + struct port_t { bro_uint_t port; TransportProto proto; }; + + struct addr_t { + IPFamily family; + union { + struct in_addr in4; + struct in6_addr in6; + } in; + }; + + struct subnet_t { addr_t prefix; uint8_t length; }; + + /** + * This union is a subset of BroValUnion, including only the types we + * can log directly. See IsCompatibleType(). + */ + union _val { + bro_int_t int_val; + bro_uint_t uint_val; + port_t port_val; + double double_val; + set_t set_val; + vec_t vector_val; + addr_t addr_val; + subnet_t subnet_val; + + struct { + char* data; + int length; + } string_val; + } val; + + /** + * Constructor. + * + * arg_type: The type of the value. + * + * arg_present: False if the value represents an optional record field + * that is not set. + */ + Value(TypeTag arg_type = TYPE_ERROR, bool arg_present = true) + : type(arg_type), present(arg_present) {} + + /** + * Destructor. + */ + ~Value(); + + /** + * Unserializes a value. + * + * @param fmt The serialization format to use. The format handles low-level I/O. + * + * @return False if an error occurred. + */ + bool Read(SerializationFormat* fmt); + + /** + * Serializes a value. + * + * @param fmt The serialization format to use. The format handles + * low-level I/O. + * + * @return False if an error occurred. + */ + bool Write(SerializationFormat* fmt) const; + + /** + * Returns true if the type can be represented by a Value. If + * `atomic_only` is true, will not permit composite types. This + * method is thread-safe. */ + static bool IsCompatibleType(BroType* t, bool atomic_only=false); + +private: + friend class ::IPAddr; + Value(const Value& other) { } // Disabled. +}; + +} + +#endif /* THREADING_SERIALIZATIONTYPES_H */ diff --git a/src/types.bif b/src/types.bif index da6bd6e031..92cc8db551 100644 --- a/src/types.bif +++ b/src/types.bif @@ -1,3 +1,4 @@ +##! Declaration of various types that the Bro core uses internally. enum dce_rpc_ptype %{ DCE_RPC_REQUEST, @@ -134,8 +135,8 @@ enum createmode_t %{ EXCLUSIVE = 2, %} -# Decleare record types that we want to access from the even engine. These are -# defined in bro.init. +# Declare record types that we want to access from the event engine. These are +# defined in init-bare.bro. 
type info_t: record; type fattr_t: record; type diropargs_t: record; @@ -161,10 +162,44 @@ enum Writer %{ WRITER_DEFAULT, WRITER_NONE, WRITER_ASCII, + WRITER_DATASERIES, + WRITER_ELASTICSEARCH, %} enum ID %{ Unknown, %} +module Tunnel; +enum Type %{ + NONE, + IP, + AYIYA, + TEREDO, + SOCKS, +%} + +type EncapsulatingConn: record; + +module Input; + +enum Reader %{ + READER_DEFAULT, + READER_ASCII, + READER_RAW, + READER_BENCHMARK, +%} + +enum Event %{ + EVENT_NEW, + EVENT_CHANGED, + EVENT_REMOVED, +%} + +enum Mode %{ + MANUAL = 0, + REREAD = 1, + STREAM = 2, +%} + module GLOBAL; diff --git a/src/util-config.h.in b/src/util-config.h.in new file mode 100644 index 0000000000..c50c4e6b48 --- /dev/null +++ b/src/util-config.h.in @@ -0,0 +1,3 @@ +#define BRO_SCRIPT_INSTALL_PATH "@BRO_SCRIPT_INSTALL_PATH@" +#define BRO_SCRIPT_SOURCE_PATH "@BRO_SCRIPT_SOURCE_PATH@" +#define BRO_BUILD_PATH "@CMAKE_CURRENT_BINARY_DIR@" diff --git a/src/util.cc b/src/util.cc index f81eff8f22..76ca7729df 100644 --- a/src/util.cc +++ b/src/util.cc @@ -1,6 +1,7 @@ // See the file "COPYING" in the main distribution directory for copyright. #include "config.h" +#include "util-config.h" #ifdef TIME_WITH_SYS_TIME # include @@ -27,6 +28,8 @@ #include #include #include +#include +#include #ifdef HAVE_MALLINFO # include @@ -35,14 +38,85 @@ #include "input.h" #include "util.h" #include "Obj.h" -#include "md5.h" #include "Val.h" #include "NetVar.h" #include "Net.h" #include "Reporter.h" +/** + * Takes a string, unescapes all characters that are escaped as hex codes + * (\x##) and turns them into the equivalent ASCII codes. Returns a string + * containing no escaped values. + * + * @param str string to unescape + * @return A std::string without escaped characters. + */ +std::string get_unescaped_string(const std::string& arg_str) + { + const char* str = arg_str.c_str(); + char* buf = new char [arg_str.length() + 1]; // it will at most have the same length as str. + char* bufpos = buf; + size_t pos = 0; + + while ( pos < arg_str.length() ) + { + if ( str[pos] == '\\' && str[pos+1] == 'x' && + isxdigit(str[pos+2]) && isxdigit(str[pos+3]) ) + { + *bufpos = (decode_hex(str[pos+2]) << 4) + + decode_hex(str[pos+3]); + + pos += 4; + bufpos++; + } + else + *bufpos++ = str[pos++]; + } + + *bufpos = 0; + string outstring(buf, bufpos - buf); + + delete [] buf; + + return outstring; + } + +/** + * Takes a string, escapes characters into equivalent hex codes (\x##), and + * returns a string containing all escaped values. + * + * @param str string to escape + * @param escape_all If true, all characters are escaped. If false, only + * characters that are either whitespace or not printable in ASCII are + * escaped. + * @return A std::string containing a list of escaped hex values of the form + * \x## */ +std::string get_escaped_string(const std::string& str, bool escape_all) + { + char tbuf[16]; + string esc = ""; + + for ( size_t i = 0; i < str.length(); ++i ) + { + char c = str[i]; + + if ( escape_all || isspace(c) || ! isascii(c) || ! isprint(c) ) + { + snprintf(tbuf, sizeof(tbuf), "\\x%02x", str[i]); + esc += tbuf; + } + else + esc += c; + } + + return esc; + } + char* copy_string(const char* s) { + if ( ! s ) + return 0; + char* c = new char[strlen(s)+1]; strcpy(c, s); return c; @@ -344,7 +418,10 @@ template int atoi_n(int len, const char* s, const char** end, int base, // Instantiate the ones we need. 
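As an aside, the two escaping helpers defined above are inverses of each other for the \x## form. The following is a minimal usage sketch rather than part of the patch; it assumes only the functions this patch adds to util.cc/util.h, and the literal values and the assert are purely illustrative:

    #include <cassert>
    #include <string>
    #include "util.h"   // declares get_escaped_string() / get_unescaped_string() as of this patch

    static void escape_roundtrip_example()
        {
        // With escape_all=false, only whitespace and non-printable bytes are
        // escaped, so the space becomes "\x20".
        std::string esc = get_escaped_string("AB C", false);   // yields "AB\x20C"

        // Unescaping turns every \x## sequence back into the raw byte.
        std::string raw = get_unescaped_string(esc);
        assert(raw == "AB C");
        }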
template int atoi_n(int len, const char* s, const char** end, int base, int& result); +template int atoi_n(int len, const char* s, const char** end, int base, uint16_t& result); +template int atoi_n(int len, const char* s, const char** end, int base, uint32_t& result); template int atoi_n(int len, const char* s, const char** end, int base, int64_t& result); +template int atoi_n(int len, const char* s, const char** end, int base, uint64_t& result); char* uitoa_n(uint64 value, char* str, int n, int base, const char* prefix) { @@ -514,24 +591,6 @@ bool is_dir(const char* path) return S_ISDIR(st.st_mode); } -void hash_md5(size_t size, const unsigned char* bytes, unsigned char digest[16]) - { - md5_state_s h; - md5_init(&h); - md5_append(&h, bytes, size); - md5_finish(&h, digest); - } - -const char* md5_digest_print(const unsigned char digest[16]) - { - static char digest_print[256]; - - for ( int i = 0; i < 16; ++i ) - snprintf(digest_print + i * 2, 3, "%02x", digest[i]); - - return digest_print; - } - int hmac_key_set = 0; uint8 shared_hmac_md5_key[16]; @@ -540,12 +599,12 @@ void hmac_md5(size_t size, const unsigned char* bytes, unsigned char digest[16]) if ( ! hmac_key_set ) reporter->InternalError("HMAC-MD5 invoked before the HMAC key is set"); - hash_md5(size, bytes, digest); + MD5(bytes, size, digest); for ( int i = 0; i < 16; ++i ) digest[i] ^= shared_hmac_md5_key[i]; - hash_md5(16, digest, digest); + MD5(digest, 16, digest); } static bool read_random_seeds(const char* read_file, uint32* seed, @@ -616,18 +675,27 @@ static bool write_random_seeds(const char* write_file, uint32 seed, static bool bro_rand_determistic = false; static unsigned int bro_rand_state = 0; -static void bro_srand(unsigned int seed, bool deterministic) +static void bro_srandom(unsigned int seed, bool deterministic) { bro_rand_state = seed; bro_rand_determistic = deterministic; - srand(seed); + srandom(seed); + } + +void bro_srandom(unsigned int seed) + { + if ( bro_rand_determistic ) + bro_rand_state = seed; + else + srandom(seed); } void init_random_seed(uint32 seed, const char* read_file, const char* write_file) { static const int bufsiz = 16; uint32 buf[bufsiz]; + memset(buf, 0, sizeof(buf)); int pos = 0; // accumulates entropy bool seeds_done = false; @@ -658,7 +726,7 @@ void init_random_seed(uint32 seed, const char* read_file, const char* write_file { int amt = read(fd, buf + pos, sizeof(uint32) * (bufsiz - pos)); - close(fd); + safe_close(fd); if ( amt > 0 ) pos += amt / sizeof(uint32); @@ -688,11 +756,11 @@ void init_random_seed(uint32 seed, const char* read_file, const char* write_file seeds_done = true; } - bro_srand(seed, seeds_done); + bro_srandom(seed, seeds_done); if ( ! hmac_key_set ) { - hash_md5(sizeof(buf), (u_char*) buf, shared_hmac_md5_key); + MD5((const u_char*) buf, sizeof(buf), shared_hmac_md5_key); hmac_key_set = 1; } @@ -1065,18 +1133,8 @@ const char* log_file_name(const char* tag) return fmt("%s.%s", tag, (env ? env : "log")); } -double calc_next_rotate(double interval, const char* rotate_base_time) +double parse_rotate_base_time(const char* rotate_base_time) { - double current = network_time; - - // Calculate start of day. 
- time_t teatime = time_t(current); - - struct tm t; - t = *localtime(&teatime); - t.tm_hour = t.tm_min = t.tm_sec = 0; - double startofday = mktime(&t); - double base = -1; if ( rotate_base_time && rotate_base_time[0] != '\0' ) @@ -1088,6 +1146,19 @@ double calc_next_rotate(double interval, const char* rotate_base_time) base = t.tm_min * 60 + t.tm_hour * 60 * 60; } + return base; + } + +double calc_next_rotate(double current, double interval, double base) + { + // Calculate start of day. + time_t teatime = time_t(current); + + struct tm t; + t = *localtime_r(&teatime, &t); + t.tm_hour = t.tm_min = t.tm_sec = 0; + double startofday = mktime(&t); + if ( base < 0 ) // No base time given. To get nice timestamps, we round // the time up to the next multiple of the rotation interval. @@ -1137,7 +1208,7 @@ void _set_processing_status(const char* status) len -= n; } - close(fd); + safe_close(fd); errno = old_errno; } @@ -1262,9 +1333,64 @@ uint64 calculate_unique_id(size_t pool) return HashKey::HashBytes(&(uid_pool[pool].key), sizeof(uid_pool[pool].key)); } +bool safe_write(int fd, const char* data, int len) + { + while ( len > 0 ) + { + int n = write(fd, data, len); + + if ( n < 0 ) + { + if ( errno == EINTR ) + continue; + + fprintf(stderr, "safe_write error: %d\n", errno); + abort(); + + return false; + } + + data += n; + len -= n; + } + + return true; + } + +void safe_close(int fd) + { + /* + * Failure cases of close(2) are ... + * EBADF: Indicative of programming logic error that needs to be fixed, we + * should always be attempting to close a valid file descriptor. + * EINTR: Ignore signal interruptions, most implementations will actually + * reclaim the open descriptor and POSIX standard doesn't leave many + * options by declaring the state of the descriptor as "unspecified". + * Attempting to inspect actual state or re-attempt close() is not + * thread safe. + * EIO: Again the state of descriptor is "unspecified", but don't recover + * from an I/O error, safe_write() won't either. + * + * Note that we don't use the reporter here to allow use from different threads. + */ + if ( close(fd) < 0 && errno != EINTR ) + { + char buf[128]; + strerror_r(errno, buf, sizeof(buf)); + fprintf(stderr, "safe_close error %d: %s\n", errno, buf); + abort(); + } + } + void out_of_memory(const char* where) { - reporter->FatalError("out of memory in %s.\n", where); + fprintf(stderr, "out of memory in %s.\n", where); + + if ( reporter ) + // Guess that might fail here if memory is really tight ... + reporter->FatalError("out of memory in %s.\n", where); + + abort(); } void get_memory_usage(unsigned int* total, unsigned int* malloced) diff --git a/src/util.h b/src/util.h index 6e76b0f61f..e69167abce 100644 --- a/src/util.h +++ b/src/util.h @@ -3,6 +3,13 @@ #ifndef util_h #define util_h +// Expose C99 functionality from inttypes.h, which would otherwise not be +// available in C++. +#define __STDC_FORMAT_MACROS +#define __STDC_LIMIT_MACROS +#include +#include + #include #include #include @@ -10,11 +17,6 @@ #include #include "config.h" -// Expose C99 functionality from inttypes.h, which would otherwise not be -// available in C++. 
-#define __STDC_FORMAT_MACROS -#include - #if __STDC__ #define myattribute __attribute__ #else @@ -37,7 +39,7 @@ #endif -#ifdef USE_PERFTOOLS +#ifdef USE_PERFTOOLS_DEBUG #include #include extern HeapLeakChecker* heap_checker; @@ -89,6 +91,9 @@ void delete_each(T* t) delete *it; } +std::string get_unescaped_string(const std::string& str); +std::string get_escaped_string(const std::string& str, bool escape_all); + extern char* copy_string(const char* s); extern int streq(const char* s1, const char* s2); @@ -131,19 +136,15 @@ extern const char* fmt_access_time(double time); extern bool ensure_dir(const char *dirname); // Returns true if path exists and is a directory. -bool is_dir(const char* path); +bool is_dir(const char* path); extern uint8 shared_hmac_md5_key[16]; -extern void hash_md5(size_t size, const unsigned char* bytes, - unsigned char digest[16]); extern int hmac_key_set; extern unsigned char shared_hmac_md5_key[16]; extern void hmac_md5(size_t size, const unsigned char* bytes, unsigned char digest[16]); -extern const char* md5_digest_print(const unsigned char digest[16]); - // Initializes RNGs for bro_random() and MD5 usage. If seed is given, then // it is used (to provide determinism). If load_file is given, the seeds // (both random & MD5) are loaded from that file. This takes precedence @@ -161,6 +162,10 @@ extern bool have_random_seed(); // predictable PRNG. long int bro_random(); +// Calls the system srandom() function with the given seed if not running +// in deterministic mode, else it updates the state of the deterministic PRNG. +void bro_srandom(unsigned int seed); + extern uint64 rand64bit(); // Each event source that may generate events gets an internally unique ID. @@ -195,9 +200,22 @@ extern FILE* rotate_file(const char* name, RecordVal* rotate_info); // This mimics the script-level function with the same name. const char* log_file_name(const char* tag); +// Parse a time string of the form "HH:MM" (as used for the rotation base +// time) into a double representing the number of seconds. Returns -1 if the +// string cannot be parsed. The function's result is intended to be used with +// calc_next_rotate(). +// +// This function is not thread-safe. +double parse_rotate_base_time(const char* rotate_base_time); + // Calculate the duration until the next time a file is to be rotated, based -// on the given rotate_interval and rotate_base_time. +// on the given rotate_interval and rotate_base_time. 'current' is the +// current time to be used as the base, 'rotate_interval' the rotation interval, +// and 'base' the value returned by parse_rotate_base_time(). If that function +// returned -1, that's fine; calc_next_rotate() handles that case. +// +// This function is thread-safe. +double calc_next_rotate(double current, double rotate_interval, double base); // Terminates processing gracefully, similar to pressing CTRL-C. void terminate_processing(); @@ -274,6 +292,14 @@ inline size_t pad_size(size_t size) #define padded_sizeof(x) (pad_size(sizeof(x))) +// Like write() but handles interrupted system calls by restarting. Returns +// true if the write was successful, otherwise sets errno. This function is +// thread-safe as long as no two threads write to the same descriptor. +extern bool safe_write(int fd, const char* data, int len); + +// Wraps close(2) to emit error messages and abort on unrecoverable errors.
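These wrappers (safe_write() above, safe_close() declared just below) are implemented in util.cc earlier in this patch. A minimal sketch of the intended usage, not taken from the patch itself; the file path, open flags, and error message are illustrative only:

    #include <fcntl.h>
    #include <stdio.h>
    #include "util.h"   // declares safe_write() / safe_close() as of this patch

    static void write_example(const char* data, int len)
        {
        // Illustrative scratch file; any descriptor works the same way.
        int fd = open("/tmp/example.dat", O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if ( fd < 0 )
            return;

        // Per the contract above: restarts interrupted writes, true on success.
        if ( ! safe_write(fd, data, len) )
            fprintf(stderr, "write failed\n");

        // Ignores EINTR; aborts only on unrecoverable close() errors.
        safe_close(fd);
        }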
+extern void safe_close(int fd); + extern void out_of_memory(const char* where); inline void* safe_realloc(void* ptr, size_t size) @@ -323,4 +349,16 @@ inline int safe_vsnprintf(char* str, size_t size, const char* format, va_list al // handed out by malloc. extern void get_memory_usage(unsigned int* total, unsigned int* malloced); + +// Class to be used as a third argument for STL maps to be able to use +// char*'s as keys. Otherwise the pointer values will be compared instead of +// the actual string values. +struct CompareString + { + bool operator()(char const *a, char const *b) const + { + return strcmp(a, b) < 0; + } + }; + #endif diff --git a/testing/.gitignore b/testing/.gitignore new file mode 100644 index 0000000000..a664c1d684 --- /dev/null +++ b/testing/.gitignore @@ -0,0 +1 @@ +coverage.log diff --git a/testing/Makefile b/testing/Makefile index 7f03a55f49..d56ee4e0e1 100644 --- a/testing/Makefile +++ b/testing/Makefile @@ -1,8 +1,21 @@ DIRS=btest external -all: - @for repo in $(DIRS); do (cd $$repo && make ); done +all: make-verbose coverage + +brief: make-brief coverage + +make-verbose: + @for repo in $(DIRS); do (cd $$repo && make -s ); done + +make-brief: + @for repo in $(DIRS); do (cd $$repo && make -s brief ); done + +coverage: + @for repo in $(DIRS); do (cd $$repo && echo "Coverage for '$$repo' dir:" && make -s coverage); done + @test -f btest/coverage.log && cp btest/coverage.log `mktemp brocov.tmp.XXX` || true + @for f in external/*/coverage.log; do test -f $$f && cp $$f `mktemp brocov.tmp.XXX` || true; done + @echo "Complete test suite code coverage:" + @./scripts/coverage-calc "brocov.tmp.*" coverage.log `pwd`/../scripts + @rm -f brocov.tmp.* -brief: - @for repo in $(DIRS); do (cd $$repo && make brief ); done diff --git a/testing/btest/.gitignore b/testing/btest/.gitignore index 0c143f664e..b4c1b7a858 100644 --- a/testing/btest/.gitignore +++ b/testing/btest/.gitignore @@ -1,2 +1,4 @@ .tmp +.btest.failed.dat diag.log +coverage.log diff --git a/testing/btest/Baseline/analyzers.conn-size-cc/conn.log b/testing/btest/Baseline/analyzers.conn-size-cc/conn.log deleted file mode 100644 index 2f703cbcd6..0000000000 --- a/testing/btest/Baseline/analyzers.conn-size-cc/conn.log +++ /dev/null @@ -1,5 +0,0 @@ -1128727430.350788 ? 141.42.64.125 125.190.109.199 other 56729 12345 tcp ? ? S0 X 1 60 0 0 cc=1 -1144876538.705610 5.921003 169.229.147.203 239.255.255.253 other 49370 427 udp 147 ? S0 X 3 231 0 0 -1144876599.397603 0.815763 192.150.186.169 194.64.249.244 http 53063 80 tcp 377 445 SF X 6 677 5 713 -1144876709.032670 9.000191 169.229.147.43 239.255.255.253 other 49370 427 udp 196 ? S0 X 4 308 0 0 -1144876697.068273 0.000650 192.150.186.169 192.150.186.15 icmp-unreach 3 3 icmp 56 ? OTH X 2 112 0 0 diff --git a/testing/btest/Baseline/analyzers.conn-size/conn.log b/testing/btest/Baseline/analyzers.conn-size/conn.log deleted file mode 100644 index 8129bc37f8..0000000000 --- a/testing/btest/Baseline/analyzers.conn-size/conn.log +++ /dev/null @@ -1,5 +0,0 @@ -1128727430.350788 ? 141.42.64.125 125.190.109.199 other 56729 12345 tcp ? ? S0 X 1 60 0 0 -1144876538.705610 5.921003 169.229.147.203 239.255.255.253 other 49370 427 udp 147 ? S0 X 3 231 0 0 -1144876599.397603 0.815763 192.150.186.169 194.64.249.244 http 53063 80 tcp 377 445 SF X 6 697 5 713 -1144876709.032670 9.000191 169.229.147.43 239.255.255.253 other 49370 427 udp 196 ? S0 X 4 308 0 0 -1144876697.068273 0.000650 192.150.186.169 192.150.186.15 icmp-unreach 3 3 icmp 56 ? 
OTH X 2 112 0 0 diff --git a/testing/btest/Baseline/bifs.addr_count_conversion/output b/testing/btest/Baseline/bifs.addr_count_conversion/output new file mode 100644 index 0000000000..08a74512d3 --- /dev/null +++ b/testing/btest/Baseline/bifs.addr_count_conversion/output @@ -0,0 +1,4 @@ +[536939960, 2242052096, 35374, 57701172] +2001:db8:85a3::8a2e:370:7334 +[16909060] +1.2.3.4 diff --git a/testing/btest/Baseline/bifs.addr_to_ptr_name/output b/testing/btest/Baseline/bifs.addr_to_ptr_name/output new file mode 100644 index 0000000000..8de05d1d8e --- /dev/null +++ b/testing/btest/Baseline/bifs.addr_to_ptr_name/output @@ -0,0 +1,2 @@ +2.1.0.1.0.0.0.0.0.0.0.0.0.0.0.0.2.0.8.0.9.0.0.4.0.b.8.f.7.0.6.2.ip6.arpa +52.225.125.74.in-addr.arpa diff --git a/testing/btest/Baseline/bifs.addr_version/out b/testing/btest/Baseline/bifs.addr_version/out new file mode 100644 index 0000000000..328bff6687 --- /dev/null +++ b/testing/btest/Baseline/bifs.addr_version/out @@ -0,0 +1,4 @@ +T +F +F +T diff --git a/testing/btest/Baseline/bifs.all_set/out b/testing/btest/Baseline/bifs.all_set/out new file mode 100644 index 0000000000..ed4964b655 --- /dev/null +++ b/testing/btest/Baseline/bifs.all_set/out @@ -0,0 +1,3 @@ +F +F +T diff --git a/testing/btest/Baseline/bifs.analyzer_name/out b/testing/btest/Baseline/bifs.analyzer_name/out new file mode 100644 index 0000000000..84613e9dd1 --- /dev/null +++ b/testing/btest/Baseline/bifs.analyzer_name/out @@ -0,0 +1 @@ +PIA_TCP diff --git a/testing/btest/Baseline/bifs.any_set/out b/testing/btest/Baseline/bifs.any_set/out new file mode 100644 index 0000000000..3ea3c39b0d --- /dev/null +++ b/testing/btest/Baseline/bifs.any_set/out @@ -0,0 +1,3 @@ +T +F +F diff --git a/testing/btest/Baseline/bifs.byte_len/out b/testing/btest/Baseline/bifs.byte_len/out new file mode 100644 index 0000000000..b4de394767 --- /dev/null +++ b/testing/btest/Baseline/bifs.byte_len/out @@ -0,0 +1 @@ +11 diff --git a/testing/btest/Baseline/bifs.bytestring_to_hexstr/out b/testing/btest/Baseline/bifs.bytestring_to_hexstr/out new file mode 100644 index 0000000000..241fa43ec3 --- /dev/null +++ b/testing/btest/Baseline/bifs.bytestring_to_hexstr/out @@ -0,0 +1,3 @@ +3034 + +00 diff --git a/testing/btest/Baseline/bifs.capture_state_updates/out b/testing/btest/Baseline/bifs.capture_state_updates/out new file mode 100644 index 0000000000..62a6e3c9df --- /dev/null +++ b/testing/btest/Baseline/bifs.capture_state_updates/out @@ -0,0 +1 @@ +T diff --git a/testing/btest/Baseline/bifs.cat/out b/testing/btest/Baseline/bifs.cat/out new file mode 100644 index 0000000000..cf73512b88 --- /dev/null +++ b/testing/btest/Baseline/bifs.cat/out @@ -0,0 +1,6 @@ +foo3T + +3T +foo|3|T + +|3|T diff --git a/testing/btest/Baseline/bifs.cat_string_array/out b/testing/btest/Baseline/bifs.cat_string_array/out new file mode 100644 index 0000000000..963f826db9 --- /dev/null +++ b/testing/btest/Baseline/bifs.cat_string_array/out @@ -0,0 +1,3 @@ +isatest +thisisatest +isa diff --git a/testing/btest/Baseline/bifs.clear_table/out b/testing/btest/Baseline/bifs.clear_table/out new file mode 100644 index 0000000000..b261da18d5 --- /dev/null +++ b/testing/btest/Baseline/bifs.clear_table/out @@ -0,0 +1,2 @@ +1 +0 diff --git a/testing/btest/Baseline/bifs.convert_for_pattern/out b/testing/btest/Baseline/bifs.convert_for_pattern/out new file mode 100644 index 0000000000..0de79c0927 --- /dev/null +++ b/testing/btest/Baseline/bifs.convert_for_pattern/out @@ -0,0 +1,3 @@ +foo + +b\[a\-z\]\+ diff --git a/testing/btest/Baseline/bifs.create_file/out 
b/testing/btest/Baseline/bifs.create_file/out new file mode 100644 index 0000000000..330268ec59 --- /dev/null +++ b/testing/btest/Baseline/bifs.create_file/out @@ -0,0 +1,15 @@ +T +testfile +F +15.0 +T +F +28.0 +-1.0 +15.0 +0.0 +T +15.0 +T +testdir/testfile4 +F diff --git a/testing/btest/Baseline/bifs.create_file/testfile b/testing/btest/Baseline/bifs.create_file/testfile new file mode 100644 index 0000000000..a29421755d --- /dev/null +++ b/testing/btest/Baseline/bifs.create_file/testfile @@ -0,0 +1,2 @@ +This is a test +another test diff --git a/testing/btest/Baseline/bifs.create_file/testfile2 b/testing/btest/Baseline/bifs.create_file/testfile2 new file mode 100644 index 0000000000..eee417f1b9 --- /dev/null +++ b/testing/btest/Baseline/bifs.create_file/testfile2 @@ -0,0 +1 @@ +new text diff --git a/testing/btest/Baseline/bifs.decode_base64/out b/testing/btest/Baseline/bifs.decode_base64/out new file mode 100644 index 0000000000..af0d32fbb8 --- /dev/null +++ b/testing/btest/Baseline/bifs.decode_base64/out @@ -0,0 +1,6 @@ +bro +bro +bro +bro +bro +bro diff --git a/testing/btest/Baseline/bifs.edit/out b/testing/btest/Baseline/bifs.edit/out new file mode 100644 index 0000000000..d8582f9b20 --- /dev/null +++ b/testing/btest/Baseline/bifs.edit/out @@ -0,0 +1 @@ +llo t diff --git a/testing/btest/Baseline/bifs.entropy_test/out b/testing/btest/Baseline/bifs.entropy_test/out new file mode 100644 index 0000000000..08a09de4e4 --- /dev/null +++ b/testing/btest/Baseline/bifs.entropy_test/out @@ -0,0 +1,2 @@ +[entropy=4.715374, chi_square=591.981818, mean=75.472727, monte_carlo_pi=4.0, serial_correlation=-0.11027] +[entropy=2.083189, chi_square=3906.018182, mean=69.054545, monte_carlo_pi=4.0, serial_correlation=0.849402] diff --git a/testing/btest/Baseline/bifs.escape_string/out b/testing/btest/Baseline/bifs.escape_string/out new file mode 100644 index 0000000000..6d79533c61 --- /dev/null +++ b/testing/btest/Baseline/bifs.escape_string/out @@ -0,0 +1,10 @@ +12 +Test \0string +13 +Test \0string +15 +Test \x00string +13 +Test \0string +24 +546573742000737472696e67 diff --git a/testing/btest/Baseline/bifs.exit/out b/testing/btest/Baseline/bifs.exit/out new file mode 100644 index 0000000000..ce01362503 --- /dev/null +++ b/testing/btest/Baseline/bifs.exit/out @@ -0,0 +1 @@ +hello diff --git a/testing/btest/Baseline/bifs.file_mode/out b/testing/btest/Baseline/bifs.file_mode/out new file mode 100644 index 0000000000..0c7b672b5b --- /dev/null +++ b/testing/btest/Baseline/bifs.file_mode/out @@ -0,0 +1,10 @@ +rw-r--r-- +rwxrwxrwx +rwxrwxrwt +rwxr-x--T +rwsr-xr-x +r-S------ +rwxr-sr-x +r--r-S--- +--xr-xrwx +--------- diff --git a/testing/btest/Baseline/bifs.find_all/out b/testing/btest/Baseline/bifs.find_all/out new file mode 100644 index 0000000000..17913c44ed --- /dev/null +++ b/testing/btest/Baseline/bifs.find_all/out @@ -0,0 +1,4 @@ +es +hi +------------------- +0 diff --git a/testing/btest/Baseline/bifs.find_entropy/out b/testing/btest/Baseline/bifs.find_entropy/out new file mode 100644 index 0000000000..08a09de4e4 --- /dev/null +++ b/testing/btest/Baseline/bifs.find_entropy/out @@ -0,0 +1,2 @@ +[entropy=4.715374, chi_square=591.981818, mean=75.472727, monte_carlo_pi=4.0, serial_correlation=-0.11027] +[entropy=2.083189, chi_square=3906.018182, mean=69.054545, monte_carlo_pi=4.0, serial_correlation=0.849402] diff --git a/testing/btest/Baseline/bifs.find_last/out b/testing/btest/Baseline/bifs.find_last/out new file mode 100644 index 0000000000..13eabac948 --- /dev/null +++ 
b/testing/btest/Baseline/bifs.find_last/out @@ -0,0 +1,3 @@ +es +------------------- +0 diff --git a/testing/btest/Baseline/bifs.fmt/out b/testing/btest/Baseline/bifs.fmt/out new file mode 100644 index 0000000000..2a28bf333a --- /dev/null +++ b/testing/btest/Baseline/bifs.fmt/out @@ -0,0 +1,55 @@ +test +% + +*test * +* test* +* T* +*T * +* 3.14e+00* +*3.14e+00 * +* 3.14* +* 3.1* +* -3.14e+00* +* -3.14* +* -3.1* +*-3.14e+00 * +*-3.14 * +*-3.1 * +* -128* +*-128 * +* 128* +*0000000128* +*128 * +* a0* +*00000000a0* +* a0* +* 160/tcp* +* 127.0.0.1* +* 7f000001* +*192.168.0.0/16* +* ::1* +*fe000000000000000000000000000001* +*fe80:1234::1* +*fe80:1234::/32* +* 3.0 hrs* +*/^?(^foo|bar)$?/* +* Blue* +* [1, 2, 3]* +*{^J^I2,^J^I1,^J^I3^J}* +*{^J^I[2] = bro,^J^I[1] = test^J}* +3.100000e+02 +310.000000 +310 +3.100e+02 +310.000 +310 +310 +2 +3 +4 +2 +2 +6 +2 +2 +6 diff --git a/testing/btest/Baseline/bifs.fmt_ftp_port/out b/testing/btest/Baseline/bifs.fmt_ftp_port/out new file mode 100644 index 0000000000..124878dd48 --- /dev/null +++ b/testing/btest/Baseline/bifs.fmt_ftp_port/out @@ -0,0 +1,2 @@ +192,168,0,2,1,1 + diff --git a/testing/btest/Baseline/bifs.get_port_transport_proto/out b/testing/btest/Baseline/bifs.get_port_transport_proto/out new file mode 100644 index 0000000000..dceddbc0f3 --- /dev/null +++ b/testing/btest/Baseline/bifs.get_port_transport_proto/out @@ -0,0 +1,3 @@ +tcp +udp +icmp diff --git a/testing/btest/Baseline/bifs.getsetenv/out b/testing/btest/Baseline/bifs.getsetenv/out new file mode 100644 index 0000000000..0eabe36713 --- /dev/null +++ b/testing/btest/Baseline/bifs.getsetenv/out @@ -0,0 +1,3 @@ +OK +OK +OK diff --git a/testing/btest/Baseline/bifs.global_ids/out b/testing/btest/Baseline/bifs.global_ids/out new file mode 100644 index 0000000000..415b9ac63d --- /dev/null +++ b/testing/btest/Baseline/bifs.global_ids/out @@ -0,0 +1 @@ +func diff --git a/testing/btest/Baseline/bifs.global_sizes/out b/testing/btest/Baseline/bifs.global_sizes/out new file mode 100644 index 0000000000..76c40b297a --- /dev/null +++ b/testing/btest/Baseline/bifs.global_sizes/out @@ -0,0 +1 @@ +found bro_init diff --git a/testing/btest/Baseline/bifs.hexdump/out b/testing/btest/Baseline/bifs.hexdump/out new file mode 100644 index 0000000000..740435f7ea --- /dev/null +++ b/testing/btest/Baseline/bifs.hexdump/out @@ -0,0 +1 @@ +0000 61 62 63 ff 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f abc.defg hijklmno^J0010 70 71 72 73 74 75 76 77 78 79 7a pqrstuvw xyz^J diff --git a/testing/btest/Baseline/bifs.identify_data/out b/testing/btest/Baseline/bifs.identify_data/out new file mode 100644 index 0000000000..1cadefbf6e --- /dev/null +++ b/testing/btest/Baseline/bifs.identify_data/out @@ -0,0 +1,4 @@ +ASCII text, with no line terminators +text/plain; charset=us-ascii +PNG image +image/png; charset=binary diff --git a/testing/btest/Baseline/bifs.install_src_addr_filter/output b/testing/btest/Baseline/bifs.install_src_addr_filter/output new file mode 100644 index 0000000000..bf99083391 --- /dev/null +++ b/testing/btest/Baseline/bifs.install_src_addr_filter/output @@ -0,0 +1,8 @@ +[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=50000/tcp, 
resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] diff --git a/testing/btest/Baseline/bifs.is_ascii/out b/testing/btest/Baseline/bifs.is_ascii/out new file mode 100644 index 0000000000..82d2bc093e --- /dev/null +++ b/testing/btest/Baseline/bifs.is_ascii/out @@ -0,0 +1,2 @@ +F +T diff --git a/testing/btest/Baseline/bifs.is_local_interface/out b/testing/btest/Baseline/bifs.is_local_interface/out new file mode 100644 index 0000000000..328bff6687 --- /dev/null +++ b/testing/btest/Baseline/bifs.is_local_interface/out @@ -0,0 +1,4 @@ +T +F +F +T diff --git a/testing/btest/Baseline/bifs.is_port/out b/testing/btest/Baseline/bifs.is_port/out new file mode 100644 index 0000000000..0a7c80fc6e --- /dev/null +++ b/testing/btest/Baseline/bifs.is_port/out @@ -0,0 +1,9 @@ +T +F +F +F +T +F +F +F +T diff --git a/testing/btest/Baseline/bifs.join_string/out b/testing/btest/Baseline/bifs.join_string/out new file mode 100644 index 0000000000..f1640a57ee --- /dev/null +++ b/testing/btest/Baseline/bifs.join_string/out @@ -0,0 +1,6 @@ +this * is * a * test +thisisatest +mytest +this__is__another__test +thisisanothertest +Test diff --git a/testing/btest/Baseline/bifs.length/out b/testing/btest/Baseline/bifs.length/out new file mode 100644 index 0000000000..ad43182650 --- /dev/null +++ b/testing/btest/Baseline/bifs.length/out @@ -0,0 +1,6 @@ +1 +4 +2 +0 +0 +0 diff --git a/testing/btest/Baseline/bifs.lookup_ID/out b/testing/btest/Baseline/bifs.lookup_ID/out new file mode 100644 index 0000000000..64b6379deb --- /dev/null +++ b/testing/btest/Baseline/bifs.lookup_ID/out @@ -0,0 +1,5 @@ +bro test + + + +event() diff --git a/testing/btest/Baseline/bifs.lowerupper/out b/testing/btest/Baseline/bifs.lowerupper/out new file mode 100644 index 0000000000..96b69a43c8 --- /dev/null +++ b/testing/btest/Baseline/bifs.lowerupper/out @@ -0,0 +1,2 @@ +this is a test +THIS IS A TEST diff --git a/testing/btest/Baseline/bifs.math/out b/testing/btest/Baseline/bifs.math/out new file mode 100644 index 0000000000..40131d2528 --- /dev/null +++ b/testing/btest/Baseline/bifs.math/out @@ -0,0 +1,8 @@ +3.0 +2.0 +-4.0 +-3.0 +1.772005 +23.103867 +1.144223 +0.49693 diff --git a/testing/btest/Baseline/bifs.md5/output b/testing/btest/Baseline/bifs.md5/output new file mode 100644 index 0000000000..a560286854 --- /dev/null +++ b/testing/btest/Baseline/bifs.md5/output @@ -0,0 +1,6 @@ +f97c5d29941bfb1b2fdab0874906ab82 +7b0391feb2e0cd271f1cf39aafb4376f +f97c5d29941bfb1b2fdab0874906ab82 +7b0391feb2e0cd271f1cf39aafb4376f +571c0a35c7858ad5a0e16b8fdb41adcd +1751cbd623726f423f734e23a8c7ec06 diff --git a/testing/btest/Baseline/bifs.merge_pattern/out b/testing/btest/Baseline/bifs.merge_pattern/out new file mode 100644 index 0000000000..fe8ebc3c01 --- /dev/null +++ b/testing/btest/Baseline/bifs.merge_pattern/out @@ -0,0 +1,2 @@ +match +match diff --git a/testing/btest/Baseline/bifs.net_stats_trace/output b/testing/btest/Baseline/bifs.net_stats_trace/output index a2e25e03a7..55d6693db5 100644 --- a/testing/btest/Baseline/bifs.net_stats_trace/output +++ b/testing/btest/Baseline/bifs.net_stats_trace/output @@ -1 +1 @@ -[pkts_recvd=131, pkts_dropped=0, pkts_link=0] +[pkts_recvd=136, pkts_dropped=0, pkts_link=0] diff --git a/testing/btest/Baseline/bifs.order/out b/testing/btest/Baseline/bifs.order/out new file mode 100644 index 0000000000..e77fbd310c --- /dev/null +++ b/testing/btest/Baseline/bifs.order/out @@ -0,0 +1,8 @@ 
+[5, 2, 8, 3] +[1, 3, 0, 2] +[5.0 hrs, 2.0 days, 1.0 sec, -7.0 mins] +[3, 2, 0, 1] +[192.168.123.200, 10.0.0.157, 192.168.0.3] +[1, 2, 0] +[3.03, 3.01, 3.02, 3.015] +[1, 3, 2, 0] diff --git a/testing/btest/Baseline/bifs.parse_ftp/out b/testing/btest/Baseline/bifs.parse_ftp/out new file mode 100644 index 0000000000..c080d56bdf --- /dev/null +++ b/testing/btest/Baseline/bifs.parse_ftp/out @@ -0,0 +1,5 @@ +[h=192.168.0.2, p=257/tcp, valid=T] +[h=192.168.0.2, p=257/tcp, valid=T] +[h=fe80::12, p=1234/tcp, valid=T] +[h=192.168.0.2, p=257/tcp, valid=T] +[h=::, p=1234/tcp, valid=T] diff --git a/testing/btest/Baseline/bifs.ptr_name_to_addr/output b/testing/btest/Baseline/bifs.ptr_name_to_addr/output new file mode 100644 index 0000000000..7c290027aa --- /dev/null +++ b/testing/btest/Baseline/bifs.ptr_name_to_addr/output @@ -0,0 +1,2 @@ +2607:f8b0:4009:802::1012 +74.125.225.52 diff --git a/testing/btest/Baseline/bifs.rand/out b/testing/btest/Baseline/bifs.rand/out new file mode 100644 index 0000000000..a016eb6f15 --- /dev/null +++ b/testing/btest/Baseline/bifs.rand/out @@ -0,0 +1,6 @@ +985 +474 +738 +4 +634 +473 diff --git a/testing/btest/Baseline/bifs.rand/out.2 b/testing/btest/Baseline/bifs.rand/out.2 new file mode 100644 index 0000000000..2cd43d985c --- /dev/null +++ b/testing/btest/Baseline/bifs.rand/out.2 @@ -0,0 +1,6 @@ +985 +474 +738 +974 +371 +638 diff --git a/testing/btest/Baseline/bifs.raw_bytes_to_v4_addr/out b/testing/btest/Baseline/bifs.raw_bytes_to_v4_addr/out new file mode 100644 index 0000000000..e0424e0e07 --- /dev/null +++ b/testing/btest/Baseline/bifs.raw_bytes_to_v4_addr/out @@ -0,0 +1,2 @@ +65.66.67.68 +0.0.0.0 diff --git a/testing/btest/Baseline/bifs.reading_traces/out1 b/testing/btest/Baseline/bifs.reading_traces/out1 new file mode 100644 index 0000000000..cf84443e49 --- /dev/null +++ b/testing/btest/Baseline/bifs.reading_traces/out1 @@ -0,0 +1 @@ +F diff --git a/testing/btest/Baseline/bifs.reading_traces/out2 b/testing/btest/Baseline/bifs.reading_traces/out2 new file mode 100644 index 0000000000..62a6e3c9df --- /dev/null +++ b/testing/btest/Baseline/bifs.reading_traces/out2 @@ -0,0 +1 @@ +T diff --git a/testing/btest/Baseline/bifs.record_type_to_vector/out b/testing/btest/Baseline/bifs.record_type_to_vector/out new file mode 100644 index 0000000000..1b4fa4baf1 --- /dev/null +++ b/testing/btest/Baseline/bifs.record_type_to_vector/out @@ -0,0 +1 @@ +[, ct, str1] diff --git a/testing/btest/Baseline/bifs.records_fields/out b/testing/btest/Baseline/bifs.records_fields/out index b221230fc0..0d52e64255 100644 --- a/testing/btest/Baseline/bifs.records_fields/out +++ b/testing/btest/Baseline/bifs.records_fields/out @@ -1,6 +1,6 @@ -[a=42, b=, c=, d=Bar] +[a=42, b=Foo, c=, d=Bar] { -[b] = [type_name=record, log=F, value=, default_val=Foo], +[b] = [type_name=record, log=F, value=Foo, default_val=Foo], [d] = [type_name=record, log=T, value=Bar, default_val=], [c] = [type_name=record, log=F, value=, default_val=], [a] = [type_name=record, log=F, value=42, default_val=] diff --git a/testing/btest/Baseline/bifs.remask_addr/output b/testing/btest/Baseline/bifs.remask_addr/output new file mode 100644 index 0000000000..1d276abd4f --- /dev/null +++ b/testing/btest/Baseline/bifs.remask_addr/output @@ -0,0 +1,32 @@ +1: 127.255.0.0 +2: 63.255.0.0 +3: 31.255.0.0 +4: 15.255.0.0 +5: 7.255.0.0 +6: 3.255.0.0 +7: 1.255.0.0 +8: 0.255.0.0 +9: 0.127.0.0 +10: 0.63.0.0 +11: 0.31.0.0 +12: 0.15.0.0 +13: 0.7.0.0 +14: 0.3.0.0 +15: 0.1.0.0 +16: 0.0.0.0 +17: 0.0.128.0 +18: 0.0.192.0 +19: 0.0.224.0 +20: 0.0.240.0 
+21: 0.0.248.0 +22: 0.0.252.0 +23: 0.0.254.0 +24: 0.0.255.0 +25: 0.0.255.128 +26: 0.0.255.192 +27: 0.0.255.224 +28: 0.0.255.240 +29: 0.0.255.248 +30: 0.0.255.252 +31: 0.0.255.254 +32: 0.0.255.255 diff --git a/testing/btest/Baseline/bifs.resize/out b/testing/btest/Baseline/bifs.resize/out new file mode 100644 index 0000000000..fcefeaf4df --- /dev/null +++ b/testing/btest/Baseline/bifs.resize/out @@ -0,0 +1,4 @@ +3 +5 +0 +7 diff --git a/testing/btest/Baseline/bifs.rotate_file/out b/testing/btest/Baseline/bifs.rotate_file/out new file mode 100644 index 0000000000..1e833bbae4 --- /dev/null +++ b/testing/btest/Baseline/bifs.rotate_file/out @@ -0,0 +1,3 @@ +file rotated +15.0 +0.0 diff --git a/testing/btest/Baseline/bifs.rotate_file_by_name/out b/testing/btest/Baseline/bifs.rotate_file_by_name/out new file mode 100644 index 0000000000..1e833bbae4 --- /dev/null +++ b/testing/btest/Baseline/bifs.rotate_file_by_name/out @@ -0,0 +1,3 @@ +file rotated +15.0 +0.0 diff --git a/testing/btest/Baseline/bifs.routing0_data_to_addrs/output b/testing/btest/Baseline/bifs.routing0_data_to_addrs/output new file mode 100644 index 0000000000..c79aef89d0 --- /dev/null +++ b/testing/btest/Baseline/bifs.routing0_data_to_addrs/output @@ -0,0 +1 @@ +[2001:78:1:32::1, 2001:78:1:32::2] diff --git a/testing/btest/Baseline/bifs.same_object/out b/testing/btest/Baseline/bifs.same_object/out new file mode 100644 index 0000000000..3ea3c39b0d --- /dev/null +++ b/testing/btest/Baseline/bifs.same_object/out @@ -0,0 +1,3 @@ +T +F +F diff --git a/testing/btest/Baseline/bifs.sha1/output b/testing/btest/Baseline/bifs.sha1/output new file mode 100644 index 0000000000..ddcf9060b9 --- /dev/null +++ b/testing/btest/Baseline/bifs.sha1/output @@ -0,0 +1,4 @@ +fe05bcdcdc4928012781a5f1a2a77cbb5398e106 +3e949019500deb1369f13d9644d420d3a920aa5e +fe05bcdcdc4928012781a5f1a2a77cbb5398e106 +3e949019500deb1369f13d9644d420d3a920aa5e diff --git a/testing/btest/Baseline/bifs.sha256/output b/testing/btest/Baseline/bifs.sha256/output new file mode 100644 index 0000000000..5bd6a63fa4 --- /dev/null +++ b/testing/btest/Baseline/bifs.sha256/output @@ -0,0 +1,4 @@ +7692c3ad3540bb803c020b3aee66cd8887123234ea0c6e7143c0add73ff431ed +4592092e1061c7ea85af2aed194621cc17a2762bae33a79bf8ce33fd0168b801 +7692c3ad3540bb803c020b3aee66cd8887123234ea0c6e7143c0add73ff431ed +4592092e1061c7ea85af2aed194621cc17a2762bae33a79bf8ce33fd0168b801 diff --git a/testing/btest/Baseline/bifs.sort/out b/testing/btest/Baseline/bifs.sort/out new file mode 100644 index 0000000000..fed75265b9 --- /dev/null +++ b/testing/btest/Baseline/bifs.sort/out @@ -0,0 +1,16 @@ +[2, 3, 5, 8] +[2, 3, 5, 8] +[-7.0 mins, 1.0 sec, 5.0 hrs, 2.0 days] +[-7.0 mins, 1.0 sec, 5.0 hrs, 2.0 days] +[F, F, T, T] +[F, F, T, T] +[57/tcp, 123/tcp, 7/udp, 500/udp, 12/icmp] +[57/tcp, 123/tcp, 7/udp, 500/udp, 12/icmp] +[3.03, 3.01, 3.02, 3.015] +[3.03, 3.01, 3.02, 3.015] +[192.168.123.200, 10.0.0.157, 192.168.0.3] +[192.168.123.200, 10.0.0.157, 192.168.0.3] +[10.0.0.157, 192.168.0.3, 192.168.123.200] +[10.0.0.157, 192.168.0.3, 192.168.123.200] +[3.01, 3.015, 3.02, 3.03] +[3.01, 3.015, 3.02, 3.03] diff --git a/testing/btest/Baseline/bifs.sort_string_array/out b/testing/btest/Baseline/bifs.sort_string_array/out new file mode 100644 index 0000000000..533844768d --- /dev/null +++ b/testing/btest/Baseline/bifs.sort_string_array/out @@ -0,0 +1,4 @@ +a +is +test +this diff --git a/testing/btest/Baseline/bifs.split/out b/testing/btest/Baseline/bifs.split/out new file mode 100644 index 0000000000..0ec2541f3d --- /dev/null +++ 
b/testing/btest/Baseline/bifs.split/out @@ -0,0 +1,32 @@ +t +s is a t +t +--------------------- +t +s is a test +--------------------- +t +hi +s is a t +es +t +--------------------- +t +s is a test +--------------------- +t +hi +s is a test +--------------------- +[, thi, s i, s a tes, t] +--------------------- +X-Mailer +Testing Test (http://www.example.com) +--------------------- +A += + B += + C += + D diff --git a/testing/btest/Baseline/bifs.str_shell_escape/out b/testing/btest/Baseline/bifs.str_shell_escape/out new file mode 100644 index 0000000000..1845fefa37 --- /dev/null +++ b/testing/btest/Baseline/bifs.str_shell_escape/out @@ -0,0 +1,4 @@ +24 +echo ${TEST} > "my file" +27 +echo \${TEST} > \"my file\" diff --git a/testing/btest/Baseline/bifs.strcmp/out b/testing/btest/Baseline/bifs.strcmp/out new file mode 100644 index 0000000000..d67491ed75 --- /dev/null +++ b/testing/btest/Baseline/bifs.strcmp/out @@ -0,0 +1,3 @@ +T +T +T diff --git a/testing/btest/Baseline/bifs.strftime/out b/testing/btest/Baseline/bifs.strftime/out new file mode 100644 index 0000000000..b32393b332 --- /dev/null +++ b/testing/btest/Baseline/bifs.strftime/out @@ -0,0 +1,4 @@ +1970-01-01 00:00:00 +000000 19700101 +1973-11-29 21:33:09 +213309 19731129 diff --git a/testing/btest/Baseline/bifs.string_fill/out b/testing/btest/Baseline/bifs.string_fill/out new file mode 100644 index 0000000000..b15a2d1006 --- /dev/null +++ b/testing/btest/Baseline/bifs.string_fill/out @@ -0,0 +1,3 @@ +*\0* 1 +*t\0* 2 +*test test\0* 10 diff --git a/testing/btest/Baseline/bifs.string_splitting/out b/testing/btest/Baseline/bifs.string_splitting/out deleted file mode 100644 index 8514916834..0000000000 --- a/testing/btest/Baseline/bifs.string_splitting/out +++ /dev/null @@ -1,13 +0,0 @@ -{ -[2] = Testing Test (http://www.example.com), -[1] = X-Mailer -} -{ -[2] = =, -[4] = =, -[6] = =, -[7] = D, -[1] = A , -[5] = C , -[3] = B -} diff --git a/testing/btest/Baseline/bifs.string_to_pattern/out b/testing/btest/Baseline/bifs.string_to_pattern/out new file mode 100644 index 0000000000..2492fbade2 --- /dev/null +++ b/testing/btest/Baseline/bifs.string_to_pattern/out @@ -0,0 +1,6 @@ +/^?(foo)$?/ +/^?()$?/ +/^?(b[a-z]+)$?/ +/^?(foo)$?/ +/^?()$?/ +/^?(b\[a\-z\]\+)$?/ diff --git a/testing/btest/Baseline/bifs.strip/out b/testing/btest/Baseline/bifs.strip/out new file mode 100644 index 0000000000..dc1ca4204c --- /dev/null +++ b/testing/btest/Baseline/bifs.strip/out @@ -0,0 +1,6 @@ +* this is a test * +*this is a test* +** +** +* * +** diff --git a/testing/btest/Baseline/bifs.strptime/.stdout b/testing/btest/Baseline/bifs.strptime/.stdout new file mode 100644 index 0000000000..179612d4c4 --- /dev/null +++ b/testing/btest/Baseline/bifs.strptime/.stdout @@ -0,0 +1,2 @@ +1350604800.0 +0.0 diff --git a/testing/btest/Baseline/bifs.strptime/reporter.log b/testing/btest/Baseline/bifs.strptime/reporter.log new file mode 100644 index 0000000000..367dbd63c1 --- /dev/null +++ b/testing/btest/Baseline/bifs.strptime/reporter.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path reporter +#open 2012-10-19-06-06-36 +#fields ts level message location +#types time enum string string +0.000000 Reporter::WARNING strptime conversion failed: fmt:%m d:1980-10-24 (empty) +#close 2012-10-19-06-06-36 diff --git a/testing/btest/Baseline/language.match-test2/output b/testing/btest/Baseline/bifs.strstr/out similarity index 50% rename from testing/btest/Baseline/language.match-test2/output rename to 
testing/btest/Baseline/bifs.strstr/out index 0cfbf08886..389e262145 100644 --- a/testing/btest/Baseline/language.match-test2/output +++ b/testing/btest/Baseline/bifs.strstr/out @@ -1 +1,2 @@ 2 +0 diff --git a/testing/btest/Baseline/bifs.sub/out b/testing/btest/Baseline/bifs.sub/out new file mode 100644 index 0000000000..d8860ac5f8 --- /dev/null +++ b/testing/btest/Baseline/bifs.sub/out @@ -0,0 +1,2 @@ +that is a test +that at a test diff --git a/testing/btest/Baseline/bifs.subst_string/out b/testing/btest/Baseline/bifs.subst_string/out new file mode 100644 index 0000000000..be3c92a20b --- /dev/null +++ b/testing/btest/Baseline/bifs.subst_string/out @@ -0,0 +1 @@ +that at another test diff --git a/testing/btest/Baseline/bifs.system/out b/testing/btest/Baseline/bifs.system/out new file mode 100644 index 0000000000..ae782e3280 --- /dev/null +++ b/testing/btest/Baseline/bifs.system/out @@ -0,0 +1 @@ +thistest diff --git a/testing/btest/Baseline/bifs.system_env/testfile b/testing/btest/Baseline/bifs.system_env/testfile new file mode 100644 index 0000000000..31e0fce560 --- /dev/null +++ b/testing/btest/Baseline/bifs.system_env/testfile @@ -0,0 +1 @@ +helloworld diff --git a/testing/btest/Baseline/bifs.to_addr/error b/testing/btest/Baseline/bifs.to_addr/error new file mode 100644 index 0000000000..b8ba985f7a --- /dev/null +++ b/testing/btest/Baseline/bifs.to_addr/error @@ -0,0 +1 @@ +error: Bad IP address: not an IP diff --git a/testing/btest/Baseline/bifs.to_addr/output b/testing/btest/Baseline/bifs.to_addr/output new file mode 100644 index 0000000000..ff277498f8 --- /dev/null +++ b/testing/btest/Baseline/bifs.to_addr/output @@ -0,0 +1,9 @@ +to_addr(0.0.0.0) = 0.0.0.0 (SUCCESS) +to_addr(1.2.3.4) = 1.2.3.4 (SUCCESS) +to_addr(01.02.03.04) = 1.2.3.4 (SUCCESS) +to_addr(001.002.003.004) = 1.2.3.4 (SUCCESS) +to_addr(10.20.30.40) = 10.20.30.40 (SUCCESS) +to_addr(100.200.30.40) = 100.200.30.40 (SUCCESS) +to_addr(10.0.0.0) = 10.0.0.0 (SUCCESS) +to_addr(10.00.00.000) = 10.0.0.0 (SUCCESS) +to_addr(not an IP) = :: (SUCCESS) diff --git a/testing/btest/Baseline/bifs.to_count/out b/testing/btest/Baseline/bifs.to_count/out new file mode 100644 index 0000000000..a283cbaed3 --- /dev/null +++ b/testing/btest/Baseline/bifs.to_count/out @@ -0,0 +1,9 @@ +0 +2 +3 +4 +7 +0 +18446744073709551611 +0 +123 diff --git a/testing/btest/Baseline/bifs.to_double/out b/testing/btest/Baseline/bifs.to_double/out new file mode 100644 index 0000000000..8e172dcaa6 --- /dev/null +++ b/testing/btest/Baseline/bifs.to_double/out @@ -0,0 +1,6 @@ +0.000001 +1.0 +-60.0 +3600.0 +86400.0 +1342748947.655087 diff --git a/testing/btest/Baseline/bifs.to_double_from_string/error b/testing/btest/Baseline/bifs.to_double_from_string/error new file mode 100644 index 0000000000..d6c6c0c75b --- /dev/null +++ b/testing/btest/Baseline/bifs.to_double_from_string/error @@ -0,0 +1,2 @@ +error in /da/home/robin/bro/master/testing/btest/.tmp/bifs.to_double_from_string/to_double_from_string.bro, line 7 and /da/home/robin/bro/master/testing/btest/.tmp/bifs.to_double_from_string/to_double_from_string.bro, line 15: bad conversion to double (to_double(d) and NotADouble) +error in /da/home/robin/bro/master/testing/btest/.tmp/bifs.to_double_from_string/to_double_from_string.bro, line 7 and /da/home/robin/bro/master/testing/btest/.tmp/bifs.to_double_from_string/to_double_from_string.bro, line 16: bad conversion to double (to_double(d) and ) diff --git a/testing/btest/Baseline/bifs.to_double_from_string/output b/testing/btest/Baseline/bifs.to_double_from_string/output 
new file mode 100644 index 0000000000..661d2b1479 --- /dev/null +++ b/testing/btest/Baseline/bifs.to_double_from_string/output @@ -0,0 +1,5 @@ +to_double(3.14) = 3.14 (SUCCESS) +to_double(-3.14) = -3.14 (SUCCESS) +to_double(0) = 0.0 (SUCCESS) +to_double(NotADouble) = 0.0 (SUCCESS) +to_double() = 0.0 (SUCCESS) diff --git a/testing/btest/Baseline/bifs.to_int/out b/testing/btest/Baseline/bifs.to_int/out new file mode 100644 index 0000000000..cde0c82987 --- /dev/null +++ b/testing/btest/Baseline/bifs.to_int/out @@ -0,0 +1,3 @@ +1 +-1 +0 diff --git a/testing/btest/Baseline/bifs.to_interval/out b/testing/btest/Baseline/bifs.to_interval/out new file mode 100644 index 0000000000..d841f8d99a --- /dev/null +++ b/testing/btest/Baseline/bifs.to_interval/out @@ -0,0 +1,2 @@ +1234563.14 +-1234563.14 diff --git a/testing/btest/Baseline/bifs.to_port/out b/testing/btest/Baseline/bifs.to_port/out new file mode 100644 index 0000000000..79796d605e --- /dev/null +++ b/testing/btest/Baseline/bifs.to_port/out @@ -0,0 +1,7 @@ +123/tcp +123/udp +123/icmp +0/unknown +256/tcp +256/udp +256/icmp diff --git a/testing/btest/Baseline/bifs.to_subnet/error b/testing/btest/Baseline/bifs.to_subnet/error new file mode 100644 index 0000000000..ee0062cd83 --- /dev/null +++ b/testing/btest/Baseline/bifs.to_subnet/error @@ -0,0 +1 @@ +error: Bad string in SubNetVal ctor: 10.0.0.0 diff --git a/testing/btest/Baseline/bifs.to_subnet/output b/testing/btest/Baseline/bifs.to_subnet/output new file mode 100644 index 0000000000..0775063f89 --- /dev/null +++ b/testing/btest/Baseline/bifs.to_subnet/output @@ -0,0 +1,3 @@ +10.0.0.0/8, T +2607:f8b0::/32, T +::/0, T diff --git a/testing/btest/Baseline/bifs.to_time/out b/testing/btest/Baseline/bifs.to_time/out new file mode 100644 index 0000000000..d841f8d99a --- /dev/null +++ b/testing/btest/Baseline/bifs.to_time/out @@ -0,0 +1,2 @@ +1234563.14 +-1234563.14 diff --git a/testing/btest/Baseline/bifs.type_name/out b/testing/btest/Baseline/bifs.type_name/out new file mode 100644 index 0000000000..30be3b49b2 --- /dev/null +++ b/testing/btest/Baseline/bifs.type_name/out @@ -0,0 +1,26 @@ +string +count +int +double +bool +time +interval +pattern +enum +port +addr +addr +subnet +subnet +vector of count +vector of table[count] of string +set[count] +set[port,string] +table[count] of string +table[string] of table[addr,port] of string +record { c:count; s:string; } +function(aa:int; bb:int;) : bool +function() : any +function() : void +file of string +event() diff --git a/testing/btest/Baseline/bifs.uuid_to_string/out b/testing/btest/Baseline/bifs.uuid_to_string/out new file mode 100644 index 0000000000..8ea4f86dae --- /dev/null +++ b/testing/btest/Baseline/bifs.uuid_to_string/out @@ -0,0 +1,2 @@ +626180fe-6463-6665-6730-313233343536 + diff --git a/testing/btest/Baseline/core.checksums/bad.out b/testing/btest/Baseline/core.checksums/bad.out new file mode 100644 index 0000000000..94b141c9e1 --- /dev/null +++ b/testing/btest/Baseline/core.checksums/bad.out @@ -0,0 +1,103 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-18-03-01 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332784981.078396 - - - - - bad_IP_checksum - F bro +#close 2012-03-26-18-03-01 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-18-01-25 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types 
time string addr port addr port string string bool string +1332784885.686428 UWkUyAuUGXf 127.0.0.1 30000 127.0.0.1 80 bad_TCP_checksum - F bro +#close 2012-03-26-18-01-25 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-18-02-13 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332784933.501023 UWkUyAuUGXf 127.0.0.1 30000 127.0.0.1 13000 bad_UDP_checksum - F bro +#close 2012-03-26-18-02-13 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-29-23 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334075363.536871 UWkUyAuUGXf 192.168.1.100 8 192.168.1.101 0 bad_ICMP_checksum - F bro +#close 2012-04-10-16-29-23 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-18-06-50 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332785210.013051 - - - - - routing0_hdr - F bro +1332785210.013051 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:78:1:32::2 80 bad_TCP_checksum - F bro +#close 2012-03-26-18-06-50 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-17-23-00 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332782580.798420 - - - - - routing0_hdr - F bro +1332782580.798420 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:78:1:32::2 13000 bad_UDP_checksum - F bro +#close 2012-03-26-17-23-00 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-25-11 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334075111.800086 - - - - - routing0_hdr - F bro +1334075111.800086 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 128 2001:78:1:32::1 129 bad_ICMP_checksum - F bro +#close 2012-04-10-16-25-11 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-18-07-30 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332785250.469132 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:4f8:4:7:2e0:81ff:fe52:9a6b 80 bad_TCP_checksum - F bro +#close 2012-03-26-18-07-30 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-17-02-22 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332781342.923813 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 30000 2001:4f8:4:7:2e0:81ff:fe52:9a6b 13000 bad_UDP_checksum - F bro +#close 2012-03-26-17-02-22 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-22-19 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334074939.467194 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 128 2001:4f8:4:7:2e0:81ff:fe52:9a6b 129 bad_ICMP_checksum - F bro +#close 2012-04-10-16-22-19 diff --git 
a/testing/btest/Baseline/core.checksums/good.out b/testing/btest/Baseline/core.checksums/good.out new file mode 100644 index 0000000000..a47931a15c --- /dev/null +++ b/testing/btest/Baseline/core.checksums/good.out @@ -0,0 +1,70 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-22-19 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334074939.467194 UWkUyAuUGXf 2001:4f8:4:7:2e0:81ff:fe52:ffff 128 2001:4f8:4:7:2e0:81ff:fe52:9a6b 129 bad_ICMP_checksum - F bro +#close 2012-04-10-16-22-19 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-18-05-25 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332785125.596793 - - - - - routing0_hdr - F bro +#close 2012-03-26-18-05-25 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-03-26-17-21-48 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1332782508.592037 - - - - - routing0_hdr - F bro +#close 2012-03-26-17-21-48 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-23-47 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334075027.053380 - - - - - routing0_hdr - F bro +#close 2012-04-10-16-23-47 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-23-47 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334075027.053380 - - - - - routing0_hdr - F bro +#close 2012-04-10-16-23-47 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-23-47 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334075027.053380 - - - - - routing0_hdr - F bro +#close 2012-04-10-16-23-47 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-10-16-23-47 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334075027.053380 - - - - - routing0_hdr - F bro +#close 2012-04-10-16-23-47 diff --git a/testing/btest/Baseline/core.conn-uid/counts b/testing/btest/Baseline/core.conn-uid/counts index a8fa06e1be..38b10c1b2b 100644 --- a/testing/btest/Baseline/core.conn-uid/counts +++ b/testing/btest/Baseline/core.conn-uid/counts @@ -1 +1 @@ -62 +68 diff --git a/testing/btest/Baseline/core.conn-uid/output b/testing/btest/Baseline/core.conn-uid/output index 6f58e7c10c..c77eda4f04 100644 --- a/testing/btest/Baseline/core.conn-uid/output +++ b/testing/btest/Baseline/core.conn-uid/output @@ -1,40 +1,43 @@ [orig_h=141.142.220.202, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], UWkUyAuUGXf -[orig_h=141.142.220.50, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], arKYeMETxOg -[orig_h=141.142.220.118, orig_p=35634/tcp, resp_h=208.80.152.2, resp_p=80/tcp], k6kgXLOoSKl -[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], nQcgTWjvg4c 
-[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], nQcgTWjvg4c -[orig_h=141.142.220.118, orig_p=43927/udp, resp_h=141.142.2.2, resp_p=53/udp], j4u32Pc5bif -[orig_h=141.142.220.118, orig_p=37676/udp, resp_h=141.142.2.2, resp_p=53/udp], TEfuqmmG4bh -[orig_h=141.142.220.118, orig_p=40526/udp, resp_h=141.142.2.2, resp_p=53/udp], FrJExwHcSal -[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 5OKnoww6xl4 -[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 -[orig_h=141.142.220.118, orig_p=32902/udp, resp_h=141.142.2.2, resp_p=53/udp], VW0XPVINV8a -[orig_h=141.142.220.118, orig_p=59816/udp, resp_h=141.142.2.2, resp_p=53/udp], fRFu0wcOle6 -[orig_h=141.142.220.118, orig_p=59714/udp, resp_h=141.142.2.2, resp_p=53/udp], qSsw6ESzHV4 -[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], iE6yhOq3SF -[orig_h=141.142.220.118, orig_p=58206/udp, resp_h=141.142.2.2, resp_p=53/udp], GSxOnSLghOa -[orig_h=141.142.220.118, orig_p=38911/udp, resp_h=141.142.2.2, resp_p=53/udp], qCaWGmzFtM5 -[orig_h=141.142.220.118, orig_p=59746/udp, resp_h=141.142.2.2, resp_p=53/udp], 70MGiRM1Qf4 -[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], h5DsfNtYzi1 -[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a -[orig_h=141.142.220.118, orig_p=45000/udp, resp_h=141.142.2.2, resp_p=53/udp], Tw8jXtpTGu6 -[orig_h=141.142.220.118, orig_p=48479/udp, resp_h=141.142.2.2, resp_p=53/udp], c4Zw9TmAE05 -[orig_h=141.142.220.118, orig_p=48128/udp, resp_h=141.142.2.2, resp_p=53/udp], EAr0uf4mhq -[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GvmoxJFXdTa -[orig_h=141.142.220.118, orig_p=56056/udp, resp_h=141.142.2.2, resp_p=53/udp], 0Q4FH8sESw5 -[orig_h=141.142.220.118, orig_p=55092/udp, resp_h=141.142.2.2, resp_p=53/udp], slFea8xwSmb -[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], UfGkYA2HI2g -[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 -[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 5OKnoww6xl4 -[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], iE6yhOq3SF -[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a -[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], h5DsfNtYzi1 -[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GvmoxJFXdTa -[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], UfGkYA2HI2g -[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp], i2rO3KD1Syg -[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp], i2rO3KD1Syg -[orig_h=141.142.220.44, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], 2cx26uAvUPl -[orig_h=141.142.220.226, orig_p=137/udp, resp_h=141.142.220.255, resp_p=137/udp], BWaU4aSuwkc -[orig_h=141.142.220.226, orig_p=55131/udp, resp_h=224.0.0.252, resp_p=5355/udp], 10XodEwRycf -[orig_h=141.142.220.226, orig_p=55671/udp, resp_h=224.0.0.252, resp_p=5355/udp], zno26fFZkrh -[orig_h=141.142.220.238, orig_p=56641/udp, resp_h=141.142.220.255, resp_p=137/udp], v5rgkJBig5l +[orig_h=fe80::217:f2ff:fed7:cf65, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp], arKYeMETxOg +[orig_h=141.142.220.50, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], k6kgXLOoSKl 
+[orig_h=141.142.220.118, orig_p=35634/tcp, resp_h=208.80.152.2, resp_p=80/tcp], nQcgTWjvg4c +[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], j4u32Pc5bif +[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], j4u32Pc5bif +[orig_h=141.142.220.118, orig_p=43927/udp, resp_h=141.142.2.2, resp_p=53/udp], TEfuqmmG4bh +[orig_h=141.142.220.118, orig_p=37676/udp, resp_h=141.142.2.2, resp_p=53/udp], FrJExwHcSal +[orig_h=141.142.220.118, orig_p=40526/udp, resp_h=141.142.2.2, resp_p=53/udp], 5OKnoww6xl4 +[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 +[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], VW0XPVINV8a +[orig_h=141.142.220.118, orig_p=32902/udp, resp_h=141.142.2.2, resp_p=53/udp], fRFu0wcOle6 +[orig_h=141.142.220.118, orig_p=59816/udp, resp_h=141.142.2.2, resp_p=53/udp], qSsw6ESzHV4 +[orig_h=141.142.220.118, orig_p=59714/udp, resp_h=141.142.2.2, resp_p=53/udp], iE6yhOq3SF +[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GSxOnSLghOa +[orig_h=141.142.220.118, orig_p=58206/udp, resp_h=141.142.2.2, resp_p=53/udp], qCaWGmzFtM5 +[orig_h=141.142.220.118, orig_p=38911/udp, resp_h=141.142.2.2, resp_p=53/udp], 70MGiRM1Qf4 +[orig_h=141.142.220.118, orig_p=59746/udp, resp_h=141.142.2.2, resp_p=53/udp], h5DsfNtYzi1 +[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a +[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], Tw8jXtpTGu6 +[orig_h=141.142.220.118, orig_p=45000/udp, resp_h=141.142.2.2, resp_p=53/udp], c4Zw9TmAE05 +[orig_h=141.142.220.118, orig_p=48479/udp, resp_h=141.142.2.2, resp_p=53/udp], EAr0uf4mhq +[orig_h=141.142.220.118, orig_p=48128/udp, resp_h=141.142.2.2, resp_p=53/udp], GvmoxJFXdTa +[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 0Q4FH8sESw5 +[orig_h=141.142.220.118, orig_p=56056/udp, resp_h=141.142.2.2, resp_p=53/udp], slFea8xwSmb +[orig_h=141.142.220.118, orig_p=55092/udp, resp_h=141.142.2.2, resp_p=53/udp], UfGkYA2HI2g +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], i2rO3KD1Syg +[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], VW0XPVINV8a +[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 +[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GSxOnSLghOa +[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], Tw8jXtpTGu6 +[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a +[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 0Q4FH8sESw5 +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], i2rO3KD1Syg +[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp], 2cx26uAvUPl +[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp], 2cx26uAvUPl +[orig_h=141.142.220.44, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], BWaU4aSuwkc +[orig_h=141.142.220.226, orig_p=137/udp, resp_h=141.142.220.255, resp_p=137/udp], 10XodEwRycf +[orig_h=fe80::3074:17d5:2052:c324, orig_p=65373/udp, resp_h=ff02::1:3, resp_p=5355/udp], zno26fFZkrh +[orig_h=141.142.220.226, orig_p=55131/udp, resp_h=224.0.0.252, resp_p=5355/udp], v5rgkJBig5l +[orig_h=fe80::3074:17d5:2052:c324, orig_p=54213/udp, resp_h=ff02::1:3, resp_p=5355/udp], eWZCH7OONC1 
+[orig_h=141.142.220.226, orig_p=55671/udp, resp_h=224.0.0.252, resp_p=5355/udp], 0Pwk3ntf8O3 +[orig_h=141.142.220.238, orig_p=56641/udp, resp_h=141.142.220.255, resp_p=137/udp], 0HKorjr8Zp7 diff --git a/testing/btest/Baseline/core.conn-uid/output.cc b/testing/btest/Baseline/core.conn-uid/output.cc deleted file mode 100644 index 6f58e7c10c..0000000000 --- a/testing/btest/Baseline/core.conn-uid/output.cc +++ /dev/null @@ -1,40 +0,0 @@ -[orig_h=141.142.220.202, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], UWkUyAuUGXf -[orig_h=141.142.220.50, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], arKYeMETxOg -[orig_h=141.142.220.118, orig_p=35634/tcp, resp_h=208.80.152.2, resp_p=80/tcp], k6kgXLOoSKl -[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], nQcgTWjvg4c -[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], nQcgTWjvg4c -[orig_h=141.142.220.118, orig_p=43927/udp, resp_h=141.142.2.2, resp_p=53/udp], j4u32Pc5bif -[orig_h=141.142.220.118, orig_p=37676/udp, resp_h=141.142.2.2, resp_p=53/udp], TEfuqmmG4bh -[orig_h=141.142.220.118, orig_p=40526/udp, resp_h=141.142.2.2, resp_p=53/udp], FrJExwHcSal -[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 5OKnoww6xl4 -[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 -[orig_h=141.142.220.118, orig_p=32902/udp, resp_h=141.142.2.2, resp_p=53/udp], VW0XPVINV8a -[orig_h=141.142.220.118, orig_p=59816/udp, resp_h=141.142.2.2, resp_p=53/udp], fRFu0wcOle6 -[orig_h=141.142.220.118, orig_p=59714/udp, resp_h=141.142.2.2, resp_p=53/udp], qSsw6ESzHV4 -[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], iE6yhOq3SF -[orig_h=141.142.220.118, orig_p=58206/udp, resp_h=141.142.2.2, resp_p=53/udp], GSxOnSLghOa -[orig_h=141.142.220.118, orig_p=38911/udp, resp_h=141.142.2.2, resp_p=53/udp], qCaWGmzFtM5 -[orig_h=141.142.220.118, orig_p=59746/udp, resp_h=141.142.2.2, resp_p=53/udp], 70MGiRM1Qf4 -[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], h5DsfNtYzi1 -[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a -[orig_h=141.142.220.118, orig_p=45000/udp, resp_h=141.142.2.2, resp_p=53/udp], Tw8jXtpTGu6 -[orig_h=141.142.220.118, orig_p=48479/udp, resp_h=141.142.2.2, resp_p=53/udp], c4Zw9TmAE05 -[orig_h=141.142.220.118, orig_p=48128/udp, resp_h=141.142.2.2, resp_p=53/udp], EAr0uf4mhq -[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GvmoxJFXdTa -[orig_h=141.142.220.118, orig_p=56056/udp, resp_h=141.142.2.2, resp_p=53/udp], 0Q4FH8sESw5 -[orig_h=141.142.220.118, orig_p=55092/udp, resp_h=141.142.2.2, resp_p=53/udp], slFea8xwSmb -[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], UfGkYA2HI2g -[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 -[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 5OKnoww6xl4 -[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], iE6yhOq3SF -[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a -[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], h5DsfNtYzi1 -[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GvmoxJFXdTa -[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], UfGkYA2HI2g -[orig_h=141.142.220.235, orig_p=6705/tcp, 
resp_h=173.192.163.128, resp_p=80/tcp], i2rO3KD1Syg -[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp], i2rO3KD1Syg -[orig_h=141.142.220.44, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], 2cx26uAvUPl -[orig_h=141.142.220.226, orig_p=137/udp, resp_h=141.142.220.255, resp_p=137/udp], BWaU4aSuwkc -[orig_h=141.142.220.226, orig_p=55131/udp, resp_h=224.0.0.252, resp_p=5355/udp], 10XodEwRycf -[orig_h=141.142.220.226, orig_p=55671/udp, resp_h=224.0.0.252, resp_p=5355/udp], zno26fFZkrh -[orig_h=141.142.220.238, orig_p=56641/udp, resp_h=141.142.220.255, resp_p=137/udp], v5rgkJBig5l diff --git a/testing/btest/Baseline/core.conn-uid/output.cc2 b/testing/btest/Baseline/core.conn-uid/output.cc2 deleted file mode 100644 index 6f58e7c10c..0000000000 --- a/testing/btest/Baseline/core.conn-uid/output.cc2 +++ /dev/null @@ -1,40 +0,0 @@ -[orig_h=141.142.220.202, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], UWkUyAuUGXf -[orig_h=141.142.220.50, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], arKYeMETxOg -[orig_h=141.142.220.118, orig_p=35634/tcp, resp_h=208.80.152.2, resp_p=80/tcp], k6kgXLOoSKl -[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], nQcgTWjvg4c -[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp], nQcgTWjvg4c -[orig_h=141.142.220.118, orig_p=43927/udp, resp_h=141.142.2.2, resp_p=53/udp], j4u32Pc5bif -[orig_h=141.142.220.118, orig_p=37676/udp, resp_h=141.142.2.2, resp_p=53/udp], TEfuqmmG4bh -[orig_h=141.142.220.118, orig_p=40526/udp, resp_h=141.142.2.2, resp_p=53/udp], FrJExwHcSal -[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 5OKnoww6xl4 -[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 -[orig_h=141.142.220.118, orig_p=32902/udp, resp_h=141.142.2.2, resp_p=53/udp], VW0XPVINV8a -[orig_h=141.142.220.118, orig_p=59816/udp, resp_h=141.142.2.2, resp_p=53/udp], fRFu0wcOle6 -[orig_h=141.142.220.118, orig_p=59714/udp, resp_h=141.142.2.2, resp_p=53/udp], qSsw6ESzHV4 -[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], iE6yhOq3SF -[orig_h=141.142.220.118, orig_p=58206/udp, resp_h=141.142.2.2, resp_p=53/udp], GSxOnSLghOa -[orig_h=141.142.220.118, orig_p=38911/udp, resp_h=141.142.2.2, resp_p=53/udp], qCaWGmzFtM5 -[orig_h=141.142.220.118, orig_p=59746/udp, resp_h=141.142.2.2, resp_p=53/udp], 70MGiRM1Qf4 -[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], h5DsfNtYzi1 -[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a -[orig_h=141.142.220.118, orig_p=45000/udp, resp_h=141.142.2.2, resp_p=53/udp], Tw8jXtpTGu6 -[orig_h=141.142.220.118, orig_p=48479/udp, resp_h=141.142.2.2, resp_p=53/udp], c4Zw9TmAE05 -[orig_h=141.142.220.118, orig_p=48128/udp, resp_h=141.142.2.2, resp_p=53/udp], EAr0uf4mhq -[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GvmoxJFXdTa -[orig_h=141.142.220.118, orig_p=56056/udp, resp_h=141.142.2.2, resp_p=53/udp], 0Q4FH8sESw5 -[orig_h=141.142.220.118, orig_p=55092/udp, resp_h=141.142.2.2, resp_p=53/udp], slFea8xwSmb -[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], UfGkYA2HI2g -[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 3PKsZ2Uye21 -[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp], 5OKnoww6xl4 -[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp], iE6yhOq3SF 
-[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp], P654jzLoe3a -[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp], h5DsfNtYzi1 -[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp], GvmoxJFXdTa -[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp], UfGkYA2HI2g -[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp], i2rO3KD1Syg -[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp], i2rO3KD1Syg -[orig_h=141.142.220.44, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], 2cx26uAvUPl -[orig_h=141.142.220.226, orig_p=137/udp, resp_h=141.142.220.255, resp_p=137/udp], BWaU4aSuwkc -[orig_h=141.142.220.226, orig_p=55131/udp, resp_h=224.0.0.252, resp_p=5355/udp], 10XodEwRycf -[orig_h=141.142.220.226, orig_p=55671/udp, resp_h=224.0.0.252, resp_p=5355/udp], zno26fFZkrh -[orig_h=141.142.220.238, orig_p=56641/udp, resp_h=141.142.220.255, resp_p=137/udp], v5rgkJBig5l diff --git a/testing/btest/Baseline/core.disable-mobile-ipv6/weird.log b/testing/btest/Baseline/core.disable-mobile-ipv6/weird.log new file mode 100644 index 0000000000..9da1a8d3ba --- /dev/null +++ b/testing/btest/Baseline/core.disable-mobile-ipv6/weird.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-05-21-56-51 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1333663011.602839 - - - - - unknown_protocol_135 - F bro +#close 2012-04-05-21-56-51 diff --git a/testing/btest/Baseline/core.discarder/output b/testing/btest/Baseline/core.discarder/output new file mode 100644 index 0000000000..82b4b3e622 --- /dev/null +++ b/testing/btest/Baseline/core.discarder/output @@ -0,0 +1,24 @@ +################ IP Discarder ################ +[orig_h=141.142.220.118, orig_p=35634/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=35634/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +################ TCP Discarder ################ +[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +################ UDP Discarder ################ +[orig_h=fe80::217:f2ff:fed7:cf65, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp] +[orig_h=fe80::3074:17d5:2052:c324, orig_p=65373/udp, resp_h=ff02::1:3, resp_p=5355/udp] +[orig_h=fe80::3074:17d5:2052:c324, orig_p=65373/udp, resp_h=ff02::1:3, resp_p=5355/udp] +[orig_h=fe80::3074:17d5:2052:c324, orig_p=54213/udp, resp_h=ff02::1:3, resp_p=5355/udp] +[orig_h=fe80::3074:17d5:2052:c324, 
orig_p=54213/udp, resp_h=ff02::1:3, resp_p=5355/udp] +################ ICMP Discarder ################ +Discard icmp packet: [icmp_type=3] diff --git a/testing/btest/Baseline/core.dns-init/output b/testing/btest/Baseline/core.dns-init/output new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/core.expr-exception/output b/testing/btest/Baseline/core.expr-exception/output new file mode 100644 index 0000000000..45abb333c6 --- /dev/null +++ b/testing/btest/Baseline/core.expr-exception/output @@ -0,0 +1,18 @@ +ftp field missing +[orig_h=141.142.220.118, orig_p=48649/tcp, resp_h=208.80.152.118, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.118, orig_p=49997/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.118, orig_p=49996/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.118, orig_p=49998/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.118, orig_p=50000/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.118, orig_p=49999/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.118, orig_p=50001/tcp, resp_h=208.80.152.3, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +ftp field missing +[orig_h=141.142.220.235, orig_p=6705/tcp, resp_h=173.192.163.128, resp_p=80/tcp] diff --git a/testing/btest/Baseline/core.expr-exception/reporter.log b/testing/btest/Baseline/core.expr-exception/reporter.log index 2dfe6b7b8e..d6e07b42b3 100644 --- a/testing/btest/Baseline/core.expr-exception/reporter.log +++ b/testing/btest/Baseline/core.expr-exception/reporter.log @@ -1,13 +1,18 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path reporter +#open 2011-03-18-19-06-08 #fields ts level message location #types time enum string string -1300475168.783842 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475168.915940 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475168.916118 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475168.918295 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475168.952193 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475168.952228 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475168.954761 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475168.962628 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 -1300475169.780331 Reporter::ERROR field value missing [c$ftp] /home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 8 +1300475168.783842 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475168.915940 Reporter::ERROR field value missing [c$ftp] 
/da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475168.916118 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475168.918295 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475168.952193 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475168.952228 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475168.954761 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475168.962628 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +1300475169.780331 Reporter::ERROR field value missing [c$ftp] /da/home/robin/bro/master/testing/btest/.tmp/core.expr-exception/expr-exception.bro, line 10 +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/core.file-caching-serialization/one0 b/testing/btest/Baseline/core.file-caching-serialization/one0 new file mode 100644 index 0000000000..abfe9a2af6 --- /dev/null +++ b/testing/btest/Baseline/core.file-caching-serialization/one0 @@ -0,0 +1,4 @@ +opened +write 0 +write 3 +write 6 diff --git a/testing/btest/Baseline/core.file-caching-serialization/one1 b/testing/btest/Baseline/core.file-caching-serialization/one1 new file mode 100644 index 0000000000..d53edaed28 --- /dev/null +++ b/testing/btest/Baseline/core.file-caching-serialization/one1 @@ -0,0 +1,4 @@ +opened +write 1 +write 4 +write 7 diff --git a/testing/btest/Baseline/core.file-caching-serialization/one2 b/testing/btest/Baseline/core.file-caching-serialization/one2 new file mode 100644 index 0000000000..5b5c9bc130 --- /dev/null +++ b/testing/btest/Baseline/core.file-caching-serialization/one2 @@ -0,0 +1,4 @@ +opened +write 2 +write 5 +write 8 diff --git a/testing/btest/Baseline/core.file-caching-serialization/two0 b/testing/btest/Baseline/core.file-caching-serialization/two0 new file mode 100644 index 0000000000..88e273032e --- /dev/null +++ b/testing/btest/Baseline/core.file-caching-serialization/two0 @@ -0,0 +1,6 @@ +opened +write 0 +opened +write 3 +opened +write 6 diff --git a/testing/btest/Baseline/core.file-caching-serialization/two1 b/testing/btest/Baseline/core.file-caching-serialization/two1 new file mode 100644 index 0000000000..b2f9350bc4 --- /dev/null +++ b/testing/btest/Baseline/core.file-caching-serialization/two1 @@ -0,0 +1,6 @@ +opened +write 1 +opened +write 4 +opened +write 7 diff --git a/testing/btest/Baseline/core.file-caching-serialization/two2 b/testing/btest/Baseline/core.file-caching-serialization/two2 new file mode 100644 index 0000000000..94a971c7db --- /dev/null +++ b/testing/btest/Baseline/core.file-caching-serialization/two2 @@ -0,0 +1,6 @@ +opened +write 2 +opened +write 5 +opened +write 8 diff --git a/testing/btest/Baseline/core.icmp.icmp-context/output b/testing/btest/Baseline/core.icmp.icmp-context/output new file mode 100644 index 0000000000..40dc778d8b --- /dev/null +++ b/testing/btest/Baseline/core.icmp.icmp-context/output @@ -0,0 +1,12 @@ +icmp_unreachable (code=0) + conn_id: [orig_h=10.0.0.1, orig_p=3/icmp, resp_h=10.0.0.2, resp_p=0/icmp] + icmp_conn: [orig_h=10.0.0.1, 
resp_h=10.0.0.2, itype=3, icode=0, len=0, hlim=64, v6=F] + icmp_context: [id=[orig_h=::, orig_p=0/unknown, resp_h=::, resp_p=0/unknown], len=0, proto=0, frag_offset=0, bad_hdr_len=T, bad_checksum=F, MF=F, DF=F] +icmp_unreachable (code=0) + conn_id: [orig_h=10.0.0.1, orig_p=3/icmp, resp_h=10.0.0.2, resp_p=0/icmp] + icmp_conn: [orig_h=10.0.0.1, resp_h=10.0.0.2, itype=3, icode=0, len=20, hlim=64, v6=F] + icmp_context: [id=[orig_h=10.0.0.2, orig_p=0/unknown, resp_h=10.0.0.1, resp_p=0/unknown], len=20, proto=0, frag_offset=0, bad_hdr_len=T, bad_checksum=F, MF=F, DF=F] +icmp_unreachable (code=3) + conn_id: [orig_h=192.168.1.102, orig_p=3/icmp, resp_h=192.168.1.1, resp_p=3/icmp] + icmp_conn: [orig_h=192.168.1.102, resp_h=192.168.1.1, itype=3, icode=3, len=148, hlim=128, v6=F] + icmp_context: [id=[orig_h=192.168.1.1, orig_p=53/udp, resp_h=192.168.1.102, resp_p=59207/udp], len=163, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] diff --git a/testing/btest/Baseline/core.icmp.icmp-events/output b/testing/btest/Baseline/core.icmp.icmp-events/output new file mode 100644 index 0000000000..c8c8eb317f --- /dev/null +++ b/testing/btest/Baseline/core.icmp.icmp-events/output @@ -0,0 +1,20 @@ +icmp_unreachable (code=3) + conn_id: [orig_h=192.168.1.102, orig_p=3/icmp, resp_h=192.168.1.1, resp_p=3/icmp] + icmp_conn: [orig_h=192.168.1.102, resp_h=192.168.1.1, itype=3, icode=3, len=148, hlim=128, v6=F] + icmp_context: [id=[orig_h=192.168.1.1, orig_p=53/udp, resp_h=192.168.1.102, resp_p=59207/udp], len=163, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] +icmp_time_exceeded (code=0) + conn_id: [orig_h=10.0.0.1, orig_p=11/icmp, resp_h=10.0.0.2, resp_p=0/icmp] + icmp_conn: [orig_h=10.0.0.1, resp_h=10.0.0.2, itype=11, icode=0, len=32, hlim=64, v6=F] + icmp_context: [id=[orig_h=10.0.0.2, orig_p=30000/udp, resp_h=10.0.0.1, resp_p=13000/udp], len=32, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] +icmp_echo_request (id=34844, seq=0, payload=O\x85\xe0C\0^N\xeb\xff^H^I^J^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z\x1b\x1c\x1d\x1e\x1f !"#$%&'()*+,-./01234567) + conn_id: [orig_h=10.0.0.1, orig_p=8/icmp, resp_h=74.125.225.99, resp_p=0/icmp] + icmp_conn: [orig_h=10.0.0.1, resp_h=74.125.225.99, itype=8, icode=0, len=56, hlim=64, v6=F] +icmp_echo_reply (id=34844, seq=0, payload=O\x85\xe0C\0^N\xeb\xff^H^I^J^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z\x1b\x1c\x1d\x1e\x1f !"#$%&'()*+,-./01234567) + conn_id: [orig_h=10.0.0.1, orig_p=8/icmp, resp_h=74.125.225.99, resp_p=0/icmp] + icmp_conn: [orig_h=10.0.0.1, resp_h=74.125.225.99, itype=8, icode=0, len=56, hlim=64, v6=F] +icmp_echo_request (id=34844, seq=1, payload=O\x85\xe0D\0^N\xf0}^H^I^J^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z\x1b\x1c\x1d\x1e\x1f !"#$%&'()*+,-./01234567) + conn_id: [orig_h=10.0.0.1, orig_p=8/icmp, resp_h=74.125.225.99, resp_p=0/icmp] + icmp_conn: [orig_h=10.0.0.1, resp_h=74.125.225.99, itype=8, icode=0, len=56, hlim=64, v6=F] +icmp_echo_reply (id=34844, seq=1, payload=O\x85\xe0D\0^N\xf0}^H^I^J^K^L^M^N^O^P^Q^R^S^T^U^V^W^X^Y^Z\x1b\x1c\x1d\x1e\x1f !"#$%&'()*+,-./01234567) + conn_id: [orig_h=10.0.0.1, orig_p=8/icmp, resp_h=74.125.225.99, resp_p=0/icmp] + icmp_conn: [orig_h=10.0.0.1, resp_h=74.125.225.99, itype=8, icode=0, len=56, hlim=64, v6=F] diff --git a/testing/btest/Baseline/core.icmp.icmp6-context/output b/testing/btest/Baseline/core.icmp.icmp6-context/output new file mode 100644 index 0000000000..7a83679018 --- /dev/null +++ b/testing/btest/Baseline/core.icmp.icmp6-context/output @@ -0,0 +1,16 @@ +icmp_unreachable (code=0) + conn_id: 
[orig_h=fe80::dead, orig_p=1/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=1, icode=0, len=0, hlim=64, v6=T] + icmp_context: [id=[orig_h=::, orig_p=0/unknown, resp_h=::, resp_p=0/unknown], len=0, proto=0, frag_offset=0, bad_hdr_len=T, bad_checksum=F, MF=F, DF=F] +icmp_unreachable (code=0) + conn_id: [orig_h=fe80::dead, orig_p=1/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=1, icode=0, len=40, hlim=64, v6=T] + icmp_context: [id=[orig_h=fe80::beef, orig_p=0/unknown, resp_h=fe80::dead, resp_p=0/unknown], len=48, proto=0, frag_offset=0, bad_hdr_len=T, bad_checksum=F, MF=F, DF=F] +icmp_unreachable (code=0) + conn_id: [orig_h=fe80::dead, orig_p=1/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=1, icode=0, len=60, hlim=64, v6=T] + icmp_context: [id=[orig_h=fe80::beef, orig_p=30000/udp, resp_h=fe80::dead, resp_p=13000/udp], len=60, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] +icmp_unreachable (code=0) + conn_id: [orig_h=fe80::dead, orig_p=1/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=1, icode=0, len=48, hlim=64, v6=T] + icmp_context: [id=[orig_h=fe80::beef, orig_p=0/unknown, resp_h=fe80::dead, resp_p=0/unknown], len=48, proto=0, frag_offset=0, bad_hdr_len=T, bad_checksum=F, MF=F, DF=F] diff --git a/testing/btest/Baseline/core.icmp.icmp6-events/output b/testing/btest/Baseline/core.icmp.icmp6-events/output new file mode 100644 index 0000000000..fdb58e5be1 --- /dev/null +++ b/testing/btest/Baseline/core.icmp.icmp6-events/output @@ -0,0 +1,73 @@ +icmp_unreachable (code=0) + conn_id: [orig_h=fe80::dead, orig_p=1/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=1, icode=0, len=60, hlim=64, v6=T] + icmp_context: [id=[orig_h=fe80::beef, orig_p=30000/udp, resp_h=fe80::dead, resp_p=13000/udp], len=60, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] +icmp_packet_too_big (code=0) + conn_id: [orig_h=fe80::dead, orig_p=2/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=2, icode=0, len=52, hlim=64, v6=T] + icmp_context: [id=[orig_h=fe80::beef, orig_p=30000/udp, resp_h=fe80::dead, resp_p=13000/udp], len=52, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] +icmp_time_exceeded (code=0) + conn_id: [orig_h=fe80::dead, orig_p=3/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=3, icode=0, len=52, hlim=64, v6=T] + icmp_context: [id=[orig_h=fe80::beef, orig_p=30000/udp, resp_h=fe80::dead, resp_p=13000/udp], len=52, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] +icmp_parameter_problem (code=0) + conn_id: [orig_h=fe80::dead, orig_p=4/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=4, icode=0, len=52, hlim=64, v6=T] + icmp_context: [id=[orig_h=fe80::beef, orig_p=30000/udp, resp_h=fe80::dead, resp_p=13000/udp], len=52, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F] +icmp_echo_request (id=1, seq=3, payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_echo_reply (id=1, seq=3, 
payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_echo_request (id=1, seq=4, payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_echo_reply (id=1, seq=4, payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_echo_request (id=1, seq=5, payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_echo_reply (id=1, seq=5, payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_echo_request (id=1, seq=6, payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_echo_reply (id=1, seq=6, payload=abcdefghijklmnopqrstuvwabcdefghi) + conn_id: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, orig_p=128/icmp, resp_h=2001:4860:8006::63, resp_p=129/icmp] + icmp_conn: [orig_h=2620:0:e00:400e:d1d:db37:beb:5aac, resp_h=2001:4860:8006::63, itype=128, icode=0, len=32, hlim=128, v6=T] +icmp_redirect (tgt=fe80::cafe, dest=fe80::babe) + conn_id: [orig_h=fe80::dead, orig_p=137/icmp, resp_h=fe80::beef, resp_p=0/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=137, icode=0, len=32, hlim=255, v6=T] + options: [] +icmp_router_advertisement + cur_hop_limit=13 + managed=T + other=F + home_agent=T + pref=3 + proxy=F + rsv=0 + router_lifetime=30.0 mins + reachable_time=3.0 secs 700.0 msecs + retrans_timer=1.0 sec 300.0 msecs + conn_id: [orig_h=fe80::dead, orig_p=134/icmp, resp_h=fe80::beef, resp_p=133/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=134, icode=0, len=8, hlim=255, v6=T] + options: [] +icmp_neighbor_advertisement (tgt=fe80::babe) + router=T + solicited=F + override=T + conn_id: [orig_h=fe80::dead, orig_p=136/icmp, resp_h=fe80::beef, resp_p=135/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=136, icode=0, len=16, hlim=255, v6=T] + options: [] +icmp_router_solicitation + conn_id: [orig_h=fe80::dead, orig_p=133/icmp, resp_h=fe80::beef, resp_p=134/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=133, icode=0, len=0, hlim=255, v6=T] + options: [] +icmp_neighbor_solicitation (tgt=fe80::babe) + conn_id: [orig_h=fe80::dead, orig_p=135/icmp, resp_h=fe80::beef, resp_p=136/icmp] + icmp_conn: [orig_h=fe80::dead, resp_h=fe80::beef, itype=135, icode=0, len=16, hlim=255, v6=T] + 
options: [] diff --git a/testing/btest/Baseline/core.icmp.icmp6-nd-options/output b/testing/btest/Baseline/core.icmp.icmp6-nd-options/output new file mode 100644 index 0000000000..1a3958f32d --- /dev/null +++ b/testing/btest/Baseline/core.icmp.icmp6-nd-options/output @@ -0,0 +1,28 @@ +icmp_redirect options + [otype=4, len=8, link_address=, prefix=, redirect=[id=[orig_h=fe80::aaaa, orig_p=30000/udp, resp_h=fe80::bbbb, resp_p=13000/udp], len=56, proto=2, frag_offset=0, bad_hdr_len=F, bad_checksum=F, MF=F, DF=F], mtu=, payload=] +icmp_neighbor_advertisement options + [otype=2, len=1, link_address=\xc2\0T\xf5\0\0, prefix=, redirect=, mtu=, payload=] + MAC: c20054f50000 +icmp_router_advertisement options + [otype=1, len=1, link_address=\xc2\0T\xf5\0\0, prefix=, redirect=, mtu=, payload=] + MAC: c20054f50000 + [otype=5, len=1, link_address=, prefix=, redirect=, mtu=1500, payload=] + [otype=3, len=4, link_address=, prefix=[prefix_len=64, L_flag=T, A_flag=T, valid_lifetime=30.0 days, preferred_lifetime=7.0 days, prefix=2001:db8:0:1::], redirect=, mtu=, payload=] +icmp_neighbor_advertisement options + [otype=2, len=1, link_address=\xc2\0T\xf5\0\0, prefix=, redirect=, mtu=, payload=] + MAC: c20054f50000 +icmp_router_advertisement options + [otype=1, len=1, link_address=\xc2\0T\xf5\0\0, prefix=, redirect=, mtu=, payload=] + MAC: c20054f50000 + [otype=5, len=1, link_address=, prefix=, redirect=, mtu=1500, payload=] + [otype=3, len=4, link_address=, prefix=[prefix_len=64, L_flag=T, A_flag=T, valid_lifetime=30.0 days, preferred_lifetime=7.0 days, prefix=2001:db8:0:1::], redirect=, mtu=, payload=] +icmp_router_advertisement options + [otype=1, len=1, link_address=\xc2\0T\xf5\0\0, prefix=, redirect=, mtu=, payload=] + MAC: c20054f50000 + [otype=5, len=1, link_address=, prefix=, redirect=, mtu=1500, payload=] + [otype=3, len=4, link_address=, prefix=[prefix_len=64, L_flag=T, A_flag=T, valid_lifetime=30.0 days, preferred_lifetime=7.0 days, prefix=2001:db8:0:1::], redirect=, mtu=, payload=] +icmp_router_advertisement options + [otype=1, len=1, link_address=\xc2\0T\xf5\0\0, prefix=, redirect=, mtu=, payload=] + MAC: c20054f50000 + [otype=5, len=1, link_address=, prefix=, redirect=, mtu=1500, payload=] + [otype=3, len=4, link_address=, prefix=[prefix_len=64, L_flag=T, A_flag=T, valid_lifetime=30.0 days, preferred_lifetime=7.0 days, prefix=2001:db8:0:1::], redirect=, mtu=, payload=] diff --git a/testing/btest/Baseline/core.ipv6-atomic-frag/output b/testing/btest/Baseline/core.ipv6-atomic-frag/output new file mode 100644 index 0000000000..4a628a4bdc --- /dev/null +++ b/testing/btest/Baseline/core.ipv6-atomic-frag/output @@ -0,0 +1,4 @@ +[orig_h=2001:db8:1::2, orig_p=36951/tcp, resp_h=2001:db8:1::1, resp_p=80/tcp] +[orig_h=2001:db8:1::2, orig_p=59694/tcp, resp_h=2001:db8:1::1, resp_p=80/tcp] +[orig_h=2001:db8:1::2, orig_p=27393/tcp, resp_h=2001:db8:1::1, resp_p=80/tcp] +[orig_h=2001:db8:1::2, orig_p=45805/tcp, resp_h=2001:db8:1::1, resp_p=80/tcp] diff --git a/testing/btest/Baseline/core.ipv6-flow-labels/output b/testing/btest/Baseline/core.ipv6-flow-labels/output new file mode 100644 index 0000000000..9f7292d485 --- /dev/null +++ b/testing/btest/Baseline/core.ipv6-flow-labels/output @@ -0,0 +1,74 @@ +new_connection: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] + orig_flow 0 + resp_flow 0 +connection_established: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] + orig_flow 0 + resp_flow 0 
+connection_flow_label_changed(resp): [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] + orig_flow 0 + resp_flow 7407 + old_label 0 + new_label 7407 +new_connection: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49186/tcp, resp_h=2001:470:4867:99::21, resp_p=57086/tcp] + orig_flow 0 + resp_flow 0 +connection_established: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49186/tcp, resp_h=2001:470:4867:99::21, resp_p=57086/tcp] + orig_flow 0 + resp_flow 0 +connection_flow_label_changed(resp): [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49186/tcp, resp_h=2001:470:4867:99::21, resp_p=57086/tcp] + orig_flow 0 + resp_flow 176012 + old_label 0 + new_label 176012 +new_connection: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49187/tcp, resp_h=2001:470:4867:99::21, resp_p=57087/tcp] + orig_flow 0 + resp_flow 0 +connection_established: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49187/tcp, resp_h=2001:470:4867:99::21, resp_p=57087/tcp] + orig_flow 0 + resp_flow 0 +connection_flow_label_changed(resp): [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49187/tcp, resp_h=2001:470:4867:99::21, resp_p=57087/tcp] + orig_flow 0 + resp_flow 390927 + old_label 0 + new_label 390927 +new_connection: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49188/tcp, resp_h=2001:470:4867:99::21, resp_p=57088/tcp] + orig_flow 0 + resp_flow 0 +connection_established: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49188/tcp, resp_h=2001:470:4867:99::21, resp_p=57088/tcp] + orig_flow 0 + resp_flow 0 +connection_flow_label_changed(resp): [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49188/tcp, resp_h=2001:470:4867:99::21, resp_p=57088/tcp] + orig_flow 0 + resp_flow 364705 + old_label 0 + new_label 364705 +connection_state_remove: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49186/tcp, resp_h=2001:470:4867:99::21, resp_p=57086/tcp] + orig_flow 0 + resp_flow 176012 +connection_state_remove: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49187/tcp, resp_h=2001:470:4867:99::21, resp_p=57087/tcp] + orig_flow 0 + resp_flow 390927 +connection_state_remove: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49188/tcp, resp_h=2001:470:4867:99::21, resp_p=57088/tcp] + orig_flow 0 + resp_flow 364705 +new_connection: [orig_h=2001:470:4867:99::21, orig_p=55785/tcp, resp_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, resp_p=49189/tcp] + orig_flow 267377 + resp_flow 0 +connection_established: [orig_h=2001:470:4867:99::21, orig_p=55785/tcp, resp_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, resp_p=49189/tcp] + orig_flow 267377 + resp_flow 126027 +new_connection: [orig_h=2001:470:4867:99::21, orig_p=55647/tcp, resp_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, resp_p=49190/tcp] + orig_flow 355265 + resp_flow 0 +connection_established: [orig_h=2001:470:4867:99::21, orig_p=55647/tcp, resp_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, resp_p=49190/tcp] + orig_flow 355265 + resp_flow 126028 +connection_state_remove: [orig_h=2001:470:4867:99::21, orig_p=55785/tcp, resp_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, resp_p=49189/tcp] + orig_flow 267377 + resp_flow 126027 +connection_state_remove: [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] + orig_flow 0 + resp_flow 7407 +connection_state_remove: [orig_h=2001:470:4867:99::21, orig_p=55647/tcp, resp_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, resp_p=49190/tcp] + orig_flow 355265 + resp_flow 126028 diff --git 
a/testing/btest/Baseline/core.ipv6-frag/dns.log b/testing/btest/Baseline/core.ipv6-frag/dns.log new file mode 100644 index 0000000000..de027644e8 --- /dev/null +++ b/testing/btest/Baseline/core.ipv6-frag/dns.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path dns +#open 2012-10-05-17-47-27 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected +#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool +1331084278.438444 UWkUyAuUGXf 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51850 2607:f740:b::f93 53 udp 3903 txtpadding_323.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000 F +1331084293.592245 arKYeMETxOg 2001:470:1f11:81f:d138:5f55:6d4:1fe2 51851 2607:f740:b::f93 53 udp 40849 txtpadding_3230.n1.netalyzr.icsi.berkeley.edu 1 C_INTERNET 16 TXT 0 NOERROR T F T F 0 This TXT record should be ignored 1.000000 F +#close 2012-10-05-17-47-27 diff --git a/testing/btest/Baseline/core.ipv6-frag/output b/testing/btest/Baseline/core.ipv6-frag/output new file mode 100644 index 0000000000..12dfc3a841 --- /dev/null +++ b/testing/btest/Baseline/core.ipv6-frag/output @@ -0,0 +1,5 @@ +ip6=[class=0, flow=0, len=81, nxt=17, hlim=64, src=2001:470:1f11:81f:d138:5f55:6d4:1fe2, dst=2607:f740:b::f93, exts=[]], udp = [sport=51850/udp, dport=53/udp, ulen=81] +ip6=[class=0, flow=0, len=331, nxt=17, hlim=53, src=2607:f740:b::f93, dst=2001:470:1f11:81f:d138:5f55:6d4:1fe2, exts=[]], udp = [sport=53/udp, dport=51850/udp, ulen=331] +ip6=[class=0, flow=0, len=82, nxt=17, hlim=64, src=2001:470:1f11:81f:d138:5f55:6d4:1fe2, dst=2607:f740:b::f93, exts=[]], udp = [sport=51851/udp, dport=53/udp, ulen=82] +ip6=[class=0, flow=0, len=82, nxt=17, hlim=64, src=2001:470:1f11:81f:d138:5f55:6d4:1fe2, dst=2607:f740:b::f93, exts=[]], udp = [sport=51851/udp, dport=53/udp, ulen=82] +ip6=[class=0, flow=0, len=3238, nxt=17, hlim=53, src=2607:f740:b::f93, dst=2001:470:1f11:81f:d138:5f55:6d4:1fe2, exts=[]], udp = [sport=53/udp, dport=51851/udp, ulen=3238] diff --git a/testing/btest/Baseline/core.ipv6_esp/output b/testing/btest/Baseline/core.ipv6_esp/output new file mode 100644 index 0000000000..02fb7e154f --- /dev/null +++ b/testing/btest/Baseline/core.ipv6_esp/output @@ -0,0 +1,120 @@ +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=1], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=2], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=3], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=4], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=5], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=6], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, 
exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=7], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=8], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=9], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::2, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=10], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=1], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=2], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=3], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=4], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=5], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=6], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=7], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=8], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=9], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::3, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=10], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=1], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=2], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=3], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=4], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=5], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=6], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=7], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=8], 
mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=9], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::4, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=10], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=1], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=2], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=3], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=4], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=5], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=6], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=7], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=8], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=9], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::5, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=10], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=1], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=2], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=3], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=4], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=5], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=6], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=7], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=8], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=9], mobility=]]] +[class=0, flow=0, len=116, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::12, 
exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=10, seq=10], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=1], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=2], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=3], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=4], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=5], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=6], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=7], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=8], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=9], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::13, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=11, seq=10], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=1], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=2], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=3], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=4], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=5], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=6], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=7], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=8], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=9], mobility=]]] +[class=0, flow=0, len=100, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::14, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=12, seq=10], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, 
esp=[spi=13, seq=1], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=2], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=3], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=4], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=5], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=6], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=7], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=8], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=9], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::15, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=13, seq=10], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=1], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=2], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=3], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=4], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=5], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=6], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=7], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=8], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=9], mobility=]]] +[class=0, flow=0, len=104, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::22, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=20, seq=10], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=1], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=2], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, 
src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=3], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=4], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=5], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=6], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=7], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=8], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=9], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::23, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=21, seq=10], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=1], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=2], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=3], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=4], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=5], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=6], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=7], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=8], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=9], mobility=]]] +[class=0, flow=0, len=88, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::24, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=22, seq=10], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=1], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=2], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=3], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, 
esp=[spi=23, seq=4], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=5], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=6], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=7], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=8], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=9], mobility=]]] +[class=0, flow=0, len=76, nxt=50, hlim=64, src=3ffe::1, dst=3ffe::25, exts=[[id=50, hopopts=, dstopts=, routing=, fragment=, ah=, esp=[spi=23, seq=10], mobility=]]] diff --git a/testing/btest/Baseline/core.ipv6_ext_headers/output b/testing/btest/Baseline/core.ipv6_ext_headers/output new file mode 100644 index 0000000000..b4cd249371 --- /dev/null +++ b/testing/btest/Baseline/core.ipv6_ext_headers/output @@ -0,0 +1,3 @@ +weird routing0_hdr from 2001:4f8:4:7:2e0:81ff:fe52:ffff to 2001:78:1:32::2 +[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=53/udp, resp_h=2001:78:1:32::2, resp_p=53/udp] +[ip=, ip6=[class=0, flow=0, len=59, nxt=0, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=0, hopopts=[nxt=43, len=0, options=[[otype=1, len=4, data=\0\0\0\0]]], dstopts=, routing=, fragment=, ah=, esp=, mobility=], [id=43, hopopts=, dstopts=, routing=[nxt=17, len=4, rtype=0, segleft=2, data=\0\0\0\0 ^A\0x\0^A\02\0\0\0\0\0\0\0^A ^A\0x\0^A\02\0\0\0\0\0\0\0^B], fragment=, ah=, esp=, mobility=]]], tcp=, udp=[sport=53/udp, dport=53/udp, ulen=11], icmp=] diff --git a/testing/btest/Baseline/core.ipv6_zero_len_ah/output b/testing/btest/Baseline/core.ipv6_zero_len_ah/output new file mode 100644 index 0000000000..d8db6a4c48 --- /dev/null +++ b/testing/btest/Baseline/core.ipv6_zero_len_ah/output @@ -0,0 +1,2 @@ +[orig_h=2000:1300::1, orig_p=128/icmp, resp_h=2000:1300::2, resp_p=129/icmp] +[ip=, ip6=[class=0, flow=0, len=166, nxt=51, hlim=255, src=2000:1300::1, dst=2000:1300::2, exts=[[id=51, hopopts=, dstopts=, routing=, fragment=, ah=[nxt=58, len=0, rsv=0, spi=0, seq=, data=], esp=, mobility=]]], tcp=, udp=, icmp=] diff --git a/testing/btest/Baseline/core.leaks.basic-cluster/manager-1.metrics.log b/testing/btest/Baseline/core.leaks.basic-cluster/manager-1.metrics.log new file mode 100644 index 0000000000..cb1bd5af01 --- /dev/null +++ b/testing/btest/Baseline/core.leaks.basic-cluster/manager-1.metrics.log @@ -0,0 +1,12 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path metrics +#open 2012-07-20-01-50-41 +#fields ts metric_id filter_name index.host index.str index.network value +#types time enum string addr string subnet count +1342749041.601712 TEST_METRIC foo-bar 6.5.4.3 - - 4 +1342749041.601712 TEST_METRIC foo-bar 7.2.1.5 - - 2 +1342749041.601712 TEST_METRIC foo-bar 1.2.3.4 - - 6 +#close 2012-07-20-01-50-49 diff --git a/testing/btest/Baseline/core.leaks.ip-in-ip/output b/testing/btest/Baseline/core.leaks.ip-in-ip/output new file mode 100644 index 0000000000..d8c6bee223 --- /dev/null +++ b/testing/btest/Baseline/core.leaks.ip-in-ip/output @@ -0,0 +1,13 @@ +new_connection: tunnel + conn_id: 
[orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + encap: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +new_connection: tunnel + conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + encap: [[cid=[orig_h=feed::beef, orig_p=0/unknown, resp_h=feed::cafe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf], [cid=[orig_h=babe::beef, orig_p=0/unknown, resp_h=dead::babe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=arKYeMETxOg]] +new_connection: tunnel + conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + encap: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +tunnel_changed: + conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + old: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] + new: [[cid=[orig_h=feed::beef, orig_p=0/unknown, resp_h=feed::cafe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=k6kgXLOoSKl]] diff --git a/testing/btest/Baseline/core.leaks.ipv6_ext_headers/output b/testing/btest/Baseline/core.leaks.ipv6_ext_headers/output new file mode 100644 index 0000000000..5c2177718c --- /dev/null +++ b/testing/btest/Baseline/core.leaks.ipv6_ext_headers/output @@ -0,0 +1,4 @@ +weird routing0_hdr from 2001:4f8:4:7:2e0:81ff:fe52:ffff to 2001:78:1:32::2 +[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=53/udp, resp_h=2001:78:1:32::2, resp_p=53/udp] +[ip=, ip6=[class=0, flow=0, len=59, nxt=0, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=0, hopopts=[nxt=43, len=0, options=[[otype=1, len=4, data=\0\0\0\0]]], dstopts=, routing=, fragment=, ah=, esp=, mobility=], [id=43, hopopts=, dstopts=, routing=[nxt=17, len=4, rtype=0, segleft=2, data=\0\0\0\0 ^A\0x\0^A\02\0\0\0\0\0\0\0^A ^A\0x\0^A\02\0\0\0\0\0\0\0^B], fragment=, ah=, esp=, mobility=]]], tcp=, udp=[sport=53/udp, dport=53/udp, ulen=11], icmp=] +[2001:78:1:32::1, 2001:78:1:32::2] diff --git a/testing/btest/Baseline/core.leaks.remote/sender.test.failure.log b/testing/btest/Baseline/core.leaks.remote/sender.test.failure.log new file mode 100644 index 0000000000..71e1d18c73 --- /dev/null +++ b/testing/btest/Baseline/core.leaks.remote/sender.test.failure.log @@ -0,0 +1,12 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path test.failure +#open 2012-07-20-01-50-18 +#fields t id.orig_h id.orig_p id.resp_h id.resp_p status country +#types time addr port addr port string string +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure US +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure UK +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-50-18 diff --git a/testing/btest/Baseline/core.leaks.remote/sender.test.log b/testing/btest/Baseline/core.leaks.remote/sender.test.log new file mode 100644 index 0000000000..bc3dac5a1a --- /dev/null +++ b/testing/btest/Baseline/core.leaks.remote/sender.test.log @@ -0,0 +1,14 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path test +#open 2012-07-20-01-50-18 +#fields t id.orig_h id.orig_p id.resp_h id.resp_p status country +#types time addr port addr port string string +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 
success unknown +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure US +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure UK +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success BR +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-50-18 diff --git a/testing/btest/Baseline/core.leaks.remote/sender.test.success.log b/testing/btest/Baseline/core.leaks.remote/sender.test.success.log new file mode 100644 index 0000000000..f0b26454b4 --- /dev/null +++ b/testing/btest/Baseline/core.leaks.remote/sender.test.success.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path test.success +#open 2012-07-20-01-50-18 +#fields t id.orig_h id.orig_p id.resp_h id.resp_p status country +#types time addr port addr port string string +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success unknown +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success BR +#close 2012-07-20-01-50-18 diff --git a/testing/btest/Baseline/core.leaks.vector-val-bifs/output b/testing/btest/Baseline/core.leaks.vector-val-bifs/output new file mode 100644 index 0000000000..4a57d29a71 --- /dev/null +++ b/testing/btest/Baseline/core.leaks.vector-val-bifs/output @@ -0,0 +1,10 @@ +[1, 3, 0, 2] +[2374950123] +[1, 3, 0, 2] +[2374950123] +[1, 3, 0, 2] +[2374950123] +[1, 3, 0, 2] +[3353991673] +[1, 3, 0, 2] +[3353991673] diff --git a/testing/btest/Baseline/core.mobile-ipv6-home-addr/output b/testing/btest/Baseline/core.mobile-ipv6-home-addr/output new file mode 100644 index 0000000000..88cbe0cb16 --- /dev/null +++ b/testing/btest/Baseline/core.mobile-ipv6-home-addr/output @@ -0,0 +1,2 @@ +[orig_h=2001:78:1:32::1, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] +[ip=, ip6=[class=0, flow=0, len=36, nxt=60, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=60, hopopts=, dstopts=[nxt=17, len=2, options=[[otype=1, len=2, data=\0\0], [otype=201, len=16, data= ^A\0x\0^A\02\0\0\0\0\0\0\0^A]]], routing=, fragment=, ah=, esp=, mobility=]]], tcp=, udp=[sport=30000/udp, dport=13000/udp, ulen=12], icmp=] diff --git a/testing/btest/Baseline/core.mobile-ipv6-routing/output b/testing/btest/Baseline/core.mobile-ipv6-routing/output new file mode 100644 index 0000000000..04292caaa7 --- /dev/null +++ b/testing/btest/Baseline/core.mobile-ipv6-routing/output @@ -0,0 +1,2 @@ +[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:78:1:32::1, resp_p=13000/udp] +[ip=, ip6=[class=0, flow=0, len=36, nxt=43, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=43, hopopts=, dstopts=, routing=[nxt=17, len=2, rtype=2, segleft=1, data=\0\0\0\0 ^A\0x\0^A\02\0\0\0\0\0\0\0^A], fragment=, ah=, esp=, mobility=]]], tcp=, udp=[sport=30000/udp, dport=13000/udp, ulen=12], icmp=] diff --git a/testing/btest/Baseline/core.mobility-checksums/bad.out b/testing/btest/Baseline/core.mobility-checksums/bad.out new file mode 100644 index 0000000000..dfbd5006a9 --- /dev/null +++ b/testing/btest/Baseline/core.mobility-checksums/bad.out @@ -0,0 +1,24 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1333988844.893456 - - - - - bad_MH_checksum - F bro +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr 
port addr port string string bool string +1333640536.489921 UWkUyAuUGXf 2001:78:1:32::1 30000 2001:4f8:4:7:2e0:81ff:fe52:9a6b 80 bad_TCP_checksum - F bro +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1333640468.146461 UWkUyAuUGXf 2001:78:1:32::1 30000 2001:4f8:4:7:2e0:81ff:fe52:9a6b 13000 bad_UDP_checksum - F bro diff --git a/testing/btest/Baseline/core.mobility_msg/output b/testing/btest/Baseline/core.mobility_msg/output new file mode 100644 index 0000000000..6f8d6a1699 --- /dev/null +++ b/testing/btest/Baseline/core.mobility_msg/output @@ -0,0 +1,16 @@ +Binding ACK: +[class=0, flow=0, len=16, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, mobility=[nxt=59, len=1, mh_type=6, rsv=0, chksum=53722, msg=[id=6, brr=, hoti=, coti=, hot=, cot=, bu=, back=[status=0, k=T, seq=42, life=8, options=[[otype=1, len=2, data=\0\0]]], be=]]]]] +Binding Error: +[class=0, flow=0, len=24, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, mobility=[nxt=59, len=2, mh_type=7, rsv=0, chksum=45272, msg=[id=7, brr=, hoti=, coti=, hot=, cot=, bu=, back=, be=[status=1, hoa=2001:78:1:32::1, options=[]]]]]]] +Binding Refresh Request: +[class=0, flow=0, len=8, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, mobility=[nxt=59, len=0, mh_type=0, rsv=0, chksum=55703, msg=[id=0, brr=[rsv=0, options=[]], hoti=, coti=, hot=, cot=, bu=, back=, be=]]]]] +Binding Update: +[class=0, flow=0, len=16, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, mobility=[nxt=59, len=1, mh_type=5, rsv=0, chksum=868, msg=[id=5, brr=, hoti=, coti=, hot=, cot=, bu=[seq=37, a=T, h=T, l=F, k=T, life=3, options=[[otype=1, len=2, data=\0\0]]], back=, be=]]]]] +Care-of Test: +[class=0, flow=0, len=24, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, mobility=[nxt=59, len=2, mh_type=4, rsv=0, chksum=54378, msg=[id=4, brr=, hoti=, coti=, hot=, cot=[nonce_idx=13, cookie=15, token=255, options=[]], bu=, back=, be=]]]]] +Care-of Test Init: +[class=0, flow=0, len=16, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, mobility=[nxt=59, len=1, mh_type=2, rsv=0, chksum=55181, msg=[id=2, brr=, hoti=, coti=[rsv=0, cookie=1, options=[]], hot=, cot=, bu=, back=, be=]]]]] +Home Test: +[class=0, flow=0, len=24, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, mobility=[nxt=59, len=2, mh_type=3, rsv=0, chksum=54634, msg=[id=3, brr=, hoti=, coti=, hot=[nonce_idx=13, cookie=15, token=255, options=[]], cot=, bu=, back=, be=]]]]] +Home Test Init: +[class=0, flow=0, len=16, nxt=135, hlim=64, src=2001:4f8:4:7:2e0:81ff:fe52:ffff, dst=2001:4f8:4:7:2e0:81ff:fe52:9a6b, exts=[[id=135, hopopts=, dstopts=, routing=, fragment=, ah=, esp=, 
mobility=[nxt=59, len=1, mh_type=1, rsv=0, chksum=55437, msg=[id=1, brr=, hoti=[rsv=0, cookie=1, options=[]], coti=, hot=, cot=, bu=, back=, be=]]]]] diff --git a/testing/btest/Baseline/core.print-bpf-filters-ipv4/conn.log b/testing/btest/Baseline/core.print-bpf-filters-ipv4/conn.log deleted file mode 100644 index 3736748484..0000000000 --- a/testing/btest/Baseline/core.print-bpf-filters-ipv4/conn.log +++ /dev/null @@ -1,5 +0,0 @@ -#separator \x09 -#path conn -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes -#types time string addr port addr port enum string interval count count string bool count string count count count count -1128727435.450898 UWkUyAuUGXf 141.42.64.125 56730 125.190.109.199 80 tcp http 1.733303 98 9417 SF - 0 ShADdFaf 12 730 10 9945 diff --git a/testing/btest/Baseline/core.print-bpf-filters-ipv4/output b/testing/btest/Baseline/core.print-bpf-filters-ipv4/output deleted file mode 100644 index 4f6230b768..0000000000 --- a/testing/btest/Baseline/core.print-bpf-filters-ipv4/output +++ /dev/null @@ -1,20 +0,0 @@ -#separator \x09 -#path packet_filter -#fields ts node filter init success -#types time string string bool bool -1320367155.152502 - not ip6 T T -#separator \x09 -#path packet_filter -#fields ts node filter init success -#types time string string bool bool -1320367155.379066 - (((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (udp and port 5355)) or (tcp port 22)) or (tcp port 995)) or (port 21)) or (tcp port 25 or tcp port 587)) or (port 6667)) or (tcp port 614)) or (tcp port 990)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666)) and (not ip6) T T -#separator \x09 -#path packet_filter -#fields ts node filter init success -#types time string string bool bool -1320367155.601980 - port 42 T T -#separator \x09 -#path packet_filter -#fields ts node filter init success -#types time string string bool bool -1320367155.826539 - port 56730 T T diff --git a/testing/btest/Baseline/core.print-bpf-filters-ipv6/conn.log b/testing/btest/Baseline/core.print-bpf-filters-ipv6/conn.log deleted file mode 100644 index e71eff9d57..0000000000 --- a/testing/btest/Baseline/core.print-bpf-filters-ipv6/conn.log +++ /dev/null @@ -1,2 +0,0 @@ -# ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history notice_tags -1128727435.4509 UWkUyAuUGXf 141.42.64.125 56730 125.190.109.199 80 tcp - 1.73330307006836 98 9417 SF - 0 ShADdFaf - diff --git a/testing/btest/Baseline/core.print-bpf-filters-ipv6/output b/testing/btest/Baseline/core.print-bpf-filters-ipv6/output deleted file mode 100644 index 0065fabfd3..0000000000 --- a/testing/btest/Baseline/core.print-bpf-filters-ipv6/output +++ /dev/null @@ -1,8 +0,0 @@ -# ts node filter init success -1308603220.46822 - ip or not ip F T -# ts node filter init success -1308603220.51607 - tcp port 22 F T -# ts node filter init success -1308603220.55432 - port 42 F T -# ts node filter init success -1308603220.59452 - port 56730 T T diff --git a/testing/btest/Baseline/core.print-bpf-filters/conn.log b/testing/btest/Baseline/core.print-bpf-filters/conn.log new file 
mode 100644 index 0000000000..0fd86b8dc4 --- /dev/null +++ b/testing/btest/Baseline/core.print-bpf-filters/conn.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2005-10-07-23-23-57 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1128727435.450898 UWkUyAuUGXf 141.42.64.125 56730 125.190.109.199 80 tcp http 1.733303 98 9417 SF - 0 ShADdFaf 12 730 10 9945 (empty) +#close 2005-10-07-23-23-57 diff --git a/testing/btest/Baseline/core.print-bpf-filters/output b/testing/btest/Baseline/core.print-bpf-filters/output new file mode 100644 index 0000000000..cd6e77dfcc --- /dev/null +++ b/testing/btest/Baseline/core.print-bpf-filters/output @@ -0,0 +1,40 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path packet_filter +#open 2012-10-08-16-16-08 +#fields ts node filter init success +#types time string string bool bool +1349712968.812610 - ip or not ip T T +#close 2012-10-08-16-16-08 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path packet_filter +#open 2012-10-08-16-16-09 +#fields ts node filter init success +#types time string string bool bool +1349712969.042094 - (((((((((((((((((((((((((port 53) or (tcp port 989)) or (tcp port 443)) or (port 6669)) or (udp and port 5353)) or (port 6668)) or (tcp port 1080)) or (udp and port 5355)) or (tcp port 995)) or (tcp port 22)) or (port 21 and port 2811)) or (tcp port 25 or tcp port 587)) or (tcp port 614)) or (tcp port 990)) or (port 6667)) or (udp port 137)) or (tcp port 993)) or (tcp port 5223)) or (port 514)) or (tcp port 585)) or (tcp port 992)) or (tcp port 563)) or (tcp port 994)) or (tcp port 636)) or (tcp and port (80 or 81 or 631 or 1080 or 3138 or 8000 or 8080 or 8888))) or (port 6666) T T +#close 2012-10-08-16-16-09 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path packet_filter +#open 2012-10-08-16-16-09 +#fields ts node filter init success +#types time string string bool bool +1349712969.270826 - port 42 T T +#close 2012-10-08-16-16-09 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path packet_filter +#open 2012-10-08-16-16-09 +#fields ts node filter init success +#types time string string bool bool +1349712969.499878 - port 56730 T T +#close 2012-10-08-16-16-09 diff --git a/testing/btest/Baseline/core.reporter-error-in-handler/output b/testing/btest/Baseline/core.reporter-error-in-handler/output index bfb2880ed4..b20b1b2292 100644 --- a/testing/btest/Baseline/core.reporter-error-in-handler/output +++ b/testing/btest/Baseline/core.reporter-error-in-handler/output @@ -1,2 +1,3 @@ -error in /da/home/robin/bro/seth/testing/btest/.tmp/core.reporter-error-in-handler/reporter-error-in-handler.bro, line 22: no such index (a[2]) +error in /home/jsiwek/bro/testing/btest/.tmp/core.reporter-error-in-handler/reporter-error-in-handler.bro, line 22: no such index (a[2]) +ERROR: no such index (a[1]) (/home/jsiwek/bro/testing/btest/.tmp/core.reporter-error-in-handler/reporter-error-in-handler.bro, line 28) 1st error printed on script level diff --git a/testing/btest/Baseline/core.reporter-fmt-strings/output b/testing/btest/Baseline/core.reporter-fmt-strings/output index 10a883cb5d..bbd76f3447 
100644 --- a/testing/btest/Baseline/core.reporter-fmt-strings/output +++ b/testing/btest/Baseline/core.reporter-fmt-strings/output @@ -1 +1 @@ -error in /Users/jsiwek/tmp/bro/testing/btest/.tmp/core.reporter-fmt-strings/reporter-fmt-strings.bro, line 9: not an event (dont_interpret_this(%s)) +error in /da/home/robin/bro/master/testing/btest/.tmp/core.reporter-fmt-strings/reporter-fmt-strings.bro, line 9: not an event (dont_interpret_this(%s)) diff --git a/testing/btest/Baseline/core.reporter-parse-error/output b/testing/btest/Baseline/core.reporter-parse-error/output index ca0bc9304b..76535f75d1 100644 --- a/testing/btest/Baseline/core.reporter-parse-error/output +++ b/testing/btest/Baseline/core.reporter-parse-error/output @@ -1 +1 @@ -error in /da/home/robin/bro/seth/testing/btest/.tmp/core.reporter-parse-error/reporter-parse-error.bro, line 7: unknown identifier TESTFAILURE, at or near "TESTFAILURE" +error in /da/home/robin/bro/master/testing/btest/.tmp/core.reporter-parse-error/reporter-parse-error.bro, line 7: unknown identifier TESTFAILURE, at or near "TESTFAILURE" diff --git a/testing/btest/Baseline/core.reporter-runtime-error/output b/testing/btest/Baseline/core.reporter-runtime-error/output index 5c0feedf42..5a03f5feb2 100644 --- a/testing/btest/Baseline/core.reporter-runtime-error/output +++ b/testing/btest/Baseline/core.reporter-runtime-error/output @@ -1 +1,2 @@ -error in /Users/seth/bro.git9/testing/btest/.tmp/core.reporter-runtime-error/reporter-runtime-error.bro, line 12: no such index (a[1]) +error in /home/jsiwek/bro/testing/btest/.tmp/core.reporter-runtime-error/reporter-runtime-error.bro, line 12: no such index (a[1]) +ERROR: no such index (a[2]) (/home/jsiwek/bro/testing/btest/.tmp/core.reporter-runtime-error/reporter-runtime-error.bro, line 9) diff --git a/testing/btest/Baseline/core.reporter-type-mismatch/output b/testing/btest/Baseline/core.reporter-type-mismatch/output index 6211752225..23eefd13e8 100644 --- a/testing/btest/Baseline/core.reporter-type-mismatch/output +++ b/testing/btest/Baseline/core.reporter-type-mismatch/output @@ -1,3 +1,3 @@ -error in string and /da/home/robin/bro/seth/testing/btest/.tmp/core.reporter-type-mismatch/reporter-type-mismatch.bro, line 11: arithmetic mixed with non-arithmetic (string and 42) -error in /da/home/robin/bro/seth/testing/btest/.tmp/core.reporter-type-mismatch/reporter-type-mismatch.bro, line 11 and string: type mismatch (42 and string) -error in /da/home/robin/bro/seth/testing/btest/.tmp/core.reporter-type-mismatch/reporter-type-mismatch.bro, line 11: argument type mismatch in event invocation (foo(42)) +error in string and /da/home/robin/bro/master/testing/btest/.tmp/core.reporter-type-mismatch/reporter-type-mismatch.bro, line 11: arithmetic mixed with non-arithmetic (string and 42) +error in /da/home/robin/bro/master/testing/btest/.tmp/core.reporter-type-mismatch/reporter-type-mismatch.bro, line 11 and string: type mismatch (42 and string) +error in /da/home/robin/bro/master/testing/btest/.tmp/core.reporter-type-mismatch/reporter-type-mismatch.bro, line 11: argument type mismatch in event invocation (foo(42)) diff --git a/testing/btest/Baseline/core.reporter/logger-test.log b/testing/btest/Baseline/core.reporter/logger-test.log index 6f7ba1d8c7..5afd904b63 100644 --- a/testing/btest/Baseline/core.reporter/logger-test.log +++ b/testing/btest/Baseline/core.reporter/logger-test.log @@ -1,6 +1,6 @@ -reporter_info|init test-info|/da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 8|0.000000 
-reporter_warning|init test-warning|/da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 9|0.000000 -reporter_error|init test-error|/da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 10|0.000000 -reporter_info|done test-info|/da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 15|0.000000 -reporter_warning|done test-warning|/da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 16|0.000000 -reporter_error|done test-error|/da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 17|0.000000 +reporter_info|init test-info|/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 8|0.000000 +reporter_warning|init test-warning|/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 9|0.000000 +reporter_error|init test-error|/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 10|0.000000 +reporter_info|done test-info|/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 15|0.000000 +reporter_warning|done test-warning|/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 16|0.000000 +reporter_error|done test-error|/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 17|0.000000 diff --git a/testing/btest/Baseline/core.reporter/output b/testing/btest/Baseline/core.reporter/output index 2735adc931..f2c59259c2 100644 --- a/testing/btest/Baseline/core.reporter/output +++ b/testing/btest/Baseline/core.reporter/output @@ -1,3 +1,7 @@ -/da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 52: pre test-info -warning in /da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 53: pre test-warning -error in /da/home/robin/bro/master/testing/btest/.tmp/core.reporter/reporter.bro, line 54: pre test-error +/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 52: pre test-info +warning in /home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 53: pre test-warning +error in /home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 54: pre test-error +WARNING: init test-warning (/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 9) +ERROR: init test-error (/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 10) +WARNING: done test-warning (/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 16) +ERROR: done test-error (/home/jsiwek/bro/testing/btest/.tmp/core.reporter/reporter.bro, line 17) diff --git a/testing/btest/Baseline/core.truncation/output b/testing/btest/Baseline/core.truncation/output new file mode 100644 index 0000000000..9243c2f873 --- /dev/null +++ b/testing/btest/Baseline/core.truncation/output @@ -0,0 +1,40 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-11-16-01-35 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334160095.895421 - - - - - truncated_IP - F bro +#close 2012-04-11-16-01-35 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-04-11-14-57-21 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334156241.519125 - - - - - truncated_IP - F bro +#close 2012-04-11-14-57-21 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 
2012-04-10-21-50-48 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1334094648.590126 - - - - - truncated_IP - F bro +#close 2012-04-10-21-50-48 +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-05-29-22-02-34 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1338328954.078361 - - - - - internally_truncated_header - F bro +#close 2012-05-29-22-02-34 diff --git a/testing/btest/Baseline/core.tunnels.ayiya/conn.log b/testing/btest/Baseline/core.tunnels.ayiya/conn.log new file mode 100644 index 0000000000..7646fa574a --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.ayiya/conn.log @@ -0,0 +1,17 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2009-11-08-04-41-57 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1257655301.595604 5OKnoww6xl4 2001:4978:f:4c::2 53382 2001:4860:b002::68 80 tcp http 2.101052 2981 4665 S1 - 0 ShADad 10 3605 11 5329 k6kgXLOoSKl +1257655296.585034 k6kgXLOoSKl 192.168.3.101 53859 216.14.98.22 5072 udp ayiya 20.879001 5129 6109 SF - 0 Dd 21 5717 13 6473 (empty) +1257655293.629048 UWkUyAuUGXf 192.168.3.101 53796 216.14.98.22 5072 udp ayiya - - - SHR - 0 d 0 0 1 176 (empty) +1257655296.585333 FrJExwHcSal :: 135 ff02::1:ff00:2 136 icmp - - - - OTH - 0 - 1 64 0 0 k6kgXLOoSKl +1257655293.629048 arKYeMETxOg 2001:4978:f:4c::1 128 2001:4978:f:4c::2 129 icmp - 23.834987 168 56 OTH - 0 - 3 312 1 104 UWkUyAuUGXf,k6kgXLOoSKl +1257655296.585188 TEfuqmmG4bh fe80::216:cbff:fe9a:4cb9 131 ff02::1:ff00:2 130 icmp - 0.919988 32 0 OTH - 0 - 2 144 0 0 k6kgXLOoSKl +1257655296.585151 j4u32Pc5bif fe80::216:cbff:fe9a:4cb9 131 ff02::2:f901:d225 130 icmp - 0.719947 32 0 OTH - 0 - 2 144 0 0 k6kgXLOoSKl +1257655296.585034 nQcgTWjvg4c fe80::216:cbff:fe9a:4cb9 131 ff02::1:ff9a:4cb9 130 icmp - 4.922880 32 0 OTH - 0 - 2 144 0 0 k6kgXLOoSKl +#close 2009-11-08-04-41-57 diff --git a/testing/btest/Baseline/core.tunnels.ayiya/http.log b/testing/btest/Baseline/core.tunnels.ayiya/http.log new file mode 100644 index 0000000000..2a97fd9b69 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.ayiya/http.log @@ -0,0 +1,12 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http +#open 2009-11-08-04-41-41 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1257655301.652206 5OKnoww6xl4 2001:4978:f:4c::2 53382 2001:4860:b002::68 80 1 GET ipv6.google.com / - Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en; rv:1.9.0.15pre) Gecko/2009091516 Camino/2.0b4 (like Firefox/3.0.15pre) 0 10102 200 OK - - - (empty) - - - text/html - - +1257655302.514424 5OKnoww6xl4 2001:4978:f:4c::2 53382 2001:4860:b002::68 80 2 GET ipv6.google.com 
/csi?v=3&s=webhp&action=&tran=undefined&e=17259,19771,21517,21766,21887,22212&ei=BUz2Su7PMJTglQfz3NzCAw&rt=prt.77,xjs.565,ol.645 http://ipv6.google.com/ Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en; rv:1.9.0.15pre) Gecko/2009091516 Camino/2.0b4 (like Firefox/3.0.15pre) 0 0 204 No Content - - - (empty) - - - - - - +1257655303.603569 5OKnoww6xl4 2001:4978:f:4c::2 53382 2001:4860:b002::68 80 3 GET ipv6.google.com /gen_204?atyp=i&ct=fade&cad=1254&ei=BUz2Su7PMJTglQfz3NzCAw&zx=1257655303600 http://ipv6.google.com/ Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en; rv:1.9.0.15pre) Gecko/2009091516 Camino/2.0b4 (like Firefox/3.0.15pre) 0 0 204 No Content - - - (empty) - - - - - - +#close 2009-11-08-04-41-57 diff --git a/testing/btest/Baseline/core.tunnels.ayiya/tunnel.log b/testing/btest/Baseline/core.tunnels.ayiya/tunnel.log new file mode 100644 index 0000000000..60e0a4a108 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.ayiya/tunnel.log @@ -0,0 +1,13 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path tunnel +#open 2009-11-08-04-41-33 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action +#types time string addr port addr port enum enum +1257655293.629048 UWkUyAuUGXf 192.168.3.101 53796 216.14.98.22 5072 Tunnel::AYIYA Tunnel::DISCOVER +1257655296.585034 k6kgXLOoSKl 192.168.3.101 53859 216.14.98.22 5072 Tunnel::AYIYA Tunnel::DISCOVER +1257655317.464035 k6kgXLOoSKl 192.168.3.101 53859 216.14.98.22 5072 Tunnel::AYIYA Tunnel::CLOSE +1257655317.464035 UWkUyAuUGXf 192.168.3.101 53796 216.14.98.22 5072 Tunnel::AYIYA Tunnel::CLOSE +#close 2009-11-08-04-41-57 diff --git a/testing/btest/Baseline/core.tunnels.false-teredo/weird.log b/testing/btest/Baseline/core.tunnels.false-teredo/weird.log new file mode 100644 index 0000000000..a84d469660 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.false-teredo/weird.log @@ -0,0 +1,15 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2009-11-18-17-59-51 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1258567191.405770 - - - - - truncated_header_in_tunnel - F bro +1258578181.260420 - - - - - truncated_header_in_tunnel - F bro +1258579063.557927 - - - - - truncated_header_in_tunnel - F bro +1258581768.568451 - - - - - truncated_header_in_tunnel - F bro +1258584478.859853 - - - - - truncated_header_in_tunnel - F bro +1258600683.934458 - - - - - truncated_header_in_tunnel - F bro +#close 2009-11-19-03-18-03 diff --git a/testing/btest/Baseline/core.tunnels.ip-in-ip/output b/testing/btest/Baseline/core.tunnels.ip-in-ip/output new file mode 100644 index 0000000000..4c8738290f --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.ip-in-ip/output @@ -0,0 +1,22 @@ +new_connection: tunnel + conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + encap: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +new_connection: tunnel + conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + encap: [[cid=[orig_h=feed::beef, orig_p=0/unknown, resp_h=feed::cafe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf], [cid=[orig_h=babe::beef, orig_p=0/unknown, resp_h=dead::babe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=arKYeMETxOg]] +new_connection: tunnel + conn_id: [orig_h=dead::beef, 
orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + encap: [[cid=[orig_h=1.2.3.4, orig_p=0/unknown, resp_h=5.6.7.8, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +new_connection: tunnel + conn_id: [orig_h=70.55.213.211, orig_p=31337/tcp, resp_h=192.88.99.1, resp_p=80/tcp] + encap: [[cid=[orig_h=2002:4637:d5d3::4637:d5d3, orig_p=0/unknown, resp_h=2001:4860:0:2001::68, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +new_connection: tunnel + conn_id: [orig_h=10.0.0.1, orig_p=30000/udp, resp_h=10.0.0.2, resp_p=13000/udp] + encap: [[cid=[orig_h=1.2.3.4, orig_p=0/unknown, resp_h=5.6.7.8, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +new_connection: tunnel + conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + encap: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +tunnel_changed: + conn_id: [orig_h=dead::beef, orig_p=30000/udp, resp_h=cafe::babe, resp_p=13000/udp] + old: [[cid=[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=0/unknown, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] + new: [[cid=[orig_h=feed::beef, orig_p=0/unknown, resp_h=feed::cafe, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=k6kgXLOoSKl]] diff --git a/testing/btest/Baseline/core.tunnels.ip-tunnel-uid/output b/testing/btest/Baseline/core.tunnels.ip-tunnel-uid/output new file mode 100644 index 0000000000..afb5837b23 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.ip-tunnel-uid/output @@ -0,0 +1,33 @@ +new_connection: tunnel + conn_id: [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + encap: [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + 
[[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] +NEW_PACKET: + [orig_h=2001:db8:0:1::1, orig_p=128/icmp, resp_h=2001:db8:0:1::2, resp_p=129/icmp] + [[cid=[orig_h=10.0.0.1, orig_p=0/unknown, resp_h=10.0.0.2, resp_p=0/unknown], tunnel_type=Tunnel::IP, uid=UWkUyAuUGXf]] diff --git a/testing/btest/Baseline/core.tunnels.teredo-known-services/known_services.log b/testing/btest/Baseline/core.tunnels.teredo-known-services/known_services.log new file mode 100644 index 0000000000..705cd0e956 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo-known-services/known_services.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path known_services +#open 2012-10-02-20-10-05 +#fields ts host port_num port_proto service +#types time addr port enum table[string] +1258567191.405770 192.168.1.1 53 udp TEREDO +#close 2012-10-02-20-10-05 diff --git a/testing/btest/Baseline/core.tunnels.teredo/conn.log b/testing/btest/Baseline/core.tunnels.teredo/conn.log new file mode 100644 index 0000000000..b71e56f073 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo/conn.log @@ -0,0 +1,30 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2008-05-16-15-50-57 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1210953047.736921 arKYeMETxOg 192.168.2.16 1576 75.126.130.163 80 tcp - 0.000357 0 0 SHR - 0 fA 1 40 1 40 (empty) +1210953050.867067 k6kgXLOoSKl 192.168.2.16 1577 75.126.203.78 80 tcp - 0.000387 0 0 SHR - 0 fA 1 40 1 40 (empty) +1210953057.833364 5OKnoww6xl4 192.168.2.16 1577 75.126.203.78 80 tcp - 0.079208 0 0 SH - 0 Fa 1 40 1 40 (empty) +1210953058.007081 VW0XPVINV8a 192.168.2.16 1576 75.126.130.163 80 tcp - - - - RSTOS0 - 0 R 1 40 0 0 (empty) +1210953057.834454 3PKsZ2Uye21 192.168.2.16 1578 75.126.203.78 80 tcp http 0.407908 790 171 RSTO - 0 ShADadR 6 1038 4 335 (empty) +1210953058.350065 fRFu0wcOle6 192.168.2.16 1920 192.168.2.1 53 udp dns 0.223055 66 438 SF - 0 Dd 2 122 2 494 (empty) +1210953058.577231 qSsw6ESzHV4 192.168.2.16 137 192.168.2.255 137 udp dns 1.499261 150 0 S0 - 0 D 3 234 0 0 (empty) +1210953074.264819 Tw8jXtpTGu6 192.168.2.16 1920 192.168.2.1 53 udp dns 0.297723 123 598 SF - 0 Dd 3 207 3 682 (empty) +1210953061.312379 70MGiRM1Qf4 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 tcp http 12.810848 1675 10467 S1 - 0 ShADad 10 2279 12 11191 GSxOnSLghOa +1210953076.058333 EAr0uf4mhq 192.168.2.16 1578 75.126.203.78 80 tcp - - - - RSTRH - 0 r 0 0 1 40 (empty) +1210953074.055744 h5DsfNtYzi1 192.168.2.16 1577 75.126.203.78 80 tcp - - - - RSTRH - 0 r 0 0 1 40 (empty) +1210953074.057124 P654jzLoe3a 192.168.2.16 1576 75.126.130.163 80 tcp - - - - RSTRH - 0 r 0 0 1 40 (empty) +1210953074.570439 c4Zw9TmAE05 192.168.2.16 1580 67.228.110.120 80 tcp http 0.466677 469 3916 SF - 0 ShADadFf 7 757 6 4164 (empty) +1210953052.202579 nQcgTWjvg4c 192.168.2.16 3797 65.55.158.80 3544 udp teredo 8.928880 129 48 SF - 0 Dd 2 185 
1 76 (empty) +1210953060.829233 GSxOnSLghOa 192.168.2.16 3797 83.170.1.38 32900 udp teredo 13.293994 2359 11243 SF - 0 Dd 12 2695 13 11607 (empty) +1210953058.933954 iE6yhOq3SF 0.0.0.0 68 255.255.255.255 67 udp - - - - S0 - 0 D 1 328 0 0 (empty) +1210953052.324629 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 udp - - - - SHR - 0 d 0 0 1 137 (empty) +1210953046.591933 UWkUyAuUGXf 192.168.2.16 138 192.168.2.255 138 udp - 28.448321 416 0 S0 - 0 D 2 472 0 0 (empty) +1210953052.324629 FrJExwHcSal fe80::8000:f227:bec8:61af 134 fe80::8000:ffff:ffff:fffd 133 icmp - - - - OTH - 0 - 1 88 0 0 TEfuqmmG4bh +1210953060.829303 qCaWGmzFtM5 2001:0:4137:9e50:8000:f12a:b9c8:2815 128 2001:4860:0:2001::68 129 icmp - 0.463615 4 4 OTH - 0 - 1 52 1 52 GSxOnSLghOa,nQcgTWjvg4c +1210953052.202579 j4u32Pc5bif fe80::8000:ffff:ffff:fffd 133 ff02::2 134 icmp - - - - OTH - 0 - 1 64 0 0 nQcgTWjvg4c +#close 2008-05-16-15-51-16 diff --git a/testing/btest/Baseline/core.tunnels.teredo/http.log b/testing/btest/Baseline/core.tunnels.teredo/http.log new file mode 100644 index 0000000000..c77297c58d --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo/http.log @@ -0,0 +1,13 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http +#open 2008-05-16-15-50-58 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1210953057.917183 3PKsZ2Uye21 192.168.2.16 1578 75.126.203.78 80 1 POST download913.avast.com /cgi-bin/iavs4stats.cgi - Syncer/4.80 (av_pro-1169;f) 589 0 204 - - - (empty) - - - text/plain - - +1210953061.585996 70MGiRM1Qf4 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 1 GET ipv6.google.com / - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 6640 200 OK - - - (empty) - - - text/html - - +1210953073.381474 70MGiRM1Qf4 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 2 GET ipv6.google.com /search?hl=en&q=Wireshark+!&btnG=Google+Search http://ipv6.google.com/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 25119 200 OK - - - (empty) - - - text/html - - +1210953074.674817 c4Zw9TmAE05 192.168.2.16 1580 67.228.110.120 80 1 GET www.wireshark.org / http://ipv6.google.com/search?hl=en&q=Wireshark+%21&btnG=Google+Search Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 11845 200 OK - - - (empty) - - - text/xml - - +#close 2008-05-16-15-51-16 diff --git a/testing/btest/Baseline/core.tunnels.teredo/output b/testing/btest/Baseline/core.tunnels.teredo/output new file mode 100644 index 0000000000..02d5a41e74 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo/output @@ -0,0 +1,83 @@ +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=24, nxt=58, hlim=255, src=fe80::8000:ffff:ffff:fffd, dst=ff02::2, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] +auth: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=24, nxt=58, hlim=255, src=fe80::8000:ffff:ffff:fffd, dst=ff02::2, exts=[]] + auth: [id=, value=, 
nonce=14796129349558001544, confirm=0] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp] + ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] + origin: [p=3797/udp, a=70.55.215.234] +auth: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp] + ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] + origin: [p=3797/udp, a=70.55.215.234] +origin: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp] + ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] + origin: [p=3797/udp, a=70.55.215.234] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=12, nxt=58, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] + origin: [p=32900/udp, a=83.170.1.38] +origin: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] + origin: [p=32900/udp, a=83.170.1.38] +bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] + origin: [p=32900/udp, a=83.170.1.38] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=fe80::708d:fe83:4114:a512, exts=[]] +bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=fe80::708d:fe83:4114:a512, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=12, nxt=58, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=24, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=24, nxt=6, hlim=245, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, 
len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=817, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=514, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=898, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=812, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, 
orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=717, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] diff --git a/testing/btest/Baseline/core.tunnels.teredo/tunnel.log b/testing/btest/Baseline/core.tunnels.teredo/tunnel.log new file mode 100644 index 0000000000..120089caa0 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo/tunnel.log @@ -0,0 +1,15 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path tunnel +#open 2008-05-16-15-50-52 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action +#types time string addr port addr port enum enum +1210953052.202579 nQcgTWjvg4c 192.168.2.16 3797 65.55.158.80 3544 Tunnel::TEREDO Tunnel::DISCOVER +1210953052.324629 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 Tunnel::TEREDO Tunnel::DISCOVER +1210953061.292918 GSxOnSLghOa 192.168.2.16 3797 83.170.1.38 32900 Tunnel::TEREDO Tunnel::DISCOVER +1210953076.058333 nQcgTWjvg4c 192.168.2.16 3797 65.55.158.80 3544 Tunnel::TEREDO Tunnel::CLOSE +1210953076.058333 GSxOnSLghOa 192.168.2.16 3797 83.170.1.38 32900 Tunnel::TEREDO Tunnel::CLOSE +1210953076.058333 TEfuqmmG4bh 192.168.2.16 3797 65.55.158.81 3544 Tunnel::TEREDO Tunnel::CLOSE +#close 2008-05-16-15-51-16 diff --git a/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/conn.log b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/conn.log new file mode 100644 index 0000000000..9d4bf86d57 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/conn.log @@ -0,0 +1,16 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2012-06-19-17-39-37 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1340127577.354166 FrJExwHcSal 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 tcp http 0.052829 1675 10467 S1 - 0 ShADad 10 2279 12 11191 j4u32Pc5bif +1340127577.336558 UWkUyAuUGXf 192.168.2.16 3797 65.55.158.80 3544 udp teredo 0.010291 129 52 SF - 0 Dd 2 185 1 80 (empty) +1340127577.341510 j4u32Pc5bif 192.168.2.16 3797 83.170.1.38 32900 udp teredo 0.065485 2367 11243 SF - 0 Dd 12 2703 13 11607 (empty) +1340127577.339015 k6kgXLOoSKl 192.168.2.16 3797 65.55.158.81 3544 udp - - - - SHR - 0 d 0 0 1 137 (empty) +1340127577.339015 nQcgTWjvg4c fe80::8000:f227:bec8:61af 134 fe80::8000:ffff:ffff:fffd 133 icmp - - - - OTH - 0 - 1 88 0 0 k6kgXLOoSKl +1340127577.343969 TEfuqmmG4bh 2001:0:4137:9e50:8000:f12a:b9c8:2815 128 2001:4860:0:2001::68 129 icmp - 0.007778 4 4 OTH - 0 - 1 52 1 52 UWkUyAuUGXf,j4u32Pc5bif +1340127577.336558 arKYeMETxOg fe80::8000:ffff:ffff:fffd 133 ff02::2 134 icmp - - - - OTH - 0 - 1 64 0 0 UWkUyAuUGXf +#close 2012-06-19-17-39-37 diff --git 
a/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/http.log b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/http.log new file mode 100644 index 0000000000..e0b223d114 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/http.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http +#open 2012-06-19-17-39-37 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1340127577.361683 FrJExwHcSal 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 1 GET ipv6.google.com / - Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 6640 200 OK - - - (empty) - - - text/html - - +1340127577.379360 FrJExwHcSal 2001:0:4137:9e50:8000:f12a:b9c8:2815 1286 2001:4860:0:2001::68 80 2 GET ipv6.google.com /search?hl=en&q=Wireshark+!&btnG=Google+Search http://ipv6.google.com/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b5) Gecko/2008032620 Firefox/3.0b5 0 25119 200 OK - - - (empty) - - - text/html - - +#close 2012-06-19-17-39-37 diff --git a/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/output b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/output new file mode 100644 index 0000000000..02d5a41e74 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/output @@ -0,0 +1,83 @@ +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=24, nxt=58, hlim=255, src=fe80::8000:ffff:ffff:fffd, dst=ff02::2, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] +auth: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=24, nxt=58, hlim=255, src=fe80::8000:ffff:ffff:fffd, dst=ff02::2, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp] + ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] + origin: [p=3797/udp, a=70.55.215.234] +auth: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp] + ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] + origin: [p=3797/udp, a=70.55.215.234] +origin: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.81, resp_p=3544/udp] + ip6: [class=0, flow=0, len=48, nxt=58, hlim=255, src=fe80::8000:f227:bec8:61af, dst=fe80::8000:ffff:ffff:fffd, exts=[]] + auth: [id=, value=, nonce=14796129349558001544, confirm=0] + origin: [p=3797/udp, a=70.55.215.234] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=21, 
src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=12, nxt=58, hlim=21, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] + origin: [p=32900/udp, a=83.170.1.38] +origin: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] + origin: [p=32900/udp, a=83.170.1.38] +bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=65.55.158.80, resp_p=3544/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=fe80::708d:fe83:4114:a512, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] + origin: [p=32900/udp, a=83.170.1.38] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=fe80::708d:fe83:4114:a512, exts=[]] +bubble: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=0, nxt=59, hlim=0, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=fe80::708d:fe83:4114:a512, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=12, nxt=58, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=24, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=24, nxt=6, hlim=245, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=817, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=514, nxt=6, hlim=58, 
src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=898, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=812, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=1232, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=717, nxt=6, hlim=58, src=2001:4860:0:2001::68, dst=2001:0:4137:9e50:8000:f12a:b9c8:2815, exts=[]] +packet: [orig_h=192.168.2.16, orig_p=3797/udp, resp_h=83.170.1.38, resp_p=32900/udp] + ip6: [class=0, flow=0, len=20, nxt=6, hlim=128, src=2001:0:4137:9e50:8000:f12a:b9c8:2815, dst=2001:4860:0:2001::68, exts=[]] diff --git a/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/tunnel.log b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/tunnel.log new file mode 100644 index 0000000000..86c2c94c04 --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/tunnel.log @@ -0,0 +1,15 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path tunnel +#open 2012-06-19-17-39-37 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action +#types time string addr port addr port enum enum +1340127577.336558 UWkUyAuUGXf 192.168.2.16 3797 65.55.158.80 3544 Tunnel::TEREDO Tunnel::DISCOVER +1340127577.339015 k6kgXLOoSKl 
192.168.2.16 3797 65.55.158.81 3544 Tunnel::TEREDO Tunnel::DISCOVER +1340127577.351747 j4u32Pc5bif 192.168.2.16 3797 83.170.1.38 32900 Tunnel::TEREDO Tunnel::DISCOVER +1340127577.406995 UWkUyAuUGXf 192.168.2.16 3797 65.55.158.80 3544 Tunnel::TEREDO Tunnel::CLOSE +1340127577.406995 j4u32Pc5bif 192.168.2.16 3797 83.170.1.38 32900 Tunnel::TEREDO Tunnel::CLOSE +1340127577.406995 k6kgXLOoSKl 192.168.2.16 3797 65.55.158.81 3544 Tunnel::TEREDO Tunnel::CLOSE +#close 2012-06-19-17-39-37 diff --git a/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/weird.log b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/weird.log new file mode 100644 index 0000000000..764b78656a --- /dev/null +++ b/testing/btest/Baseline/core.tunnels.teredo_bubble_with_payload/weird.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path weird +#open 2012-10-02-16-53-03 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p name addl notice peer +#types time string addr port addr port string string bool string +1340127577.341510 j4u32Pc5bif 192.168.2.16 3797 83.170.1.38 32900 Teredo_bubble_with_payload - F bro +1340127577.346849 UWkUyAuUGXf 192.168.2.16 3797 65.55.158.80 3544 Teredo_bubble_with_payload - F bro +#close 2012-10-02-16-53-03 diff --git a/testing/btest/Baseline/core.vlan-mpls/conn.log b/testing/btest/Baseline/core.vlan-mpls/conn.log index 69e23f3875..d4cc8370a5 100644 --- a/testing/btest/Baseline/core.vlan-mpls/conn.log +++ b/testing/btest/Baseline/core.vlan-mpls/conn.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path conn -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes -#types time string addr port addr port enum string interval count count string bool count string count count count count -952109346.874907 UWkUyAuUGXf 10.1.2.1 11001 10.34.0.1 23 tcp - 2.102560 26 0 SH - 0 SADF 11 470 0 0 -1128727435.450898 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 tcp http 1.733303 98 9417 SF - 0 ShADdFaf 12 730 10 9945 -1278600802.069419 k6kgXLOoSKl 10.20.80.1 50343 10.0.0.15 80 tcp - 0.004152 9 3429 SF - 0 ShADadfF 7 381 7 3801 +#open 2005-10-07-23-23-55 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +952109346.874907 UWkUyAuUGXf 10.1.2.1 11001 10.34.0.1 23 tcp - 2.102560 26 0 SH - 0 SADF 11 470 0 0 (empty) +1128727435.450898 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 tcp http 1.733303 98 9417 SF - 0 ShADdFaf 12 730 10 9945 (empty) +1278600802.069419 k6kgXLOoSKl 10.20.80.1 50343 10.0.0.15 80 tcp - 0.004152 9 3429 SF - 0 ShADadfF 7 381 7 3801 (empty) +#close 2010-07-08-14-53-22 diff --git a/testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log b/testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log index 6819dc0813..41209a4084 100644 --- a/testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log +++ b/testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log @@ -1,5 +1,9 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path 
loaded_scripts +#open 2012-07-20-14-34-11 #fields name #types string scripts/base/init-bare.bro @@ -14,5 +18,16 @@ scripts/base/init-bare.bro build/src/base/logging.bif.bro scripts/base/frameworks/logging/./postprocessors/__load__.bro scripts/base/frameworks/logging/./postprocessors/./scp.bro + scripts/base/frameworks/logging/./postprocessors/./sftp.bro scripts/base/frameworks/logging/./writers/ascii.bro + scripts/base/frameworks/logging/./writers/dataseries.bro + scripts/base/frameworks/logging/./writers/elasticsearch.bro + scripts/base/frameworks/logging/./writers/none.bro + scripts/base/frameworks/input/__load__.bro + scripts/base/frameworks/input/./main.bro + build/src/base/input.bif.bro + scripts/base/frameworks/input/./readers/ascii.bro + scripts/base/frameworks/input/./readers/raw.bro + scripts/base/frameworks/input/./readers/benchmark.bro scripts/policy/misc/loaded-scripts.bro +#close 2012-07-20-14-34-11 diff --git a/testing/btest/Baseline/coverage.bare-mode-errors/unique_errors_no_elasticsearch b/testing/btest/Baseline/coverage.bare-mode-errors/unique_errors_no_elasticsearch new file mode 100644 index 0000000000..e95f88e74b --- /dev/null +++ b/testing/btest/Baseline/coverage.bare-mode-errors/unique_errors_no_elasticsearch @@ -0,0 +1 @@ +error: unknown writer type requested diff --git a/testing/btest/Baseline/coverage.coverage-blacklist/output b/testing/btest/Baseline/coverage.coverage-blacklist/output new file mode 100644 index 0000000000..c54e4283b2 --- /dev/null +++ b/testing/btest/Baseline/coverage.coverage-blacklist/output @@ -0,0 +1,5 @@ +1 /da/home/robin/bro/master/testing/btest/.tmp/coverage.coverage-blacklist/coverage-blacklist.bro, line 13 print cover me; +1 /da/home/robin/bro/master/testing/btest/.tmp/coverage.coverage-blacklist/coverage-blacklist.bro, line 17 print always executed; +0 /da/home/robin/bro/master/testing/btest/.tmp/coverage.coverage-blacklist/coverage-blacklist.bro, line 26 print also impossible, but included in code coverage analysis; +1 /da/home/robin/bro/master/testing/btest/.tmp/coverage.coverage-blacklist/coverage-blacklist.bro, line 29 print success; +1 /da/home/robin/bro/master/testing/btest/.tmp/coverage.coverage-blacklist/coverage-blacklist.bro, line 5 print first; diff --git a/testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log b/testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log index 7a461a3903..c3ee64cffe 100644 --- a/testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log +++ b/testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log @@ -1,5 +1,9 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path loaded_scripts +#open 2012-07-20-14-34-40 #fields name #types string scripts/base/init-bare.bro @@ -14,7 +18,17 @@ scripts/base/init-bare.bro build/src/base/logging.bif.bro scripts/base/frameworks/logging/./postprocessors/__load__.bro scripts/base/frameworks/logging/./postprocessors/./scp.bro + scripts/base/frameworks/logging/./postprocessors/./sftp.bro scripts/base/frameworks/logging/./writers/ascii.bro + scripts/base/frameworks/logging/./writers/dataseries.bro + scripts/base/frameworks/logging/./writers/elasticsearch.bro + scripts/base/frameworks/logging/./writers/none.bro + scripts/base/frameworks/input/__load__.bro + scripts/base/frameworks/input/./main.bro + build/src/base/input.bif.bro + scripts/base/frameworks/input/./readers/ascii.bro + scripts/base/frameworks/input/./readers/raw.bro + 
scripts/base/frameworks/input/./readers/benchmark.bro scripts/base/init-default.bro scripts/base/utils/site.bro scripts/base/utils/./patterns.bro @@ -57,10 +71,13 @@ scripts/base/init-default.bro scripts/base/frameworks/intel/./main.bro scripts/base/frameworks/reporter/__load__.bro scripts/base/frameworks/reporter/./main.bro + scripts/base/frameworks/tunnels/__load__.bro + scripts/base/frameworks/tunnels/./main.bro scripts/base/protocols/conn/__load__.bro scripts/base/protocols/conn/./main.bro scripts/base/protocols/conn/./contents.bro scripts/base/protocols/conn/./inactivity.bro + scripts/base/protocols/conn/./polling.bro scripts/base/protocols/dns/__load__.bro scripts/base/protocols/dns/./consts.bro scripts/base/protocols/dns/./main.bro @@ -68,6 +85,11 @@ scripts/base/init-default.bro scripts/base/protocols/ftp/./utils-commands.bro scripts/base/protocols/ftp/./main.bro scripts/base/protocols/ftp/./file-extract.bro + scripts/base/protocols/ftp/./gridftp.bro + scripts/base/protocols/ssl/__load__.bro + scripts/base/protocols/ssl/./consts.bro + scripts/base/protocols/ssl/./main.bro + scripts/base/protocols/ssl/./mozilla-ca-list.bro scripts/base/protocols/http/__load__.bro scripts/base/protocols/http/./main.bro scripts/base/protocols/http/./utils.bro @@ -81,13 +103,13 @@ scripts/base/init-default.bro scripts/base/protocols/smtp/./main.bro scripts/base/protocols/smtp/./entities.bro scripts/base/protocols/smtp/./entities-excerpt.bro + scripts/base/protocols/socks/__load__.bro + scripts/base/protocols/socks/./consts.bro + scripts/base/protocols/socks/./main.bro scripts/base/protocols/ssh/__load__.bro scripts/base/protocols/ssh/./main.bro - scripts/base/protocols/ssl/__load__.bro - scripts/base/protocols/ssl/./consts.bro - scripts/base/protocols/ssl/./main.bro - scripts/base/protocols/ssl/./mozilla-ca-list.bro scripts/base/protocols/syslog/__load__.bro scripts/base/protocols/syslog/./consts.bro scripts/base/protocols/syslog/./main.bro scripts/policy/misc/loaded-scripts.bro +#close 2012-07-20-14-34-40 diff --git a/testing/btest/Baseline/doc.autogen-reST-enums/autogen-reST-enums.rst b/testing/btest/Baseline/doc.autogen-reST-enums/autogen-reST-enums.rst index 519ed708d5..8bd6286c24 100644 --- a/testing/btest/Baseline/doc.autogen-reST-enums/autogen-reST-enums.rst +++ b/testing/btest/Baseline/doc.autogen-reST-enums/autogen-reST-enums.rst @@ -1,14 +1,15 @@ .. Automatically generated. Do not edit. +:tocdepth: 3 + autogen-reST-enums.bro ====================== -:download:`Original Source File ` -Overview --------- +:Source File: :download:`autogen-reST-enums.bro` + Summary ~~~~~~~ Types @@ -27,10 +28,10 @@ Redefinitions :bro:type:`TestEnum1`: :bro:type:`enum` now with a comma ======================================= ======================= -Public Interface ----------------- +Detailed Interface +~~~~~~~~~~~~~~~~~~ Types -~~~~~ +##### .. bro:type:: TestEnum1 :Type: :bro:type:`enum` @@ -74,7 +75,7 @@ Types The final comma is optional Redefinitions -~~~~~~~~~~~~~ +############# :bro:type:`TestEnum1` :Type: :bro:type:`enum` diff --git a/testing/btest/Baseline/doc.autogen-reST-example/example.rst b/testing/btest/Baseline/doc.autogen-reST-example/example.rst index b76b9af59b..bee8658e14 100644 --- a/testing/btest/Baseline/doc.autogen-reST-example/example.rst +++ b/testing/btest/Baseline/doc.autogen-reST-example/example.rst @@ -1,21 +1,32 @@ .. Automatically generated. Do not edit. +:tocdepth: 3 + example.bro =========== +.. 
bro:namespace:: Example -:download:`Original Source File ` - -Overview --------- -This is an example script that demonstrates how to document. Comments -of the form ``##!`` are for the script summary. The contents of +This is an example script that demonstrates documentation features. +Comments of the form ``##!`` are for the script summary. The contents of these comments are transferred directly into the auto-generated `reStructuredText `_ (reST) document's summary section. .. tip:: You can embed directives and roles within ``##``-stylized comments. +There's also a custom role to reference any identifier node in +the Bro Sphinx domain that's good for "see alsos", e.g. + +See also: :bro:see:`Example::a_var`, :bro:see:`Example::ONE`, +:bro:see:`SSH::Info` + +And a custom directive does the equivalent references: + +.. bro:see:: Example::a_var Example::ONE SSH::Info + +:Namespace: ``Example`` :Imports: :doc:`policy/frameworks/software/vulnerable ` +:Source File: :download:`example.bro` Summary ~~~~~~~ @@ -24,7 +35,7 @@ Options ============================================================================ ====================================== :bro:id:`Example::an_option`: :bro:type:`set` :bro:attr:`&redef` add documentation for "an_option" here -:bro:id:`Example::option_with_init`: :bro:type:`interval` :bro:attr:`&redef` +:bro:id:`Example::option_with_init`: :bro:type:`interval` :bro:attr:`&redef` More docs can be added here. ============================================================================ ====================================== State Variables @@ -76,12 +87,8 @@ Redefinitions :bro:type:`Example::SimpleRecord`: :bro:type:`record` document the record extension redef here ===================================================== ======================================== -Namespaces -~~~~~~~~~~ -.. bro:namespace:: Example - Notices -~~~~~~~ +####### :bro:type:`Notice::Type` :Type: :bro:type:`enum` @@ -100,10 +107,32 @@ Notices .. bro:enum:: Example::Notice_Four Notice::Type -Public Interface ----------------- +Configuration Changes +##################### +Port Analysis +^^^^^^^^^^^^^ +Loading this script makes the following changes to :bro:see:`dpd_config`. + +SSL:: + + [ports={ + 443/tcp, + 562/tcp + }] + +Packet Filter +^^^^^^^^^^^^^ +Loading this script makes the following changes to :bro:see:`capture_filters`. + +Filters added:: + + [ssl] = tcp port 443, + [nntps] = tcp port 562 + +Detailed Interface +~~~~~~~~~~~~~~~~~~ Options -~~~~~~~ +####### .. bro:id:: Example::an_option :Type: :bro:type:`set` [:bro:type:`addr`, :bro:type:`addr`, :bro:type:`string`] @@ -118,8 +147,10 @@ Options :Attributes: :bro:attr:`&redef` :Default: ``10.0 msecs`` + More docs can be added here. + State Variables -~~~~~~~~~~~~~~~ +############### .. bro:id:: Example::a_var :Type: :bro:type:`bool` @@ -137,7 +168,7 @@ State Variables :Default: ``"this works"`` Types -~~~~~ +##### .. bro:type:: Example::SimpleEnum :Type: :bro:type:`enum` @@ -200,13 +231,14 @@ Types An example record to be used with a logging stream. Events -~~~~~~ +###### .. bro:id:: Example::an_event :Type: :bro:type:`event` (name: :bro:type:`string`) Summarize "an_event" here. Give more details about "an_event" here. + Example::an_event should not be confused as a parameter. :param name: describe the argument here @@ -218,7 +250,7 @@ Events logging streams and is raised once for each log entry. Functions -~~~~~~~~~ +######### .. 
bro:id:: Example::a_function :Type: :bro:type:`function` (tag: :bro:type:`string`, msg: :bro:type:`string`) : :bro:type:`string` @@ -238,7 +270,7 @@ Functions :returns: describe the return type here Redefinitions -~~~~~~~~~~~~~ +############# :bro:type:`Log::ID` :Type: :bro:type:`enum` @@ -269,23 +301,3 @@ Redefinitions document the record extension redef here -Port Analysis -------------- -:ref:`More Information ` - -SSL:: - - [ports={ - 443/tcp, - 562/tcp - }] - -Packet Filter -------------- -:ref:`More Information ` - -Filters added:: - - [ssl] = tcp port 443, - [nntps] = tcp port 562 - diff --git a/testing/btest/Baseline/doc.autogen-reST-func-params/autogen-reST-func-params.rst b/testing/btest/Baseline/doc.autogen-reST-func-params/autogen-reST-func-params.rst index 4de4970c9e..15526f12c7 100644 --- a/testing/btest/Baseline/doc.autogen-reST-func-params/autogen-reST-func-params.rst +++ b/testing/btest/Baseline/doc.autogen-reST-func-params/autogen-reST-func-params.rst @@ -1,14 +1,15 @@ .. Automatically generated. Do not edit. +:tocdepth: 3 + autogen-reST-func-params.bro ============================ -:download:`Original Source File ` -Overview --------- +:Source File: :download:`autogen-reST-func-params.bro` + Summary ~~~~~~~ Types @@ -23,10 +24,10 @@ Functions :bro:id:`test_func`: :bro:type:`func` This is a global function declaration. ===================================== ====================================== -Public Interface ----------------- +Detailed Interface +~~~~~~~~~~~~~~~~~~ Types -~~~~~ +##### .. bro:type:: test_rec :Type: :bro:type:`record` @@ -40,7 +41,7 @@ Types :returns: A string. Functions -~~~~~~~~~ +######### .. bro:id:: test_func :Type: :bro:type:`function` (i: :bro:type:`int`, j: :bro:type:`int`) : :bro:type:`string` diff --git a/testing/btest/Baseline/doc.autogen-reST-records/autogen-reST-records.rst b/testing/btest/Baseline/doc.autogen-reST-records/autogen-reST-records.rst index f43232f5ea..0344fa265c 100644 --- a/testing/btest/Baseline/doc.autogen-reST-records/autogen-reST-records.rst +++ b/testing/btest/Baseline/doc.autogen-reST-records/autogen-reST-records.rst @@ -1,14 +1,15 @@ .. Automatically generated. Do not edit. +:tocdepth: 3 + autogen-reST-records.bro ======================== -:download:`Original Source File ` -Overview --------- +:Source File: :download:`autogen-reST-records.bro` + Summary ~~~~~~~ Types @@ -19,10 +20,10 @@ Types :bro:type:`TestRecord`: :bro:type:`record` Here's the ways records and record fields can be documented. ============================================ ============================================================ -Public Interface ----------------- +Detailed Interface +~~~~~~~~~~~~~~~~~~ Types -~~~~~ +##### .. bro:type:: SimpleRecord :Type: :bro:type:`record` diff --git a/testing/btest/Baseline/doc.autogen-reST-type-aliases/autogen-reST-type-aliases.rst b/testing/btest/Baseline/doc.autogen-reST-type-aliases/autogen-reST-type-aliases.rst index b24d7a01ab..96a3b9377d 100644 --- a/testing/btest/Baseline/doc.autogen-reST-type-aliases/autogen-reST-type-aliases.rst +++ b/testing/btest/Baseline/doc.autogen-reST-type-aliases/autogen-reST-type-aliases.rst @@ -1,14 +1,15 @@ .. Automatically generated. Do not edit. +:tocdepth: 3 + autogen-reST-type-aliases.bro ============================= -:download:`Original Source File ` -Overview --------- +:Source File: :download:`autogen-reST-type-aliases.bro` + Summary ~~~~~~~ State Variables @@ -28,10 +29,10 @@ Types so this type just creates a cross reference to ``bool``. 
============================================ ========================================================================== -Public Interface ----------------- +Detailed Interface +~~~~~~~~~~~~~~~~~~ State Variables -~~~~~~~~~~~~~~~ +############### .. bro:id:: a :Type: :bro:type:`TypeAlias` @@ -45,7 +46,7 @@ State Variables And this should reference a type of ``OtherTypeAlias``. Types -~~~~~ +##### .. bro:type:: TypeAlias :Type: :bro:type:`bool` diff --git a/testing/btest/Baseline/istate.bro-ipv6-socket/recv..stdout b/testing/btest/Baseline/istate.bro-ipv6-socket/recv..stdout new file mode 100644 index 0000000000..673af68234 --- /dev/null +++ b/testing/btest/Baseline/istate.bro-ipv6-socket/recv..stdout @@ -0,0 +1 @@ +handshake done with peer: ::1 diff --git a/testing/btest/Baseline/istate.bro-ipv6-socket/send..stdout b/testing/btest/Baseline/istate.bro-ipv6-socket/send..stdout new file mode 100644 index 0000000000..fbc855464d --- /dev/null +++ b/testing/btest/Baseline/istate.bro-ipv6-socket/send..stdout @@ -0,0 +1,2 @@ +handshake done with peer: ::1 +my_event: hello world diff --git a/testing/btest/Baseline/istate.broccoli-ipv6-socket/bro..stdout b/testing/btest/Baseline/istate.broccoli-ipv6-socket/bro..stdout new file mode 100644 index 0000000000..0a7bac52c5 --- /dev/null +++ b/testing/btest/Baseline/istate.broccoli-ipv6-socket/bro..stdout @@ -0,0 +1,9 @@ +handshake done with peer +bro_addr(1.2.3.4) +bro_subnet(10.0.0.0/16) +bro_addr(2607:f8b0:4009:802::1014) +bro_subnet(2607:f8b0::/32) +broccoli_addr(1.2.3.4) +broccoli_subnet(10.0.0.0/16) +broccoli_addr(2607:f8b0:4009:802::1014) +broccoli_subnet(2607:f8b0::/32) diff --git a/testing/btest/Baseline/istate.broccoli-ipv6-socket/broccoli..stdout b/testing/btest/Baseline/istate.broccoli-ipv6-socket/broccoli..stdout new file mode 100644 index 0000000000..dba9318891 --- /dev/null +++ b/testing/btest/Baseline/istate.broccoli-ipv6-socket/broccoli..stdout @@ -0,0 +1,6 @@ +Connected to Bro instance at: ::1:47757 +Received bro_addr(1.2.3.4) +Received bro_subnet(10.0.0.0/16) +Received bro_addr(2607:f8b0:4009:802::1014) +Received bro_subnet(2607:f8b0::/32) +Terminating diff --git a/testing/btest/Baseline/istate.broccoli-ipv6/bro..stdout b/testing/btest/Baseline/istate.broccoli-ipv6/bro..stdout new file mode 100644 index 0000000000..0a7bac52c5 --- /dev/null +++ b/testing/btest/Baseline/istate.broccoli-ipv6/bro..stdout @@ -0,0 +1,9 @@ +handshake done with peer +bro_addr(1.2.3.4) +bro_subnet(10.0.0.0/16) +bro_addr(2607:f8b0:4009:802::1014) +bro_subnet(2607:f8b0::/32) +broccoli_addr(1.2.3.4) +broccoli_subnet(10.0.0.0/16) +broccoli_addr(2607:f8b0:4009:802::1014) +broccoli_subnet(2607:f8b0::/32) diff --git a/testing/btest/Baseline/istate.broccoli-ipv6/broccoli..stdout b/testing/btest/Baseline/istate.broccoli-ipv6/broccoli..stdout new file mode 100644 index 0000000000..481778c98a --- /dev/null +++ b/testing/btest/Baseline/istate.broccoli-ipv6/broccoli..stdout @@ -0,0 +1,6 @@ +Connected to Bro instance at: localhost:47757 +Received bro_addr(1.2.3.4) +Received bro_subnet(10.0.0.0/16) +Received bro_addr(2607:f8b0:4009:802::1014) +Received bro_subnet(2607:f8b0::/32) +Terminating diff --git a/testing/btest/Baseline/istate.broccoli-ssl/bro..stdout b/testing/btest/Baseline/istate.broccoli-ssl/bro..stdout new file mode 100644 index 0000000000..0a7bac52c5 --- /dev/null +++ b/testing/btest/Baseline/istate.broccoli-ssl/bro..stdout @@ -0,0 +1,9 @@ +handshake done with peer +bro_addr(1.2.3.4) +bro_subnet(10.0.0.0/16) +bro_addr(2607:f8b0:4009:802::1014) 
+bro_subnet(2607:f8b0::/32) +broccoli_addr(1.2.3.4) +broccoli_subnet(10.0.0.0/16) +broccoli_addr(2607:f8b0:4009:802::1014) +broccoli_subnet(2607:f8b0::/32) diff --git a/testing/btest/Baseline/istate.broccoli-ssl/broccoli..stdout b/testing/btest/Baseline/istate.broccoli-ssl/broccoli..stdout new file mode 100644 index 0000000000..481778c98a --- /dev/null +++ b/testing/btest/Baseline/istate.broccoli-ssl/broccoli..stdout @@ -0,0 +1,6 @@ +Connected to Bro instance at: localhost:47757 +Received bro_addr(1.2.3.4) +Received bro_subnet(10.0.0.0/16) +Received bro_addr(2607:f8b0:4009:802::1014) +Received bro_subnet(2607:f8b0::/32) +Terminating diff --git a/testing/btest/Baseline/istate.broccoli/bro.log b/testing/btest/Baseline/istate.broccoli/bro.log index eeebe944ef..70bf23f95a 100644 --- a/testing/btest/Baseline/istate.broccoli/bro.log +++ b/testing/btest/Baseline/istate.broccoli/bro.log @@ -1,3 +1,3 @@ -ping received, seq 0, 1303093042.542125 at src, 1303093042.583423 at dest, -ping received, seq 1, 1303093043.543167 at src, 1303093043.544026 at dest, -ping received, seq 2, 1303093044.544115 at src, 1303093044.545008 at dest, +ping received, seq 0, 1342749173.594568 at src, 1342749173.637317 at dest, +ping received, seq 1, 1342749174.594948 at src, 1342749174.596551 at dest, +ping received, seq 2, 1342749175.595486 at src, 1342749175.596581 at dest, diff --git a/testing/btest/Baseline/istate.events-ssl/events.rec.log b/testing/btest/Baseline/istate.events-ssl/events.rec.log new file mode 100644 index 0000000000..04993fb84a --- /dev/null +++ b/testing/btest/Baseline/istate.events-ssl/events.rec.log @@ -0,0 +1,33 @@ +http_request +http_begin_entity +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_end_entity +http_message_done +http_signature_found +http_reply +http_begin_entity +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_end_entity +http_message_done diff --git a/testing/btest/Baseline/istate.events-ssl/events.snd.log b/testing/btest/Baseline/istate.events-ssl/events.snd.log new file mode 100644 index 0000000000..04993fb84a --- /dev/null +++ b/testing/btest/Baseline/istate.events-ssl/events.snd.log @@ -0,0 +1,33 @@ +http_request +http_begin_entity +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_end_entity +http_message_done +http_signature_found +http_reply +http_begin_entity +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_end_entity +http_message_done diff --git a/testing/btest/Baseline/istate.events-ssl/receiver.http.log b/testing/btest/Baseline/istate.events-ssl/receiver.http.log index 06d453c241..3fc7f1b66f 100644 --- a/testing/btest/Baseline/istate.events-ssl/receiver.http.log +++ b/testing/btest/Baseline/istate.events-ssl/receiver.http.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2012-07-20-01-53-03 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code 
info_msg filename tags username password proxied mime_type md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string string file -1319568535.914761 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - - - - - text/html - - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1342749182.906082 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - (empty) - - - text/html - - +#close 2012-07-20-01-53-04 diff --git a/testing/btest/Baseline/istate.events-ssl/sender.http.log b/testing/btest/Baseline/istate.events-ssl/sender.http.log index 06d453c241..3fc7f1b66f 100644 --- a/testing/btest/Baseline/istate.events-ssl/sender.http.log +++ b/testing/btest/Baseline/istate.events-ssl/sender.http.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2012-07-20-01-53-03 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string string file -1319568535.914761 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - - - - - text/html - - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1342749182.906082 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - (empty) - - - text/html - - +#close 2012-07-20-01-53-04 diff --git a/testing/btest/Baseline/istate.events/events.rec.log b/testing/btest/Baseline/istate.events/events.rec.log new file mode 100644 index 0000000000..04993fb84a --- /dev/null +++ b/testing/btest/Baseline/istate.events/events.rec.log @@ -0,0 +1,33 @@ +http_request +http_begin_entity +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_end_entity +http_message_done +http_signature_found +http_reply +http_begin_entity +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_end_entity +http_message_done diff --git a/testing/btest/Baseline/istate.events/events.snd.log b/testing/btest/Baseline/istate.events/events.snd.log new file mode 100644 index 0000000000..04993fb84a --- /dev/null +++ b/testing/btest/Baseline/istate.events/events.snd.log @@ -0,0 +1,33 @@ +http_request +http_begin_entity +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_end_entity +http_message_done +http_signature_found +http_reply +http_begin_entity +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_header +http_all_headers +http_content_type +http_entity_data +http_entity_data 
+http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_entity_data +http_end_entity +http_message_done diff --git a/testing/btest/Baseline/istate.events/receiver.http.log b/testing/btest/Baseline/istate.events/receiver.http.log index d85d560b6d..6862c08b98 100644 --- a/testing/btest/Baseline/istate.events/receiver.http.log +++ b/testing/btest/Baseline/istate.events/receiver.http.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2012-07-20-01-53-12 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string string file -1319568558.542142 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - - - - - text/html - - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1342749191.765740 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - (empty) - - - text/html - - +#close 2012-07-20-01-53-13 diff --git a/testing/btest/Baseline/istate.events/sender.http.log b/testing/btest/Baseline/istate.events/sender.http.log index d85d560b6d..6862c08b98 100644 --- a/testing/btest/Baseline/istate.events/sender.http.log +++ b/testing/btest/Baseline/istate.events/sender.http.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2012-07-20-01-53-12 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string string file -1319568558.542142 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - - - - - text/html - - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1342749191.765740 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - (empty) - - - text/html - - +#close 2012-07-20-01-53-13 diff --git a/testing/btest/Baseline/istate.pybroccoli/bro..stdout b/testing/btest/Baseline/istate.pybroccoli/bro..stdout index 1e70711932..b73d342967 100644 --- a/testing/btest/Baseline/istate.pybroccoli/bro..stdout +++ b/testing/btest/Baseline/istate.pybroccoli/bro..stdout @@ -1,14 +1,16 @@ ==== atomic -10 2 -1313624487.48817 +1342749196.619505 2.0 mins F 1.5 Servus 5555/tcp 6.7.6.5 +2001:db8:85a3::8a2e:370:7334 192.168.0.0/16 +2001:db8:85a3::/48 ==== record [a=42, b=6.6.7.7] 42, 6.6.7.7 diff --git a/testing/btest/Baseline/istate.pybroccoli/python..stdout.filtered b/testing/btest/Baseline/istate.pybroccoli/python..stdout.filtered index 864a4eb627..2f2a5978d8 100644 --- a/testing/btest/Baseline/istate.pybroccoli/python..stdout.filtered +++ 
b/testing/btest/Baseline/istate.pybroccoli/python..stdout.filtered @@ -1,7 +1,7 @@ ==== atomic a 1 ==== -4L -4 42 42 -1313624487.4889 +1342749196.6624 60.0 True True 3.14 @@ -9,10 +9,12 @@ True True '12345/udp' 12345/udp '1.2.3.4' 1.2.3.4 '22.33.44.0/24' 22.33.44.0/24 +'2607:f8b0:4009:802::1014' 2607:f8b0:4009:802::1014 +'2607:f8b0::/32' 2607:f8b0::/32 ==== atomic a 2 ==== -10L -10 2 2 -1313624487.4882 +1342749196.6195 120.0 False False 1.5 @@ -20,10 +22,12 @@ False False '5555/tcp' 5555/tcp '6.7.6.5' 6.7.6.5 '192.168.0.0/16' 192.168.0.0/16 +'2001:db8:85a3::8a2e:370:7334' 2001:db8:85a3::8a2e:370:7334 +'2001:db8:85a3::/48' 2001:db8:85a3::/48 ==== atomic b 2 ==== -10L -10 2 - 1313624487.4882 + 1342749196.6195 120.0 False False 1.5 @@ -31,6 +35,8 @@ False False 5555/tcp 6.7.6.5 192.168.0.0/16 + 2001:db8:85a3::8a2e:370:7334 + 2001:db8:85a3::/48 ==== record 1 ==== 42L 42 diff --git a/testing/btest/Baseline/language.addr/out b/testing/btest/Baseline/language.addr/out new file mode 100644 index 0000000000..b04aac5ce3 --- /dev/null +++ b/testing/btest/Baseline/language.addr/out @@ -0,0 +1,15 @@ +IPv4 address inequality (PASS) +IPv4 address equality (PASS) +IPv4 address comparison (PASS) +IPv4 address comparison (PASS) +size of IPv4 address (PASS) +IPv4 address type inference (PASS) +IPv6 address inequality (PASS) +IPv6 address equality (PASS) +IPv6 address equality (PASS) +IPv6 address comparison (PASS) +IPv6 address comparison (PASS) +IPv6 address not case-sensitive (PASS) +size of IPv6 address (PASS) +IPv6 address type inference (PASS) +IPv4 and IPv6 address inequality (PASS) diff --git a/testing/btest/Baseline/language.any/out b/testing/btest/Baseline/language.any/out new file mode 100644 index 0000000000..4072ce3745 --- /dev/null +++ b/testing/btest/Baseline/language.any/out @@ -0,0 +1,14 @@ +count (PASS) +string (PASS) +pattern (PASS) +bool (PASS) +string (PASS) +count (PASS) +int (PASS) +double (PASS) +pattern (PASS) +addr (PASS) +addr (PASS) +subnet (PASS) +subnet (PASS) +port (PASS) diff --git a/testing/btest/Baseline/language.at-if/out b/testing/btest/Baseline/language.at-if/out new file mode 100644 index 0000000000..b63cbbb714 --- /dev/null +++ b/testing/btest/Baseline/language.at-if/out @@ -0,0 +1,3 @@ +@if (PASS) +@if...@else (PASS) +@if...@else (PASS) diff --git a/testing/btest/Baseline/language.at-ifdef/out b/testing/btest/Baseline/language.at-ifdef/out new file mode 100644 index 0000000000..644a42d407 --- /dev/null +++ b/testing/btest/Baseline/language.at-ifdef/out @@ -0,0 +1,3 @@ +@ifdef (PASS) +@ifdef...@else (PASS) +@ifdef...@else (PASS) diff --git a/testing/btest/Baseline/language.at-ifndef/out b/testing/btest/Baseline/language.at-ifndef/out new file mode 100644 index 0000000000..70abba9b3f --- /dev/null +++ b/testing/btest/Baseline/language.at-ifndef/out @@ -0,0 +1,3 @@ +@ifndef (PASS) +@ifndef...@else (PASS) +@ifndef...@else (PASS) diff --git a/testing/btest/Baseline/language.at-load/out b/testing/btest/Baseline/language.at-load/out new file mode 100644 index 0000000000..5b011543b5 --- /dev/null +++ b/testing/btest/Baseline/language.at-load/out @@ -0,0 +1,4 @@ +function (PASS) +global variable (PASS) +const (PASS) +event (PASS) diff --git a/testing/btest/Baseline/language.bool/out b/testing/btest/Baseline/language.bool/out new file mode 100644 index 0000000000..9e4c6c3d6e --- /dev/null +++ b/testing/btest/Baseline/language.bool/out @@ -0,0 +1,9 @@ +equality operator (PASS) +inequality operator (PASS) +logical or operator (PASS) +logical and operator (PASS) +negation operator 
(PASS) +absolute value (PASS) +absolute value (PASS) +type inference (PASS) +type inference (PASS) diff --git a/testing/btest/Baseline/language.conditional-expression/out b/testing/btest/Baseline/language.conditional-expression/out new file mode 100644 index 0000000000..0dcbdbd7c7 --- /dev/null +++ b/testing/btest/Baseline/language.conditional-expression/out @@ -0,0 +1,7 @@ +true condition (PASS) +false condition (PASS) +true condition (PASS) +false condition (PASS) +associativity (PASS) +associativity (PASS) +associativity (PASS) diff --git a/testing/btest/Baseline/language.copy/out b/testing/btest/Baseline/language.copy/out new file mode 100644 index 0000000000..675d38aa5d --- /dev/null +++ b/testing/btest/Baseline/language.copy/out @@ -0,0 +1,2 @@ +direct assignment (PASS) +using copy (PASS) diff --git a/testing/btest/Baseline/language.count/out b/testing/btest/Baseline/language.count/out new file mode 100644 index 0000000000..4ef65b6098 --- /dev/null +++ b/testing/btest/Baseline/language.count/out @@ -0,0 +1,18 @@ +type inference (PASS) +counter alias (PASS) +hexadecimal (PASS) +inequality operator (PASS) +relational operator (PASS) +relational operator (PASS) +relational operator (PASS) +relational operator (PASS) +absolute value (PASS) +absolute value (PASS) +pre-increment operator (PASS) +pre-decrement operator (PASS) +modulus operator (PASS) +division operator (PASS) +assignment operator (PASS) +assignment operator (PASS) +max count value = 18446744073709551615 (PASS) +max count value = 18446744073709551615 (PASS) diff --git a/testing/btest/Baseline/language.cross-product-init/output b/testing/btest/Baseline/language.cross-product-init/output index 95794f9179..fcb5e43e67 100644 --- a/testing/btest/Baseline/language.cross-product-init/output +++ b/testing/btest/Baseline/language.cross-product-init/output @@ -1,6 +1,6 @@ { -[bar, 1.2.0.0/19] , -[foo, 5.6.0.0/21] , +[foo, 1.2.0.0/19] , [bar, 5.6.0.0/21] , -[foo, 1.2.0.0/19] +[foo, 5.6.0.0/21] , +[bar, 1.2.0.0/19] } diff --git a/testing/btest/Baseline/language.double/out b/testing/btest/Baseline/language.double/out new file mode 100644 index 0000000000..3f70635588 --- /dev/null +++ b/testing/btest/Baseline/language.double/out @@ -0,0 +1,28 @@ +type inference (PASS) +type inference (PASS) +type inference (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +double representations (PASS) +inequality operator (PASS) +absolute value (PASS) +assignment operator (PASS) +assignment operator (PASS) +relational operator (PASS) +relational operator (PASS) +relational operator (PASS) +relational operator (PASS) +division operator (PASS) +max double value = 1.7976931348623157e+308 (PASS) diff --git a/testing/btest/Baseline/language.enum/out b/testing/btest/Baseline/language.enum/out new file mode 100644 index 0000000000..1bafdd73b0 --- /dev/null +++ b/testing/btest/Baseline/language.enum/out @@ -0,0 +1,4 @@ +enum equality comparison (PASS) +enum equality comparison (PASS) +enum equality comparison (PASS) +type inference (PASS) diff --git a/testing/btest/Baseline/language.event/out b/testing/btest/Baseline/language.event/out new file mode 100644 index 
0000000000..d5a22b3745 --- /dev/null +++ b/testing/btest/Baseline/language.event/out @@ -0,0 +1,4 @@ +event statement +event part1 +event part2 +schedule statement diff --git a/testing/btest/Baseline/language.expire_func/output b/testing/btest/Baseline/language.expire_func/output new file mode 100644 index 0000000000..91cd2bad16 --- /dev/null +++ b/testing/btest/Baseline/language.expire_func/output @@ -0,0 +1,378 @@ +{ +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +here, +am +} +{ +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +[orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp], +here, +am +} +{ +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +[orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp], +here, +[orig_h=fe80::20c:29ff:febd:6f01, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp], +am +} +{ +[orig_h=172.16.238.131, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +[orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp], +here, +[orig_h=fe80::20c:29ff:febd:6f01, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp], +am +} +{ +[orig_h=172.16.238.131, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +[orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp], +here, +[orig_h=fe80::20c:29ff:febd:6f01, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +am +} +{ +[orig_h=172.16.238.131, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +[orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp], +here, +[orig_h=172.16.238.1, orig_p=49657/tcp, resp_h=172.16.238.131, resp_p=80/tcp], +[orig_h=fe80::20c:29ff:febd:6f01, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +am +} +{ +[orig_h=172.16.238.131, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +[orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp], +here, +[orig_h=172.16.238.1, orig_p=49657/tcp, resp_h=172.16.238.131, resp_p=80/tcp], +[orig_h=172.16.238.1, orig_p=49658/tcp, resp_h=172.16.238.131, resp_p=80/tcp], +[orig_h=fe80::20c:29ff:febd:6f01, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +am +} +{ +[orig_h=172.16.238.131, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=17500/udp, resp_h=172.16.238.255, resp_p=17500/udp], +[orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp], +i, +[orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp], +here, +[orig_h=172.16.238.1, orig_p=49657/tcp, resp_h=172.16.238.131, resp_p=80/tcp], +[orig_h=172.16.238.1, orig_p=49658/tcp, resp_h=172.16.238.131, resp_p=80/tcp], +[orig_h=fe80::20c:29ff:febd:6f01, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp], +[orig_h=172.16.238.1, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp], +am +} +expired [orig_h=172.16.238.131, orig_p=5353/udp, 
resp_h=224.0.0.251, resp_p=5353/udp] +expired [orig_h=172.16.238.1, orig_p=17500/udp, resp_h=172.16.238.255, resp_p=17500/udp] +expired [orig_h=172.16.238.1, orig_p=49656/tcp, resp_h=172.16.238.131, resp_p=22/tcp] +expired i +expired [orig_h=172.16.238.131, orig_p=37975/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired here +expired [orig_h=172.16.238.1, orig_p=49657/tcp, resp_h=172.16.238.131, resp_p=80/tcp] +expired [orig_h=172.16.238.1, orig_p=49658/tcp, resp_h=172.16.238.131, resp_p=80/tcp] +expired [orig_h=fe80::20c:29ff:febd:6f01, orig_p=5353/udp, resp_h=ff02::fb, resp_p=5353/udp] +expired [orig_h=172.16.238.1, orig_p=5353/udp, resp_h=224.0.0.251, resp_p=5353/udp] +expired am +{ +[orig_h=172.16.238.1, orig_p=49659/tcp, resp_h=172.16.238.131, resp_p=21/tcp] +} +{ +[orig_h=172.16.238.1, orig_p=49659/tcp, resp_h=172.16.238.131, resp_p=21/tcp], +[orig_h=172.16.238.131, orig_p=45126/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +expired [orig_h=172.16.238.1, orig_p=49659/tcp, resp_h=172.16.238.131, resp_p=21/tcp] +expired [orig_h=172.16.238.131, orig_p=45126/udp, resp_h=172.16.238.2, resp_p=53/udp] +{ +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, 
resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45140/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55368/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45140/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55368/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, 
resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45140/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=53102/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55368/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=59573/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45140/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=53102/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55368/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=59573/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=52952/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45140/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=53102/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=48621/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp], +[orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=55368/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=59573/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=52952/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45140/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=53102/udp, resp_h=172.16.238.2, resp_p=53/udp], 
+[orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +expired [orig_h=172.16.238.131, orig_p=48621/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=37846/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=57272/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=55515/tcp, resp_h=74.125.225.81, resp_p=80/tcp] +expired [orig_h=172.16.238.131, orig_p=44555/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=55368/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=50205/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=59573/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=33818/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=33109/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=52952/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=45140/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=53102/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=51970/udp, resp_h=172.16.238.2, resp_p=53/udp] +expired [orig_h=172.16.238.131, orig_p=54304/udp, resp_h=172.16.238.2, resp_p=53/udp] +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, 
resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp], +[orig_h=172.16.238.131, orig_p=36682/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=46552/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp], +[orig_h=172.16.238.131, orig_p=36682/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=58367/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=46552/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp], +[orig_h=172.16.238.131, orig_p=36682/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=58367/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=46552/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=42269/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp], +[orig_h=172.16.238.131, orig_p=36682/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=58367/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56485/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=46552/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=42269/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp], +[orig_h=172.16.238.131, orig_p=36682/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=58367/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, 
resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56485/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=46552/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=42269/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=39723/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp], +[orig_h=172.16.238.131, orig_p=36682/udp, resp_h=172.16.238.2, resp_p=53/udp] +} +{ +[orig_h=172.16.238.131, orig_p=54935/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=58367/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56214/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=123/udp, resp_h=69.50.219.51, resp_p=123/udp], +[orig_h=172.16.238.131, orig_p=38118/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=56485/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=46552/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=42269/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=33624/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=37934/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=39723/udp, resp_h=172.16.238.2, resp_p=53/udp], +[orig_h=172.16.238.131, orig_p=45908/tcp, resp_h=141.142.192.39, resp_p=22/tcp], +[orig_h=172.16.238.131, orig_p=36682/udp, resp_h=172.16.238.2, resp_p=53/udp] +} diff --git a/testing/btest/Baseline/language.file/out1 b/testing/btest/Baseline/language.file/out1 new file mode 100644 index 0000000000..5ff4194027 --- /dev/null +++ b/testing/btest/Baseline/language.file/out1 @@ -0,0 +1,2 @@ +20 +12 diff --git a/testing/btest/Baseline/language.file/out2 b/testing/btest/Baseline/language.file/out2 new file mode 100644 index 0000000000..12be2d6723 --- /dev/null +++ b/testing/btest/Baseline/language.file/out2 @@ -0,0 +1 @@ +test, 123, 456 diff --git a/testing/btest/Baseline/language.for/out b/testing/btest/Baseline/language.for/out new file mode 100644 index 0000000000..dccc00ce3e --- /dev/null +++ b/testing/btest/Baseline/language.for/out @@ -0,0 +1,3 @@ +for loop (PASS) +for loop with break (PASS) +for loop with next (PASS) diff --git a/testing/btest/Baseline/language.function/out b/testing/btest/Baseline/language.function/out new file mode 100644 index 0000000000..f530024370 --- /dev/null +++ b/testing/btest/Baseline/language.function/out @@ -0,0 +1,11 @@ +no args without return value (PASS) +no args no return value, empty return (PASS) +no args with return value (PASS) +args without return value (PASS) +args with return value (PASS) +multiple args with return value (PASS) +anonymous function without args or return value (PASS) +anonymous function with return value (PASS) +anonymous function with args and return value (PASS) +assign function variable (PASS) +reassign function variable (PASS) diff --git a/testing/btest/Baseline/language.if/out b/testing/btest/Baseline/language.if/out new file mode 100644 index 0000000000..510b66b0cf --- /dev/null +++ b/testing/btest/Baseline/language.if/out @@ -0,0 +1,12 @@ +if T (PASS) +if T else (PASS) +if F else (PASS) +if T else if F (PASS) +if F else if T (PASS) +if T else if T (PASS) +if T else if F else (PASS) +if F else 
if T else (PASS) +if T else if T else (PASS) +if F else if F else (PASS) +if F else if F else if T else (PASS) +if F else if F else if F else (PASS) diff --git a/testing/btest/Baseline/language.incr-vec-expr/out b/testing/btest/Baseline/language.incr-vec-expr/out new file mode 100644 index 0000000000..b6c108a2d8 --- /dev/null +++ b/testing/btest/Baseline/language.incr-vec-expr/out @@ -0,0 +1,5 @@ +[0, 0, 0] +[a=0, b=test, c=[1, 2, 3]] +[1, 1, 1] +[a=1, b=test, c=[1, 2, 3]] +[a=1, b=test, c=[2, 3, 4]] diff --git a/testing/btest/Baseline/language.int/out b/testing/btest/Baseline/language.int/out new file mode 100644 index 0000000000..01f018acbe --- /dev/null +++ b/testing/btest/Baseline/language.int/out @@ -0,0 +1,23 @@ +type inference (PASS) +optional '+' sign (PASS) +negative vs. positive (PASS) +negative vs. positive (PASS) +hexadecimal (PASS) +hexadecimal (PASS) +hexadecimal (PASS) +relational operator (PASS) +relational operator (PASS) +relational operator (PASS) +relational operator (PASS) +absolute value (PASS) +absolute value (PASS) +pre-increment operator (PASS) +pre-decrement operator (PASS) +modulus operator (PASS) +division operator (PASS) +assignment operator (PASS) +assignment operator (PASS) +max int value = 9223372036854775807 (PASS) +min int value = -9223372036854775808 (PASS) +max int value = 9223372036854775807 (PASS) +min int value = -9223372036854775808 (PASS) diff --git a/testing/btest/Baseline/language.interval/out b/testing/btest/Baseline/language.interval/out new file mode 100644 index 0000000000..ae9ed5d74e --- /dev/null +++ b/testing/btest/Baseline/language.interval/out @@ -0,0 +1,27 @@ +type inference (PASS) +type inference (PASS) +type inference (PASS) +optional space (PASS) +plural/singular interval are same (PASS) +different units with same numeric value (PASS) +compare different time units (PASS) +compare different time units (PASS) +compare different time units (PASS) +compare different time units (PASS) +compare different time units (PASS) +compare different time units (PASS) +compare different time units (PASS) +add different time units (PASS) +subtract different time units (PASS) +absolute value (PASS) +absolute value (PASS) +assignment operator (PASS) +assignment operator (PASS) +multiplication operator (PASS) +division operator (PASS) +division operator (PASS) +relative size of units (PASS) +relative size of units (PASS) +relative size of units (PASS) +relative size of units (PASS) +relative size of units (PASS) diff --git a/testing/btest/Baseline/language.ipv6-literals/output b/testing/btest/Baseline/language.ipv6-literals/output new file mode 100644 index 0000000000..8542af7f91 --- /dev/null +++ b/testing/btest/Baseline/language.ipv6-literals/output @@ -0,0 +1,24 @@ +::1 +::ffff +::255.255.255.255 +::10.10.255.255 +1::1 +1::a +1::1:1 +1::1:a +a::a +a::1 +a::a:a +a::a:1 +a:a::a +aaaa::ffff +192.168.1.100 +ffff::c0a8:164 +::192.168.1.100 +::ffff:0:192.168.1.100 +805b:2d9d:dc28::fc57:d4c8:1fff +aaaa::bbbb +aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222 +aaaa:bbbb:cccc:dddd:eeee:ffff:1:2222 +aaaa:bbbb:cccc:dddd:eeee:ffff:0:2222 +aaaa:bbbb:cccc:dddd:eeee::2222 diff --git a/testing/btest/Baseline/language.match-test/output b/testing/btest/Baseline/language.match-test/output deleted file mode 100644 index 5ee7ba029d..0000000000 --- a/testing/btest/Baseline/language.match-test/output +++ /dev/null @@ -1,3 +0,0 @@ -default -it's big -it's really big diff --git a/testing/btest/Baseline/language.module/out b/testing/btest/Baseline/language.module/out new file mode 
100644 index 0000000000..5b011543b5 --- /dev/null +++ b/testing/btest/Baseline/language.module/out @@ -0,0 +1,4 @@ +function (PASS) +global variable (PASS) +const (PASS) +event (PASS) diff --git a/testing/btest/Baseline/language.no-module/out b/testing/btest/Baseline/language.no-module/out new file mode 100644 index 0000000000..5b011543b5 --- /dev/null +++ b/testing/btest/Baseline/language.no-module/out @@ -0,0 +1,4 @@ +function (PASS) +global variable (PASS) +const (PASS) +event (PASS) diff --git a/testing/btest/Baseline/language.null-statement/out b/testing/btest/Baseline/language.null-statement/out new file mode 100644 index 0000000000..19f86f493a --- /dev/null +++ b/testing/btest/Baseline/language.null-statement/out @@ -0,0 +1 @@ +done diff --git a/testing/btest/Baseline/language.pattern/out b/testing/btest/Baseline/language.pattern/out new file mode 100644 index 0000000000..4a5b8de670 --- /dev/null +++ b/testing/btest/Baseline/language.pattern/out @@ -0,0 +1,8 @@ +type inference (PASS) +equality operator (PASS) +equality operator (order of operands) (PASS) +inequality operator (PASS) +inequality operator (order of operands) (PASS) +in operator (PASS) +in operator (PASS) +!in operator (PASS) diff --git a/testing/btest/Baseline/language.port/out b/testing/btest/Baseline/language.port/out new file mode 100644 index 0000000000..b307388c35 --- /dev/null +++ b/testing/btest/Baseline/language.port/out @@ -0,0 +1,9 @@ +type inference (PASS) +protocol ordering (PASS) +protocol ordering (PASS) +protocol ordering (PASS) +protocol ordering (PASS) +protocol ordering (PASS) +different protocol but same numeric value (PASS) +different protocol but same numeric value (PASS) +equality operator (PASS) diff --git a/testing/btest/Baseline/language.precedence/out b/testing/btest/Baseline/language.precedence/out new file mode 100644 index 0000000000..263ca83529 --- /dev/null +++ b/testing/btest/Baseline/language.precedence/out @@ -0,0 +1,31 @@ +++ and * (PASS) +++ and * (PASS) +* and ++ (PASS) +* and % (PASS) +* and % (PASS) +* and % (PASS) +% and * (PASS) +% and * (PASS) +% and * (PASS) ++ and * (PASS) ++ and * (PASS) ++ and * (PASS) +< and + (PASS) +< and + (PASS) ++ and < (PASS) ++ and < (PASS) ++= and + (PASS) ++= and + (PASS) ++= and + (PASS) +&& and || (PASS) +&& and || (PASS) +&& and || (PASS) +|| and && (PASS) +|| and && (PASS) +|| and && (PASS) +|| and conditional operator (PASS) +|| and conditional operator (PASS) +|| and conditional operator (PASS) +conditional operator and || (PASS) +conditional operator and || (PASS) +conditional operator and || (PASS) diff --git a/testing/btest/Baseline/language.record-default-coercion/out b/testing/btest/Baseline/language.record-default-coercion/out new file mode 100644 index 0000000000..2f0e6cd17d --- /dev/null +++ b/testing/btest/Baseline/language.record-default-coercion/out @@ -0,0 +1,4 @@ +[a=13, c=13, v=[]] +0 +[a=13, c=13, v=[test]] +1 diff --git a/testing/btest/Baseline/language.record-index-complex-fields/output b/testing/btest/Baseline/language.record-index-complex-fields/output new file mode 100644 index 0000000000..21591bc210 --- /dev/null +++ b/testing/btest/Baseline/language.record-index-complex-fields/output @@ -0,0 +1,27 @@ +{ +[1.2.3.4] = { +[a=4, tags_v=[0, 1], tags_t={ +[two] = 2, +[one] = 1 +}, tags_s={ +b, +a +}] +} +} +{ +[a=13, tags_v=[, , 2, 3], tags_t={ +[four] = 4, +[five] = 5 +}, tags_s={ +d, +c +}], +[a=4, tags_v=[0, 1], tags_t={ +[two] = 2, +[one] = 1 +}, tags_s={ +b, +a +}] +} diff --git a/testing/btest/Baseline/language.set/out 
b/testing/btest/Baseline/language.set/out new file mode 100644 index 0000000000..fc157cf7d9 --- /dev/null +++ b/testing/btest/Baseline/language.set/out @@ -0,0 +1,44 @@ +type inference (PASS) +type inference (PASS) +type inference (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +iterate over set (PASS) +iterate over set (PASS) +iterate over set (PASS) +iterate over set (PASS) +add element (PASS) +in operator (PASS) +add element (PASS) +add element (PASS) +in operator (PASS) +in operator (PASS) +add element (PASS) +in operator (PASS) +add element (PASS) +in operator (PASS) +add element (PASS) +in operator (PASS) +add element (PASS) +in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) diff --git a/testing/btest/Baseline/language.short-circuit/out b/testing/btest/Baseline/language.short-circuit/out new file mode 100644 index 0000000000..c92995ea7c --- /dev/null +++ b/testing/btest/Baseline/language.short-circuit/out @@ -0,0 +1,4 @@ +&& operator (eval. both operands) (PASS) +&& operator (eval. 1st operand) (PASS) +|| operator (eval. 1st operand) (PASS) +|| operator (eval. both operands) (PASS) diff --git a/testing/btest/Baseline/language.sizeof/output b/testing/btest/Baseline/language.sizeof/output index 737a999292..43cb73f763 100644 --- a/testing/btest/Baseline/language.sizeof/output +++ b/testing/btest/Baseline/language.sizeof/output @@ -1,4 +1,5 @@ -Address 1.2.3.4: 16909060 +IPv4 Address 1.2.3.4: 32 +IPv6 Address ::1: 128 Boolean T: 1 Count 10: 10 Double -1.23: 1.230000 diff --git a/testing/btest/Baseline/language.string/out b/testing/btest/Baseline/language.string/out new file mode 100644 index 0000000000..5595445ffc --- /dev/null +++ b/testing/btest/Baseline/language.string/out @@ -0,0 +1,29 @@ +type inference (PASS) +tab escape sequence (PASS) +newline escape sequence (PASS) +double quote escape sequence (PASS) +backslash escape sequence (PASS) +1-digit hex escape sequence (PASS) +2-digit hex escape sequence (PASS) +2-digit hex escape sequence (PASS) +2-digit hex escape sequence (PASS) +3-digit octal escape sequence (PASS) +2-digit octal escape sequence (PASS) +1-digit octal escape sequence (PASS) +tab escape sequence (PASS) +tab escape sequence (PASS) +newline escape sequence (PASS) +newline escape sequence (PASS) +double quote escape sequence (PASS) +null escape sequence (PASS) +empty string (PASS) +nonempty string (PASS) +string comparison (PASS) +string comparison (PASS) +string comparison (PASS) +string comparison (PASS) +string concatenation (PASS) +string concatenation (PASS) +multi-line string initialization (PASS) +in operator (PASS) +!in operator (PASS) diff --git a/testing/btest/Baseline/language.subnet/out b/testing/btest/Baseline/language.subnet/out new file mode 100644 index 0000000000..45900a291e --- /dev/null +++ b/testing/btest/Baseline/language.subnet/out @@ -0,0 +1,12 @@ +IPv4 subnet equality (PASS) +IPv4 subnet inequality (PASS) +IPv4 subnet in operator (PASS) +IPv4 subnet !in operator (PASS) +IPv4 subnet type inference (PASS) +IPv6 subnet equality (PASS) +IPv6 subnet inequality (PASS) +IPv6 subnet in operator (PASS) +IPv6 subnet !in operator (PASS) +IPv6 subnet 
type inference (PASS) +IPv4 and IPv6 subnet inequality (PASS) +IPv4 address and IPv6 subnet (PASS) diff --git a/testing/btest/Baseline/language.table-init/output b/testing/btest/Baseline/language.table-init/output new file mode 100644 index 0000000000..0272e12319 --- /dev/null +++ b/testing/btest/Baseline/language.table-init/output @@ -0,0 +1,10 @@ +{ +[2] = two, +[1] = one +} +global table default +{ +[4] = four, +[3] = three +} +local table default diff --git a/testing/btest/Baseline/language.table/out b/testing/btest/Baseline/language.table/out new file mode 100644 index 0000000000..514cb6b02d --- /dev/null +++ b/testing/btest/Baseline/language.table/out @@ -0,0 +1,42 @@ +type inference (PASS) +type inference (PASS) +type inference (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +iterate over table (PASS) +iterate over table (PASS) +iterate over table (PASS) +iterate over table (PASS) +iterate over table (PASS) +overwrite element (PASS) +add element (PASS) +in operator (PASS) +add element (PASS) +add element (PASS) +in operator (PASS) +in operator (PASS) +add element (PASS) +in operator (PASS) +add element (PASS) +in operator (PASS) +add element (PASS) +in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) +remove element (PASS) +!in operator (PASS) diff --git a/testing/btest/Baseline/language.time/out b/testing/btest/Baseline/language.time/out new file mode 100644 index 0000000000..5e1c8e6b26 --- /dev/null +++ b/testing/btest/Baseline/language.time/out @@ -0,0 +1,7 @@ +type inference (PASS) +add interval (PASS) +subtract interval (PASS) +inequality (PASS) +equality (PASS) +subtract time (PASS) +size operator (PASS) diff --git a/testing/btest/Baseline/language.timeout/out b/testing/btest/Baseline/language.timeout/out new file mode 100644 index 0000000000..790851a6bb --- /dev/null +++ b/testing/btest/Baseline/language.timeout/out @@ -0,0 +1 @@ +timeout diff --git a/testing/btest/Baseline/language.vector/out b/testing/btest/Baseline/language.vector/out new file mode 100644 index 0000000000..0aa3ab0a8f --- /dev/null +++ b/testing/btest/Baseline/language.vector/out @@ -0,0 +1,59 @@ +type inference (PASS) +type inference (PASS) +type inference (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +cardinality (PASS) +zero-based indexing (PASS) +iterate over vector (PASS) +iterate over vector (PASS) +iterate over vector (PASS) +add element (PASS) +access element (PASS) +add element (PASS) +add element (PASS) +access element (PASS) +access element (PASS) +add element (PASS) +access element (PASS) +add element (PASS) +access element (PASS) +add element (PASS) +access element (PASS) +add element (PASS) +access element (PASS) +overwrite element (PASS) +access element (PASS) +overwrite element (PASS) +access element (PASS) +access element (PASS) +overwrite element (PASS) +access element (PASS) +overwrite element (PASS) +access element (PASS) +overwrite element (PASS) +access element (PASS) +overwrite element (PASS) +access element 
(PASS) +++ operator (PASS) +-- operator (PASS) ++ operator (PASS) +- operator (PASS) +* operator (PASS) +/ operator (PASS) +% operator (PASS) +&& operator (PASS) +|| operator (PASS) diff --git a/testing/btest/Baseline/language.when/out b/testing/btest/Baseline/language.when/out new file mode 100644 index 0000000000..3a052217ab --- /dev/null +++ b/testing/btest/Baseline/language.when/out @@ -0,0 +1,2 @@ +done +lookup successful diff --git a/testing/btest/Baseline/language.wrong-delete-field/output b/testing/btest/Baseline/language.wrong-delete-field/output index f8271e43c2..1eefa1d2fe 100644 --- a/testing/btest/Baseline/language.wrong-delete-field/output +++ b/testing/btest/Baseline/language.wrong-delete-field/output @@ -1 +1 @@ -error in /da/home/robin/bro/seth/testing/btest/.tmp/language.wrong-delete-field/wrong-delete-field.bro, line 10: illegal delete statement (delete x$a) +error in /da/home/robin/bro/master/testing/btest/.tmp/language.wrong-delete-field/wrong-delete-field.bro, line 10: illegal delete statement (delete x$a) diff --git a/testing/btest/Baseline/scripts.base.frameworks.communication.communication_log_baseline/send.log b/testing/btest/Baseline/scripts.base.frameworks.communication.communication_log_baseline/send.log new file mode 100644 index 0000000000..c6a19029b6 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.communication.communication_log_baseline/send.log @@ -0,0 +1,24 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path communication +#open 2012-07-20-01-49-40 +#fields ts peer src_name connected_peer_desc connected_peer_addr connected_peer_port level message +#types time string string string addr port string string +1342748980.737451 bro parent - - - info [#1/127.0.0.1:47757] added peer +1342748980.747149 bro child - - - info [#1/127.0.0.1:47757] connected +1342748980.748489 bro parent - - - info [#1/127.0.0.1:47757] peer connected +1342748980.748489 bro parent - - - info [#1/127.0.0.1:47757] phase: version +1342748980.750749 bro script - - - info connection established +1342748980.750749 bro script - - - info requesting events matching /^?(NOTHING)$?/ +1342748980.750749 bro script - - - info accepting state +1342748980.752225 bro parent - - - info [#1/127.0.0.1:47757] phase: handshake +1342748980.752225 bro parent - - - info warning: no events to request +1342748980.753384 bro parent - - - info [#1/127.0.0.1:47757] peer_description is bro +1342748980.793108 bro parent - - - info [#1/127.0.0.1:47757] peer supports keep-in-cache; using that +1342748980.793108 bro parent - - - info [#1/127.0.0.1:47757] phase: running +1342748980.793108 bro parent - - - info terminating... 
+1342748980.796454 bro child - - - info terminating +1342748980.797536 bro parent - - - info [#1/127.0.0.1:47757] closing connection +#close 2012-07-20-01-49-40 diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.basic/out b/testing/btest/Baseline/scripts.base.frameworks.input.basic/out new file mode 100644 index 0000000000..c456298062 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.basic/out @@ -0,0 +1,15 @@ +{ +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, ns=4242, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +4242 diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.bignumber/out b/testing/btest/Baseline/scripts.base.frameworks.input.bignumber/out new file mode 100644 index 0000000000..8b95ed8b19 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.bignumber/out @@ -0,0 +1,4 @@ +{ +[9223372036854775800] = [c=18446744073709551612], +[-9223372036854775800] = [c=18446744073709551612] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.binary/out b/testing/btest/Baseline/scripts.base.frameworks.input.binary/out new file mode 100644 index 0000000000..deab902925 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.binary/out @@ -0,0 +1,6 @@ +abc^J\xffdef +DATA2 +abc|\xffdef +DATA2 +abc\xff|def +DATA2 diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.empty-values-hashing/out b/testing/btest/Baseline/scripts.base.frameworks.input.empty-values-hashing/out new file mode 100644 index 0000000000..474ef45cc2 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.empty-values-hashing/out @@ -0,0 +1,155 @@ +============PREDICATE============ +Input::EVENT_NEW +[i=1] +[s=, ss=TEST] +============PREDICATE============ +Input::EVENT_NEW +[i=2] +[s=, ss=] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[2] = [s=, ss=], +[1] = [s=, ss=TEST] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=1] +Right +[s=, ss=TEST] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[2] = [s=, ss=], +[1] = [s=, ss=TEST] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=2] +Right +[s=, ss=] +==========SERVERS============ +{ +[2] = [s=, ss=], +[1] = [s=, ss=TEST] +} +============PREDICATE============ 
+Input::EVENT_CHANGED +[i=1] +[s=TEST, ss=] +============PREDICATE============ +Input::EVENT_CHANGED +[i=2] +[s=TEST, ss=TEST] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[2] = [s=TEST, ss=TEST], +[1] = [s=TEST, ss=] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_CHANGED +Left +[i=1] +Right +[s=, ss=TEST] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[2] = [s=TEST, ss=TEST], +[1] = [s=TEST, ss=] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_CHANGED +Left +[i=2] +Right +[s=, ss=] +==========SERVERS============ +{ +[2] = [s=TEST, ss=TEST], +[1] = [s=TEST, ss=] +} +done diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.emptyvals/out b/testing/btest/Baseline/scripts.base.frameworks.input.emptyvals/out new file mode 100644 index 0000000000..f75248cf97 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.emptyvals/out @@ -0,0 +1,4 @@ +{ +[2] = [b=], +[1] = [b=T] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.event/out b/testing/btest/Baseline/scripts.base.frameworks.input.event/out new file mode 100644 index 0000000000..c3f6d1ceba --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.event/out @@ -0,0 +1,85 @@ +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::i; +print outfile, A::b; +}, config={ + +}] +Input::EVENT_NEW +1 +T +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::i; +print outfile, A::b; +}, config={ + +}] +Input::EVENT_NEW +2 +T +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::i; +print outfile, A::b; +}, config={ + +}] +Input::EVENT_NEW +3 +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::i; +print outfile, A::b; +}, config={ + +}] +Input::EVENT_NEW +4 +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, 
A::description; +print outfile, A::tpe; +print outfile, A::i; +print outfile, A::b; +}, config={ + +}] +Input::EVENT_NEW +5 +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::i; +print outfile, A::b; +}, config={ + +}] +Input::EVENT_NEW +6 +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::i; +print outfile, A::b; +}, config={ + +}] +Input::EVENT_NEW +7 +T +End-of-data diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.executeraw/out b/testing/btest/Baseline/scripts.base.frameworks.input.executeraw/out new file mode 100644 index 0000000000..e08ca8ba08 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.executeraw/out @@ -0,0 +1,12 @@ +[source=wc -l ../input.log |, reader=Input::READER_RAW, mode=Input::MANUAL, name=input, fields=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, s; +close(outfile); +terminate(); +}, config={ + +}] +Input::EVENT_NEW +8 ../input.log diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.invalidnumbers/.stderrwithoutfirstline b/testing/btest/Baseline/scripts.base.frameworks.input.invalidnumbers/.stderrwithoutfirstline new file mode 100644 index 0000000000..3ef51e40f2 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.invalidnumbers/.stderrwithoutfirstline @@ -0,0 +1,8 @@ +error: ../input.log/Input::READER_ASCII: Number '12129223372036854775800' out of supported range. +error: ../input.log/Input::READER_ASCII: Could not convert line '12129223372036854775800 121218446744073709551612' to Val. Ignoring line. +warning: ../input.log/Input::READER_ASCII: Number '9223372036854775801TEXTHERE' contained non-numeric trailing characters. Ignored trailing characters 'TEXTHERE' +warning: ../input.log/Input::READER_ASCII: Number '1Justtext' contained non-numeric trailing characters. Ignored trailing characters 'Justtext' +error: ../input.log/Input::READER_ASCII: String 'Justtext' contained no parseable number +error: ../input.log/Input::READER_ASCII: Could not convert line 'Justtext 1' to Val. Ignoring line. +received termination signal +>>> diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.invalidnumbers/out b/testing/btest/Baseline/scripts.base.frameworks.input.invalidnumbers/out new file mode 100644 index 0000000000..56b2736006 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.invalidnumbers/out @@ -0,0 +1,4 @@ +{ +[9223372036854775800] = [c=4], +[9223372036854775801] = [c=1] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.missing-file/bro..stderr b/testing/btest/Baseline/scripts.base.frameworks.input.missing-file/bro..stderr new file mode 100644 index 0000000000..4380007b93 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.missing-file/bro..stderr @@ -0,0 +1,5 @@ +error: does-not-exist.dat/Input::READER_ASCII: Init: cannot open does-not-exist.dat +error: does-not-exist.dat/Input::READER_ASCII: Init failed +warning: Stream input is already queued for removal. Ignoring remove. 
+error: does-not-exist.dat/Input::READER_ASCII: terminating thread +received termination signal diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.onecolumn-norecord/out b/testing/btest/Baseline/scripts.base.frameworks.input.onecolumn-norecord/out new file mode 100644 index 0000000000..bbce48f4f6 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.onecolumn-norecord/out @@ -0,0 +1,3 @@ +{ +[-42] = T +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.onecolumn-record/out b/testing/btest/Baseline/scripts.base.frameworks.input.onecolumn-record/out new file mode 100644 index 0000000000..3f9af35c59 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.onecolumn-record/out @@ -0,0 +1,3 @@ +{ +[-42] = [b=T] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.optional/out b/testing/btest/Baseline/scripts.base.frameworks.input.optional/out new file mode 100644 index 0000000000..7a304fc918 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.optional/out @@ -0,0 +1,9 @@ +{ +[2] = [b=T, notb=F], +[4] = [b=F, notb=T], +[6] = [b=F, notb=T], +[7] = [b=T, notb=F], +[1] = [b=T, notb=F], +[5] = [b=F, notb=T], +[3] = [b=F, notb=T] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.port/out b/testing/btest/Baseline/scripts.base.frameworks.input.port/out new file mode 100644 index 0000000000..858551aa2f --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.port/out @@ -0,0 +1,3 @@ +[p=80/tcp] +[p=52/udp] +[p=30/unknown] diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.predicate-stream/out b/testing/btest/Baseline/scripts.base.frameworks.input.predicate-stream/out new file mode 100644 index 0000000000..d805f804d8 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.predicate-stream/out @@ -0,0 +1,7 @@ +VALID +VALID +VALID +VALID +VALID +VALID +VALID diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.predicate/out b/testing/btest/Baseline/scripts.base.frameworks.input.predicate/out new file mode 100644 index 0000000000..d805f804d8 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.predicate/out @@ -0,0 +1,7 @@ +VALID +VALID +VALID +VALID +VALID +VALID +VALID diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.predicatemodify/out b/testing/btest/Baseline/scripts.base.frameworks.input.predicatemodify/out new file mode 100644 index 0000000000..c648e63710 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.predicatemodify/out @@ -0,0 +1,4 @@ +{ +[2, idxmodified] = [b=T, s=test2], +[1, idx1] = [b=T, s=testmodified] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.predicatemodifyandreread/out b/testing/btest/Baseline/scripts.base.frameworks.input.predicatemodifyandreread/out new file mode 100644 index 0000000000..0adccc1856 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.predicatemodifyandreread/out @@ -0,0 +1,23 @@ +Update_finished for input, try 1 +{ +[2, idxmodified] = [b=T, s=test2], +[1, idx1] = [b=T, s=testmodified] +} +Update_finished for input, try 2 +{ +[2, idxmodified] = [b=T, s=test2], +[1, idx1] = [b=F, s=testmodified] +} +Update_finished for input, try 3 +{ +[2, idxmodified] = [b=F, s=test2], +[1, idx1] = [b=F, s=testmodified] +} +Update_finished for input, try 4 +{ +[2, idxmodified] = [b=F, s=test2] +} +Update_finished for input, try 5 +{ +[1, idx1] = [b=T, s=testmodified] +} diff --git 
a/testing/btest/Baseline/scripts.base.frameworks.input.predicaterefusesecondsamerecord/out b/testing/btest/Baseline/scripts.base.frameworks.input.predicaterefusesecondsamerecord/out new file mode 100644 index 0000000000..f752ff451a --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.predicaterefusesecondsamerecord/out @@ -0,0 +1,3 @@ +{ +[1.228.83.33] = [asn=9318 HANARO-AS Hanaro Telecom Inc., severity=medium, confidence=95, detecttime=1342569600.0] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.raw/out b/testing/btest/Baseline/scripts.base.frameworks.input.raw/out new file mode 100644 index 0000000000..fa3625ca74 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.raw/out @@ -0,0 +1,136 @@ +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +q3r3057fdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdfs\d +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW + +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +dfsdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (8 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +3rw43wRRERLlL#RWERERERE. 
diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.repeat/out b/testing/btest/Baseline/scripts.base.frameworks.input.repeat/out new file mode 100644 index 0000000000..12a8c5f581 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.repeat/out @@ -0,0 +1,160 @@ +input0 +../input.log +{ +[1] = T +} +input1 +../input.log +{ +[1] = T +} +input2 +../input.log +{ +[1] = T +} +input3 +../input.log +{ +[1] = T +} +input4 +../input.log +{ +[1] = T +} +input5 +../input.log +{ +[1] = T +} +input6 +../input.log +{ +[1] = T +} +input7 +../input.log +{ +[1] = T +} +input8 +../input.log +{ +[1] = T +} +input9 +../input.log +{ +[1] = T +} +input10 +../input.log +{ +[1] = T +} +input11 +../input.log +{ +[1] = T +} +input12 +../input.log +{ +[1] = T +} +input13 +../input.log +{ +[1] = T +} +input14 +../input.log +{ +[1] = T +} +input15 +../input.log +{ +[1] = T +} +input16 +../input.log +{ +[1] = T +} +input17 +../input.log +{ +[1] = T +} +input18 +../input.log +{ +[1] = T +} +input19 +../input.log +{ +[1] = T +} +input20 +../input.log +{ +[1] = T +} +input21 +../input.log +{ +[1] = T +} +input22 +../input.log +{ +[1] = T +} +input23 +../input.log +{ +[1] = T +} +input24 +../input.log +{ +[1] = T +} +input25 +../input.log +{ +[1] = T +} +input26 +../input.log +{ +[1] = T +} +input27 +../input.log +{ +[1] = T +} +input28 +../input.log +{ +[1] = T +} +input29 +../input.log +{ +[1] = T +} +input30 +../input.log +{ +[1] = T +} +input31 +../input.log +{ +[1] = T +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.reread/out b/testing/btest/Baseline/scripts.base.frameworks.input.reread/out new file mode 100644 index 0000000000..538a6dec18 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.reread/out @@ -0,0 +1,1508 @@ +============PREDICATE============ +Input::EVENT_NEW +[i=-42] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=-42] +Right +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +==========SERVERS============ +{ +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +============PREDICATE============ +Input::EVENT_NEW +[i=-43] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, 
t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-43] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=-43] +Right +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +==========SERVERS============ +{ +[-43] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +============PREDICATE============ +Input::EVENT_CHANGED +[i=-43] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_CHANGED +Left +[i=-43] +Right +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +==========SERVERS============ +{ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, 
a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +============PREDICATE============ +Input::EVENT_NEW +[i=-44] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_NEW +[i=-45] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_NEW +[i=-46] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_NEW +[i=-47] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_NEW +[i=-48] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-46] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-44] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-47] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-45] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, 
pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=-44] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-46] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-44] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-47] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-45] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=-45] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-46] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-44] = [b=F, e=SSH::LOG, 
c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-47] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-45] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=-46] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-46] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-44] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-47] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-45] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, 
A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=-47] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-46] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-44] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-47] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-45] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_NEW +Left +[i=-48] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +==========SERVERS============ +{ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-46] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-44] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-47] = [b=F, e=SSH::LOG, 
c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-45] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +============PREDICATE============ +Input::EVENT_REMOVED +[i=-44] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_REMOVED +[i=-42] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_REMOVED +[i=-46] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_REMOVED +[i=-47] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_REMOVED +[i=-45] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_REMOVED +[i=-43] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_REMOVED +Left +[i=-44] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 
+}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_REMOVED +Left +[i=-42] +Right +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_REMOVED +Left +[i=-46] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_REMOVED +Left +[i=-47] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print 
A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_REMOVED +Left +[i=-45] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============EVENT============ +Description +[source=../input.log, reader=Input::READER_ASCII, mode=Input::REREAD, name=ssh, destination={ +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +}, idx=, val=, want_record=T, ev=line +{ +print A::outfile, ============EVENT============; +print A::outfile, Description; +print A::outfile, A::description; +print A::outfile, Type; +print A::outfile, A::tpe; +print A::outfile, Left; +print A::outfile, A::left; +print A::outfile, Right; +print A::outfile, A::right; +}, pred=anonymous-function +{ +print A::outfile, ============PREDICATE============; +print A::outfile, A::typ; +print A::outfile, A::left; +print A::outfile, A::right; +return (T); +}, config={ + +}] +Type +Input::EVENT_REMOVED +Left +[i=-43] +Right +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +==========SERVERS============ +{ +[-48] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +done diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.rereadraw/out b/testing/btest/Baseline/scripts.base.frameworks.input.rereadraw/out new file mode 100644 index 0000000000..b7f79e5754 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.rereadraw/out @@ -0,0 +1,272 @@ +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +q3r3057fdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] 
+Input::EVENT_NEW +sdfs\d +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW + +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +dfsdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +3rw43wRRERLlL#RWERERERE. +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +q3r3057fdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdfs\d +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW + +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +dfsdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdf +[source=../input.log, 
reader=Input::READER_RAW, mode=Input::REREAD, name=input, fields=, want_record=F, ev=line +{ +print outfile, A::description; +print outfile, A::tpe; +print outfile, A::s; +try = try + 1; +if (16 == try) +{ +close(outfile); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +3rw43wRRERLlL#RWERERERE. diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.set/out b/testing/btest/Baseline/scripts.base.frameworks.input.set/out new file mode 100644 index 0000000000..998244cf3f --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.set/out @@ -0,0 +1,7 @@ +{ +192.168.17.7, +192.168.17.42, +192.168.17.14, +192.168.17.1, +192.168.17.2 +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.setseparator/out b/testing/btest/Baseline/scripts.base.frameworks.input.setseparator/out new file mode 100644 index 0000000000..d0e0f53310 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.setseparator/out @@ -0,0 +1,10 @@ +{ +[1] = [s={ +b, +e, +d, +c, +f, +a +}, ss=[1, 2, 3, 4, 5, 6]] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.setspecialcases/out b/testing/btest/Baseline/scripts.base.frameworks.input.setspecialcases/out new file mode 100644 index 0000000000..62229f7f37 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.setspecialcases/out @@ -0,0 +1,23 @@ +{ +[2] = [s={ +, +testing +}, s=[testing, , testing]], +[4] = [s={ +, +testing +}, s=[testing, ]], +[6] = [s={ + +}, s=[]], +[1] = [s={ +testing,testing,testing, +}, s=[testing,testing,testing,]], +[5] = [s={ + +}, s=[, , , ]], +[3] = [s={ +, +testing +}, s=[, testing]] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.stream/out b/testing/btest/Baseline/scripts.base.frameworks.input.stream/out new file mode 100644 index 0000000000..39b06c9092 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.stream/out @@ -0,0 +1,115 @@ +============EVENT============ +Input::EVENT_NEW +[i=-42] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============SERVERS============ +{ +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +============EVENT============ +Input::EVENT_NEW +[i=-43] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============SERVERS============ +{ +[-43] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +============EVENT============ +Input::EVENT_CHANGED +[i=-43] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============SERVERS============ +{ +[-43] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, 
s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-42] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} +done diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.streamraw/out b/testing/btest/Baseline/scripts.base.frameworks.input.streamraw/out new file mode 100644 index 0000000000..d97e09adfa --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.streamraw/out @@ -0,0 +1,153 @@ +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +q3r3057fdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdfs\d +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW + +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +dfsdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +sdf +[source=../input.log, reader=Input::READER_RAW, mode=Input::STREAM, name=input, fields=, want_record=F, ev=line +{ +print A::outfile, A::description; +print A::outfile, A::tpe; +print A::outfile, A::s; +A::try = A::try + 1; +if (8 == 
A::try) +{ +print A::outfile, done; +close(A::outfile); +Input::remove(input); +terminate(); +} + +}, config={ + +}] +Input::EVENT_NEW +3rw43wRRERLlL#RWERERERE. +done diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.subrecord-event/out b/testing/btest/Baseline/scripts.base.frameworks.input.subrecord-event/out new file mode 100644 index 0000000000..197cb54df9 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.subrecord-event/out @@ -0,0 +1,12 @@ +[sub=[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, two=[a=1.2.3.4, d=3.14]], t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.subrecord/out b/testing/btest/Baseline/scripts.base.frameworks.input.subrecord/out new file mode 100644 index 0000000000..c7e46dfacd --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.subrecord/out @@ -0,0 +1,14 @@ +{ +[-42] = [sub=[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, two=[a=1.2.3.4, d=3.14]], t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.tableevent/out b/testing/btest/Baseline/scripts.base.frameworks.input.tableevent/out new file mode 100644 index 0000000000..d76c63ef31 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.tableevent/out @@ -0,0 +1,189 @@ +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, destination={ +[2] = T, +[4] = F, +[6] = F, +[7] = T, +[1] = T, +[5] = F, +[3] = F +}, idx=, val=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, left; +print outfile, right; +try = try + 1; +if (7 == try) +{ +close(outfile); +terminate(); +} + +}, pred=, config={ + +}] +Input::EVENT_NEW +[i=1] +T +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, destination={ +[2] = T, +[4] = F, +[6] = F, +[7] = T, +[1] = T, +[5] = F, +[3] = F +}, idx=, val=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, left; +print outfile, right; +try = try + 1; +if (7 == try) +{ +close(outfile); +terminate(); +} + +}, pred=, config={ + +}] +Input::EVENT_NEW +[i=2] +T +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, destination={ +[2] = T, +[4] = F, +[6] = F, +[7] = T, +[1] = T, +[5] = F, +[3] = F +}, idx=, val=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, left; +print outfile, right; +try = try + 1; +if (7 == try) +{ +close(outfile); +terminate(); +} + +}, pred=, config={ + +}] +Input::EVENT_NEW +[i=3] +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, destination={ +[2] = T, +[4] = F, +[6] = F, +[7] = T, +[1] = T, +[5] = F, +[3] = F +}, idx=, val=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, left; +print outfile, right; +try = try + 1; +if (7 == try) +{ +close(outfile); +terminate(); +} + +}, pred=, config={ + +}] +Input::EVENT_NEW +[i=4] +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, destination={ +[2] = T, +[4] = F, +[6] = F, +[7] = T, +[1] = T, +[5] = F, +[3] = F +}, idx=, val=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, left; +print outfile, right; +try = try + 1; 
+if (7 == try) +{ +close(outfile); +terminate(); +} + +}, pred=, config={ + +}] +Input::EVENT_NEW +[i=5] +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, destination={ +[2] = T, +[4] = F, +[6] = F, +[7] = T, +[1] = T, +[5] = F, +[3] = F +}, idx=, val=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, left; +print outfile, right; +try = try + 1; +if (7 == try) +{ +close(outfile); +terminate(); +} + +}, pred=, config={ + +}] +Input::EVENT_NEW +[i=6] +F +[source=../input.log, reader=Input::READER_ASCII, mode=Input::MANUAL, name=input, destination={ +[2] = T, +[4] = F, +[6] = F, +[7] = T, +[1] = T, +[5] = F, +[3] = F +}, idx=, val=, want_record=F, ev=line +{ +print outfile, description; +print outfile, tpe; +print outfile, left; +print outfile, right; +try = try + 1; +if (7 == try) +{ +close(outfile); +terminate(); +} + +}, pred=, config={ + +}] +Input::EVENT_NEW +[i=7] +T diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.twotables/event.out b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/event.out new file mode 100644 index 0000000000..ebf210031f --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/event.out @@ -0,0 +1,4 @@ +============EVENT============ +============EVENT============ +============EVENT============ +============EVENT============ diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.twotables/fin.out b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/fin.out new file mode 100644 index 0000000000..b7e1031867 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/fin.out @@ -0,0 +1,30 @@ +==========SERVERS============ +==========SERVERS============ +==========SERVERS============ +done +{ +[-43] = [b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]], +[-44] = [b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.twotables/pred1.out b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/pred1.out new file mode 100644 index 0000000000..84d1465428 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/pred1.out @@ -0,0 +1,45 @@ +============PREDICATE============ +Input::EVENT_NEW +[i=-42] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_NEW +[i=-44] +[b=F, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +============PREDICATE============ +Input::EVENT_REMOVED +[i=-42] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.twotables/pred2.out b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/pred2.out new file mode 100644 index 0000000000..ef38fa3210 --- 
/dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.twotables/pred2.out @@ -0,0 +1,15 @@ +============PREDICATE 2============ +Input::EVENT_NEW +[i=-43] +[b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] diff --git a/testing/btest/Baseline/scripts.base.frameworks.input.unsupported_types/out b/testing/btest/Baseline/scripts.base.frameworks.input.unsupported_types/out new file mode 100644 index 0000000000..7ef82cf368 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.input.unsupported_types/out @@ -0,0 +1,14 @@ +{ +[-42] = [fi=, b=T, e=SSH::LOG, c=21, p=123/unknown, sn=10.0.0.0/24, a=1.2.3.4, d=3.14, t=1315801931.273616, iv=100.0, s=hurz, sc={ +2, +4, +1, +3 +}, ss={ +CC, +AA, +BB +}, se={ + +}, vc=[10, 20, 30], ve=[]] +} diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.adapt-filter/ssh-new-default.log b/testing/btest/Baseline/scripts.base.frameworks.logging.adapt-filter/ssh-new-default.log index fc2c133dc6..655d9a5fbd 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.adapt-filter/ssh-new-default.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.adapt-filter/ssh-new-default.log @@ -1,6 +1,11 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh-new-default +#open 2012-07-20-01-49-19 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167052.603186 1.2.3.4 1234 2.3.4.5 80 success unknown -1315167052.603186 1.2.3.4 1234 2.3.4.5 80 failure US +1342748959.430282 1.2.3.4 1234 2.3.4.5 80 success unknown +1342748959.430282 1.2.3.4 1234 2.3.4.5 80 failure US +#close 2012-07-20-01-49-19 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-binary/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-binary/ssh.log index b236cb818b..b2528467a1 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-binary/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-binary/ssh.log @@ -1,7 +1,12 @@ -#separator \x7c +#separator | +#set_separator|, +#empty_field|(empty) +#unset_field|- #path|ssh +#open|2012-07-20-01-49-19 #fields|data|data2 #types|string|string abc\x0a\xffdef|DATA2 abc\x7c\xffdef|DATA2 abc\xff\x7cdef|DATA2 +#close|2012-07-20-01-49-19 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-empty/ssh-filtered.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-empty/ssh-filtered.log new file mode 100644 index 0000000000..b6e4889a21 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-empty/ssh-filtered.log @@ -0,0 +1,12 @@ +PREFIX<>separator | +PREFIX<>set_separator|, +PREFIX<>empty_field|EMPTY +PREFIX<>unset_field|NOT-SET +PREFIX<>path|ssh +PREFIX<>fields|t|id.orig_h|id.orig_p|id.resp_h|id.resp_p|status|country|b +PREFIX<>types|time|addr|port|addr|port|string|string|bool +1342748959.659721|1.2.3.4|1234|2.3.4.5|80|success|unknown|NOT-SET +1342748959.659721|1.2.3.4|1234|2.3.4.5|80|NOT-SET|US|NOT-SET +1342748959.659721|1.2.3.4|1234|2.3.4.5|80|failure|UK|NOT-SET +1342748959.659721|1.2.3.4|1234|2.3.4.5|80|NOT-SET|BR|NOT-SET +1342748959.659721|1.2.3.4|1234|2.3.4.5|80|failure|EMPTY|T diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-empty/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-empty/ssh.log 
deleted file mode 100644 index e1ba48cf8e..0000000000 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-empty/ssh.log +++ /dev/null @@ -1,9 +0,0 @@ -PREFIX<>separator \x7c -PREFIX<>path|ssh -PREFIX<>fields|t|id.orig_h|id.orig_p|id.resp_h|id.resp_p|status|country|b -PREFIX<>types|time|addr|port|addr|port|string|string|bool -1315167052.828457|1.2.3.4|1234|2.3.4.5|80|success|unknown|NOT-SET -1315167052.828457|1.2.3.4|1234|2.3.4.5|80|NOT-SET|US|NOT-SET -1315167052.828457|1.2.3.4|1234|2.3.4.5|80|failure|UK|NOT-SET -1315167052.828457|1.2.3.4|1234|2.3.4.5|80|NOT-SET|BR|NOT-SET -1315167052.828457|1.2.3.4|1234|2.3.4.5|80|failure|EMPTY|T diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-notset-str/test.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-notset-str/test.log new file mode 100644 index 0000000000..b77541d35e --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-notset-str/test.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path test +#open 2012-07-20-01-49-19 +#fields x y z +#types string string string +\x2d - (empty) +#close 2012-07-20-01-49-19 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-odd-url/http.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-odd-url/http.log index db9ce497ed..f1ff4db3b8 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-odd-url/http.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-odd-url/http.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2011-09-12-03-57-36 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string string file -1315799856.264750 UWkUyAuUGXf 10.0.1.104 64216 193.40.5.162 80 1 GET lepo.it.da.ut.ee /~cect/teoreetilised seminarid_2010/arheoloogia_uurimisr\xfchma_seminar/Joyce et al - The Languages of Archaeology ~ Dialogue, Narrative and Writing.pdf - Wget/1.12 (darwin10.8.0) 0 346 404 Not Found - - - - - - - text/html - - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1315799856.264750 UWkUyAuUGXf 10.0.1.104 64216 193.40.5.162 80 1 GET lepo.it.da.ut.ee /~cect/teoreetilised seminarid_2010/arheoloogia_uurimisr\xfchma_seminar/Joyce et al - The Languages of Archaeology ~ Dialogue, Narrative and Writing.pdf - Wget/1.12 (darwin10.8.0) 0 346 404 Not Found - - - (empty) - - - text/html - - +#close 2011-09-12-03-57-37 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-set-separator/test.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-set-separator/test.log new file mode 100644 index 0000000000..25e9319eec --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape-set-separator/test.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path test +#open 2012-07-20-01-49-19 +#fields ss +#types table[string] 
+CC,AA,\x2c,\x2c\x2c +#close 2012-07-20-01-49-19 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape/ssh.log index 3100fa0cb2..d61eae873a 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-escape/ssh.log @@ -1,9 +1,12 @@ -#separator \x7c\x7c +#separator || +#set_separator||, +#empty_field||(empty) +#unset_field||- #path||ssh #fields||t||id.orig_h||id.orig_p||id.resp_h||id.resp_p||status||country #types||time||addr||port||addr||port||string||string -1315802040.006123||1.2.3.4||1234||2.3.4.5||80||success||unknown -1315802040.006123||1.2.3.4||1234||2.3.4.5||80||failure||US -1315802040.006123||1.2.3.4||1234||2.3.4.5||80||fa\x7c\x7cure||UK -1315802040.006123||1.2.3.4||1234||2.3.4.5||80||su\x7c\x7cess||BR -1315802040.006123||1.2.3.4||1234||2.3.4.5||80||failure||MX +1343417536.767956||1.2.3.4||1234||2.3.4.5||80||success||unknown +1343417536.767956||1.2.3.4||1234||2.3.4.5||80||failure||US +1343417536.767956||1.2.3.4||1234||2.3.4.5||80||fa\x7c\x7cure||UK +1343417536.767956||1.2.3.4||1234||2.3.4.5||80||su\x7c\x7cess||BR +1343417536.767956||1.2.3.4||1234||2.3.4.5||80||failure||MX diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-line-like-comment/test.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-line-like-comment/test.log new file mode 100644 index 0000000000..0f825462ab --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-line-like-comment/test.log @@ -0,0 +1,12 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path test +#open 2012-07-20-01-49-22 +#fields data c +#types string count +Test1 42 +\x23Kaputt 42 +Test2 42 +#close 2012-07-20-01-49-22 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-options/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-options/ssh.log index 33a922cc2b..6e3263673a 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-options/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-options/ssh.log @@ -1,5 +1,5 @@ -1299718506.38074|1.2.3.4|1234|2.3.4.5|80|success|unknown -1299718506.38074|1.2.3.4|1234|2.3.4.5|80|failure|US -1299718506.38074|1.2.3.4|1234|2.3.4.5|80|failure|UK -1299718506.38074|1.2.3.4|1234|2.3.4.5|80|success|BR -1299718506.38074|1.2.3.4|1234|2.3.4.5|80|failure|MX +1342748960.098729|1.2.3.4|1234|2.3.4.5|80|success|unknown +1342748960.098729|1.2.3.4|1234|2.3.4.5|80|failure|US +1342748960.098729|1.2.3.4|1234|2.3.4.5|80|failure|UK +1342748960.098729|1.2.3.4|1234|2.3.4.5|80|success|BR +1342748960.098729|1.2.3.4|1234|2.3.4.5|80|failure|MX diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-timestamps/test.log b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-timestamps/test.log index 7f512c15d9..c644dab007 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-timestamps/test.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.ascii-timestamps/test.log @@ -1,5 +1,9 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2012-07-20-01-49-20 #fields data #types time 1234567890.000000 @@ -10,3 +14,4 @@ 1234567890.000010 1234567890.000001 1234567890.000000 +#close 2012-07-20-01-49-20 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.attr-extend/ssh.log 
b/testing/btest/Baseline/scripts.base.frameworks.logging.attr-extend/ssh.log index c2c32c5c6a..9eb2f0e663 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.attr-extend/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.attr-extend/ssh.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-20 #fields status country a1 b1 b2 #types string string count count count success unknown 1 3 4 +#close 2012-07-20-01-49-20 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.attr/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.attr/ssh.log index 18e4d5cbad..bcedd1174e 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.attr/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.attr/ssh.log @@ -1,5 +1,9 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-20 #fields status country #types string string success unknown @@ -7,3 +11,4 @@ failure US failure UK success BR failure MX +#close 2012-07-20-01-49-20 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.options/ssh.ds.xml b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.options/ssh.ds.xml new file mode 100644 index 0000000000..cacc3b0ea4 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.options/ssh.ds.xml @@ -0,0 +1,16 @@ + + + + + + + + + + + + + + + + diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.rotate/out b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.rotate/out new file mode 100644 index 0000000000..1e5e1b05c6 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.rotate/out @@ -0,0 +1,290 @@ +test.2011-03-07-03-00-05.ds test 11-03-07_03.00.05 11-03-07_04.00.05 0 dataseries +test.2011-03-07-04-00-05.ds test 11-03-07_04.00.05 11-03-07_05.00.05 0 dataseries +test.2011-03-07-05-00-05.ds test 11-03-07_05.00.05 11-03-07_06.00.05 0 dataseries +test.2011-03-07-06-00-05.ds test 11-03-07_06.00.05 11-03-07_07.00.05 0 dataseries +test.2011-03-07-07-00-05.ds test 11-03-07_07.00.05 11-03-07_08.00.05 0 dataseries +test.2011-03-07-08-00-05.ds test 11-03-07_08.00.05 11-03-07_09.00.05 0 dataseries +test.2011-03-07-09-00-05.ds test 11-03-07_09.00.05 11-03-07_10.00.05 0 dataseries +test.2011-03-07-10-00-05.ds test 11-03-07_10.00.05 11-03-07_11.00.05 0 dataseries +test.2011-03-07-11-00-05.ds test 11-03-07_11.00.05 11-03-07_12.00.05 0 dataseries +test.2011-03-07-12-00-05.ds test 11-03-07_12.00.05 11-03-07_12.59.55 1 dataseries +> test.2011-03-07-03-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299466805.000000 10.0.0.1 20 10.0.0.2 1024 +1299470395.000000 10.0.0.2 20 10.0.0.3 0 +> test.2011-03-07-04-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299470405.000000 10.0.0.1 20 10.0.0.2 1025 +1299473995.000000 10.0.0.2 20 10.0.0.3 1 +> test.2011-03-07-05-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299474005.000000 10.0.0.1 20 10.0.0.2 1026 +1299477595.000000 10.0.0.2 20 10.0.0.3 2 +> test.2011-03-07-06-00-05.ds +# Extent Types ... 
+ + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299477605.000000 10.0.0.1 20 10.0.0.2 1027 +1299481195.000000 10.0.0.2 20 10.0.0.3 3 +> test.2011-03-07-07-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299481205.000000 10.0.0.1 20 10.0.0.2 1028 +1299484795.000000 10.0.0.2 20 10.0.0.3 4 +> test.2011-03-07-08-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299484805.000000 10.0.0.1 20 10.0.0.2 1029 +1299488395.000000 10.0.0.2 20 10.0.0.3 5 +> test.2011-03-07-09-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299488405.000000 10.0.0.1 20 10.0.0.2 1030 +1299491995.000000 10.0.0.2 20 10.0.0.3 6 +> test.2011-03-07-10-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299492005.000000 10.0.0.1 20 10.0.0.2 1031 +1299495595.000000 10.0.0.2 20 10.0.0.3 7 +> test.2011-03-07-11-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299495605.000000 10.0.0.1 20 10.0.0.2 1032 +1299499195.000000 10.0.0.2 20 10.0.0.3 8 +> test.2011-03-07-12-00-05.ds +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='test' +t id.orig_h id.orig_p id.resp_h id.resp_p +1299499205.000000 10.0.0.1 20 10.0.0.2 1033 +1299502795.000000 10.0.0.2 20 10.0.0.3 9 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.test-logging/ssh.ds.txt b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.test-logging/ssh.ds.txt new file mode 100644 index 0000000000..e6abc3f1f6 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.test-logging/ssh.ds.txt @@ -0,0 +1,34 @@ +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='ssh' +t id.orig_h id.orig_p id.resp_h id.resp_p status country +1342748962.493341 1.2.3.4 1234 2.3.4.5 80 success unknown +1342748962.493341 1.2.3.4 1234 2.3.4.5 80 failure US +1342748962.493341 1.2.3.4 1234 2.3.4.5 80 failure UK +1342748962.493341 1.2.3.4 1234 2.3.4.5 80 success BR +1342748962.493341 1.2.3.4 1234 2.3.4.5 80 failure MX diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.time-as-int/conn.ds.txt b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.time-as-int/conn.ds.txt new file mode 100644 index 0000000000..c4ac546ab6 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.time-as-int/conn.ds.txt @@ -0,0 +1,89 @@ +# Extent Types ... 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='conn' +ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +1300475167096535 UWkUyAuUGXf 141.142.220.202 5353 224.0.0.251 5353 udp dns 0 0 0 S0 F 0 D 1 73 0 0 +1300475167097012 arKYeMETxOg fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0 0 0 S0 F 0 D 1 199 0 0 +1300475167099816 k6kgXLOoSKl 141.142.220.50 5353 224.0.0.251 5353 udp 0 0 0 S0 F 0 D 1 179 0 0 +1300475168853899 TEfuqmmG4bh 141.142.220.118 43927 141.142.2.2 53 udp dns 435 0 89 SHR F 0 Cd 0 0 1 117 +1300475168854378 FrJExwHcSal 141.142.220.118 37676 141.142.2.2 53 udp dns 420 0 99 SHR F 0 Cd 0 0 1 127 +1300475168854837 5OKnoww6xl4 141.142.220.118 40526 141.142.2.2 53 udp dns 391 0 183 SHR F 0 Cd 0 0 1 211 +1300475168857956 fRFu0wcOle6 141.142.220.118 32902 141.142.2.2 53 udp dns 317 0 89 SHR F 0 Cd 0 0 1 117 +1300475168858306 qSsw6ESzHV4 141.142.220.118 59816 141.142.2.2 53 udp dns 343 0 99 SHR F 0 Cd 0 0 1 127 +1300475168858713 iE6yhOq3SF 141.142.220.118 59714 141.142.2.2 53 udp dns 375 0 183 SHR F 0 Cd 0 0 1 211 +1300475168891644 qCaWGmzFtM5 141.142.220.118 58206 141.142.2.2 53 udp dns 339 0 89 SHR F 0 Cd 0 0 1 117 +1300475168892037 70MGiRM1Qf4 141.142.220.118 38911 141.142.2.2 53 udp dns 334 0 99 SHR F 0 Cd 0 0 1 127 +1300475168892414 h5DsfNtYzi1 141.142.220.118 59746 141.142.2.2 53 udp dns 420 0 183 SHR F 0 Cd 0 0 1 211 +1300475168893988 c4Zw9TmAE05 141.142.220.118 45000 141.142.2.2 53 udp dns 384 0 89 SHR F 0 Cd 0 0 1 117 +1300475168894422 EAr0uf4mhq 141.142.220.118 48479 141.142.2.2 53 udp dns 316 0 99 SHR F 0 Cd 0 0 1 127 +1300475168894787 GvmoxJFXdTa 141.142.220.118 48128 141.142.2.2 53 udp dns 422 0 183 SHR F 0 Cd 0 0 1 211 +1300475168901749 slFea8xwSmb 141.142.220.118 56056 141.142.2.2 53 udp dns 402 0 131 SHR F 0 Cd 0 0 1 159 +1300475168902195 UfGkYA2HI2g 141.142.220.118 55092 141.142.2.2 53 udp dns 374 0 198 SHR F 0 Cd 0 0 1 226 +1300475169899438 BWaU4aSuwkc 141.142.220.44 5353 224.0.0.251 5353 udp dns 0 0 0 S0 F 0 D 1 85 0 0 +1300475170862384 10XodEwRycf 141.142.220.226 137 141.142.220.255 137 udp dns 2613016 350 0 S0 F 0 D 7 546 0 0 +1300475171675372 zno26fFZkrh fe80::3074:17d5:2052:c324 65373 ff02::1:3 5355 udp dns 100096 66 0 S0 F 0 D 2 162 0 0 +1300475171677081 v5rgkJBig5l 141.142.220.226 55131 224.0.0.252 5355 udp dns 100020 66 0 S0 F 0 D 2 122 0 0 +1300475173116749 eWZCH7OONC1 fe80::3074:17d5:2052:c324 54213 ff02::1:3 5355 udp dns 99801 66 0 S0 F 0 D 2 162 0 0 +1300475173117362 0Pwk3ntf8O3 141.142.220.226 55671 224.0.0.252 5355 udp dns 99848 66 0 S0 F 0 D 2 122 0 0 +1300475173153679 0HKorjr8Zp7 141.142.220.238 56641 141.142.220.255 137 udp dns 0 0 0 S0 F 0 D 1 78 0 0 +1300475168859163 GSxOnSLghOa 141.142.220.118 49998 208.80.152.3 80 tcp 215893 1130 734 S1 F 1130 ShACad 4 216 4 950 +1300475168652003 nQcgTWjvg4c 141.142.220.118 35634 208.80.152.2 80 tcp 61328 0 350 OTH F 0 CdA 1 52 1 402 +1300475168895267 0Q4FH8sESw5 141.142.220.118 50001 208.80.152.3 80 tcp 227283 1178 734 S1 F 1178 ShACad 4 216 4 950 +1300475168902635 i2rO3KD1Syg 141.142.220.118 35642 208.80.152.2 80 tcp 120040 534 412 S1 F 534 ShACad 3 164 3 576 +1300475168892936 Tw8jXtpTGu6 141.142.220.118 50000 208.80.152.3 80 tcp 229603 1148 734 S1 F 1148 ShACad 4 216 4 950 +1300475168855305 3PKsZ2Uye21 141.142.220.118 49996 208.80.152.3 80 tcp 218501 1171 733 S1 F 1171 ShACad 4 216 4 949 
+1300475168892913 P654jzLoe3a 141.142.220.118 49999 208.80.152.3 80 tcp 220960 1137 733 S1 F 1137 ShACad 4 216 4 949 +1300475169780331 2cx26uAvUPl 141.142.220.235 6705 173.192.163.128 80 tcp 0 0 0 OTH F 0 h 0 0 1 48 +1300475168724007 j4u32Pc5bif 141.142.220.118 48649 208.80.152.118 80 tcp 119904 525 232 S1 F 525 ShACad 3 164 3 396 +1300475168855330 VW0XPVINV8a 141.142.220.118 49997 208.80.152.3 80 tcp 219720 1125 734 S1 F 1125 ShACad 4 216 4 950 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.wikipedia/conn.ds.txt b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.wikipedia/conn.ds.txt new file mode 100644 index 0000000000..b74b9fd7e3 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.wikipedia/conn.ds.txt @@ -0,0 +1,89 @@ +# Extent Types ... + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='conn' +ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +1300475167.096535 UWkUyAuUGXf 141.142.220.202 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 73 0 0 +1300475167.097012 arKYeMETxOg fe80::217:f2ff:fed7:cf65 5353 ff02::fb 5353 udp 0.000000 0 0 S0 F 0 D 1 199 0 0 +1300475167.099816 k6kgXLOoSKl 141.142.220.50 5353 224.0.0.251 5353 udp 0.000000 0 0 S0 F 0 D 1 179 0 0 +1300475168.853899 TEfuqmmG4bh 141.142.220.118 43927 141.142.2.2 53 udp dns 0.000435 0 89 SHR F 0 Cd 0 0 1 117 +1300475168.854378 FrJExwHcSal 141.142.220.118 37676 141.142.2.2 53 udp dns 0.000420 0 99 SHR F 0 Cd 0 0 1 127 +1300475168.854837 5OKnoww6xl4 141.142.220.118 40526 141.142.2.2 53 udp dns 0.000392 0 183 SHR F 0 Cd 0 0 1 211 +1300475168.857956 fRFu0wcOle6 141.142.220.118 32902 141.142.2.2 53 udp dns 0.000317 0 89 SHR F 0 Cd 0 0 1 117 +1300475168.858306 qSsw6ESzHV4 141.142.220.118 59816 141.142.2.2 53 udp dns 0.000343 0 99 SHR F 0 Cd 0 0 1 127 +1300475168.858713 iE6yhOq3SF 141.142.220.118 59714 141.142.2.2 53 udp dns 0.000375 0 183 SHR F 0 Cd 0 0 1 211 +1300475168.891644 qCaWGmzFtM5 141.142.220.118 58206 141.142.2.2 53 udp dns 0.000339 0 89 SHR F 0 Cd 0 0 1 117 +1300475168.892037 70MGiRM1Qf4 141.142.220.118 38911 141.142.2.2 53 udp dns 0.000335 0 99 SHR F 0 Cd 0 0 1 127 +1300475168.892414 h5DsfNtYzi1 141.142.220.118 59746 141.142.2.2 53 udp dns 0.000421 0 183 SHR F 0 Cd 0 0 1 211 +1300475168.893988 c4Zw9TmAE05 141.142.220.118 45000 141.142.2.2 53 udp dns 0.000384 0 89 SHR F 0 Cd 0 0 1 117 +1300475168.894422 EAr0uf4mhq 141.142.220.118 48479 141.142.2.2 53 udp dns 0.000317 0 99 SHR F 0 Cd 0 0 1 127 +1300475168.894787 GvmoxJFXdTa 141.142.220.118 48128 141.142.2.2 53 udp dns 0.000423 0 183 SHR F 0 Cd 0 0 1 211 +1300475168.901749 slFea8xwSmb 141.142.220.118 56056 141.142.2.2 53 udp dns 0.000402 0 131 SHR F 0 Cd 0 0 1 159 +1300475168.902195 UfGkYA2HI2g 141.142.220.118 55092 141.142.2.2 53 udp dns 0.000374 0 198 SHR F 0 Cd 0 0 1 226 +1300475169.899438 BWaU4aSuwkc 141.142.220.44 5353 224.0.0.251 5353 udp dns 0.000000 0 0 S0 F 0 D 1 85 0 0 +1300475170.862384 10XodEwRycf 141.142.220.226 137 141.142.220.255 137 udp dns 2.613017 350 0 S0 F 0 D 7 546 0 0 +1300475171.675372 zno26fFZkrh fe80::3074:17d5:2052:c324 65373 ff02::1:3 5355 udp dns 0.100096 66 0 S0 F 0 D 2 162 0 0 +1300475171.677081 v5rgkJBig5l 141.142.220.226 55131 224.0.0.252 5355 udp dns 0.100021 66 0 S0 F 0 D 2 122 0 0 +1300475173.116749 eWZCH7OONC1 fe80::3074:17d5:2052:c324 
54213 ff02::1:3 5355 udp dns 0.099801 66 0 S0 F 0 D 2 162 0 0 +1300475173.117362 0Pwk3ntf8O3 141.142.220.226 55671 224.0.0.252 5355 udp dns 0.099849 66 0 S0 F 0 D 2 122 0 0 +1300475173.153679 0HKorjr8Zp7 141.142.220.238 56641 141.142.220.255 137 udp dns 0.000000 0 0 S0 F 0 D 1 78 0 0 +1300475168.859163 GSxOnSLghOa 141.142.220.118 49998 208.80.152.3 80 tcp 0.215893 1130 734 S1 F 1130 ShACad 4 216 4 950 +1300475168.652003 nQcgTWjvg4c 141.142.220.118 35634 208.80.152.2 80 tcp 0.061329 0 350 OTH F 0 CdA 1 52 1 402 +1300475168.895267 0Q4FH8sESw5 141.142.220.118 50001 208.80.152.3 80 tcp 0.227284 1178 734 S1 F 1178 ShACad 4 216 4 950 +1300475168.902635 i2rO3KD1Syg 141.142.220.118 35642 208.80.152.2 80 tcp 0.120041 534 412 S1 F 534 ShACad 3 164 3 576 +1300475168.892936 Tw8jXtpTGu6 141.142.220.118 50000 208.80.152.3 80 tcp 0.229603 1148 734 S1 F 1148 ShACad 4 216 4 950 +1300475168.855305 3PKsZ2Uye21 141.142.220.118 49996 208.80.152.3 80 tcp 0.218501 1171 733 S1 F 1171 ShACad 4 216 4 949 +1300475168.892913 P654jzLoe3a 141.142.220.118 49999 208.80.152.3 80 tcp 0.220961 1137 733 S1 F 1137 ShACad 4 216 4 949 +1300475169.780331 2cx26uAvUPl 141.142.220.235 6705 173.192.163.128 80 tcp 0.000000 0 0 OTH F 0 h 0 0 1 48 +1300475168.724007 j4u32Pc5bif 141.142.220.118 48649 208.80.152.118 80 tcp 0.119905 525 232 S1 F 525 ShACad 3 164 3 396 +1300475168.855330 VW0XPVINV8a 141.142.220.118 49997 208.80.152.3 80 tcp 0.219720 1125 734 S1 F 1125 ShACad 4 216 4 950 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.wikipedia/http.ds.txt b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.wikipedia/http.ds.txt new file mode 100644 index 0000000000..ae62fbec3d --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.dataseries.wikipedia/http.ds.txt @@ -0,0 +1,81 @@ +# Extent Types ... 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +# Extent, type='http' +ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file +1300475168.843894 j4u32Pc5bif 141.142.220.118 48649 208.80.152.118 80 0 0 0 304 Not Modified 0 +1300475168.975800 VW0XPVINV8a 141.142.220.118 49997 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475168.976327 3PKsZ2Uye21 141.142.220.118 49996 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475168.979160 GSxOnSLghOa 141.142.220.118 49998 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.012666 Tw8jXtpTGu6 141.142.220.118 50000 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.012730 P654jzLoe3a 141.142.220.118 49999 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.014860 0Q4FH8sESw5 141.142.220.118 50001 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.022665 i2rO3KD1Syg 141.142.220.118 35642 208.80.152.2 80 0 0 0 304 Not Modified 0 +1300475169.036294 VW0XPVINV8a 141.142.220.118 49997 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.036798 3PKsZ2Uye21 141.142.220.118 49996 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.039923 GSxOnSLghOa 141.142.220.118 49998 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.074793 Tw8jXtpTGu6 141.142.220.118 50000 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.074938 P654jzLoe3a 141.142.220.118 49999 208.80.152.3 80 0 0 0 304 Not Modified 0 +1300475169.075065 0Q4FH8sESw5 141.142.220.118 50001 208.80.152.3 80 0 0 0 304 Not Modified 0 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.empty-event/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.empty-event/ssh.log index 49272bfd53..b255ac3489 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.empty-event/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.empty-event/ssh.log @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-20 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.369918 1.2.3.4 1234 2.3.4.5 80 success unknown -1315167053.369918 1.2.3.4 1234 2.3.4.5 80 failure US -1315167053.369918 1.2.3.4 1234 2.3.4.5 80 failure UK -1315167053.369918 1.2.3.4 1234 2.3.4.5 80 success BR -1315167053.369918 1.2.3.4 1234 2.3.4.5 80 failure MX +1342748960.468458 1.2.3.4 1234 2.3.4.5 80 success unknown +1342748960.468458 1.2.3.4 1234 2.3.4.5 80 failure US +1342748960.468458 1.2.3.4 1234 2.3.4.5 80 failure UK +1342748960.468458 1.2.3.4 1234 2.3.4.5 80 success BR +1342748960.468458 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-49-20 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.events/output b/testing/btest/Baseline/scripts.base.frameworks.logging.events/output index c3dbf607a6..6bd153946e 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.events/output +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.events/output @@ -1,2 +1,2 @@ -[t=1299718502.96511, id=[orig_h=1.2.3.4, orig_p=1234/tcp, resp_h=2.3.4.5, resp_p=80/tcp], status=success, country=] -[t=1299718502.96511, id=[orig_h=1.2.3.4, orig_p=1234/tcp, resp_h=2.3.4.5, resp_p=80/tcp], status=failure, country=US] +[t=1342748960.593451, id=[orig_h=1.2.3.4, orig_p=1234/tcp, 
resp_h=2.3.4.5, resp_p=80/tcp], status=success, country=unknown] +[t=1342748960.593451, id=[orig_h=1.2.3.4, orig_p=1234/tcp, resp_h=2.3.4.5, resp_p=80/tcp], status=failure, country=US] diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.exclude/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.exclude/ssh.log index b078b4746a..f795159a16 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.exclude/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.exclude/ssh.log @@ -1,5 +1,9 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-20 #fields id.orig_p id.resp_h id.resp_p status country #types port addr port string string 1234 2.3.4.5 80 success unknown @@ -7,3 +11,4 @@ 1234 2.3.4.5 80 failure UK 1234 2.3.4.5 80 success BR 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-49-20 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.file/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.file/ssh.log index 0a988ff9b9..34d5f28b82 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.file/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.file/ssh.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-20 #fields t f #types time file -1315167053.585834 Foo.log +1342748960.757056 Foo.log +#close 2012-07-20-01-49-20 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.include/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.include/ssh.log index 5675ef6632..8935046687 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.include/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.include/ssh.log @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-20 #fields t id.orig_h #types time addr -1315167053.694473 1.2.3.4 -1315167053.694473 1.2.3.4 -1315167053.694473 1.2.3.4 -1315167053.694473 1.2.3.4 -1315167053.694473 1.2.3.4 +1342748960.796093 1.2.3.4 +1342748960.796093 1.2.3.4 +1342748960.796093 1.2.3.4 +1342748960.796093 1.2.3.4 +1342748960.796093 1.2.3.4 +#close 2012-07-20-01-49-20 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.none-debug/output b/testing/btest/Baseline/scripts.base.frameworks.logging.none-debug/output new file mode 100644 index 0000000000..b2a8921c38 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.none-debug/output @@ -0,0 +1,12 @@ +[logging::writer::None] + path=ssh + rotation_interval=3600 + rotation_base=300 + config[foo] = bar + config[foo2] = bar2 + field id.orig_p: port + field id.resp_h: addr + field id.resp_p: port + field status: string + field country: string + diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/local.log b/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/local.log index d8d90cf1fa..819b7b9bc2 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/local.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/local.log @@ -1,34 +1,39 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path local +#open 2011-03-18-19-06-13 #fields ts id.orig_h #types time addr -1300475168.652003 141.142.220.118 -1300475168.724007 141.142.220.118 1300475168.859163 141.142.220.118 
+1300475168.652003 141.142.220.118 +1300475168.895267 141.142.220.118 1300475168.902635 141.142.220.118 1300475168.892936 141.142.220.118 -1300475168.892913 141.142.220.118 1300475168.855305 141.142.220.118 +1300475168.892913 141.142.220.118 +1300475168.724007 141.142.220.118 1300475168.855330 141.142.220.118 -1300475168.895267 141.142.220.118 -1300475168.853899 141.142.220.118 -1300475168.893988 141.142.220.118 -1300475168.894787 141.142.220.118 -1300475173.117362 141.142.220.226 -1300475173.153679 141.142.220.238 -1300475168.857956 141.142.220.118 -1300475168.854378 141.142.220.118 -1300475168.854837 141.142.220.118 -1300475167.099816 141.142.220.50 1300475168.891644 141.142.220.118 -1300475168.892037 141.142.220.118 -1300475171.677081 141.142.220.226 -1300475168.894422 141.142.220.118 -1300475167.096535 141.142.220.202 -1300475168.858713 141.142.220.118 -1300475168.902195 141.142.220.118 -1300475169.899438 141.142.220.44 -1300475168.892414 141.142.220.118 -1300475168.858306 141.142.220.118 -1300475168.901749 141.142.220.118 1300475170.862384 141.142.220.226 +1300475168.853899 141.142.220.118 +1300475168.854378 141.142.220.118 +1300475168.857956 141.142.220.118 +1300475173.117362 141.142.220.226 +1300475168.858713 141.142.220.118 +1300475167.096535 141.142.220.202 +1300475167.099816 141.142.220.50 +1300475168.892037 141.142.220.118 +1300475168.893988 141.142.220.118 +1300475168.854837 141.142.220.118 +1300475169.899438 141.142.220.44 +1300475168.858306 141.142.220.118 +1300475168.894422 141.142.220.118 +1300475173.153679 141.142.220.238 +1300475168.892414 141.142.220.118 +1300475171.677081 141.142.220.226 +1300475168.902195 141.142.220.118 +1300475168.894787 141.142.220.118 +1300475168.901749 141.142.220.118 +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/remote.log b/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/remote.log index a17c2821f5..41f575ef63 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/remote.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.path-func-column-demote/remote.log @@ -1,5 +1,13 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path remote +#open 2011-03-18-19-06-13 #fields ts id.orig_h #types time addr 1300475169.780331 173.192.163.128 +1300475167.097012 fe80::217:f2ff:fed7:cf65 +1300475171.675372 fe80::3074:17d5:2052:c324 +1300475173.116749 fe80::3074:17d5:2052:c324 +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.path-func/output b/testing/btest/Baseline/scripts.base.frameworks.logging.path-func/output index 2c196340cc..c67a12e1d9 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.path-func/output +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.path-func/output @@ -6,37 +6,72 @@ static-prefix-1-US.log static-prefix-2-MX2.log static-prefix-2-UK.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path static-prefix-0-BR +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.803346 1.2.3.4 1234 2.3.4.5 80 success BR +1342748961.180156 1.2.3.4 1234 2.3.4.5 80 success BR +#close 2012-07-20-01-49-21 #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path static-prefix-0-MX3 +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country 
#types time addr port addr port string string -1315167053.803346 1.2.3.4 1234 2.3.4.5 80 failure MX3 +1342748961.180156 1.2.3.4 1234 2.3.4.5 80 failure MX3 +#close 2012-07-20-01-49-21 #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path static-prefix-0-unknown +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.803346 1.2.3.4 1234 2.3.4.5 80 success unknown +1342748961.180156 1.2.3.4 1234 2.3.4.5 80 success unknown +#close 2012-07-20-01-49-21 #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path static-prefix-1-MX +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.803346 1.2.3.4 1234 2.3.4.5 80 failure MX +1342748961.180156 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-49-21 #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path static-prefix-1-US +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.803346 1.2.3.4 1234 2.3.4.5 80 failure US +1342748961.180156 1.2.3.4 1234 2.3.4.5 80 failure US +#close 2012-07-20-01-49-21 #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path static-prefix-2-MX2 +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.803346 1.2.3.4 1234 2.3.4.5 80 failure MX2 +1342748961.180156 1.2.3.4 1234 2.3.4.5 80 failure MX2 +#close 2012-07-20-01-49-21 #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path static-prefix-2-UK +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.803346 1.2.3.4 1234 2.3.4.5 80 failure UK +1342748961.180156 1.2.3.4 1234 2.3.4.5 80 failure UK +#close 2012-07-20-01-49-21 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.failure.log b/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.failure.log index ba688d7843..a362135318 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.failure.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.failure.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test.failure +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.923545 1.2.3.4 1234 2.3.4.5 80 failure US +1342748961.488370 1.2.3.4 1234 2.3.4.5 80 failure US +#close 2012-07-20-01-49-21 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.success.log b/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.success.log index 7a91b1a2d9..dd9c300429 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.success.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.pred/test.success.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test.success +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167053.923545 1.2.3.4 1234 2.3.4.5 80 success - +1342748961.488370 1.2.3.4 1234 2.3.4.5 80 success unknown +#close 
2012-07-20-01-49-21 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.remote-types/receiver.test.log b/testing/btest/Baseline/scripts.base.frameworks.logging.remote-types/receiver.test.log index c00e7765d5..13364f8e77 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.remote-types/receiver.test.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.remote-types/receiver.test.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field EMPTY +#unset_field - #path test +#open 1970-01-01-00-00-00 #fields b i e c p sn a d t iv s sc ss se vc ve -#types bool int enum count port subnet addr double time interval string table table table vector vector -T -42 Test::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315167054.320958 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY +#types bool int enum count port subnet addr double time interval string table[count] table[string] table[string] vector[count] vector[string] +T -42 Test::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1342749004.579242 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY +#close 2012-07-20-01-50-05 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.failure.log b/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.failure.log index aba9fdddd9..71e1d18c73 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.failure.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.failure.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test.failure +#open 2012-07-20-01-50-18 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 failure US -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 failure UK -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 failure MX +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure US +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure UK +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-50-18 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.log b/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.log index b928c37685..bc3dac5a1a 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.log @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2012-07-20-01-50-18 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 success - -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 failure US -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 failure UK -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 success BR -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 failure MX +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success unknown +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure US +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure UK +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success BR +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-50-18 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.success.log b/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.success.log index a951c6ed1a..f0b26454b4 100644 --- 
a/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.success.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.remote/sender.test.success.log @@ -1,6 +1,11 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test.success +#open 2012-07-20-01-50-18 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 success - -1315167059.502670 1.2.3.4 1234 2.3.4.5 80 success BR +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success unknown +1342749018.970682 1.2.3.4 1234 2.3.4.5 80 success BR +#close 2012-07-20-01-50-18 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.failure.log b/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.failure.log index 6185e86028..de324c337f 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.failure.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.failure.log @@ -1,6 +1,11 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh.failure +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167066.575996 1.2.3.4 1234 2.3.4.5 80 failure US -1315167066.575996 1.2.3.4 1234 2.3.4.5 80 failure UK +1342748961.521536 1.2.3.4 1234 2.3.4.5 80 failure US +1342748961.521536 1.2.3.4 1234 2.3.4.5 80 failure UK +#close 2012-07-20-01-49-21 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.log index a4ec2dc7de..ed0a118cac 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.remove/ssh.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167066.575996 1.2.3.4 1234 2.3.4.5 80 failure US -1315167066.575996 1.2.3.4 1234 2.3.4.5 80 failure UK -1315167066.575996 1.2.3.4 1234 2.3.4.5 80 failure BR +1342748961.521536 1.2.3.4 1234 2.3.4.5 80 failure US +1342748961.521536 1.2.3.4 1234 2.3.4.5 80 failure UK +1342748961.521536 1.2.3.4 1234 2.3.4.5 80 failure BR +#close 2012-07-20-01-49-21 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/.stderr b/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/.stderr index 0954137b7e..e69de29bb2 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/.stderr +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/.stderr @@ -1,10 +0,0 @@ -1st test.2011-03-07-03-00-05.log test 11-03-07_03.00.05 11-03-07_04.00.05 0 -1st test.2011-03-07-04-00-05.log test 11-03-07_04.00.05 11-03-07_05.00.05 0 -1st test.2011-03-07-05-00-05.log test 11-03-07_05.00.05 11-03-07_06.00.05 0 -1st test.2011-03-07-06-00-05.log test 11-03-07_06.00.05 11-03-07_07.00.05 0 -1st test.2011-03-07-07-00-05.log test 11-03-07_07.00.05 11-03-07_08.00.05 0 -1st test.2011-03-07-08-00-05.log test 11-03-07_08.00.05 11-03-07_09.00.05 0 -1st test.2011-03-07-09-00-05.log test 11-03-07_09.00.05 11-03-07_10.00.05 0 -1st test.2011-03-07-10-00-05.log test 11-03-07_10.00.05 11-03-07_11.00.05 0 -1st test.2011-03-07-11-00-05.log test 11-03-07_11.00.05 11-03-07_12.00.05 0 
-1st test.2011-03-07-12-00-05.log test 11-03-07_12.00.05 11-03-07_12.59.55 1 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/out b/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/out index 337ed3ca32..3acce6f1ce 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/out +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.rotate-custom/out @@ -1,3 +1,13 @@ +1st test.2011-03-07-03-00-05.log test 11-03-07_03.00.05 11-03-07_04.00.05 0 ascii +1st test.2011-03-07-04-00-05.log test 11-03-07_04.00.05 11-03-07_05.00.05 0 ascii +1st test.2011-03-07-05-00-05.log test 11-03-07_05.00.05 11-03-07_06.00.05 0 ascii +1st test.2011-03-07-06-00-05.log test 11-03-07_06.00.05 11-03-07_07.00.05 0 ascii +1st test.2011-03-07-07-00-05.log test 11-03-07_07.00.05 11-03-07_08.00.05 0 ascii +1st test.2011-03-07-08-00-05.log test 11-03-07_08.00.05 11-03-07_09.00.05 0 ascii +1st test.2011-03-07-09-00-05.log test 11-03-07_09.00.05 11-03-07_10.00.05 0 ascii +1st test.2011-03-07-10-00-05.log test 11-03-07_10.00.05 11-03-07_11.00.05 0 ascii +1st test.2011-03-07-11-00-05.log test 11-03-07_11.00.05 11-03-07_12.00.05 0 ascii +1st test.2011-03-07-12-00-05.log test 11-03-07_12.00.05 11-03-07_12.59.55 1 ascii custom rotate, [writer=Log::WRITER_ASCII, fname=test2-11-03-07_03.00.05.log, path=test2, open=1299466805.0, close=1299470395.0, terminating=F] custom rotate, [writer=Log::WRITER_ASCII, fname=test2-11-03-07_03.59.55.log, path=test2, open=1299470395.0, close=1299470405.0, terminating=F] custom rotate, [writer=Log::WRITER_ASCII, fname=test2-11-03-07_04.00.05.log, path=test2, open=1299470405.0, close=1299473995.0, terminating=F] @@ -18,11 +28,16 @@ custom rotate, [writer=Log::WRITER_ASCII, fname=test2-11-03-07_11.00.05.log, pat custom rotate, [writer=Log::WRITER_ASCII, fname=test2-11-03-07_11.59.55.log, path=test2, open=1299499195.0, close=1299499205.0, terminating=F] custom rotate, [writer=Log::WRITER_ASCII, fname=test2-11-03-07_12.00.05.log, path=test2, open=1299499205.0, close=1299502795.0, terminating=F] custom rotate, [writer=Log::WRITER_ASCII, fname=test2-11-03-07_12.59.55.log, path=test2, open=1299502795.0, close=1299502795.0, terminating=T] +#close 2012-07-27-19-14-39 +#empty_field (empty) #fields t id.orig_h id.orig_p id.resp_h id.resp_p +#open 2012-07-27-19-14-39 #path test #path test2 #separator \x09 +#set_separator , #types time addr port addr port +#unset_field - 1299466805.000000 10.0.0.1 20 10.0.0.2 1024 1299470395.000000 10.0.0.2 20 10.0.0.3 0 1299470405.000000 10.0.0.1 20 10.0.0.2 1025 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.rotate/out b/testing/btest/Baseline/scripts.base.frameworks.logging.rotate/out index 74ce45023a..b26d2fcd1b 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.rotate/out +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.rotate/out @@ -1,80 +1,130 @@ -test.2011-03-07-03-00-05.log test 11-03-07_03.00.05 11-03-07_04.00.05 0 -test.2011-03-07-04-00-05.log test 11-03-07_04.00.05 11-03-07_05.00.05 0 -test.2011-03-07-05-00-05.log test 11-03-07_05.00.05 11-03-07_06.00.05 0 -test.2011-03-07-06-00-05.log test 11-03-07_06.00.05 11-03-07_07.00.05 0 -test.2011-03-07-07-00-05.log test 11-03-07_07.00.05 11-03-07_08.00.05 0 -test.2011-03-07-08-00-05.log test 11-03-07_08.00.05 11-03-07_09.00.05 0 -test.2011-03-07-09-00-05.log test 11-03-07_09.00.05 11-03-07_10.00.05 0 -test.2011-03-07-10-00-05.log test 11-03-07_10.00.05 11-03-07_11.00.05 0 
-test.2011-03-07-11-00-05.log test 11-03-07_11.00.05 11-03-07_12.00.05 0 -test.2011-03-07-12-00-05.log test 11-03-07_12.00.05 11-03-07_12.59.55 1 +test.2011-03-07-03-00-05.log test 11-03-07_03.00.05 11-03-07_04.00.05 0 ascii +test.2011-03-07-04-00-05.log test 11-03-07_04.00.05 11-03-07_05.00.05 0 ascii +test.2011-03-07-05-00-05.log test 11-03-07_05.00.05 11-03-07_06.00.05 0 ascii +test.2011-03-07-06-00-05.log test 11-03-07_06.00.05 11-03-07_07.00.05 0 ascii +test.2011-03-07-07-00-05.log test 11-03-07_07.00.05 11-03-07_08.00.05 0 ascii +test.2011-03-07-08-00-05.log test 11-03-07_08.00.05 11-03-07_09.00.05 0 ascii +test.2011-03-07-09-00-05.log test 11-03-07_09.00.05 11-03-07_10.00.05 0 ascii +test.2011-03-07-10-00-05.log test 11-03-07_10.00.05 11-03-07_11.00.05 0 ascii +test.2011-03-07-11-00-05.log test 11-03-07_11.00.05 11-03-07_12.00.05 0 ascii +test.2011-03-07-12-00-05.log test 11-03-07_12.00.05 11-03-07_12.59.55 1 ascii > test.2011-03-07-03-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299466805.000000 10.0.0.1 20 10.0.0.2 1024 1299470395.000000 10.0.0.2 20 10.0.0.3 0 +#close 2011-03-07-04-00-05 > test.2011-03-07-04-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299470405.000000 10.0.0.1 20 10.0.0.2 1025 1299473995.000000 10.0.0.2 20 10.0.0.3 1 +#close 2011-03-07-05-00-05 > test.2011-03-07-05-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299474005.000000 10.0.0.1 20 10.0.0.2 1026 1299477595.000000 10.0.0.2 20 10.0.0.3 2 +#close 2011-03-07-06-00-05 > test.2011-03-07-06-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299477605.000000 10.0.0.1 20 10.0.0.2 1027 1299481195.000000 10.0.0.2 20 10.0.0.3 3 +#close 2011-03-07-07-00-05 > test.2011-03-07-07-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299481205.000000 10.0.0.1 20 10.0.0.2 1028 1299484795.000000 10.0.0.2 20 10.0.0.3 4 +#close 2011-03-07-08-00-05 > test.2011-03-07-08-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299484805.000000 10.0.0.1 20 10.0.0.2 1029 1299488395.000000 10.0.0.2 20 10.0.0.3 5 +#close 2011-03-07-09-00-05 > test.2011-03-07-09-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299488405.000000 10.0.0.1 20 10.0.0.2 1030 1299491995.000000 10.0.0.2 20 10.0.0.3 6 +#close 2011-03-07-10-00-05 > test.2011-03-07-10-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299492005.000000 10.0.0.1 20 10.0.0.2 1031 
1299495595.000000 10.0.0.2 20 10.0.0.3 7 +#close 2011-03-07-11-00-05 > test.2011-03-07-11-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299495605.000000 10.0.0.1 20 10.0.0.2 1032 1299499195.000000 10.0.0.2 20 10.0.0.3 8 +#close 2011-03-07-12-00-05 > test.2011-03-07-12-00-05.log #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path test +#open 2011-03-07-03-00-05 #fields t id.orig_h id.orig_p id.resp_h id.resp_p #types time addr port addr port 1299499205.000000 10.0.0.1 20 10.0.0.2 1033 1299502795.000000 10.0.0.2 20 10.0.0.3 9 +#close 2011-03-07-12-59-55 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.stdout/output b/testing/btest/Baseline/scripts.base.frameworks.logging.stdout/output index 84521cb645..6ff5237afa 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.stdout/output +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.stdout/output @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path /dev/stdout +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167067.393739 1.2.3.4 1234 2.3.4.5 80 success unknown -1315167067.393739 1.2.3.4 1234 2.3.4.5 80 failure US -1315167067.393739 1.2.3.4 1234 2.3.4.5 80 failure UK -1315167067.393739 1.2.3.4 1234 2.3.4.5 80 success BR -1315167067.393739 1.2.3.4 1234 2.3.4.5 80 failure MX +1342748961.732599 1.2.3.4 1234 2.3.4.5 80 success unknown +1342748961.732599 1.2.3.4 1234 2.3.4.5 80 failure US +1342748961.732599 1.2.3.4 1234 2.3.4.5 80 failure UK +1342748961.732599 1.2.3.4 1234 2.3.4.5 80 success BR +1342748961.732599 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-49-21 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.test-logging/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.test-logging/ssh.log index 5b93b6e23b..d2d484e02f 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.test-logging/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.test-logging/ssh.log @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-21 #fields t id.orig_h id.orig_p id.resp_h id.resp_p status country #types time addr port addr port string string -1315167067.507542 1.2.3.4 1234 2.3.4.5 80 success unknown -1315167067.507542 1.2.3.4 1234 2.3.4.5 80 failure US -1315167067.507542 1.2.3.4 1234 2.3.4.5 80 failure UK -1315167067.507542 1.2.3.4 1234 2.3.4.5 80 success BR -1315167067.507542 1.2.3.4 1234 2.3.4.5 80 failure MX +1342748961.748481 1.2.3.4 1234 2.3.4.5 80 success unknown +1342748961.748481 1.2.3.4 1234 2.3.4.5 80 failure US +1342748961.748481 1.2.3.4 1234 2.3.4.5 80 failure UK +1342748961.748481 1.2.3.4 1234 2.3.4.5 80 success BR +1342748961.748481 1.2.3.4 1234 2.3.4.5 80 failure MX +#close 2012-07-20-01-49-21 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.types/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.types/ssh.log index ffd579c224..6b75d056cf 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.types/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.types/ssh.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field EMPTY +#unset_field - #path ssh +#open 2012-07-20-01-49-22 
#fields b i e c p sn a d t iv s sc ss se vc ve f -#types bool int enum count port subnet addr double time interval string table table table vector vector func -T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +#types bool int enum count port subnet addr double time interval string table[count] table[string] table[string] vector[count] vector[string] func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1342748962.114672 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +#close 2012-07-20-01-49-22 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.unset-record/testing.log b/testing/btest/Baseline/scripts.base.frameworks.logging.unset-record/testing.log index 12bb1d1704..0ebe8838ad 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.unset-record/testing.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.unset-record/testing.log @@ -1,6 +1,11 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path testing +#open 2012-07-20-01-49-22 #fields a.val1 a.val2 b #types count count count - - 6 1 2 3 +#close 2012-07-20-01-49-22 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.vec/ssh.log b/testing/btest/Baseline/scripts.base.frameworks.logging.vec/ssh.log index b9a54404ed..3e8e1e737e 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.logging.vec/ssh.log +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.vec/ssh.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path ssh +#open 2012-07-20-01-49-22 #fields vec -#types vector +#types vector[string] -,2,-,-,5 +#close 2012-07-20-01-49-22 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-2-2.log b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-2-2.log new file mode 100644 index 0000000000..cbc90d9926 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-2-2.log @@ -0,0 +1,23 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http-2-2 +#open 2011-03-18-19-06-08 +#fields status_code +#types count +304 +304 +304 +304 +304 +304 +304 +304 +304 +304 +304 +304 +304 +304 +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-2.log b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-2.log new file mode 100644 index 0000000000..8f66184146 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-2.log @@ -0,0 +1,23 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http-2 +#open 2011-03-18-19-06-08 +#fields host +#types string +bits.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +meta.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +upload.wikimedia.org +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-3.log b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-3.log 
new file mode 100644 index 0000000000..d64b9aa128 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http-3.log @@ -0,0 +1,23 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http-3 +#open 2011-03-18-19-06-08 +#fields uri +#types string +/skins-1.5/monobook/main.css +/wikipedia/commons/6/63/Wikipedia-logo.png +/wikipedia/commons/thumb/b/bb/Wikipedia_wordmark.svg/174px-Wikipedia_wordmark.svg.png +/wikipedia/commons/b/bd/Bookshelf-40x201_6.png +/wikipedia/commons/thumb/8/8a/Wikinews-logo.png/35px-Wikinews-logo.png +/wikipedia/commons/4/4a/Wiktionary-logo-en-35px.png +/wikipedia/commons/thumb/f/fa/Wikiquote-logo.svg/35px-Wikiquote-logo.svg.png +/images/wikimedia-button.png +/wikipedia/commons/thumb/f/fa/Wikibooks-logo.svg/35px-Wikibooks-logo.svg.png +/wikipedia/commons/thumb/d/df/Wikispecies-logo.svg/35px-Wikispecies-logo.svg.png +/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/35px-Wikisource-logo.svg.png +/wikipedia/commons/thumb/4/4a/Commons-logo.svg/35px-Commons-logo.svg.png +/wikipedia/commons/thumb/9/91/Wikiversity-logo.svg/35px-Wikiversity-logo.svg.png +/wikipedia/commons/thumb/7/75/Wikimedia_Community_Logo.svg/35px-Wikimedia_Community_Logo.svg.png +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http.log b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http.log new file mode 100644 index 0000000000..97273995bc --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/http.log @@ -0,0 +1,23 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http +#open 2011-03-18-19-06-08 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1300475168.784020 j4u32Pc5bif 141.142.220.118 48649 208.80.152.118 80 1 GET bits.wikimedia.org /skins-1.5/monobook/main.css http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.916018 VW0XPVINV8a 141.142.220.118 49997 208.80.152.3 80 1 GET upload.wikimedia.org /wikipedia/commons/6/63/Wikipedia-logo.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.916183 3PKsZ2Uye21 141.142.220.118 49996 208.80.152.3 80 1 GET upload.wikimedia.org /wikipedia/commons/thumb/b/bb/Wikipedia_wordmark.svg/174px-Wikipedia_wordmark.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.918358 GSxOnSLghOa 141.142.220.118 49998 208.80.152.3 80 1 GET upload.wikimedia.org /wikipedia/commons/b/bd/Bookshelf-40x201_6.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.952307 Tw8jXtpTGu6 141.142.220.118 50000 
208.80.152.3 80 1 GET upload.wikimedia.org /wikipedia/commons/thumb/8/8a/Wikinews-logo.png/35px-Wikinews-logo.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.952296 P654jzLoe3a 141.142.220.118 49999 208.80.152.3 80 1 GET upload.wikimedia.org /wikipedia/commons/4/4a/Wiktionary-logo-en-35px.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.954820 0Q4FH8sESw5 141.142.220.118 50001 208.80.152.3 80 1 GET upload.wikimedia.org /wikipedia/commons/thumb/f/fa/Wikiquote-logo.svg/35px-Wikiquote-logo.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.962687 i2rO3KD1Syg 141.142.220.118 35642 208.80.152.2 80 1 GET meta.wikimedia.org /images/wikimedia-button.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.975934 VW0XPVINV8a 141.142.220.118 49997 208.80.152.3 80 2 GET upload.wikimedia.org /wikipedia/commons/thumb/f/fa/Wikibooks-logo.svg/35px-Wikibooks-logo.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.976436 3PKsZ2Uye21 141.142.220.118 49996 208.80.152.3 80 2 GET upload.wikimedia.org /wikipedia/commons/thumb/d/df/Wikispecies-logo.svg/35px-Wikispecies-logo.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475168.979264 GSxOnSLghOa 141.142.220.118 49998 208.80.152.3 80 2 GET upload.wikimedia.org /wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/35px-Wikisource-logo.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475169.014619 Tw8jXtpTGu6 141.142.220.118 50000 208.80.152.3 80 2 GET upload.wikimedia.org /wikipedia/commons/thumb/4/4a/Commons-logo.svg/35px-Commons-logo.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475169.014593 P654jzLoe3a 141.142.220.118 49999 208.80.152.3 80 2 GET upload.wikimedia.org /wikipedia/commons/thumb/9/91/Wikiversity-logo.svg/35px-Wikiversity-logo.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +1300475169.014927 0Q4FH8sESw5 141.142.220.118 50001 208.80.152.3 80 2 GET upload.wikimedia.org /wikipedia/commons/thumb/7/75/Wikimedia_Community_Logo.svg/35px-Wikimedia_Community_Logo.svg.png http://www.wikipedia.org/ Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15 0 0 304 Not Modified - - - (empty) - - - - - - +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/reporter.log 
b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/reporter.log new file mode 100644 index 0000000000..35e9134583 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.logging.writer-path-conflict/reporter.log @@ -0,0 +1,12 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path reporter +#open 2011-03-18-19-06-08 +#fields ts level message location +#types time enum string string +1300475168.843894 Reporter::WARNING Write using filter 'host-only' on path 'http' changed to use new path 'http-2' to avoid conflict with filter 'default' (empty) +1300475168.843894 Reporter::WARNING Write using filter 'uri-only' on path 'http' changed to use new path 'http-3' to avoid conflict with filter 'default' (empty) +1300475168.843894 Reporter::WARNING Write using filter 'status-only' on path 'http-2' changed to use new path 'http-2-2' to avoid conflict with filter 'host-only' (empty) +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.base.frameworks.metrics.basic-cluster/manager-1.metrics.log b/testing/btest/Baseline/scripts.base.frameworks.metrics.basic-cluster/manager-1.metrics.log index 1677297ecc..cb1bd5af01 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.metrics.basic-cluster/manager-1.metrics.log +++ b/testing/btest/Baseline/scripts.base.frameworks.metrics.basic-cluster/manager-1.metrics.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path metrics +#open 2012-07-20-01-50-41 #fields ts metric_id filter_name index.host index.str index.network value #types time enum string addr string subnet count -1317950616.401733 TEST_METRIC foo-bar 6.5.4.3 - - 4 -1317950616.401733 TEST_METRIC foo-bar 1.2.3.4 - - 6 -1317950616.401733 TEST_METRIC foo-bar 7.2.1.5 - - 2 +1342749041.601712 TEST_METRIC foo-bar 6.5.4.3 - - 4 +1342749041.601712 TEST_METRIC foo-bar 7.2.1.5 - - 2 +1342749041.601712 TEST_METRIC foo-bar 1.2.3.4 - - 6 +#close 2012-07-20-01-50-49 diff --git a/testing/btest/Baseline/scripts.base.frameworks.metrics.basic/metrics.log b/testing/btest/Baseline/scripts.base.frameworks.metrics.basic/metrics.log index 45334cf3d7..fb6476ee88 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.metrics.basic/metrics.log +++ b/testing/btest/Baseline/scripts.base.frameworks.metrics.basic/metrics.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path metrics +#open 2012-07-20-01-49-22 #fields ts metric_id filter_name index.host index.str index.network value #types time enum string addr string subnet count -1315167083.455574 TEST_METRIC foo-bar 6.5.4.3 - - 2 -1315167083.455574 TEST_METRIC foo-bar 1.2.3.4 - - 3 -1315167083.455574 TEST_METRIC foo-bar 7.2.1.5 - - 1 +1342748962.841548 TEST_METRIC foo-bar 6.5.4.3 - - 2 +1342748962.841548 TEST_METRIC foo-bar 7.2.1.5 - - 1 +1342748962.841548 TEST_METRIC foo-bar 1.2.3.4 - - 3 +#close 2012-07-20-01-49-22 diff --git a/testing/btest/Baseline/scripts.base.frameworks.metrics.cluster-intermediate-update/manager-1.notice.log b/testing/btest/Baseline/scripts.base.frameworks.metrics.cluster-intermediate-update/manager-1.notice.log index f5df2e96f3..217b3ed49b 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.metrics.cluster-intermediate-update/manager-1.notice.log +++ b/testing/btest/Baseline/scripts.base.frameworks.metrics.cluster-intermediate-update/manager-1.notice.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path notice -#fields ts uid 
id.orig_h id.orig_p id.resp_h id.resp_p note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network -#types time string addr port addr port enum string string addr addr port count string table table interval bool string string string double double addr string subnet -1316952194.679491 - - - - - Test_Notice Threshold crossed by metric_index(host=1.2.3.4) 100/100 - 1.2.3.4 - - 100 manager-1 Notice::ACTION_LOG 6 3600.000000 - - - - - - 1.2.3.4 - - +#open 2012-07-20-01-50-59 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network +#types time string addr port addr port enum enum string string addr addr port count string table[enum] table[count] interval bool string string string double double addr string subnet +1342749059.978651 - - - - - - Test_Notice Threshold crossed by metric_index(host=1.2.3.4) 100/100 - 1.2.3.4 - - 100 manager-1 Notice::ACTION_LOG 6 3600.000000 F - - - - - 1.2.3.4 - - +#close 2012-07-20-01-51-08 diff --git a/testing/btest/Baseline/scripts.base.frameworks.metrics.notice/notice.log b/testing/btest/Baseline/scripts.base.frameworks.metrics.notice/notice.log index 33745500e0..ba6c680e27 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.metrics.notice/notice.log +++ b/testing/btest/Baseline/scripts.base.frameworks.metrics.notice/notice.log @@ -1,6 +1,11 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path notice -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network -#types time string addr port addr port enum string string addr addr port count string table table interval bool string string string double double addr string subnet -1316952223.891502 - - - - - Test_Notice Threshold crossed by metric_index(host=1.2.3.4) 3/2 - 1.2.3.4 - - 3 bro Notice::ACTION_LOG 6 3600.000000 - - - - - - 1.2.3.4 - - -1316952223.891502 - - - - - Test_Notice Threshold crossed by metric_index(host=6.5.4.3) 2/2 - 6.5.4.3 - - 2 bro Notice::ACTION_LOG 6 3600.000000 - - - - - - 6.5.4.3 - - +#open 2012-07-20-01-49-23 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network +#types time string addr port addr port enum enum string string addr addr port count string table[enum] table[count] interval bool string string string double double addr string subnet +1342748963.085888 - - - - - - Test_Notice Threshold crossed by metric_index(host=1.2.3.4) 3/2 - 1.2.3.4 - - 3 bro Notice::ACTION_LOG 6 3600.000000 F - - - - - 1.2.3.4 - - +1342748963.085888 - - - - - - Test_Notice Threshold crossed by metric_index(host=6.5.4.3) 2/2 - 6.5.4.3 - - 2 bro Notice::ACTION_LOG 6 3600.000000 F - - - - - 6.5.4.3 - - +#close 
2012-07-20-01-49-23 diff --git a/testing/btest/Baseline/scripts.base.frameworks.notice.cluster/manager-1.notice.log b/testing/btest/Baseline/scripts.base.frameworks.notice.cluster/manager-1.notice.log index 0662c13294..6c93cb875e 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.notice.cluster/manager-1.notice.log +++ b/testing/btest/Baseline/scripts.base.frameworks.notice.cluster/manager-1.notice.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path notice -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network -#types time string addr port addr port enum string string addr addr port count string table table interval bool string string string double double addr string subnet -1316952264.931290 - - - - - Test_Notice test notice! - - - - - worker-1 Notice::ACTION_LOG 6 3600.000000 - - - - - - - - - +#open 2012-07-20-01-51-18 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network +#types time string addr port addr port enum enum string string addr addr port count string table[enum] table[count] interval bool string string string double double addr string subnet +1342749078.270791 - - - - - - Test_Notice test notice! - - - - - worker-1 Notice::ACTION_LOG 6 3600.000000 F - - - - - - - - +#close 2012-07-20-01-51-27 diff --git a/testing/btest/Baseline/scripts.base.frameworks.notice.mail-alarms/alarm-mail.txt b/testing/btest/Baseline/scripts.base.frameworks.notice.mail-alarms/alarm-mail.txt new file mode 100644 index 0000000000..e2cd51edd1 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.notice.mail-alarms/alarm-mail.txt @@ -0,0 +1,4 @@ +> 2005-10-07-23:23:55 Test_Notice 141.42.64.125:56730/tcp -> 125.190.109.199:80/tcp (uid arKYeMETxOg) + test + # 141.42.64.125 = 125.190.109.199 = + diff --git a/testing/btest/Baseline/scripts.base.frameworks.notice.suppression-cluster/manager-1.notice.log b/testing/btest/Baseline/scripts.base.frameworks.notice.suppression-cluster/manager-1.notice.log index 6e0214b7d3..88f25b066f 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.notice.suppression-cluster/manager-1.notice.log +++ b/testing/btest/Baseline/scripts.base.frameworks.notice.suppression-cluster/manager-1.notice.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path notice -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network -#types time string addr port addr port enum string string addr addr port count string table table interval bool string string string double double addr string subnet -1316950574.408256 - - - - - Test_Notice test notice! 
- - - - - worker-2 Notice::ACTION_LOG 6 3600.000000 - - - - - - - - - +#open 2012-07-20-01-51-36 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network +#types time string addr port addr port enum enum string string addr addr port count string table[enum] table[count] interval bool string string string double double addr string subnet +1342749096.545663 - - - - - - Test_Notice test notice! - - - - - worker-2 Notice::ACTION_LOG 6 3600.000000 F - - - - - - - - +#close 2012-07-20-01-51-45 diff --git a/testing/btest/Baseline/scripts.base.frameworks.notice.suppression/notice.log b/testing/btest/Baseline/scripts.base.frameworks.notice.suppression/notice.log index 6b4c925e0f..7c7254f87e 100644 --- a/testing/btest/Baseline/scripts.base.frameworks.notice.suppression/notice.log +++ b/testing/btest/Baseline/scripts.base.frameworks.notice.suppression/notice.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path notice -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude -#types time string addr port addr port enum string string addr addr port count string table table interval bool string string string double double -1316950497.513136 - - - - - Test_Notice test - - - - - bro Notice::ACTION_LOG 6 3600.000000 - - - - - - +#open 2012-07-20-01-49-23 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude +#types time string addr port addr port enum enum string string addr addr port count string table[enum] table[count] interval bool string string string double double +1342748963.685754 - - - - - - Test_Notice test - - - - - bro Notice::ACTION_LOG 6 3600.000000 F - - - - - +#close 2012-07-20-01-49-23 diff --git a/testing/btest/Baseline/scripts.base.frameworks.reporter.disable-stderr/.stderr b/testing/btest/Baseline/scripts.base.frameworks.reporter.disable-stderr/.stderr new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/scripts.base.frameworks.reporter.disable-stderr/reporter.log b/testing/btest/Baseline/scripts.base.frameworks.reporter.disable-stderr/reporter.log new file mode 100644 index 0000000000..144c094b2f --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.reporter.disable-stderr/reporter.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path reporter +#open 2012-08-10-20-09-16 +#fields ts level message location +#types time enum string string +0.000000 Reporter::ERROR no such index (test[3]) /da/home/robin/bro/master/testing/btest/.tmp/scripts.base.frameworks.reporter.disable-stderr/disable-stderr.bro, line 12 +#close 2012-08-10-20-09-16 diff --git a/testing/btest/Baseline/scripts.base.frameworks.reporter.stderr/.stderr b/testing/btest/Baseline/scripts.base.frameworks.reporter.stderr/.stderr new file mode 100644 index 0000000000..78af1e7a73 --- /dev/null +++ 
b/testing/btest/Baseline/scripts.base.frameworks.reporter.stderr/.stderr @@ -0,0 +1 @@ +ERROR: no such index (test[3]) (/blah/testing/btest/.tmp/scripts.base.frameworks.reporter.stderr/stderr.bro, line 9) diff --git a/testing/btest/Baseline/scripts.base.frameworks.reporter.stderr/reporter.log b/testing/btest/Baseline/scripts.base.frameworks.reporter.stderr/reporter.log new file mode 100644 index 0000000000..b314bc45c3 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.frameworks.reporter.stderr/reporter.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path reporter +#open 2012-08-10-20-09-23 +#fields ts level message location +#types time enum string string +0.000000 Reporter::ERROR no such index (test[3]) /da/home/robin/bro/master/testing/btest/.tmp/scripts.base.frameworks.reporter.stderr/stderr.bro, line 9 +#close 2012-08-10-20-09-23 diff --git a/testing/btest/Baseline/scripts.base.protocols.conn.contents-default-extract/contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_orig.dat b/testing/btest/Baseline/scripts.base.protocols.conn.contents-default-extract/contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_orig.dat new file mode 100644 index 0000000000..056ab8a44c --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.conn.contents-default-extract/contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_orig.dat @@ -0,0 +1,22 @@ +USER anonymous +PASS test +SYST +FEAT +PWD +EPSV +LIST +EPSV +NLST +TYPE I +SIZE robots.txt +EPSV +RETR robots.txt +MDTM robots.txt +SIZE robots.txt +EPRT |2|2001:470:1f11:81f:c999:d94:aa7c:2e3e|49189| +RETR robots.txt +MDTM robots.txt +TYPE A +EPRT |2|2001:470:1f11:81f:c999:d94:aa7c:2e3e|49190| +LIST +QUIT diff --git a/testing/btest/Baseline/scripts.base.protocols.conn.contents-default-extract/contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_resp.dat b/testing/btest/Baseline/scripts.base.protocols.conn.contents-default-extract/contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_resp.dat new file mode 100644 index 0000000000..05fe8b57d8 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.conn.contents-default-extract/contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_resp.dat @@ -0,0 +1,73 @@ +220 ftp.NetBSD.org FTP server (NetBSD-ftpd 20100320) ready. +331 Guest login ok, type your name as password. +230- + The NetBSD Project FTP Server located in Redwood City, CA, USA + 1 Gbps connectivity courtesy of , , + Internet Systems Consortium WELCOME! /( )` + \ \___ / | + +--- Currently Supported Platforms ----+ /- _ `-/ ' + | acorn[26,32], algor, alpha, amd64, | (/\/ \ \ /\ + | amiga[,ppc], arc, atari, bebox, | / / | ` \ + | cats, cesfic, cobalt, dreamcast, | O O ) / | + | evb[arm,mips,ppc,sh3], hp[300,700], | `-^--'`< ' + | hpc[arm,mips,sh], i386, | (_.) _ ) / + | ibmnws, iyonix, luna68k, | .___/` / + | mac[m68k,ppc], mipsco, mmeye, | `-----' / + | mvme[m68k,ppc], netwinders, | <----. __ / __ \ + | news[m68k,mips], next68k, ofppc, | <----|====O)))==) \) /==== + | playstation2, pmax, prep, sandpoint, | <----' `--' `.__,' \ + | sbmips, sgimips, shark, sparc[,64], | | | + | sun[2,3], vax, x68k, xen | \ / + +--------------------------------------+ ______( (_ / \_____ + See our website at http://www.NetBSD.org/ ,' ,-----' | \ + We log all FTP transfers and commands. 
`--{__________) (FL) \/ +230- + EXPORT NOTICE + + Please note that portions of this FTP site contain cryptographic + software controlled under the Export Administration Regulations (EAR). + + None of this software may be downloaded or otherwise exported or + re-exported into (or to a national or resident of) Cuba, Iran, Libya, + Sudan, North Korea, Syria or any other country to which the U.S. has + embargoed goods. + + By downloading or using said software, you are agreeing to the + foregoing and you are representing and warranting that you are not + located in, under the control of, or a national or resident of any + such country or on any such list. +230 Guest login ok, access restrictions apply. +215 UNIX Type: L8 Version: NetBSD-ftpd 20100320 +211-Features supported + MDTM + MLST Type*;Size*;Modify*;Perm*;Unique*; + REST STREAM + SIZE + TVFS +211 End +257 "/" is the current directory. +229 Entering Extended Passive Mode (|||57086|) +150 Opening ASCII mode data connection for '/bin/ls'. +226 Transfer complete. +229 Entering Extended Passive Mode (|||57087|) +150 Opening ASCII mode data connection for 'file list'. +226 Transfer complete. +200 Type set to I. +213 77 +229 Entering Extended Passive Mode (|||57088|) +150 Opening BINARY mode data connection for 'robots.txt' (77 bytes). +226 Transfer complete. +213 20090816112038 +213 77 +200 EPRT command successful. +150 Opening BINARY mode data connection for 'robots.txt' (77 bytes). +226 Transfer complete. +213 20090816112038 +200 Type set to A. +200 EPRT command successful. +150 Opening ASCII mode data connection for '/bin/ls'. +226 Transfer complete. +221- + Data traffic for this session was 154 bytes in 2 files. + Total traffic for this session was 4512 bytes in 5 transfers. +221 Thank you for using the FTP service on ftp.NetBSD.org. 
diff --git a/testing/btest/Baseline/scripts.base.protocols.conn.polling/out1 b/testing/btest/Baseline/scripts.base.protocols.conn.polling/out1 new file mode 100644 index 0000000000..9cba678461 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.conn.polling/out1 @@ -0,0 +1,7 @@ +new_connection, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp] +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 0 +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 1 +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 2 +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 3 +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 4 +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 5 diff --git a/testing/btest/Baseline/scripts.base.protocols.conn.polling/out2 b/testing/btest/Baseline/scripts.base.protocols.conn.polling/out2 new file mode 100644 index 0000000000..8476915d0a --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.conn.polling/out2 @@ -0,0 +1,4 @@ +new_connection, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp] +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 0 +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 1 +callback, [orig_h=192.168.3.103, orig_p=54102/tcp, resp_h=128.146.216.51, resp_p=80/tcp], 2 diff --git a/testing/btest/Baseline/scripts.base.protocols.dns.zero-responses/dns.log b/testing/btest/Baseline/scripts.base.protocols.dns.zero-responses/dns.log new file mode 100644 index 0000000000..14ad7b77bc --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.dns.zero-responses/dns.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path dns +#open 2012-10-05-15-59-39 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected +#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool +1349445121.080922 UWkUyAuUGXf 10.0.0.64 49204 146.186.163.66 53 udp 17323 psu.edu 1 C_INTERNET 28 AAAA 0 NOERROR F F T F 0 - - F +#close 2012-10-05-15-59-39 diff --git a/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv4/conn.log b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv4/conn.log new file mode 100644 index 0000000000..3520980833 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv4/conn.log @@ -0,0 +1,14 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2012-02-21-16-53-13 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1329843175.736107 arKYeMETxOg 141.142.220.235 37604 199.233.217.249 56666 tcp ftp-data 0.112432 0 342 SF - 0 ShAdfFa 4 216 4 562 (empty) +1329843179.871641 k6kgXLOoSKl 141.142.220.235 59378 199.233.217.249 56667 tcp ftp-data 0.111218 0 77 SF - 0 ShAdfFa 4 216 4 
297 (empty) +1329843194.151526 nQcgTWjvg4c 199.233.217.249 61920 141.142.220.235 33582 tcp ftp-data 0.056211 342 0 SF - 0 ShADaFf 5 614 3 164 (empty) +1329843197.783443 j4u32Pc5bif 199.233.217.249 61918 141.142.220.235 37835 tcp ftp-data 0.056005 77 0 SF - 0 ShADaFf 5 349 3 164 (empty) +1329843161.968492 UWkUyAuUGXf 141.142.220.235 50003 199.233.217.249 21 tcp ftp 38.055625 180 3146 SF - 0 ShAdDfFa 38 2164 25 4458 (empty) +#close 2012-02-21-16-53-20 diff --git a/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv4/ftp.log b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv4/ftp.log new file mode 100644 index 0000000000..0d0a8f57f1 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv4/ftp.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path ftp +#open 2012-02-21-16-53-13 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p user password command arg mime_type mime_desc file_size reply_code reply_msg tags extraction_file +#types time string addr port addr port string string string string string string count count string table[string] file +1329843179.926563 UWkUyAuUGXf 141.142.220.235 50003 199.233.217.249 21 anonymous test RETR ftp://199.233.217.249/./robots.txt text/plain ASCII text 77 226 Transfer complete. - - +1329843197.727769 UWkUyAuUGXf 141.142.220.235 50003 199.233.217.249 21 anonymous test RETR ftp://199.233.217.249/./robots.txt text/plain ASCII text, with CRLF line terminators 77 226 Transfer complete. - - +#close 2012-02-21-16-53-20 diff --git a/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv6/conn.log b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv6/conn.log new file mode 100644 index 0000000000..3d81f45670 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv6/conn.log @@ -0,0 +1,15 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2012-02-15-17-43-15 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1329327783.316897 arKYeMETxOg 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49186 2001:470:4867:99::21 57086 tcp ftp-data 0.219721 0 342 SF - 0 ShAdfFa 5 372 4 642 (empty) +1329327786.524332 k6kgXLOoSKl 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49187 2001:470:4867:99::21 57087 tcp ftp-data 0.217501 0 43 SF - 0 ShAdfFa 5 372 4 343 (empty) +1329327787.289095 nQcgTWjvg4c 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49188 2001:470:4867:99::21 57088 tcp ftp-data 0.217941 0 77 SF - 0 ShAdfFa 5 372 4 377 (empty) +1329327795.571921 j4u32Pc5bif 2001:470:4867:99::21 55785 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49189 tcp ftp-data 0.109813 77 0 SF - 0 ShADFaf 5 449 4 300 (empty) +1329327777.822004 UWkUyAuUGXf 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49185 2001:470:4867:99::21 21 tcp ftp 26.658219 310 3448 SF - 0 ShAdDfFa 57 4426 34 5908 (empty) +1329327800.017649 TEfuqmmG4bh 2001:470:4867:99::21 55647 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49190 tcp ftp-data 0.109181 342 0 SF - 0 ShADFaf 5 714 4 300 (empty) +#close 2012-02-15-17-43-24 diff --git a/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv6/ftp.log b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv6/ftp.log new file mode 100644 index 0000000000..62ea4df18d --- /dev/null +++ 
b/testing/btest/Baseline/scripts.base.protocols.ftp.ftp-ipv6/ftp.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path ftp +#open 2012-02-15-17-43-07 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p user password command arg mime_type mime_desc file_size reply_code reply_msg tags extraction_file +#types time string addr port addr port string string string string string string count count string table[string] file +1329327787.396984 UWkUyAuUGXf 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49185 2001:470:4867:99::21 21 anonymous test RETR ftp://[2001:470:4867:99::21]/robots.txt - - 77 226 Transfer complete. - - +1329327795.463946 UWkUyAuUGXf 2001:470:1f11:81f:c999:d94:aa7c:2e3e 49185 2001:470:4867:99::21 21 anonymous test RETR ftp://[2001:470:4867:99::21]/robots.txt - - 77 226 Transfer complete. - - +#close 2012-02-15-17-43-24 diff --git a/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/conn.log b/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/conn.log new file mode 100644 index 0000000000..f3ac10b5b0 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/conn.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2012-10-05-21-45-15 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1348168976.274919 UWkUyAuUGXf 192.168.57.103 60108 192.168.57.101 2811 tcp ssl,ftp,gridftp 0.294743 4491 6659 SF - 0 ShAdDaFf 22 5643 21 7759 (empty) +1348168976.546371 arKYeMETxOg 192.168.57.103 35391 192.168.57.101 55968 tcp ssl,gridftp-data 0.011938 2135 3196 S1 - 0 ShADad 8 2559 6 3516 (empty) +#close 2012-10-05-21-45-15 diff --git a/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/notice.log b/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/notice.log new file mode 100644 index 0000000000..f9292344a8 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/notice.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path notice +#open 2012-10-05-21-45-15 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto note msg sub src dst p n peer_descr actions policy_items suppress_for dropped remote_location.country_code remote_location.region remote_location.city remote_location.latitude remote_location.longitude metric_index.host metric_index.str metric_index.network +#types time string addr port addr port enum enum string string addr addr port count string table[enum] table[count] interval bool string string string double double addr string subnet +1348168976.558309 arKYeMETxOg 192.168.57.103 35391 192.168.57.101 55968 tcp GridFTP::Data_Channel GridFTP data channel over threshold 2 bytes - 192.168.57.103 192.168.57.101 55968 - bro Notice::ACTION_LOG 6 3600.000000 F - - - - - - - - +#close 2012-10-05-21-45-15 diff --git a/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/ssl.log b/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/ssl.log new file mode 100644 index 0000000000..512676bbb6 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.ftp.gridftp/ssl.log @@ -0,0 +1,11 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path ssl +#open 
2012-10-05-21-45-15 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p version cipher server_name session_id subject issuer_subject not_valid_before not_valid_after last_alert client_subject client_issuer_subject +#types time string addr port addr port string string string string string string time time string string string +1348168976.508038 UWkUyAuUGXf 192.168.57.103 60108 192.168.57.101 2811 TLSv10 TLS_RSA_WITH_AES_256_CBC_SHA - - CN=host/alpha,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=Globus Simple CA,OU=simpleCA-alpha,OU=GlobusTest,O=Grid 1348161979.000000 1379697979.000000 - CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid +1348168976.551422 arKYeMETxOg 192.168.57.103 35391 192.168.57.101 55968 TLSv10 TLS_RSA_WITH_NULL_SHA - - CN=932373381,CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid 1348168676.000000 1348206441.000000 - CN=917532944,CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid CN=Jon Siwek,OU=local,OU=simpleCA-alpha,OU=GlobusTest,O=Grid +#close 2012-10-05-21-45-15 diff --git a/testing/btest/Baseline/scripts.base.protocols.http.100-continue/http.log b/testing/btest/Baseline/scripts.base.protocols.http.100-continue/http.log index 812b4bc151..13c8b12502 100644 --- a/testing/btest/Baseline/scripts.base.protocols.http.100-continue/http.log +++ b/testing/btest/Baseline/scripts.base.protocols.http.100-continue/http.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2009-03-19-05-21-36 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string string file -1237440095.634312 UWkUyAuUGXf 192.168.3.103 54102 128.146.216.51 80 1 POST www.osu.edu / - curl/7.17.1 (i386-apple-darwin8.11.1) libcurl/7.17.1 zlib/1.2.3 2001 60731 200 OK 100 Continue - - - - - text/html - - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1237440095.634312 UWkUyAuUGXf 192.168.3.103 54102 128.146.216.51 80 1 POST www.osu.edu / - curl/7.17.1 (i386-apple-darwin8.11.1) libcurl/7.17.1 zlib/1.2.3 2001 60731 200 OK 100 Continue - (empty) - - - text/html - - +#close 2009-03-19-05-21-36 diff --git a/testing/btest/Baseline/scripts.base.protocols.http.http-extract-files/http-item_141.42.64.125:56730-125.190.109.199:80_resp_1.dat b/testing/btest/Baseline/scripts.base.protocols.http.http-extract-files/http-item_141.42.64.125:56730-125.190.109.199:80_resp_1.dat new file mode 100644 index 0000000000..73c369dd14 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.http.http-extract-files/http-item_141.42.64.125:56730-125.190.109.199:80_resp_1.dat @@ -0,0 +1,304 @@ + +ICIR + +ICIR
+ICIR (The ICSI Center for Internet Research) is a non-profit research institute at ICSI in Berkeley, California.
+For the three years from 1999 to 2001 we were named ACIRI, the AT&T Center for Internet Research at ICSI, and were funded by AT&T.
+
+The goals of ICIR are to:
+ • Pursue research on the Internet architecture and related networking issues,
+ • Participate actively in the research (SIGCOMM and IRTF) and standards (IETF) communities,
+ • Bridge the gap between the Internet research community and commercial interests by providing a neutral forum where topics of mutual technical interest can be addressed.
+
+People
+
+Publications
+
+Projects
+
+Research
+   Transport and Congestion
+   Traffic and Topology
+ • IDMaps (Internet Distance Mapping).
+ • The Internet Traffic Archive.
+ • MINC (Multicast-based Inference of Network-internal Characteristics).
+ • NIMI (National Internet Measurement Infrastructure).
+
+Collaborators
+
+Information for visitors and local users.
+
+Last modified: June 2004. Copyright notice.
+Older versions of this web page, in its ACIRI incarnation..
    +To report unusual activity by any of our hosts, mail abuse@aciri.org. + diff --git a/testing/btest/Baseline/scripts.base.protocols.http.http-extract-files/http.log b/testing/btest/Baseline/scripts.base.protocols.http.http-extract-files/http.log new file mode 100644 index 0000000000..0d61a6c8b3 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.http.http-extract-files/http.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path http +#open 2005-10-07-23-23-56 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1128727435.634189 arKYeMETxOg 141.42.64.125 56730 125.190.109.199 80 1 GET www.icir.org / - Wget/1.10 0 9130 200 OK - - - (empty) - - - text/html - http-item_141.42.64.125:56730-125.190.109.199:80_resp_1.dat +#close 2005-10-07-23-23-57 diff --git a/testing/btest/Baseline/scripts.base.protocols.http.http-mime-and-md5/http.log b/testing/btest/Baseline/scripts.base.protocols.http.http-mime-and-md5/http.log index 9515eb8168..409d8fc812 100644 --- a/testing/btest/Baseline/scripts.base.protocols.http.http-mime-and-md5/http.log +++ b/testing/btest/Baseline/scripts.base.protocols.http.http-mime-and-md5/http.log @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2009-11-18-20-58-04 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied mime_type md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string string file -1258577884.844956 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 1 GET www.mozilla.org /style/enhanced.css http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2675 200 OK - - - - - - - FAKE_MIME - - -1258577884.960135 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 2 GET www.mozilla.org /script/urchin.js http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 21421 200 OK - - - - - - - FAKE_MIME - - -1258577885.317160 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 3 GET www.mozilla.org /images/template/screen/bullet_utility.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 94 200 OK - - - - - - - FAKE_MIME - - -1258577885.349639 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 4 GET www.mozilla.org /images/template/screen/key-point-top.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2349 200 OK - - - - - - - image/png e0029eea80812e9a8e57b8d05d52938a - -1258577885.394612 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 5 GET www.mozilla.org /projects/calendar/images/header-sunbird.png http://www.mozilla.org/projects/calendar/calendar.css Mozilla/5.0 (Windows; U; Windows NT 5.1; 
en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 27579 200 OK - - - - - - - image/png 30aa926344f58019d047e85ba049ca1e - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string string file +1258577884.844956 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 1 GET www.mozilla.org /style/enhanced.css http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2675 200 OK - - - (empty) - - - FAKE_MIME - - +1258577884.960135 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 2 GET www.mozilla.org /script/urchin.js http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 21421 200 OK - - - (empty) - - - FAKE_MIME - - +1258577885.317160 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 3 GET www.mozilla.org /images/template/screen/bullet_utility.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 94 200 OK - - - (empty) - - - FAKE_MIME - - +1258577885.349639 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 4 GET www.mozilla.org /images/template/screen/key-point-top.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2349 200 OK - - - (empty) - - - image/png e0029eea80812e9a8e57b8d05d52938a - +1258577885.394612 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 5 GET www.mozilla.org /projects/calendar/images/header-sunbird.png http://www.mozilla.org/projects/calendar/calendar.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 27579 200 OK - - - (empty) - - - image/png 30aa926344f58019d047e85ba049ca1e - +#close 2009-11-18-20-58-32 diff --git a/testing/btest/Baseline/scripts.base.protocols.http.http-pipelining/http.log b/testing/btest/Baseline/scripts.base.protocols.http.http-pipelining/http.log index 01d62b3981..6b5e395902 100644 --- a/testing/btest/Baseline/scripts.base.protocols.http.http-pipelining/http.log +++ b/testing/btest/Baseline/scripts.base.protocols.http.http-pipelining/http.log @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path http +#open 2009-11-18-20-58-04 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth method host uri referrer user_agent request_body_len response_body_len status_code status_msg info_code info_msg filename tags username password proxied md5 extraction_file -#types time string addr port addr port count string string string string string count count count string count string string table string string table string file -1258577884.844956 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 1 GET www.mozilla.org /style/enhanced.css http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2675 200 OK - - - - - - - - - -1258577884.960135 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 2 GET www.mozilla.org /script/urchin.js http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 21421 200 OK - - - - - - - - - -1258577885.317160 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 3 GET www.mozilla.org /images/template/screen/bullet_utility.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; 
Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 94 200 OK - - - - - - - - - -1258577885.349639 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 4 GET www.mozilla.org /images/template/screen/key-point-top.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2349 200 OK - - - - - - - - - -1258577885.394612 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 5 GET www.mozilla.org /projects/calendar/images/header-sunbird.png http://www.mozilla.org/projects/calendar/calendar.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 27579 200 OK - - - - - - - - - +#types time string addr port addr port count string string string string string count count count string count string string table[enum] string string table[string] string file +1258577884.844956 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 1 GET www.mozilla.org /style/enhanced.css http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2675 200 OK - - - (empty) - - - - - +1258577884.960135 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 2 GET www.mozilla.org /script/urchin.js http://www.mozilla.org/projects/calendar/ Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 21421 200 OK - - - (empty) - - - - - +1258577885.317160 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 3 GET www.mozilla.org /images/template/screen/bullet_utility.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 94 200 OK - - - (empty) - - - - - +1258577885.349639 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 4 GET www.mozilla.org /images/template/screen/key-point-top.png http://www.mozilla.org/style/screen.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 2349 200 OK - - - (empty) - - - - - +1258577885.394612 UWkUyAuUGXf 192.168.1.104 1673 63.245.209.11 80 5 GET www.mozilla.org /projects/calendar/images/header-sunbird.png http://www.mozilla.org/projects/calendar/calendar.css Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.5) Gecko/20091102 Firefox/3.5.5 0 27579 200 OK - - - (empty) - - - - - +#close 2009-11-18-20-58-32 diff --git a/testing/btest/Baseline/scripts.base.protocols.irc.basic/irc.log b/testing/btest/Baseline/scripts.base.protocols.irc.basic/irc.log index d224556632..46adaa4c3e 100644 --- a/testing/btest/Baseline/scripts.base.protocols.irc.basic/irc.log +++ b/testing/btest/Baseline/scripts.base.protocols.irc.basic/irc.log @@ -1,8 +1,13 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path irc -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p nick user channels command value addl tags dcc_file_name dcc_file_size extraction_file -#types time string addr port addr port string string table string string string table string count file -1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 - - - NICK bloed - - - - - -1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed - - USER sdkfje sdkfje Montreal.QC.CA.Undernet.org dkdkrwq - - - - -1311189174.474127 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje - JOIN #easymovies - - - - - -1311189316.326025 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje - DCC #easymovies - - ladyvampress-default(2011-07-07)-OS.zip 42208 - +#open 
2011-07-20-19-12-44 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p nick user command value addl dcc_file_name dcc_file_size extraction_file +#types time string addr port addr port string string string string string string count file +1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 - - NICK bloed - - - - +1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed - USER sdkfje sdkfje Montreal.QC.CA.Undernet.org dkdkrwq - - - +1311189174.474127 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje JOIN #easymovies (empty) - - - +1311189316.326025 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje DCC #easymovies (empty) ladyvampress-default(2011-07-07)-OS.zip 42208 - +#close 2011-07-20-19-15-42 diff --git a/testing/btest/Baseline/scripts.base.protocols.irc.dcc-extract/irc.log b/testing/btest/Baseline/scripts.base.protocols.irc.dcc-extract/irc.log index a692d2dd4d..e204a627b1 100644 --- a/testing/btest/Baseline/scripts.base.protocols.irc.dcc-extract/irc.log +++ b/testing/btest/Baseline/scripts.base.protocols.irc.dcc-extract/irc.log @@ -1,8 +1,13 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path irc -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p nick user channels command value addl tags dcc_file_name dcc_file_size dcc_mime_type extraction_file -#types time string addr port addr port string string table string string string table string count string file -1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 - - - NICK bloed - - - - - - -1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed - - USER sdkfje sdkfje Montreal.QC.CA.Undernet.org dkdkrwq - - - - - -1311189174.474127 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje - JOIN #easymovies - - - - - - -1311189316.326025 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje - DCC #easymovies - IRC::EXTRACTED_FILE ladyvampress-default(2011-07-07)-OS.zip 42208 FAKE_MIME irc-dcc-item_192.168.1.77:57655-209.197.168.151:1024_1.dat +#open 2011-07-20-19-12-44 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p nick user command value addl dcc_file_name dcc_file_size dcc_mime_type extraction_file +#types time string addr port addr port string string string string string string count string file +1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 - - NICK bloed - - - - - +1311189164.119437 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed - USER sdkfje sdkfje Montreal.QC.CA.Undernet.org dkdkrwq - - - - +1311189174.474127 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje JOIN #easymovies (empty) - - - - +1311189316.326025 UWkUyAuUGXf 192.168.1.77 57640 66.198.80.67 6667 bloed sdkfje DCC #easymovies (empty) ladyvampress-default(2011-07-07)-OS.zip 42208 FAKE_MIME irc-dcc-item_192.168.1.77:57655-209.197.168.151:1024_1.dat +#close 2011-07-20-19-15-42 diff --git a/testing/btest/Baseline/scripts.base.protocols.smtp.basic/smtp.log b/testing/btest/Baseline/scripts.base.protocols.smtp.basic/smtp.log index b93720cfe6..ba16578dfb 100644 --- a/testing/btest/Baseline/scripts.base.protocols.smtp.basic/smtp.log +++ b/testing/btest/Baseline/scripts.base.protocols.smtp.basic/smtp.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path smtp +#open 2009-10-05-06-06-12 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth helo mailfrom rcptto date from to reply_to msg_id in_reply_to subject x_originating_ip first_received 
second_received last_reply path user_agent -#types time string addr port addr port count string string table string string table string string string string addr string string string vector string +#types time string addr port addr port count string string table[string] string string table[string] string string string string addr string string string vector[addr] string 1254722768.219663 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 GP Mon, 5 Oct 2009 11:36:07 +0530 "Gurpartap Singh" - <000301ca4581$ef9e57f0$cedb07d0$@in> - SMTP - - - 250 OK id=1Mugho-0003Dg-Un 74.53.140.153,10.10.1.4 Microsoft Office Outlook 12.0 +#close 2009-10-05-06-06-16 diff --git a/testing/btest/Baseline/scripts.base.protocols.smtp.mime-extract/smtp_entities.log b/testing/btest/Baseline/scripts.base.protocols.smtp.mime-extract/smtp_entities.log index 63b287a791..396a2e058d 100644 --- a/testing/btest/Baseline/scripts.base.protocols.smtp.mime-extract/smtp_entities.log +++ b/testing/btest/Baseline/scripts.base.protocols.smtp.mime-extract/smtp_entities.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path smtp_entities +#open 2009-10-05-06-06-10 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth filename content_len mime_type md5 extraction_file excerpt #types time string addr port addr port count string count string string file string -1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 79 FAKE_MIME - smtp-entity_10.10.1.4:1470-74.53.140.153:25_1.dat - -1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 1918 FAKE_MIME - - - -1254722770.692804 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 NEWS.txt 10823 FAKE_MIME - smtp-entity_10.10.1.4:1470-74.53.140.153:25_2.dat - +1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 79 FAKE_MIME - smtp-entity_10.10.1.4:1470-74.53.140.153:25_1.dat (empty) +1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 1918 FAKE_MIME - - (empty) +1254722770.692804 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 NEWS.txt 10823 FAKE_MIME - smtp-entity_10.10.1.4:1470-74.53.140.153:25_2.dat (empty) +#close 2009-10-05-06-06-16 diff --git a/testing/btest/Baseline/scripts.base.protocols.smtp.mime/smtp_entities.log b/testing/btest/Baseline/scripts.base.protocols.smtp.mime/smtp_entities.log index e45d8dc757..1abe35e90f 100644 --- a/testing/btest/Baseline/scripts.base.protocols.smtp.mime/smtp_entities.log +++ b/testing/btest/Baseline/scripts.base.protocols.smtp.mime/smtp_entities.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path smtp_entities +#open 2009-10-05-06-06-10 #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p trans_depth filename content_len mime_type md5 extraction_file excerpt #types time string addr port addr port count string count string string file string -1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 79 FAKE_MIME 92bca2e6cdcde73647125da7dccbdd07 - - -1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 1918 FAKE_MIME - - - -1254722770.692804 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 NEWS.txt 10823 FAKE_MIME a968bb0f9f9d95835b2e74c845877e87 - - +1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 79 FAKE_MIME 92bca2e6cdcde73647125da7dccbdd07 - (empty) +1254722770.692743 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 - 1918 FAKE_MIME - - (empty) +1254722770.692804 arKYeMETxOg 10.10.1.4 1470 74.53.140.153 25 1 NEWS.txt 10823 FAKE_MIME a968bb0f9f9d95835b2e74c845877e87 - (empty) 
+#close 2009-10-05-06-06-16 diff --git a/testing/btest/Baseline/scripts.base.protocols.socks.trace1/socks.log b/testing/btest/Baseline/scripts.base.protocols.socks.trace1/socks.log new file mode 100644 index 0000000000..b2a8ef7d4c --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.socks.trace1/socks.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path socks +#open 2012-06-20-17-23-38 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p version user status request.host request.name request_p bound.host bound.name bound_p +#types time string addr port addr port count string string addr string port addr string port +1340213015.276495 UWkUyAuUGXf 10.0.0.55 53994 60.190.189.214 8124 5 - succeeded - www.osnews.com 80 192.168.0.31 - 2688 +#close 2012-06-20-17-28-10 diff --git a/testing/btest/Baseline/scripts.base.protocols.socks.trace1/tunnel.log b/testing/btest/Baseline/scripts.base.protocols.socks.trace1/tunnel.log new file mode 100644 index 0000000000..d5aa58652e --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.socks.trace1/tunnel.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path tunnel +#open 2012-06-20-17-23-35 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action +#types time string addr port addr port enum enum +1340213015.276495 - 10.0.0.55 0 60.190.189.214 8124 Tunnel::SOCKS Tunnel::DISCOVER +#close 2012-06-20-17-28-10 diff --git a/testing/btest/Baseline/scripts.base.protocols.socks.trace2/socks.log b/testing/btest/Baseline/scripts.base.protocols.socks.trace2/socks.log new file mode 100644 index 0000000000..4053bd7359 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.socks.trace2/socks.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path socks +#open 2012-06-19-13-41-02 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p version user status request.host request.name request_p bound.host bound.name bound_p +#types time string addr port addr port count string string addr string port addr string port +1340113261.914619 UWkUyAuUGXf 10.0.0.50 59580 85.194.84.197 1080 5 - succeeded - www.google.com 443 0.0.0.0 - 443 +#close 2012-06-19-13-41-05 diff --git a/testing/btest/Baseline/scripts.base.protocols.socks.trace2/tunnel.log b/testing/btest/Baseline/scripts.base.protocols.socks.trace2/tunnel.log new file mode 100644 index 0000000000..82df9b76df --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.socks.trace2/tunnel.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path tunnel +#open 2012-06-19-13-41-01 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action +#types time string addr port addr port enum enum +1340113261.914619 - 10.0.0.50 0 85.194.84.197 1080 Tunnel::SOCKS Tunnel::DISCOVER +#close 2012-06-19-13-41-05 diff --git a/testing/btest/Baseline/scripts.base.protocols.socks.trace3/tunnel.log b/testing/btest/Baseline/scripts.base.protocols.socks.trace3/tunnel.log new file mode 100644 index 0000000000..867f3ed157 --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.socks.trace3/tunnel.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path tunnel +#open 2008-04-15-22-43-49 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p tunnel_type action +#types time string addr port addr port enum enum +1208299429.265774 - 127.0.0.1 0 127.0.0.1 
1080 Tunnel::SOCKS Tunnel::DISCOVER +#close 2008-04-15-22-43-49 diff --git a/testing/btest/Baseline/scripts.base.protocols.ssl.basic/ssl.log b/testing/btest/Baseline/scripts.base.protocols.ssl.basic/ssl.log new file mode 100644 index 0000000000..872da052ea --- /dev/null +++ b/testing/btest/Baseline/scripts.base.protocols.ssl.basic/ssl.log @@ -0,0 +1,10 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path ssl +#open 2012-10-08-16-18-56 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p version cipher server_name session_id subject issuer_subject not_valid_before not_valid_after last_alert client_subject client_issuer_subject +#types time string addr port addr port string string string string string string time time string string string +1335538392.319381 UWkUyAuUGXf 192.168.1.105 62045 74.125.224.79 443 TLSv10 TLS_ECDHE_RSA_WITH_RC4_128_SHA ssl.gstatic.com - CN=*.gstatic.com,O=Google Inc,L=Mountain View,ST=California,C=US CN=Google Internet Authority,O=Google Inc,C=US 1334102677.000000 1365639277.000000 - - - +#close 2012-10-08-16-18-56 diff --git a/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-all.log b/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-all.log index cde5156594..d5f665e4bc 100644 --- a/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-all.log +++ b/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-all.log @@ -1,8 +1,13 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path known_hosts +#open 2011-03-18-19-06-08 #fields ts host #types time addr 1300475168.783842 141.142.220.118 1300475168.783842 208.80.152.118 1300475168.915940 208.80.152.3 1300475168.962628 208.80.152.2 +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-local.log b/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-local.log index 008eb364ed..a625691aa4 100644 --- a/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-local.log +++ b/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-local.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path known_hosts +#open 2011-03-18-19-06-08 #fields ts host #types time addr 1300475168.783842 141.142.220.118 +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-remote.log b/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-remote.log index 43b28ded8a..d05ccf6081 100644 --- a/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-remote.log +++ b/testing/btest/Baseline/scripts.policy.protocols.conn.known-hosts/knownhosts-remote.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path known_hosts +#open 2011-03-18-19-06-08 #fields ts host #types time addr 1300475168.783842 208.80.152.118 1300475168.915940 208.80.152.3 1300475168.962628 208.80.152.2 +#close 2011-03-18-19-06-13 diff --git a/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-all.log b/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-all.log index ad9fa52e1c..af097e5db3 100644 --- a/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-all.log +++ 
b/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-all.log @@ -1,9 +1,14 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path known_services +#open 2011-06-24-15-51-31 #fields ts host port_num port_proto service -#types time addr port enum table +#types time addr port enum table[string] 1308930691.049431 172.16.238.131 22 tcp SSH 1308930694.550308 172.16.238.131 80 tcp HTTP 1308930716.462556 74.125.225.81 80 tcp HTTP 1308930718.361665 172.16.238.131 21 tcp FTP 1308930726.872485 141.142.192.39 22 tcp SSH +#close 2011-06-24-15-52-08 diff --git a/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-local.log b/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-local.log index 1607d69f24..7c27e63a24 100644 --- a/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-local.log +++ b/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-local.log @@ -1,7 +1,12 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path known_services +#open 2011-06-24-15-51-31 #fields ts host port_num port_proto service -#types time addr port enum table +#types time addr port enum table[string] 1308930691.049431 172.16.238.131 22 tcp SSH 1308930694.550308 172.16.238.131 80 tcp HTTP 1308930718.361665 172.16.238.131 21 tcp FTP +#close 2011-06-24-15-52-08 diff --git a/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-remote.log b/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-remote.log index 0d1210c941..77fbe1ef70 100644 --- a/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-remote.log +++ b/testing/btest/Baseline/scripts.policy.protocols.conn.known-services/knownservices-remote.log @@ -1,6 +1,11 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path known_services +#open 2011-06-24-15-51-56 #fields ts host port_num port_proto service -#types time addr port enum table +#types time addr port enum table[string] 1308930716.462556 74.125.225.81 80 tcp HTTP 1308930726.872485 141.142.192.39 22 tcp SSH +#close 2011-06-24-15-52-08 diff --git a/testing/btest/Baseline/scripts.policy.protocols.dns.event-priority/dns.log b/testing/btest/Baseline/scripts.policy.protocols.dns.event-priority/dns.log index 945960e03e..74de757007 100644 --- a/testing/btest/Baseline/scripts.policy.protocols.dns.event-priority/dns.log +++ b/testing/btest/Baseline/scripts.policy.protocols.dns.event-priority/dns.log @@ -1,5 +1,10 @@ #separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - #path dns -#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name QR AA TC RD RA Z TTL answers auth addl -#types time string addr port addr port enum count string count string count string count string bool bool bool bool bool count interval table table table -930613226.529070 UWkUyAuUGXf 212.180.42.100 25000 131.243.64.3 53 tcp 34798 - - - - - 0 NOERROR F F F F T 0 31337.000000 4.3.2.1 - - +#open 2012-10-05-17-47-40 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto trans_id query qclass qclass_name qtype qtype_name rcode rcode_name AA TC RD RA Z answers TTLs rejected auth addl +#types time string addr port addr port enum count string count string count string count string bool bool bool bool count vector[string] vector[interval] bool table[string] 
table[string] +930613226.518174 UWkUyAuUGXf 212.180.42.100 25000 131.243.64.3 53 tcp 34798 - - - - - 0 NOERROR F F F T 0 4.3.2.1 31337.000000 F - - +#close 2012-10-05-17-47-40 diff --git a/testing/btest/Baseline/signatures.bad-eval-condition/.stderr b/testing/btest/Baseline/signatures.bad-eval-condition/.stderr new file mode 100644 index 0000000000..c4de35ffe9 --- /dev/null +++ b/testing/btest/Baseline/signatures.bad-eval-condition/.stderr @@ -0,0 +1,2 @@ +error: Error in signature (./blah.sig:6): eval function parameters must be a 'signature_state' and a 'string' type (mark_conn) + diff --git a/testing/btest/Baseline/signatures.dpd/dpd-ipv4.out b/testing/btest/Baseline/signatures.dpd/dpd-ipv4.out new file mode 100644 index 0000000000..abb41f330c --- /dev/null +++ b/testing/btest/Baseline/signatures.dpd/dpd-ipv4.out @@ -0,0 +1,79 @@ +dpd_config, { + +} +signature_match [orig_h=141.142.220.235, orig_p=50003/tcp, resp_h=199.233.217.249, resp_p=21/tcp] - matched my_ftp_client +ftp_reply 199.233.217.249:21 - 220 ftp.NetBSD.org FTP server (NetBSD-ftpd 20100320) ready. +ftp_request 141.142.220.235:50003 - USER anonymous +ftp_reply 199.233.217.249:21 - 331 Guest login ok, type your name as password. +signature_match [orig_h=141.142.220.235, orig_p=50003/tcp, resp_h=199.233.217.249, resp_p=21/tcp] - matched my_ftp_server +ftp_request 141.142.220.235:50003 - PASS test +ftp_reply 199.233.217.249:21 - 230 +ftp_reply 199.233.217.249:21 - 0 The NetBSD Project FTP Server located in Redwood City, CA, USA +ftp_reply 199.233.217.249:21 - 0 1 Gbps connectivity courtesy of , , +ftp_reply 199.233.217.249:21 - 0 Internet Systems Consortium WELCOME! /( )` +ftp_reply 199.233.217.249:21 - 0 \ \___ / | +ftp_reply 199.233.217.249:21 - 0 +--- Currently Supported Platforms ----+ /- _ `-/ ' +ftp_reply 199.233.217.249:21 - 0 | acorn[26,32], algor, alpha, amd64, | (/\/ \ \ /\ +ftp_reply 199.233.217.249:21 - 0 | amiga[,ppc], arc, atari, bebox, | / / | ` \ +ftp_reply 199.233.217.249:21 - 0 | cats, cesfic, cobalt, dreamcast, | O O ) / | +ftp_reply 199.233.217.249:21 - 0 | evb[arm,mips,ppc,sh3], hp[300,700], | `-^--'`< ' +ftp_reply 199.233.217.249:21 - 0 | hpc[arm,mips,sh], i386, | (_.) _ ) / +ftp_reply 199.233.217.249:21 - 0 | ibmnws, iyonix, luna68k, | .___/` / +ftp_reply 199.233.217.249:21 - 0 | mac[m68k,ppc], mipsco, mmeye, | `-----' / +ftp_reply 199.233.217.249:21 - 0 | mvme[m68k,ppc], netwinders, | <----. __ / __ \ +ftp_reply 199.233.217.249:21 - 0 | news[m68k,mips], next68k, ofppc, | <----|====O)))==) \) /==== +ftp_reply 199.233.217.249:21 - 0 | playstation2, pmax, prep, sandpoint, | <----' `--' `.__,' \ +ftp_reply 199.233.217.249:21 - 0 | sbmips, sgimips, shark, sparc[,64], | | | +ftp_reply 199.233.217.249:21 - 0 | sun[2,3], vax, x68k, xen | \ / +ftp_reply 199.233.217.249:21 - 0 +--------------------------------------+ ______( (_ / \_____ +ftp_reply 199.233.217.249:21 - 0 See our website at http://www.NetBSD.org/ ,' ,-----' | \ +ftp_reply 199.233.217.249:21 - 0 We log all FTP transfers and commands. `--{__________) (FL) \/ +ftp_reply 199.233.217.249:21 - 0 230- +ftp_reply 199.233.217.249:21 - 0 EXPORT NOTICE +ftp_reply 199.233.217.249:21 - 0 +ftp_reply 199.233.217.249:21 - 0 Please note that portions of this FTP site contain cryptographic +ftp_reply 199.233.217.249:21 - 0 software controlled under the Export Administration Regulations (EAR). 
+ftp_reply 199.233.217.249:21 - 0 +ftp_reply 199.233.217.249:21 - 0 None of this software may be downloaded or otherwise exported or +ftp_reply 199.233.217.249:21 - 0 re-exported into (or to a national or resident of) Cuba, Iran, Libya, +ftp_reply 199.233.217.249:21 - 0 Sudan, North Korea, Syria or any other country to which the U.S. has +ftp_reply 199.233.217.249:21 - 0 embargoed goods. +ftp_reply 199.233.217.249:21 - 0 +ftp_reply 199.233.217.249:21 - 0 By downloading or using said software, you are agreeing to the +ftp_reply 199.233.217.249:21 - 0 foregoing and you are representing and warranting that you are not +ftp_reply 199.233.217.249:21 - 0 located in, under the control of, or a national or resident of any +ftp_reply 199.233.217.249:21 - 0 such country or on any such list. +ftp_reply 199.233.217.249:21 - 230 Guest login ok, access restrictions apply. +ftp_request 141.142.220.235:50003 - SYST +ftp_reply 199.233.217.249:21 - 215 UNIX Type: L8 Version: NetBSD-ftpd 20100320 +ftp_request 141.142.220.235:50003 - PASV +ftp_reply 199.233.217.249:21 - 227 Entering Passive Mode (199,233,217,249,221,90) +ftp_request 141.142.220.235:50003 - LIST +ftp_reply 199.233.217.249:21 - 150 Opening ASCII mode data connection for '/bin/ls'. +ftp_reply 199.233.217.249:21 - 226 Transfer complete. +ftp_request 141.142.220.235:50003 - TYPE I +ftp_reply 199.233.217.249:21 - 200 Type set to I. +ftp_request 141.142.220.235:50003 - PASV +ftp_reply 199.233.217.249:21 - 227 Entering Passive Mode (199,233,217,249,221,91) +ftp_request 141.142.220.235:50003 - RETR robots.txt +ftp_reply 199.233.217.249:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes). +ftp_reply 199.233.217.249:21 - 226 Transfer complete. +ftp_request 141.142.220.235:50003 - TYPE A +ftp_reply 199.233.217.249:21 - 200 Type set to A. +ftp_request 141.142.220.235:50003 - PORT 141,142,220,235,131,46 +ftp_reply 199.233.217.249:21 - 200 PORT command successful. +ftp_request 141.142.220.235:50003 - LIST +ftp_reply 199.233.217.249:21 - 150 Opening ASCII mode data connection for '/bin/ls'. +ftp_reply 199.233.217.249:21 - 226 Transfer complete. +ftp_request 141.142.220.235:50003 - TYPE I +ftp_reply 199.233.217.249:21 - 200 Type set to I. +ftp_request 141.142.220.235:50003 - PORT 141,142,220,235,147,203 +ftp_reply 199.233.217.249:21 - 200 PORT command successful. +ftp_request 141.142.220.235:50003 - RETR robots.txt +ftp_reply 199.233.217.249:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes). +ftp_reply 199.233.217.249:21 - 226 Transfer complete. +ftp_request 141.142.220.235:50003 - QUIT +ftp_reply 199.233.217.249:21 - 221 +ftp_reply 199.233.217.249:21 - 0 Data traffic for this session was 154 bytes in 2 files. +ftp_reply 199.233.217.249:21 - 0 Total traffic for this session was 4037 bytes in 4 transfers. +ftp_reply 199.233.217.249:21 - 221 Thank you for using the FTP service on ftp.NetBSD.org. diff --git a/testing/btest/Baseline/signatures.dpd/dpd-ipv6.out b/testing/btest/Baseline/signatures.dpd/dpd-ipv6.out new file mode 100644 index 0000000000..a2227ee890 --- /dev/null +++ b/testing/btest/Baseline/signatures.dpd/dpd-ipv6.out @@ -0,0 +1,100 @@ +dpd_config, { + +} +signature_match [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] - matched my_ftp_client +ftp_reply [2001:470:4867:99::21]:21 - 220 ftp.NetBSD.org FTP server (NetBSD-ftpd 20100320) ready. 
+ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - USER anonymous +ftp_reply [2001:470:4867:99::21]:21 - 331 Guest login ok, type your name as password. +signature_match [orig_h=2001:470:1f11:81f:c999:d94:aa7c:2e3e, orig_p=49185/tcp, resp_h=2001:470:4867:99::21, resp_p=21/tcp] - matched my_ftp_server +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - PASS test +ftp_reply [2001:470:4867:99::21]:21 - 230 +ftp_reply [2001:470:4867:99::21]:21 - 0 The NetBSD Project FTP Server located in Redwood City, CA, USA +ftp_reply [2001:470:4867:99::21]:21 - 0 1 Gbps connectivity courtesy of , , +ftp_reply [2001:470:4867:99::21]:21 - 0 Internet Systems Consortium WELCOME! /( )` +ftp_reply [2001:470:4867:99::21]:21 - 0 \ \___ / | +ftp_reply [2001:470:4867:99::21]:21 - 0 +--- Currently Supported Platforms ----+ /- _ `-/ ' +ftp_reply [2001:470:4867:99::21]:21 - 0 | acorn[26,32], algor, alpha, amd64, | (/\/ \ \ /\ +ftp_reply [2001:470:4867:99::21]:21 - 0 | amiga[,ppc], arc, atari, bebox, | / / | ` \ +ftp_reply [2001:470:4867:99::21]:21 - 0 | cats, cesfic, cobalt, dreamcast, | O O ) / | +ftp_reply [2001:470:4867:99::21]:21 - 0 | evb[arm,mips,ppc,sh3], hp[300,700], | `-^--'`< ' +ftp_reply [2001:470:4867:99::21]:21 - 0 | hpc[arm,mips,sh], i386, | (_.) _ ) / +ftp_reply [2001:470:4867:99::21]:21 - 0 | ibmnws, iyonix, luna68k, | .___/` / +ftp_reply [2001:470:4867:99::21]:21 - 0 | mac[m68k,ppc], mipsco, mmeye, | `-----' / +ftp_reply [2001:470:4867:99::21]:21 - 0 | mvme[m68k,ppc], netwinders, | <----. __ / __ \ +ftp_reply [2001:470:4867:99::21]:21 - 0 | news[m68k,mips], next68k, ofppc, | <----|====O)))==) \) /==== +ftp_reply [2001:470:4867:99::21]:21 - 0 | playstation2, pmax, prep, sandpoint, | <----' `--' `.__,' \ +ftp_reply [2001:470:4867:99::21]:21 - 0 | sbmips, sgimips, shark, sparc[,64], | | | +ftp_reply [2001:470:4867:99::21]:21 - 0 | sun[2,3], vax, x68k, xen | \ / +ftp_reply [2001:470:4867:99::21]:21 - 0 +--------------------------------------+ ______( (_ / \_____ +ftp_reply [2001:470:4867:99::21]:21 - 0 See our website at http://www.NetBSD.org/ ,' ,-----' | \ +ftp_reply [2001:470:4867:99::21]:21 - 0 We log all FTP transfers and commands. `--{__________) (FL) \/ +ftp_reply [2001:470:4867:99::21]:21 - 0 230- +ftp_reply [2001:470:4867:99::21]:21 - 0 EXPORT NOTICE +ftp_reply [2001:470:4867:99::21]:21 - 0 +ftp_reply [2001:470:4867:99::21]:21 - 0 Please note that portions of this FTP site contain cryptographic +ftp_reply [2001:470:4867:99::21]:21 - 0 software controlled under the Export Administration Regulations (EAR). +ftp_reply [2001:470:4867:99::21]:21 - 0 +ftp_reply [2001:470:4867:99::21]:21 - 0 None of this software may be downloaded or otherwise exported or +ftp_reply [2001:470:4867:99::21]:21 - 0 re-exported into (or to a national or resident of) Cuba, Iran, Libya, +ftp_reply [2001:470:4867:99::21]:21 - 0 Sudan, North Korea, Syria or any other country to which the U.S. has +ftp_reply [2001:470:4867:99::21]:21 - 0 embargoed goods. +ftp_reply [2001:470:4867:99::21]:21 - 0 +ftp_reply [2001:470:4867:99::21]:21 - 0 By downloading or using said software, you are agreeing to the +ftp_reply [2001:470:4867:99::21]:21 - 0 foregoing and you are representing and warranting that you are not +ftp_reply [2001:470:4867:99::21]:21 - 0 located in, under the control of, or a national or resident of any +ftp_reply [2001:470:4867:99::21]:21 - 0 such country or on any such list. +ftp_reply [2001:470:4867:99::21]:21 - 230 Guest login ok, access restrictions apply. 
+ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - SYST +ftp_reply [2001:470:4867:99::21]:21 - 215 UNIX Type: L8 Version: NetBSD-ftpd 20100320 +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - FEAT +ftp_reply [2001:470:4867:99::21]:21 - 211 Features supported +ftp_reply [2001:470:4867:99::21]:21 - 0 MDTM +ftp_reply [2001:470:4867:99::21]:21 - 0 MLST Type*;Size*;Modify*;Perm*;Unique*; +ftp_reply [2001:470:4867:99::21]:21 - 0 REST STREAM +ftp_reply [2001:470:4867:99::21]:21 - 0 SIZE +ftp_reply [2001:470:4867:99::21]:21 - 0 TVFS +ftp_reply [2001:470:4867:99::21]:21 - 211 End +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - PWD +ftp_reply [2001:470:4867:99::21]:21 - 257 "/" is the current directory. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPSV +ftp_reply [2001:470:4867:99::21]:21 - 229 Entering Extended Passive Mode (|||57086|) +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - LIST +ftp_reply [2001:470:4867:99::21]:21 - 150 Opening ASCII mode data connection for '/bin/ls'. +ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPSV +ftp_reply [2001:470:4867:99::21]:21 - 229 Entering Extended Passive Mode (|||57087|) +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - NLST +ftp_reply [2001:470:4867:99::21]:21 - 150 Opening ASCII mode data connection for 'file list'. +ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - TYPE I +ftp_reply [2001:470:4867:99::21]:21 - 200 Type set to I. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - SIZE robots.txt +ftp_reply [2001:470:4867:99::21]:21 - 213 77 +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPSV +ftp_reply [2001:470:4867:99::21]:21 - 229 Entering Extended Passive Mode (|||57088|) +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - RETR robots.txt +ftp_reply [2001:470:4867:99::21]:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes). +ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - MDTM robots.txt +ftp_reply [2001:470:4867:99::21]:21 - 213 20090816112038 +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - SIZE robots.txt +ftp_reply [2001:470:4867:99::21]:21 - 213 77 +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPRT |2|2001:470:1f11:81f:c999:d94:aa7c:2e3e|49189| +ftp_reply [2001:470:4867:99::21]:21 - 200 EPRT command successful. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - RETR robots.txt +ftp_reply [2001:470:4867:99::21]:21 - 150 Opening BINARY mode data connection for 'robots.txt' (77 bytes). +ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - MDTM robots.txt +ftp_reply [2001:470:4867:99::21]:21 - 213 20090816112038 +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - TYPE A +ftp_reply [2001:470:4867:99::21]:21 - 200 Type set to A. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - EPRT |2|2001:470:1f11:81f:c999:d94:aa7c:2e3e|49190| +ftp_reply [2001:470:4867:99::21]:21 - 200 EPRT command successful. +ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - LIST +ftp_reply [2001:470:4867:99::21]:21 - 150 Opening ASCII mode data connection for '/bin/ls'. +ftp_reply [2001:470:4867:99::21]:21 - 226 Transfer complete. 
+ftp_request [2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185 - QUIT +ftp_reply [2001:470:4867:99::21]:21 - 221 +ftp_reply [2001:470:4867:99::21]:21 - 0 Data traffic for this session was 154 bytes in 2 files. +ftp_reply [2001:470:4867:99::21]:21 - 0 Total traffic for this session was 4512 bytes in 5 transfers. +ftp_reply [2001:470:4867:99::21]:21 - 221 Thank you for using the FTP service on ftp.NetBSD.org. diff --git a/testing/btest/Baseline/signatures.dpd/nosig-ipv4.out b/testing/btest/Baseline/signatures.dpd/nosig-ipv4.out new file mode 100644 index 0000000000..55566505d8 --- /dev/null +++ b/testing/btest/Baseline/signatures.dpd/nosig-ipv4.out @@ -0,0 +1,3 @@ +dpd_config, { + +} diff --git a/testing/btest/Baseline/signatures.dpd/nosig-ipv6.out b/testing/btest/Baseline/signatures.dpd/nosig-ipv6.out new file mode 100644 index 0000000000..55566505d8 --- /dev/null +++ b/testing/btest/Baseline/signatures.dpd/nosig-ipv6.out @@ -0,0 +1,3 @@ +dpd_config, { + +} diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq-list.out new file mode 100644 index 0000000000..06d3c27188 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq-list diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq.out new file mode 100644 index 0000000000..8bad163eeb --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne-list.out new file mode 100644 index 0000000000..a1c0ea8927 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne-list diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne.out new file mode 100644 index 0000000000..8249781376 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4-masks/dst-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne diff --git 
a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq-list.out new file mode 100644 index 0000000000..06d3c27188 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq-list diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq.out new file mode 100644 index 0000000000..8bad163eeb --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-eq diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne-list.out new file mode 100644 index 0000000000..a1c0ea8927 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne-list diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne.out new file mode 100644 index 0000000000..8249781376 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v4/dst-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - dst-ip-ne diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq-list.out new file mode 100644 index 0000000000..7396460f22 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq-list diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq.out new file mode 100644 index 0000000000..3241ccdf6f --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-eq.out @@ -0,0 +1 @@ +signature_match 
[orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne-list.out new file mode 100644 index 0000000000..f875da226e --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne-list diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne.out new file mode 100644 index 0000000000..b074df8891 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6-masks/dst-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq-list.out new file mode 100644 index 0000000000..7396460f22 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq-list diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq.out new file mode 100644 index 0000000000..3241ccdf6f --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-eq diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne-list.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne-list.out new file mode 100644 index 0000000000..f875da226e --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne-list diff --git 
a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne.out b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne.out new file mode 100644 index 0000000000..b074df8891 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-ip-header-condition-v6/dst-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-ip-ne diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-ip6.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-ip6.out new file mode 100644 index 0000000000..db9d71f669 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-ip6.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-eq diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-list.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-list.out new file mode 100644 index 0000000000..0df42f6000 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - dst-port-eq-list diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-nomatch.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq.out new file mode 100644 index 0000000000..52321f7777 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - dst-port-eq diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gt-nomatch.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gt-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gt.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gt.out new file mode 100644 index 0000000000..87c0c75514 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gt.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-gt diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte-nomatch.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte1.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte1.out new file mode 100644 index 0000000000..a6eb48c84c --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte1.out 
@@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-gte1 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte2.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte2.out new file mode 100644 index 0000000000..2d13632cd6 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-gte2.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-gte2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lt-nomatch.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lt-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lt.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lt.out new file mode 100644 index 0000000000..5d06777caf --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lt.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-lt diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte-nomatch.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte1.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte1.out new file mode 100644 index 0000000000..4102fdfd9a --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte1.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-lte1 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte2.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte2.out new file mode 100644 index 0000000000..b14823b92e --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-lte2.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-lte2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne-list-nomatch.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne-list.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne-list.out new file mode 100644 index 0000000000..7b68c06787 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-ne-list diff --git a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne-nomatch.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git 
a/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne.out b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne.out new file mode 100644 index 0000000000..c92dcb8b31 --- /dev/null +++ b/testing/btest/Baseline/signatures.dst-port-header-condition/dst-port-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - dst-port-ne diff --git a/testing/btest/Baseline/signatures.eval-condition/conn.log b/testing/btest/Baseline/signatures.eval-condition/conn.log new file mode 100644 index 0000000000..a803f74320 --- /dev/null +++ b/testing/btest/Baseline/signatures.eval-condition/conn.log @@ -0,0 +1,14 @@ +#separator \x09 +#set_separator , +#empty_field (empty) +#unset_field - +#path conn +#open 2012-08-23-16-41-23 +#fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents +#types time string addr port addr port enum string interval count count string bool count string count count count count table[string] +1329843175.736107 arKYeMETxOg 141.142.220.235 37604 199.233.217.249 56666 tcp ftp-data 0.112432 0 342 SF - 0 ShAdfFa 4 216 4 562 (empty) +1329843179.871641 k6kgXLOoSKl 141.142.220.235 59378 199.233.217.249 56667 tcp ftp-data 0.111218 0 77 SF - 0 ShAdfFa 4 216 4 297 (empty) +1329843194.151526 nQcgTWjvg4c 199.233.217.249 61920 141.142.220.235 33582 tcp ftp-data 0.056211 342 0 SF - 0 ShADaFf 5 614 3 164 (empty) +1329843197.783443 j4u32Pc5bif 199.233.217.249 61918 141.142.220.235 37835 tcp ftp-data 0.056005 77 0 SF - 0 ShADaFf 5 349 3 164 (empty) +1329843161.968492 UWkUyAuUGXf 141.142.220.235 50003 199.233.217.249 21 tcp ftp,blah 38.055625 180 3146 SF - 0 ShAdDfFa 38 2164 25 4458 (empty) +#close 2012-08-23-16-41-23 diff --git a/testing/btest/Baseline/signatures.header-header-condition/icmp.out b/testing/btest/Baseline/signatures.header-header-condition/icmp.out new file mode 100644 index 0000000000..a626bf85a5 --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/icmp.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - icmp diff --git a/testing/btest/Baseline/signatures.header-header-condition/icmp6.out b/testing/btest/Baseline/signatures.header-header-condition/icmp6.out new file mode 100644 index 0000000000..61b7c927e9 --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/icmp6.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=128/icmp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=129/icmp] - icmp6 diff --git a/testing/btest/Baseline/signatures.header-header-condition/ip-mask.out b/testing/btest/Baseline/signatures.header-header-condition/ip-mask.out new file mode 100644 index 0000000000..bc8045180f --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/ip-mask.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - ip-mask diff --git a/testing/btest/Baseline/signatures.header-header-condition/ip.out b/testing/btest/Baseline/signatures.header-header-condition/ip.out new file mode 100644 index 0000000000..5a7f51a6e3 --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/ip.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - 
ip diff --git a/testing/btest/Baseline/signatures.header-header-condition/ip6.out b/testing/btest/Baseline/signatures.header-header-condition/ip6.out new file mode 100644 index 0000000000..d3d8aeae90 --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/ip6.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - ip6 diff --git a/testing/btest/Baseline/signatures.header-header-condition/tcp.out b/testing/btest/Baseline/signatures.header-header-condition/tcp.out new file mode 100644 index 0000000000..48241068d4 --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/tcp.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/tcp, resp_h=127.0.0.1, resp_p=80/tcp] - tcp diff --git a/testing/btest/Baseline/signatures.header-header-condition/udp.out b/testing/btest/Baseline/signatures.header-header-condition/udp.out new file mode 100644 index 0000000000..fd54308e9f --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/udp.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - udp diff --git a/testing/btest/Baseline/signatures.header-header-condition/val-mask.out b/testing/btest/Baseline/signatures.header-header-condition/val-mask.out new file mode 100644 index 0000000000..ad7a66e202 --- /dev/null +++ b/testing/btest/Baseline/signatures.header-header-condition/val-mask.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - val-mask diff --git a/testing/btest/Baseline/signatures.id-lookup/id.out b/testing/btest/Baseline/signatures.id-lookup/id.out new file mode 100644 index 0000000000..4a5310a3b2 --- /dev/null +++ b/testing/btest/Baseline/signatures.id-lookup/id.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - id diff --git a/testing/btest/Baseline/signatures.ip-proto-header-condition/icmp6_in_ip6.out b/testing/btest/Baseline/signatures.ip-proto-header-condition/icmp6_in_ip6.out new file mode 100644 index 0000000000..61b7c927e9 --- /dev/null +++ b/testing/btest/Baseline/signatures.ip-proto-header-condition/icmp6_in_ip6.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=128/icmp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=129/icmp] - icmp6 diff --git a/testing/btest/Baseline/signatures.ip-proto-header-condition/icmp_in_ip4.out b/testing/btest/Baseline/signatures.ip-proto-header-condition/icmp_in_ip4.out new file mode 100644 index 0000000000..a626bf85a5 --- /dev/null +++ b/testing/btest/Baseline/signatures.ip-proto-header-condition/icmp_in_ip4.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - icmp diff --git a/testing/btest/Baseline/signatures.ip-proto-header-condition/nomatch.out b/testing/btest/Baseline/signatures.ip-proto-header-condition/nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.ip-proto-header-condition/tcp_in_ip4.out b/testing/btest/Baseline/signatures.ip-proto-header-condition/tcp_in_ip4.out new file mode 100644 index 0000000000..48241068d4 --- /dev/null +++ b/testing/btest/Baseline/signatures.ip-proto-header-condition/tcp_in_ip4.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/tcp, resp_h=127.0.0.1, resp_p=80/tcp] - tcp diff --git 
a/testing/btest/Baseline/signatures.ip-proto-header-condition/tcp_in_ip6.out b/testing/btest/Baseline/signatures.ip-proto-header-condition/tcp_in_ip6.out new file mode 100644 index 0000000000..8a5d5f17fc --- /dev/null +++ b/testing/btest/Baseline/signatures.ip-proto-header-condition/tcp_in_ip6.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/tcp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=80/tcp] - tcp diff --git a/testing/btest/Baseline/signatures.ip-proto-header-condition/udp_in_ip4.out b/testing/btest/Baseline/signatures.ip-proto-header-condition/udp_in_ip4.out new file mode 100644 index 0000000000..fd54308e9f --- /dev/null +++ b/testing/btest/Baseline/signatures.ip-proto-header-condition/udp_in_ip4.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - udp diff --git a/testing/btest/Baseline/signatures.ip-proto-header-condition/udp_in_ip6.out b/testing/btest/Baseline/signatures.ip-proto-header-condition/udp_in_ip6.out new file mode 100644 index 0000000000..f843e44d2d --- /dev/null +++ b/testing/btest/Baseline/signatures.ip-proto-header-condition/udp_in_ip6.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - udp diff --git a/testing/btest/Baseline/signatures.load-sigs/output b/testing/btest/Baseline/signatures.load-sigs/output new file mode 100644 index 0000000000..2a22b47ad4 --- /dev/null +++ b/testing/btest/Baseline/signatures.load-sigs/output @@ -0,0 +1,3 @@ +[orig_h=141.142.220.118, orig_p=35642/tcp, resp_h=208.80.152.2, resp_p=80/tcp] +works +GET /images/wikimedia-button.png HTTP/1.1^M^JHost: meta.wikimedia.org^M^JUser-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.15) Geck... 
diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq-list.out new file mode 100644 index 0000000000..60fa5de636 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-eq-list diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq.out new file mode 100644 index 0000000000..ce46d4b3df --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-eq diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne-list.out new file mode 100644 index 0000000000..3ca3aab914 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-ne-list diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne.out new file mode 100644 index 0000000000..c0876257e3 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4-masks/src-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-ne diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq-list.out new file mode 100644 index 0000000000..60fa5de636 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-eq-list diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq.out new file mode 100644 index 0000000000..ce46d4b3df --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-eq.out @@ 
-0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-eq diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne-list.out new file mode 100644 index 0000000000..3ca3aab914 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-ne-list diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne.out new file mode 100644 index 0000000000..c0876257e3 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v4/src-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=192.168.1.100, orig_p=8/icmp, resp_h=192.168.1.101, resp_p=0/icmp] - src-ip-ne diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq-list.out new file mode 100644 index 0000000000..15e7b9848c --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-eq-list diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq.out new file mode 100644 index 0000000000..12b0192a28 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-eq diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne-list.out new file mode 100644 index 0000000000..2e10e62cec --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-ne-list diff --git 
a/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne.out new file mode 100644 index 0000000000..be5325c4e9 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6-masks/src-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-ne diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq-list.out new file mode 100644 index 0000000000..15e7b9848c --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-eq-list diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq.out new file mode 100644 index 0000000000..12b0192a28 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-eq diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne-list-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne-list.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne-list.out new file mode 100644 index 0000000000..2e10e62cec --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-ne-list diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne-nomatch.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne.out b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne.out new file mode 100644 index 0000000000..be5325c4e9 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-ip-header-condition-v6/src-ip-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-ip-ne diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-ip6.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-ip6.out new file mode 100644 index 
0000000000..9a16e2d533 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-ip6.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-eq diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-list.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-list.out new file mode 100644 index 0000000000..c8a6579af1 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-list.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - src-port-eq-list diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-nomatch.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq.out new file mode 100644 index 0000000000..8e44853a14 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-eq.out @@ -0,0 +1 @@ +signature_match [orig_h=127.0.0.1, orig_p=30000/udp, resp_h=127.0.0.1, resp_p=13000/udp] - src-port-eq diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gt-nomatch.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gt-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gt.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gt.out new file mode 100644 index 0000000000..235b9a0f11 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gt.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-gt diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte-nomatch.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte1.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte1.out new file mode 100644 index 0000000000..82b1a39aab --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte1.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-gte1 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte2.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte2.out new file mode 100644 index 0000000000..4816fe1947 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-gte2.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-gte2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lt-nomatch.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lt-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git 
a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lt.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lt.out new file mode 100644 index 0000000000..b124a1616d --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lt.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-lt diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte-nomatch.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte1.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte1.out new file mode 100644 index 0000000000..67b2665619 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte1.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-lte1 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte2.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte2.out new file mode 100644 index 0000000000..758b5f1241 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-lte2.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-lte2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne-list-nomatch.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne-list-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne-list.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne-list.out new file mode 100644 index 0000000000..c98df730a8 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne-list.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-ne-list diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne-nomatch.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne-nomatch.out new file mode 100644 index 0000000000..e69de29bb2 diff --git a/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne.out b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne.out new file mode 100644 index 0000000000..f2ec15a667 --- /dev/null +++ b/testing/btest/Baseline/signatures.src-port-header-condition/src-port-ne.out @@ -0,0 +1 @@ +signature_match [orig_h=2001:4f8:4:7:2e0:81ff:fe52:ffff, orig_p=30000/udp, resp_h=2001:4f8:4:7:2e0:81ff:fe52:9a6b, resp_p=13000/udp] - src-port-ne diff --git a/testing/btest/Makefile b/testing/btest/Makefile index 7489d761fb..93ccc8d5ec 100644 --- a/testing/btest/Makefile +++ b/testing/btest/Makefile @@ -2,12 +2,26 @@ DIAG=diag.log BTEST=../../aux/btest/btest -all: - # Showing all tests. - @rm -f $(DIAG) - @$(BTEST) -f $(DIAG) +all: cleanup btest-verbose coverage -brief: - # Brief output showing only failed tests. +# Showing all tests. 
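+# Summary of the targets below: the default target runs cleanup, the verbose
+# btest pass, and coverage; `make brief` does the same but reports only failing
+# tests. Both invoke btest with -j 5 so tests run in parallel.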
+btest-verbose: + @$(BTEST) -j 5 -f $(DIAG) + +brief: cleanup btest-brief coverage + +# Brief output showing only failed tests. +btest-brief: + @$(BTEST) -j 5 -b -f $(DIAG) + +coverage: + @../scripts/coverage-calc ".tmp/script-coverage*" coverage.log `pwd`/../../scripts + +cleanup: @rm -f $(DIAG) - @$(BTEST) -b -f $(DIAG) + @rm -f .tmp/script-coverage* + +update-doc-sources: + ../../doc/scripts/genDocSourcesList.sh ../../doc/scripts/DocSourcesList.cmake + +.PHONY: all btest-verbose brief btest-brief coverage cleanup diff --git a/testing/btest/Traces/chksums/ip4-bad-chksum.pcap b/testing/btest/Traces/chksums/ip4-bad-chksum.pcap new file mode 100644 index 0000000000..6d8b9dd27d Binary files /dev/null and b/testing/btest/Traces/chksums/ip4-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip4-icmp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip4-icmp-bad-chksum.pcap new file mode 100644 index 0000000000..cc60d879c4 Binary files /dev/null and b/testing/btest/Traces/chksums/ip4-icmp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip4-icmp-good-chksum.pcap b/testing/btest/Traces/chksums/ip4-icmp-good-chksum.pcap new file mode 100644 index 0000000000..2b07326eab Binary files /dev/null and b/testing/btest/Traces/chksums/ip4-icmp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip4-tcp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip4-tcp-bad-chksum.pcap new file mode 100644 index 0000000000..b9ccd9e6b2 Binary files /dev/null and b/testing/btest/Traces/chksums/ip4-tcp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip4-tcp-good-chksum.pcap b/testing/btest/Traces/chksums/ip4-tcp-good-chksum.pcap new file mode 100644 index 0000000000..ff3f011884 Binary files /dev/null and b/testing/btest/Traces/chksums/ip4-tcp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip4-udp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip4-udp-bad-chksum.pcap new file mode 100644 index 0000000000..f3998c7e1c Binary files /dev/null and b/testing/btest/Traces/chksums/ip4-udp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip4-udp-good-chksum.pcap b/testing/btest/Traces/chksums/ip4-udp-good-chksum.pcap new file mode 100644 index 0000000000..3aec507329 Binary files /dev/null and b/testing/btest/Traces/chksums/ip4-udp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-hoa-tcp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip6-hoa-tcp-bad-chksum.pcap new file mode 100644 index 0000000000..3aa4bd21fa Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-hoa-tcp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-hoa-tcp-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-hoa-tcp-good-chksum.pcap new file mode 100644 index 0000000000..a6fc9cb017 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-hoa-tcp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-hoa-udp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip6-hoa-udp-bad-chksum.pcap new file mode 100644 index 0000000000..d2434dea80 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-hoa-udp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-hoa-udp-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-hoa-udp-good-chksum.pcap new file mode 100644 index 0000000000..f3e9d632c3 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-hoa-udp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-icmp6-bad-chksum.pcap 
b/testing/btest/Traces/chksums/ip6-icmp6-bad-chksum.pcap new file mode 100644 index 0000000000..ce1dfa547a Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-icmp6-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-icmp6-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-icmp6-good-chksum.pcap new file mode 100644 index 0000000000..4051fa5bc5 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-icmp6-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-route0-icmp6-bad-chksum.pcap b/testing/btest/Traces/chksums/ip6-route0-icmp6-bad-chksum.pcap new file mode 100644 index 0000000000..15e11ed326 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-route0-icmp6-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-route0-icmp6-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-route0-icmp6-good-chksum.pcap new file mode 100644 index 0000000000..b7924cab6f Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-route0-icmp6-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-route0-tcp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip6-route0-tcp-bad-chksum.pcap new file mode 100644 index 0000000000..0f5711fe2e Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-route0-tcp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-route0-tcp-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-route0-tcp-good-chksum.pcap new file mode 100644 index 0000000000..18f9a366c6 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-route0-tcp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-route0-udp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip6-route0-udp-bad-chksum.pcap new file mode 100644 index 0000000000..b4eecae7a2 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-route0-udp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-route0-udp-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-route0-udp-good-chksum.pcap new file mode 100644 index 0000000000..deb1310107 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-route0-udp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-tcp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip6-tcp-bad-chksum.pcap new file mode 100644 index 0000000000..38d8abf18f Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-tcp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-tcp-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-tcp-good-chksum.pcap new file mode 100644 index 0000000000..9ab19b0ad8 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-tcp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-udp-bad-chksum.pcap b/testing/btest/Traces/chksums/ip6-udp-bad-chksum.pcap new file mode 100644 index 0000000000..25aa3fc2c3 Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-udp-bad-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/ip6-udp-good-chksum.pcap b/testing/btest/Traces/chksums/ip6-udp-good-chksum.pcap new file mode 100644 index 0000000000..b72b86678e Binary files /dev/null and b/testing/btest/Traces/chksums/ip6-udp-good-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/mip6-bad-mh-chksum.pcap b/testing/btest/Traces/chksums/mip6-bad-mh-chksum.pcap new file mode 100644 index 0000000000..9a2437baef Binary files /dev/null and b/testing/btest/Traces/chksums/mip6-bad-mh-chksum.pcap differ diff --git a/testing/btest/Traces/chksums/mip6-good-mh-chksum.pcap 
b/testing/btest/Traces/chksums/mip6-good-mh-chksum.pcap new file mode 100644 index 0000000000..6183fd9cb1 Binary files /dev/null and b/testing/btest/Traces/chksums/mip6-good-mh-chksum.pcap differ diff --git a/testing/btest/Traces/dns-zero-RRs.trace b/testing/btest/Traces/dns-zero-RRs.trace new file mode 100644 index 0000000000..0f4785b3f0 Binary files /dev/null and b/testing/btest/Traces/dns-zero-RRs.trace differ diff --git a/testing/btest/Traces/ftp-ipv4.trace b/testing/btest/Traces/ftp-ipv4.trace new file mode 100644 index 0000000000..02cac6f464 Binary files /dev/null and b/testing/btest/Traces/ftp-ipv4.trace differ diff --git a/testing/btest/Traces/globus-url-copy.trace b/testing/btest/Traces/globus-url-copy.trace new file mode 100644 index 0000000000..b42ce25bca Binary files /dev/null and b/testing/btest/Traces/globus-url-copy.trace differ diff --git a/testing/btest/Traces/icmp/icmp-destunreach-ip.pcap b/testing/btest/Traces/icmp/icmp-destunreach-ip.pcap new file mode 100644 index 0000000000..982f2e4734 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp-destunreach-ip.pcap differ diff --git a/testing/btest/Traces/icmp/icmp-destunreach-no-context.pcap b/testing/btest/Traces/icmp/icmp-destunreach-no-context.pcap new file mode 100644 index 0000000000..1f904e3d91 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp-destunreach-no-context.pcap differ diff --git a/testing/btest/Traces/icmp/icmp-destunreach-udp.pcap b/testing/btest/Traces/icmp/icmp-destunreach-udp.pcap new file mode 100644 index 0000000000..60137bb6fe Binary files /dev/null and b/testing/btest/Traces/icmp/icmp-destunreach-udp.pcap differ diff --git a/testing/btest/Traces/icmp/icmp-ping.pcap b/testing/btest/Traces/icmp/icmp-ping.pcap new file mode 100644 index 0000000000..499769b280 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp-ping.pcap differ diff --git a/testing/btest/Traces/icmp/icmp-timeexceeded.pcap b/testing/btest/Traces/icmp/icmp-timeexceeded.pcap new file mode 100644 index 0000000000..27804b5559 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp-timeexceeded.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext-trunc.pcap b/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext-trunc.pcap new file mode 100644 index 0000000000..bd0e0ccd42 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext-trunc.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext-udp.pcap b/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext-udp.pcap new file mode 100644 index 0000000000..5aca9af1b5 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext-udp.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext.pcap b/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext.pcap new file mode 100644 index 0000000000..996048e5ab Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-destunreach-ip6ext.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-destunreach-no-context.pcap b/testing/btest/Traces/icmp/icmp6-destunreach-no-context.pcap new file mode 100644 index 0000000000..cf15a7cf65 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-destunreach-no-context.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-nd-options.pcap b/testing/btest/Traces/icmp/icmp6-nd-options.pcap new file mode 100644 index 0000000000..1103d9bf9c Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-nd-options.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-neighbor-advert.pcap 
b/testing/btest/Traces/icmp/icmp6-neighbor-advert.pcap new file mode 100644 index 0000000000..0a06329fb5 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-neighbor-advert.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-neighbor-solicit.pcap b/testing/btest/Traces/icmp/icmp6-neighbor-solicit.pcap new file mode 100644 index 0000000000..248bbae4ea Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-neighbor-solicit.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-paramprob.pcap b/testing/btest/Traces/icmp/icmp6-paramprob.pcap new file mode 100644 index 0000000000..ab2d41cd3a Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-paramprob.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-ping.pcap b/testing/btest/Traces/icmp/icmp6-ping.pcap new file mode 100644 index 0000000000..1638ca0822 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-ping.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-redirect-hdr-opt.pcap b/testing/btest/Traces/icmp/icmp6-redirect-hdr-opt.pcap new file mode 100644 index 0000000000..d053519108 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-redirect-hdr-opt.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-redirect.pcap b/testing/btest/Traces/icmp/icmp6-redirect.pcap new file mode 100644 index 0000000000..f8ae7ed96b Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-redirect.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-router-advert.pcap b/testing/btest/Traces/icmp/icmp6-router-advert.pcap new file mode 100644 index 0000000000..38de434c2f Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-router-advert.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-router-solicit.pcap b/testing/btest/Traces/icmp/icmp6-router-solicit.pcap new file mode 100644 index 0000000000..b33495aa8d Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-router-solicit.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-timeexceeded.pcap b/testing/btest/Traces/icmp/icmp6-timeexceeded.pcap new file mode 100644 index 0000000000..b32fc4ab68 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-timeexceeded.pcap differ diff --git a/testing/btest/Traces/icmp/icmp6-toobig.pcap b/testing/btest/Traces/icmp/icmp6-toobig.pcap new file mode 100644 index 0000000000..92bf50f240 Binary files /dev/null and b/testing/btest/Traces/icmp/icmp6-toobig.pcap differ diff --git a/testing/btest/Traces/ip6_esp.trace b/testing/btest/Traces/ip6_esp.trace new file mode 100644 index 0000000000..8b3b19a99a Binary files /dev/null and b/testing/btest/Traces/ip6_esp.trace differ diff --git a/testing/btest/Traces/ipv6-fragmented-dns.trace b/testing/btest/Traces/ipv6-fragmented-dns.trace new file mode 100755 index 0000000000..9dda47a8a9 Binary files /dev/null and b/testing/btest/Traces/ipv6-fragmented-dns.trace differ diff --git a/testing/btest/Traces/ipv6-ftp.trace b/testing/btest/Traces/ipv6-ftp.trace new file mode 100644 index 0000000000..81313fac11 Binary files /dev/null and b/testing/btest/Traces/ipv6-ftp.trace differ diff --git a/testing/btest/Traces/ipv6-hbh-routing0.trace b/testing/btest/Traces/ipv6-hbh-routing0.trace new file mode 100644 index 0000000000..2a294ed58e Binary files /dev/null and b/testing/btest/Traces/ipv6-hbh-routing0.trace differ diff --git a/testing/btest/Traces/ipv6-http-atomic-frag.trace b/testing/btest/Traces/ipv6-http-atomic-frag.trace new file mode 100644 index 0000000000..d5d9db276c Binary files /dev/null and 
b/testing/btest/Traces/ipv6-http-atomic-frag.trace differ diff --git a/testing/btest/Traces/ipv6_zero_len_ah.trace b/testing/btest/Traces/ipv6_zero_len_ah.trace new file mode 100644 index 0000000000..7c3922525c Binary files /dev/null and b/testing/btest/Traces/ipv6_zero_len_ah.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/ipv6-mobile-hoa.trace b/testing/btest/Traces/mobile-ipv6/ipv6-mobile-hoa.trace new file mode 100644 index 0000000000..f3e9d632c3 Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/ipv6-mobile-hoa.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/ipv6-mobile-routing.trace b/testing/btest/Traces/mobile-ipv6/ipv6-mobile-routing.trace new file mode 100644 index 0000000000..6289f268e3 Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/ipv6-mobile-routing.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_back.trace b/testing/btest/Traces/mobile-ipv6/mip6_back.trace new file mode 100644 index 0000000000..9b97186979 Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_back.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_be.trace b/testing/btest/Traces/mobile-ipv6/mip6_be.trace new file mode 100644 index 0000000000..19862ee4be Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_be.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_brr.trace b/testing/btest/Traces/mobile-ipv6/mip6_brr.trace new file mode 100644 index 0000000000..4020ae8b14 Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_brr.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_bu.trace b/testing/btest/Traces/mobile-ipv6/mip6_bu.trace new file mode 100644 index 0000000000..1c8c61e09d Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_bu.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_cot.trace b/testing/btest/Traces/mobile-ipv6/mip6_cot.trace new file mode 100644 index 0000000000..2d8d215a41 Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_cot.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_coti.trace b/testing/btest/Traces/mobile-ipv6/mip6_coti.trace new file mode 100644 index 0000000000..2a5790cc7c Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_coti.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_hot.trace b/testing/btest/Traces/mobile-ipv6/mip6_hot.trace new file mode 100644 index 0000000000..0b54c9797d Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_hot.trace differ diff --git a/testing/btest/Traces/mobile-ipv6/mip6_hoti.trace b/testing/btest/Traces/mobile-ipv6/mip6_hoti.trace new file mode 100644 index 0000000000..3daaeb2905 Binary files /dev/null and b/testing/btest/Traces/mobile-ipv6/mip6_hoti.trace differ diff --git a/testing/btest/scripts/base/frameworks/logging/rotation.trace b/testing/btest/Traces/rotation.trace similarity index 100% rename from testing/btest/scripts/base/frameworks/logging/rotation.trace rename to testing/btest/Traces/rotation.trace diff --git a/testing/btest/Traces/socks-with-ssl.trace b/testing/btest/Traces/socks-with-ssl.trace new file mode 100644 index 0000000000..da27cc8873 Binary files /dev/null and b/testing/btest/Traces/socks-with-ssl.trace differ diff --git a/testing/btest/Traces/socks.trace b/testing/btest/Traces/socks.trace new file mode 100644 index 0000000000..00bf07e458 Binary files /dev/null and b/testing/btest/Traces/socks.trace differ diff --git a/testing/btest/Traces/tls-conn-with-extensions.trace 
b/testing/btest/Traces/tls-conn-with-extensions.trace new file mode 100644 index 0000000000..a3b724b3a1 Binary files /dev/null and b/testing/btest/Traces/tls-conn-with-extensions.trace differ diff --git a/testing/btest/Traces/trunc/icmp-header-trunc.pcap b/testing/btest/Traces/trunc/icmp-header-trunc.pcap new file mode 100644 index 0000000000..5765cf2886 Binary files /dev/null and b/testing/btest/Traces/trunc/icmp-header-trunc.pcap differ diff --git a/testing/btest/Traces/trunc/icmp-payload-trunc.pcap b/testing/btest/Traces/trunc/icmp-payload-trunc.pcap new file mode 100644 index 0000000000..13607dd50c Binary files /dev/null and b/testing/btest/Traces/trunc/icmp-payload-trunc.pcap differ diff --git a/testing/btest/Traces/trunc/ip4-trunc.pcap b/testing/btest/Traces/trunc/ip4-trunc.pcap new file mode 100644 index 0000000000..30df0ea94d Binary files /dev/null and b/testing/btest/Traces/trunc/ip4-trunc.pcap differ diff --git a/testing/btest/Traces/trunc/ip6-ext-trunc.pcap b/testing/btest/Traces/trunc/ip6-ext-trunc.pcap new file mode 100644 index 0000000000..1de659084e Binary files /dev/null and b/testing/btest/Traces/trunc/ip6-ext-trunc.pcap differ diff --git a/testing/btest/Traces/trunc/ip6-trunc.pcap b/testing/btest/Traces/trunc/ip6-trunc.pcap new file mode 100644 index 0000000000..0111caed0f Binary files /dev/null and b/testing/btest/Traces/trunc/ip6-trunc.pcap differ diff --git a/testing/btest/Traces/tunnels/4in4.pcap b/testing/btest/Traces/tunnels/4in4.pcap new file mode 100644 index 0000000000..b0d89eedda Binary files /dev/null and b/testing/btest/Traces/tunnels/4in4.pcap differ diff --git a/testing/btest/Traces/tunnels/4in6.pcap b/testing/btest/Traces/tunnels/4in6.pcap new file mode 100644 index 0000000000..5c813b9b75 Binary files /dev/null and b/testing/btest/Traces/tunnels/4in6.pcap differ diff --git a/testing/btest/Traces/tunnels/6in4.pcap b/testing/btest/Traces/tunnels/6in4.pcap new file mode 100644 index 0000000000..2d0cd5c8c7 Binary files /dev/null and b/testing/btest/Traces/tunnels/6in4.pcap differ diff --git a/testing/btest/Traces/tunnels/6in6-tunnel-change.pcap b/testing/btest/Traces/tunnels/6in6-tunnel-change.pcap new file mode 100644 index 0000000000..c5838fd136 Binary files /dev/null and b/testing/btest/Traces/tunnels/6in6-tunnel-change.pcap differ diff --git a/testing/btest/Traces/tunnels/6in6.pcap b/testing/btest/Traces/tunnels/6in6.pcap new file mode 100644 index 0000000000..ff8aa607bb Binary files /dev/null and b/testing/btest/Traces/tunnels/6in6.pcap differ diff --git a/testing/btest/Traces/tunnels/6in6in6.pcap b/testing/btest/Traces/tunnels/6in6in6.pcap new file mode 100644 index 0000000000..192524aa78 Binary files /dev/null and b/testing/btest/Traces/tunnels/6in6in6.pcap differ diff --git a/testing/btest/Traces/tunnels/Teredo.pcap b/testing/btest/Traces/tunnels/Teredo.pcap new file mode 100644 index 0000000000..2eff14469d Binary files /dev/null and b/testing/btest/Traces/tunnels/Teredo.pcap differ diff --git a/testing/btest/Traces/tunnels/ayiya3.trace b/testing/btest/Traces/tunnels/ayiya3.trace new file mode 100644 index 0000000000..83193050dc Binary files /dev/null and b/testing/btest/Traces/tunnels/ayiya3.trace differ diff --git a/testing/btest/Traces/tunnels/false-teredo.pcap b/testing/btest/Traces/tunnels/false-teredo.pcap new file mode 100644 index 0000000000..e82a6f4169 Binary files /dev/null and b/testing/btest/Traces/tunnels/false-teredo.pcap differ diff --git a/testing/btest/Traces/tunnels/ping6-in-ipv4.pcap b/testing/btest/Traces/tunnels/ping6-in-ipv4.pcap 
new file mode 100644 index 0000000000..5e0995f80e Binary files /dev/null and b/testing/btest/Traces/tunnels/ping6-in-ipv4.pcap differ diff --git a/testing/btest/Traces/tunnels/socks.pcap b/testing/btest/Traces/tunnels/socks.pcap new file mode 100644 index 0000000000..d70e2cb7dc Binary files /dev/null and b/testing/btest/Traces/tunnels/socks.pcap differ diff --git a/testing/btest/Traces/tunnels/teredo_bubble_with_payload.pcap b/testing/btest/Traces/tunnels/teredo_bubble_with_payload.pcap new file mode 100644 index 0000000000..5036a52b56 Binary files /dev/null and b/testing/btest/Traces/tunnels/teredo_bubble_with_payload.pcap differ diff --git a/testing/btest/analyzers/conn-size-cc.bro b/testing/btest/analyzers/conn-size-cc.bro deleted file mode 100644 index 0ba7977cf5..0000000000 --- a/testing/btest/analyzers/conn-size-cc.bro +++ /dev/null @@ -1,2 +0,0 @@ -# @TEST-EXEC: bro -C -r ${TRACES}/conn-size.trace tcp udp icmp report_conn_size_analyzer=T -# @TEST-EXEC: btest-diff conn.log diff --git a/testing/btest/analyzers/conn-size.bro b/testing/btest/analyzers/conn-size.bro deleted file mode 100644 index 0e413cc554..0000000000 --- a/testing/btest/analyzers/conn-size.bro +++ /dev/null @@ -1,2 +0,0 @@ -# @TEST-EXEC: bro -C -r ${TRACES}/conn-size.trace tcp udp icmp report_conn_size_analyzer=T use_connection_compressor=F -# @TEST-EXEC: btest-diff conn.log diff --git a/testing/btest/bifs/addr_count_conversion.bro b/testing/btest/bifs/addr_count_conversion.bro new file mode 100644 index 0000000000..360994a8e5 --- /dev/null +++ b/testing/btest/bifs/addr_count_conversion.bro @@ -0,0 +1,11 @@ +# @TEST-EXEC: bro %INPUT >output +# @TEST-EXEC: btest-diff output + +global v: index_vec; + +v = addr_to_counts([2001:0db8:85a3:0000:0000:8a2e:0370:7334]); +print v; +print counts_to_addr(v); +v = addr_to_counts(1.2.3.4); +print v; +print counts_to_addr(v); diff --git a/testing/btest/bifs/addr_to_ptr_name.bro b/testing/btest/bifs/addr_to_ptr_name.bro new file mode 100644 index 0000000000..b9c831d061 --- /dev/null +++ b/testing/btest/bifs/addr_to_ptr_name.bro @@ -0,0 +1,6 @@ +# @TEST-EXEC: bro %INPUT >output +# @TEST-EXEC: btest-diff output + +print addr_to_ptr_name([2607:f8b0:4009:802::1012]); +print addr_to_ptr_name(74.125.225.52); + diff --git a/testing/btest/bifs/addr_version.bro b/testing/btest/bifs/addr_version.bro new file mode 100644 index 0000000000..3e0123ef42 --- /dev/null +++ b/testing/btest/bifs/addr_version.bro @@ -0,0 +1,7 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +print is_v4_addr(1.2.3.4); +print is_v4_addr([::1]); +print is_v6_addr(1.2.3.4); +print is_v6_addr([::1]); diff --git a/testing/btest/bifs/all_set.bro b/testing/btest/bifs/all_set.bro new file mode 100644 index 0000000000..31544eb31e --- /dev/null +++ b/testing/btest/bifs/all_set.bro @@ -0,0 +1,15 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = vector( T, F, T ); + print all_set(a); + + local b = vector(); + print all_set(b); + + local c = vector( T ); + print all_set(c); + } diff --git a/testing/btest/bifs/analyzer_name.bro b/testing/btest/bifs/analyzer_name.bro new file mode 100644 index 0000000000..034344f5c4 --- /dev/null +++ b/testing/btest/bifs/analyzer_name.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 1; + print analyzer_name(a); + } diff --git a/testing/btest/bifs/any_set.bro b/testing/btest/bifs/any_set.bro new file mode 100644 index 0000000000..5fe046cdf4 --- /dev/null +++ 
b/testing/btest/bifs/any_set.bro @@ -0,0 +1,15 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = vector( F, T, F ); + print any_set(a); + + local b = vector(); + print any_set(b); + + local c = vector( F ); + print any_set(c); + } diff --git a/testing/btest/bifs/bro_version.bro b/testing/btest/bifs/bro_version.bro new file mode 100644 index 0000000000..7465cbc0f5 --- /dev/null +++ b/testing/btest/bifs/bro_version.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = bro_version(); + if ( |a| == 0 ) + exit(1); + } diff --git a/testing/btest/bifs/byte_len.bro b/testing/btest/bifs/byte_len.bro new file mode 100644 index 0000000000..25191fd173 --- /dev/null +++ b/testing/btest/bifs/byte_len.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "hello\0there"; + + print byte_len(a); + } diff --git a/testing/btest/bifs/bytestring_to_hexstr.bro b/testing/btest/bifs/bytestring_to_hexstr.bro new file mode 100644 index 0000000000..976a4ccf71 --- /dev/null +++ b/testing/btest/bifs/bytestring_to_hexstr.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print bytestring_to_hexstr("04"); + print bytestring_to_hexstr(""); + print bytestring_to_hexstr("\0"); + } diff --git a/testing/btest/bifs/capture_state_updates.bro b/testing/btest/bifs/capture_state_updates.bro new file mode 100644 index 0000000000..3abfdffdc1 --- /dev/null +++ b/testing/btest/bifs/capture_state_updates.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out +# @TEST-EXEC: test -f testfile + +event bro_init() + { + print capture_state_updates("testfile"); + } diff --git a/testing/btest/bifs/cat.bro b/testing/btest/bifs/cat.bro new file mode 100644 index 0000000000..b85b3af550 --- /dev/null +++ b/testing/btest/bifs/cat.bro @@ -0,0 +1,22 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "foo"; + local b = 3; + local c = T; + + print cat(a, b, c); + + print cat(); + + print cat("", 3, T); + + print cat_sep("|", "", a, b, c); + + print cat_sep("|", ""); + + print cat_sep("|", "", "", b, c); + } diff --git a/testing/btest/bifs/cat_string_array.bro b/testing/btest/bifs/cat_string_array.bro new file mode 100644 index 0000000000..d2c2242411 --- /dev/null +++ b/testing/btest/bifs/cat_string_array.bro @@ -0,0 +1,14 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a: string_array = { + [0] = "this", [1] = "is", [2] = "a", [3] = "test" + }; + + print cat_string_array(a); + print cat_string_array_n(a, 0, |a|-1); + print cat_string_array_n(a, 1, 2); + } diff --git a/testing/btest/bifs/checkpoint_state.bro b/testing/btest/bifs/checkpoint_state.bro new file mode 100644 index 0000000000..2a66bd1729 --- /dev/null +++ b/testing/btest/bifs/checkpoint_state.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT +# @TEST-EXEC: test -f .state/state.bst + +event bro_init() + { + local a = checkpoint_state(); + if ( a != T ) + exit(1); + } diff --git a/testing/btest/bifs/clear_table.bro b/testing/btest/bifs/clear_table.bro new file mode 100644 index 0000000000..94779285af --- /dev/null +++ b/testing/btest/bifs/clear_table.bro @@ -0,0 +1,14 @@ +# +# @TEST-EXEC: bro %INPUT > out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local mytable: table[string] of string = { ["key1"] = "val1" }; + + print |mytable|; + + 
clear_table(mytable); + + print |mytable|; + } diff --git a/testing/btest/bifs/convert_for_pattern.bro b/testing/btest/bifs/convert_for_pattern.bro new file mode 100644 index 0000000000..11533cd49b --- /dev/null +++ b/testing/btest/bifs/convert_for_pattern.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print convert_for_pattern("foo"); + print convert_for_pattern(""); + print convert_for_pattern("b[a-z]+"); + } diff --git a/testing/btest/bifs/create_file.bro b/testing/btest/bifs/create_file.bro new file mode 100644 index 0000000000..8f3d6cfdcd --- /dev/null +++ b/testing/btest/bifs/create_file.bro @@ -0,0 +1,65 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out +# @TEST-EXEC: btest-diff testfile +# @TEST-EXEC: btest-diff testfile2 +# @TEST-EXEC: test -f testdir/testfile4 + +event bro_init() + { + # Test that creating a file works as expected + local a = open("testfile"); + print active_file(a); + print get_file_name(a); + write_file(a, "This is a test\n"); + close(a); + + print active_file(a); + print file_size("testfile"); + + # Test that "open_for_append" doesn't overwrite an existing file + a = open_for_append("testfile"); + print active_file(a); + write_file(a, "another test\n"); + close(a); + + print active_file(a); + print file_size("testfile"); + + # This should fail + print file_size("doesnotexist"); + + # Test that "open" overwrites existing file + a = open("testfile2"); + write_file(a, "this will be overwritten\n"); + close(a); + a = open("testfile2"); + write_file(a, "new text\n"); + close(a); + + # Test that set_buf and flush_all work correctly + a = open("testfile3"); + set_buf(a, F); + write_file(a, "This is a test\n"); + print file_size("testfile3"); + close(a); + a = open("testfile3"); + set_buf(a, T); + write_file(a, "This is a test\n"); + print file_size("testfile3"); + print flush_all(); + print file_size("testfile3"); + close(a); + + # Create a new directory + print mkdir("testdir"); + + # Create a file in the new directory + a = open("testdir/testfile4"); + print get_file_name(a); + write_file(a, "This is a test\n"); + close(a); + + # This should fail + print mkdir("/thisdoesnotexist/dir"); + } diff --git a/testing/btest/bifs/current_analyzer.bro b/testing/btest/bifs/current_analyzer.bro new file mode 100644 index 0000000000..45b495c046 --- /dev/null +++ b/testing/btest/bifs/current_analyzer.bro @@ -0,0 +1,11 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = current_analyzer(); + if ( a != 0 ) + exit(1); + + # TODO: add a test for non-zero return value + } diff --git a/testing/btest/bifs/current_time.bro b/testing/btest/bifs/current_time.bro new file mode 100644 index 0000000000..5d16df396d --- /dev/null +++ b/testing/btest/bifs/current_time.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = current_time(); + if ( a <= double_to_time(0) ) + exit(1); + } diff --git a/testing/btest/bifs/decode_base64.bro b/testing/btest/bifs/decode_base64.bro new file mode 100644 index 0000000000..d4cbd2f37d --- /dev/null +++ b/testing/btest/bifs/decode_base64.bro @@ -0,0 +1,14 @@ +# @TEST-EXEC: bro -b %INPUT >out +# @TEST-EXEC: btest-diff out + +global default_alphabet: string = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; + +global my_alphabet: string = "!#$%&/(),-.:;<>@[]^ `_{|}~abcdefghijklmnopqrstuvwxyz0123456789+?"; + +print decode_base64("YnJv"); +print decode_base64_custom("YnJv", default_alphabet); +print 
decode_base64_custom("}n-v", my_alphabet); + +print decode_base64("YnJv"); +print decode_base64_custom("YnJv", default_alphabet); +print decode_base64_custom("}n-v", my_alphabet); diff --git a/testing/btest/bifs/edit.bro b/testing/btest/bifs/edit.bro new file mode 100644 index 0000000000..c9a73d17f1 --- /dev/null +++ b/testing/btest/bifs/edit.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "hello there"; + + print edit(a, "e"); + } diff --git a/testing/btest/bifs/entropy_test.bro b/testing/btest/bifs/entropy_test.bro new file mode 100644 index 0000000000..ca01c79ed7 --- /dev/null +++ b/testing/btest/bifs/entropy_test.bro @@ -0,0 +1,24 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "dh3Hie02uh^s#Sdf9L3frd243h$d78r2G4cM6*Q05d(7rh46f!0|4-f"; + if ( entropy_test_init(1) != T ) + exit(1); + + if ( entropy_test_add(1, a) != T ) + exit(1); + + print entropy_test_finish(1); + + local b = "0011000aaabbbbcccc000011111000000000aaaabbbbcccc0000000"; + if ( entropy_test_init(2) != T ) + exit(1); + + if ( entropy_test_add(2, b) != T ) + exit(1); + + print entropy_test_finish(2); + } diff --git a/testing/btest/bifs/escape_string.bro b/testing/btest/bifs/escape_string.bro new file mode 100644 index 0000000000..92b7b535d8 --- /dev/null +++ b/testing/btest/bifs/escape_string.bro @@ -0,0 +1,27 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "Test \0string"; + + print |a|; + print a; + + local b = clean(a); + print |b|; + print b; + + local c = to_string_literal(a); + print |c|; + print c; + + local d = escape_string(a); + print |d|; + print d; + + local e = string_to_ascii_hex(a); + print |e|; + print e; + } diff --git a/testing/btest/bifs/exit.bro b/testing/btest/bifs/exit.bro new file mode 100644 index 0000000000..e551144caa --- /dev/null +++ b/testing/btest/bifs/exit.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT >out || test $? 
-eq 7 +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print "hello"; + exit(7); + } diff --git a/testing/btest/bifs/file_mode.bro b/testing/btest/bifs/file_mode.bro new file mode 100644 index 0000000000..c63a2fa188 --- /dev/null +++ b/testing/btest/bifs/file_mode.bro @@ -0,0 +1,36 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 420; # octal: 0644 + print file_mode(a); + + a = 511; # octal: 0777 + print file_mode(a); + + a = 1023; # octal: 01777 + print file_mode(a); + + a = 1000; # octal: 01750 + print file_mode(a); + + a = 2541; # octal: 04755 + print file_mode(a); + + a = 2304; # octal: 04400 + print file_mode(a); + + a = 1517; # octal: 02755 + print file_mode(a); + + a = 1312; # octal: 02440 + print file_mode(a); + + a = 111; # octal: 0157 + print file_mode(a); + + a = 0; + print file_mode(a); + } diff --git a/testing/btest/bifs/find_all.bro b/testing/btest/bifs/find_all.bro new file mode 100644 index 0000000000..edf3530c8a --- /dev/null +++ b/testing/btest/bifs/find_all.bro @@ -0,0 +1,18 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is a test"; + local pat = /hi|es/; + local pat2 = /aa|bb/; + + local b = find_all(a, pat); + local b2 = find_all(a, pat2); + + for (i in b) + print i; + print "-------------------"; + print |b2|; + } diff --git a/testing/btest/bifs/find_entropy.bro b/testing/btest/bifs/find_entropy.bro new file mode 100644 index 0000000000..24f1c0ed84 --- /dev/null +++ b/testing/btest/bifs/find_entropy.bro @@ -0,0 +1,13 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "dh3Hie02uh^s#Sdf9L3frd243h$d78r2G4cM6*Q05d(7rh46f!0|4-f"; + local b = "0011000aaabbbbcccc000011111000000000aaaabbbbcccc0000000"; + + print find_entropy(a); + + print find_entropy(b); + } diff --git a/testing/btest/bifs/find_last.bro b/testing/btest/bifs/find_last.bro new file mode 100644 index 0000000000..b1a567f73a --- /dev/null +++ b/testing/btest/bifs/find_last.bro @@ -0,0 +1,17 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is a test"; + local pat = /hi|es/; + local pat2 = /aa|bb/; + + local b = find_last(a, pat); + local b2 = find_last(a, pat2); + + print b; + print "-------------------"; + print |b2|; + } diff --git a/testing/btest/bifs/fmt.bro b/testing/btest/bifs/fmt.bro new file mode 100644 index 0000000000..53b5f2235d --- /dev/null +++ b/testing/btest/bifs/fmt.bro @@ -0,0 +1,90 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +type color: enum { Red, Blue }; + +event bro_init() + { + local a = Blue; + local b = vector( 1, 2, 3); + local c = set( 1, 2, 3); + local d: table[count] of string = { [1] = "test", [2] = "bro" }; + + # tests with only a format string (no additional args) + print fmt("test"); + print fmt("%%"); + + # no arguments + print fmt(); + + # tests of various data types with field width specified + print fmt("*%-10s*", "test"); + print fmt("*%10s*", "test"); + print fmt("*%10s*", T); + print fmt("*%-10s*", T); + print fmt("*%10.2e*", 3.14159265); + print fmt("*%-10.2e*", 3.14159265); + print fmt("*%10.2f*", 3.14159265); + print fmt("*%10.2g*", 3.14159265); + print fmt("*%10.2e*", -3.14159265); + print fmt("*%10.2f*", -3.14159265); + print fmt("*%10.2g*", -3.14159265); + print fmt("*%-10.2e*", -3.14159265); + print fmt("*%-10.2f*", -3.14159265); + print fmt("*%-10.2g*", -3.14159265); + print fmt("*%10d*", -128); + print 
fmt("*%-10d*", -128); + print fmt("*%10d*", 128); + print fmt("*%010d*", 128); + print fmt("*%-10d*", 128); + print fmt("*%10x*", 160); + print fmt("*%010x*", 160); + print fmt("*%10x*", 160/tcp); + print fmt("*%10s*", 160/tcp); + print fmt("*%10s*", 127.0.0.1); + print fmt("*%10x*", 127.0.0.1); + print fmt("*%10s*", 192.168.0.0/16); + print fmt("*%10s*", [::1]); + print fmt("*%10x*", [fe00::1]); + print fmt("*%10s*", [fe80:1234::1]); + print fmt("*%10s*", [fe80:1234::]/32); + print fmt("*%10s*", 3hr); + print fmt("*%10s*", /^foo|bar/); + print fmt("*%10s*", a); + print fmt("*%10s*", b); + print fmt("*%10s*", c); + print fmt("*%10s*", d); + + # tests of various data types without field width + print fmt("%e", 3.1e+2); + print fmt("%f", 3.1e+2); + print fmt("%g", 3.1e+2); + print fmt("%.3e", 3.1e+2); + print fmt("%.3f", 3.1e+2); + print fmt("%.3g", 3.1e+2); + print fmt("%.7g", 3.1e+2); + + # Tests comparing "%As" and "%s" (the string length is printed instead + # of the string itself because the print command does its own escaping) + local s0 = "\x00\x07"; + local s1 = fmt("%As", s0); # expands \x00 to "\0" + local s2 = fmt("%s", s0); # expands \x00 to "\0", and \x07 to "^G" + print |s0|; + print |s1|; + print |s2|; + + s0 = "\x07\x1f"; + s1 = fmt("%As", s0); + s2 = fmt("%s", s0); # expands \x07 to "^G", and \x1f to "\x1f" + print |s0|; + print |s1|; + print |s2|; + + s0 = "\x7f\xff"; + s1 = fmt("%As", s0); + s2 = fmt("%s", s0); # expands \x7f to "^?", and \xff to "\xff" + print |s0|; + print |s1|; + print |s2|; + } diff --git a/testing/btest/bifs/fmt_ftp_port.bro b/testing/btest/bifs/fmt_ftp_port.bro new file mode 100644 index 0000000000..09ec5369e2 --- /dev/null +++ b/testing/btest/bifs/fmt_ftp_port.bro @@ -0,0 +1,13 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 192.168.0.2; + local b = 257/tcp; + print fmt_ftp_port(a, b); + + a = [fe80::1234]; + print fmt_ftp_port(a, b); + } diff --git a/testing/btest/bifs/get_matcher_stats.bro b/testing/btest/bifs/get_matcher_stats.bro new file mode 100644 index 0000000000..baee49fe1e --- /dev/null +++ b/testing/btest/bifs/get_matcher_stats.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = get_matcher_stats(); + if ( a$matchers == 0 ) + exit(1); + } diff --git a/testing/btest/bifs/get_port_transport_proto.bro b/testing/btest/bifs/get_port_transport_proto.bro new file mode 100644 index 0000000000..c9b5e626ec --- /dev/null +++ b/testing/btest/bifs/get_port_transport_proto.bro @@ -0,0 +1,13 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 123/tcp; + local b = 123/udp; + local c = 123/icmp; + print get_port_transport_proto(a); + print get_port_transport_proto(b); + print get_port_transport_proto(c); + } diff --git a/testing/btest/bifs/gethostname.bro b/testing/btest/bifs/gethostname.bro new file mode 100644 index 0000000000..97af719745 --- /dev/null +++ b/testing/btest/bifs/gethostname.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = gethostname(); + if ( |a| == 0 ) + exit(1); + } diff --git a/testing/btest/bifs/getpid.bro b/testing/btest/bifs/getpid.bro new file mode 100644 index 0000000000..98edc19a44 --- /dev/null +++ b/testing/btest/bifs/getpid.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = getpid(); + if ( a == 0 ) + exit(1); + } diff --git a/testing/btest/bifs/getsetenv.bro b/testing/btest/bifs/getsetenv.bro new file mode 100644 index 
0000000000..b4ee9a0931 --- /dev/null +++ b/testing/btest/bifs/getsetenv.bro @@ -0,0 +1,20 @@ +# +# @TEST-EXEC: TESTBRO=testvalue bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = getenv("NOTDEFINED"); + local b = getenv("TESTBRO"); + if ( |a| == 0 ) + print "OK"; + if ( b == "testvalue" ) + print "OK"; + + if ( setenv("NOTDEFINED", "now defined" ) == T ) + { + if ( getenv("NOTDEFINED") == "now defined" ) + print "OK"; + } + + } diff --git a/testing/btest/bifs/global_ids.bro b/testing/btest/bifs/global_ids.bro new file mode 100644 index 0000000000..65f8944ed4 --- /dev/null +++ b/testing/btest/bifs/global_ids.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = global_ids(); + for ( i in a ) + { + # the table is quite large, so just print one item we expect + if ( i == "bro_init" ) + print a[i]$type_name; + + } + + } diff --git a/testing/btest/bifs/global_sizes.bro b/testing/btest/bifs/global_sizes.bro new file mode 100644 index 0000000000..4862db318b --- /dev/null +++ b/testing/btest/bifs/global_sizes.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = global_sizes(); + for ( i in a ) + { + # the table is quite large, so just look for one item we expect + if ( i == "bro_init" ) + print "found bro_init"; + + } + + } diff --git a/testing/btest/bifs/hexdump.bro b/testing/btest/bifs/hexdump.bro new file mode 100644 index 0000000000..4c248efb77 --- /dev/null +++ b/testing/btest/bifs/hexdump.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "abc\xffdefghijklmnopqrstuvwxyz"; + + print hexdump(a); + } diff --git a/testing/btest/bifs/identify_data.bro b/testing/btest/bifs/identify_data.bro new file mode 100644 index 0000000000..39f289d40b --- /dev/null +++ b/testing/btest/bifs/identify_data.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT | sed 's/PNG image data/PNG image/g' >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + # plain text + local a = "This is a test"; + print identify_data(a, F); + print identify_data(a, T); + + # PNG image + local b = "\x89\x50\x4e\x47\x0d\x0a\x1a\x0a"; + print identify_data(b, F); + print identify_data(b, T); + } diff --git a/testing/btest/bifs/install_src_addr_filter.test b/testing/btest/bifs/install_src_addr_filter.test new file mode 100644 index 0000000000..5b387832de --- /dev/null +++ b/testing/btest/bifs/install_src_addr_filter.test @@ -0,0 +1,13 @@ +# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +event bro_init() + { + install_src_addr_filter(141.142.220.118, TH_SYN, 100.0); + } + +event new_packet(c: connection, p: pkt_hdr) + { + if ( p?$tcp && p$ip$src == 141.142.220.118 ) + print c$id; + } diff --git a/testing/btest/bifs/is_ascii.bro b/testing/btest/bifs/is_ascii.bro new file mode 100644 index 0000000000..4d1daf96b4 --- /dev/null +++ b/testing/btest/bifs/is_ascii.bro @@ -0,0 +1,12 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is a test\xfe"; + local b = "this is a test\x7f"; + + print is_ascii(a); + print is_ascii(b); + } diff --git a/testing/btest/bifs/is_local_interface.bro b/testing/btest/bifs/is_local_interface.bro new file mode 100644 index 0000000000..8befdca385 --- /dev/null +++ b/testing/btest/bifs/is_local_interface.bro @@ -0,0 +1,11 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: 
btest-diff out + +event bro_init() + { + print is_local_interface(127.0.0.1); + print is_local_interface(1.2.3.4); + print is_local_interface([2607::a:b:c:d]); + print is_local_interface([::1]); + } diff --git a/testing/btest/bifs/is_port.bro b/testing/btest/bifs/is_port.bro new file mode 100644 index 0000000000..fe2c3f7c35 --- /dev/null +++ b/testing/btest/bifs/is_port.bro @@ -0,0 +1,22 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 123/tcp; + local b = 123/udp; + local c = 123/icmp; + + print is_tcp_port(a); + print is_tcp_port(b); + print is_tcp_port(c); + + print is_udp_port(a); + print is_udp_port(b); + print is_udp_port(c); + + print is_icmp_port(a); + print is_icmp_port(b); + print is_icmp_port(c); + } diff --git a/testing/btest/bifs/join_string.bro b/testing/btest/bifs/join_string.bro new file mode 100644 index 0000000000..16222d6303 --- /dev/null +++ b/testing/btest/bifs/join_string.bro @@ -0,0 +1,21 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a: string_array = { + [1] = "this", [2] = "is", [3] = "a", [4] = "test" + }; + local b: string_array = { [1] = "mytest" }; + local c: string_vec = vector( "this", "is", "another", "test" ); + local d: string_vec = vector( "Test" ); + + print join_string_array(" * ", a); + print join_string_array("", a); + print join_string_array("x", b); + + print join_string_vec(c, "__"); + print join_string_vec(c, ""); + print join_string_vec(d, "-"); + } diff --git a/testing/btest/bifs/length.bro b/testing/btest/bifs/length.bro new file mode 100644 index 0000000000..335223c124 --- /dev/null +++ b/testing/btest/bifs/length.bro @@ -0,0 +1,22 @@ +# +# @TEST-EXEC: bro %INPUT > out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local mytable: table[string] of string = { ["key1"] = "val1" }; + local myset: set[count] = set( 3, 6, 2, 7 ); + local myvec: vector of string = vector( "value1", "value2" ); + + print length(mytable); + print length(myset); + print length(myvec); + + mytable = table(); + myset = set(); + myvec = vector(); + + print length(mytable); + print length(myset); + print length(myvec); + } diff --git a/testing/btest/bifs/lookup_ID.bro b/testing/btest/bifs/lookup_ID.bro new file mode 100644 index 0000000000..b8a29ef41f --- /dev/null +++ b/testing/btest/bifs/lookup_ID.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +global a = "bro test"; + +event bro_init() + { + local b = "local value"; + + print lookup_ID("a"); + print lookup_ID(""); + print lookup_ID("xyz"); + print lookup_ID("b"); + print type_name( lookup_ID("bro_init") ); + } diff --git a/testing/btest/bifs/lowerupper.bro b/testing/btest/bifs/lowerupper.bro new file mode 100644 index 0000000000..fcfdcde319 --- /dev/null +++ b/testing/btest/bifs/lowerupper.bro @@ -0,0 +1,11 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is a Test"; + + print to_lower(a); + print to_upper(a); + } diff --git a/testing/btest/bifs/math.bro b/testing/btest/bifs/math.bro new file mode 100644 index 0000000000..90aed5b4e6 --- /dev/null +++ b/testing/btest/bifs/math.bro @@ -0,0 +1,24 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 3.14; + local b = 2.71; + local c = -3.14; + local d = -2.71; + + print floor(a); + print floor(b); + print floor(c); + print floor(d); + + print sqrt(a); + + print exp(a); + + print ln(a); + + print log10(a); + } diff 
--git a/testing/btest/bifs/md5.test b/testing/btest/bifs/md5.test new file mode 100644 index 0000000000..5a9715edf1 --- /dev/null +++ b/testing/btest/bifs/md5.test @@ -0,0 +1,19 @@ +# @TEST-EXEC: bro -b %INPUT >output +# @TEST-EXEC: btest-diff output + +print md5_hash("one"); +print md5_hash("one", "two", "three"); + +md5_hash_init("a"); +md5_hash_init("b"); + +md5_hash_update("a", "one"); +md5_hash_update("b", "one"); +md5_hash_update("b", "two"); +md5_hash_update("b", "three"); + +print md5_hash_finish("a"); +print md5_hash_finish("b"); + +print md5_hmac("one"); +print md5_hmac("one", "two", "three"); diff --git a/testing/btest/bifs/merge_pattern.bro b/testing/btest/bifs/merge_pattern.bro new file mode 100644 index 0000000000..b447f9a15b --- /dev/null +++ b/testing/btest/bifs/merge_pattern.bro @@ -0,0 +1,17 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = /foo/; + local b = /b[a-z]+/; + local c = merge_pattern(a, b); + + if ( "bar" == c ) + print "match"; + + if ( "foo" == c ) + print "match"; + + } diff --git a/testing/btest/bifs/order.bro b/testing/btest/bifs/order.bro new file mode 100644 index 0000000000..333a8acac1 --- /dev/null +++ b/testing/btest/bifs/order.bro @@ -0,0 +1,49 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function myfunc1(a: addr, b: addr): int + { + local x = addr_to_counts(a); + local y = addr_to_counts(b); + if (x[0] < y[0]) + return -1; + else + return 1; + } + +function myfunc2(a: double, b: double): int + { + if (a < b) + return -1; + else + return 1; + } + +event bro_init() + { + + # Tests without supplying a comparison function + + local a1 = vector( 5, 2, 8, 3 ); + local b1 = order(a1); + print a1; + print b1; + + local a2: vector of interval = vector( 5hr, 2days, 1sec, -7min ); + local b2 = order(a2); + print a2; + print b2; + + # Tests with a comparison function + + local c1: vector of addr = vector( 192.168.123.200, 10.0.0.157, 192.168.0.3 ); + local d1 = order(c1, myfunc1); + print c1; + print d1; + + local c2: vector of double = vector( 3.03, 3.01, 3.02, 3.015 ); + local d2 = order(c2, myfunc2); + print c2; + print d2; + } diff --git a/testing/btest/bifs/parse_ftp.bro b/testing/btest/bifs/parse_ftp.bro new file mode 100644 index 0000000000..ffdc941b4b --- /dev/null +++ b/testing/btest/bifs/parse_ftp.bro @@ -0,0 +1,15 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print parse_ftp_port("192,168,0,2,1,1"); + + print parse_eftp_port("|1|192.168.0.2|257|"); + print parse_eftp_port("|2|fe80::12|1234|"); + + print parse_ftp_pasv("227 Entering Passive Mode (192,168,0,2,1,1)"); + + print parse_ftp_epsv("229 Entering Extended Passive Mode (|||1234|)"); + } diff --git a/testing/btest/bifs/piped_exec.bro b/testing/btest/bifs/piped_exec.bro index 32fd5c5f80..3a76eba8f5 100644 --- a/testing/btest/bifs/piped_exec.bro +++ b/testing/btest/bifs/piped_exec.bro @@ -5,8 +5,10 @@ global cmds = "print \"hello world\";"; cmds = string_cat(cmds, "\nprint \"foobar\";"); -piped_exec("bro", cmds); +if ( piped_exec("bro", cmds) != T ) + exit(1); # Test null output. 
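+# As above, the return value is checked so a failed command makes the test exit non-zero.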
-piped_exec("cat > test.txt", "\x00\x00hello\x00\x00"); +if ( piped_exec("cat > test.txt", "\x00\x00hello\x00\x00") != T ) + exit(1); diff --git a/testing/btest/bifs/ptr_name_to_addr.bro b/testing/btest/bifs/ptr_name_to_addr.bro new file mode 100644 index 0000000000..89679ba57a --- /dev/null +++ b/testing/btest/bifs/ptr_name_to_addr.bro @@ -0,0 +1,8 @@ +# @TEST-EXEC: bro %INPUT >output +# @TEST-EXEC: btest-diff output + +global v6 = ptr_name_to_addr("2.1.0.1.0.0.0.0.0.0.0.0.0.0.0.0.2.0.8.0.9.0.0.4.0.b.8.f.7.0.6.2.ip6.arpa"); +global v4 = ptr_name_to_addr("52.225.125.74.in-addr.arpa"); + +print v6; +print v4; \ No newline at end of file diff --git a/testing/btest/bifs/rand.bro b/testing/btest/bifs/rand.bro new file mode 100644 index 0000000000..caf3f16031 --- /dev/null +++ b/testing/btest/bifs/rand.bro @@ -0,0 +1,29 @@ +# +# @TEST-EXEC: bro -b %INPUT >out +# @TEST-EXEC: bro -b %INPUT do_seed=F >out.2 +# @TEST-EXEC: btest-diff out +# @TEST-EXEC: btest-diff out.2 + +const do_seed = T &redef; + +event bro_init() + { + local a = rand(1000); + local b = rand(1000); + local c = rand(1000); + + print a; + print b; + print c; + + if ( do_seed ) + srand(575); + + local d = rand(1000); + local e = rand(1000); + local f = rand(1000); + + print d; + print e; + print f; + } diff --git a/testing/btest/bifs/raw_bytes_to_v4_addr.bro b/testing/btest/bifs/raw_bytes_to_v4_addr.bro new file mode 100644 index 0000000000..754580a5b0 --- /dev/null +++ b/testing/btest/bifs/raw_bytes_to_v4_addr.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print raw_bytes_to_v4_addr("ABCD"); + print raw_bytes_to_v4_addr("ABC"); + } diff --git a/testing/btest/bifs/reading_traces.bro b/testing/btest/bifs/reading_traces.bro new file mode 100644 index 0000000000..fc83c50ccb --- /dev/null +++ b/testing/btest/bifs/reading_traces.bro @@ -0,0 +1,10 @@ + +# @TEST-EXEC: bro %INPUT >out1 +# @TEST-EXEC: btest-diff out1 +# @TEST-EXEC: bro -r $TRACES/web.trace %INPUT >out2 +# @TEST-EXEC: btest-diff out2 + +event bro_init() + { + print reading_traces(); + } diff --git a/testing/btest/bifs/record_type_to_vector.bro b/testing/btest/bifs/record_type_to_vector.bro new file mode 100644 index 0000000000..18ddf35022 --- /dev/null +++ b/testing/btest/bifs/record_type_to_vector.bro @@ -0,0 +1,13 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +type myrecord: record { + ct: count; + str1: string; +}; + +event bro_init() + { + print record_type_to_vector("myrecord"); + } diff --git a/testing/btest/bifs/remask_addr.bro b/testing/btest/bifs/remask_addr.bro new file mode 100644 index 0000000000..d387667b6a --- /dev/null +++ b/testing/btest/bifs/remask_addr.bro @@ -0,0 +1,10 @@ +# @TEST-EXEC: bro %INPUT >output +# @TEST-EXEC: btest-diff output + +const one_to_32: vector of count = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32}; + +for ( i in one_to_32 ) + { + print fmt("%s: %s", one_to_32[i], + remask_addr(0.0.255.255, 255.255.0.0, 96+one_to_32[i])); + } diff --git a/testing/btest/bifs/resize.bro b/testing/btest/bifs/resize.bro new file mode 100644 index 0000000000..37e4ac38d9 --- /dev/null +++ b/testing/btest/bifs/resize.bro @@ -0,0 +1,26 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = vector( 5, 3, 8 ); + + print |a|; + + if ( resize(a, 5) != 3 ) + exit(1); + + print |a|; + + if ( resize(a, 0) != 5 ) + exit(1); + + print |a|; + + if ( resize(a, 7) != 0 ) + exit(1); + + print |a|; 
+ + } diff --git a/testing/btest/bifs/resource_usage.bro b/testing/btest/bifs/resource_usage.bro new file mode 100644 index 0000000000..35f5b020d6 --- /dev/null +++ b/testing/btest/bifs/resource_usage.bro @@ -0,0 +1,9 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = resource_usage(); + if ( a$version != bro_version() ) + exit(1); + } diff --git a/testing/btest/bifs/rotate_file.bro b/testing/btest/bifs/rotate_file.bro new file mode 100644 index 0000000000..7132b0aaa8 --- /dev/null +++ b/testing/btest/bifs/rotate_file.bro @@ -0,0 +1,15 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = open("testfile"); + write_file(a, "this is a test\n"); + + local b = rotate_file(a); + if ( b$new_name != "testfile" ) + print "file rotated"; + print file_size(b$new_name); + print file_size("testfile"); + } diff --git a/testing/btest/bifs/rotate_file_by_name.bro b/testing/btest/bifs/rotate_file_by_name.bro new file mode 100644 index 0000000000..952b09aff3 --- /dev/null +++ b/testing/btest/bifs/rotate_file_by_name.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = open("testfile"); + write_file(a, "this is a test\n"); + close(a); + + local b = rotate_file_by_name("testfile"); + if ( b$new_name != "testfile" ) + print "file rotated"; + print file_size(b$new_name); + print file_size("testfile"); + } diff --git a/testing/btest/bifs/routing0_data_to_addrs.test b/testing/btest/bifs/routing0_data_to_addrs.test new file mode 100644 index 0000000000..a20bb3bf59 --- /dev/null +++ b/testing/btest/bifs/routing0_data_to_addrs.test @@ -0,0 +1,10 @@ +# @TEST-EXEC: bro -b -r $TRACES/ipv6-hbh-routing0.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +event ipv6_ext_headers(c: connection, p: pkt_hdr) + { + for ( h in p$ip6$exts ) + if ( p$ip6$exts[h]$id == IPPROTO_ROUTING ) + if ( p$ip6$exts[h]$routing$rtype == 0 ) + print routing0_data_to_addrs(p$ip6$exts[h]$routing$data); + } diff --git a/testing/btest/bifs/same_object.bro b/testing/btest/bifs/same_object.bro new file mode 100644 index 0000000000..eee8b1621d --- /dev/null +++ b/testing/btest/bifs/same_object.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "This is a test"; + local b: string; + local c = "This is a test"; + b = a; + print same_object(a, b); + print same_object(a, c); + + local d = vector(1, 2, 3); + print same_object(a, d); + } diff --git a/testing/btest/bifs/sha1.test b/testing/btest/bifs/sha1.test new file mode 100644 index 0000000000..85c8df99c5 --- /dev/null +++ b/testing/btest/bifs/sha1.test @@ -0,0 +1,16 @@ +# @TEST-EXEC: bro -b %INPUT >output +# @TEST-EXEC: btest-diff output + +print sha1_hash("one"); +print sha1_hash("one", "two", "three"); + +sha1_hash_init("a"); +sha1_hash_init("b"); + +sha1_hash_update("a", "one"); +sha1_hash_update("b", "one"); +sha1_hash_update("b", "two"); +sha1_hash_update("b", "three"); + +print sha1_hash_finish("a"); +print sha1_hash_finish("b"); diff --git a/testing/btest/bifs/sha256.test b/testing/btest/bifs/sha256.test new file mode 100644 index 0000000000..7451f2fad3 --- /dev/null +++ b/testing/btest/bifs/sha256.test @@ -0,0 +1,16 @@ +# @TEST-EXEC: bro -b %INPUT >output +# @TEST-EXEC: btest-diff output + +print sha256_hash("one"); +print sha256_hash("one", "two", "three"); + +sha256_hash_init("a"); +sha256_hash_init("b"); + +sha256_hash_update("a", "one"); +sha256_hash_update("b", "one"); 
+sha256_hash_update("b", "two"); +sha256_hash_update("b", "three"); + +print sha256_hash_finish("a"); +print sha256_hash_finish("b"); diff --git a/testing/btest/bifs/sort.bro b/testing/btest/bifs/sort.bro new file mode 100644 index 0000000000..14aa286021 --- /dev/null +++ b/testing/btest/bifs/sort.bro @@ -0,0 +1,70 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function myfunc1(a: addr, b: addr): int + { + local x = addr_to_counts(a); + local y = addr_to_counts(b); + if (x[0] < y[0]) + return -1; + else + return 1; + } + +function myfunc2(a: double, b: double): int + { + if (a < b) + return -1; + else + return 1; + } + +event bro_init() + { + # Tests without supplying a comparison function + + local a1 = vector( 5, 2, 8, 3 ); + local b1 = sort(a1); + print a1; + print b1; + + local a2: vector of interval = vector( 5hr, 2days, 1sec, -7min ); + local b2 = sort(a2); + print a2; + print b2; + + local a3: vector of bool = vector( T, F, F, T ); + local b3 = sort(a3); + print a3; + print b3; + + local a4: vector of port = vector( 12/icmp, 123/tcp, 500/udp, 7/udp, 57/tcp ); + local b4 = sort(a4); + print a4; + print b4; + + # this one is expected to fail (i.e., "sort" doesn't sort the vector) + local a5: vector of double = vector( 3.03, 3.01, 3.02, 3.015 ); + local b5 = sort(a5); + print a5; + print b5; + + # this one is expected to fail (i.e., "sort" doesn't sort the vector) + local a6: vector of addr = vector( 192.168.123.200, 10.0.0.157, 192.168.0.3 ); + local b6 = sort(a6); + print a6; + print b6; + + # Tests with a comparison function + + local c1: vector of addr = vector( 192.168.123.200, 10.0.0.157, 192.168.0.3 ); + local d1 = sort(c1, myfunc1); + print c1; + print d1; + + local c2: vector of double = vector( 3.03, 3.01, 3.02, 3.015 ); + local d2 = sort(c2, myfunc2); + print c2; + print d2; + } diff --git a/testing/btest/bifs/sort_string_array.bro b/testing/btest/bifs/sort_string_array.bro new file mode 100644 index 0000000000..23c4f55848 --- /dev/null +++ b/testing/btest/bifs/sort_string_array.bro @@ -0,0 +1,17 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a: string_array = { + [1] = "this", [2] = "is", [3] = "a", [4] = "test" + }; + + local b = sort_string_array(a); + + print b[1]; + print b[2]; + print b[3]; + print b[4]; + } diff --git a/testing/btest/bifs/split.bro b/testing/btest/bifs/split.bro new file mode 100644 index 0000000000..fc1b5e96a0 --- /dev/null +++ b/testing/btest/bifs/split.bro @@ -0,0 +1,59 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is a test"; + local pat = /hi|es/; + local idx = vector( 3, 6, 13); + + local b = split(a, pat); + local c = split1(a, pat); + local d = split_all(a, pat); + local e1 = split_n(a, pat, F, 1); + local e2 = split_n(a, pat, T, 1); + + print b[1]; + print b[2]; + print b[3]; + print b[4]; + print "---------------------"; + print c[1]; + print c[2]; + print "---------------------"; + print d[1]; + print d[2]; + print d[3]; + print d[4]; + print d[5]; + print "---------------------"; + print e1[1]; + print e1[2]; + print "---------------------"; + print e2[1]; + print e2[2]; + print e2[3]; + print "---------------------"; + print str_split(a, idx); + print "---------------------"; + + a = "X-Mailer: Testing Test (http://www.example.com)"; + pat = /:[[:blank:]]*/; + local f = split1(a, pat); + + print f[1]; + print f[2]; + print "---------------------"; + + a = "A = B = C = D"; + pat = /=/; + local g = 
split_all(a, pat); + print g[1]; + print g[2]; + print g[3]; + print g[4]; + print g[5]; + print g[6]; + print g[7]; + } diff --git a/testing/btest/bifs/str_shell_escape.bro b/testing/btest/bifs/str_shell_escape.bro new file mode 100644 index 0000000000..a71cb4dcf6 --- /dev/null +++ b/testing/btest/bifs/str_shell_escape.bro @@ -0,0 +1,15 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "echo ${TEST} > \"my file\""; + + print |a|; + print a; + + local b = str_shell_escape(a); + print |b|; + print b; + } diff --git a/testing/btest/bifs/strcmp.bro b/testing/btest/bifs/strcmp.bro new file mode 100644 index 0000000000..af46c7fa96 --- /dev/null +++ b/testing/btest/bifs/strcmp.bro @@ -0,0 +1,13 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this"; + local b = "testing"; + + print strcmp(a, b) > 0; + print strcmp(b, a) < 0; + print strcmp(a, a) == 0; + } diff --git a/testing/btest/bifs/strftime.bro b/testing/btest/bifs/strftime.bro new file mode 100644 index 0000000000..31f9538632 --- /dev/null +++ b/testing/btest/bifs/strftime.bro @@ -0,0 +1,17 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local f1 = "%Y-%m-%d %H:%M:%S"; + local f2 = "%H%M%S %Y%m%d"; + + local a = double_to_time(0); + print strftime(f1, a); + print strftime(f2, a); + + a = double_to_time(123456789); + print strftime(f1, a); + print strftime(f2, a); + } diff --git a/testing/btest/bifs/string_fill.bro b/testing/btest/bifs/string_fill.bro new file mode 100644 index 0000000000..c47f1916cc --- /dev/null +++ b/testing/btest/bifs/string_fill.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "test "; + + local b = string_fill(1, a); + local c = string_fill(2, a); + local d = string_fill(10, a); + + print fmt("*%s* %d", b, |b|); + print fmt("*%s* %d", c, |c|); + print fmt("*%s* %d", d, |d|); + } diff --git a/testing/btest/bifs/string_splitting.bro b/testing/btest/bifs/string_splitting.bro deleted file mode 100644 index 44068fe510..0000000000 --- a/testing/btest/bifs/string_splitting.bro +++ /dev/null @@ -1,12 +0,0 @@ -# -# @TEST-EXEC: bro %INPUT >out -# @TEST-EXEC: btest-diff out - -event bro_init() - { - local a = "X-Mailer: Testing Test (http://www.example.com)"; - print split1(a, /:[[:blank:]]*/); - - a = "A = B = C = D"; - print split_all(a, /=/); - } diff --git a/testing/btest/bifs/string_to_pattern.bro b/testing/btest/bifs/string_to_pattern.bro new file mode 100644 index 0000000000..5164c4576f --- /dev/null +++ b/testing/btest/bifs/string_to_pattern.bro @@ -0,0 +1,14 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print string_to_pattern("foo", F); + print string_to_pattern("", F); + print string_to_pattern("b[a-z]+", F); + + print string_to_pattern("foo", T); + print string_to_pattern("", T); + print string_to_pattern("b[a-z]+", T); + } diff --git a/testing/btest/bifs/strip.bro b/testing/btest/bifs/strip.bro new file mode 100644 index 0000000000..de6601b83c --- /dev/null +++ b/testing/btest/bifs/strip.bro @@ -0,0 +1,17 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = " this is a test "; + local b = ""; + local c = " "; + + print fmt("*%s*", a); + print fmt("*%s*", strip(a)); + print fmt("*%s*", b); + print fmt("*%s*", strip(b)); + print fmt("*%s*", c); + print fmt("*%s*", strip(c)); + } diff --git 
a/testing/btest/bifs/strptime.bro b/testing/btest/bifs/strptime.bro new file mode 100644 index 0000000000..7a58989679 --- /dev/null +++ b/testing/btest/bifs/strptime.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT +# @TEST-EXEC: btest-diff .stdout +# @TEST-EXEC: btest-diff reporter.log + +event bro_init() + { + print strptime("%Y-%m-%d", "2012-10-19"); + print strptime("%m", "1980-10-24"); + } \ No newline at end of file diff --git a/testing/btest/bifs/strstr.bro b/testing/btest/bifs/strstr.bro new file mode 100644 index 0000000000..58f79d593b --- /dev/null +++ b/testing/btest/bifs/strstr.bro @@ -0,0 +1,13 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is a test"; + local b = "his"; + local c = "are"; + + print strstr(a, b); + print strstr(a, c); + } diff --git a/testing/btest/bifs/sub.bro b/testing/btest/bifs/sub.bro new file mode 100644 index 0000000000..f6a956f26a --- /dev/null +++ b/testing/btest/bifs/sub.bro @@ -0,0 +1,12 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is a test"; + local pat = /is|ss/; + + print sub(a, pat, "at"); + print gsub(a, pat, "at"); + } diff --git a/testing/btest/bifs/subst_string.bro b/testing/btest/bifs/subst_string.bro new file mode 100644 index 0000000000..81a3f89424 --- /dev/null +++ b/testing/btest/bifs/subst_string.bro @@ -0,0 +1,12 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "this is another test"; + local b = "is"; + local c = "at"; + + print subst_string(a, b, c); + } diff --git a/testing/btest/bifs/system.bro b/testing/btest/bifs/system.bro new file mode 100644 index 0000000000..ab2642319c --- /dev/null +++ b/testing/btest/bifs/system.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = system("echo thistest > out"); + if ( a != 0 ) + exit(1); + } diff --git a/testing/btest/bifs/system_env.bro b/testing/btest/bifs/system_env.bro new file mode 100644 index 0000000000..23928e9b10 --- /dev/null +++ b/testing/btest/bifs/system_env.bro @@ -0,0 +1,23 @@ +# +# @TEST-EXEC: bro %INPUT +# @TEST-EXEC: btest-diff testfile + +event bro_init() + { + local vars: table[string] of string = { ["TESTBRO"] = "helloworld" }; + + # make sure the env. variable is not set + local myvar = getenv("BRO_ARG_TESTBRO"); + if ( |myvar| != 0 ) + exit(1); + + # check if command runs with the env. variable defined + local a = system_env("echo $BRO_ARG_TESTBRO > testfile", vars); + if ( a != 0 ) + exit(1); + + # make sure the env. variable is still not set + myvar = getenv("BRO_ARG_TESTBRO"); + if ( |myvar| != 0 ) + exit(1); + } diff --git a/testing/btest/bifs/to_addr.bro b/testing/btest/bifs/to_addr.bro new file mode 100644 index 0000000000..3a43438bb7 --- /dev/null +++ b/testing/btest/bifs/to_addr.bro @@ -0,0 +1,20 @@ +# @TEST-EXEC: bro -b %INPUT >output 2>error +# @TEST-EXEC: btest-diff output +# @TEST-EXEC: btest-diff error + +function test_to_addr(ip: string, expect: addr) + { + local result = to_addr(ip); + print fmt("to_addr(%s) = %s (%s)", ip, result, + result == expect ? 
"SUCCESS" : "FAILURE"); + } + +test_to_addr("0.0.0.0", 0.0.0.0); +test_to_addr("1.2.3.4", 1.2.3.4); +test_to_addr("01.02.03.04", 1.2.3.4); +test_to_addr("001.002.003.004", 1.2.3.4); +test_to_addr("10.20.30.40", 10.20.30.40); +test_to_addr("100.200.30.40", 100.200.30.40); +test_to_addr("10.0.0.0", 10.0.0.0); +test_to_addr("10.00.00.000", 10.0.0.0); +test_to_addr("not an IP", [::]); diff --git a/testing/btest/bifs/to_count.bro b/testing/btest/bifs/to_count.bro new file mode 100644 index 0000000000..c1fe72ce52 --- /dev/null +++ b/testing/btest/bifs/to_count.bro @@ -0,0 +1,27 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a: int = -2; + print int_to_count(a); + + local b: int = 2; + print int_to_count(b); + + local c: double = 3.14; + print double_to_count(c); + + local d: double = 3.9; + print double_to_count(d); + + print to_count("7"); + print to_count(""); + print to_count("-5"); + print to_count("not a count"); + + local e: port = 123/tcp; + print port_to_count(e); + + } diff --git a/testing/btest/bifs/to_double.bro b/testing/btest/bifs/to_double.bro new file mode 100644 index 0000000000..f13d34f69a --- /dev/null +++ b/testing/btest/bifs/to_double.bro @@ -0,0 +1,20 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 1 usec; + print interval_to_double(a); + local b = 1sec; + print interval_to_double(b); + local c = -1min; + print interval_to_double(c); + local d = 1hrs; + print interval_to_double(d); + local e = 1 day; + print interval_to_double(e); + + local f = current_time(); + print time_to_double(f); + } diff --git a/testing/btest/bifs/to_double_from_string.bro b/testing/btest/bifs/to_double_from_string.bro new file mode 100644 index 0000000000..781261084f --- /dev/null +++ b/testing/btest/bifs/to_double_from_string.bro @@ -0,0 +1,16 @@ +# @TEST-EXEC: bro -b %INPUT >output 2>error +# @TEST-EXEC: btest-diff output +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff error + +function test_to_double(d: string, expect: double) + { + local result = to_double(d); + print fmt("to_double(%s) = %s (%s)", d, result, + result == expect ? 
"SUCCESS" : "FAILURE"); + } + +test_to_double("3.14", 3.14); +test_to_double("-3.14", -3.14); +test_to_double("0", 0); +test_to_double("NotADouble", 0); +test_to_double("", 0); diff --git a/testing/btest/bifs/to_int.bro b/testing/btest/bifs/to_int.bro new file mode 100644 index 0000000000..9d108a9da7 --- /dev/null +++ b/testing/btest/bifs/to_int.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print to_int("1"); + print to_int("-1"); + print to_int("not an int"); + } diff --git a/testing/btest/bifs/to_interval.bro b/testing/btest/bifs/to_interval.bro new file mode 100644 index 0000000000..8fded315d2 --- /dev/null +++ b/testing/btest/bifs/to_interval.bro @@ -0,0 +1,11 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 1234563.14; + print double_to_interval(a); + local b = -1234563.14; + print double_to_interval(b); + } diff --git a/testing/btest/bifs/to_port.bro b/testing/btest/bifs/to_port.bro new file mode 100644 index 0000000000..382bf5d333 --- /dev/null +++ b/testing/btest/bifs/to_port.bro @@ -0,0 +1,18 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + print to_port("123/tcp"); + print to_port("123/udp"); + print to_port("123/icmp"); + print to_port("not a port"); + + local a: transport_proto = tcp; + local b: transport_proto = udp; + local c: transport_proto = icmp; + print count_to_port(256, a); + print count_to_port(256, b); + print count_to_port(256, c); + } diff --git a/testing/btest/bifs/to_subnet.bro b/testing/btest/bifs/to_subnet.bro new file mode 100644 index 0000000000..59064893e1 --- /dev/null +++ b/testing/btest/bifs/to_subnet.bro @@ -0,0 +1,11 @@ +# @TEST-EXEC: bro -b %INPUT >output 2>error +# @TEST-EXEC: btest-diff output +# @TEST-EXEC: btest-diff error + +global sn: subnet; +sn = to_subnet("10.0.0.0/8"); +print sn, sn == 10.0.0.0/8; +sn = to_subnet("2607:f8b0::/32"); +print sn, sn == [2607:f8b0::]/32; +sn = to_subnet("10.0.0.0"); +print sn, sn == [::]/0; diff --git a/testing/btest/bifs/to_time.bro b/testing/btest/bifs/to_time.bro new file mode 100644 index 0000000000..97b109e647 --- /dev/null +++ b/testing/btest/bifs/to_time.bro @@ -0,0 +1,11 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = 1234563.14; + print double_to_time(a); + local b = -1234563.14; + print double_to_time(b); + } diff --git a/testing/btest/bifs/type_name.bro b/testing/btest/bifs/type_name.bro new file mode 100644 index 0000000000..3ec13fb27d --- /dev/null +++ b/testing/btest/bifs/type_name.bro @@ -0,0 +1,73 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +type color: enum { Red, Blue }; + +type myrecord: record { + c: count; + s: string; +}; + +event bro_init() + { + local a = "foo"; + local b = 3; + local c = -3; + local d = 3.14; + local e = T; + local f = current_time(); + local g = 5hr; + local h = /^foo|bar/; + local i = Blue; + local j = 123/tcp; + local k = 192.168.0.2; + local l = [fe80::1]; + local m = 192.168.0.0/16; + local n = [fe80:1234::]/32; + local o = vector( 1, 2, 3); + local p: vector of table[count] of string = vector( + table( [1] = "test", [2] = "bro" ), + table( [1] = "another", [2] = "test" ) ); + local q = set( 1, 2, 3); + local r: set[port, string] = set( [21/tcp, "ftp"], [23/tcp, "telnet"] ); + local s: table[count] of string = { [1] = "test", [2] = "bro" }; + local t: table[string] of table[addr, port] of string = { + ["a"] = table( [192.168.0.2, 
21/tcp] = "ftp", + [192.168.0.3, 80/tcp] = "http" ), + ["b"] = table( [192.168.0.2, 22/tcp] = "ssh" ) }; + local u: myrecord = [ $c = 2, $s = "another test" ]; + local v = function(aa: int, bb: int): bool { return aa < bb; }; + local w = function(): any { }; + local x = function() { }; + local y = open("deleteme"); + + print type_name(a); + print type_name(b); + print type_name(c); + print type_name(d); + print type_name(e); + print type_name(f); + print type_name(g); + print type_name(h); + print type_name(i); + print type_name(j); + print type_name(k); + print type_name(l); + print type_name(m); + print type_name(n); + print type_name(o); + print type_name(p); + print type_name(q); + print type_name(r); + print type_name(s); + print type_name(t); + print type_name(u); + print type_name(v); + print type_name(w); + print type_name(x); + print type_name(y); # result is "file of string" which is a bit odd; + # we should remove the (apparently unused) type argument + # from files. + print type_name(bro_init); + } diff --git a/testing/btest/bifs/uuid_to_string.bro b/testing/btest/bifs/uuid_to_string.bro new file mode 100644 index 0000000000..a64e81d783 --- /dev/null +++ b/testing/btest/bifs/uuid_to_string.bro @@ -0,0 +1,10 @@ +# +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +event bro_init() + { + local a = "\xfe\x80abcdefg0123456"; + print uuid_to_string(a); + print uuid_to_string(""); + } diff --git a/testing/btest/bifs/val_size.bro b/testing/btest/bifs/val_size.bro new file mode 100644 index 0000000000..5b2e535c5c --- /dev/null +++ b/testing/btest/bifs/val_size.bro @@ -0,0 +1,16 @@ +# +# @TEST-EXEC: bro %INPUT + +event bro_init() + { + local a = T; + local b = 12; + local c: table[string] of addr = { ["a"] = 192.168.0.2, ["b"] = 10.0.0.2 }; + + if ( val_size(a) > val_size(b) ) + exit(1); + + if ( val_size(b) > val_size(c) ) + exit(1); + + } diff --git a/testing/btest/btest.cfg b/testing/btest/btest.cfg index 7d8283587c..d86b45d8a9 100644 --- a/testing/btest/btest.cfg +++ b/testing/btest/btest.cfg @@ -1,18 +1,21 @@ [btest] -TestDirs = doc bifs language core scripts istate coverage +TestDirs = doc bifs language core scripts istate coverage signatures TmpDir = %(testbase)s/.tmp BaselineDir = %(testbase)s/Baseline IgnoreDirs = .svn CVS .tmp -IgnoreFiles = *.tmp *.swp #* *.trace +IgnoreFiles = *.tmp *.swp #* *.trace .DS_Store [environment] BROPATH=`bash -c %(testbase)s/../../build/bro-path-dev` BRO_SEED_FILE=%(testbase)s/random.seed TZ=UTC LC_ALL=C -PATH=%(testbase)s/../../build/src:%(testbase)s/../../aux/btest:%(default_path)s +BTEST_PATH=%(testbase)s/../../aux/btest +PATH=%(testbase)s/../../build/src:%(testbase)s/../scripts:%(testbase)s/../../aux/btest:%(default_path)s TRACES=%(testbase)s/Traces SCRIPTS=%(testbase)s/../scripts DIST=%(testbase)s/../.. 
BUILD=%(testbase)s/../../build TEST_DIFF_CANONIFIER=$SCRIPTS/diff-canonifier +TMPDIR=%(testbase)s/.tmp +BRO_PROFILER_FILE=%(testbase)s/.tmp/script-coverage.XXXXXX diff --git a/testing/btest/core/checksums.test b/testing/btest/core/checksums.test new file mode 100644 index 0000000000..77fe2a62d3 --- /dev/null +++ b/testing/btest/core/checksums.test @@ -0,0 +1,42 @@ +# @TEST-EXEC: bro -r $TRACES/chksums/ip4-bad-chksum.pcap +# @TEST-EXEC: mv weird.log bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip4-tcp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip4-udp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip4-icmp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-route0-tcp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-route0-udp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-route0-icmp6-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-tcp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-udp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-icmp6-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out + +# @TEST-EXEC: bro -r $TRACES/chksums/ip4-tcp-good-chksum.pcap +# @TEST-EXEC: mv weird.log good.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip4-udp-good-chksum.pcap +# @TEST-EXEC: test ! -e weird.log +# @TEST-EXEC: bro -r $TRACES/chksums/ip4-icmp-good-chksum.pcap +# @TEST-EXEC: test ! -e weird.log +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-route0-tcp-good-chksum.pcap +# @TEST-EXEC: cat weird.log >> good.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-route0-udp-good-chksum.pcap +# @TEST-EXEC: cat weird.log >> good.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-route0-icmp6-good-chksum.pcap +# @TEST-EXEC: cat weird.log >> good.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-tcp-good-chksum.pcap +# @TEST-EXEC: cat weird.log >> good.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-udp-good-chksum.pcap +# @TEST-EXEC: cat weird.log >> good.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-icmp6-good-chksum.pcap +# @TEST-EXEC: cat weird.log >> good.out + +# @TEST-EXEC: btest-diff bad.out +# @TEST-EXEC: btest-diff good.out diff --git a/testing/btest/core/conn-uid.bro b/testing/btest/core/conn-uid.bro index b2078bc9f5..52ff8fc4d3 100644 --- a/testing/btest/core/conn-uid.bro +++ b/testing/btest/core/conn-uid.bro @@ -9,17 +9,6 @@ # @TEST-EXEC: unset BRO_SEED_FILE && bro -C -r $TRACES/wikipedia.trace %INPUT >output2 # @TEST-EXEC: cat output output2 | sort | uniq -c | wc -l | sed 's/ //g' >counts # @TEST-EXEC: btest-diff counts -# -# Make sure it works without the connection compressor as well. -# -# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace %INPUT use_connection_compressor=F >output.cc -# @TEST-EXEC: btest-diff output.cc -# -# Make sure it works with the full connection compressor as well. 
-# -# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace %INPUT cc_handle_only_syns=F >output.cc2 -# @TEST-EXEC: btest-diff output.cc2 - event new_connection(c: connection) { diff --git a/testing/btest/core/disable-mobile-ipv6.test b/testing/btest/core/disable-mobile-ipv6.test new file mode 100644 index 0000000000..5151a12b38 --- /dev/null +++ b/testing/btest/core/disable-mobile-ipv6.test @@ -0,0 +1,12 @@ +# @TEST-REQUIRES: grep -q "#undef ENABLE_MOBILE_IPV6" $BUILD/config.h +# @TEST-EXEC: bro -r $TRACES/mobile-ipv6/mip6_back.trace %INPUT +# @TEST-EXEC: btest-diff weird.log + +event mobile_ipv6_message(p: pkt_hdr) + { + if ( ! p?$ip6 ) return; + + for ( i in p$ip6$exts ) + if ( p$ip6$exts[i]$id == IPPROTO_MOBILITY ) + print p$ip6; + } diff --git a/testing/btest/core/discarder.bro b/testing/btest/core/discarder.bro new file mode 100644 index 0000000000..0c87eece18 --- /dev/null +++ b/testing/btest/core/discarder.bro @@ -0,0 +1,92 @@ +# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace discarder-ip.bro >output +# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace discarder-tcp.bro >>output +# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace discarder-udp.bro >>output +# @TEST-EXEC: bro -C -r $TRACES/icmp/icmp-destunreach-udp.pcap discarder-icmp.bro >>output +# @TEST-EXEC: btest-diff output + +@TEST-START-FILE discarder-ip.bro + +event bro_init() + { + print "################ IP Discarder ################"; + } + +function discarder_check_ip(p: pkt_hdr): bool + { + if ( p?$ip && p$ip$src == 141.142.220.118 && p$ip$dst == 208.80.152.2 ) + return F; + return T; + } + + +event new_packet(c: connection, p: pkt_hdr) + { + print c$id; + } + +@TEST-END-FILE + +@TEST-START-FILE discarder-tcp.bro + +event bro_init() + { + print "################ TCP Discarder ################"; + } + +function discarder_check_tcp(p: pkt_hdr, d: string): bool + { + if ( p$tcp$flags == TH_SYN ) + return F; + return T; + } + +event new_packet(c: connection, p: pkt_hdr) + { + if ( p?$tcp ) + print c$id; + } + +@TEST-END-FILE + +@TEST-START-FILE discarder-udp.bro + +event bro_init() + { + print "################ UDP Discarder ################"; + } + +function discarder_check_udp(p: pkt_hdr, d: string): bool + { + if ( p?$ip6 ) + return F; + return T; + } + +event new_packet(c: connection, p: pkt_hdr) + { + if ( p?$udp ) + print c$id; + } + +@TEST-END-FILE + +@TEST-START-FILE discarder-icmp.bro + +event bro_init() + { + print "################ ICMP Discarder ################"; + } + +function discarder_check_icmp(p: pkt_hdr): bool + { + print fmt("Discard icmp packet: %s", p$icmp); + return T; + } + +event new_packet(c: connection, p: pkt_hdr) + { + if ( p?$icmp ) + print c$id; + } + +@TEST-END-FILE diff --git a/testing/btest/core/dns-init.bro b/testing/btest/core/dns-init.bro new file mode 100644 index 0000000000..5a7efff6fb --- /dev/null +++ b/testing/btest/core/dns-init.bro @@ -0,0 +1,9 @@ +# We once had a bug where DNS lookups at init time lead to an immediate crash. +# +# @TEST-EXEC: bro %INPUT >output 2>&1 +# @TEST-EXEC: btest-diff output + +const foo: set[addr] = { + google.com +}; + diff --git a/testing/btest/core/expr-exception.bro b/testing/btest/core/expr-exception.bro index 5225f092ba..9e84717935 100644 --- a/testing/btest/core/expr-exception.bro +++ b/testing/btest/core/expr-exception.bro @@ -1,9 +1,25 @@ -# Bro shouldn't crash when doing nothing, nor outputting anything. +# Expressions in an event handler that raise interpreter exceptions +# shouldn't abort Bro entirely, but just return from the function body. 
# -# @TEST-EXEC: cat /dev/null | bro -r $TRACES/wikipedia.trace %INPUT -# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff reporter.log +# @TEST-EXEC: bro -r $TRACES/wikipedia.trace %INPUT >output +# @TEST-EXEC: TEST_DIFF_CANONIFIER="$SCRIPTS/diff-remove-abspath | $SCRIPTS/diff-remove-timestamps" btest-diff reporter.log +# @TEST-EXEC: btest-diff output event connection_established(c: connection) { print c$ftp; + print "not reached"; + } + +event connection_established(c: connection) + { + if ( c?$ftp ) + print c$ftp; + else + print "ftp field missing"; + } + +event connection_established(c: connection) + { + print c$id; } diff --git a/testing/btest/core/file-caching-serialization.test b/testing/btest/core/file-caching-serialization.test new file mode 100644 index 0000000000..7ff1d8be8d --- /dev/null +++ b/testing/btest/core/file-caching-serialization.test @@ -0,0 +1,49 @@ +# This checks that the interactions between open-file caching and +# serialization works ok. In the first case, all files can fit +# in the cache, but get serialized before every write. In the +# second case, files are eventually forced out of the cache and +# undergo serialization, which requires re-opening. + +# @TEST-EXEC: bro -b %INPUT "test_file_prefix=one" +# @TEST-EXEC: btest-diff one0 +# @TEST-EXEC: btest-diff one1 +# @TEST-EXEC: btest-diff one2 +# @TEST-EXEC: bro -b %INPUT "test_file_prefix=two" "max_files_in_cache=2" +# @TEST-EXEC: btest-diff two0 +# @TEST-EXEC: btest-diff two1 +# @TEST-EXEC: btest-diff two2 + +const test_file_prefix = "" &redef; +global file_table: table[string] of file; +global iterations: vector of count = vector(0,1,2,3,4,5,6,7,8); + +function write_to_file(c: count) + { + local f: file; + # Take turns writing across three output files. + local filename = fmt("%s%s", test_file_prefix, c % 3 ); + + if ( filename in file_table ) + f = file_table[filename]; + else + { + f = open(filename); + file_table[filename] = f; + } + + # This when block is a trick to get the frame cloned + # and thus serialize the local file value + when ( local s = fmt("write %d", c) ) + print f, s; + } + +event file_opened(f: file) + { + print f, "opened"; + } + +event bro_init() + { + for ( i in iterations ) + write_to_file(iterations[i]); + } diff --git a/testing/btest/core/icmp/icmp-context.test b/testing/btest/core/icmp/icmp-context.test new file mode 100644 index 0000000000..ca7a34c5aa --- /dev/null +++ b/testing/btest/core/icmp/icmp-context.test @@ -0,0 +1,14 @@ +# These tests all check that IPv6 context packet construction for ICMP6 works. + +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp-destunreach-no-context.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp-destunreach-ip.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp-destunreach-udp.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: btest-diff output + +event icmp_unreachable(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_unreachable (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } diff --git a/testing/btest/core/icmp/icmp-events.test b/testing/btest/core/icmp/icmp-events.test new file mode 100644 index 0000000000..1a54f05fba --- /dev/null +++ b/testing/btest/core/icmp/icmp-events.test @@ -0,0 +1,44 @@ +# These tests all check that ICMP6 events get raised with correct arguments. 
+ +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp-destunreach-udp.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp-timeexceeded.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp-ping.pcap %INPUT >>output 2>&1 + +# @TEST-EXEC: btest-diff output + +event icmp_sent(c: connection, icmp: icmp_conn) + { + print "icmp_sent"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + } + +event icmp_echo_request(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string) + { + print "icmp_echo_request (id=" + fmt("%d", id) + ", seq=" + fmt("%d", seq) + ", payload=" + payload + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + } + +event icmp_echo_reply(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string) + { + print "icmp_echo_reply (id=" + fmt("%d", id) + ", seq=" + fmt("%d", seq) + ", payload=" + payload + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + } + +event icmp_unreachable(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_unreachable (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } + +event icmp_time_exceeded(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_time_exceeded (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } diff --git a/testing/btest/core/icmp/icmp6-context.test b/testing/btest/core/icmp/icmp6-context.test new file mode 100644 index 0000000000..dfa8271cbc --- /dev/null +++ b/testing/btest/core/icmp/icmp6-context.test @@ -0,0 +1,15 @@ +# These tests all check that IPv6 context packet construction for ICMP6 works. + +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-destunreach-no-context.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-destunreach-ip6ext-trunc.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-destunreach-ip6ext-udp.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-destunreach-ip6ext.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: btest-diff output + +event icmp_unreachable(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_unreachable (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } diff --git a/testing/btest/core/icmp/icmp6-events.test b/testing/btest/core/icmp/icmp6-events.test new file mode 100644 index 0000000000..5263dd6e7f --- /dev/null +++ b/testing/btest/core/icmp/icmp6-events.test @@ -0,0 +1,128 @@ +# These tests all check that ICMP6 events get raised with correct arguments. 
+ +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-destunreach-ip6ext-udp.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-toobig.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-timeexceeded.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-paramprob.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-ping.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-redirect.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-router-advert.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-neighbor-advert.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-router-solicit.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-neighbor-solicit.pcap %INPUT >>output 2>&1 + +# @TEST-EXEC: btest-diff output + +event icmp_sent(c: connection, icmp: icmp_conn) + { + print "icmp_sent"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + } + +event icmp_echo_request(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string) + { + print "icmp_echo_request (id=" + fmt("%d", id) + ", seq=" + fmt("%d", seq) + ", payload=" + payload + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + } + +event icmp_echo_reply(c: connection, icmp: icmp_conn, id: count, seq: count, payload: string) + { + print "icmp_echo_reply (id=" + fmt("%d", id) + ", seq=" + fmt("%d", seq) + ", payload=" + payload + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + } + +event icmp_unreachable(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_unreachable (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } + +event icmp_packet_too_big(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_packet_too_big (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } + +event icmp_time_exceeded(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_time_exceeded (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } + +event icmp_parameter_problem(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_parameter_problem (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } + +event icmp_redirect(c: connection, icmp: icmp_conn, tgt: addr, dest: addr, options: icmp6_nd_options) + { + print "icmp_redirect (tgt=" + fmt("%s", tgt) + ", dest=" + fmt("%s", dest) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " options: " + fmt("%s", options); + } + +event icmp_error_message(c: connection, icmp: icmp_conn, code: count, context: icmp_context) + { + print "icmp_error_message (code=" + fmt("%d", code) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " icmp_context: " + fmt("%s", context); + } + +event icmp_neighbor_solicitation(c: connection, icmp: icmp_conn, tgt: addr, options: icmp6_nd_options) + { + print 
"icmp_neighbor_solicitation (tgt=" + fmt("%s", tgt) + ")"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " options: " + fmt("%s", options); + } + +event icmp_neighbor_advertisement(c: connection, icmp: icmp_conn, router: bool, solicited: bool, override: bool, tgt: addr, options: icmp6_nd_options) + { + print "icmp_neighbor_advertisement (tgt=" + fmt("%s", tgt) + ")"; + print " router=" + fmt("%s", router); + print " solicited=" + fmt("%s", solicited); + print " override=" + fmt("%s", override); + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " options: " + fmt("%s", options); + } + +event icmp_router_solicitation(c: connection, icmp: icmp_conn, options: icmp6_nd_options) + { + print "icmp_router_solicitation"; + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " options: " + fmt("%s", options); + } + +event icmp_router_advertisement(c: connection, icmp: icmp_conn, cur_hop_limit: count, managed: bool, other: bool, home_agent: bool, pref: count, proxy: bool, rsv: count, router_lifetime: interval, reachable_time: interval, retrans_timer: interval, options: icmp6_nd_options) + { + print "icmp_router_advertisement"; + print " cur_hop_limit=" + fmt("%s", cur_hop_limit); + print " managed=" + fmt("%s", managed); + print " other=" + fmt("%s", other); + print " home_agent=" + fmt("%s", home_agent); + print " pref=" + fmt("%s", pref); + print " proxy=" + fmt("%s", proxy); + print " rsv=" + fmt("%s", rsv); + print " router_lifetime=" + fmt("%s", router_lifetime); + print " reachable_time=" + fmt("%s", reachable_time); + print " retrans_timer=" + fmt("%s", retrans_timer); + print " conn_id: " + fmt("%s", c$id); + print " icmp_conn: " + fmt("%s", icmp); + print " options: " + fmt("%s", options); + } diff --git a/testing/btest/core/icmp/icmp6-nd-options.test b/testing/btest/core/icmp/icmp6-nd-options.test new file mode 100644 index 0000000000..64543852a3 --- /dev/null +++ b/testing/btest/core/icmp/icmp6-nd-options.test @@ -0,0 +1,35 @@ +# These tests all check that ICMP6 events get raised with correct arguments. 
+ +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-redirect-hdr-opt.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/icmp/icmp6-nd-options.pcap %INPUT >>output 2>&1 + +# @TEST-EXEC: btest-diff output + +event icmp_router_advertisement(c: connection, icmp: icmp_conn, cur_hop_limit: count, managed: bool, other: bool, home_agent: bool, pref: count, proxy: bool, rsv: count, router_lifetime: interval, reachable_time: interval, retrans_timer: interval, options: icmp6_nd_options) + { + print "icmp_router_advertisement options"; + for ( o in options ) + { + print fmt(" %s", options[o]); + if ( options[o]$otype == 1 && options[o]?$link_address ) + print fmt(" MAC: %s", + string_to_ascii_hex(options[o]$link_address)); + } + } + +event icmp_neighbor_advertisement(c: connection, icmp: icmp_conn, router: bool, solicited: bool, override: bool, tgt: addr, options: icmp6_nd_options) + { + print "icmp_neighbor_advertisement options"; + for ( o in options ) + { + print fmt(" %s", options[o]); + if ( options[o]$otype == 2 && options[o]?$link_address ) print fmt(" MAC: %s", string_to_ascii_hex(options[o]$link_address)); + } + } + +event icmp_redirect(c: connection, icmp: icmp_conn, tgt: addr, dest: addr, options: icmp6_nd_options) + { + print "icmp_redirect options"; + for ( o in options ) + print fmt(" %s", options[o]); + } diff --git a/testing/btest/core/ipv6-atomic-frag.test b/testing/btest/core/ipv6-atomic-frag.test new file mode 100644 index 0000000000..8c8fe6ca64 --- /dev/null +++ b/testing/btest/core/ipv6-atomic-frag.test @@ -0,0 +1,8 @@ +# @TEST-EXEC: bro -r $TRACES/ipv6-http-atomic-frag.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +event new_connection(c: connection) + { + if ( c$id$resp_p == 80/tcp ) + print c$id; + } diff --git a/testing/btest/core/ipv6-flow-labels.test b/testing/btest/core/ipv6-flow-labels.test new file mode 100644 index 0000000000..b4e60cb0a4 --- /dev/null +++ b/testing/btest/core/ipv6-flow-labels.test @@ -0,0 +1,32 @@ +# @TEST-EXEC: bro -b -r $TRACES/ipv6-ftp.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +function print_connection(c: connection, event_name: string) + { + print fmt("%s: %s", event_name, c$id); + print fmt(" orig_flow %d", c$orig$flow_label); + print fmt(" resp_flow %d", c$resp$flow_label); + } + +event new_connection(c: connection) + { + print_connection(c, "new_connection"); + } + +event connection_established(c: connection) + { + print_connection(c, "connection_established"); + } + +event connection_state_remove(c: connection) + { + print_connection(c, "connection_state_remove"); + } + +event connection_flow_label_changed(c: connection, is_orig: bool, + old_label: count, new_label: count) + { + print_connection(c, fmt("connection_flow_label_changed(%s)", is_orig ? 
"orig" : "resp")); + print fmt(" old_label %d", old_label); + print fmt(" new_label %d", new_label); + } diff --git a/testing/btest/core/ipv6-frag.test b/testing/btest/core/ipv6-frag.test new file mode 100644 index 0000000000..32c7c0a8c1 --- /dev/null +++ b/testing/btest/core/ipv6-frag.test @@ -0,0 +1,9 @@ +# @TEST-EXEC: bro -r $TRACES/ipv6-fragmented-dns.trace %INPUT >output +# @TEST-EXEC: btest-diff output +# @TEST-EXEC: btest-diff dns.log + +event new_packet(c: connection, p: pkt_hdr) + { + if ( p?$ip6 && p?$udp ) + print fmt("ip6=%s, udp = %s", p$ip6, p$udp); + } diff --git a/testing/btest/core/ipv6_esp.test b/testing/btest/core/ipv6_esp.test new file mode 100644 index 0000000000..8744df0036 --- /dev/null +++ b/testing/btest/core/ipv6_esp.test @@ -0,0 +1,11 @@ +# @TEST-EXEC: bro -r $TRACES/ip6_esp.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +# Just check that the event is raised correctly for a packet containing +# ESP extension headers. + +event esp_packet(p: pkt_hdr) + { + if ( p?$ip6 ) + print p$ip6; + } diff --git a/testing/btest/core/ipv6_ext_headers.test b/testing/btest/core/ipv6_ext_headers.test new file mode 100644 index 0000000000..32a0f5d558 --- /dev/null +++ b/testing/btest/core/ipv6_ext_headers.test @@ -0,0 +1,22 @@ +# @TEST-EXEC: bro -b -r $TRACES/ipv6-hbh-routing0.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +# Just check that the event is raised correctly for a packet containing +# extension headers. +event ipv6_ext_headers(c: connection, p: pkt_hdr) + { + print p; + } + +# Also check the weird for routing type 0 extensions headers +event flow_weird(name: string, src: addr, dst: addr) + { + print fmt("weird %s from %s to %s", name, src, dst); + } + +# And the connection for routing type 0 packets with non-zero segments left +# should use the last address in that extension header. +event new_connection(c: connection) + { + print c$id; + } diff --git a/testing/btest/core/ipv6_zero_len_ah.test b/testing/btest/core/ipv6_zero_len_ah.test new file mode 100644 index 0000000000..dc3acf8443 --- /dev/null +++ b/testing/btest/core/ipv6_zero_len_ah.test @@ -0,0 +1,11 @@ +# @TEST-EXEC: bro -r $TRACES/ipv6_zero_len_ah.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +# Shouldn't crash, but we also won't have seq and data fields set of the ip6_ah +# record. + +event ipv6_ext_headers(c: connection, p: pkt_hdr) + { + print c$id; + print p; + } diff --git a/testing/btest/core/leaks/ayiya.test b/testing/btest/core/leaks/ayiya.test new file mode 100644 index 0000000000..2093924c7a --- /dev/null +++ b/testing/btest/core/leaks/ayiya.test @@ -0,0 +1,7 @@ +# Needs perftools support. +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-GROUP: leaks +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -r $TRACES/tunnels/ayiya3.trace diff --git a/testing/btest/core/leaks/basic-cluster.bro b/testing/btest/core/leaks/basic-cluster.bro new file mode 100644 index 0000000000..319368bc6e --- /dev/null +++ b/testing/btest/core/leaks/basic-cluster.bro @@ -0,0 +1,83 @@ +# Needs perftools support. +# +# @TEST-SERIALIZE: comm +# @TEST-GROUP: leaks +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-EXEC: btest-bg-run manager-1 HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local BROPATH=$BROPATH:.. CLUSTER_NODE=manager-1 bro -m %INPUT +# @TEST-EXEC: btest-bg-run proxy-1 HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local BROPATH=$BROPATH:..
CLUSTER_NODE=proxy-1 bro -m %INPUT +# @TEST-EXEC: sleep 1 +# @TEST-EXEC: btest-bg-run worker-1 HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local BROPATH=$BROPATH:.. CLUSTER_NODE=worker-1 bro -m -r $TRACES/web.trace --pseudo-realtime %INPUT +# @TEST-EXEC: btest-bg-run worker-2 HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local BROPATH=$BROPATH:.. CLUSTER_NODE=worker-2 bro -m -r $TRACES/web.trace --pseudo-realtime %INPUT +# @TEST-EXEC: btest-bg-wait 60 +# @TEST-EXEC: btest-diff manager-1/metrics.log + +@TEST-START-FILE cluster-layout.bro +redef Cluster::nodes = { + ["manager-1"] = [$node_type=Cluster::MANAGER, $ip=127.0.0.1, $p=37757/tcp, $workers=set("worker-1", "worker-2")], + ["proxy-1"] = [$node_type=Cluster::PROXY, $ip=127.0.0.1, $p=37758/tcp, $manager="manager-1", $workers=set("worker-1", "worker-2")], + ["worker-1"] = [$node_type=Cluster::WORKER, $ip=127.0.0.1, $p=37760/tcp, $manager="manager-1", $proxy="proxy-1", $interface="eth0"], + ["worker-2"] = [$node_type=Cluster::WORKER, $ip=127.0.0.1, $p=37761/tcp, $manager="manager-1", $proxy="proxy-1", $interface="eth1"], +}; +@TEST-END-FILE + +redef Log::default_rotation_interval = 0secs; + +redef enum Metrics::ID += { + TEST_METRIC, +}; + +event bro_init() &priority=5 + { + Metrics::add_filter(TEST_METRIC, + [$name="foo-bar", + $break_interval=3secs]); + } + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +global ready_for_data: event(); + +redef Cluster::manager2worker_events += /ready_for_data/; + +@if ( Cluster::local_node_type() == Cluster::WORKER ) + +event ready_for_data() + { + Metrics::add_data(TEST_METRIC, [$host=1.2.3.4], 3); + Metrics::add_data(TEST_METRIC, [$host=6.5.4.3], 2); + Metrics::add_data(TEST_METRIC, [$host=7.2.1.5], 1); + } + +@endif + +@if ( Cluster::local_node_type() == Cluster::MANAGER ) + +global n = 0; +global peer_count = 0; + +event Metrics::log_metrics(rec: Metrics::Info) + { + n = n + 1; + if ( n == 3 ) + { + terminate_communication(); + terminate(); + } + } + +event remote_connection_handshake_done(p: event_peer) + { + print p; + peer_count = peer_count + 1; + if ( peer_count == 3 ) + { + event ready_for_data(); + } + } + +@endif diff --git a/testing/btest/core/leaks/dataseries-rotate.bro b/testing/btest/core/leaks/dataseries-rotate.bro new file mode 100644 index 0000000000..6a3b5550cc --- /dev/null +++ b/testing/btest/core/leaks/dataseries-rotate.bro @@ -0,0 +1,35 @@ +# +# @TEST-REQUIRES: has-writer DataSeries && which ds2txt +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-GROUP: leaks +# @TEST-GROUP: dataseries +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -b -r $TRACES/rotation.trace %INPUT Log::default_writer=Log::WRITER_DATASERIES + +module Test; + +export { + # Create a new ID for our log stream + redef enum Log::ID += { LOG }; + + # Define a record with all the columns the log file can have. + # (I'm using a subset of fields from ssh-ext for demonstration.) + type Log: record { + t: time; + id: conn_id; # Will be rolled out into individual columns. 
+ } &log; +} + +redef Log::default_rotation_interval = 1hr; +redef Log::default_rotation_postprocessor_cmd = "echo"; + +event bro_init() +{ + Log::create_stream(Test::LOG, [$columns=Log]); +} + +event new_connection(c: connection) + { + Log::write(Test::LOG, [$t=network_time(), $id=c$id]); + } diff --git a/testing/btest/core/leaks/dataseries.bro b/testing/btest/core/leaks/dataseries.bro new file mode 100644 index 0000000000..b72b880612 --- /dev/null +++ b/testing/btest/core/leaks/dataseries.bro @@ -0,0 +1,10 @@ +# Needs perftools support. +# +# @TEST-REQUIRES: has-writer DataSeries && which ds2txt +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-GROUP: leaks +# @TEST-GROUP: dataseries +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -r $TRACES/wikipedia.trace Log::default_writer=Log::WRITER_DATASERIES diff --git a/testing/btest/core/leaks/dns.bro b/testing/btest/core/leaks/dns.bro index 1dce9c2c82..2816750758 100644 --- a/testing/btest/core/leaks/dns.bro +++ b/testing/btest/core/leaks/dns.bro @@ -1,9 +1,15 @@ # Needs perftools support. # +# @TEST-GROUP: leaks +# # @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks # # @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -r $TRACES/wikipedia.trace %INPUT +const foo: set[addr] = { + google.com +}; + # Add the state tracking information variable to the connection record event connection_established(c: connection) diff --git a/testing/btest/core/leaks/gridftp.test b/testing/btest/core/leaks/gridftp.test new file mode 100644 index 0000000000..6364000b0d --- /dev/null +++ b/testing/btest/core/leaks/gridftp.test @@ -0,0 +1,24 @@ +# Needs perftools support. +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-GROUP: leaks +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -r $TRACES/globus-url-copy.trace %INPUT + +@load base/protocols/ftp/gridftp + +module GridFTP; + +redef size_threshold = 2; + +redef enum Notice::Type += { + Data_Channel +}; + +event GridFTP::data_channel_detected(c: connection) + { + local msg = fmt("GridFTP data channel over threshold %d bytes", + size_threshold); + NOTICE([$note=Data_Channel, $msg=msg, $conn=c]); + } diff --git a/testing/btest/core/leaks/incr-vec-expr.test b/testing/btest/core/leaks/incr-vec-expr.test new file mode 100644 index 0000000000..d2b94a5e63 --- /dev/null +++ b/testing/btest/core/leaks/incr-vec-expr.test @@ -0,0 +1,35 @@ +# Needs perftools support. +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-GROUP: leaks +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -b -m -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT + +type rec: record { + a: count; + b: string; + c: vector of count; +}; + +global vec: vector of count = vector(0,0,0); + +global v: rec = [$a=0, $b="test", $c=vector(1,2,3)]; + +event new_connection(c: connection) + { + print vec; + print v; + + ++vec; + + print vec; + + ++v$a; + + print v; + + ++v$c; + + print v; + } diff --git a/testing/btest/core/leaks/ip-in-ip.test b/testing/btest/core/leaks/ip-in-ip.test new file mode 100644 index 0000000000..64fdf739f6 --- /dev/null +++ b/testing/btest/core/leaks/ip-in-ip.test @@ -0,0 +1,33 @@ +# Needs perftools support. +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-GROUP: leaks +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -b -r $TRACES/tunnels/6in6.pcap %INPUT >>output +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. 
HEAPCHECK=local bro -m -b -r $TRACES/tunnels/6in6in6.pcap %INPUT >>output +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -b -r $TRACES/tunnels/6in6-tunnel-change.pcap %INPUT >>output +# @TEST-EXEC: btest-diff output + +event new_connection(c: connection) + { + if ( c?$tunnel ) + { + print "new_connection: tunnel"; + print fmt(" conn_id: %s", c$id); + print fmt(" encap: %s", c$tunnel); + } + else + { + print "new_connection: no tunnel"; + } + } + +event tunnel_changed(c: connection, e: EncapsulatingConnVector) + { + print "tunnel_changed:"; + print fmt(" conn_id: %s", c$id); + if ( c?$tunnel ) + print fmt(" old: %s", c$tunnel); + print fmt(" new: %s", e); + } diff --git a/testing/btest/core/leaks/ipv6_ext_headers.test b/testing/btest/core/leaks/ipv6_ext_headers.test new file mode 100644 index 0000000000..3b2497655c --- /dev/null +++ b/testing/btest/core/leaks/ipv6_ext_headers.test @@ -0,0 +1,37 @@ +# Needs perftools support. +# +# @TEST-GROUP: leaks +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -b -r $TRACES/ipv6-hbh-routing0.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +# Just check that the event is raised correctly for a packet containing +# extension headers. +event ipv6_ext_headers(c: connection, p: pkt_hdr) + { + print p; + } + +# Also check the weird for routing type 0 extensions headers +event flow_weird(name: string, src: addr, dst: addr) + { + print fmt("weird %s from %s to %s", name, src, dst); + } + +# And the connection for routing type 0 packets with non-zero segments left +# should use the last address in that extension header. +event new_connection(c: connection) + { + print c$id; + } + +event ipv6_ext_headers(c: connection, p: pkt_hdr) + { + for ( h in p$ip6$exts ) + if ( p$ip6$exts[h]$id == IPPROTO_ROUTING ) + if ( p$ip6$exts[h]$routing$rtype == 0 ) + print routing0_data_to_addrs(p$ip6$exts[h]$routing$data); + } + diff --git a/testing/btest/core/leaks/remote.bro b/testing/btest/core/leaks/remote.bro new file mode 100644 index 0000000000..41bbaec076 --- /dev/null +++ b/testing/btest/core/leaks/remote.bro @@ -0,0 +1,97 @@ +# Needs perftools support. +# +# @TEST-SERIALIZE: comm +# @TEST-GROUP: leaks +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-EXEC: btest-bg-run sender HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -b -m --pseudo-realtime %INPUT ../sender.bro +# @TEST-EXEC: sleep 1 +# @TEST-EXEC: btest-bg-run receiver HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -b -m --pseudo-realtime %INPUT ../receiver.bro +# @TEST-EXEC: sleep 1 +# @TEST-EXEC: btest-bg-wait 30 +# @TEST-EXEC: btest-diff sender/test.log +# @TEST-EXEC: btest-diff sender/test.failure.log +# @TEST-EXEC: btest-diff sender/test.success.log +# @TEST-EXEC: ( cd sender && for i in *.log; do cat $i | $SCRIPTS/diff-remove-timestamps >c.$i; done ) +# @TEST-EXEC: ( cd receiver && for i in *.log; do cat $i | $SCRIPTS/diff-remove-timestamps >c.$i; done ) +# @TEST-EXEC: cmp receiver/c.test.log sender/c.test.log +# @TEST-EXEC: cmp receiver/c.test.failure.log sender/c.test.failure.log +# @TEST-EXEC: cmp receiver/c.test.success.log sender/c.test.success.log + +# This is the common part loaded by both sender and receiver. +module Test; + +export { + # Create a new ID for our log stream + redef enum Log::ID += { LOG }; + + # Define a record with all the columns the log file can have. + # (I'm using a subset of fields from ssh-ext for demonstration.) 
+ type Log: record { + t: time; + id: conn_id; # Will be rolled out into individual columns. + status: string &optional; + country: string &default="unknown"; + } &log; +} + +event bro_init() +{ + Log::create_stream(Test::LOG, [$columns=Log]); + Log::add_filter(Test::LOG, [$name="f1", $path="test.success", $pred=function(rec: Log): bool { return rec$status == "success"; }]); +} + +##### + +@TEST-START-FILE sender.bro + +@load frameworks/communication/listen + +module Test; + +function fail(rec: Log): bool + { + return rec$status != "success"; + } + +event remote_connection_handshake_done(p: event_peer) + { + Log::add_filter(Test::LOG, [$name="f2", $path="test.failure", $pred=fail]); + + local cid = [$orig_h=1.2.3.4, $orig_p=1234/tcp, $resp_h=2.3.4.5, $resp_p=80/tcp]; + + local r: Log = [$t=network_time(), $id=cid, $status="success"]; + + # Log something. + Log::write(Test::LOG, r); + Log::write(Test::LOG, [$t=network_time(), $id=cid, $status="failure", $country="US"]); + Log::write(Test::LOG, [$t=network_time(), $id=cid, $status="failure", $country="UK"]); + Log::write(Test::LOG, [$t=network_time(), $id=cid, $status="success", $country="BR"]); + Log::write(Test::LOG, [$t=network_time(), $id=cid, $status="failure", $country="MX"]); + disconnect(p); + } + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +@TEST-END-FILE + +@TEST-START-FILE receiver.bro + +##### + +@load base/frameworks/communication + +redef Communication::nodes += { + ["foo"] = [$host = 127.0.0.1, $connect=T, $request_logs=T] +}; + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +@TEST-END-FILE diff --git a/testing/btest/core/leaks/teredo.bro b/testing/btest/core/leaks/teredo.bro new file mode 100644 index 0000000000..be298f4d68 --- /dev/null +++ b/testing/btest/core/leaks/teredo.bro @@ -0,0 +1,37 @@ +# Needs perftools support. +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# @TEST-GROUP: leaks +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -r $TRACES/tunnels/Teredo.pcap %INPUT >output + +function print_teredo(name: string, outer: connection, inner: teredo_hdr) + { + print fmt("%s: %s", name, outer$id); + print fmt(" ip6: %s", inner$hdr$ip6); + if ( inner?$auth ) + print fmt(" auth: %s", inner$auth); + if ( inner?$origin ) + print fmt(" origin: %s", inner$origin); + } + +event teredo_packet(outer: connection, inner: teredo_hdr) + { + print_teredo("packet", outer, inner); + } + +event teredo_authentication(outer: connection, inner: teredo_hdr) + { + print_teredo("auth", outer, inner); + } + +event teredo_origin_indication(outer: connection, inner: teredo_hdr) + { + print_teredo("origin", outer, inner); + } + +event teredo_bubble(outer: connection, inner: teredo_hdr) + { + print_teredo("bubble", outer, inner); + } diff --git a/testing/btest/core/leaks/test-all.bro b/testing/btest/core/leaks/test-all.bro index 6e605372c9..f217cc229c 100644 --- a/testing/btest/core/leaks/test-all.bro +++ b/testing/btest/core/leaks/test-all.bro @@ -1,5 +1,7 @@ # Needs perftools support. # +# @TEST-GROUP: leaks +# # @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks # # @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -r $TRACES/wikipedia.trace test-all-policy diff --git a/testing/btest/core/leaks/vector-val-bifs.test b/testing/btest/core/leaks/vector-val-bifs.test new file mode 100644 index 0000000000..d42e273bc5 --- /dev/null +++ b/testing/btest/core/leaks/vector-val-bifs.test @@ -0,0 +1,28 @@ +# Needs perftools support. 
+# +# @TEST-GROUP: leaks +# +# @TEST-REQUIRES: bro --help 2>&1 | grep -q mem-leaks +# +# The BiFs used in this test originally didn't call the VectorVal() ctor right, +# assuming that it didn't automatically Ref the VectorType argument and thus +# leaked that memory. +# +# @TEST-EXEC: HEAP_CHECK_DUMP_DIRECTORY=. HEAPCHECK=local bro -m -b -r $TRACES/ftp-ipv4.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +function myfunc(aa: interval, bb: interval): int + { + if ( aa < bb ) + return -1; + else + return 1; + } + +event new_connection(c: connection) + { + local a = vector( 5, 2, 8, 3 ); + print order(a); + str_split("this is a test string", a); + print addr_to_counts(c$id$orig_h); + } diff --git a/testing/btest/core/mobile-ipv6-home-addr.test b/testing/btest/core/mobile-ipv6-home-addr.test new file mode 100644 index 0000000000..536d381f9b --- /dev/null +++ b/testing/btest/core/mobile-ipv6-home-addr.test @@ -0,0 +1,11 @@ +# @TEST-REQUIRES: grep -q "#define ENABLE_MOBILE_IPV6" $BUILD/config.h +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/ipv6-mobile-hoa.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +# Just check that the orig of the connection is the Home Address, but the +# source in the header is the actual source address. +event new_packet(c: connection, p: pkt_hdr) + { + print c$id; + print p; + } diff --git a/testing/btest/core/mobile-ipv6-routing.test b/testing/btest/core/mobile-ipv6-routing.test new file mode 100644 index 0000000000..6ad5be002d --- /dev/null +++ b/testing/btest/core/mobile-ipv6-routing.test @@ -0,0 +1,11 @@ +# @TEST-REQUIRES: grep -q "#define ENABLE_MOBILE_IPV6" $BUILD/config.h +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/ipv6-mobile-routing.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +# Just check that the responder of the connection is the final routing +# address, but the destination in the header is the actual destination address. +event new_packet(c: connection, p: pkt_hdr) + { + print c$id; + print p; + } diff --git a/testing/btest/core/mobility-checksums.test b/testing/btest/core/mobility-checksums.test new file mode 100644 index 0000000000..8a88eb8194 --- /dev/null +++ b/testing/btest/core/mobility-checksums.test @@ -0,0 +1,15 @@ +# @TEST-REQUIRES: grep -q "#define ENABLE_MOBILE_IPV6" $BUILD/config.h +# @TEST-EXEC: bro -r $TRACES/chksums/mip6-bad-mh-chksum.pcap +# @TEST-EXEC: mv weird.log bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-hoa-tcp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-hoa-udp-bad-chksum.pcap +# @TEST-EXEC: cat weird.log >> bad.out +# @TEST-EXEC: rm weird.log +# @TEST-EXEC: bro -r $TRACES/chksums/mip6-good-mh-chksum.pcap +# @TEST-EXEC: test ! -e weird.log +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-hoa-tcp-good-chksum.pcap +# @TEST-EXEC: test ! -e weird.log +# @TEST-EXEC: bro -r $TRACES/chksums/ip6-hoa-udp-good-chksum.pcap +# @TEST-EXEC: test !
-e weird.log +# @TEST-EXEC: btest-diff bad.out diff --git a/testing/btest/core/mobility_msg.test b/testing/btest/core/mobility_msg.test new file mode 100644 index 0000000000..73461e7944 --- /dev/null +++ b/testing/btest/core/mobility_msg.test @@ -0,0 +1,44 @@ +# @TEST-REQUIRES: grep -q "#define ENABLE_MOBILE_IPV6" $BUILD/config.h +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_back.trace %INPUT >output +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_be.trace %INPUT >>output +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_brr.trace %INPUT >>output +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_bu.trace %INPUT >>output +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_cot.trace %INPUT >>output +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_coti.trace %INPUT >>output +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_hot.trace %INPUT >>output +# @TEST-EXEC: bro -b -r $TRACES/mobile-ipv6/mip6_hoti.trace %INPUT >>output +# @TEST-EXEC: btest-diff output + +event mobile_ipv6_message(p: pkt_hdr) + { + if ( ! p?$ip6 ) return; + + for ( i in p$ip6$exts ) + { + if ( p$ip6$exts[i]$id == IPPROTO_MOBILITY ) + { + if ( ! p$ip6$exts[i]?$mobility ) + print "ERROR: Mobility extension header uninitialized"; + + if ( p$ip6$exts[i]$mobility$mh_type == 0 ) + print "Binding Refresh Request:"; + else if ( p$ip6$exts[i]$mobility$mh_type == 1 ) + print "Home Test Init:"; + else if ( p$ip6$exts[i]$mobility$mh_type == 2 ) + print "Care-of Test Init:"; + else if ( p$ip6$exts[i]$mobility$mh_type == 3 ) + print "Home Test:"; + else if ( p$ip6$exts[i]$mobility$mh_type == 4 ) + print "Care-of Test:"; + else if ( p$ip6$exts[i]$mobility$mh_type == 5 ) + print "Binding Update:"; + else if ( p$ip6$exts[i]$mobility$mh_type == 6 ) + print "Binding ACK:"; + else if ( p$ip6$exts[i]$mobility$mh_type == 7 ) + print "Binding Error:"; + else + print "Unknown Mobility Header:"; + print p$ip6; + } + } + } diff --git a/testing/btest/core/print-bpf-filters-ipv6.bro b/testing/btest/core/print-bpf-filters-ipv6.bro deleted file mode 100644 index 824c979d70..0000000000 --- a/testing/btest/core/print-bpf-filters-ipv6.bro +++ /dev/null @@ -1,12 +0,0 @@ -# @TEST-REQUIRES: bro -e 'print bro_has_ipv6()' | grep -q T -# -# @TEST-EXEC: bro -r $TRACES/empty.trace -e '' >output -# @TEST-EXEC: cat packet_filter.log >>output -# @TEST-EXEC: bro -r $TRACES/empty.trace PacketFilter::all_packets=F >>output -# @TEST-EXEC: cat packet_filter.log >>output -# @TEST-EXEC: bro -r $TRACES/empty.trace -f "port 42" -e '' >>output -# @TEST-EXEC: cat packet_filter.log >>output -# @TEST-EXEC: bro -r $TRACES/empty.trace -C -f "port 56730" -r $TRACES/mixed-vlan-mpls.trace >>output -# @TEST-EXEC: cat packet_filter.log >>output -# @TEST-EXEC: btest-diff output -# @TEST-EXEC: btest-diff conn.log diff --git a/testing/btest/core/print-bpf-filters-ipv4.bro b/testing/btest/core/print-bpf-filters.bro similarity index 89% rename from testing/btest/core/print-bpf-filters-ipv4.bro rename to testing/btest/core/print-bpf-filters.bro index 5a3b0cf7ce..6d9cef0220 100644 --- a/testing/btest/core/print-bpf-filters-ipv4.bro +++ b/testing/btest/core/print-bpf-filters.bro @@ -1,5 +1,3 @@ -# @TEST-REQUIRES: bro -e 'print bro_has_ipv6()' | grep -q F -# # @TEST-EXEC: bro -r $TRACES/empty.trace -e '' >output # @TEST-EXEC: cat packet_filter.log >>output # @TEST-EXEC: bro -r $TRACES/empty.trace PacketFilter::all_packets=F >>output diff --git a/testing/btest/core/truncation.test b/testing/btest/core/truncation.test new file mode 100644 index 0000000000..3406879183 --- /dev/null +++ 
b/testing/btest/core/truncation.test @@ -0,0 +1,22 @@ +# Truncated IP packets should not be analyzed, and should generate a truncated_IP weird + +# @TEST-EXEC: bro -r $TRACES/trunc/ip4-trunc.pcap +# @TEST-EXEC: mv weird.log output +# @TEST-EXEC: bro -r $TRACES/trunc/ip6-trunc.pcap +# @TEST-EXEC: cat weird.log >> output +# @TEST-EXEC: bro -r $TRACES/trunc/ip6-ext-trunc.pcap +# @TEST-EXEC: cat weird.log >> output + +# If an ICMP packet's payload is truncated due to too small snaplen, +# the checksum calculation is bypassed (and Bro doesn't crash, of course). + +# @TEST-EXEC: rm -f weird.log +# @TEST-EXEC: bro -r $TRACES/trunc/icmp-payload-trunc.pcap +# @TEST-EXEC: test ! -e weird.log + +# If an ICMP packet has the ICMP header truncated due to too small snaplen, +# an internally_truncated_header weird gets generated. + +# @TEST-EXEC: bro -r $TRACES/trunc/icmp-header-trunc.pcap +# @TEST-EXEC: cat weird.log >> output +# @TEST-EXEC: btest-diff output diff --git a/testing/btest/core/tunnels/ayiya.test b/testing/btest/core/tunnels/ayiya.test new file mode 100644 index 0000000000..043e06c621 --- /dev/null +++ b/testing/btest/core/tunnels/ayiya.test @@ -0,0 +1,4 @@ +# @TEST-EXEC: bro -r $TRACES/tunnels/ayiya3.trace +# @TEST-EXEC: btest-diff tunnel.log +# @TEST-EXEC: btest-diff conn.log +# @TEST-EXEC: btest-diff http.log diff --git a/testing/btest/core/tunnels/false-teredo.bro b/testing/btest/core/tunnels/false-teredo.bro new file mode 100644 index 0000000000..381478bd54 --- /dev/null +++ b/testing/btest/core/tunnels/false-teredo.bro @@ -0,0 +1,50 @@ +# @TEST-EXEC: bro -r $TRACES/tunnels/false-teredo.pcap %INPUT >output +# @TEST-EXEC: test ! -e weird.log +# @TEST-EXEC: test ! -e dpd.log +# @TEST-EXEC: bro -r $TRACES/tunnels/false-teredo.pcap %INPUT Tunnel::yielding_teredo_decapsulation=F >output +# @TEST-EXEC: btest-diff weird.log +# @TEST-EXEC: test ! -e dpd.log + +# In the first case, there isn't any weird or protocol violation logged +# since the teredo analyzer recognizes that the DNS analyzer has confirmed +# the protocol and yields. + +# In the second case, there are weirds since the teredo analyzer decapsulates +# despite the presence of the confirmed DNS analyzer and the resulting +# inner packets are malformed (no surprise there). There's also no dpd.log +# since the teredo analyzer doesn't confirm until it's seen a valid teredo +# encapsulation in both directions and protocol violations aren't logged +# until there's been a confirmation. + +# In either case, the analyzer doesn't, by default, get disabled as a result +# of the protocol violations.
+ +function print_teredo(name: string, outer: connection, inner: teredo_hdr) + { + print fmt("%s: %s", name, outer$id); + print fmt(" ip6: %s", inner$hdr$ip6); + if ( inner?$auth ) + print fmt(" auth: %s", inner$auth); + if ( inner?$origin ) + print fmt(" origin: %s", inner$origin); + } + +event teredo_packet(outer: connection, inner: teredo_hdr) + { + print_teredo("packet", outer, inner); + } + +event teredo_authentication(outer: connection, inner: teredo_hdr) + { + print_teredo("auth", outer, inner); + } + +event teredo_origin_indication(outer: connection, inner: teredo_hdr) + { + print_teredo("origin", outer, inner); + } + +event teredo_bubble(outer: connection, inner: teredo_hdr) + { + print_teredo("bubble", outer, inner); + } diff --git a/testing/btest/core/tunnels/ip-in-ip.test b/testing/btest/core/tunnels/ip-in-ip.test new file mode 100644 index 0000000000..38f4610445 --- /dev/null +++ b/testing/btest/core/tunnels/ip-in-ip.test @@ -0,0 +1,30 @@ +# @TEST-EXEC: bro -b -r $TRACES/tunnels/6in6.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/tunnels/6in6in6.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/tunnels/6in4.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/tunnels/4in6.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/tunnels/4in4.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: bro -b -r $TRACES/tunnels/6in6-tunnel-change.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: btest-diff output + +event new_connection(c: connection) + { + if ( c?$tunnel ) + { + print "new_connection: tunnel"; + print fmt(" conn_id: %s", c$id); + print fmt(" encap: %s", c$tunnel); + } + else + { + print "new_connection: no tunnel"; + } + } + +event tunnel_changed(c: connection, e: EncapsulatingConnVector) + { + print "tunnel_changed:"; + print fmt(" conn_id: %s", c$id); + if ( c?$tunnel ) + print fmt(" old: %s", c$tunnel); + print fmt(" new: %s", e); + } diff --git a/testing/btest/core/tunnels/ip-tunnel-uid.test b/testing/btest/core/tunnels/ip-tunnel-uid.test new file mode 100644 index 0000000000..f86fd126c9 --- /dev/null +++ b/testing/btest/core/tunnels/ip-tunnel-uid.test @@ -0,0 +1,33 @@ +# @TEST-EXEC: bro -b -r $TRACES/tunnels/ping6-in-ipv4.pcap %INPUT >>output 2>&1 +# @TEST-EXEC: btest-diff output + +event new_connection(c: connection) + { + if ( c?$tunnel ) + { + print "new_connection: tunnel"; + print fmt(" conn_id: %s", c$id); + print fmt(" encap: %s", c$tunnel); + } + else + { + print "new_connection: no tunnel"; + } + } + +event tunnel_changed(c: connection, e: EncapsulatingConnVector) + { + print "tunnel_changed:"; + print fmt(" conn_id: %s", c$id); + if ( c?$tunnel ) + print fmt(" old: %s", c$tunnel); + print fmt(" new: %s", e); + } + +event new_packet(c: connection, p: pkt_hdr) + { + print "NEW_PACKET:"; + print fmt(" %s", c$id); + if ( c?$tunnel ) + print fmt(" %s", c$tunnel); + } diff --git a/testing/btest/core/tunnels/teredo-known-services.test b/testing/btest/core/tunnels/teredo-known-services.test new file mode 100644 index 0000000000..862930758f --- /dev/null +++ b/testing/btest/core/tunnels/teredo-known-services.test @@ -0,0 +1,11 @@ +# @TEST-EXEC: bro -b -r $TRACES/tunnels/false-teredo.pcap base/frameworks/dpd protocols/conn/known-services Tunnel::delay_teredo_confirmation=T "Site::local_nets+={192.168.1.0/24}" +# @TEST-EXEC: test ! 
-e known_services.log +# @TEST-EXEC: bro -b -r $TRACES/tunnels/false-teredo.pcap base/frameworks/dpd protocols/conn/known-services Tunnel::delay_teredo_confirmation=F "Site::local_nets+={192.168.1.0/24}" +# @TEST-EXEC: btest-diff known_services.log + +# The first case using Tunnel::delay_teredo_confirmation=T doesn't produce +# a known_services.log since a valid Teredo encapsulation from both endpoints +# of a connection is never witnessed and a protocol_confirmation is never issued. + +# The second case issues protocol_confirmations more hastily and so bogus +# entries in known_services.log are more likely to appear. diff --git a/testing/btest/core/tunnels/teredo.bro b/testing/btest/core/tunnels/teredo.bro new file mode 100644 index 0000000000..c457decd98 --- /dev/null +++ b/testing/btest/core/tunnels/teredo.bro @@ -0,0 +1,35 @@ +# @TEST-EXEC: bro -r $TRACES/tunnels/Teredo.pcap %INPUT >output +# @TEST-EXEC: btest-diff output +# @TEST-EXEC: btest-diff tunnel.log +# @TEST-EXEC: btest-diff conn.log +# @TEST-EXEC: btest-diff http.log + +function print_teredo(name: string, outer: connection, inner: teredo_hdr) + { + print fmt("%s: %s", name, outer$id); + print fmt(" ip6: %s", inner$hdr$ip6); + if ( inner?$auth ) + print fmt(" auth: %s", inner$auth); + if ( inner?$origin ) + print fmt(" origin: %s", inner$origin); + } + +event teredo_packet(outer: connection, inner: teredo_hdr) + { + print_teredo("packet", outer, inner); + } + +event teredo_authentication(outer: connection, inner: teredo_hdr) + { + print_teredo("auth", outer, inner); + } + +event teredo_origin_indication(outer: connection, inner: teredo_hdr) + { + print_teredo("origin", outer, inner); + } + +event teredo_bubble(outer: connection, inner: teredo_hdr) + { + print_teredo("bubble", outer, inner); + } diff --git a/testing/btest/core/tunnels/teredo_bubble_with_payload.test b/testing/btest/core/tunnels/teredo_bubble_with_payload.test new file mode 100644 index 0000000000..f45d8ca585 --- /dev/null +++ b/testing/btest/core/tunnels/teredo_bubble_with_payload.test @@ -0,0 +1,36 @@ +# @TEST-EXEC: bro -r $TRACES/tunnels/teredo_bubble_with_payload.pcap %INPUT >output +# @TEST-EXEC: btest-diff output +# @TEST-EXEC: btest-diff tunnel.log +# @TEST-EXEC: btest-diff conn.log +# @TEST-EXEC: btest-diff http.log +# @TEST-EXEC: btest-diff weird.log + +function print_teredo(name: string, outer: connection, inner: teredo_hdr) + { + print fmt("%s: %s", name, outer$id); + print fmt(" ip6: %s", inner$hdr$ip6); + if ( inner?$auth ) + print fmt(" auth: %s", inner$auth); + if ( inner?$origin ) + print fmt(" origin: %s", inner$origin); + } + +event teredo_packet(outer: connection, inner: teredo_hdr) + { + print_teredo("packet", outer, inner); + } + +event teredo_authentication(outer: connection, inner: teredo_hdr) + { + print_teredo("auth", outer, inner); + } + +event teredo_origin_indication(outer: connection, inner: teredo_hdr) + { + print_teredo("origin", outer, inner); + } + +event teredo_bubble(outer: connection, inner: teredo_hdr) + { + print_teredo("bubble", outer, inner); + } diff --git a/testing/btest/coverage/bare-mode-errors.test b/testing/btest/coverage/bare-mode-errors.test index 9fd18308ce..894c9e67f4 100644 --- a/testing/btest/coverage/bare-mode-errors.test +++ b/testing/btest/coverage/bare-mode-errors.test @@ -5,7 +5,10 @@ # Commonly, this test may fail if one forgets to @load some base/ scripts # when writing a new bro scripts.
# +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: test -d $DIST/scripts -# @TEST-EXEC: for script in `find $DIST/scripts -name \*\.bro -not -path '*/site/*'`; do echo $script; if echo "$script" | egrep -q 'communication/listen|controllee'; then rm -rf load_attempt .bgprocs; btest-bg-run load_attempt bro -b $script; btest-bg-wait -k 2; cat load_attempt/.stderr >>allerrors; else bro -b $script 2>>allerrors; fi done || exit 0 +# @TEST-EXEC: for script in `find $DIST/scripts/ -name \*\.bro -not -path '*/site/*'`; do echo $script; if echo "$script" | egrep -q 'communication/listen|controllee'; then rm -rf load_attempt .bgprocs; btest-bg-run load_attempt bro -b $script; btest-bg-wait -k 2; cat load_attempt/.stderr >>allerrors; else bro -b $script 2>>allerrors; fi done || exit 0 # @TEST-EXEC: cat allerrors | grep -v "received termination signal" | sort | uniq > unique_errors -# @TEST-EXEC: btest-diff unique_errors +# @TEST-EXEC: if [ $(grep -c LibCURL_INCLUDE_DIR-NOTFOUND $BUILD/CMakeCache.txt) -ne 0 ]; then cp unique_errors unique_errors_no_elasticsearch; fi +# @TEST-EXEC: if [ $(grep -c LibCURL_INCLUDE_DIR-NOTFOUND $BUILD/CMakeCache.txt) -ne 0 ]; then btest-diff unique_errors_no_elasticsearch; else btest-diff unique_errors; fi diff --git a/testing/btest/coverage/coverage-blacklist.bro b/testing/btest/coverage/coverage-blacklist.bro new file mode 100644 index 0000000000..30a5f86efa --- /dev/null +++ b/testing/btest/coverage/coverage-blacklist.bro @@ -0,0 +1,29 @@ +# @TEST-EXEC: BRO_PROFILER_FILE=coverage bro -b %INPUT +# @TEST-EXEC: grep %INPUT coverage | sort -k2 >output +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff output + +print "first"; + +if ( F ) + { # @no-test + print "hello"; + print "world"; + } + +print "cover me"; + +if ( T ) + { + print "always executed"; + } + +print "don't cover me"; # @no-test + +if ( 0 + 0 == 1 ) print "impossible"; # @no-test + +if ( 1 == 0 ) + { + print "also impossible, but included in code coverage analysis"; + } + +print "success"; diff --git a/testing/btest/coverage/doc.test b/testing/btest/coverage/doc.test index 18ed13e6fa..074e397d88 100644 --- a/testing/btest/coverage/doc.test +++ b/testing/btest/coverage/doc.test @@ -1,7 +1,8 @@ # This tests that we're generating bro script documentation for all the # available bro scripts. If this fails, then the genDocSources.sh needs # to be run to produce a new DocSourcesList.cmake or genDocSources.sh needs -# to be updated to blacklist undesired scripts. +# to be updated to blacklist undesired scripts. 
To update, run +# "make update-doc-sources" # # @TEST-EXEC: $DIST/doc/scripts/genDocSourcesList.sh # @TEST-EXEC: cmp $DIST/doc/scripts/DocSourcesList.cmake ./DocSourcesList.cmake diff --git a/testing/btest/istate/bro-ipv6-socket.bro b/testing/btest/istate/bro-ipv6-socket.bro new file mode 100644 index 0000000000..305f32caab --- /dev/null +++ b/testing/btest/istate/bro-ipv6-socket.bro @@ -0,0 +1,56 @@ +# @TEST-SERIALIZE: comm +# +# @TEST-REQUIRES: ifconfig | grep -q -E "inet6 ::1|inet6 addr: ::1" +# +# @TEST-EXEC: btest-bg-run recv bro -b ../recv.bro +# @TEST-EXEC: btest-bg-run send bro -b ../send.bro +# @TEST-EXEC: btest-bg-wait 20 +# +# @TEST-EXEC: btest-diff recv/.stdout +# @TEST-EXEC: btest-diff send/.stdout + +@TEST-START-FILE send.bro + +@load base/frameworks/communication + +redef Communication::nodes += { + ["foo"] = [$host=[::1], $connect=T, $retry=1sec, $events=/my_event/] +}; + +global my_event: event(s: string); + +event remote_connection_handshake_done(p: event_peer) + { + print fmt("handshake done with peer: %s", p$host); + } + +event my_event(s: string) + { + print fmt("my_event: %s", s); + terminate(); + } + +@TEST-END-FILE + +############# + +@TEST-START-FILE recv.bro + +@load frameworks/communication/listen + +redef Communication::listen_ipv6=T; + +global my_event: event(s: string); + +event remote_connection_handshake_done(p: event_peer) + { + print fmt("handshake done with peer: %s", p$host); + event my_event("hello world"); + } + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +@TEST-END-FILE diff --git a/testing/btest/istate/broccoli-ipv6-socket.bro b/testing/btest/istate/broccoli-ipv6-socket.bro new file mode 100644 index 0000000000..be6266fdec --- /dev/null +++ b/testing/btest/istate/broccoli-ipv6-socket.bro @@ -0,0 +1,11 @@ +# @TEST-SERIALIZE: comm +# +# @TEST-REQUIRES: test -e $BUILD/aux/broccoli/src/libbroccoli.so || test -e $BUILD/aux/broccoli/src/libbroccoli.dylib +# @TEST-REQUIRES: ifconfig | grep -q -E "inet6 ::1|inet6 addr: ::1" +# +# @TEST-EXEC: btest-bg-run bro bro $DIST/aux/broccoli/test/broccoli-v6addrs.bro "Communication::listen_ipv6=T" +# @TEST-EXEC: sleep 1 +# @TEST-EXEC: btest-bg-run broccoli $BUILD/aux/broccoli/test/broccoli-v6addrs -6 ::1 +# @TEST-EXEC: btest-bg-wait 20 +# @TEST-EXEC: btest-diff bro/.stdout +# @TEST-EXEC: btest-diff broccoli/.stdout diff --git a/testing/btest/istate/broccoli-ipv6.bro b/testing/btest/istate/broccoli-ipv6.bro new file mode 100644 index 0000000000..b4fdfb5fcf --- /dev/null +++ b/testing/btest/istate/broccoli-ipv6.bro @@ -0,0 +1,10 @@ +# @TEST-SERIALIZE: comm +# +# @TEST-REQUIRES: test -e $BUILD/aux/broccoli/src/libbroccoli.so || test -e $BUILD/aux/broccoli/src/libbroccoli.dylib +# +# @TEST-EXEC: btest-bg-run bro bro $DIST/aux/broccoli/test/broccoli-v6addrs.bro +# @TEST-EXEC: sleep 1 +# @TEST-EXEC: btest-bg-run broccoli $BUILD/aux/broccoli/test/broccoli-v6addrs +# @TEST-EXEC: btest-bg-wait 20 +# @TEST-EXEC: btest-diff bro/.stdout +# @TEST-EXEC: btest-diff broccoli/.stdout diff --git a/testing/btest/istate/broccoli-ssl.bro b/testing/btest/istate/broccoli-ssl.bro new file mode 100644 index 0000000000..dcbea93150 --- /dev/null +++ b/testing/btest/istate/broccoli-ssl.bro @@ -0,0 +1,69 @@ +# @TEST-SERIALIZE: comm +# +# @TEST-REQUIRES: test -e $BUILD/aux/broccoli/src/libbroccoli.so || test -e $BUILD/aux/broccoli/src/libbroccoli.dylib +# +# @TEST-EXEC: chmod 600 broccoli.conf +# @TEST-EXEC: btest-bg-run bro bro $DIST/aux/broccoli/test/broccoli-v6addrs.bro "Communication::listen_ssl=T" 
"ssl_ca_certificate=../ca_cert.pem" "ssl_private_key=../bro.pem" +# @TEST-EXEC: sleep 1 +# @TEST-EXEC: btest-bg-run broccoli BROCCOLI_CONFIG_FILE=../broccoli.conf $BUILD/aux/broccoli/test/broccoli-v6addrs +# @TEST-EXEC: btest-bg-wait 20 +# @TEST-EXEC: btest-diff bro/.stdout +# @TEST-EXEC: btest-diff broccoli/.stdout + +@TEST-START-FILE broccoli.conf +/broccoli/use_ssl yes +/broccoli/ca_cert ../ca_cert.pem +/broccoli/host_cert ../bro.pem +/broccoli/host_key ../bro.pem +@TEST-END-FILE + +@TEST-START-FILE bro.pem +-----BEGIN RSA PRIVATE KEY----- +MIICXgIBAAKBgQD17FE8UVaO224Y8UL2bH1okCYxr5dVytTQ93uE5J9caGADzPZe +qYPuvtPt9ivhBtf2L9odK7unQU60v6RsO3bb9bQktQbEdh0FEjnso2UHe/nLreYn +VyLCEp9Sh1OFQnMhJNYuzNwVzWOqH/TYNy3ODueZTS4YBsRyEkpEfgeoaQIDAQAB +AoGAJ/S1Xi94+Mz+Hl9UmeUWmx6QlhIJbI7/9NPA5d6fZcwvjW6HuOmh3fBzTn5o +sq8B96Xesk6gtpQNzaA1fsBKlzDSpGRDVg2odN9vIT3jd0Dub2F47JHdFCqtMUIV +rCsO+fpGtavv1zJ/rzlJz7rx4cRP+/Gwd5YlH0q5cFuHhAECQQD9q328Ye4A7o2e +cLOhzuWUZszqdIY7ZTgDtk06F57VrjLVERrZjrtAwbs77m+ybw4pDKKU7H5inhQQ +03PU40ARAkEA+C6cCM6E4hRwuR+QyIqpNC4CzgPaKlF+VONZLYYvHEwFvx2/EPtX +zOZdE4HdJwnXBYx7+AGFeq8uHhrN2Tq62QJBAMory2JAinejqKsGF6R2SPMlm1ug +0vqziRksShBqkuSqmUjHASczYnoR7S+usMb9S8PblhgrA++FHWjrnf2lwIECQQCj ++/AfpY2J8GWW/HNm/q/UiX5S75qskZI+tsXK3bmtIdI+OIJxzxFxktj3NbyRud+4 +i92xvhebO7rmK2HOYg7pAkEA2wrwY1E237twoYXuUInv9F9kShKLQs19nup/dfmF +xfoVqYjJwidzPfgngowJZij7SoTaIBKv/fKp5Tq6xW3AEg== +-----END RSA PRIVATE KEY----- +-----BEGIN CERTIFICATE----- +MIICZDCCAc2gAwIBAgIJAKoxR9yFGsk8MA0GCSqGSIb3DQEBBQUAMCsxKTAnBgNV +BAMTIEJybyBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MCAXDTExMDYxNTIx +MjgxNVoYDzIxMTEwNTIyMjEyODE1WjArMSkwJwYDVQQDEyBCcm8gUm9vdCBDZXJ0 +aWZpY2F0aW9uIEF1dGhvcml0eTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA +9exRPFFWjttuGPFC9mx9aJAmMa+XVcrU0Pd7hOSfXGhgA8z2XqmD7r7T7fYr4QbX +9i/aHSu7p0FOtL+kbDt22/W0JLUGxHYdBRI57KNlB3v5y63mJ1ciwhKfUodThUJz +ISTWLszcFc1jqh/02Dctzg7nmU0uGAbEchJKRH4HqGkCAwEAAaOBjTCBijAdBgNV +HQ4EFgQU2vIsKYuGhHP8c7GeJLfWAjbKCFgwWwYDVR0jBFQwUoAU2vIsKYuGhHP8 +c7GeJLfWAjbKCFihL6QtMCsxKTAnBgNVBAMTIEJybyBSb290IENlcnRpZmljYXRp +b24gQXV0aG9yaXR5ggkAqjFH3IUayTwwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0B +AQUFAAOBgQAF2oceL61dA7WxA9lxcxsA/Fccr7+J6sO+pLXoZtx5tpknEuIUebkm +UfMGAiyYIenHi8u0Sia8KrIfuCDc2dG3DYmfX7/faCEbtSx8KtNQFIs3aXr1zhsw +3sX9fLS0gp/qHoPMuhbhlvTlMFSE/Mih3KDsZEGcifzI6ooLF0YP5A== +-----END CERTIFICATE----- +@TEST-END-FILE + +@TEST-START-FILE ca_cert.pem +-----BEGIN CERTIFICATE----- +MIICZDCCAc2gAwIBAgIJAKoxR9yFGsk8MA0GCSqGSIb3DQEBBQUAMCsxKTAnBgNV +BAMTIEJybyBSb290IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MCAXDTExMDYxNTIx +MjgxNVoYDzIxMTEwNTIyMjEyODE1WjArMSkwJwYDVQQDEyBCcm8gUm9vdCBDZXJ0 +aWZpY2F0aW9uIEF1dGhvcml0eTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA +9exRPFFWjttuGPFC9mx9aJAmMa+XVcrU0Pd7hOSfXGhgA8z2XqmD7r7T7fYr4QbX +9i/aHSu7p0FOtL+kbDt22/W0JLUGxHYdBRI57KNlB3v5y63mJ1ciwhKfUodThUJz +ISTWLszcFc1jqh/02Dctzg7nmU0uGAbEchJKRH4HqGkCAwEAAaOBjTCBijAdBgNV +HQ4EFgQU2vIsKYuGhHP8c7GeJLfWAjbKCFgwWwYDVR0jBFQwUoAU2vIsKYuGhHP8 +c7GeJLfWAjbKCFihL6QtMCsxKTAnBgNVBAMTIEJybyBSb290IENlcnRpZmljYXRp +b24gQXV0aG9yaXR5ggkAqjFH3IUayTwwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0B +AQUFAAOBgQAF2oceL61dA7WxA9lxcxsA/Fccr7+J6sO+pLXoZtx5tpknEuIUebkm +UfMGAiyYIenHi8u0Sia8KrIfuCDc2dG3DYmfX7/faCEbtSx8KtNQFIs3aXr1zhsw +3sX9fLS0gp/qHoPMuhbhlvTlMFSE/Mih3KDsZEGcifzI6ooLF0YP5A== +-----END CERTIFICATE----- +@TEST-END-FILE diff --git a/testing/btest/istate/broccoli.bro b/testing/btest/istate/broccoli.bro index 7f97f40585..2fdd4cbda4 100644 --- a/testing/btest/istate/broccoli.bro +++ b/testing/btest/istate/broccoli.bro @@ -1,9 +1,11 @@ -# @TEST-REQUIRES: grep -vq '#define 
BROv6' $BUILD/config.h +# @TEST-SERIALIZE: comm +# # @TEST-REQUIRES: test -e $BUILD/aux/broccoli/src/libbroccoli.so || test -e $BUILD/aux/broccoli/src/libbroccoli.dylib # # @TEST-EXEC: btest-bg-run bro bro %INPUT $DIST/aux/broccoli/test/broping-record.bro +# @TEST-EXEC: sleep 1 # @TEST-EXEC: btest-bg-run broccoli $BUILD/aux/broccoli/test/broping -r -c 3 127.0.0.1 -# @TEST-EXEC: btest-bg-wait -k 20 +# @TEST-EXEC: btest-bg-wait 20 # @TEST-EXEC: cat bro/ping.log | sed 's/one-way.*//g' >bro.log # @TEST-EXEC: cat broccoli/.stdout | sed 's/time=.*//g' >broccoli.log # @TEST-EXEC: btest-diff bro.log diff --git a/testing/btest/istate/events-ssl.bro b/testing/btest/istate/events-ssl.bro index 5a41e1683b..1d285869b4 100644 --- a/testing/btest/istate/events-ssl.bro +++ b/testing/btest/istate/events-ssl.bro @@ -1,14 +1,20 @@ -# +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run sender bro -C -r $TRACES/web.trace --pseudo-realtime ../sender.bro # @TEST-EXEC: btest-bg-run receiver bro ../receiver.bro -# @TEST-EXEC: btest-bg-wait -k 20 +# @TEST-EXEC: btest-bg-wait 20 # # @TEST-EXEC: btest-diff sender/http.log # @TEST-EXEC: btest-diff receiver/http.log -# @TEST-EXEC: cmp sender/http.log receiver/http.log # -# @TEST-EXEC: bro -x sender/events.bst http/base | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' >events.snd.log -# @TEST-EXEC: bro -x receiver/events.bst http/base | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' >events.rec.log +# @TEST-EXEC: cat sender/http.log | $SCRIPTS/diff-remove-timestamps >sender.http.log +# @TEST-EXEC: cat receiver/http.log | $SCRIPTS/diff-remove-timestamps >receiver.http.log +# @TEST-EXEC: cmp sender.http.log receiver.http.log +# +# @TEST-EXEC: bro -x sender/events.bst | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' | $SCRIPTS/diff-remove-timestamps >events.snd.log +# @TEST-EXEC: bro -x receiver/events.bst | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' | $SCRIPTS/diff-remove-timestamps >events.rec.log +# @TEST-EXEC: btest-diff events.rec.log +# @TEST-EXEC: btest-diff events.snd.log # @TEST-EXEC: cmp events.rec.log events.snd.log # # We don't compare the transmitted event paramerters anymore. 
With the dynamic @@ -49,7 +55,7 @@ event bro_init() redef peer_description = "events-rcv"; redef Communication::nodes += { - ["foo"] = [$host = 127.0.0.1, $events = /http_.*|signature_match/, $connect=T, $ssl=T] + ["foo"] = [$host = 127.0.0.1, $events = /http_.*|signature_match/, $connect=T, $ssl=T, $retry=1sec] }; redef ssl_ca_certificate = "../ca_cert.pem"; diff --git a/testing/btest/istate/events.bro b/testing/btest/istate/events.bro index a0dc494ced..590aabcd23 100644 --- a/testing/btest/istate/events.bro +++ b/testing/btest/istate/events.bro @@ -1,14 +1,20 @@ -# -# @TEST-EXEC: btest-bg-run sender bro -C -r $TRACES/web.trace --pseudo-realtime ../sender.bro -# @TEST-EXEC: btest-bg-run receiver bro ../receiver.bro -# @TEST-EXEC: btest-bg-wait -k 20 +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run sender bro -Bthreading,logging,comm -C -r $TRACES/web.trace --pseudo-realtime ../sender.bro +# @TEST-EXEC: btest-bg-run receiver bro -Bthreading,logging,comm ../receiver.bro +# @TEST-EXEC: btest-bg-wait 20 # # @TEST-EXEC: btest-diff sender/http.log # @TEST-EXEC: btest-diff receiver/http.log -# @TEST-EXEC: cmp sender/http.log receiver/http.log # -# @TEST-EXEC: bro -x sender/events.bst | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' >events.snd.log -# @TEST-EXEC: bro -x receiver/events.bst | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' >events.rec.log +# @TEST-EXEC: cat sender/http.log | $SCRIPTS/diff-remove-timestamps >sender.http.log +# @TEST-EXEC: cat receiver/http.log | $SCRIPTS/diff-remove-timestamps >receiver.http.log +# @TEST-EXEC: cmp sender.http.log receiver.http.log +# +# @TEST-EXEC: bro -x sender/events.bst | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' | $SCRIPTS/diff-remove-timestamps >events.snd.log +# @TEST-EXEC: bro -x receiver/events.bst | sed 's/^Event \[[-0-9.]*\] //g' | grep '^http_' | grep -v http_stats | sed 's/(.*$//g' | $SCRIPTS/diff-remove-timestamps >events.rec.log +# @TEST-EXEC: btest-diff events.rec.log +# @TEST-EXEC: btest-diff events.snd.log # @TEST-EXEC: cmp events.rec.log events.snd.log # # We don't compare the transmitted event paramerters anymore. 
With the dynamic @@ -44,7 +50,7 @@ event bro_init() redef peer_description = "events-rcv"; redef Communication::nodes += { - ["foo"] = [$host = 127.0.0.1, $events = /http_.*|signature_match/, $connect=T] + ["foo"] = [$host = 127.0.0.1, $events = /http_.*|signature_match/, $connect=T, $retry=1sec] }; event remote_connection_closed(p: event_peer) diff --git a/testing/btest/istate/pybroccoli.py b/testing/btest/istate/pybroccoli.py index b7fb53a955..9f26efca31 100644 --- a/testing/btest/istate/pybroccoli.py +++ b/testing/btest/istate/pybroccoli.py @@ -1,4 +1,5 @@ -# @TEST-REQUIRES: grep -vq '#define BROv6' $BUILD/config.h +# @TEST-SERIALIZE: comm +# # @TEST-REQUIRES: test -e $BUILD/aux/broccoli/src/libbroccoli.so || test -e $BUILD/aux/broccoli/src/libbroccoli.dylib # @TEST-REQUIRES: test -e $BUILD/aux/broccoli/bindings/broccoli-python/_broccoli_intern.so # diff --git a/testing/btest/istate/sync.bro b/testing/btest/istate/sync.bro index 1ccdc5538c..e1364a9553 100644 --- a/testing/btest/istate/sync.bro +++ b/testing/btest/istate/sync.bro @@ -1,3 +1,4 @@ +# @TEST-SERIALIZE: comm # # @TEST-EXEC: btest-bg-run sender bro %INPUT ../sender.bro # @TEST-EXEC: btest-bg-run receiver bro %INPUT ../receiver.bro @@ -153,7 +154,8 @@ event bro_init() } redef Communication::nodes += { - ["foo"] = [$host = 127.0.0.1, $events = /.*/, $connect=T, $sync=T] + ["foo"] = [$host = 127.0.0.1, $events = /.*/, $connect=T, $sync=T, + $retry=1sec] }; event remote_connection_closed(p: event_peer) diff --git a/testing/btest/language/addr.bro b/testing/btest/language/addr.bro new file mode 100644 index 0000000000..1cd93bad03 --- /dev/null +++ b/testing/btest/language/addr.bro @@ -0,0 +1,47 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +event bro_init() +{ + # IPv4 addresses + local a1: addr = 0.0.0.0; + local a2: addr = 10.0.0.11; + local a3: addr = 255.255.255.255; + local a4 = 192.1.2.3; + + test_case( "IPv4 address inequality", a1 != a2 ); + test_case( "IPv4 address equality", a1 == 0.0.0.0 ); + test_case( "IPv4 address comparison", a1 < a2 ); + test_case( "IPv4 address comparison", a3 > a2 ); + test_case( "size of IPv4 address", |a1| == 32 ); + test_case( "IPv4 address type inference", type_name(a4) == "addr" ); + + # IPv6 addresses + local b1: addr = [::]; + local b2: addr = [::255.255.255.255]; + local b3: addr = [::ffff:ffff]; + local b4: addr = [ffff::ffff]; + local b5: addr = [0000:0000:0000:0000:0000:0000:0000:0000]; + local b6: addr = [aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222]; + local b7: addr = [AAAA:BBBB:CCCC:DDDD:EEEE:FFFF:1111:2222]; + local b8 = [a::b]; + + test_case( "IPv6 address inequality", b1 != b2 ); + test_case( "IPv6 address equality", b1 == b5 ); + test_case( "IPv6 address equality", b2 == b3 ); + test_case( "IPv6 address comparison", b1 < b2 ); + test_case( "IPv6 address comparison", b4 > b2 ); + test_case( "IPv6 address not case-sensitive", b6 == b7 ); + test_case( "size of IPv6 address", |b1| == 128 ); + test_case( "IPv6 address type inference", type_name(b8) == "addr" ); + + test_case( "IPv4 and IPv6 address inequality", a1 != b1 ); + +} + diff --git a/testing/btest/language/any.bro b/testing/btest/language/any.bro new file mode 100644 index 0000000000..7437ee9851 --- /dev/null +++ b/testing/btest/language/any.bro @@ -0,0 +1,40 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +function anyarg(arg1: any, arg1type: string) + { + test_case( arg1type, type_name(arg1) == arg1type ); + } + +event bro_init() +{ + local any1: any = 5; + local any2: any = "bar"; + local any3: any = /bar/; + + # Test using variable of type "any" + + anyarg( any1, "count" ); + anyarg( any2, "string" ); + anyarg( any3, "pattern" ); + + # Test of other types + + anyarg( T, "bool" ); + anyarg( "foo", "string" ); + anyarg( 15, "count" ); + anyarg( +15, "int" ); + anyarg( 15.0, "double" ); + anyarg( /foo/, "pattern" ); + anyarg( 127.0.0.1, "addr" ); + anyarg( [::1], "addr" ); + anyarg( 127.0.0.1/16, "subnet" ); + anyarg( [ffff::1]/64, "subnet" ); + anyarg( 123/tcp, "port" ); +} + diff --git a/testing/btest/language/at-if.bro b/testing/btest/language/at-if.bro new file mode 100644 index 0000000000..979ed0bb9a --- /dev/null +++ b/testing/btest/language/at-if.bro @@ -0,0 +1,49 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +event bro_init() +{ + local xyz = 0; + + # Test "if" without "else" + + @if ( F ) + xyz += 1; + @endif + + @if ( T ) + xyz += 2; + @endif + + test_case( "@if", xyz == 2 ); + + # Test "if" with an "else" + + xyz = 0; + + @if ( F ) + xyz += 1; + @else + xyz += 2; + @endif + + test_case( "@if...@else", xyz == 2 ); + + xyz = 0; + + @if ( T ) + xyz += 1; + @else + xyz += 2; + @endif + + test_case( "@if...@else", xyz == 1 ); + +} + diff --git a/testing/btest/language/at-ifdef.bro b/testing/btest/language/at-ifdef.bro new file mode 100644 index 0000000000..c30236f204 --- /dev/null +++ b/testing/btest/language/at-ifdef.bro @@ -0,0 +1,50 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +global thisisdefined = 123; + +event bro_init() +{ + local xyz = 0; + + # Test "ifdef" without "else" + + @ifdef ( notdefined ) + xyz += 1; + @endif + + @ifdef ( thisisdefined ) + xyz += 2; + @endif + + test_case( "@ifdef", xyz == 2 ); + + # Test "ifdef" with an "else" + + xyz = 0; + + @ifdef ( doesnotexist ) + xyz += 1; + @else + xyz += 2; + @endif + + test_case( "@ifdef...@else", xyz == 2 ); + + xyz = 0; + + @ifdef ( thisisdefined ) + xyz += 1; + @else + xyz += 2; + @endif + + test_case( "@ifdef...@else", xyz == 1 ); + +} + diff --git a/testing/btest/language/at-ifndef.bro b/testing/btest/language/at-ifndef.bro new file mode 100644 index 0000000000..c98287590f --- /dev/null +++ b/testing/btest/language/at-ifndef.bro @@ -0,0 +1,50 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +global thisisdefined = 123; + +event bro_init() +{ + local xyz = 0; + + # Test "ifndef" without "else" + + @ifndef ( notdefined ) + xyz += 1; + @endif + + @ifndef ( thisisdefined ) + xyz += 2; + @endif + + test_case( "@ifndef", xyz == 1 ); + + # Test "ifndef" with an "else" + + xyz = 0; + + @ifndef ( doesnotexist ) + xyz += 1; + @else + xyz += 2; + @endif + + test_case( "@ifndef...@else", xyz == 1 ); + + xyz = 0; + + @ifndef ( thisisdefined ) + xyz += 1; + @else + xyz += 2; + @endif + + test_case( "@ifndef...@else", xyz == 2 ); + +} + diff --git a/testing/btest/language/at-load.bro b/testing/btest/language/at-load.bro new file mode 100644 index 0000000000..b51594be16 --- /dev/null +++ b/testing/btest/language/at-load.bro @@ -0,0 +1,43 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +# In this script, we try to access each object defined in a "@load"ed script + +@load secondtestfile + +event bro_init() +{ + test_case( "function", T ); + test_case( "global variable", num == 123 ); + test_case( "const", daysperyear == 365 ); + event testevent( "foo" ); +} + + +# @TEST-START-FILE secondtestfile + +# In this script, we define some objects to be used in another script + +# Note: this script is not listed on the bro command-line (instead, it +# is "@load"ed from the other script) + +global test_case: function(msg: string, expect: bool); + +global testevent: event(msg: string); + +global num: count = 123; + +const daysperyear: count = 365; + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + +event testevent(msg: string) + { + test_case( "event", T ); + } + +# @TEST-END-FILE + diff --git a/testing/btest/language/bool.bro b/testing/btest/language/bool.bro new file mode 100644 index 0000000000..b75343025f --- /dev/null +++ b/testing/btest/language/bool.bro @@ -0,0 +1,29 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + + +event bro_init() +{ + local b1: bool = T; + local b2: bool = F; + local b3: bool = T; + local b4 = T; + local b5 = F; + + test_case( "equality operator", b1 == b3 ); + test_case( "inequality operator", b1 != b2 ); + test_case( "logical or operator", b1 || b2 ); + test_case( "logical and operator", b1 && b3 ); + test_case( "negation operator", !b2 ); + test_case( "absolute value", |b1| == 1 ); + test_case( "absolute value", |b2| == 0 ); + test_case( "type inference", type_name(b4) == "bool" ); + test_case( "type inference", type_name(b5) == "bool" ); + +} + diff --git a/testing/btest/language/conditional-expression.bro b/testing/btest/language/conditional-expression.bro new file mode 100644 index 0000000000..74648b6ce8 --- /dev/null +++ b/testing/btest/language/conditional-expression.bro @@ -0,0 +1,66 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +global ct: count; + +function f1(): bool + { + ct += 1; + return T; + } + +function f2(): bool + { + ct += 4; + return F; + } + + +event bro_init() +{ + local a: count; + local b: count; + local res: count; + local res2: bool; + + # Test that the correct operand is evaluated + + a = b = 0; + res = T ? ++a : ++b; + test_case( "true condition", a == 1 && b == 0 && res == 1); + + a = b = 0; + res = F ? ++a : ++b; + test_case( "false condition", a == 0 && b == 1 && res == 1); + + # Test again using function calls as operands + + ct = 0; + res2 = ct == 0 ? f1() : f2(); + test_case( "true condition", ct == 1 && res2 == T); + + ct = 0; + res2 = ct != 0 ? f1() : f2(); + test_case( "false condition", ct == 4 && res2 == F); + + # Test that the conditional operator is right-associative + + ct = 0; + T ? f1() : T ? f1() : f2(); + test_case( "associativity", ct == 1 ); + + ct = 0; + T ? f1() : (T ? f1() : f2()); + test_case( "associativity", ct == 1 ); + + ct = 0; + (T ? f1() : T) ? f1() : f2(); + test_case( "associativity", ct == 2 ); + +} + diff --git a/testing/btest/language/copy.bro b/testing/btest/language/copy.bro new file mode 100644 index 0000000000..6740a080c7 --- /dev/null +++ b/testing/btest/language/copy.bro @@ -0,0 +1,30 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + + +event bro_init() +{ + # "b" is not a copy of "a" + local a: set[string] = set("this", "test"); + local b: set[string] = a; + + delete a["this"]; + + test_case( "direct assignment", |b| == 1 && "this" !in b ); + + # "d" is a copy of "c" + local c: set[string] = set("this", "test"); + local d: set[string] = copy(c); + + delete c["this"]; + + test_case( "using copy", |d| == 2 && "this" in d); + +} + diff --git a/testing/btest/language/count.bro b/testing/btest/language/count.bro new file mode 100644 index 0000000000..d6dcf5a97e --- /dev/null +++ b/testing/btest/language/count.bro @@ -0,0 +1,59 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + + +event bro_init() +{ + local c1: count = 0; + local c2: count = 5; + local c3: count = 0xFF; + local c4: count = 255; + local c5: count = 18446744073709551615; # maximum allowed value + local c6: count = 0xffffffffffffffff; # maximum allowed value + local c7: counter = 5; + local c8 = 1; + + # Type inference test + + test_case( "type inference", type_name(c8) == "count" ); + + # Counter alias test + + test_case( "counter alias", c2 == c7 ); + + # Test various constant representations + + test_case( "hexadecimal", c3 == c4 ); + + # Operator tests + + test_case( "inequality operator", c1 != c2 ); + test_case( "relational operator", c1 < c2 ); + test_case( "relational operator", c1 <= c2 ); + test_case( "relational operator", c2 > c1 ); + test_case( "relational operator", c2 >= c1 ); + test_case( "absolute value", |c1| == 0 ); + test_case( "absolute value", |c2| == 5 ); + test_case( "pre-increment operator", ++c2 == 6 ); + test_case( "pre-decrement operator", --c2 == 5 ); + test_case( "modulus operator", c2%2 == 1 ); + test_case( "division operator", c2/2 == 2 ); + c2 += 3; + test_case( "assignment operator", c2 == 8 ); + c2 -= 2; + test_case( "assignment operator", c2 == 6 ); + + # Max. value tests + + local str1 = fmt("max count value = %d", c5); + test_case( str1, str1 == "max count value = 18446744073709551615" ); + local str2 = fmt("max count value = %d", c6); + test_case( str2, str2 == "max count value = 18446744073709551615" ); + +} + diff --git a/testing/btest/language/double.bro b/testing/btest/language/double.bro new file mode 100644 index 0000000000..62ca768e22 --- /dev/null +++ b/testing/btest/language/double.bro @@ -0,0 +1,79 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +event bro_init() +{ + local d1: double = 3; + local d2: double = +3; + local d3: double = 3.; + local d4: double = 3.0; + local d5: double = +3.0; + local d6: double = 3e0; + local d7: double = 3E0; + local d8: double = 3e+0; + local d9: double = 3e-0; + local d10: double = 3.0e0; + local d11: double = +3.0e0; + local d12: double = +3.0e+0; + local d13: double = +3.0E+0; + local d14: double = +3.0E-0; + local d15: double = .03E+2; + local d16: double = .03E2; + local d17: double = 3.0001; + local d18: double = -3.0001; + local d19: double = 1.7976931348623157e308; # maximum allowed value + local d20 = 7.0; + local d21 = 7e0; + local d22 = 7e+1; + + # Type inference tests + + test_case( "type inference", type_name(d20) == "double" ); + test_case( "type inference", type_name(d21) == "double" ); + test_case( "type inference", type_name(d22) == "double" ); + + # Test various constant representations + + test_case( "double representations", d1 == d2 ); + test_case( "double representations", d1 == d3 ); + test_case( "double representations", d1 == d4 ); + test_case( "double representations", d1 == d5 ); + test_case( "double representations", d1 == d6 ); + test_case( "double representations", d1 == d7 ); + test_case( "double representations", d1 == d8 ); + test_case( "double representations", d1 == d9 ); + test_case( "double representations", d1 == d10 ); + test_case( "double representations", d1 == d11 ); + test_case( "double representations", d1 == d12 ); + test_case( "double representations", d1 == d13 ); + test_case( "double representations", d1 == d14 ); + test_case( "double representations", d1 == d15 ); + test_case( "double representations", d1 == d16 ); + + # Operator tests + + test_case( "inequality operator", d18 != d17 ); + test_case( "absolute value", |d18| == d17 ); + d4 += 2; + test_case( "assignment operator", d4 == 5.0 ); + d4 -= 3; + test_case( "assignment operator", d4 == 2.0 ); + test_case( "relational operator", d4 <= d3 ); + test_case( "relational operator", d4 < d3 ); + test_case( "relational operator", d17 >= d3 ); + test_case( "relational operator", d17 > d3 ); + test_case( "division operator", d3/2 == 1.5 ); + + # Max. value test + + local str1 = fmt("max double value = %.16e", d19); + test_case( str1, str1 == "max double value = 1.7976931348623157e+308" ); + +} + diff --git a/testing/btest/language/enum.bro b/testing/btest/language/enum.bro new file mode 100644 index 0000000000..5cafb323a6 --- /dev/null +++ b/testing/btest/language/enum.bro @@ -0,0 +1,32 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +# enum with optional comma at end of definition +type color: enum { Red, White, Blue, }; + +# enum without optional comma +type city: enum { Rome, Paris }; + + +event bro_init() +{ + local e1: color = Blue; + local e2: color = White; + local e3: color = Blue; + local e4: city = Rome; + + test_case( "enum equality comparison", e1 != e2 ); + test_case( "enum equality comparison", e1 == e3 ); + test_case( "enum equality comparison", e1 != e4 ); + + # type inference + local x = Blue; + test_case( "type inference", x == e1 ); +} + diff --git a/testing/btest/language/event.bro b/testing/btest/language/event.bro new file mode 100644 index 0000000000..1ea5c7b6d8 --- /dev/null +++ b/testing/btest/language/event.bro @@ -0,0 +1,49 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + + +event e1() + { + print "event statement"; + return; + print "Error: this should not happen"; + } + +event e2() + { + print "schedule statement"; + } + +event e3(test: string) + { + print "event part1"; + } + +event e4(num: count) + { + print "assign event variable"; + } + +# Note: the name of this event is intentionally the same as one above +event e3(test: string) + { + print "event part2"; + } + +event bro_init() +{ + # Test calling an event with "event" statement + event e1(); + + # Test calling an event with "schedule" statement + schedule 1 sec { e2() }; + + # Test calling an event that has two separate definitions + event e3("foo"); + + # Test assigning an event variable to an event + local e5: event(num: count); + e5 = e4; + event e5(6); # TODO: this does not do anything +} + diff --git a/testing/btest/language/expire_func.test b/testing/btest/language/expire_func.test new file mode 100644 index 0000000000..653a4d9a86 --- /dev/null +++ b/testing/btest/language/expire_func.test @@ -0,0 +1,23 @@ +# @TEST-EXEC: bro -C -r $TRACES/var-services-std-ports.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +function inform_me(s: set[string], idx: string): interval + { + print fmt("expired %s", idx); + return 0secs; + } + +global s: set[string] &create_expire=1secs &expire_func=inform_me; + +event bro_init() + { + add s["i"]; + add s["am"]; + add s["here"]; + } + +event new_connection(c: connection) + { + add s[fmt("%s", c$id)]; + print s; + } diff --git a/testing/btest/language/file.bro b/testing/btest/language/file.bro new file mode 100644 index 0000000000..1f631eb4fe --- /dev/null +++ b/testing/btest/language/file.bro @@ -0,0 +1,19 @@ +# @TEST-EXEC: bro %INPUT +# @TEST-EXEC: btest-diff out1 +# @TEST-EXEC: btest-diff out2 + + +event bro_init() +{ + local f1: file = open( "out1" ); + print f1, 20; + print f1, 12; + close(f1); + + # Type inference test + + local f2 = open( "out2" ); + print f2, "test", 123, 456; + close(f2); +} + diff --git a/testing/btest/language/for.bro b/testing/btest/language/for.bro new file mode 100644 index 0000000000..f10ef0eb1b --- /dev/null +++ b/testing/btest/language/for.bro @@ -0,0 +1,44 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + + +event bro_init() +{ + local vv: vector of string = vector( "a", "b", "c" ); + local ct: count = 0; + + # Test a "for" loop without "break" or "next" + + ct = 0; + for ( i in vv ) ++ct; + test_case("for loop", ct == 3 ); + + # Test the "break" statement + + ct = 0; + for ( i in vv ) + { + ++ct; + break; + test_case("Error: this should not happen", F); + } + test_case("for loop with break", ct == 1 ); + + # Test the "next" statement + + ct = 0; + for ( i in vv ) + { + ++ct; + next; + test_case("Error: this should not happen", F); + } + test_case("for loop with next", ct == 3 ); +} + diff --git a/testing/btest/language/function.bro b/testing/btest/language/function.bro new file mode 100644 index 0000000000..13efbb91f8 --- /dev/null +++ b/testing/btest/language/function.bro @@ -0,0 +1,73 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + + +function f1() + { + test_case("no args without return value", T ); + } + +function f2() + { + test_case("no args no return value, empty return", T ); + return; + } + +function f3(): bool + { + return T; + } + +function f4(test: string) + { + test_case("args without return value", T ); + } + +function f5(test: string): bool + { + return T; + } + +function f6(test: string, num: count): bool + { + local val: int = -num; + if ( test == "bar" && num == 3 && val < 0 ) return T; + return F; + } + +function f7(test: string): bool + { + return F; + } + +event bro_init() +{ + f1(); + f2(); + test_case("no args with return value", f3() ); + f4("foo"); + test_case("args with return value", f5("foo") ); + test_case("multiple args with return value", f6("bar", 3) ); + + local f10 = function() { test_case("anonymous function without args or return value", T ); }; + f10(); + + local f11 = function(): bool { return T; }; + test_case("anonymous function with return value", f11() ); + + local f12 = function(val: int): bool { if (val > 0) return T; else return F; }; + test_case("anonymous function with args and return value", f12(2) ); + + # Test that a function variable can later be assigned to a function + local f13: function(test: string): bool; + f13 = f5; + test_case("assign function variable", f13("foo") ); + f13 = f7; + test_case("reassign function variable", !f13("bar") ); +} + diff --git a/testing/btest/language/if.bro b/testing/btest/language/if.bro new file mode 100644 index 0000000000..e9acea865f --- /dev/null +++ b/testing/btest/language/if.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + + +event bro_init() +{ + # Test "if" without "else" + + if ( T ) test_case( "if T", T); + + if ( F ) test_case( "Error: this should not happen", F); + + # Test "if" with only an "else" + + if ( T ) test_case( "if T else", T); + else test_case( "Error: this should not happen", F); + + if ( F ) test_case( "Error: this should not happen", F); + else test_case( "if F else", T); + + # Test "if" with only an "else if" + + if ( T ) test_case( "if T else if F", T); + else if ( F ) test_case( "Error: this should not happen", F); + + if ( F ) test_case( "Error: this should not happen", F); + else if ( T ) test_case( "if F else if T", T); + + if ( T ) test_case( "if T else if T", T); + else if ( T ) test_case( "Error: this should not happen", F); + + if ( F ) test_case( "Error: this should not happen", F); + else if ( F ) test_case( "Error: this should not happen", F); + + # Test "if" with both "else if" and "else" + + if ( T ) test_case( "if T else if F else", T); + else if ( F ) test_case( "Error: this should not happen", F); + else test_case( "Error: this should not happen", F); + + if ( F ) test_case( "Error: this should not happen", F); + else if ( T ) test_case( "if F else if T else", T); + else test_case( "Error: this should not happen", F); + + if ( T ) test_case( "if T else if T else", T); + else if ( T ) test_case( "Error: this should not happen", F); + else test_case( "Error: this should not happen", F); + + if ( F ) test_case( "Error: this should not happen", F); + else if ( F ) test_case( "Error: this should not happen", F); + else test_case( "if F else if F else", T); + + # Test "if" with multiple "else if" and an "else" + + if ( F ) test_case( "Error: this should not happen", F); + else if ( F ) test_case( "Error: this should not happen", F); + else if ( T ) test_case( "if F else if F else if T else", T); + else test_case( "Error: this should not happen", F); + + if ( F ) test_case( "Error: this should not happen", F); + else if ( F ) test_case( "Error: this should not happen", F); + else if ( F ) test_case( "Error: this should not happen", F); + else test_case( "if F else if F else if F else", T); +} + diff --git a/testing/btest/language/incr-vec-expr.test b/testing/btest/language/incr-vec-expr.test new file mode 100644 index 0000000000..c9945061a2 --- /dev/null +++ b/testing/btest/language/incr-vec-expr.test @@ -0,0 +1,27 @@ +# @TEST-EXEC: bro -b %INPUT >out +# @TEST-EXEC: btest-diff out + +type rec: record { + a: count; + b: string; + c: vector of count; +}; + +global vec: vector of count = vector(0,0,0); + +global v: rec = [$a=0, $b="test", $c=vector(1,2,3)]; + +print vec; +print v; + +++vec; + +print vec; + +++v$a; + +print v; + +++v$c; + +print v; diff --git a/testing/btest/language/int.bro b/testing/btest/language/int.bro new file mode 100644 index 0000000000..5cfa1620bd --- /dev/null +++ b/testing/btest/language/int.bro @@ -0,0 +1,70 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + + +event bro_init() +{ + local i1: int = 3; + local i2: int = +3; + local i3: int = -3; + local i4: int = +0; + local i5: int = -0; + local i6: int = 12; + local i7: int = +0xc; + local i8: int = 0xC; + local i9: int = -0xC; + local i10: int = -12; + local i11: int = 9223372036854775807; # max. allowed value + local i12: int = -9223372036854775808; # min. allowed value + local i13: int = 0x7fffffffffffffff; # max. 
allowed value + local i14: int = -0x8000000000000000; # min. allowed value + local i15 = +3; + + # Type inference test + + test_case( "type inference", type_name(i15) == "int" ); + + # Test various constant representations + + test_case( "optional '+' sign", i1 == i2 ); + test_case( "negative vs. positive", i1 != i3 ); + test_case( "negative vs. positive", i4 == i5 ); + test_case( "hexadecimal", i6 == i7 ); + test_case( "hexadecimal", i6 == i8 ); + test_case( "hexadecimal", i9 == i10 ); + + # Operator tests + + test_case( "relational operator", i2 > i3 ); + test_case( "relational operator", i2 >= i3 ); + test_case( "relational operator", i3 < i2 ); + test_case( "relational operator", i3 <= i2 ); + test_case( "absolute value", |i4| == 0 ); + test_case( "absolute value", |i3| == 3 ); + test_case( "pre-increment operator", ++i2 == 4 ); + test_case( "pre-decrement operator", --i2 == 3 ); + test_case( "modulus operator", i2%2 == 1 ); + test_case( "division operator", i2/2 == 1 ); + i2 += 4; + test_case( "assignment operator", i2 == 7 ); + i2 -= 2; + test_case( "assignment operator", i2 == 5 ); + + # Max/min value tests + + local str1 = fmt("max int value = %d", i11); + test_case( str1, str1 == "max int value = 9223372036854775807" ); + local str2 = fmt("min int value = %d", i12); + test_case( str2, str2 == "min int value = -9223372036854775808" ); + local str3 = fmt("max int value = %d", i13); + test_case( str3, str3 == "max int value = 9223372036854775807" ); + local str4 = fmt("min int value = %d", i14); + test_case( str4, str4 == "min int value = -9223372036854775808" ); + +} + diff --git a/testing/btest/language/interval.bro b/testing/btest/language/interval.bro new file mode 100644 index 0000000000..66d44206d3 --- /dev/null +++ b/testing/btest/language/interval.bro @@ -0,0 +1,92 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +function approx_equal(x: double, y: double): bool + { + # return T if x and y are approximately equal, and F otherwise + return |(x - y)/x| < 1e-6 ? 
T : F; + } + +event bro_init() +{ + # Constants without space and no letter "s" + + local in11: interval = 2usec; + local in12: interval = 2msec; + local in13: interval = 120sec; + local in14: interval = 2min; + local in15: interval = -2hr; + local in16: interval = 2.5day; + + # Constants with space and no letter "s" + + local in21: interval = 2 usec; + local in22: interval = 2 msec; + local in23: interval = 120 sec; + local in24: interval = 2 min; + local in25: interval = -2 hr; + local in26: interval = 2.5 day; + + # Constants with space and letter "s" + + local in31: interval = 2 usecs; + local in32: interval = 2 msecs; + local in33: interval = 1.2e2 secs; + local in34: interval = 2 mins; + local in35: interval = -2 hrs; + local in36: interval = 2.5 days; + + # Type inference + + local in41 = 2 usec; + local in42 = 2.1usec; + local in43 = 3usecs; + + # Type inference tests + + test_case( "type inference", type_name(in41) == "interval" ); + test_case( "type inference", type_name(in42) == "interval" ); + test_case( "type inference", type_name(in43) == "interval" ); + + # Test various constant representations + + test_case( "optional space", in11 == in21 ); + test_case( "plural/singular interval are same", in11 == in31 ); + + # Operator tests + + test_case( "different units with same numeric value", in11 != in12 ); + test_case( "compare different time units", in13 == in34 ); + test_case( "compare different time units", in13 <= in34 ); + test_case( "compare different time units", in13 >= in34 ); + test_case( "compare different time units", in13 < in36 ); + test_case( "compare different time units", in13 <= in36 ); + test_case( "compare different time units", in13 > in35 ); + test_case( "compare different time units", in13 >= in35 ); + test_case( "add different time units", in13 + in14 == 4min ); + test_case( "subtract different time units", in24 - in23 == 0sec ); + test_case( "absolute value", |in25| == 2.0*3600 ); + test_case( "absolute value", |in36| == 2.5*86400 ); + in34 += 2hr; + test_case( "assignment operator", in34 == 122min ); + in34 -= 2hr; + test_case( "assignment operator", in34 == 2min ); + test_case( "multiplication operator", in33*2 == 4min ); + test_case( "division operator", in35/2 == -1hr ); + test_case( "division operator", approx_equal(in32/in31, 1e3) ); + + # Test relative size of each interval unit + + test_case( "relative size of units", approx_equal(1msec/1usec, 1000) ); + test_case( "relative size of units", approx_equal(1sec/1msec, 1000) ); + test_case( "relative size of units", approx_equal(1min/1sec, 60) ); + test_case( "relative size of units", approx_equal(1hr/1min, 60) ); + test_case( "relative size of units", approx_equal(1day/1hr, 24) ); + +} + diff --git a/testing/btest/language/ipv6-literals.bro b/testing/btest/language/ipv6-literals.bro new file mode 100644 index 0000000000..004d104c6e --- /dev/null +++ b/testing/btest/language/ipv6-literals.bro @@ -0,0 +1,32 @@ +# @TEST-EXEC: bro -b %INPUT >output +# @TEST-EXEC: btest-diff output + +local v: vector of addr = vector(); + +v[|v|] = [::1]; +v[|v|] = [::ffff]; +v[|v|] = [::ffff:ffff]; +v[|v|] = [::0a0a:ffff]; +v[|v|] = [1::1]; +v[|v|] = [1::a]; +v[|v|] = [1::1:1]; +v[|v|] = [1::1:a]; +v[|v|] = [a::a]; +v[|v|] = [a::1]; +v[|v|] = [a::a:a]; +v[|v|] = [a::a:1]; +v[|v|] = [a:a::a]; +v[|v|] = [aaaa:0::ffff]; +v[|v|] = [::ffff:192.168.1.100]; +v[|v|] = [ffff::192.168.1.100]; +v[|v|] = [::192.168.1.100]; +v[|v|] = [::ffff:0:192.168.1.100]; +v[|v|] = [805B:2D9D:DC28::FC57:212.200.31.255]; +v[|v|] = [0xaaaa::bbbb]; 
+v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:ffff:1111:2222]; +v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:ffff:1:2222]; +v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:ffff:0:2222]; +v[|v|] = [aaaa:bbbb:cccc:dddd:eeee:0:0:2222]; + +for (i in v) + print v[i]; diff --git a/testing/btest/language/match-test.bro b/testing/btest/language/match-test.bro deleted file mode 100644 index 9352d0f39f..0000000000 --- a/testing/btest/language/match-test.bro +++ /dev/null @@ -1,20 +0,0 @@ -# @TEST-EXEC: bro %INPUT >output 2>&1 -# @TEST-EXEC: btest-diff output - -global match_stuff = { - [$pred(a: count) = { return a > 5; }, - $result = "it's big", - $priority = 2], - - [$pred(a: count) = { return a > 15; }, - $result = "it's really big", - $priority = 3], - - [$pred(a: count) = { return T; }, - $result = "default", - $priority = 0], -}; - -print match 0 using match_stuff; -print match 10 using match_stuff; -print match 20 using match_stuff; diff --git a/testing/btest/language/match-test2.bro b/testing/btest/language/match-test2.bro deleted file mode 100644 index f1c120adf2..0000000000 --- a/testing/btest/language/match-test2.bro +++ /dev/null @@ -1,51 +0,0 @@ -# @TEST-EXEC: bro %INPUT >output 2>&1 -# @TEST-EXEC: btest-diff output - -type fakealert : record { - alert: string; -}; - - -type match_rec : record { - result : count; - pred : function(rec : fakealert) : bool; - priority: count; -}; - - -#global test_set : set[int] = -#{ -#1, 2, 3 -#}; - -global match_set : set[match_rec] = -{ - [$result = 1, $pred(a: fakealert) = { return T; }, $priority = 8 ], - [$result = 2, $pred(a: fakealert) = { return T; }, $priority = 9 ] -}; - -global al : fakealert; - -#global testset : set[fakealert] = -#{ -# [$alert="hithere"] -#}; - - -type nonalert: record { - alert : string; - pred : function(a : int) : int; -}; - -#global na : nonalert; -#na$alert = "5"; - -#al$alert = "hithere2"; -#if (al in testset) -# print 1; -#else -# print 0; - - -al$alert = "hi"; -print (match al using match_set); diff --git a/testing/btest/language/module.bro b/testing/btest/language/module.bro new file mode 100644 index 0000000000..4c70546406 --- /dev/null +++ b/testing/btest/language/module.bro @@ -0,0 +1,41 @@ +# @TEST-EXEC: bro %INPUT secondtestfile >out +# @TEST-EXEC: btest-diff out + +# In this source file, we define a module and export some objects + +module thisisatest; + +export { + global test_case: function(msg: string, expect: bool); + + global testevent: event(msg: string); + + global num: count = 123; + + const daysperyear: count = 365; +} + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + +event testevent(msg: string) + { + test_case( "event", T ); + } + + +# @TEST-START-FILE secondtestfile + +# In this source file, we try to access each exported object from the module + +event bro_init() +{ + thisisatest::test_case( "function", T ); + thisisatest::test_case( "global variable", thisisatest::num == 123 ); + thisisatest::test_case( "const", thisisatest::daysperyear == 365 ); + event thisisatest::testevent( "foo" ); +} + +# @TEST-END-FILE diff --git a/testing/btest/language/no-module.bro b/testing/btest/language/no-module.bro new file mode 100644 index 0000000000..eadce66c18 --- /dev/null +++ b/testing/btest/language/no-module.bro @@ -0,0 +1,34 @@ +# @TEST-EXEC: bro %INPUT secondtestfile >out +# @TEST-EXEC: btest-diff out + +# This is the same test as "module.bro", but here we omit the module definition + + +global num: count = 123; + +const daysperyear: count = 365; + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +event testevent(msg: string) + { + test_case( "event", T ); + } + + +# @TEST-START-FILE secondtestfile + +# In this script, we try to access each object defined in the other script + +event bro_init() +{ + test_case( "function", T ); + test_case( "global variable", num == 123 ); + test_case( "const", daysperyear == 365 ); + event testevent( "foo" ); +} + +# @TEST-END-FILE diff --git a/testing/btest/language/null-statement.bro b/testing/btest/language/null-statement.bro new file mode 100644 index 0000000000..420ebd8a6c --- /dev/null +++ b/testing/btest/language/null-statement.bro @@ -0,0 +1,34 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + + +function f1(test: string) + { + ; # null statement in function + } + +event bro_init() +{ + local s1: set[string] = set( "this", "test" ); + + ; # null statement in event + + for ( i in s1 ) + ; # null statement in for loop + + if ( |s1| > 0 ) ; # null statement in if statement + + f1("foo"); + + { ; } # null compound statement + + if ( |s1| == 0 ) + { + print "Error: this should not happen"; + } + else + ; # null statement in else + + print "done"; +} + diff --git a/testing/btest/language/pattern.bro b/testing/btest/language/pattern.bro new file mode 100644 index 0000000000..ec50dc66fe --- /dev/null +++ b/testing/btest/language/pattern.bro @@ -0,0 +1,32 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + + +event bro_init() +{ + local p1: pattern = /foo|bar/; + local p2: pattern = /oob/; + local p3: pattern = /^oob/; + local p4 = /foo/; + + # Type inference tests + + test_case( "type inference", type_name(p4) == "pattern" ); + + # Operator tests + + test_case( "equality operator", "foo" == p1 ); + test_case( "equality operator (order of operands)", p1 == "foo" ); + test_case( "inequality operator", "foobar" != p1 ); + test_case( "inequality operator (order of operands)", p1 != "foobar" ); + test_case( "in operator", p1 in "foobar" ); + test_case( "in operator", p2 in "foobar" ); + test_case( "!in operator", p3 !in "foobar" ); + +} + diff --git a/testing/btest/language/port.bro b/testing/btest/language/port.bro new file mode 100644 index 0000000000..1874e1dca3 --- /dev/null +++ b/testing/btest/language/port.bro @@ -0,0 +1,40 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +event bro_init() +{ + local p1: port = 1/icmp; + local p2: port = 2/udp; + local p3: port = 3/tcp; + local p4: port = 4/unknown; + local p5 = 123/tcp; + + # maximum allowed values for each port type + local p6: port = 255/icmp; + local p7: port = 65535/udp; + local p8: port = 65535/tcp; + local p9: port = 255/unknown; + + # Type inference test + + test_case( "type inference", type_name(p5) == "port" ); + + # Operator tests + + test_case( "protocol ordering", p1 > p2 ); + test_case( "protocol ordering", p2 > p3 ); + test_case( "protocol ordering", p3 > p4 ); + test_case( "protocol ordering", p8 < p7 ); + test_case( "protocol ordering", p9 < p6 ); + test_case( "different protocol but same numeric value", p7 != p8 ); + test_case( "different protocol but same numeric value", p6 != p9 ); + test_case( "equality operator", 65535/tcp == p8 ); + +} + diff --git a/testing/btest/language/precedence.bro b/testing/btest/language/precedence.bro new file mode 100644 index 0000000000..da8fef311c --- /dev/null +++ b/testing/btest/language/precedence.bro @@ -0,0 +1,110 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +# This is an incomplete set of tests to demonstrate the order of precedence +# of bro script operators + +event bro_init() +{ + local n1: int; + local n2: int; + local n3: int; + + # Tests that show "++" has higher precedence than "*" + + n1 = n2 = 5; + n1 = ++n1 * 3; + n2 = (++n2) * 3; + test_case( "++ and *", n1 == 18 ); + test_case( "++ and *", n2 == 18 ); + + n1 = 5; + n1 = 3 * ++n1; + test_case( "* and ++", n1 == 18 ); + + # Tests that show "*" has same precedence as "%" + + n1 = 3 * 5 % 2; + n2 = (3 * 5) % 2; + n3 = 3 * (5 % 2); + test_case( "* and %", n1 == 1 ); + test_case( "* and %", n2 == 1 ); + test_case( "* and %", n3 == 3 ); + + n1 = 7 % 3 * 2; + n2 = (7 % 3) * 2; + n3 = 7 % (3 * 2); + test_case( "% and *", n1 == 2 ); + test_case( "% and *", n2 == 2 ); + test_case( "% and *", n3 == 1 ); + + # Tests that show "*" has higher precedence than "+" + + n1 = 1 + 2 * 3; + n2 = 1 + (2 * 3); + n3 = (1 + 2) * 3; + test_case( "+ and *", n1 == 7 ); + test_case( "+ and *", n2 == 7 ); + test_case( "+ and *", n3 == 9 ); + + # Tests that show "+" has higher precedence than "<" + + test_case( "< and +", 5 < 3 + 7 ); + test_case( "< and +", 5 < (3 + 7) ); + + test_case( "+ and <", 7 + 3 > 5 ); + test_case( "+ and <", (7 + 3) > 5 ); + + # Tests that show "+" has higher precedence than "+=" + + n1 = n2 = n3 = 0; + n1 += 1 + 2; + n2 += (1 + 2); + (n3 += 1) + 2; + test_case( "+= and +", n1 == 3 ); + test_case( "+= and +", n2 == 3 ); + test_case( "+= and +", n3 == 1 ); + + local r1: bool; + local r2: bool; + local r3: bool; + + # Tests that show "&&" has higher precedence than "||" + + r1 = F && F || T; + r2 = (F && F) || T; + r3 = F && (F || T); + test_case( "&& and ||", r1 ); + test_case( "&& and ||", r2 ); + test_case( "&& and ||", !r3 ); + + r1 = T || F && F; + r2 = T || (F && F); + r3 = (T || F) && F; + test_case( "|| and &&", r1 ); + test_case( "|| and &&", r2 ); + test_case( "|| and &&", !r3 ); + + # Tests that show "||" has higher precedence than conditional operator + + r1 = T || T ? F : F; + r2 = (T || T) ? F : F; + r3 = T || (T ? F : F); + test_case( "|| and conditional operator", !r1 ); + test_case( "|| and conditional operator", !r2 ); + test_case( "|| and conditional operator", r3 ); + + r1 = T ? F : F || T; + r2 = T ? 
F : (F || T); + r3 = (T ? F : F) || T; + test_case( "conditional operator and ||", !r1 ); + test_case( "conditional operator and ||", !r2 ); + test_case( "conditional operator and ||", r3 ); + +} + diff --git a/testing/btest/language/record-default-coercion.bro b/testing/btest/language/record-default-coercion.bro new file mode 100644 index 0000000000..7e717c39e2 --- /dev/null +++ b/testing/btest/language/record-default-coercion.bro @@ -0,0 +1,18 @@ +# @TEST-EXEC: bro -b %INPUT >out +# @TEST-EXEC: btest-diff out + +type MyRecord: record { + a: count &default=13; + c: count; + v: vector of string &default=vector(); +}; + +event bro_init() + { + local r: MyRecord = [$c=13]; + print r; + print |r$v|; + r$v[|r$v|] = "test"; + print r; + print |r$v|; + } diff --git a/testing/btest/language/record-index-complex-fields.bro b/testing/btest/language/record-index-complex-fields.bro new file mode 100644 index 0000000000..ae45648728 --- /dev/null +++ b/testing/btest/language/record-index-complex-fields.bro @@ -0,0 +1,38 @@ +# @TEST-EXEC: bro -b %INPUT >output +# @TEST-EXEC: btest-diff output + +# This test checks whether records with complex fields (tables, sets, vectors) +# can be used as table/set indices. + +type MetaData: record { + a: count; + tags_v: vector of count; + tags_t: table[string] of count; + tags_s: set[string]; +}; + +global ip_data: table[addr] of set[MetaData] = table(); + +global t1_t: table[string] of count = { ["one"] = 1, ["two"] = 2 }; +global t2_t: table[string] of count = { ["four"] = 4, ["five"] = 5 }; + +global t1_v: vector of count = vector(); +global t2_v: vector of count = vector(); +t1_v[0] = 0; +t1_v[1] = 1; +t2_v[2] = 2; +t2_v[3] = 3; + +local m: MetaData = [$a=4, $tags_v=t1_v, $tags_t=t1_t, $tags_s=set("a", "b")]; +local n: MetaData = [$a=13, $tags_v=t2_v, $tags_t=t2_t, $tags_s=set("c", "d")]; + +if ( 1.2.3.4 !in ip_data ) + ip_data[1.2.3.4] = set(m); +else + add ip_data[1.2.3.4][m]; + +print ip_data; + +add ip_data[1.2.3.4][n]; + +print ip_data[1.2.3.4]; diff --git a/testing/btest/language/set.bro b/testing/btest/language/set.bro new file mode 100644 index 0000000000..5e56e3b9b8 --- /dev/null +++ b/testing/btest/language/set.bro @@ -0,0 +1,140 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +# Note: only global sets can be initialized with curly braces +global sg1: set[string] = { "curly", "braces" }; +global sg2: set[port, string, bool] = { [10/udp, "curly", F], + [11/udp, "braces", T] }; +global sg3 = { "more", "curly", "braces" }; + +event bro_init() +{ + local s1: set[string] = set( "test", "example" ); + local s2: set[string] = set(); + local s3: set[string]; + local s4 = set( "type inference" ); + local s5: set[port, string, bool] = set( [1/tcp, "test", T], + [2/tcp, "example", F] ); + local s6: set[port, string, bool] = set(); + local s7: set[port, string, bool]; + local s8 = set( [8/tcp, "type inference", T] ); + + # Type inference tests + + test_case( "type inference", type_name(s4) == "set[string]" ); + test_case( "type inference", type_name(s8) == "set[port,string,bool]" ); + test_case( "type inference", type_name(sg3) == "set[string]" ); + + # Test the size of each set + + test_case( "cardinality", |s1| == 2 ); + test_case( "cardinality", |s2| == 0 ); + test_case( "cardinality", |s3| == 0 ); + test_case( "cardinality", |s4| == 1 ); + test_case( "cardinality", |s5| == 2 ); + test_case( "cardinality", |s6| == 0 ); + test_case( "cardinality", |s7| == 0 ); + test_case( "cardinality", |s8| == 1 ); + test_case( "cardinality", |sg1| == 2 ); + test_case( "cardinality", |sg2| == 2 ); + test_case( "cardinality", |sg3| == 3 ); + + # Test iterating over each set + + local ct: count; + ct = 0; + for ( c in s1 ) + { + if ( type_name(c) != "string" ) + print "Error: wrong set element type"; + ++ct; + } + test_case( "iterate over set", ct == 2 ); + + ct = 0; + for ( c in s2 ) + { + ++ct; + } + test_case( "iterate over set", ct == 0 ); + + ct = 0; + for ( [c1,c2,c3] in s5 ) + { + ++ct; + } + test_case( "iterate over set", ct == 2 ); + + ct = 0; + for ( [c1,c2,c3] in sg2 ) + { + ++ct; + } + test_case( "iterate over set", ct == 2 ); + + # Test adding elements to each set (Note: cannot add elements to sets + # of multiple types) + + add s1["added"]; + add s1["added"]; # element already exists (nothing happens) + test_case( "add element", |s1| == 3 ); + test_case( "in operator", "added" in s1 ); + + add s2["another"]; + test_case( "add element", |s2| == 1 ); + add s2["test"]; + test_case( "add element", |s2| == 2 ); + test_case( "in operator", "another" in s2 ); + test_case( "in operator", "test" in s2 ); + + add s3["foo"]; + test_case( "add element", |s3| == 1 ); + test_case( "in operator", "foo" in s3 ); + + add s4["local"]; + test_case( "add element", |s4| == 2 ); + test_case( "in operator", "local" in s4 ); + + add sg1["global"]; + test_case( "add element", |sg1| == 3 ); + test_case( "in operator", "global" in sg1 ); + + add sg3["more global"]; + test_case( "add element", |sg3| == 4 ); + test_case( "in operator", "more global" in sg3 ); + + # Test removing elements from each set (Note: cannot remove elements + # from sets of multiple types) + + delete s1["test"]; + delete s1["foobar"]; # element does not exist (nothing happens) + test_case( "remove element", |s1| == 2 ); + test_case( "!in operator", "test" !in s1 ); + + delete s2["test"]; + test_case( "remove element", |s2| == 1 ); + test_case( "!in operator", "test" !in s2 ); + + delete s3["foo"]; + test_case( "remove element", |s3| == 0 ); + test_case( "!in operator", "foo" !in s3 ); + + delete s4["type inference"]; + test_case( "remove element", |s4| == 1 ); + test_case( "!in operator", "type inference" !in s4 ); + + delete sg1["braces"]; + test_case( "remove element", |sg1| == 2 ); + test_case( "!in 
operator", "braces" !in sg1 ); + + delete sg3["curly"]; + test_case( "remove element", |sg3| == 3 ); + test_case( "!in operator", "curly" !in sg3 ); +} + diff --git a/testing/btest/language/short-circuit.bro b/testing/btest/language/short-circuit.bro new file mode 100644 index 0000000000..f0ba585cea --- /dev/null +++ b/testing/btest/language/short-circuit.bro @@ -0,0 +1,48 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + +global ct: count; + +function t_func(): bool + { + ct += 1; + return T; + } + +function f_func(): bool + { + ct += 2; + return F; + } + + +event bro_init() +{ + local res: bool; + + # both functions should be called + ct = 0; + res = t_func() && f_func(); + test_case("&& operator (eval. both operands)", res == F && ct == 3 ); + + # only first function should be called + ct = 0; + res = f_func() && t_func(); + test_case("&& operator (eval. 1st operand)", res == F && ct == 2 ); + + # only first function should be called + ct = 0; + res = t_func() || f_func(); + test_case("|| operator (eval. 1st operand)", res == T && ct == 1 ); + + # both functions should be called + ct = 0; + res = f_func() || t_func(); + test_case("|| operator (eval. both operands)", res == T && ct == 3 ); +} + diff --git a/testing/btest/language/sizeof.bro b/testing/btest/language/sizeof.bro index 3457f895c9..99d7b51ce8 100644 --- a/testing/btest/language/sizeof.bro +++ b/testing/btest/language/sizeof.bro @@ -20,6 +20,7 @@ type example_record: record { }; global a: addr = 1.2.3.4; +global a6: addr = [::1]; global b: bool = T; global c: count = 10; global d: double = -1.23; @@ -52,8 +53,10 @@ v[4] = "World"; # Print out the sizes of the various vals: #----------------------------------------- -# Size of addr: returns integer representation for IPv4, 0 for IPv6. -print fmt("Address %s: %d", a, |a|); +# Size of addr: returns number of bits required to represent the address +# which is 32 for IPv4 or 128 for IPv6 +print fmt("IPv4 Address %s: %d", a, |a|); +print fmt("IPv6 Address %s: %d", a6, |a6|); # Size of boolean: returns 1 or 0. print fmt("Boolean %s: %d", b, |b|); diff --git a/testing/btest/language/string.bro b/testing/btest/language/string.bro new file mode 100644 index 0000000000..3b9137cda5 --- /dev/null +++ b/testing/btest/language/string.bro @@ -0,0 +1,74 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +event bro_init() +{ + local s1: string = "a\ty"; # tab + local s2: string = "a\nb"; # newline + local s3: string = "a\"b"; # double quote + local s4: string = "a\\b"; # backslash + local s5: string = "a\x9y"; # 1-digit hex value (tab character) + local s6: string = "a\x0ab"; # 2-digit hex value (newline character) + local s7: string = "a\x22b"; # 2-digit hex value (double quote) + local s8: string = "a\x00b"; # 2-digit hex value (null character) + local s9: string = "a\011y"; # 3-digit octal value (tab character) + local s10: string = "a\12b"; # 2-digit octal value (newline character) + local s11: string = "a\0b"; # 1-digit octal value (null character) + + local s20: string = ""; + local s21: string = "x"; + local s22: string = s21 + s11; + local s23: string = "test"; + local s24: string = "this is a very long string" + + "which continues on the next line" + + "the end"; + local s25: string = "on"; + local s26 = "x"; + + # Type inference test + + test_case( "type inference", type_name(s26) == "string" ); + + # Escape sequence tests + + test_case( "tab escape sequence", |s1| == 3 ); + test_case( "newline escape sequence", |s2| == 3 ); + test_case( "double quote escape sequence", |s3| == 3 ); + test_case( "backslash escape sequence", |s4| == 3 ); + test_case( "1-digit hex escape sequence", |s5| == 3 ); + test_case( "2-digit hex escape sequence", |s6| == 3 ); + test_case( "2-digit hex escape sequence", |s7| == 3 ); + test_case( "2-digit hex escape sequence", |s8| == 3 ); + test_case( "3-digit octal escape sequence", |s9| == 3 ); + test_case( "2-digit octal escape sequence", |s10| == 3 ); + test_case( "1-digit octal escape sequence", |s11| == 3 ); + test_case( "tab escape sequence", s1 == s5 ); + test_case( "tab escape sequence", s5 == s9 ); + test_case( "newline escape sequence", s2 == s6 ); + test_case( "newline escape sequence", s6 == s10 ); + test_case( "double quote escape sequence", s3 == s7 ); + test_case( "null escape sequence", s8 == s11 ); + + # Operator tests + + test_case( "empty string", |s20| == 0 ); + test_case( "nonempty string", |s21| == 1 ); + test_case( "string comparison", s21 > s11 ); + test_case( "string comparison", s21 >= s11 ); + test_case( "string comparison", s11 < s21 ); + test_case( "string comparison", s11 <= s21 ); + test_case( "string concatenation", |s22| == 4 ); + s23 += s21; + test_case( "string concatenation", s23 == "testx" ); + test_case( "multi-line string initialization", |s24| == 65 ); + test_case( "in operator", s25 in s24 ); + test_case( "!in operator", s25 !in s23 ); + +} + diff --git a/testing/btest/language/subnet.bro b/testing/btest/language/subnet.bro new file mode 100644 index 0000000000..ea641f6983 --- /dev/null +++ b/testing/btest/language/subnet.bro @@ -0,0 +1,47 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +event bro_init() +{ + # IPv4 addr + local a1: addr = 192.1.2.3; + + # IPv4 subnets + local s1: subnet = 0.0.0.0/0; + local s2: subnet = 192.0.0.0/8; + local s3: subnet = 255.255.255.255/32; + local s4 = 10.0.0.0/16; + + test_case( "IPv4 subnet equality", a1/8 == s2 ); + test_case( "IPv4 subnet inequality", a1/4 != s2 ); + test_case( "IPv4 subnet in operator", a1 in s2 ); + test_case( "IPv4 subnet !in operator", a1 !in s3 ); + test_case( "IPv4 subnet type inference", type_name(s4) == "subnet" ); + + # IPv6 addrs + local b1: addr = [ffff::]; + local b2: addr = [ffff::1]; + local b3: addr = [ffff:1::1]; + + # IPv6 subnets + local t1: subnet = [::]/0; + local t2: subnet = [ffff::]/64; + local t3 = [a::]/32; + + test_case( "IPv6 subnet equality", b1/64 == t2 ); + test_case( "IPv6 subnet inequality", b3/64 != t2 ); + test_case( "IPv6 subnet in operator", b2 in t2 ); + test_case( "IPv6 subnet !in operator", b3 !in t2 ); + test_case( "IPv6 subnet type inference", type_name(t3) == "subnet" ); + + test_case( "IPv4 and IPv6 subnet inequality", s1 != t1 ); + test_case( "IPv4 address and IPv6 subnet", a1 !in t2 ); + +} + diff --git a/testing/btest/language/table-init.bro b/testing/btest/language/table-init.bro new file mode 100644 index 0000000000..5df682c5d2 --- /dev/null +++ b/testing/btest/language/table-init.bro @@ -0,0 +1,20 @@ +# @TEST-EXEC: bro %INPUT >output +# @TEST-EXEC: btest-diff output + +global global_table: table[count] of string = { + [1] = "one", + [2] = "two" +} &default = "global table default"; + +event bro_init() + { + local local_table: table[count] of string = { + [3] = "three", + [4] = "four" + } &default = "local table default"; + + print global_table; + print global_table[0]; + print local_table; + print local_table[0]; + } diff --git a/testing/btest/language/table.bro b/testing/btest/language/table.bro new file mode 100644 index 0000000000..d1b0751970 --- /dev/null +++ b/testing/btest/language/table.bro @@ -0,0 +1,149 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + +# Note: only global tables can be initialized with curly braces when the table +# type is not explicitly specified +global tg1 = { [1] = "type", [2] = "inference", [3] = "test" }; + +event bro_init() +{ + local t1: table[count] of string = table( [5] = "test", [0] = "example" ); + local t2: table[count] of string = table(); + local t3: table[count] of string; + local t4 = table( [1] = "type inference" ); + local t5: table[count] of string = { [1] = "curly", [3] = "braces" }; + local t6: table[port, string, bool] of string = table( + [1/tcp, "test", T] = "test1", + [2/tcp, "example", F] = "test2" ); + local t7: table[port, string, bool] of string = table(); + local t8: table[port, string, bool] of string; + local t9 = table( [8/tcp, "type inference", T] = "this" ); + local t10: table[port, string, bool] of string = { + [10/udp, "curly", F] = "first", + [11/udp, "braces", T] = "second" }; + + # Type inference tests + + test_case( "type inference", type_name(t4) == "table[count] of string" ); + test_case( "type inference", type_name(t9) == "table[port,string,bool] of string" ); + test_case( "type inference", type_name(tg1) == "table[count] of string" ); + + # Test the size of each table + + test_case( "cardinality", |t1| == 2 ); + test_case( "cardinality", |t2| == 0 ); + test_case( "cardinality", |t3| == 0 ); + test_case( "cardinality", |t4| == 1 ); + test_case( "cardinality", |t5| == 2 ); + test_case( "cardinality", |t6| == 2 ); + test_case( "cardinality", |t7| == 0 ); + test_case( "cardinality", |t8| == 0 ); + test_case( "cardinality", |t9| == 1 ); + test_case( "cardinality", |t10| == 2 ); + test_case( "cardinality", |tg1| == 3 ); + + # Test iterating over each table + + local ct: count; + ct = 0; + for ( c in t1 ) + { + if ( type_name(c) != "count" ) + print "Error: wrong index type"; + if ( type_name(t1[c]) != "string" ) + print "Error: wrong table type"; + ++ct; + } + test_case( "iterate over table", ct == 2 ); + + ct = 0; + for ( c in t2 ) + { + ++ct; + } + test_case( "iterate over table", ct == 0 ); + + ct = 0; + for ( c in t3 ) + { + ++ct; + } + test_case( "iterate over table", ct == 0 ); + + ct = 0; + for ( [c1, c2, c3] in t6 ) + { + ++ct; + } + test_case( "iterate over table", ct == 2 ); + + ct = 0; + for ( [c1, c2, c3] in t7 ) + { + ++ct; + } + test_case( "iterate over table", ct == 0 ); + + # Test overwriting elements in each table (Note: cannot overwrite + # elements in tables of multiple types) + + t1[5] = "overwrite"; + test_case( "overwrite element", |t1| == 2 && t1[5] == "overwrite" ); + + # Test adding elements to each table (Note: cannot add elements to + # tables of multiple types) + + t1[1] = "added"; + test_case( "add element", |t1| == 3 ); + test_case( "in operator", 1 in t1 ); + + t2[11] = "another"; + test_case( "add element", |t2| == 1 ); + t2[0] = "test"; + test_case( "add element", |t2| == 2 ); + test_case( "in operator", 11 in t2 ); + test_case( "in operator", 0 in t2 ); + + t3[3] = "foo"; + test_case( "add element", |t3| == 1 ); + test_case( "in operator", 3 in t3 ); + + t4[4] = "local"; + test_case( "add element", |t4| == 2 ); + test_case( "in operator", 4 in t4 ); + + t5[10] = "local2"; + test_case( "add element", |t5| == 3 ); + test_case( "in operator", 10 in t5 ); + + # Test removing elements from each table (Note: cannot remove elements + # from tables of multiple types) + + delete t1[0]; + delete t1[17]; # element does not exist (nothing happens) + test_case( "remove element", |t1| == 2 ); + test_case( "!in operator", 0 !in t1 ); + 
+ delete t2[0]; + test_case( "remove element", |t2| == 1 ); + test_case( "!in operator", 0 !in t2 ); + + delete t3[3]; + test_case( "remove element", |t3| == 0 ); + test_case( "!in operator", 3 !in t3 ); + + delete t4[1]; + test_case( "remove element", |t4| == 1 ); + test_case( "!in operator", 1 !in t4 ); + + delete t5[1]; + test_case( "remove element", |t5| == 2 ); + test_case( "!in operator", 1 !in t5 ); + +} + diff --git a/testing/btest/language/time.bro b/testing/btest/language/time.bro new file mode 100644 index 0000000000..43b6694101 --- /dev/null +++ b/testing/btest/language/time.bro @@ -0,0 +1,33 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? "PASS" : "FAIL"); + } + + +event bro_init() +{ + local t1: time = current_time(); + local t2: time = t1 + 3 sec; + local t3: time = t2 - 10 sec; + local t4: time = t1; + local t5: time = double_to_time(1234567890); + local t6 = current_time(); + + # Type inference test + + test_case( "type inference", type_name(t6) == "time" ); + + # Operator tests + + test_case( "add interval", t1 < t2 ); + test_case( "subtract interval", t1 > t3 ); + test_case( "inequality", t1 != t3 ); + test_case( "equality", t1 == t4 ); + test_case( "subtract time", t2 - t1 == 3sec); + test_case( "size operator", |t5| == 1234567890.0 ); + +} + diff --git a/testing/btest/language/timeout.bro b/testing/btest/language/timeout.bro new file mode 100644 index 0000000000..6bc0419b2f --- /dev/null +++ b/testing/btest/language/timeout.bro @@ -0,0 +1,19 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + + +event bro_init() +{ + local h1: addr = 1.2.3.4; + + when ( local h1name = lookup_addr(h1) ) + { + print "lookup successful"; + } + timeout 3 secs + { + print "timeout"; + } + +} + diff --git a/testing/btest/language/vector.bro b/testing/btest/language/vector.bro new file mode 100644 index 0000000000..928ddcb645 --- /dev/null +++ b/testing/btest/language/vector.bro @@ -0,0 +1,167 @@ +# @TEST-EXEC: bro %INPUT >out +# @TEST-EXEC: btest-diff out + +function test_case(msg: string, expect: bool) + { + print fmt("%s (%s)", msg, expect ? 
"PASS" : "FAIL"); + } + + +# Note: only global vectors can be initialized with curly braces +global vg1: vector of string = { "curly", "braces" }; + +event bro_init() +{ + local v1: vector of string = vector( "test", "example" ); + local v2: vector of string = vector(); + local v3: vector of string; + local v4 = vector( "type inference" ); + local v5 = vector( 1, 2, 3 ); + local v6 = vector( 10, 20, 30 ); + local v7 = v5 + v6; + local v8 = v6 - v5; + local v9 = v5 * v6; + local v10 = v6 / v5; + local v11 = v6 % v5; + local v12 = vector( T, F, T ); + local v13 = vector( F, F, T ); + local v14 = v12 && v13; + local v15 = v12 || v13; + + # Type inference tests + + test_case( "type inference", type_name(v4) == "vector of string" ); + test_case( "type inference", type_name(v5) == "vector of count" ); + test_case( "type inference", type_name(v12) == "vector of bool" ); + + # Test the size of each vector + + test_case( "cardinality", |v1| == 2 ); + test_case( "cardinality", |v2| == 0 ); + test_case( "cardinality", |v3| == 0 ); + test_case( "cardinality", |v4| == 1 ); + test_case( "cardinality", |v5| == 3 ); + test_case( "cardinality", |v6| == 3 ); + test_case( "cardinality", |v7| == 3 ); + test_case( "cardinality", |v8| == 3 ); + test_case( "cardinality", |v9| == 3 ); + test_case( "cardinality", |v10| == 3 ); + test_case( "cardinality", |v11| == 3 ); + test_case( "cardinality", |v12| == 3 ); + test_case( "cardinality", |v13| == 3 ); + test_case( "cardinality", |v14| == 3 ); + test_case( "cardinality", |v15| == 3 ); + test_case( "cardinality", |vg1| == 2 ); + + # Test that vectors use zero-based indexing + + test_case( "zero-based indexing", v1[0] == "test" && v5[0] == 1 ); + + # Test iterating over each vector + + local ct: count; + ct = 0; + for ( c in v1 ) + { + if ( type_name(c) != "int" ) + print "Error: wrong index type"; + if ( type_name(v1[c]) != "string" ) + print "Error: wrong vector type"; + ++ct; + } + test_case( "iterate over vector", ct == 2 ); + + ct = 0; + for ( c in v2 ) + { + ++ct; + } + test_case( "iterate over vector", ct == 0 ); + + ct = 0; + for ( c in vg1 ) + { + ++ct; + } + test_case( "iterate over vector", ct == 2 ); + + # Test adding elements to each vector + + v1[2] = "added"; + test_case( "add element", |v1| == 3 ); + test_case( "access element", v1[2] == "added" ); + + v2[0] = "another"; + test_case( "add element", |v2| == 1 ); + v2[1] = "test"; + test_case( "add element", |v2| == 2 ); + test_case( "access element", v2[0] == "another" ); + test_case( "access element", v2[1] == "test" ); + + v3[0] = "foo"; + test_case( "add element", |v3| == 1 ); + test_case( "access element", v3[0] == "foo" ); + + v4[1] = "local"; + test_case( "add element", |v4| == 2 ); + test_case( "access element", v4[1] == "local" ); + + v5[3] = 77; + test_case( "add element", |v5| == 4 ); + test_case( "access element", v5[3] == 77 ); + + vg1[2] = "global"; + test_case( "add element", |vg1| == 3 ); + test_case( "access element", vg1[2] == "global" ); + + # Test overwriting elements of each vector + + v1[0] = "new1"; + test_case( "overwrite element", |v1| == 3 ); + test_case( "access element", v1[0] == "new1" ); + + v2[1] = "new2"; + test_case( "overwrite element", |v2| == 2 ); + test_case( "access element", v2[0] == "another" ); + test_case( "access element", v2[1] == "new2" ); + + v3[0] = "new3"; + test_case( "overwrite element", |v3| == 1 ); + test_case( "access element", v3[0] == "new3" ); + + v4[0] = "new4"; + test_case( "overwrite element", |v4| == 2 ); + test_case( "access element", v4[0] == 
"new4" ); + + v5[0] = 0; + test_case( "overwrite element", |v5| == 4 ); + test_case( "access element", v5[0] == 0 ); + + vg1[1] = "new5"; + test_case( "overwrite element", |vg1| == 3 ); + test_case( "access element", vg1[1] == "new5" ); + + # Test increment/decrement operators + + ++v5; + test_case( "++ operator", |v5| == 4 && v5[0] == 1 && v5[1] == 3 + && v5[2] == 4 && v5[3] == 78 ); + --v5; + test_case( "-- operator", |v5| == 4 && v5[0] == 0 && v5[1] == 2 + && v5[2] == 3 && v5[3] == 77 ); + + # Test +,-,*,/,% of two vectors + + test_case( "+ operator", v7[0] == 11 && v7[1] == 22 && v7[2] == 33 ); + test_case( "- operator", v8[0] == 9 && v8[1] == 18 && v8[2] == 27 ); + test_case( "* operator", v9[0] == 10 && v9[1] == 40 && v9[2] == 90 ); + test_case( "/ operator", v10[0] == 10 && v10[1] == 10 && v10[2] == 10 ); + test_case( "% operator", v11[0] == 0 && v11[1] == 0 && v11[2] == 0 ); + + # Test &&,|| of two vectors + + test_case( "&& operator", v14[0] == F && v14[1] == F && v14[2] == T ); + test_case( "|| operator", v15[0] == T && v15[1] == F && v15[2] == T ); + +} + diff --git a/testing/btest/language/when.bro b/testing/btest/language/when.bro new file mode 100644 index 0000000000..84c1f06cef --- /dev/null +++ b/testing/btest/language/when.bro @@ -0,0 +1,20 @@ +# @TEST-SERIALIZE: comm +# @TEST-EXEC: btest-bg-run test1 bro %INPUT +# @TEST-EXEC: btest-bg-wait 10 +# @TEST-EXEC: mv test1/.stdout out +# @TEST-EXEC: btest-diff out + +@load frameworks/communication/listen + +event bro_init() +{ + local h1: addr = 127.0.0.1; + + when ( local h1name = lookup_addr(h1) ) + { + print "lookup successful"; + terminate(); + } + print "done"; +} + diff --git a/testing/btest/scripts/base/frameworks/cluster/start-it-up.bro b/testing/btest/scripts/base/frameworks/cluster/start-it-up.bro index d1eb94d5e1..acb9c3676a 100644 --- a/testing/btest/scripts/base/frameworks/cluster/start-it-up.bro +++ b/testing/btest/scripts/base/frameworks/cluster/start-it-up.bro @@ -1,9 +1,13 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run manager-1 BROPATH=$BROPATH:.. CLUSTER_NODE=manager-1 bro %INPUT +# @TEST-EXEC: sleep 1 # @TEST-EXEC: btest-bg-run proxy-1 BROPATH=$BROPATH:.. CLUSTER_NODE=proxy-1 bro %INPUT # @TEST-EXEC: btest-bg-run proxy-2 BROPATH=$BROPATH:.. CLUSTER_NODE=proxy-2 bro %INPUT +# @TEST-EXEC: sleep 1 # @TEST-EXEC: btest-bg-run worker-1 BROPATH=$BROPATH:.. CLUSTER_NODE=worker-1 bro %INPUT # @TEST-EXEC: btest-bg-run worker-2 BROPATH=$BROPATH:.. 
CLUSTER_NODE=worker-2 bro %INPUT -# @TEST-EXEC: btest-bg-wait -k 2 +# @TEST-EXEC: btest-bg-wait 30 # @TEST-EXEC: btest-diff manager-1/.stdout # @TEST-EXEC: btest-diff proxy-1/.stdout # @TEST-EXEC: btest-diff proxy-2/.stdout @@ -20,7 +24,42 @@ redef Cluster::nodes = { }; @TEST-END-FILE +global fully_connected: event(); + +global peer_count = 0; + +global fully_connected_nodes = 0; + +event fully_connected() + { + fully_connected_nodes = fully_connected_nodes + 1; + if ( Cluster::node == "manager-1" ) + { + if ( peer_count == 4 && fully_connected_nodes == 4 ) + terminate_communication(); + } + } + +redef Cluster::worker2manager_events += /fully_connected/; +redef Cluster::proxy2manager_events += /fully_connected/; + event remote_connection_handshake_done(p: event_peer) { print "Connected to a peer"; - } \ No newline at end of file + peer_count = peer_count + 1; + if ( Cluster::node == "manager-1" ) + { + if ( peer_count == 4 && fully_connected_nodes == 4 ) + terminate_communication(); + } + else + { + if ( peer_count == 2 ) + event fully_connected(); + } + } + +event remote_connection_closed(p: event_peer) + { + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/communication/communication_log_baseline.bro b/testing/btest/scripts/base/frameworks/communication/communication_log_baseline.bro new file mode 100644 index 0000000000..4a2ed735ef --- /dev/null +++ b/testing/btest/scripts/base/frameworks/communication/communication_log_baseline.bro @@ -0,0 +1,42 @@ +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run receiver bro -b ../receiver.bro +# @TEST-EXEC: btest-bg-run sender bro -b ../sender.bro +# @TEST-EXEC: btest-bg-wait -k 10 +# +# Don't diff the receiver log just because port is always going to change +# @TEST-EXEC: egrep -v 'CPU|bytes|pid|socket buffer size' sender/communication.log >send.log +# @TEST-EXEC: btest-diff send.log + +@TEST-START-FILE sender.bro + +@load base/frameworks/communication/main + +redef Communication::nodes += { + ["foo"] = [$host = 127.0.0.1, $events = /NOTHING/, $connect=T] +}; + +event remote_connection_handshake_done(p: event_peer) + { + terminate_communication(); + } + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +@TEST-END-FILE + +############# + +@TEST-START-FILE receiver.bro + +@load frameworks/communication/listen + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +@TEST-END-FILE diff --git a/testing/btest/scripts/base/frameworks/control/configuration_update.bro b/testing/btest/scripts/base/frameworks/control/configuration_update.bro index eb86ec58e8..d9e62efe08 100644 --- a/testing/btest/scripts/base/frameworks/control/configuration_update.bro +++ b/testing/btest/scripts/base/frameworks/control/configuration_update.bro @@ -1,7 +1,11 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run controllee BROPATH=$BROPATH:.. bro %INPUT frameworks/control/controllee Communication::listen_port=65531/tcp +# @TEST-EXEC: sleep 5 # @TEST-EXEC: btest-bg-run controller BROPATH=$BROPATH:.. bro %INPUT test-redef frameworks/control/controller Control::host=127.0.0.1 Control::host_port=65531/tcp Control::cmd=configuration_update +# @TEST-EXEC: sleep 5 # @TEST-EXEC: btest-bg-run controller2 BROPATH=$BROPATH:.. 
bro %INPUT frameworks/control/controller Control::host=127.0.0.1 Control::host_port=65531/tcp Control::cmd=shutdown -# @TEST-EXEC: btest-bg-wait 1 +# @TEST-EXEC: btest-bg-wait 10 # @TEST-EXEC: btest-diff controllee/.stdout redef Communication::nodes = { @@ -23,4 +27,4 @@ event bro_init() event bro_done() { print test_var; - } \ No newline at end of file + } diff --git a/testing/btest/scripts/base/frameworks/control/id_value.bro b/testing/btest/scripts/base/frameworks/control/id_value.bro index 90a5367f76..ffbb9a10cf 100644 --- a/testing/btest/scripts/base/frameworks/control/id_value.bro +++ b/testing/btest/scripts/base/frameworks/control/id_value.bro @@ -1,6 +1,8 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run controllee BROPATH=$BROPATH:.. bro %INPUT only-for-controllee frameworks/control/controllee Communication::listen_port=65532/tcp # @TEST-EXEC: btest-bg-run controller BROPATH=$BROPATH:.. bro %INPUT frameworks/control/controller Control::host=127.0.0.1 Control::host_port=65532/tcp Control::cmd=id_value Control::arg=test_var -# @TEST-EXEC: btest-bg-wait -k 1 +# @TEST-EXEC: btest-bg-wait -k 10 # @TEST-EXEC: btest-diff controller/.stdout redef Communication::nodes = { @@ -20,4 +22,5 @@ redef test_var = "This is the value from the controllee"; event Control::id_value_response(id: string, val: string) { print fmt("Got an id_value_response(%s, %s) event", id, val); + terminate(); } diff --git a/testing/btest/scripts/base/frameworks/control/shutdown.bro b/testing/btest/scripts/base/frameworks/control/shutdown.bro index 73319a7c4a..7b6e5713f8 100644 --- a/testing/btest/scripts/base/frameworks/control/shutdown.bro +++ b/testing/btest/scripts/base/frameworks/control/shutdown.bro @@ -1,6 +1,8 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run controllee BROPATH=$BROPATH:.. bro %INPUT frameworks/control/controllee Communication::listen_port=65530/tcp # @TEST-EXEC: btest-bg-run controller BROPATH=$BROPATH:.. bro %INPUT frameworks/control/controller Control::host=127.0.0.1 Control::host_port=65530/tcp Control::cmd=shutdown -# @TEST-EXEC: btest-bg-wait 1 +# @TEST-EXEC: btest-bg-wait 10 redef Communication::nodes = { # We're waiting for connections from this host for control. diff --git a/testing/btest/scripts/base/frameworks/input/basic.bro b/testing/btest/scripts/base/frameworks/input/basic.bro new file mode 100644 index 0000000000..dfac84d062 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/basic.bro @@ -0,0 +1,64 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve ns +#types bool int enum count port subnet addr double time interval string table table table vector vector string +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY 4242 +@TEST-END-FILE + +@load base/protocols/ssh +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; + e: Log::ID; + c: count; + p: port; + sn: subnet; + a: addr; + d: double; + t: time; + iv: interval; + s: string; + ns: string; + sc: set[count]; + ss: set[string]; + se: set[string]; + vc: vector of int; + ve: vector of int; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + print outfile, to_count(servers[-42]$ns); # try to actually use a string. If null-termination is wrong this will fail. + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/bignumber.bro b/testing/btest/scripts/base/frameworks/input/bignumber.bro new file mode 100644 index 0000000000..5b93472551 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/bignumber.bro @@ -0,0 +1,45 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#fields i c +#types int count +9223372036854775800 18446744073709551612 +-9223372036854775800 18446744073709551612 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + c: count; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/binary.bro b/testing/btest/scripts/base/frameworks/input/binary.bro new file mode 100644 index 0000000000..8d75abc5a9 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/binary.bro @@ -0,0 +1,56 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +redef InputAscii::separator = "|"; +redef InputAscii::set_separator = ","; +redef InputAscii::empty_field = "(empty)"; +redef InputAscii::unset_field = "-"; + +@TEST-START-FILE input.log +#separator | +#set_separator|, +#empty_field|(empty) +#unset_field|- +#path|ssh +#open|2012-07-20-01-49-19 +#fields|data|data2 +#types|string|string +abc\x0a\xffdef|DATA2 +abc\x7c\xffdef|DATA2 +abc\xff\x7cdef|DATA2 +#end|2012-07-20-01-49-19 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; +global try: count; + +type Val: record { + data: string; + data2: string; +}; + +event line(description: Input::EventDescription, tpe: Input::Event, a: string, b: string) + { + print outfile, a; + print outfile, b; + try = try + 1; + if ( try == 3 ) + { + close(outfile); + terminate(); + } + } + +event bro_init() + { + try = 0; + outfile = open("../out"); + Input::add_event([$source="../input.log", $name="input", $fields=Val, $ev=line, $want_record=F]); + Input::remove("input"); + } diff --git a/testing/btest/scripts/base/frameworks/input/empty-values-hashing.bro b/testing/btest/scripts/base/frameworks/input/empty-values-hashing.bro new file mode 100644 index 0000000000..c8760b467e --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/empty-values-hashing.bro @@ -0,0 +1,89 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: cp input1.log input.log +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input2.log input.log +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input1.log +#separator \x09 +#fields i s ss +#types int sting string +1 - TEST +2 - - +@TEST-END-FILE +@TEST-START-FILE input2.log +#separator \x09 +#fields i s ss +#types int sting string +1 TEST - +2 TEST TEST +@TEST-END-FILE + +@load frameworks/communication/listen + + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + s: string; + ss: string; +}; + +global servers: table[int] of Val = table(); + +global outfile: file; + +global try: count; + +event line(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) + { + print outfile, "============EVENT============"; + print outfile, "Description"; + print outfile, description; + print outfile, "Type"; + print outfile, tpe; + print outfile, "Left"; + print outfile, left; + print outfile, "Right"; + print outfile, right; + } + +event bro_init() + { + outfile = open("../out"); + try = 0; + # first read in the old stuff into the table... 
+ Input::add_table([$source="../input.log", $mode=Input::REREAD, $name="ssh", $idx=Idx, $val=Val, $destination=servers, $ev=line, + $pred(typ: Input::Event, left: Idx, right: Val) = { + print outfile, "============PREDICATE============"; + print outfile, typ; + print outfile, left; + print outfile, right; + return T; + } + ]); + } + + +event Input::end_of_data(name: string, source: string) + { + print outfile, "==========SERVERS============"; + print outfile, servers; + + try = try + 1; + if ( try == 2 ) + { + print outfile, "done"; + close(outfile); + Input::remove("input"); + terminate(); + } + } diff --git a/testing/btest/scripts/base/frameworks/input/emptyvals.bro b/testing/btest/scripts/base/frameworks/input/emptyvals.bro new file mode 100644 index 0000000000..94b0f1b620 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/emptyvals.bro @@ -0,0 +1,48 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields b i +##types bool int +T 1 +- 2 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/event.bro b/testing/btest/scripts/base/frameworks/input/event.bro new file mode 100644 index 0000000000..ba47d5e3f2 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/event.bro @@ -0,0 +1,53 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields i b +#types int bool +1 T +2 T +3 F +4 F +5 F +6 F +7 T +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +module A; + +type Val: record { + i: int; + b: bool; +}; + +event line(description: Input::EventDescription, tpe: Input::Event, i: int, b: bool) + { + print outfile, description; + print outfile, tpe; + print outfile, i; + print outfile, b; + } + +event bro_init() + { + outfile = open("../out"); + Input::add_event([$source="../input.log", $name="input", $fields=Val, $ev=line, $want_record=F]); + Input::remove("input"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, "End-of-data"; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/executeraw.bro b/testing/btest/scripts/base/frameworks/input/executeraw.bro new file mode 100644 index 0000000000..626b9cdfd2 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/executeraw.bro @@ -0,0 +1,42 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: cat out.tmp | sed 's/^ *//g' >out +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +q3r3057fdf +sdfs\d + +dfsdf +sdf +3rw43wRRERLlL#RWERERERE. +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +type Val: record { + s: string; +}; + +event line(description: Input::EventDescription, tpe: Input::Event, s: string) + { + print outfile, description; + print outfile, tpe; + print outfile, s; + close(outfile); + terminate(); + } + +event bro_init() + { + outfile = open("../out.tmp"); + Input::add_event([$source="wc -l ../input.log |", $reader=Input::READER_RAW, $name="input", $fields=Val, $ev=line, $want_record=F]); + Input::remove("input"); + } diff --git a/testing/btest/scripts/base/frameworks/input/invalidnumbers.bro b/testing/btest/scripts/base/frameworks/input/invalidnumbers.bro new file mode 100644 index 0000000000..1deec605ae --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/invalidnumbers.bro @@ -0,0 +1,48 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out +# @TEST-EXEC: sed 1d .stderr > .stderrwithoutfirstline +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff .stderrwithoutfirstline + +@TEST-START-FILE input.log +#separator \x09 +#fields i c +#types int count +12129223372036854775800 121218446744073709551612 +9223372036854775801TEXTHERE 1Justtext +Justtext 1 +9223372036854775800 -18446744073709551612 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + c: count; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/missing-file.bro b/testing/btest/scripts/base/frameworks/input/missing-file.bro new file mode 100644 index 0000000000..aa5acf619e --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/missing-file.bro @@ -0,0 +1,30 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff bro/.stderr + +@load frameworks/communication/listen + +global outfile: file; +global try: count; + +module A; + +type Val: record { + i: int; + b: bool; +}; + +event line(description: Input::EventDescription, tpe: Input::Event, i: int, b: bool) + { + } + +event bro_init() + { + try = 0; + outfile = open("../out"); + Input::add_event([$source="does-not-exist.dat", $name="input", $fields=Val, $ev=line, $want_record=F]); + Input::remove("input"); + } diff --git a/testing/btest/scripts/base/frameworks/input/onecolumn-norecord.bro b/testing/btest/scripts/base/frameworks/input/onecolumn-norecord.bro new file mode 100644 index 0000000000..c08b1420fb --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/onecolumn-norecord.bro @@ -0,0 +1,47 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields b i +#types bool int +T -42 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=servers, $want_record=F]); + Input::remove("input"); + } + +event Input::end_of_data(name: string, source: string) + { + print outfile, servers; + close(outfile); + terminate(); + } + diff --git a/testing/btest/scripts/base/frameworks/input/onecolumn-record.bro b/testing/btest/scripts/base/frameworks/input/onecolumn-record.bro new file mode 100644 index 0000000000..9e420e75fe --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/onecolumn-record.bro @@ -0,0 +1,47 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields b i +#types bool int +T -42 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + Input::add_table([$name="input", $source="../input.log", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("input"); + } + +event Input::end_of_data(name: string, source: string) + { + print outfile, servers; + close(outfile); + terminate(); + } + diff --git a/testing/btest/scripts/base/frameworks/input/optional.bro b/testing/btest/scripts/base/frameworks/input/optional.bro new file mode 100644 index 0000000000..2fe0e5c86f --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/optional.bro @@ -0,0 +1,56 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
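+# The value record used below has an &optional field that is not present in the
+# input; the predicate fills it in before each entry is added to the table.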
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields i b +#types int bool +1 T +2 T +3 F +4 F +5 F +6 F +7 T +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; + notb: bool &optional; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=servers, + $pred(typ: Input::Event, left: Idx, right: Val) = { right$notb = !right$b; return T; } + ]); + Input::remove("input"); + } + +event Input::end_of_data(name: string, source: string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/port.bro b/testing/btest/scripts/base/frameworks/input/port.bro new file mode 100644 index 0000000000..081c59559b --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/port.bro @@ -0,0 +1,53 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#fields i p t +1.2.3.4 80 tcp +1.2.3.5 52 udp +1.2.3.6 30 unknown +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: addr; +}; + +type Val: record { + p: port &type_column="t"; +}; + +global servers: table[addr] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=servers]); + if ( 1.2.3.4 in servers ) + print outfile, servers[1.2.3.4]; + if ( 1.2.3.5 in servers ) + print outfile, servers[1.2.3.5]; + if ( 1.2.3.6 in servers ) + print outfile, servers[1.2.3.6]; + Input::remove("input"); + } + +event Input::end_of_data(name: string, source: string) + { + print outfile, servers[1.2.3.4]; + print outfile, servers[1.2.3.5]; + print outfile, servers[1.2.3.6]; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/predicate-stream.bro b/testing/btest/scripts/base/frameworks/input/predicate-stream.bro new file mode 100644 index 0000000000..8cf927e346 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/predicate-stream.bro @@ -0,0 +1,79 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out +# +# only difference from predicate.bro is, that this one uses a stream source. 
+# the reason is, that the code-paths are quite different, because then the +# ascii reader uses the put and not the sendevent interface + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields i b +#types int bool +1 T +2 T +3 F +4 F +5 F +6 F +7 T +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; +}; + +global servers: table[int] of Val = table(); +global ct: int; + +event line(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: bool) + { + ct = ct + 1; + if ( ct < 3 ) + return; + + if ( 1 in servers ) + print outfile, "VALID"; + if ( 2 in servers ) + print outfile, "VALID"; + if ( !(3 in servers) ) + print outfile, "VALID"; + if ( !(4 in servers) ) + print outfile, "VALID"; + if ( !(5 in servers) ) + print outfile, "VALID"; + if ( !(6 in servers) ) + print outfile, "VALID"; + if ( 7 in servers ) + print outfile, "VALID"; + close(outfile); + terminate(); + } + +event bro_init() + { + outfile = open("../out"); + ct = 0; + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $mode=Input::STREAM, $name="input", $idx=Idx, $val=Val, $destination=servers, $want_record=F, $ev=line, + $pred(typ: Input::Event, left: Idx, right: bool) = { return right; } + ]); + Input::remove("input"); + } + diff --git a/testing/btest/scripts/base/frameworks/input/predicate.bro b/testing/btest/scripts/base/frameworks/input/predicate.bro new file mode 100644 index 0000000000..8fb33242e8 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/predicate.bro @@ -0,0 +1,68 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields i b +#types int bool +1 T +2 T +3 F +4 F +5 F +6 F +7 T +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; +}; + +global servers: table[int] of bool = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=servers, $want_record=F, + $pred(typ: Input::Event, left: Idx, right: bool) = { return right; } + ]); + Input::remove("input"); + } + +event Input::end_of_data(name: string, source: string) + { + if ( 1 in servers ) + print outfile, "VALID"; + if ( 2 in servers ) + print outfile, "VALID"; + if ( !(3 in servers) ) + print outfile, "VALID"; + if ( !(4 in servers) ) + print outfile, "VALID"; + if ( !(5 in servers) ) + print outfile, "VALID"; + if ( !(6 in servers) ) + print outfile, "VALID"; + if ( 7 in servers ) + print outfile, "VALID"; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/predicatemodify.bro b/testing/btest/scripts/base/frameworks/input/predicatemodify.bro new file mode 100644 index 0000000000..17467bbc27 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/predicatemodify.bro @@ -0,0 +1,59 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
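+# The predicate below rewrites the value record for one entry and the index
+# record for the other, so the final table dump should show the modified strings.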
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields i b s ss +#types int bool string string +1 T test1 idx1 +2 T test2 idx2 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; + ss: string; +}; + +type Val: record { + b: bool; + s: string; +}; + +global servers: table[int, string] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=servers, + $pred(typ: Input::Event, left: Idx, right: Val) = { + if ( left$i == 1 ) + right$s = "testmodified"; + if ( left$i == 2 ) + left$ss = "idxmodified"; + return T; + } + ]); + Input::remove("input"); + } + +event Input::end_of_data(name: string, source: string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/predicatemodifyandreread.bro b/testing/btest/scripts/base/frameworks/input/predicatemodifyandreread.bro new file mode 100644 index 0000000000..5a9e993651 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/predicatemodifyandreread.bro @@ -0,0 +1,109 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: cp input1.log input.log +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input2.log input.log +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input3.log input.log +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input4.log input.log +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input5.log input.log +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out +# + +@TEST-START-FILE input1.log +#separator \x09 +#path ssh +#fields i b s ss +#types int bool string string +1 T test1 idx1 +2 T test2 idx2 +@TEST-END-FILE + +@TEST-START-FILE input2.log +#separator \x09 +#path ssh +#fields i b s ss +#types int bool string string +1 F test1 idx1 +2 T test2 idx2 +@TEST-END-FILE + +@TEST-START-FILE input3.log +#separator \x09 +#path ssh +#fields i b s ss +#types int bool string string +1 F test1 idx1 +2 F test2 idx2 +@TEST-END-FILE + +@TEST-START-FILE input4.log +#separator \x09 +#path ssh +#fields i b s ss +#types int bool string string +2 F test2 idx2 +@TEST-END-FILE + +@TEST-START-FILE input5.log +#separator \x09 +#path ssh +#fields i b s ss +#types int bool string string +1 T test1 idx1 +@TEST-END-FILE + +@load frameworks/communication/listen + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; + ss: string; +}; + +type Val: record { + b: bool; + s: string; +}; + +global servers: table[int, string] of Val = table(); +global outfile: file; +global try: count; + +event bro_init() + { + try = 0; + outfile = open("../out"); + # first read in the old stuff into the table... 
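+	# REREAD mode: every time input.log is replaced (see the cp commands in the
+	# @TEST-EXEC lines above), the file is read again and the predicate re-applied.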
+ Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=servers, $mode=Input::REREAD, + $pred(typ: Input::Event, left: Idx, right: Val) = { + if ( left$i == 1 ) + right$s = "testmodified"; + if ( left$i == 2 ) + left$ss = "idxmodified"; + return T; + } + ]); + } + +event Input::end_of_data(name: string, source: string) + { + try = try + 1; + print outfile, fmt("Update_finished for %s, try %d", name, try); + print outfile, servers; + + if ( try == 5 ) + { + close(outfile); + Input::remove("input"); + terminate(); + } + } diff --git a/testing/btest/scripts/base/frameworks/input/predicaterefusesecondsamerecord.bro b/testing/btest/scripts/base/frameworks/input/predicaterefusesecondsamerecord.bro new file mode 100644 index 0000000000..ba0b468cdc --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/predicaterefusesecondsamerecord.bro @@ -0,0 +1,56 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +# Ok, this one tests a fun case. +# Input file contains two lines mapping to the same index, but with different values, +# where the predicate accepts the first one and refuses the second one. +# Desired result -> first entry stays. + +@TEST-START-FILE input.log +#fields restriction guid severity confidence detecttime address protocol portlist asn prefix rir cc impact description alternativeid_restriction alternativeid +need-to-know 8c864306-d21a-37b1-8705-746a786719bf medium 65 1342656000 1.0.17.227 - - 2519 VECTANT VECTANT Ltd. 1.0.16.0/23 apnic JP spam infrastructure spamming public http://reputation.alienvault.com/reputation.generic +need-to-know 8c864306-d21a-37b1-8705-746a786719bf medium 95 1342569600 1.228.83.33 6 25 9318 HANARO-AS Hanaro Telecom Inc. 1.224.0.0/13 apnic KR spam infrastructure direct ube sources, spam operations & spam services public http://www.spamhaus.org/query/bl?ip=1.228.83.33 +need-to-know 8c864306-d21a-37b1-8705-746a786719bf medium 65 1342656000 1.228.83.33 - - 9318 HANARO-AS Hanaro Telecom Inc. 1.224.0.0/13 apnic KR spam infrastructure spamming;malware domain public http://reputation.alienvault.com/reputation.generic +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + address: addr; +}; + +type Val: record { + asn: string; + severity: string; + confidence: count; + detecttime: time; +}; + +global servers: table[addr] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=servers, + $pred(typ: Input::Event, left: Idx, right: Val) = { if ( right$confidence > 90 ) { return T; } return F; } + ]); + Input::remove("input"); + } + +event Input::end_of_data(name: string, source: string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/raw.bro b/testing/btest/scripts/base/frameworks/input/raw.bro new file mode 100644 index 0000000000..d15aec22bb --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/raw.bro @@ -0,0 +1,49 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
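+# Feeds the file through the raw reader in STREAM mode; every line of input.log
+# becomes one event, and the test finishes after all 8 lines have been seen.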
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +q3r3057fdf +sdfs\d + +dfsdf +sdf +3rw43wRRERLlL#RWERERERE. +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; +global try: count; + +module A; + +type Val: record { + s: string; +}; + +event line(description: Input::EventDescription, tpe: Input::Event, s: string) + { + print outfile, description; + print outfile, tpe; + print outfile, s; + try = try + 1; + if ( try == 8 ) + { + close(outfile); + terminate(); + } + } + +event bro_init() + { + try = 0; + outfile = open("../out"); + Input::add_event([$source="../input.log", $reader=Input::READER_RAW, $mode=Input::STREAM, $name="input", $fields=Val, $ev=line, $want_record=F]); + Input::remove("input"); + } diff --git a/testing/btest/scripts/base/frameworks/input/repeat.bro b/testing/btest/scripts/base/frameworks/input/repeat.bro new file mode 100644 index 0000000000..a966ac064e --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/repeat.bro @@ -0,0 +1,59 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-sort btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields i b +#types int bool +1 T +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; +global try: count; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; +}; + +global destination: table[int] of Val = table(); + +const one_to_32: vector of count = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32}; + +event bro_init() + { + try = 0; + outfile = open("../out"); + for ( i in one_to_32 ) + { + Input::add_table([$source="../input.log", $name=fmt("input%d", i), $idx=Idx, $val=Val, $destination=destination, $want_record=F]); + Input::remove(fmt("input%d", i)); + } + } + +event Input::end_of_data(name: string, source: string) + { + print outfile, name; + print outfile, source; + print outfile, destination; + try = try + 1; + if ( try == 32 ) + { + close(outfile); + terminate(); + } + } diff --git a/testing/btest/scripts/base/frameworks/input/reread.bro b/testing/btest/scripts/base/frameworks/input/reread.bro new file mode 100644 index 0000000000..11aa873f9d --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/reread.bro @@ -0,0 +1,139 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
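+# REREAD test covering all supported field types: input.log is replaced four
+# times while Bro is running, and the table contents are dumped after each of
+# the five passes.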
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: cp input1.log input.log +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input2.log input.log +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input3.log input.log +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input4.log input.log +# @TEST-EXEC: sleep 2 +# @TEST-EXEC: cp input5.log input.log +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input1.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE +@TEST-START-FILE input2.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +T -43 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE +@TEST-START-FILE input3.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +F -43 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE +@TEST-START-FILE input4.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +F -43 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +F -44 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +F -45 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +F -46 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +F -47 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 
100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +F -48 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE +@TEST-START-FILE input5.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +F -48 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE + +@load base/protocols/ssh +@load frameworks/communication/listen + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; + e: Log::ID; + c: count; + p: port; + sn: subnet; + a: addr; + d: double; + t: time; + iv: interval; + s: string; + sc: set[count]; + ss: set[string]; + se: set[string]; + vc: vector of int; + ve: vector of int; +}; + +global servers: table[int] of Val = table(); + +global outfile: file; + +global try: count; + +event line(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) + { + print outfile, "============EVENT============"; + print outfile, "Description"; + print outfile, description; + print outfile, "Type"; + print outfile, tpe; + print outfile, "Left"; + print outfile, left; + print outfile, "Right"; + print outfile, right; + } + +event bro_init() + { + outfile = open("../out"); + try = 0; + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $mode=Input::REREAD, $name="ssh", $idx=Idx, $val=Val, $destination=servers, $ev=line, + $pred(typ: Input::Event, left: Idx, right: Val) = { + print outfile, "============PREDICATE============"; + print outfile, typ; + print outfile, left; + print outfile, right; + return T; + } + ]); + } + + +event Input::end_of_data(name: string, source: string) + { + print outfile, "==========SERVERS============"; + print outfile, servers; + + try = try + 1; + if ( try == 5 ) + { + print outfile, "done"; + close(outfile); + Input::remove("input"); + terminate(); + } + } diff --git a/testing/btest/scripts/base/frameworks/input/rereadraw.bro b/testing/btest/scripts/base/frameworks/input/rereadraw.bro new file mode 100644 index 0000000000..2fdcdc8f9e --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/rereadraw.bro @@ -0,0 +1,50 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +q3r3057fdf +sdfs\d + +dfsdf +sdf +3rw43wRRERLlL#RWERERERE. 
+@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; +global try: count; + +module A; + +type Val: record { + s: string; +}; + +event line(description: Input::EventDescription, tpe: Input::Event, s: string) + { + print outfile, description; + print outfile, tpe; + print outfile, s; + try = try + 1; + if ( try == 16 ) + { + close(outfile); + terminate(); + } + } + +event bro_init() + { + try = 0; + outfile = open("../out"); + Input::add_event([$source="../input.log", $reader=Input::READER_RAW, $mode=Input::REREAD, $name="input", $fields=Val, $ev=line, $want_record=F]); + Input::force_update("input"); + Input::remove("input"); + } diff --git a/testing/btest/scripts/base/frameworks/input/set.bro b/testing/btest/scripts/base/frameworks/input/set.bro new file mode 100644 index 0000000000..b2b5cea323 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/set.bro @@ -0,0 +1,46 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-sort btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#fields ip +#types addr +192.168.17.1 +192.168.17.2 +192.168.17.7 +192.168.17.14 +192.168.17.42 +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + ip: addr; +}; + +global servers: set[addr] = set(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/setseparator.bro b/testing/btest/scripts/base/frameworks/input/setseparator.bro new file mode 100644 index 0000000000..b7148d80bd --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/setseparator.bro @@ -0,0 +1,46 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-sort btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#fields i s ss +1 a|b|c|d|e|f 1|2|3|4|5|6 +@TEST-END-FILE + +redef InputAscii::set_separator = "|"; + +@load frameworks/communication/listen + +global outfile: file; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + s: set[string]; + ss:vector of count; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/setspecialcases.bro b/testing/btest/scripts/base/frameworks/input/setspecialcases.bro new file mode 100644 index 0000000000..022eac9731 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/setspecialcases.bro @@ -0,0 +1,50 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
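+# Corner cases for parsing sets and vectors: escaped commas (\x2c), empty
+# elements, and a row with no value at all.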
+# @TEST-SERIALIZE: comm
+#
+# @TEST-EXEC: btest-bg-run bro bro -b %INPUT
+# @TEST-EXEC: btest-bg-wait -k 5
+# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-sort btest-diff out
+
+@TEST-START-FILE input.log
+#separator \x09
+#fields i s ss
+1 testing\x2ctesting\x2ctesting\x2c testing\x2ctesting\x2ctesting\x2c
+2 testing,,testing testing,,testing
+3 ,testing ,testing
+4 testing, testing,
+5 ,,, ,,,
+6
+@TEST-END-FILE
+
+
+@load frameworks/communication/listen
+
+global outfile: file;
+
+module A;
+
+type Idx: record {
+	i: int;
+};
+
+type Val: record {
+	s: set[string];
+	ss: vector of string;
+};
+
+global servers: table[int] of Val = table();
+
+event bro_init()
+	{
+	outfile = open("../out");
+	# first read in the old stuff into the table...
+	Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]);
+	Input::remove("ssh");
+	}
+
+event Input::end_of_data(name: string, source:string)
+	{
+	print outfile, servers;
+	close(outfile);
+	terminate();
+	}
diff --git a/testing/btest/scripts/base/frameworks/input/stream.bro b/testing/btest/scripts/base/frameworks/input/stream.bro new file mode 100644 index 0000000000..1ecd8a2eb0 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/stream.bro @@ -0,0 +1,88 @@
+# (uses listen.bro just to ensure input sources are more reliably fully-read).
+# @TEST-SERIALIZE: comm
+#
+# @TEST-EXEC: cp input1.log input.log
+# @TEST-EXEC: btest-bg-run bro bro -b %INPUT
+# @TEST-EXEC: sleep 3
+# @TEST-EXEC: cat input2.log >> input.log
+# @TEST-EXEC: sleep 3
+# @TEST-EXEC: cat input3.log >> input.log
+# @TEST-EXEC: btest-bg-wait -k 5
+# @TEST-EXEC: btest-diff out
+
+@TEST-START-FILE input1.log
+#separator \x09
+#path ssh
+#fields b i e c p sn a d t iv s sc ss se vc ve f
+#types bool int enum count port subnet addr double time interval string table table table vector vector func
+T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
+@TEST-END-FILE
+@TEST-START-FILE input2.log
+T -43 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
+@TEST-END-FILE
+@TEST-START-FILE input3.log
+F -43 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a}
+@TEST-END-FILE
+
+@load base/protocols/ssh
+@load frameworks/communication/listen
+
+redef InputAscii::empty_field = "EMPTY";
+
+module A;
+
+type Idx: record {
+	i: int;
+};
+
+type Val: record {
+	b: bool;
+	e: Log::ID;
+	c: count;
+	p: port;
+	sn: subnet;
+	a: addr;
+	d: double;
+	t: time;
+	iv: interval;
+	s: string;
+	sc: set[count];
+	ss: set[string];
+	se: set[string];
+	vc: vector of int;
+	ve: vector of int;
+};
+
+global servers: table[int] of Val = table();
+
+global outfile: file;
+
+global try: count;
+
+event line(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val)
+	{
+	print outfile, "============EVENT============";
+	print outfile, tpe;
+	print outfile, left;
+	print outfile, right;
+	print outfile, "============SERVERS============";
+	print outfile, servers;
+
+	try = try + 1;
+
+	if ( try == 3 )
+		{
+		print outfile, "done";
+		close(outfile);
+		Input::remove("input");
+
terminate(); + } + } + +event bro_init() + { + outfile = open("../out"); + try = 0; + # first read in the old stuff into the table... + Input::add_table([$source="../input.log", $mode=Input::STREAM, $name="ssh", $idx=Idx, $val=Val, $destination=servers, $ev=line]); + } diff --git a/testing/btest/scripts/base/frameworks/input/streamraw.bro b/testing/btest/scripts/base/frameworks/input/streamraw.bro new file mode 100644 index 0000000000..3bc06f7dea --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/streamraw.bro @@ -0,0 +1,62 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: cp input1.log input.log +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: sleep 3 +# @TEST-EXEC: cat input2.log >> input.log +# @TEST-EXEC: sleep 3 +# @TEST-EXEC: cat input3.log >> input.log +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input1.log +sdfkh:KH;fdkncv;ISEUp34:Fkdj;YVpIODhfDF +@TEST-END-FILE + +@TEST-START-FILE input2.log +DSF"DFKJ"SDFKLh304yrsdkfj@#(*U$34jfDJup3UF +q3r3057fdf +@TEST-END-FILE + +@TEST-START-FILE input3.log +sdfs\d + +dfsdf +sdf +3rw43wRRERLlL#RWERERERE. +@TEST-END-FILE + +@load frameworks/communication/listen + +module A; + +type Val: record { + s: string; +}; + +global try: count; +global outfile: file; + +event line(description: Input::EventDescription, tpe: Input::Event, s: string) + { + print outfile, description; + print outfile, tpe; + print outfile, s; + + try = try + 1; + if ( try == 8 ) + { + print outfile, "done"; + close(outfile); + Input::remove("input"); + terminate(); + } + } + +event bro_init() + { + outfile = open("../out"); + try = 0; + Input::add_event([$source="../input.log", $reader=Input::READER_RAW, $mode=Input::STREAM, $name="input", $fields=Val, $ev=line, $want_record=F]); + } diff --git a/testing/btest/scripts/base/frameworks/input/subrecord-event.bro b/testing/btest/scripts/base/frameworks/input/subrecord-event.bro new file mode 100644 index 0000000000..4e7dc1690a --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/subrecord-event.bro @@ -0,0 +1,75 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields sub.b i sub.e sub.c sub.p sub.sn sub.two.a sub.two.d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE + +@load base/protocols/ssh +@load frameworks/communication/listen + +global outfile: file; +global try: count; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type SubVal2: record { + a: addr; + d: double; +}; + +type SubVal: record { + b: bool; + e: Log::ID; + c: count; + p: port; + sn: subnet; + two: SubVal2; +}; + +type Val: record { + sub: SubVal; + t: time; + iv: interval; + s: string; + sc: set[count]; + ss: set[string]; + se: set[string]; + vc: vector of int; + ve: vector of int; +}; + + + +event line(description: Input::EventDescription, tpe: Input::Event, value: Val) + { + print outfile, value; + try = try + 1; + if ( try == 1 ) + { + close(outfile); + terminate(); + } + } + +event bro_init() + { + try = 0; + outfile = open("../out"); + Input::add_event([$source="../input.log", $name="ssh", $fields=Val, $ev=line, $want_record=T]); + Input::remove("ssh"); + } diff --git a/testing/btest/scripts/base/frameworks/input/subrecord.bro b/testing/btest/scripts/base/frameworks/input/subrecord.bro new file mode 100644 index 0000000000..512b8ec58f --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/subrecord.bro @@ -0,0 +1,70 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields sub.b i sub.e sub.c sub.p sub.sn sub.two.a sub.two.d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE + +@load base/protocols/ssh +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type SubVal2: record { + a: addr; + d: double; +}; + +type SubVal: record { + b: bool; + e: Log::ID; + c: count; + p: port; + sn: subnet; + two: SubVal2; +}; + +type Val: record { + sub: SubVal; + t: time; + iv: interval; + s: string; + sc: set[count]; + ss: set[string]; + se: set[string]; + vc: vector of int; + ve: vector of int; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... 
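+# Same nested-record mapping as subrecord-event.bro, but reading into a table
+# instead of raising events.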
+ Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/input/tableevent.bro b/testing/btest/scripts/base/frameworks/input/tableevent.bro new file mode 100644 index 0000000000..723e519237 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/tableevent.bro @@ -0,0 +1,59 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields i b +#types int bool +1 T +2 T +3 F +4 F +5 F +6 F +7 T +@TEST-END-FILE + +@load frameworks/communication/listen + +global outfile: file; +global try: count; + +redef InputAscii::empty_field = "EMPTY"; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; +}; + +global destination: table[int] of Val = table(); + +event line(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: bool) + { + print outfile, description; + print outfile, tpe; + print outfile, left; + print outfile, right; + try = try + 1; + if ( try == 7 ) + { + close(outfile); + terminate(); + } + } + +event bro_init() + { + try = 0; + outfile = open("../out"); + Input::add_table([$source="../input.log", $name="input", $idx=Idx, $val=Val, $destination=destination, $want_record=F,$ev=line]); + Input::remove("input"); + } diff --git a/testing/btest/scripts/base/frameworks/input/twotables.bro b/testing/btest/scripts/base/frameworks/input/twotables.bro new file mode 100644 index 0000000000..83ae86cd46 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/twotables.bro @@ -0,0 +1,134 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). 
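+# Two REREAD streams feed the same destination table, each with its own
+# predicate; events, the two predicates, and the final table state are written
+# to separate output files.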
+# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: cp input1.log input.log +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: sleep 5 +# @TEST-EXEC: cp input3.log input.log +# @TEST-EXEC: btest-bg-wait -k 10 +# @TEST-EXEC: btest-diff event.out +# @TEST-EXEC: btest-diff pred1.out +# @TEST-EXEC: btest-diff pred2.out +# @TEST-EXEC: btest-diff fin.out + +@TEST-START-FILE input1.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE +@TEST-START-FILE input2.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +T -43 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE +@TEST-START-FILE input3.log +#separator \x09 +#path ssh +#fields b i e c p sn a d t iv s sc ss se vc ve f +#types bool int enum count port subnet addr double time interval string table table table vector vector func +F -44 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE + +@load base/protocols/ssh +@load frameworks/communication/listen + +redef InputAscii::empty_field = "EMPTY"; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + b: bool; + e: Log::ID; + c: count; + p: port; + sn: subnet; + a: addr; + d: double; + t: time; + iv: interval; + s: string; + sc: set[count]; + ss: set[string]; + se: set[string]; + vc: vector of int; + ve: vector of int; +}; + +global servers: table[int] of Val = table(); + +global event_out: file; +global pred1_out: file; +global pred2_out: file; +global fin_out: file; + +global try: count; + +event line(description: Input::TableDescription, tpe: Input::Event, left: Idx, right: Val) + { + print event_out, "============EVENT============"; +# print event_out, "Description"; +# print event_out, description; +# print event_out, "Type"; +# print event_out, tpe; +# print event_out, "Left"; +# print event_out, left; +# print event_out, "Right"; +# print event_out, right; + } + +event bro_init() + { + event_out = open ("../event.out"); + pred1_out = open ("../pred1.out"); + pred2_out = open ("../pred2.out"); + fin_out = open ("../fin.out"); + try = 0; + # first read in the old stuff into the table... 
+ Input::add_table([$source="../input.log", $mode=Input::REREAD, $name="ssh", $idx=Idx, $val=Val, $destination=servers, $ev=line, + $pred(typ: Input::Event, left: Idx, right: Val) = { + print pred1_out, "============PREDICATE============"; + print pred1_out, typ; + print pred1_out, left; + print pred1_out, right; + return T; + } + ]); + Input::add_table([$source="../input2.log", $mode=Input::REREAD, $name="ssh2", $idx=Idx, $val=Val, $destination=servers, $ev=line, + $pred(typ: Input::Event, left: Idx, right: Val) = { + print pred2_out, "============PREDICATE 2============"; + print pred2_out, typ; + print pred2_out, left; + print pred2_out, right; + return T; + } + ]); + } + + +event Input::end_of_data(name: string, source: string) + { + print fin_out, "==========SERVERS============"; + #print fin_out, servers; + + try = try + 1; + if ( try == 3 ) + { + print fin_out, "done"; + print fin_out, servers; + close(event_out); + close(pred1_out); + close(pred2_out); + close(fin_out); + Input::remove("input"); + Input::remove("input2"); + terminate(); + } + } diff --git a/testing/btest/scripts/base/frameworks/input/unsupported_types.bro b/testing/btest/scripts/base/frameworks/input/unsupported_types.bro new file mode 100644 index 0000000000..e1350f61a9 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/input/unsupported_types.bro @@ -0,0 +1,64 @@ +# (uses listen.bro just to ensure input sources are more reliably fully-read). +# @TEST-SERIALIZE: comm +# +# @TEST-EXEC: btest-bg-run bro bro -b %INPUT +# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-diff out + +@TEST-START-FILE input.log +#separator \x09 +#path ssh +#fields fi b i e c p sn a d t iv s sc ss se vc ve f +#types file bool int enum count port subnet addr double time interval string table table table vector vector func +whatever T -42 SSH::LOG 21 123 10.0.0.0/24 1.2.3.4 3.14 1315801931.273616 100.000000 hurz 2,4,1,3 CC,AA,BB EMPTY 10,20,30 EMPTY SSH::foo\x0a{ \x0aif (0 < SSH::i) \x0a\x09return (Foo);\x0aelse\x0a\x09return (Bar);\x0a\x0a} +@TEST-END-FILE + +@load base/protocols/ssh +@load frameworks/communication/listen + +global outfile: file; + +redef InputAscii::empty_field = "EMPTY"; +redef Input::accept_unsupported_types = T; + +module A; + +type Idx: record { + i: int; +}; + +type Val: record { + fi: file &optional; + b: bool; + e: Log::ID; + c: count; + p: port; + sn: subnet; + a: addr; + d: double; + t: time; + iv: interval; + s: string; + sc: set[count]; + ss: set[string]; + se: set[string]; + vc: vector of int; + ve: vector of int; +}; + +global servers: table[int] of Val = table(); + +event bro_init() + { + outfile = open("../out"); + # first read in the old stuff into the table... 
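+	# The "fi" column has type file, which the ASCII reader cannot provide; with
+	# Input::accept_unsupported_types (redef'd above) the read is still expected
+	# to go through, leaving the &optional field unset.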
+ Input::add_table([$source="../input.log", $name="ssh", $idx=Idx, $val=Val, $destination=servers]); + Input::remove("ssh"); + } + +event Input::end_of_data(name: string, source:string) + { + print outfile, servers; + close(outfile); + terminate(); + } diff --git a/testing/btest/scripts/base/frameworks/logging/ascii-empty.bro b/testing/btest/scripts/base/frameworks/logging/ascii-empty.bro index 9dace5d52a..0bb5900e30 100644 --- a/testing/btest/scripts/base/frameworks/logging/ascii-empty.bro +++ b/testing/btest/scripts/base/frameworks/logging/ascii-empty.bro @@ -1,12 +1,13 @@ # # @TEST-EXEC: bro -b %INPUT -# @TEST-EXEC: btest-diff ssh.log +# @TEST-EXEC: cat ssh.log | grep -v PREFIX.*20..- >ssh-filtered.log +# @TEST-EXEC: btest-diff ssh-filtered.log redef LogAscii::output_to_stdout = F; redef LogAscii::separator = "|"; redef LogAscii::empty_field = "EMPTY"; redef LogAscii::unset_field = "NOT-SET"; -redef LogAscii::header_prefix = "PREFIX<>"; +redef LogAscii::meta_prefix = "PREFIX<>"; module SSH; diff --git a/testing/btest/scripts/base/frameworks/logging/ascii-escape-notset-str.bro b/testing/btest/scripts/base/frameworks/logging/ascii-escape-notset-str.bro new file mode 100644 index 0000000000..8c1401b179 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/ascii-escape-notset-str.bro @@ -0,0 +1,23 @@ +# +# @TEST-EXEC: bro -b %INPUT +# @TEST-EXEC: btest-diff test.log + +module Test; + +export { + redef enum Log::ID += { LOG }; + + type Log: record { + x: string &optional; + y: string &optional; + z: string &optional; + } &log; +} + +event bro_init() +{ + Log::create_stream(Test::LOG, [$columns=Log]); + Log::write(Test::LOG, [$x=LogAscii::unset_field, $z=""]); +} + + diff --git a/testing/btest/scripts/base/frameworks/logging/ascii-escape-set-separator.bro b/testing/btest/scripts/base/frameworks/logging/ascii-escape-set-separator.bro new file mode 100644 index 0000000000..f5fb7a6259 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/ascii-escape-set-separator.bro @@ -0,0 +1,21 @@ +# @TEST-EXEC: bro -b %INPUT +# @TEST-EXEC: btest-diff test.log + +module Test; + +export { + redef enum Log::ID += { LOG }; + + type Log: record { + ss: set[string]; + } &log; +} + +event bro_init() +{ + Log::create_stream(Test::LOG, [$columns=Log]); + + + Log::write(Test::LOG, [$ss=set("AA", ",", ",,", "CC")]); +} + diff --git a/testing/btest/scripts/base/frameworks/logging/ascii-escape.bro b/testing/btest/scripts/base/frameworks/logging/ascii-escape.bro index f2c370a27a..d73464777a 100644 --- a/testing/btest/scripts/base/frameworks/logging/ascii-escape.bro +++ b/testing/btest/scripts/base/frameworks/logging/ascii-escape.bro @@ -1,5 +1,6 @@ # # @TEST-EXEC: bro -b %INPUT +# @TEST-EXEC: cat ssh.log | egrep -v '#open|#close' >ssh.log.tmp && mv ssh.log.tmp ssh.log # @TEST-EXEC: btest-diff ssh.log redef LogAscii::separator = "||"; diff --git a/testing/btest/scripts/base/frameworks/logging/ascii-line-like-comment.bro b/testing/btest/scripts/base/frameworks/logging/ascii-line-like-comment.bro new file mode 100644 index 0000000000..4670811b2a --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/ascii-line-like-comment.bro @@ -0,0 +1,23 @@ +# +# @TEST-EXEC: bro -b %INPUT +# @TEST-EXEC: btest-diff test.log + +module Test; + +export { + redef enum Log::ID += { LOG }; + + type Info: record { + data: string &log; + c: count &log &default=42; + }; +} + +event bro_init() +{ + Log::create_stream(Test::LOG, [$columns=Info]); + Log::write(Test::LOG, [$data="Test1"]); + Log::write(Test::LOG, 
[$data="#Kaputt"]); + Log::write(Test::LOG, [$data="Test2"]); +} + diff --git a/testing/btest/scripts/base/frameworks/logging/ascii-options.bro b/testing/btest/scripts/base/frameworks/logging/ascii-options.bro index 8c228c1384..474b179536 100644 --- a/testing/btest/scripts/base/frameworks/logging/ascii-options.bro +++ b/testing/btest/scripts/base/frameworks/logging/ascii-options.bro @@ -4,7 +4,7 @@ redef LogAscii::output_to_stdout = F; redef LogAscii::separator = "|"; -redef LogAscii::include_header = F; +redef LogAscii::include_meta = F; module SSH; diff --git a/testing/btest/scripts/base/frameworks/logging/dataseries/options.bro b/testing/btest/scripts/base/frameworks/logging/dataseries/options.bro new file mode 100644 index 0000000000..fc3752a168 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/dataseries/options.bro @@ -0,0 +1,44 @@ +# +# @TEST-REQUIRES: has-writer DataSeries && which ds2txt +# @TEST-GROUP: dataseries +# +# @TEST-EXEC: bro -b %INPUT Log::default_writer=Log::WRITER_DATASERIES +# @TEST-EXEC: test -e ssh.ds.xml +# @TEST-EXEC: btest-diff ssh.ds.xml + +module SSH; + +redef LogDataSeries::dump_schema = T; + +# Haven't yet found a way to check for the effect of these. +redef LogDataSeries::compression = "bz2"; +redef LogDataSeries::extent_size = 1000; +redef LogDataSeries::num_threads = 5; + +# LogDataSeries::use_integer_for_time is tested separately. + +export { + redef enum Log::ID += { LOG }; + + type Log: record { + t: time; + id: conn_id; # Will be rolled out into individual columns. + status: string &optional; + country: string &default="unknown"; + } &log; +} + +event bro_init() +{ + Log::create_stream(SSH::LOG, [$columns=Log]); + + local cid = [$orig_h=1.2.3.4, $orig_p=1234/tcp, $resp_h=2.3.4.5, $resp_p=80/tcp]; + + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="success"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="failure", $country="US"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="failure", $country="UK"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="success", $country="BR"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="failure", $country="MX"]); + +} + diff --git a/testing/btest/scripts/base/frameworks/logging/dataseries/rotate.bro b/testing/btest/scripts/base/frameworks/logging/dataseries/rotate.bro new file mode 100644 index 0000000000..7b708473e3 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/dataseries/rotate.bro @@ -0,0 +1,34 @@ +# +# @TEST-REQUIRES: has-writer DataSeries && which ds2txt +# @TEST-GROUP: dataseries +# +# @TEST-EXEC: bro -b -r ${TRACES}/rotation.trace %INPUT 2>&1 Log::default_writer=Log::WRITER_DATASERIES | grep "test" >out +# @TEST-EXEC: for i in test.*.ds; do printf '> %s\n' $i; ds2txt --skip-index $i; done >>out +# @TEST-EXEC: btest-diff out + +module Test; + +export { + # Create a new ID for our log stream + redef enum Log::ID += { LOG }; + + # Define a record with all the columns the log file can have. + # (I'm using a subset of fields from ssh-ext for demonstration.) + type Log: record { + t: time; + id: conn_id; # Will be rolled out into individual columns. 
+ } &log; +} + +redef Log::default_rotation_interval = 1hr; +redef Log::default_rotation_postprocessor_cmd = "echo"; + +event bro_init() +{ + Log::create_stream(Test::LOG, [$columns=Log]); +} + +event new_connection(c: connection) + { + Log::write(Test::LOG, [$t=network_time(), $id=c$id]); + } diff --git a/testing/btest/scripts/base/frameworks/logging/dataseries/test-logging.bro b/testing/btest/scripts/base/frameworks/logging/dataseries/test-logging.bro new file mode 100644 index 0000000000..ee0426ae55 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/dataseries/test-logging.bro @@ -0,0 +1,35 @@ +# +# @TEST-REQUIRES: has-writer DataSeries && which ds2txt +# @TEST-GROUP: dataseries +# +# @TEST-EXEC: bro -b %INPUT Log::default_writer=Log::WRITER_DATASERIES +# @TEST-EXEC: ds2txt --skip-index ssh.ds >ssh.ds.txt +# @TEST-EXEC: btest-diff ssh.ds.txt + +module SSH; + +export { + redef enum Log::ID += { LOG }; + + type Log: record { + t: time; + id: conn_id; # Will be rolled out into individual columns. + status: string &optional; + country: string &default="unknown"; + } &log; +} + +event bro_init() +{ + Log::create_stream(SSH::LOG, [$columns=Log]); + + local cid = [$orig_h=1.2.3.4, $orig_p=1234/tcp, $resp_h=2.3.4.5, $resp_p=80/tcp]; + + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="success"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="failure", $country="US"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="failure", $country="UK"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="success", $country="BR"]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="failure", $country="MX"]); + +} + diff --git a/testing/btest/scripts/base/frameworks/logging/dataseries/time-as-int.bro b/testing/btest/scripts/base/frameworks/logging/dataseries/time-as-int.bro new file mode 100644 index 0000000000..5e3f864b33 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/dataseries/time-as-int.bro @@ -0,0 +1,9 @@ +# +# @TEST-REQUIRES: has-writer DataSeries && which ds2txt +# @TEST-GROUP: dataseries +# +# @TEST-EXEC: bro -r $TRACES/wikipedia.trace %INPUT Log::default_writer=Log::WRITER_DATASERIES +# @TEST-EXEC: ds2txt --skip-index conn.ds >conn.ds.txt +# @TEST-EXEC: btest-diff conn.ds.txt + +redef LogDataSeries::use_integer_for_time = T; diff --git a/testing/btest/scripts/base/frameworks/logging/dataseries/wikipedia.bro b/testing/btest/scripts/base/frameworks/logging/dataseries/wikipedia.bro new file mode 100644 index 0000000000..ee1342c470 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/dataseries/wikipedia.bro @@ -0,0 +1,9 @@ +# +# @TEST-REQUIRES: has-writer DataSeries && which ds2txt +# @TEST-GROUP: dataseries +# +# @TEST-EXEC: bro -r $TRACES/wikipedia.trace Log::default_writer=Log::WRITER_DATASERIES +# @TEST-EXEC: ds2txt --skip-index conn.ds >conn.ds.txt +# @TEST-EXEC: ds2txt --skip-index http.ds >http.ds.txt +# @TEST-EXEC: btest-diff conn.ds.txt +# @TEST-EXEC: btest-diff http.ds.txt diff --git a/testing/btest/scripts/base/frameworks/logging/env-ext.test b/testing/btest/scripts/base/frameworks/logging/env-ext.test new file mode 100644 index 0000000000..e9f690caa4 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/env-ext.test @@ -0,0 +1,2 @@ +# @TEST-EXEC: BRO_LOG_SUFFIX=txt bro -r $TRACES/wikipedia.trace +# @TEST-EXEC: test -f conn.txt diff --git a/testing/btest/scripts/base/frameworks/logging/none-debug.bro b/testing/btest/scripts/base/frameworks/logging/none-debug.bro new file mode 
100644 index 0000000000..5d2e98323a --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/none-debug.bro @@ -0,0 +1,37 @@ +# +# @TEST-EXEC: bro -b %INPUT >output +# @TEST-EXEC: btest-diff output + +redef Log::default_writer = Log::WRITER_NONE; +redef LogNone::debug = T; +redef Log::default_rotation_interval= 1hr; +redef log_rotate_base_time = "00:05"; + +module SSH; + +export { + redef enum Log::ID += { LOG }; + + type Log: record { + t: time; + id: conn_id; # Will be rolled out into individual columns. + status: string &optional; + country: string &default="unknown"; + } &log; +} + +event bro_init() +{ + local config: table[string] of string; + config["foo"]="bar"; + config["foo2"]="bar2"; + + local cid = [$orig_h=1.2.3.4, $orig_p=1234/tcp, $resp_h=2.3.4.5, $resp_p=80/tcp]; + + Log::create_stream(SSH::LOG, [$columns=Log]); + + Log::remove_default_filter(SSH::LOG); + Log::add_filter(SSH::LOG, [$name="f1", $exclude=set("t", "id.orig_h"), $config=config]); + Log::write(SSH::LOG, [$t=network_time(), $id=cid, $status="success"]); +} + diff --git a/testing/btest/scripts/base/frameworks/logging/remote-types.bro b/testing/btest/scripts/base/frameworks/logging/remote-types.bro index 9af45cf991..b8425428d3 100644 --- a/testing/btest/scripts/base/frameworks/logging/remote-types.bro +++ b/testing/btest/scripts/base/frameworks/logging/remote-types.bro @@ -1,9 +1,12 @@ +# @TEST-SERIALIZE: comm # -# @TEST-EXEC: btest-bg-run sender bro --pseudo-realtime %INPUT ../sender.bro -# @TEST-EXEC: btest-bg-run receiver bro --pseudo-realtime %INPUT ../receiver.bro -# @TEST-EXEC: btest-bg-wait -k 1 +# @TEST-EXEC: btest-bg-run sender bro -B threading,logging --pseudo-realtime %INPUT ../sender.bro +# @TEST-EXEC: btest-bg-run receiver bro -B threading,logging --pseudo-realtime %INPUT ../receiver.bro +# @TEST-EXEC: btest-bg-wait -k 10 # @TEST-EXEC: btest-diff receiver/test.log -# @TEST-EXEC: cmp receiver/test.log sender/test.log +# @TEST-EXEC: cat receiver/test.log | egrep -v '#open|#close' >r.log +# @TEST-EXEC: cat sender/test.log | egrep -v '#open|#close' >s.log +# @TEST-EXEC: cmp r.log s.log # Remote version testing all types. 
diff --git a/testing/btest/scripts/base/frameworks/logging/remote.bro b/testing/btest/scripts/base/frameworks/logging/remote.bro index b244c72cdf..ba577cc92b 100644 --- a/testing/btest/scripts/base/frameworks/logging/remote.bro +++ b/testing/btest/scripts/base/frameworks/logging/remote.bro @@ -1,15 +1,18 @@ +# @TEST-SERIALIZE: comm # -# @TEST-EXEC: btest-bg-run sender bro --pseudo-realtime %INPUT ../sender.bro +# @TEST-EXEC: btest-bg-run sender bro -b --pseudo-realtime %INPUT ../sender.bro # @TEST-EXEC: sleep 1 -# @TEST-EXEC: btest-bg-run receiver bro --pseudo-realtime %INPUT ../receiver.bro +# @TEST-EXEC: btest-bg-run receiver bro -b --pseudo-realtime %INPUT ../receiver.bro # @TEST-EXEC: sleep 1 -# @TEST-EXEC: btest-bg-wait -k 1 +# @TEST-EXEC: btest-bg-wait 15 # @TEST-EXEC: btest-diff sender/test.log # @TEST-EXEC: btest-diff sender/test.failure.log # @TEST-EXEC: btest-diff sender/test.success.log -# @TEST-EXEC: cmp receiver/test.log sender/test.log -# @TEST-EXEC: cmp receiver/test.failure.log sender/test.failure.log -# @TEST-EXEC: cmp receiver/test.success.log sender/test.success.log +# @TEST-EXEC: ( cd sender && for i in *.log; do cat $i | $SCRIPTS/diff-remove-timestamps >c.$i; done ) +# @TEST-EXEC: ( cd receiver && for i in *.log; do cat $i | $SCRIPTS/diff-remove-timestamps >c.$i; done ) +# @TEST-EXEC: cmp receiver/c.test.log sender/c.test.log +# @TEST-EXEC: cmp receiver/c.test.failure.log sender/c.test.failure.log +# @TEST-EXEC: cmp receiver/c.test.success.log sender/c.test.success.log # This is the common part loaded by both sender and receiver. module Test; @@ -38,10 +41,10 @@ event bro_init() @TEST-START-FILE sender.bro -module Test; - @load frameworks/communication/listen +module Test; + function fail(rec: Log): bool { return rec$status != "success"; @@ -63,14 +66,27 @@ event remote_connection_handshake_done(p: event_peer) Log::write(Test::LOG, [$t=network_time(), $id=cid, $status="failure", $country="MX"]); disconnect(p); } + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + @TEST-END-FILE @TEST-START-FILE receiver.bro ##### +@load base/frameworks/communication + redef Communication::nodes += { ["foo"] = [$host = 127.0.0.1, $connect=T, $request_logs=T] }; +event remote_connection_closed(p: event_peer) + { + terminate(); + } + @TEST-END-FILE diff --git a/testing/btest/scripts/base/frameworks/logging/rotate-custom.bro b/testing/btest/scripts/base/frameworks/logging/rotate-custom.bro index 7c06ff9248..c0f0ef8643 100644 --- a/testing/btest/scripts/base/frameworks/logging/rotate-custom.bro +++ b/testing/btest/scripts/base/frameworks/logging/rotate-custom.bro @@ -1,8 +1,9 @@ # -# @TEST-EXEC: bro -b -r %DIR/rotation.trace %INPUT | egrep "test|test2" | sort >out -# @TEST-EXEC: for i in `ls test*.log | sort`; do printf '> %s\n' $i; cat $i; done | sort | uniq >>out +# @TEST-EXEC: bro -b -r ${TRACES}/rotation.trace %INPUT | egrep "test|test2" | sort >out.tmp +# @TEST-EXEC: cat out.tmp pp.log | sort >out +# @TEST-EXEC: for i in `ls test*.log | sort`; do printf '> %s\n' $i; cat $i; done | sort | $SCRIPTS/diff-remove-timestamps | uniq >>out # @TEST-EXEC: btest-diff out -# @TEST-EXEC: btest-diff .stderr +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-sort btest-diff .stderr module Test; @@ -19,7 +20,7 @@ export { } redef Log::default_rotation_interval = 1hr; -redef Log::default_rotation_postprocessor_cmd = "echo 1st"; +redef Log::default_rotation_postprocessor_cmd = "echo 1st >>pp.log"; function custom_rotate(info: Log::RotationInfo) : bool { diff --git 
a/testing/btest/scripts/base/frameworks/logging/rotate.bro b/testing/btest/scripts/base/frameworks/logging/rotate.bro index 14123c56c6..86f659c193 100644 --- a/testing/btest/scripts/base/frameworks/logging/rotate.bro +++ b/testing/btest/scripts/base/frameworks/logging/rotate.bro @@ -1,6 +1,6 @@ # -# @TEST-EXEC: bro -b -r %DIR/rotation.trace %INPUT 2>&1 | grep "test" >out -# @TEST-EXEC: for i in test.*.log; do printf '> %s\n' $i; cat $i; done >>out +# @TEST-EXEC: bro -b -r ${TRACES}/rotation.trace %INPUT 2>&1 | grep "test" >out +# @TEST-EXEC: for i in `ls test.*.log | sort`; do printf '> %s\n' $i; cat $i; done >>out # @TEST-EXEC: btest-diff out module Test; diff --git a/testing/btest/scripts/base/frameworks/logging/writer-path-conflict.bro b/testing/btest/scripts/base/frameworks/logging/writer-path-conflict.bro new file mode 100644 index 0000000000..908fb43c72 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/logging/writer-path-conflict.bro @@ -0,0 +1,24 @@ +# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace %INPUT +# @TEST-EXEC: btest-diff reporter.log +# @TEST-EXEC: btest-diff http.log +# @TEST-EXEC: btest-diff http-2.log +# @TEST-EXEC: btest-diff http-3.log +# @TEST-EXEC: btest-diff http-2-2.log + +@load base/protocols/http + +event bro_init() + { + # Both the default filter for the http stream and this new one will + # attempt to have the same writer write to path "http", which will + # be reported as a warning and the path auto-corrected to "http-2" + local filter: Log::Filter = [$name="host-only", $include=set("host")]; + # Same deal here, but should be auto-corrected to "http-3". + local filter2: Log::Filter = [$name="uri-only", $include=set("uri")]; + # Conflict between auto-correct paths needs to be corrected, too, this + # time it will be "http-2-2". + local filter3: Log::Filter = [$path="http-2", $name="status-only", $include=set("status_code")]; + Log::add_filter(HTTP::LOG, filter); + Log::add_filter(HTTP::LOG, filter2); + Log::add_filter(HTTP::LOG, filter3); + } diff --git a/testing/btest/scripts/base/frameworks/metrics/basic-cluster.bro b/testing/btest/scripts/base/frameworks/metrics/basic-cluster.bro index 4b7f177f15..89ae5bf79f 100644 --- a/testing/btest/scripts/base/frameworks/metrics/basic-cluster.bro +++ b/testing/btest/scripts/base/frameworks/metrics/basic-cluster.bro @@ -1,15 +1,17 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run manager-1 BROPATH=$BROPATH:.. CLUSTER_NODE=manager-1 bro %INPUT # @TEST-EXEC: btest-bg-run proxy-1 BROPATH=$BROPATH:.. CLUSTER_NODE=proxy-1 bro %INPUT # @TEST-EXEC: sleep 1 # @TEST-EXEC: btest-bg-run worker-1 BROPATH=$BROPATH:.. CLUSTER_NODE=worker-1 bro %INPUT # @TEST-EXEC: btest-bg-run worker-2 BROPATH=$BROPATH:.. 
CLUSTER_NODE=worker-2 bro %INPUT -# @TEST-EXEC: btest-bg-wait -k 6 +# @TEST-EXEC: btest-bg-wait 30 # @TEST-EXEC: btest-diff manager-1/metrics.log @TEST-START-FILE cluster-layout.bro redef Cluster::nodes = { - ["manager-1"] = [$node_type=Cluster::MANAGER, $ip=127.0.0.1, $p=37757/tcp, $workers=set("worker-1")], - ["proxy-1"] = [$node_type=Cluster::PROXY, $ip=127.0.0.1, $p=37758/tcp, $manager="manager-1", $workers=set("worker-1")], + ["manager-1"] = [$node_type=Cluster::MANAGER, $ip=127.0.0.1, $p=37757/tcp, $workers=set("worker-1", "worker-2")], + ["proxy-1"] = [$node_type=Cluster::PROXY, $ip=127.0.0.1, $p=37758/tcp, $manager="manager-1", $workers=set("worker-1", "worker-2")], ["worker-1"] = [$node_type=Cluster::WORKER, $ip=127.0.0.1, $p=37760/tcp, $manager="manager-1", $proxy="proxy-1", $interface="eth0"], ["worker-2"] = [$node_type=Cluster::WORKER, $ip=127.0.0.1, $p=37761/tcp, $manager="manager-1", $proxy="proxy-1", $interface="eth1"], }; @@ -26,11 +28,51 @@ event bro_init() &priority=5 Metrics::add_filter(TEST_METRIC, [$name="foo-bar", $break_interval=3secs]); - - if ( Cluster::local_node_type() == Cluster::WORKER ) + } + +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +global ready_for_data: event(); + +redef Cluster::manager2worker_events += /ready_for_data/; + +@if ( Cluster::local_node_type() == Cluster::WORKER ) + +event ready_for_data() + { + Metrics::add_data(TEST_METRIC, [$host=1.2.3.4], 3); + Metrics::add_data(TEST_METRIC, [$host=6.5.4.3], 2); + Metrics::add_data(TEST_METRIC, [$host=7.2.1.5], 1); + } + +@endif + +@if ( Cluster::local_node_type() == Cluster::MANAGER ) + +global n = 0; +global peer_count = 0; + +event Metrics::log_metrics(rec: Metrics::Info) + { + n = n + 1; + if ( n == 3 ) { - Metrics::add_data(TEST_METRIC, [$host=1.2.3.4], 3); - Metrics::add_data(TEST_METRIC, [$host=6.5.4.3], 2); - Metrics::add_data(TEST_METRIC, [$host=7.2.1.5], 1); + terminate_communication(); + terminate(); } } + +event remote_connection_handshake_done(p: event_peer) + { + print p; + peer_count = peer_count + 1; + if ( peer_count == 3 ) + { + event ready_for_data(); + } + } + +@endif diff --git a/testing/btest/scripts/base/frameworks/metrics/cluster-intermediate-update.bro b/testing/btest/scripts/base/frameworks/metrics/cluster-intermediate-update.bro index 89d771e05e..db2c7e9f5d 100644 --- a/testing/btest/scripts/base/frameworks/metrics/cluster-intermediate-update.bro +++ b/testing/btest/scripts/base/frameworks/metrics/cluster-intermediate-update.bro @@ -1,9 +1,11 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run manager-1 BROPATH=$BROPATH:.. CLUSTER_NODE=manager-1 bro %INPUT # @TEST-EXEC: btest-bg-run proxy-1 BROPATH=$BROPATH:.. CLUSTER_NODE=proxy-1 bro %INPUT # @TEST-EXEC: sleep 1 # @TEST-EXEC: btest-bg-run worker-1 BROPATH=$BROPATH:.. CLUSTER_NODE=worker-1 bro %INPUT # @TEST-EXEC: btest-bg-run worker-2 BROPATH=$BROPATH:.. 
CLUSTER_NODE=worker-2 bro %INPUT -# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-bg-wait 20 # @TEST-EXEC: btest-diff manager-1/notice.log @TEST-START-FILE cluster-layout.bro @@ -35,6 +37,21 @@ event bro_init() &priority=5 $log=T]); } +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +@if ( Cluster::local_node_type() == Cluster::MANAGER ) + +event Notice::log_notice(rec: Notice::Info) + { + terminate_communication(); + terminate(); + } + +@endif + @if ( Cluster::local_node_type() == Cluster::WORKER ) event do_metrics(i: count) diff --git a/testing/btest/scripts/base/frameworks/notice/cluster.bro b/testing/btest/scripts/base/frameworks/notice/cluster.bro index f44ba72f3a..47932edb8e 100644 --- a/testing/btest/scripts/base/frameworks/notice/cluster.bro +++ b/testing/btest/scripts/base/frameworks/notice/cluster.bro @@ -1,8 +1,10 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run manager-1 BROPATH=$BROPATH:.. CLUSTER_NODE=manager-1 bro %INPUT # @TEST-EXEC: btest-bg-run proxy-1 BROPATH=$BROPATH:.. CLUSTER_NODE=proxy-1 bro %INPUT -# @TEST-EXEC: sleep 1 +# @TEST-EXEC: sleep 2 # @TEST-EXEC: btest-bg-run worker-1 BROPATH=$BROPATH:.. CLUSTER_NODE=worker-1 bro %INPUT -# @TEST-EXEC: btest-bg-wait -k 6 +# @TEST-EXEC: btest-bg-wait 20 # @TEST-EXEC: btest-diff manager-1/notice.log @TEST-START-FILE cluster-layout.bro @@ -19,13 +21,44 @@ redef enum Notice::Type += { Test_Notice, }; +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +global ready: event(); + +redef Cluster::manager2worker_events += /ready/; + event delayed_notice() { if ( Cluster::node == "worker-1" ) NOTICE([$note=Test_Notice, $msg="test notice!"]); } -event bro_init() +@if ( Cluster::local_node_type() == Cluster::WORKER ) + +event ready() { schedule 1secs { delayed_notice() }; } + +@endif + +@if ( Cluster::local_node_type() == Cluster::MANAGER ) + +global peer_count = 0; + +event remote_connection_handshake_done(p: event_peer) + { + peer_count = peer_count + 1; + if ( peer_count == 2 ) + event ready(); + } + +event Notice::log_notice(rec: Notice::Info) + { + terminate_communication(); + } + +@endif diff --git a/testing/btest/scripts/base/frameworks/notice/default-policy-order.test b/testing/btest/scripts/base/frameworks/notice/default-policy-order.test index 6e53bd3b54..d5d3f4c3fa 100644 --- a/testing/btest/scripts/base/frameworks/notice/default-policy-order.test +++ b/testing/btest/scripts/base/frameworks/notice/default-policy-order.test @@ -1,10 +1,10 @@ # This test checks that the default notice policy ordering does not # change from run to run. 
# @TEST-EXEC: bro -e '' -# @TEST-EXEC: mv notice_policy.log notice_policy.log.1 +# @TEST-EXEC: cat notice_policy.log | $SCRIPTS/diff-remove-timestamps > notice_policy.log.1 # @TEST-EXEC: bro -e '' -# @TEST-EXEC: mv notice_policy.log notice_policy.log.2 +# @TEST-EXEC: cat notice_policy.log | $SCRIPTS/diff-remove-timestamps > notice_policy.log.2 # @TEST-EXEC: bro -e '' -# @TEST-EXEC: mv notice_policy.log notice_policy.log.3 +# @TEST-EXEC: cat notice_policy.log | $SCRIPTS/diff-remove-timestamps > notice_policy.log.3 # @TEST-EXEC: diff notice_policy.log.1 notice_policy.log.2 # @TEST-EXEC: diff notice_policy.log.1 notice_policy.log.3 diff --git a/testing/btest/scripts/base/frameworks/notice/mail-alarms.bro b/testing/btest/scripts/base/frameworks/notice/mail-alarms.bro new file mode 100644 index 0000000000..3116b1025a --- /dev/null +++ b/testing/btest/scripts/base/frameworks/notice/mail-alarms.bro @@ -0,0 +1,17 @@ +# @TEST-EXEC: bro -C -r $TRACES/web.trace %INPUT +# @TEST-EXEC: btest-diff alarm-mail.txt + +redef Notice::policy += { [$action = Notice::ACTION_ALARM, $priority = 1 ] }; +redef Notice::force_email_summaries = T; + +redef enum Notice::Type += { + Test_Notice, +}; + +event connection_established(c: connection) + { + NOTICE([$note=Test_Notice, $conn=c, $msg="test", $identifier="static"]); + } + + + diff --git a/testing/btest/scripts/base/frameworks/notice/suppression-cluster.bro b/testing/btest/scripts/base/frameworks/notice/suppression-cluster.bro index a7e720d5f5..5010da82cc 100644 --- a/testing/btest/scripts/base/frameworks/notice/suppression-cluster.bro +++ b/testing/btest/scripts/base/frameworks/notice/suppression-cluster.bro @@ -1,9 +1,11 @@ +# @TEST-SERIALIZE: comm +# # @TEST-EXEC: btest-bg-run manager-1 BROPATH=$BROPATH:.. CLUSTER_NODE=manager-1 bro %INPUT # @TEST-EXEC: btest-bg-run proxy-1 BROPATH=$BROPATH:.. CLUSTER_NODE=proxy-1 bro %INPUT -# @TEST-EXEC: sleep 1 +# @TEST-EXEC: sleep 2 # @TEST-EXEC: btest-bg-run worker-1 BROPATH=$BROPATH:.. CLUSTER_NODE=worker-1 bro %INPUT # @TEST-EXEC: btest-bg-run worker-2 BROPATH=$BROPATH:.. 
CLUSTER_NODE=worker-2 bro %INPUT -# @TEST-EXEC: btest-bg-wait -k 5 +# @TEST-EXEC: btest-bg-wait 20 # @TEST-EXEC: btest-diff manager-1/notice.log @TEST-START-FILE cluster-layout.bro @@ -21,6 +23,15 @@ redef enum Notice::Type += { Test_Notice, }; +event remote_connection_closed(p: event_peer) + { + terminate(); + } + +global ready: event(); + +redef Cluster::manager2worker_events += /ready/; + event delayed_notice() { NOTICE([$note=Test_Notice, @@ -28,10 +39,33 @@ event delayed_notice() $identifier="this identifier is static"]); } -event bro_init() &priority=5 - { +@if ( Cluster::local_node_type() == Cluster::WORKER ) + +event ready() + { if ( Cluster::node == "worker-1" ) schedule 4secs { delayed_notice() }; if ( Cluster::node == "worker-2" ) schedule 1secs { delayed_notice() }; + } + +event Notice::suppressed(n: Notice::Info) + { + if ( Cluster::node == "worker-1" ) + terminate_communication(); } + +@endif + +@if ( Cluster::local_node_type() == Cluster::MANAGER ) + +global peer_count = 0; + +event remote_connection_handshake_done(p: event_peer) + { + peer_count = peer_count + 1; + if ( peer_count == 3 ) + event ready(); + } + +@endif diff --git a/testing/btest/scripts/base/frameworks/reporter/disable-stderr.bro b/testing/btest/scripts/base/frameworks/reporter/disable-stderr.bro new file mode 100644 index 0000000000..b1afb99b5c --- /dev/null +++ b/testing/btest/scripts/base/frameworks/reporter/disable-stderr.bro @@ -0,0 +1,13 @@ +# @TEST-EXEC: bro %INPUT +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff .stderr +# @TEST-EXEC: TEST_DIFF_CANONIFIER="$SCRIPTS/diff-remove-abspath | $SCRIPTS/diff-remove-timestamps" btest-diff reporter.log + +redef Reporter::warnings_to_stderr = F; +redef Reporter::errors_to_stderr = F; + +global test: table[count] of string = {}; + +event bro_init() + { + print test[3]; + } diff --git a/testing/btest/scripts/base/frameworks/reporter/stderr.bro b/testing/btest/scripts/base/frameworks/reporter/stderr.bro new file mode 100644 index 0000000000..ef01c9fdf9 --- /dev/null +++ b/testing/btest/scripts/base/frameworks/reporter/stderr.bro @@ -0,0 +1,10 @@ +# @TEST-EXEC: bro %INPUT +# @TEST-EXEC: TEST_DIFF_CANONIFIER=$SCRIPTS/diff-remove-abspath btest-diff .stderr +# @TEST-EXEC: TEST_DIFF_CANONIFIER="$SCRIPTS/diff-remove-abspath | $SCRIPTS/diff-remove-timestamps" btest-diff reporter.log + +global test: table[count] of string = {}; + +event bro_init() + { + print test[3]; + } diff --git a/testing/btest/scripts/base/frameworks/software/version-parsing.bro b/testing/btest/scripts/base/frameworks/software/version-parsing.bro index dda3edea4b..03327a25cd 100644 --- a/testing/btest/scripts/base/frameworks/software/version-parsing.bro +++ b/testing/btest/scripts/base/frameworks/software/version-parsing.bro @@ -1,116 +1,116 @@ # @TEST-EXEC: bro %INPUT > output # @TEST-EXEC: btest-diff output -global ts = network_time(); -global host = 0.0.0.0; +module Software; -global matched_software: table[string] of Software::Info = { +global matched_software: table[string] of Software::Description = { ["OpenSSH_4.4"] = - [$name="OpenSSH", $version=[$major=4,$minor=4], $host=host, $ts=ts], + [$name="OpenSSH", $version=[$major=4,$minor=4], $unparsed_version=""], ["OpenSSH_5.2"] = - [$name="OpenSSH", $version=[$major=5,$minor=2], $host=host, $ts=ts], + [$name="OpenSSH", $version=[$major=5,$minor=2], $unparsed_version=""], ["Apache/2.0.63 (Unix) mod_auth_kerb/5.3 mod_ssl/2.0.63 OpenSSL/0.9.7a mod_fastcgi/2.4.2"] = - [$name="Apache", 
$version=[$major=2,$minor=0,$minor2=63,$addl="Unix"], $host=host, $ts=ts], + [$name="Apache", $version=[$major=2,$minor=0,$minor2=63,$addl="Unix"], $unparsed_version=""], ["Apache/1.3.19 (Unix)"] = - [$name="Apache", $version=[$major=1,$minor=3,$minor2=19,$addl="Unix"], $host=host, $ts=ts], + [$name="Apache", $version=[$major=1,$minor=3,$minor2=19,$addl="Unix"], $unparsed_version=""], ["ProFTPD 1.2.5rc1 Server (Debian)"] = - [$name="ProFTPD", $version=[$major=1,$minor=2,$minor2=5,$addl="rc1"], $host=host, $ts=ts], + [$name="ProFTPD", $version=[$major=1,$minor=2,$minor2=5,$addl="rc1"], $unparsed_version=""], ["wu-2.4.2-academ[BETA-18-VR14](1)"] = - [$name="wu", $version=[$major=2,$minor=4,$minor2=2,$addl="academ"], $host=host, $ts=ts], + [$name="wu", $version=[$major=2,$minor=4,$minor2=2,$addl="academ"], $unparsed_version=""], ["wu-2.6.2(1)"] = - [$name="wu", $version=[$major=2,$minor=6,$minor2=2,$addl="1"], $host=host, $ts=ts], + [$name="wu", $version=[$major=2,$minor=6,$minor2=2,$addl="1"], $unparsed_version=""], ["Java1.2.2-JDeveloper"] = - [$name="Java", $version=[$major=1,$minor=2,$minor2=2,$addl="JDeveloper"], $host=host, $ts=ts], + [$name="Java", $version=[$major=1,$minor=2,$minor2=2,$addl="JDeveloper"], $unparsed_version=""], ["Java/1.6.0_13"] = - [$name="Java", $version=[$major=1,$minor=6,$minor2=0,$addl="13"], $host=host, $ts=ts], + [$name="Java", $version=[$major=1,$minor=6,$minor2=0,$addl="13"], $unparsed_version=""], ["Python-urllib/3.1"] = - [$name="Python-urllib", $version=[$major=3,$minor=1], $host=host, $ts=ts], + [$name="Python-urllib", $version=[$major=3,$minor=1], $unparsed_version=""], ["libwww-perl/5.820"] = - [$name="libwww-perl", $version=[$major=5,$minor=820], $host=host, $ts=ts], + [$name="libwww-perl", $version=[$major=5,$minor=820], $unparsed_version=""], ["Wget/1.9+cvs-stable (Red Hat modified)"] = - [$name="Wget", $version=[$major=1,$minor=9,$addl="+cvs"], $host=host, $ts=ts], + [$name="Wget", $version=[$major=1,$minor=9,$addl="+cvs"], $unparsed_version=""], ["Wget/1.11.4 (Red Hat modified)"] = - [$name="Wget", $version=[$major=1,$minor=11,$minor2=4,$addl="Red Hat modified"], $host=host, $ts=ts], + [$name="Wget", $version=[$major=1,$minor=11,$minor2=4,$addl="Red Hat modified"], $unparsed_version=""], ["curl/7.15.1 (i486-pc-linux-gnu) libcurl/7.15.1 OpenSSL/0.9.8a zlib/1.2.3 libidn/0.5.18"] = - [$name="curl", $version=[$major=7,$minor=15,$minor2=1,$addl="i486-pc-linux-gnu"], $host=host, $ts=ts], + [$name="curl", $version=[$major=7,$minor=15,$minor2=1,$addl="i486-pc-linux-gnu"], $unparsed_version=""], ["Apache"] = - [$name="Apache", $host=host, $ts=ts], + [$name="Apache", $unparsed_version=""], ["Zope/(Zope 2.7.8-final, python 2.3.5, darwin) ZServer/1.1 Plone/Unknown"] = - [$name="Zope/(Zope", $version=[$major=2,$minor=7,$minor2=8,$addl="final"], $host=host, $ts=ts], + [$name="Zope/(Zope", $version=[$major=2,$minor=7,$minor2=8,$addl="final"], $unparsed_version=""], ["The Bat! 
(v2.00.9) Personal"] = - [$name="The Bat!", $version=[$major=2,$minor=0,$minor2=9,$addl="Personal"], $host=host, $ts=ts], + [$name="The Bat!", $version=[$major=2,$minor=0,$minor2=9,$addl="Personal"], $unparsed_version=""], ["Flash/10,2,153,1"] = - [$name="Flash", $version=[$major=10,$minor=2,$minor2=153,$addl="1"], $host=host, $ts=ts], + [$name="Flash", $version=[$major=10,$minor=2,$minor2=153,$addl="1"], $unparsed_version=""], ["mt2/1.2.3.967 Oct 13 2010-13:40:24 ord-pixel-x2 pid 0x35a3 13731"] = - [$name="mt2", $version=[$major=1,$minor=2,$minor2=3,$addl="967"], $host=host, $ts=ts], + [$name="mt2", $version=[$major=1,$minor=2,$minor2=3,$addl="967"], $unparsed_version=""], ["CacheFlyServe v26b"] = - [$name="CacheFlyServe", $version=[$major=26,$addl="b"], $host=host, $ts=ts], + [$name="CacheFlyServe", $version=[$major=26,$addl="b"], $unparsed_version=""], ["Apache/2.0.46 (Win32) mod_ssl/2.0.46 OpenSSL/0.9.7b mod_jk2/2.0.4"] = - [$name="Apache", $version=[$major=2,$minor=0,$minor2=46,$addl="Win32"], $host=host, $ts=ts], + [$name="Apache", $version=[$major=2,$minor=0,$minor2=46,$addl="Win32"], $unparsed_version=""], # I have no clue how I'd support this without a special case. #["Apache mod_fcgid/2.3.6 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635"] = - # [$name="Apache", $version=[], $host=host, $ts=ts], + # [$name="Apache", $version=[], $unparsed_version=""], ["Apple iPhone v4.3.1 Weather v1.0.0.8G4"] = - [$name="Apple iPhone", $version=[$major=4,$minor=3,$minor2=1,$addl="Weather"], $host=host, $ts=ts], + [$name="Apple iPhone", $version=[$major=4,$minor=3,$minor2=1,$addl="Weather"], $unparsed_version=""], ["Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_2 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8H7 Safari/6533.18.5"] = - [$name="Safari", $version=[$major=5,$minor=0,$minor2=2,$addl="Mobile"], $host=host, $ts=ts], + [$name="Safari", $version=[$major=5,$minor=0,$minor2=2,$addl="Mobile"], $unparsed_version=""], ["Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_7; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.205 Safari/534.16"] = - [$name="Chrome", $version=[$major=10,$minor=0,$minor2=648,$addl="205"], $host=host, $ts=ts], + [$name="Chrome", $version=[$major=10,$minor=0,$minor2=648,$addl="205"], $unparsed_version=""], ["Opera/9.80 (Windows NT 6.1; U; sv) Presto/2.7.62 Version/11.01"] = - [$name="Opera", $version=[$major=11,$minor=1], $host=host, $ts=ts], + [$name="Opera", $version=[$major=11,$minor=1], $unparsed_version=""], ["Mozilla/5.0 (Windows; U; Windows NT 5.1; de; rv:1.9.2.11) Gecko/20101013 Lightning/1.0b2 Thunderbird/3.1.5"] = - [$name="Thunderbird", $version=[$major=3,$minor=1,$minor2=5], $host=host, $ts=ts], + [$name="Thunderbird", $version=[$major=3,$minor=1,$minor2=5], $unparsed_version=""], ["iTunes/9.0 (Macintosh; Intel Mac OS X 10.5.8) AppleWebKit/531.9"] = - [$name="iTunes", $version=[$major=9,$minor=0,$addl="Macintosh"], $host=host, $ts=ts], + [$name="iTunes", $version=[$major=9,$minor=0,$addl="Macintosh"], $unparsed_version=""], ["Java1.3.1_04"] = - [$name="Java", $version=[$major=1,$minor=3,$minor2=1,$addl="04"], $host=host, $ts=ts], + [$name="Java", $version=[$major=1,$minor=3,$minor2=1,$addl="04"], $unparsed_version=""], ["Mozilla/5.0 (Linux; U; Android 2.3.3; zh-tw; HTC Pyramid Build/GRI40) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"] = - [$name="Safari", $version=[$major=4,$minor=0,$addl="Mobile"], $host=host, $ts=ts], + [$name="Safari", 
$version=[$major=4,$minor=0,$addl="Mobile"], $unparsed_version=""], ["Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-us) AppleWebKit/533.20.25 (KHTML, like Gecko) Version/5.0.4 Safari/533.20.27"] = - [$name="Safari", $version=[$major=5,$minor=0,$minor2=4], $host=host, $ts=ts], + [$name="Safari", $version=[$major=5,$minor=0,$minor2=4], $unparsed_version=""], ["Mozilla/5.0 (iPod; U; CPU iPhone OS 4_0 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8A293 Safari/6531.22.7"] = - [$name="Safari", $version=[$major=4,$minor=0,$minor2=5,$addl="Mobile"], $host=host, $ts=ts], + [$name="Safari", $version=[$major=4,$minor=0,$minor2=5,$addl="Mobile"], $unparsed_version=""], ["Opera/9.80 (J2ME/MIDP; Opera Mini/9.80 (S60; SymbOS; Opera Mobi/23.348; U; en) Presto/2.5.25 Version/10.54"] = - [$name="Opera Mini", $version=[$major=10,$minor=54], $host=host, $ts=ts], + [$name="Opera Mini", $version=[$major=10,$minor=54], $unparsed_version=""], ["Opera/9.80 (J2ME/MIDP; Opera Mini/5.0.18741/18.794; U; en) Presto/2.4.15"] = - [$name="Opera Mini", $version=[$major=5,$minor=0,$minor2=18741], $host=host, $ts=ts], + [$name="Opera Mini", $version=[$major=5,$minor=0,$minor2=18741], $unparsed_version=""], ["Opera/9.80 (Windows NT 5.1; Opera Mobi/49; U; en) Presto/2.4.18 Version/10.00"] = - [$name="Opera Mobi", $version=[$major=10,$minor=0], $host=host, $ts=ts], + [$name="Opera Mobi", $version=[$major=10,$minor=0], $unparsed_version=""], ["Mozilla/4.0 (compatible; MSIE 8.0; Android 2.2.2; Linux; Opera Mobi/ADR-1103311355; en) Opera 11.00"] = - [$name="Opera", $version=[$major=11,$minor=0], $host=host, $ts=ts], + [$name="Opera", $version=[$major=11,$minor=0], $unparsed_version=""], ["Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.2) Gecko/20040804 Netscape/7.2 (ax)"] = - [$name="Netscape", $version=[$major=7,$minor=2], $host=host, $ts=ts], + [$name="Netscape", $version=[$major=7,$minor=2], $unparsed_version=""], ["Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; GTB5; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506; InfoPath.2)"] = - [$name="MSIE", $version=[$major=7,$minor=0], $host=host, $ts=ts], + [$name="MSIE", $version=[$major=7,$minor=0], $unparsed_version=""], ["Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.1; Media Center PC 3.0; .NET CLR 1.0.3705; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.1)"] = - [$name="MSIE", $version=[$major=7,$minor=0,$addl="b"], $host=host, $ts=ts], + [$name="MSIE", $version=[$major=7,$minor=0,$addl="b"], $unparsed_version=""], ["Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Tablet PC 2.0; InfoPath.2; InfoPath.3)"] = - [$name="MSIE", $version=[$major=8,$minor=0], $host=host, $ts=ts], + [$name="MSIE", $version=[$major=8,$minor=0], $unparsed_version=""], ["Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)"] = - [$name="MSIE", $version=[$major=9,$minor=0], $host=host, $ts=ts], + [$name="MSIE", $version=[$major=9,$minor=0], $unparsed_version=""], ["Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.3; Creative AutoUpdate v1.40.02)"] = - [$name="MSIE", $version=[$major=9,$minor=0], $host=host, $ts=ts], + [$name="MSIE", $version=[$major=9,$minor=0], $unparsed_version=""], ["Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"] = - 
[$name="MSIE", $version=[$major=10,$minor=0], $host=host, $ts=ts], + [$name="MSIE", $version=[$major=10,$minor=0], $unparsed_version=""], ["The Bat! (3.0.1 RC3) Professional"] = - [$name="The Bat!", $version=[$major=3,$minor=0,$minor2=1,$addl="RC3"], $host=host, $ts=ts], + [$name="The Bat!", $version=[$major=3,$minor=0,$minor2=1,$addl="RC3"], $unparsed_version=""], # This is an FTP client (found with CLNT command) ["Total Commander"] = - [$name="Total Commander", $version=[], $host=host, $ts=ts], + [$name="Total Commander", $version=[], $unparsed_version=""], ["(vsFTPd 2.0.5)"] = - [$name="vsFTPd", $version=[$major=2,$minor=0,$minor2=5], $host=host, $ts=ts], + [$name="vsFTPd", $version=[$major=2,$minor=0,$minor2=5], $unparsed_version=""], ["Apple Mail (2.1084)"] = - [$name="Apple Mail", $version=[$major=2,$minor=1084], $host=host, $ts=ts], + [$name="Apple Mail", $version=[$major=2,$minor=1084], $unparsed_version=""], }; event bro_init() { for ( sw in matched_software ) { - local output = Software::parse(sw, host, Software::UNKNOWN); - local baseline: Software::Info; - baseline = matched_software[sw]; + local output = Software::parse(sw); + local baseline = matched_software[sw]; + if ( baseline$name == output$name && + sw == output$unparsed_version && Software::cmp_versions(baseline$version,output$version) == 0 ) print fmt("success on: %s", sw); else diff --git a/testing/btest/scripts/base/protocols/conn/contents-default-extract.test b/testing/btest/scripts/base/protocols/conn/contents-default-extract.test new file mode 100644 index 0000000000..82f46b62c8 --- /dev/null +++ b/testing/btest/scripts/base/protocols/conn/contents-default-extract.test @@ -0,0 +1,3 @@ +# @TEST-EXEC: bro -f "tcp port 21" -r $TRACES/ipv6-ftp.trace "Conn::default_extract=T" +# @TEST-EXEC: btest-diff contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_orig.dat +# @TEST-EXEC: btest-diff contents_[2001:470:1f11:81f:c999:d94:aa7c:2e3e]:49185-[2001:470:4867:99::21]:21_resp.dat diff --git a/testing/btest/scripts/base/protocols/conn/polling.test b/testing/btest/scripts/base/protocols/conn/polling.test new file mode 100644 index 0000000000..a6fbc35f66 --- /dev/null +++ b/testing/btest/scripts/base/protocols/conn/polling.test @@ -0,0 +1,20 @@ +# @TEST-EXEC: bro -b -r $TRACES/http-100-continue.trace %INPUT >out1 +# @TEST-EXEC: btest-diff out1 +# @TEST-EXEC: bro -b -r $TRACES/http-100-continue.trace %INPUT stop_cnt=2 >out2 +# @TEST-EXEC: btest-diff out2 + +@load base/protocols/conn + +const stop_cnt = 10 &redef; + +function callback(c: connection, cnt: count): interval + { + print "callback", c$id, cnt; + return cnt >= stop_cnt ? -1 sec : .2 sec; + } + +event new_connection(c: connection) + { + print "new_connection", c$id; + ConnPolling::watch(c, callback, 0, 0secs); + } diff --git a/testing/btest/scripts/base/protocols/dns/zero-responses.bro b/testing/btest/scripts/base/protocols/dns/zero-responses.bro new file mode 100644 index 0000000000..54f7d7b7d3 --- /dev/null +++ b/testing/btest/scripts/base/protocols/dns/zero-responses.bro @@ -0,0 +1,4 @@ +# This tests the case where the DNS server responded with zero RRs. 
+# +# @TEST-EXEC: bro -r $TRACES/dns-zero-RRs.trace +# @TEST-EXEC: btest-diff dns.log \ No newline at end of file diff --git a/testing/btest/scripts/base/protocols/ftp/ftp-ipv4.bro b/testing/btest/scripts/base/protocols/ftp/ftp-ipv4.bro new file mode 100644 index 0000000000..5cb8b808d5 --- /dev/null +++ b/testing/btest/scripts/base/protocols/ftp/ftp-ipv4.bro @@ -0,0 +1,6 @@ +# This tests both active and passive FTP over IPv4. +# +# @TEST-EXEC: bro -r $TRACES/ftp-ipv4.trace +# @TEST-EXEC: btest-diff conn.log +# @TEST-EXEC: btest-diff ftp.log + diff --git a/testing/btest/scripts/base/protocols/ftp/ftp-ipv6.bro b/testing/btest/scripts/base/protocols/ftp/ftp-ipv6.bro new file mode 100644 index 0000000000..7ce31808c9 --- /dev/null +++ b/testing/btest/scripts/base/protocols/ftp/ftp-ipv6.bro @@ -0,0 +1,6 @@ +# This tests both active and passive FTP over IPv6. +# +# @TEST-EXEC: bro -r $TRACES/ipv6-ftp.trace +# @TEST-EXEC: btest-diff conn.log +# @TEST-EXEC: btest-diff ftp.log + diff --git a/testing/btest/scripts/base/protocols/ftp/gridftp.test b/testing/btest/scripts/base/protocols/ftp/gridftp.test new file mode 100644 index 0000000000..494729cf5f --- /dev/null +++ b/testing/btest/scripts/base/protocols/ftp/gridftp.test @@ -0,0 +1,21 @@ +# @TEST-EXEC: bro -r $TRACES/globus-url-copy.trace %INPUT +# @TEST-EXEC: btest-diff notice.log +# @TEST-EXEC: btest-diff conn.log +# @TEST-EXEC: btest-diff ssl.log + +@load base/protocols/ftp/gridftp + +module GridFTP; + +redef size_threshold = 2; + +redef enum Notice::Type += { + Data_Channel +}; + +event GridFTP::data_channel_detected(c: connection) + { + local msg = fmt("GridFTP data channel over threshold %d bytes", + size_threshold); + NOTICE([$note=Data_Channel, $msg=msg, $conn=c]); + } diff --git a/testing/btest/scripts/base/protocols/http/http-extract-files.bro b/testing/btest/scripts/base/protocols/http/http-extract-files.bro new file mode 100644 index 0000000000..4338cddb47 --- /dev/null +++ b/testing/btest/scripts/base/protocols/http/http-extract-files.bro @@ -0,0 +1,5 @@ +# @TEST-EXEC: bro -C -r $TRACES/web.trace %INPUT +# @TEST-EXEC: btest-diff http.log +# @TEST-EXEC: btest-diff http-item_141.42.64.125:56730-125.190.109.199:80_resp_1.dat + +redef HTTP::extract_file_types += /text\/html/; \ No newline at end of file diff --git a/testing/btest/scripts/base/protocols/http/http-mime-and-md5.bro b/testing/btest/scripts/base/protocols/http/http-mime-and-md5.bro index 53b340d174..dd01e62413 100644 --- a/testing/btest/scripts/base/protocols/http/http-mime-and-md5.bro +++ b/testing/btest/scripts/base/protocols/http/http-mime-and-md5.bro @@ -2,7 +2,6 @@ # will normalize mime types other than the target type to prevent sensitivity # to varying versions of libmagic. -# @TEST-REQUIRES: grep -q '#define HAVE_LIBMAGIC' $BUILD/config.h # @TEST-EXEC: bro -r $TRACES/http-pipelined-requests.trace %INPUT > output # @TEST-EXEC: btest-diff http.log diff --git a/testing/btest/scripts/base/protocols/irc/dcc-extract.test b/testing/btest/scripts/base/protocols/irc/dcc-extract.test index 0324a3f28f..b6bf43ac50 100644 --- a/testing/btest/scripts/base/protocols/irc/dcc-extract.test +++ b/testing/btest/scripts/base/protocols/irc/dcc-extract.test @@ -2,7 +2,6 @@ # correctly extracted. The mime type of the file transferred is normalized # to prevent sensitivity to libmagic version being used. 
-# @TEST-REQUIRES: grep -q '#define HAVE_LIBMAGIC' $BUILD/config.h # @TEST-EXEC: bro -r $TRACES/irc-dcc-send.trace %INPUT # @TEST-EXEC: btest-diff irc.log # @TEST-EXEC: btest-diff irc-dcc-item_192.168.1.77:57655-209.197.168.151:1024_1.dat diff --git a/testing/btest/scripts/base/protocols/smtp/mime-extract.test b/testing/btest/scripts/base/protocols/smtp/mime-extract.test index ba0869f6d9..8dc8bc9e31 100644 --- a/testing/btest/scripts/base/protocols/smtp/mime-extract.test +++ b/testing/btest/scripts/base/protocols/smtp/mime-extract.test @@ -1,4 +1,3 @@ -# @TEST-REQUIRES: grep -q '#define HAVE_LIBMAGIC' $BUILD/config.h # @TEST-EXEC: bro -r $TRACES/smtp.trace %INPUT # @TEST-EXEC: btest-diff smtp_entities.log # @TEST-EXEC: btest-diff smtp-entity_10.10.1.4:1470-74.53.140.153:25_1.dat diff --git a/testing/btest/scripts/base/protocols/smtp/mime.test b/testing/btest/scripts/base/protocols/smtp/mime.test index 6d50d5919f..2b80e148ff 100644 --- a/testing/btest/scripts/base/protocols/smtp/mime.test +++ b/testing/btest/scripts/base/protocols/smtp/mime.test @@ -1,7 +1,6 @@ # Checks logging of mime types and md5 calculation. Mime type in the log # is normalized to prevent sensitivity to libmagic version. -# @TEST-REQUIRES: grep -q '#define HAVE_LIBMAGIC' $BUILD/config.h # @TEST-EXEC: bro -r $TRACES/smtp.trace %INPUT # @TEST-EXEC: btest-diff smtp_entities.log diff --git a/testing/btest/scripts/base/protocols/socks/trace1.test b/testing/btest/scripts/base/protocols/socks/trace1.test new file mode 100644 index 0000000000..fb1d9ebaf2 --- /dev/null +++ b/testing/btest/scripts/base/protocols/socks/trace1.test @@ -0,0 +1,5 @@ +# @TEST-EXEC: bro -r $TRACES/socks.trace %INPUT +# @TEST-EXEC: btest-diff socks.log +# @TEST-EXEC: btest-diff tunnel.log + +@load base/protocols/socks diff --git a/testing/btest/scripts/base/protocols/socks/trace2.test b/testing/btest/scripts/base/protocols/socks/trace2.test new file mode 100644 index 0000000000..5e3a449120 --- /dev/null +++ b/testing/btest/scripts/base/protocols/socks/trace2.test @@ -0,0 +1,5 @@ +# @TEST-EXEC: bro -r $TRACES/socks-with-ssl.trace %INPUT +# @TEST-EXEC: btest-diff socks.log +# @TEST-EXEC: btest-diff tunnel.log + +@load base/protocols/socks diff --git a/testing/btest/scripts/base/protocols/socks/trace3.test b/testing/btest/scripts/base/protocols/socks/trace3.test new file mode 100644 index 0000000000..c3b3b091eb --- /dev/null +++ b/testing/btest/scripts/base/protocols/socks/trace3.test @@ -0,0 +1,4 @@ +# @TEST-EXEC: bro -C -r $TRACES/tunnels/socks.pcap %INPUT +# @TEST-EXEC: btest-diff tunnel.log + +@load base/protocols/socks diff --git a/testing/btest/scripts/base/protocols/ssl/basic.test b/testing/btest/scripts/base/protocols/ssl/basic.test new file mode 100644 index 0000000000..94b0e87ec1 --- /dev/null +++ b/testing/btest/scripts/base/protocols/ssl/basic.test @@ -0,0 +1,4 @@ +# This tests a normal SSL connection and the log it outputs. 
+ +# @TEST-EXEC: bro -r $TRACES/tls-conn-with-extensions.trace %INPUT +# @TEST-EXEC: btest-diff ssl.log diff --git a/testing/btest/scripts/policy/protocols/http/test-sql-injection-regex.bro b/testing/btest/scripts/policy/protocols/http/test-sql-injection-regex.bro index bf8be22210..2e82eb9dfb 100644 --- a/testing/btest/scripts/policy/protocols/http/test-sql-injection-regex.bro +++ b/testing/btest/scripts/policy/protocols/http/test-sql-injection-regex.bro @@ -42,6 +42,8 @@ event bro_init () #add positive_matches["/index.asp?ARF_ID=(1/(1-(asc(mid(now(),18,1))\(2^7) mod 2)))"]; #add positive_matches["/index.php' and 1=convert(int,(select top 1 table_name from information_schema.tables))--sp_password"]; #add positive_matches["/index.php?id=873 and user=0--"]; + #add positive_matches["?id=1;+if+(1=1)+waitfor+delay+'00:00:01'--9"]; + #add positive_matches["?id=1+and+if(1=1,BENCHMARK(728000,MD5(0x41)),0)9"]; # The positive_matches below are from the mod_security evasion challenge. # All supported attacks are uncommented. @@ -95,14 +97,6 @@ event bro_init () #add negative_matches["/index/hmm.gif?utmdt=Record > Create a Graph"]; #add negative_matches["/index.php?test='||\x0aTO_CHAR(foo_bar.Foo_Bar_ID)||"]; - local regex = - /[\?&][^[:blank:]\x00-\x37\|]+?=[\-[:alnum:]%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+.*?([hH][aA][vV][iI][nN][gG]|[uU][nN][iI][oO][nN]|[eE][xX][eE][cC]|[sS][eE][lL][eE][cC][tT]|[dD][eE][lL][eE][tT][eE]|[dD][rR][oO][pP]|[dD][eE][cC][lL][aA][rR][eE]|[cC][rR][eE][aA][tT][eE]|[iI][nN][sS][eE][rR][tT])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+/ - | /[\?&][^[:blank:]\x00-\x37\|]+?=[\-0-9%]+([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]?([[:blank:]\x00-\x37]|\/\*.*?\*\/|\)?;)+([xX]?[oO][rR]|[nN]?[aA][nN][dD])([[:blank:]\x00-\x37]|\/\*.*?\*\/)+['"]?(([^a-zA-Z&]+)?=|[eE][xX][iI][sS][tT][sS])/ - | /[\?&][^[:blank:]\x00-\x37]+?=[\-0-9%]*([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]([[:blank:]\x00-\x37]|\/\*.*?\*\/)*(-|=|\+|\|\|)([[:blank:]\x00-\x37]|\/\*.*?\*\/)*([0-9]|\(?[cC][oO][nN][vV][eE][rR][tT]|[cC][aA][sS][tT])/ - | /[\?&][^[:blank:]\x00-\x37\|]+?=([[:blank:]\x00-\x37]|\/\*.*?\*\/)*['"]([[:blank:]\x00-\x37]|\/\*.*?\*\/|;)*([xX]?[oO][rR]|[nN]?[aA][nN][dD]|[hH][aA][vV][iI][nN][gG]|[uU][nN][iI][oO][nN]|[eE][xX][eE][cC]|[sS][eE][lL][eE][cC][tT]|[dD][eE][lL][eE][tT][eE]|[dD][rR][oO][pP]|[dD][eE][cC][lL][aA][rR][eE]|[cC][rR][eE][aA][tT][eE]|[rR][eE][gG][eE][xX][pP]|[iI][nN][sS][eE][rR][tT])([[:blank:]\x00-\x37]|\/\*.*?\*\/|[\[(])+[a-zA-Z&]{2,}/ - | /[\?&][^[:blank:]\x00-\x37]+?=[^\.]*?([cC][hH][aA][rR]|[aA][sS][cC][iI][iI]|[sS][uU][bB][sS][tT][rR][iI][nN][gG]|[tT][rR][uU][nN][cC][aA][tT][eE]|[vV][eE][rR][sS][iI][oO][nN]|[lL][eE][nN][gG][tT][hH])\(/ - | /\/\*![[:digit:]]{5}.*?\*\//; - print "If anything besides this line prints out, there is a problem."; for ( test in positive_matches ) { diff --git a/testing/btest/signatures/bad-eval-condition.bro b/testing/btest/signatures/bad-eval-condition.bro new file mode 100644 index 0000000000..34997b1124 --- /dev/null +++ b/testing/btest/signatures/bad-eval-condition.bro @@ -0,0 +1,22 @@ +# @TEST-EXEC-FAIL: bro -r $TRACES/ftp-ipv4.trace %INPUT +# @TEST-EXEC: btest-diff .stderr + +@load-sigs blah.sig + +@TEST-START-FILE blah.sig +signature blah + { + ip-proto == tcp + src-port == 21 + payload /.*/ + eval mark_conn + } +@TEST-END-FILE + +# wrong function signature for use with signature 'eval' conditions +# needs to be reported +function mark_conn(state: signature_state): bool + { + add state$conn$service["blah"]; + return T; + 
} diff --git a/testing/btest/signatures/dpd.bro b/testing/btest/signatures/dpd.bro new file mode 100644 index 0000000000..d6ae02cb50 --- /dev/null +++ b/testing/btest/signatures/dpd.bro @@ -0,0 +1,54 @@ +# @TEST-EXEC: bro -b -s myftp -r $TRACES/ftp-ipv4.trace %INPUT >dpd-ipv4.out +# @TEST-EXEC: bro -b -s myftp -r $TRACES/ipv6-ftp.trace %INPUT >dpd-ipv6.out +# @TEST-EXEC: bro -b -r $TRACES/ftp-ipv4.trace %INPUT >nosig-ipv4.out +# @TEST-EXEC: bro -b -r $TRACES/ipv6-ftp.trace %INPUT >nosig-ipv6.out +# @TEST-EXEC: btest-diff dpd-ipv4.out +# @TEST-EXEC: btest-diff dpd-ipv6.out +# @TEST-EXEC: btest-diff nosig-ipv4.out +# @TEST-EXEC: btest-diff nosig-ipv6.out + +# DPD based on 'ip-proto' and 'payload' signatures should be independent +# of IP protocol. + +@TEST-START-FILE myftp.sig +signature my_ftp_client { + ip-proto == tcp + payload /(|.*[\n\r]) *[uU][sS][eE][rR] / + tcp-state originator + event "matched my_ftp_client" +} + +signature my_ftp_server { + ip-proto == tcp + payload /[\n\r ]*(120|220)[^0-9].*[\n\r] *(230|331)[^0-9]/ + tcp-state responder + requires-reverse-signature my_ftp_client + enable "ftp" + event "matched my_ftp_server" +} +@TEST-END-FILE + +@load base/utils/addrs + +event bro_init() + { + # no analyzer attached to any port by default, depends entirely on sigs + print "dpd_config", dpd_config; + } + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } + +event ftp_request(c: connection, command: string, arg: string) + { + print fmt("ftp_request %s:%s - %s %s", addr_to_uri(c$id$orig_h), + port_to_count(c$id$orig_p), command, arg); + } + +event ftp_reply(c: connection, code: count, msg: string, cont_resp: bool) + { + print fmt("ftp_reply %s:%s - %s %s", addr_to_uri(c$id$resp_h), + port_to_count(c$id$resp_p), code, msg); + } diff --git a/testing/btest/signatures/dst-ip-header-condition-v4-masks.bro b/testing/btest/signatures/dst-ip-header-condition-v4-masks.bro new file mode 100644 index 0000000000..dc5b0f48b8 --- /dev/null +++ b/testing/btest/signatures/dst-ip-header-condition-v4-masks.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s dst-ip-eq -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-eq.out +# @TEST-EXEC: bro -b -s dst-ip-eq-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-eq-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-eq-list.out + +# @TEST-EXEC: bro -b -s dst-ip-ne -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne.out +# @TEST-EXEC: bro -b -s dst-ip-ne-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne-list.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff dst-ip-eq.out +# @TEST-EXEC: btest-diff dst-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-eq-list.out + +# @TEST-EXEC: btest-diff dst-ip-ne.out +# @TEST-EXEC: btest-diff dst-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-ne-list.out +# @TEST-EXEC: btest-diff dst-ip-ne-list-nomatch.out + +@TEST-START-FILE dst-ip-eq.sig +signature id { + dst-ip == 192.168.1.0/24 + event "dst-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-nomatch.sig +signature id { + dst-ip == 10.0.0.0/8 + event "dst-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-list.sig +signature 
id { + dst-ip == 10.0.0.0/8,[fe80::0]/16,192.168.1.0/24 + event "dst-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne.sig +signature id { + dst-ip != 10.0.0.0/8 + event "dst-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-nomatch.sig +signature id { + dst-ip != 192.168.1.0/24 + event "dst-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list.sig +signature id { + dst-ip != 10.0.0.0/8,[fe80::0]/16 + event "dst-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list-nomatch.sig +signature id { + dst-ip != 10.0.0.0/8,[fe80::0]/16,192.168.1.0/24 + event "dst-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/dst-ip-header-condition-v4.bro b/testing/btest/signatures/dst-ip-header-condition-v4.bro new file mode 100644 index 0000000000..0d0d3e644c --- /dev/null +++ b/testing/btest/signatures/dst-ip-header-condition-v4.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s dst-ip-eq -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-eq.out +# @TEST-EXEC: bro -b -s dst-ip-eq-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-eq-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-eq-list.out + +# @TEST-EXEC: bro -b -s dst-ip-ne -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne.out +# @TEST-EXEC: bro -b -s dst-ip-ne-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne-list.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >dst-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff dst-ip-eq.out +# @TEST-EXEC: btest-diff dst-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-eq-list.out + +# @TEST-EXEC: btest-diff dst-ip-ne.out +# @TEST-EXEC: btest-diff dst-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-ne-list.out +# @TEST-EXEC: btest-diff dst-ip-ne-list-nomatch.out + +@TEST-START-FILE dst-ip-eq.sig +signature id { + dst-ip == 192.168.1.101 + event "dst-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-nomatch.sig +signature id { + dst-ip == 10.0.0.1 + event "dst-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-list.sig +signature id { + dst-ip == 10.0.0.1,10.0.0.2,[fe80::1],192.168.1.101 + event "dst-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne.sig +signature id { + dst-ip != 10.0.0.1 + event "dst-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-nomatch.sig +signature id { + dst-ip != 192.168.1.101 + event "dst-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list.sig +signature id { + dst-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1] + event "dst-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list-nomatch.sig +signature id { + dst-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1],192.168.1.101 + event "dst-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/dst-ip-header-condition-v6-masks.bro b/testing/btest/signatures/dst-ip-header-condition-v6-masks.bro new file mode 100644 index 0000000000..d82a76e78d --- /dev/null +++ b/testing/btest/signatures/dst-ip-header-condition-v6-masks.bro @@ 
-0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s dst-ip-eq -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-eq.out +# @TEST-EXEC: bro -b -s dst-ip-eq-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-eq-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-eq-list.out + +# @TEST-EXEC: bro -b -s dst-ip-ne -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne.out +# @TEST-EXEC: bro -b -s dst-ip-ne-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne-list.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff dst-ip-eq.out +# @TEST-EXEC: btest-diff dst-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-eq-list.out + +# @TEST-EXEC: btest-diff dst-ip-ne.out +# @TEST-EXEC: btest-diff dst-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-ne-list.out +# @TEST-EXEC: btest-diff dst-ip-ne-list-nomatch.out + +@TEST-START-FILE dst-ip-eq.sig +signature id { + dst-ip == [2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "dst-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-nomatch.sig +signature id { + dst-ip == [fe80::0]/16 + event "dst-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-list.sig +signature id { + dst-ip == 10.0.0.0/8,[fe80::0]/16,[2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "dst-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne.sig +signature id { + dst-ip != [2001:4f8:4:7:2e0:81ff:fe52:0]/120 + event "dst-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-nomatch.sig +signature id { + dst-ip != [2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "dst-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list.sig +signature id { + dst-ip != 10.0.0.0/8,[fe80::0]/16 + event "dst-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list-nomatch.sig +signature id { + dst-ip != 10.0.0.0/8,[fe80::1]/16,[2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "dst-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/dst-ip-header-condition-v6.bro b/testing/btest/signatures/dst-ip-header-condition-v6.bro new file mode 100644 index 0000000000..e629fb4462 --- /dev/null +++ b/testing/btest/signatures/dst-ip-header-condition-v6.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s dst-ip-eq -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-eq.out +# @TEST-EXEC: bro -b -s dst-ip-eq-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-eq-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-eq-list.out + +# @TEST-EXEC: bro -b -s dst-ip-ne -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne.out +# @TEST-EXEC: bro -b -s dst-ip-ne-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne-list.out +# @TEST-EXEC: bro -b -s dst-ip-ne-list-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff dst-ip-eq.out +# @TEST-EXEC: btest-diff dst-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-eq-list.out + +# @TEST-EXEC: btest-diff dst-ip-ne.out +# 
@TEST-EXEC: btest-diff dst-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff dst-ip-ne-list.out +# @TEST-EXEC: btest-diff dst-ip-ne-list-nomatch.out + +@TEST-START-FILE dst-ip-eq.sig +signature id { + dst-ip == [2001:4f8:4:7:2e0:81ff:fe52:9a6b] + event "dst-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-nomatch.sig +signature id { + dst-ip == 10.0.0.1 + event "dst-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-eq-list.sig +signature id { + dst-ip == 10.0.0.1,10.0.0.2,[fe80::1],[2001:4f8:4:7:2e0:81ff:fe52:9a6b] + event "dst-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne.sig +signature id { + dst-ip != 10.0.0.1 + event "dst-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-nomatch.sig +signature id { + dst-ip != [2001:4f8:4:7:2e0:81ff:fe52:9a6b] + event "dst-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list.sig +signature id { + dst-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1] + event "dst-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-ip-ne-list-nomatch.sig +signature id { + dst-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1],[2001:4f8:4:7:2e0:81ff:fe52:9a6b] + event "dst-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/dst-port-header-condition.bro b/testing/btest/signatures/dst-port-header-condition.bro new file mode 100644 index 0000000000..08ba07b0de --- /dev/null +++ b/testing/btest/signatures/dst-port-header-condition.bro @@ -0,0 +1,164 @@ +# @TEST-EXEC: bro -b -s dst-port-eq -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >dst-port-eq.out +# @TEST-EXEC: bro -b -s dst-port-eq-nomatch -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >dst-port-eq-nomatch.out +# @TEST-EXEC: bro -b -s dst-port-eq-list -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >dst-port-eq-list.out +# @TEST-EXEC: bro -b -s dst-port-eq -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-eq-ip6.out + +# @TEST-EXEC: bro -b -s dst-port-ne -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-ne.out +# @TEST-EXEC: bro -b -s dst-port-ne-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-ne-nomatch.out +# @TEST-EXEC: bro -b -s dst-port-ne-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-ne-list.out +# @TEST-EXEC: bro -b -s dst-port-ne-list-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-ne-list-nomatch.out + +# @TEST-EXEC: bro -b -s dst-port-lt -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-lt.out +# @TEST-EXEC: bro -b -s dst-port-lt-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-lt-nomatch.out +# @TEST-EXEC: bro -b -s dst-port-lte1 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-lte1.out +# @TEST-EXEC: bro -b -s dst-port-lte2 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-lte2.out +# @TEST-EXEC: bro -b -s dst-port-lte-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-lte-nomatch.out + +# @TEST-EXEC: bro -b -s dst-port-gt -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-gt.out +# @TEST-EXEC: bro -b -s dst-port-gt-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-gt-nomatch.out +# @TEST-EXEC: bro -b -s dst-port-gte1 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-gte1.out +# @TEST-EXEC: bro -b -s dst-port-gte2 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-gte2.out +# @TEST-EXEC: bro 
-b -s dst-port-gte-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >dst-port-gte-nomatch.out + +# @TEST-EXEC: btest-diff dst-port-eq.out +# @TEST-EXEC: btest-diff dst-port-eq-nomatch.out +# @TEST-EXEC: btest-diff dst-port-eq-list.out +# @TEST-EXEC: btest-diff dst-port-eq-ip6.out +# @TEST-EXEC: btest-diff dst-port-ne.out +# @TEST-EXEC: btest-diff dst-port-ne-nomatch.out +# @TEST-EXEC: btest-diff dst-port-ne-list.out +# @TEST-EXEC: btest-diff dst-port-ne-list-nomatch.out +# @TEST-EXEC: btest-diff dst-port-lt.out +# @TEST-EXEC: btest-diff dst-port-lt-nomatch.out +# @TEST-EXEC: btest-diff dst-port-lte1.out +# @TEST-EXEC: btest-diff dst-port-lte2.out +# @TEST-EXEC: btest-diff dst-port-lte-nomatch.out +# @TEST-EXEC: btest-diff dst-port-gt.out +# @TEST-EXEC: btest-diff dst-port-gt-nomatch.out +# @TEST-EXEC: btest-diff dst-port-gte1.out +# @TEST-EXEC: btest-diff dst-port-gte2.out +# @TEST-EXEC: btest-diff dst-port-gte-nomatch.out + +@TEST-START-FILE dst-port-eq.sig +signature id { + dst-port == 13000 + event "dst-port-eq" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-eq-nomatch.sig +signature id { + dst-port == 22 + event "dst-port-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-eq-list.sig +signature id { + dst-port == 22,23,24,13000 + event "dst-port-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-ne.sig +signature id { + dst-port != 22 + event "dst-port-ne" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-ne-nomatch.sig +signature id { + dst-port != 13000 + event "dst-port-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-ne-list.sig +signature id { + dst-port != 22,23,24,25 + event "dst-port-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-ne-list-nomatch.sig +signature id { + dst-port != 22,23,24,25,13000 + event "dst-port-ne-list-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-lt.sig +signature id { + dst-port < 13001 + event "dst-port-lt" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-lt-nomatch.sig +signature id { + dst-port < 13000 + event "dst-port-lt-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-lte1.sig +signature id { + dst-port <= 13000 + event "dst-port-lte1" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-lte2.sig +signature id { + dst-port <= 13001 + event "dst-port-lte2" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-lte-nomatch.sig +signature id { + dst-port <= 12999 + event "dst-port-lte-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-gt.sig +signature id { + dst-port > 12999 + event "dst-port-gt" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-gt-nomatch.sig +signature id { + dst-port > 13000 + event "dst-port-gt-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-gte1.sig +signature id { + dst-port >= 13000 + event "dst-port-gte1" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-gte2.sig +signature id { + dst-port >= 12999 + event "dst-port-gte2" +} +@TEST-END-FILE + +@TEST-START-FILE dst-port-gte-nomatch.sig +signature id { + dst-port >= 13001 + event "dst-port-gte-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/eval-condition.bro b/testing/btest/signatures/eval-condition.bro new file mode 100644 index 0000000000..f3f1171da6 --- /dev/null +++ b/testing/btest/signatures/eval-condition.bro @@ -0,0 +1,20 @@ +# @TEST-EXEC: bro -r $TRACES/ftp-ipv4.trace %INPUT +# @TEST-EXEC: btest-diff conn.log + +@load-sigs blah.sig + +@TEST-START-FILE 
blah.sig +signature blah + { + ip-proto == tcp + src-port == 21 + payload /.*/ + eval mark_conn + } +@TEST-END-FILE + +function mark_conn(state: signature_state, data: string): bool + { + add state$conn$service["blah"]; + return T; + } diff --git a/testing/btest/signatures/header-header-condition.bro b/testing/btest/signatures/header-header-condition.bro new file mode 100644 index 0000000000..ad78ba4513 --- /dev/null +++ b/testing/btest/signatures/header-header-condition.bro @@ -0,0 +1,78 @@ +# @TEST-EXEC: bro -b -s ip -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >ip.out +# @TEST-EXEC: bro -b -s ip-mask -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >ip-mask.out +# @TEST-EXEC: bro -b -s ip6 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >ip6.out +# @TEST-EXEC: bro -b -s udp -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >udp.out +# @TEST-EXEC: bro -b -s tcp -r $TRACES/chksums/ip4-tcp-good-chksum.pcap %INPUT >tcp.out +# @TEST-EXEC: bro -b -s icmp -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >icmp.out +# @TEST-EXEC: bro -b -s icmp6 -r $TRACES/chksums/ip6-icmp6-good-chksum.pcap %INPUT >icmp6.out +# @TEST-EXEC: bro -b -s val-mask -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >val-mask.out + +# @TEST-EXEC: btest-diff ip.out +# @TEST-EXEC: btest-diff ip-mask.out +# @TEST-EXEC: btest-diff ip6.out +# @TEST-EXEC: btest-diff udp.out +# @TEST-EXEC: btest-diff tcp.out +# @TEST-EXEC: btest-diff icmp.out +# @TEST-EXEC: btest-diff icmp6.out +# @TEST-EXEC: btest-diff val-mask.out + +@TEST-START-FILE ip.sig +signature id { + header ip[10:1] == 0x7c + event "ip" +} +@TEST-END-FILE + +@TEST-START-FILE ip-mask.sig +signature id { + header ip[16:4] == 127.0.0.0/24 + event "ip-mask" +} +@TEST-END-FILE + +@TEST-START-FILE ip6.sig +signature id { + header ip6[10:1] == 0x04 + event "ip6" +} +@TEST-END-FILE + +@TEST-START-FILE udp.sig +signature id { + header udp[2:1] == 0x32 + event "udp" +} +@TEST-END-FILE + +@TEST-START-FILE tcp.sig +signature id { + header tcp[3:4] == 0x50000000 + event "tcp" +} +@TEST-END-FILE + +@TEST-START-FILE icmp.sig +signature id { + header icmp[2:2] == 0xf7ff + event "icmp" +} +@TEST-END-FILE + +@TEST-START-FILE icmp6.sig +signature id { + header icmp6[0:1] == 0x80 + event "icmp6" +} +@TEST-END-FILE + +@TEST-START-FILE val-mask.sig +signature id { + header udp[2:1] & 0x0f == 0x02 + event "val-mask" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/id-lookup.bro b/testing/btest/signatures/id-lookup.bro new file mode 100644 index 0000000000..2e32224bc8 --- /dev/null +++ b/testing/btest/signatures/id-lookup.bro @@ -0,0 +1,16 @@ +# @TEST-EXEC: bro -b -s id -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >id.out +# @TEST-EXEC: btest-diff id.out + +@TEST-START-FILE id.sig +signature id { + ip-proto == udp_proto_number + event "id" +} +@TEST-END-FILE + +const udp_proto_number = 17; + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/ip-proto-header-condition.bro b/testing/btest/signatures/ip-proto-header-condition.bro new file mode 100644 index 0000000000..52d58ea223 --- /dev/null +++ b/testing/btest/signatures/ip-proto-header-condition.bro @@ -0,0 +1,48 @@ +# @TEST-EXEC: bro -b -s tcp -r $TRACES/chksums/ip4-tcp-good-chksum.pcap %INPUT >tcp_in_ip4.out +# @TEST-EXEC: bro -b -s udp -r 
$TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >udp_in_ip4.out +# @TEST-EXEC: bro -b -s icmp -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >icmp_in_ip4.out +# @TEST-EXEC: bro -b -s tcp -r $TRACES/chksums/ip6-tcp-good-chksum.pcap %INPUT >tcp_in_ip6.out +# @TEST-EXEC: bro -b -s udp -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >udp_in_ip6.out +# @TEST-EXEC: bro -b -s icmp6 -r $TRACES/chksums/ip6-icmp6-good-chksum.pcap %INPUT >icmp6_in_ip6.out +# @TEST-EXEC: bro -b -s icmp -r $TRACES/chksums/ip6-icmp6-good-chksum.pcap %INPUT >nomatch.out + +# @TEST-EXEC: btest-diff tcp_in_ip4.out +# @TEST-EXEC: btest-diff udp_in_ip4.out +# @TEST-EXEC: btest-diff icmp_in_ip4.out +# @TEST-EXEC: btest-diff tcp_in_ip6.out +# @TEST-EXEC: btest-diff udp_in_ip6.out +# @TEST-EXEC: btest-diff icmp6_in_ip6.out +# @TEST-EXEC: btest-diff nomatch.out + +@TEST-START-FILE tcp.sig +signature tcp_transport { + ip-proto == tcp + event "tcp" +} +@TEST-END-FILE + +@TEST-START-FILE udp.sig +signature udp_transport { + ip-proto == udp + event "udp" +} +@TEST-END-FILE + +@TEST-START-FILE icmp.sig +signature icmp_transport { + ip-proto == icmp + event "icmp" +} +@TEST-END-FILE + +@TEST-START-FILE icmp6.sig +signature icmp6_transport { + ip-proto == icmp6 + event "icmp6" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/load-sigs.bro b/testing/btest/signatures/load-sigs.bro new file mode 100644 index 0000000000..3e08338f2c --- /dev/null +++ b/testing/btest/signatures/load-sigs.bro @@ -0,0 +1,21 @@ +# A test of signature loading using @load-sigs. + +# @TEST-EXEC: bro -C -r $TRACES/wikipedia.trace %INPUT >output +# @TEST-EXEC: btest-diff output + +@load-sigs ./subdir/mysigs.sig + +event signature_match(state: signature_state, msg: string, data: string) + { + print state$conn$id; + print msg; + print data; + } + +@TEST-START-FILE subdir/mysigs.sig +signature my-sig { +ip-proto == tcp +payload /GET \/images/ +event "works" +} +@TEST-END-FILE diff --git a/testing/btest/signatures/src-ip-header-condition-v4-masks.bro b/testing/btest/signatures/src-ip-header-condition-v4-masks.bro new file mode 100644 index 0000000000..1e272c81ee --- /dev/null +++ b/testing/btest/signatures/src-ip-header-condition-v4-masks.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s src-ip-eq -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-eq.out +# @TEST-EXEC: bro -b -s src-ip-eq-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-eq-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-eq-list.out + +# @TEST-EXEC: bro -b -s src-ip-ne -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne.out +# @TEST-EXEC: bro -b -s src-ip-ne-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-ne-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne-list.out +# @TEST-EXEC: bro -b -s src-ip-ne-list-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff src-ip-eq.out +# @TEST-EXEC: btest-diff src-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff src-ip-eq-list.out + +# @TEST-EXEC: btest-diff src-ip-ne.out +# @TEST-EXEC: btest-diff src-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff src-ip-ne-list.out +# @TEST-EXEC: btest-diff src-ip-ne-list-nomatch.out + +@TEST-START-FILE src-ip-eq.sig +signature id { + src-ip 
== 192.168.1.0/24 + event "src-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-nomatch.sig +signature id { + src-ip == 10.0.0.0/8 + event "src-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-list.sig +signature id { + src-ip == 10.0.0.0/8,[fe80::0]/16,192.168.1.0/24 + event "src-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne.sig +signature id { + src-ip != 10.0.0.0/8 + event "src-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-nomatch.sig +signature id { + src-ip != 192.168.1.0/24 + event "src-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list.sig +signature id { + src-ip != 10.0.0.0/8,[fe80::0]/16 + event "src-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list-nomatch.sig +signature id { + src-ip != 10.0.0.0/8,[fe80::0]/16,192.168.1.0/24 + event "src-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/src-ip-header-condition-v4.bro b/testing/btest/signatures/src-ip-header-condition-v4.bro new file mode 100644 index 0000000000..746e41a4be --- /dev/null +++ b/testing/btest/signatures/src-ip-header-condition-v4.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s src-ip-eq -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-eq.out +# @TEST-EXEC: bro -b -s src-ip-eq-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-eq-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-eq-list.out + +# @TEST-EXEC: bro -b -s src-ip-ne -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne.out +# @TEST-EXEC: bro -b -s src-ip-ne-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-ne-list -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne-list.out +# @TEST-EXEC: bro -b -s src-ip-ne-list-nomatch -r $TRACES/chksums/ip4-icmp-good-chksum.pcap %INPUT >src-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff src-ip-eq.out +# @TEST-EXEC: btest-diff src-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff src-ip-eq-list.out + +# @TEST-EXEC: btest-diff src-ip-ne.out +# @TEST-EXEC: btest-diff src-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff src-ip-ne-list.out +# @TEST-EXEC: btest-diff src-ip-ne-list-nomatch.out + +@TEST-START-FILE src-ip-eq.sig +signature id { + src-ip == 192.168.1.100 + event "src-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-nomatch.sig +signature id { + src-ip == 10.0.0.1 + event "src-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-list.sig +signature id { + src-ip == 10.0.0.1,10.0.0.2,[fe80::1],192.168.1.100 + event "src-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne.sig +signature id { + src-ip != 10.0.0.1 + event "src-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-nomatch.sig +signature id { + src-ip != 192.168.1.100 + event "src-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list.sig +signature id { + src-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1] + event "src-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list-nomatch.sig +signature id { + src-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1],192.168.1.100 + event "src-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git 
a/testing/btest/signatures/src-ip-header-condition-v6-masks.bro b/testing/btest/signatures/src-ip-header-condition-v6-masks.bro new file mode 100644 index 0000000000..3c4fbf5526 --- /dev/null +++ b/testing/btest/signatures/src-ip-header-condition-v6-masks.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s src-ip-eq -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-eq.out +# @TEST-EXEC: bro -b -s src-ip-eq-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-eq-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-eq-list.out + +# @TEST-EXEC: bro -b -s src-ip-ne -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne.out +# @TEST-EXEC: bro -b -s src-ip-ne-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-ne-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne-list.out +# @TEST-EXEC: bro -b -s src-ip-ne-list-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff src-ip-eq.out +# @TEST-EXEC: btest-diff src-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff src-ip-eq-list.out + +# @TEST-EXEC: btest-diff src-ip-ne.out +# @TEST-EXEC: btest-diff src-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff src-ip-ne-list.out +# @TEST-EXEC: btest-diff src-ip-ne-list-nomatch.out + +@TEST-START-FILE src-ip-eq.sig +signature id { + src-ip == [2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "src-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-nomatch.sig +signature id { + src-ip == [fe80::0]/16 + event "src-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-list.sig +signature id { + src-ip == 10.0.0.0/8,[fe80::0]/16,[2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "src-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne.sig +signature id { + src-ip != [2001:4f8:4:7:2e0:81ff:fe52:0]/120 + event "src-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-nomatch.sig +signature id { + src-ip != [2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "src-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list.sig +signature id { + src-ip != 10.0.0.0/8,[fe80::0]/16 + event "src-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list-nomatch.sig +signature id { + src-ip != 10.0.0.0/8,[fe80::1]/16,[2001:4f8:4:7:2e0:81ff:fe52:0]/112 + event "src-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/src-ip-header-condition-v6.bro b/testing/btest/signatures/src-ip-header-condition-v6.bro new file mode 100644 index 0000000000..613a3dd4c1 --- /dev/null +++ b/testing/btest/signatures/src-ip-header-condition-v6.bro @@ -0,0 +1,71 @@ +# @TEST-EXEC: bro -b -s src-ip-eq -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-eq.out +# @TEST-EXEC: bro -b -s src-ip-eq-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-eq-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-eq-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-eq-list.out + +# @TEST-EXEC: bro -b -s src-ip-ne -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne.out +# @TEST-EXEC: bro -b -s src-ip-ne-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne-nomatch.out +# @TEST-EXEC: bro -b -s src-ip-ne-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne-list.out +# @TEST-EXEC: bro -b -s src-ip-ne-list-nomatch 
-r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-ip-ne-list-nomatch.out + +# @TEST-EXEC: btest-diff src-ip-eq.out +# @TEST-EXEC: btest-diff src-ip-eq-nomatch.out +# @TEST-EXEC: btest-diff src-ip-eq-list.out + +# @TEST-EXEC: btest-diff src-ip-ne.out +# @TEST-EXEC: btest-diff src-ip-ne-nomatch.out +# @TEST-EXEC: btest-diff src-ip-ne-list.out +# @TEST-EXEC: btest-diff src-ip-ne-list-nomatch.out + +@TEST-START-FILE src-ip-eq.sig +signature id { + src-ip == [2001:4f8:4:7:2e0:81ff:fe52:ffff] + event "src-ip-eq" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-nomatch.sig +signature id { + src-ip == 10.0.0.1 + event "src-ip-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-eq-list.sig +signature id { + src-ip == 10.0.0.1,10.0.0.2,[fe80::1],[2001:4f8:4:7:2e0:81ff:fe52:ffff] + event "src-ip-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne.sig +signature id { + src-ip != 10.0.0.1 + event "src-ip-ne" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-nomatch.sig +signature id { + src-ip != [2001:4f8:4:7:2e0:81ff:fe52:ffff] + event "src-ip-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list.sig +signature id { + src-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1] + event "src-ip-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-ip-ne-list-nomatch.sig +signature id { + src-ip != 10.0.0.1,10.0.0.2,10.0.0.3,[fe80::1],[2001:4f8:4:7:2e0:81ff:fe52:ffff] + event "src-ip-ne-list-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/btest/signatures/src-port-header-condition.bro b/testing/btest/signatures/src-port-header-condition.bro new file mode 100644 index 0000000000..ea9e08ce2b --- /dev/null +++ b/testing/btest/signatures/src-port-header-condition.bro @@ -0,0 +1,164 @@ +# @TEST-EXEC: bro -b -s src-port-eq -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >src-port-eq.out +# @TEST-EXEC: bro -b -s src-port-eq-nomatch -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >src-port-eq-nomatch.out +# @TEST-EXEC: bro -b -s src-port-eq-list -r $TRACES/chksums/ip4-udp-good-chksum.pcap %INPUT >src-port-eq-list.out +# @TEST-EXEC: bro -b -s src-port-eq -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-eq-ip6.out + +# @TEST-EXEC: bro -b -s src-port-ne -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-ne.out +# @TEST-EXEC: bro -b -s src-port-ne-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-ne-nomatch.out +# @TEST-EXEC: bro -b -s src-port-ne-list -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-ne-list.out +# @TEST-EXEC: bro -b -s src-port-ne-list-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-ne-list-nomatch.out + +# @TEST-EXEC: bro -b -s src-port-lt -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-lt.out +# @TEST-EXEC: bro -b -s src-port-lt-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-lt-nomatch.out +# @TEST-EXEC: bro -b -s src-port-lte1 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-lte1.out +# @TEST-EXEC: bro -b -s src-port-lte2 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-lte2.out +# @TEST-EXEC: bro -b -s src-port-lte-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-lte-nomatch.out + +# @TEST-EXEC: bro -b -s src-port-gt -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-gt.out +# @TEST-EXEC: bro -b -s src-port-gt-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT 
>src-port-gt-nomatch.out +# @TEST-EXEC: bro -b -s src-port-gte1 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-gte1.out +# @TEST-EXEC: bro -b -s src-port-gte2 -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-gte2.out +# @TEST-EXEC: bro -b -s src-port-gte-nomatch -r $TRACES/chksums/ip6-udp-good-chksum.pcap %INPUT >src-port-gte-nomatch.out + +# @TEST-EXEC: btest-diff src-port-eq.out +# @TEST-EXEC: btest-diff src-port-eq-nomatch.out +# @TEST-EXEC: btest-diff src-port-eq-list.out +# @TEST-EXEC: btest-diff src-port-eq-ip6.out +# @TEST-EXEC: btest-diff src-port-ne.out +# @TEST-EXEC: btest-diff src-port-ne-nomatch.out +# @TEST-EXEC: btest-diff src-port-ne-list.out +# @TEST-EXEC: btest-diff src-port-ne-list-nomatch.out +# @TEST-EXEC: btest-diff src-port-lt.out +# @TEST-EXEC: btest-diff src-port-lt-nomatch.out +# @TEST-EXEC: btest-diff src-port-lte1.out +# @TEST-EXEC: btest-diff src-port-lte2.out +# @TEST-EXEC: btest-diff src-port-lte-nomatch.out +# @TEST-EXEC: btest-diff src-port-gt.out +# @TEST-EXEC: btest-diff src-port-gt-nomatch.out +# @TEST-EXEC: btest-diff src-port-gte1.out +# @TEST-EXEC: btest-diff src-port-gte2.out +# @TEST-EXEC: btest-diff src-port-gte-nomatch.out + +@TEST-START-FILE src-port-eq.sig +signature id { + src-port == 30000 + event "src-port-eq" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-eq-nomatch.sig +signature id { + src-port == 22 + event "src-port-eq-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-eq-list.sig +signature id { + src-port == 22,23,24,30000 + event "src-port-eq-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-ne.sig +signature id { + src-port != 22 + event "src-port-ne" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-ne-nomatch.sig +signature id { + src-port != 30000 + event "src-port-ne-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-ne-list.sig +signature id { + src-port != 22,23,24,25 + event "src-port-ne-list" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-ne-list-nomatch.sig +signature id { + src-port != 22,23,24,25,30000 + event "src-port-ne-list-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-lt.sig +signature id { + src-port < 30001 + event "src-port-lt" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-lt-nomatch.sig +signature id { + src-port < 30000 + event "src-port-lt-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-lte1.sig +signature id { + src-port <= 30000 + event "src-port-lte1" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-lte2.sig +signature id { + src-port <= 30001 + event "src-port-lte2" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-lte-nomatch.sig +signature id { + src-port <= 29999 + event "src-port-lte-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-gt.sig +signature id { + src-port > 29999 + event "src-port-gt" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-gt-nomatch.sig +signature id { + src-port > 30000 + event "src-port-gt-nomatch" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-gte1.sig +signature id { + src-port >= 30000 + event "src-port-gte1" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-gte2.sig +signature id { + src-port >= 29999 + event "src-port-gte2" +} +@TEST-END-FILE + +@TEST-START-FILE src-port-gte-nomatch.sig +signature id { + src-port >= 30001 + event "src-port-gte-nomatch" +} +@TEST-END-FILE + +event signature_match(state: signature_state, msg: string, data: string) + { + print fmt("signature_match %s - %s", state$conn$id, msg); + } diff --git a/testing/external/Makefile b/testing/external/Makefile index 994d8962c0..9715b3d669 
100644 --- a/testing/external/Makefile +++ b/testing/external/Makefile @@ -6,11 +6,11 @@ DIAG=diag.log all: @rm -f $(DIAG) - @for repo in $(REPOS); do (cd $$repo && make ); done + @for repo in $(REPOS); do (cd $$repo && make -s ); done brief: @rm -f $(DIAG) - @for repo in $(REPOS); do (cd $$repo && make brief ); done + @for repo in $(REPOS); do (cd $$repo && make -s brief ); done init: git clone $(PUBLIC_REPO) @@ -24,3 +24,7 @@ push: status: @for repo in $(REPOS); do ( cd $$repo && echo '>>' $$repo && git status -bs && echo ); done +coverage: + @for repo in $(REPOS); do (cd $$repo && echo "Coverage for '$$repo' repo:" && make coverage); done + +.PHONY: all brief init pull push status coverage diff --git a/testing/external/scripts/diff-all b/testing/external/scripts/diff-all index 597c769687..e84416c088 100755 --- a/testing/external/scripts/diff-all +++ b/testing/external/scripts/diff-all @@ -1,6 +1,10 @@ #! /usr/bin/env bash # -# Runs btest-diff on $@ and fails if any fails. +# Runs btest-diff on $@ and fails if any fails. If $@ contains globs, we expand +# them relative to *both* the current directory and the test's baseline +# directory so that we spot missing files. Note that you will need to quote +# the globs in the TEST-EXEC line, as otherwise they will have been expanded relative +# to the current directory already when this script runs. diag=$TEST_DIAGNOSTICS @@ -14,8 +18,20 @@ fi rc=0; -for i in $@; do - if [[ "$i" != "loaded_scripts.log" && "$i" != "prof.log" ]]; then +files_cwd=`ls $@` +files_baseline=`cd $TEST_BASELINE && ls $@` + +for i in `echo $files_cwd $files_baseline | sort | uniq`; do + if [[ "$i" != "loaded_scripts.log" && "$i" != "prof.log" && "$i" != "debug.log" && "$i" != "stats.log" ]]; then + + if [[ "$i" == "reporter.log" ]]; then + # Do not diff the reporter.log if it only complains about missing + # GeoIP support. + if ! egrep -v "^#|Bro was not configured for GeoIP support" $i; then + continue + fi + fi + if ! btest-diff $i; then echo "" >>$diag echo "#### btest-diff $i" >>$diag diff --git a/testing/external/scripts/perftools-adapt-paths b/testing/external/scripts/perftools-adapt-paths new file mode 100755 index 0000000000..cfecd39993 --- /dev/null +++ b/testing/external/scripts/perftools-adapt-paths @@ -0,0 +1,10 @@ +#! /usr/bin/env bash +# +# Adapts relative paths in perftools stderr output to work +# directly from the top-level test directory. +# +# Returns an exit code > 0 if there's a leak. + +cat $1 | sed "s#bro *\"\./#../../../build/src/bro \".tmp/$TEST_NAME/#g" | sed 's/ *--gv//g' >$1.tmp && mv $1.tmp $1 + +grep -qv "detected leaks of" $1 diff --git a/testing/external/scripts/skel/test.skeleton b/testing/external/scripts/skel/test.skeleton index becd970d78..a76f3d4d09 100644 --- a/testing/external/scripts/skel/test.skeleton +++ b/testing/external/scripts/skel/test.skeleton @@ -1,5 +1,5 @@ # @TEST-EXEC: zcat $TRACES/trace.gz | bro -r - %INPUT -# @TEST-EXEC: $SCRIPTS/diff-all *.log +# @TEST-EXEC: $SCRIPTS/diff-all '*.log' @load testing-setup @load test-all-policy diff --git a/testing/external/scripts/testing-setup.bro b/testing/external/scripts/testing-setup.bro index fa5664a877..4b4d110864 100644 --- a/testing/external/scripts/testing-setup.bro +++ b/testing/external/scripts/testing-setup.bro @@ -1,6 +1,12 @@ # Sets some testing specific options. @ifdef ( SMTP::never_calc_md5 ) - # MDD5s can depend on libmagic output. + # MDD5s can depend on libmagic output. 
redef SMTP::never_calc_md5 = T; @endif + +@ifdef ( LogElasticSearch::server_host ) + # Set to empty so that logs-to-elasticsearch.bro doesn't try to set up + # log forwarding to ES. + redef LogElasticSearch::server_host = ""; +@endif diff --git a/testing/external/scripts/update-traces b/testing/external/scripts/update-traces index 8c27fb055e..8dd8d09e9c 100755 --- a/testing/external/scripts/update-traces +++ b/testing/external/scripts/update-traces @@ -69,9 +69,9 @@ cat $cfg | while read line; do eval "$proxy curl $auth -f --anyauth $url -o $file" echo mv $fp.tmp $fp - else - echo "`basename $file` already available." - fi + #else + # echo "`basename $file` already available." + fi rm -f $fp.tmp diff --git a/testing/external/subdir-btest.cfg b/testing/external/subdir-btest.cfg index dd9a57c879..fba89fb724 100644 --- a/testing/external/subdir-btest.cfg +++ b/testing/external/subdir-btest.cfg @@ -10,10 +10,11 @@ BROPATH=`bash -c %(testbase)s/../../../build/bro-path-dev`:%(testbase)s/../scrip BRO_SEED_FILE=%(testbase)s/../random.seed TZ=UTC LC_ALL=C -PATH=%(testbase)s/../../../build/src:%(testbase)s/../../../aux/btest:%(default_path)s +PATH=%(testbase)s/../../../build/src:%(testbase)s/../../../aux/btest:%(testbase)s/../../scripts:%(default_path)s TEST_DIFF_CANONIFIER=%(testbase)s/../../scripts/diff-canonifier-external TEST_DIFF_BRIEF=1 TRACES=%(testbase)s/Traces SCRIPTS=%(testbase)s/../scripts DIST=%(testbase)s/../../.. BUILD=%(testbase)s/../../../build +BRO_PROFILER_FILE=%(testbase)s/.tmp/script-coverage.XXXXXX diff --git a/testing/scripts/coverage-calc b/testing/scripts/coverage-calc new file mode 100755 index 0000000000..cc5253c75c --- /dev/null +++ b/testing/scripts/coverage-calc @@ -0,0 +1,59 @@ +#! /usr/bin/env python + +# This script aggregates many files containing Bro script coverage information +# into a single file and reports the overall coverage information. Usage: +# +# coverage-calc