Various unit test cleanup.

Updated README and collected coverage-related tests in a common dir.

There are still coverage failures, resulting either from the following
scripts not being @load'd in the default bro mode:

base/frameworks/time-machine/notice.bro
base/protocols/http/partial-content.bro
base/protocols/rpc/main.bro

or from the following scripts producing errors when @load'd:

policy/protocols/conn/scan.bro
policy/hot.conn.bro

If these are all scripts-in-progress, can we move them all to live
outside the main scripts/ directory until they're ready?
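
A quick way to reproduce the second class of failures (a sketch; assumes a
built bro in the PATH and BROPATH set up to find the distribution scripts):

    # Try @load'ing each problem script in bare mode, collecting errors.
    for script in policy/protocols/conn/scan.bro policy/hot.conn.bro; do
        bro -b $script 2>>load_errors || echo "failed: $script"
    done
    cat load_errors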
Jon Siwek 2011-09-27 12:41:30 -05:00
parent 24bb14390b
commit a71ab223c4
19 changed files with 119 additions and 95 deletions

View file

@ -42,6 +42,7 @@ rest_target(${psd} base/frameworks/notice/actions/add-geodata.bro)
rest_target(${psd} base/frameworks/notice/actions/drop.bro)
rest_target(${psd} base/frameworks/notice/actions/email_admin.bro)
rest_target(${psd} base/frameworks/notice/actions/page.bro)
rest_target(${psd} base/frameworks/notice/cluster.bro)
rest_target(${psd} base/frameworks/notice/extend-email/hostnames.bro)
rest_target(${psd} base/frameworks/notice/main.bro)
rest_target(${psd} base/frameworks/notice/weird.bro)
@ -125,6 +126,8 @@ rest_target(${psd} policy/protocols/ssh/detect-bruteforcing.bro)
rest_target(${psd} policy/protocols/ssh/geo-data.bro)
rest_target(${psd} policy/protocols/ssh/interesting-hostnames.bro)
rest_target(${psd} policy/protocols/ssh/software.bro)
rest_target(${psd} policy/protocols/ssl/expiring-certs.bro)
rest_target(${psd} policy/protocols/ssl/extract-certs-pem.bro)
rest_target(${psd} policy/protocols/ssl/known-certs.bro)
rest_target(${psd} policy/protocols/ssl/validate-certs.bro)
rest_target(${psd} policy/tuning/defaults/packet-fragments.bro)

View file

@ -1,3 +1,3 @@
@load ./main
@load ./postprocessors
@load ./writers/ascii

View file

@ -0,0 +1 @@
@load ./scp

View file

@ -17,4 +17,4 @@
@if ( Cluster::is_enabled() )
@load ./cluster
@endif
@endif

View file

@ -5,6 +5,7 @@
@load base/protocols/ssl
@load base/frameworks/notice
@load base/utils/directions-and-hosts
module SSL;
@ -59,4 +60,4 @@ event x509_certificate(c: connection, cert: X509, is_server: bool, chain_idx: co
$identifier=fmt("%s:%d-%s", c$id$resp_h, c$id$resp_p, md5_hash(der_cert))]);
}

View file

@ -14,6 +14,7 @@
##!
@load base/protocols/ssl
@load base/utils/directions-and-hosts
module SSL;
@ -45,4 +46,4 @@ event ssl_established(c: connection)
local side = Site::is_local_addr(c$id$resp_h) ? "local" : "remote";
local cmd = fmt("%s x509 -inform DER -outform PEM >> certs-%s.pem", openssl_util, side);
piped_exec(cmd, c$ssl$cert);
}
}
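
For context, the fmt() above builds a shell command that piped_exec then feeds
the raw DER certificate on stdin; for a local responder it comes out roughly
as follows (a sketch; assuming openssl_util is left at its default of
"openssl"):

    # Convert the DER cert on stdin to PEM, appending to a per-side file.
    openssl x509 -inform DER -outform PEM >> certs-local.pem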

View file

@ -49,6 +49,8 @@
@load protocols/ssh/geo-data.bro
@load protocols/ssh/interesting-hostnames.bro
@load protocols/ssh/software.bro
@load protocols/ssl/expiring-certs.bro
@load protocols/ssl/extract-certs-pem.bro
@load protocols/ssl/known-certs.bro
@load protocols/ssl/validate-certs.bro
@load tuning/__load__.bro

View file

@ -12,5 +12,7 @@
1 scripts/base/frameworks/logging/__load__.bro
2 scripts/base/frameworks/logging/./main.bro
3 build/src/base/logging.bif.bro
2 scripts/base/frameworks/logging/./postprocessors/__load__.bro
3 scripts/base/frameworks/logging/./postprocessors/./scp.bro
2 scripts/base/frameworks/logging/./writers/ascii.bro
0 scripts/policy/misc/loaded-scripts.bro

View file

@ -12,6 +12,8 @@
1 scripts/base/frameworks/logging/__load__.bro
2 scripts/base/frameworks/logging/./main.bro
3 build/src/base/logging.bif.bro
2 scripts/base/frameworks/logging/./postprocessors/__load__.bro
3 scripts/base/frameworks/logging/./postprocessors/./scp.bro
2 scripts/base/frameworks/logging/./writers/ascii.bro
0 scripts/base/init-default.bro
1 scripts/base/utils/site.bro

View file

@ -0,0 +1,6 @@
-./frameworks/cluster/nodes/manager.bro
-./frameworks/cluster/nodes/proxy.bro
-./frameworks/cluster/nodes/worker.bro
-./frameworks/cluster/setup-connections.bro
-./frameworks/metrics/cluster.bro
-./frameworks/notice/cluster.bro

View file

@ -1,97 +1,85 @@
BTest is a simple framework for writing unit tests. Each test consists of a set
of command lines that will be executed, and success is determined based on
their exit codes. In addition, output can optionally be compared against a
previously established baseline.
This is a test suite of small "unit tests" that verify small pieces of bro
functionality. They all utilize BTest, a simple framework/driver for
writing unit tests. More information about BTest can be found at
http://www.bro-ids.org/development/btest.html
More information about BTest can be found at http://www.icir.org/robin/btest/
The test suite's BTest configuration is handled through the
``btest.cfg`` file. Of particular interest is the "TestDirs" setting,
which specifies which directories BTest will recursively search for
test files.
Significant Subdirectories
==========================
This README contains the following sections:
* Contents of the testing/btest/ directory
* Running tests
* Adding tests
* Baseline/
Validated baselines for comparison against the output of each
test on future runs. If the new output differs from the Baseline
output, then the test fails.
Contents of the testing/btest/ directory:
Baseline/*/
The validated baselines for comparison against the output of each test on
future runs. If the new output differs from the Baseline output, then the
test fails.
Scripts/
Shell scripts invoked by BTest to support testing.
Traces/
* Traces/
Packet captures utilized by the various BTest tests.
logging/
Tests to validate the logging framework.
* scripts/
This hierarchy of tests emulates the hierarchy of the bro scripts/
directory.
policy/
Tests of the functionality of Bro's bundled policy scripts.
* coverage/
This collection of tests checks whether we're covering everything we
want to in terms of tests, documentation, and which scripts get loaded
in different Bro configurations. These tests are more prone to fail as
new bro scripts are developed and added to the distribution; an
individual test's comments are the best place to look for details on
what exactly it checks and for hints on how to fix it when it fails.
software/
Tests to validate Bro software not tested elsewhere.
Running Tests
=============
btest.cfg
Configuration file that specifies run-time settings for BTest. Of particular
interest is the "TestDirs" setting, which specifies which directories BTest
will recursively search for test files.
Either use the ``make all`` or ``make brief`` ``Makefile`` targets, or
run ``btest`` directly with desired options/arguments. Examples:
* btest <no arguments>
If you simply execute btest in this directory with no arguments,
then all directories listed as "TestDirs" in btest.cfg will be
searched recursively for test files.
Running tests:
btest <no arguments>
If you simply execute btest in this directory with no arguments, then all
directories listed as "TestDirs" in btest.cfg will be searched recursively
for test files. This is how the NMI automated build & test environment
invokes BTest to run all tests.
* btest <btest options> test_directory
You can specify a directory on the command line to run just the
tests contained in that directory. This is useful if you wish to
run all of a given type of test, without running all the tests
there are. For example, "btest scripts" will run all of the Bro
script unit tests.
btest test_directory
You can specify a directory on the command line to run just the tests
contained in that directory. This is useful if you wish to run all of a
given type of test, without running all the tests there are. For example,
"btest policy" will run all of the tests for Bro's bundled policy scripts.
btest test_directory/test_file
You can specify a single test file to run just that test. This is useful
when testing a single aspect of Bro functionality, and also when developing
* btest <btest options> test_directory/test_file
You can specify a single test file to run just that test. This
is useful when testing a single failing test or when developing
a new test.
Adding Tests
============
See either the `BTest documentation
<http://www.bro-ids.org/development/btest.html>`_ or the existing unit
tests for examples of what they actually look like. The essential
components of a new test include:
Adding tests:
* A test file in one of the subdirectories listed in the ``TestDirs``
of the ``btest.cfg`` file.
See the documentation at http://www.icir.org/robin/btest/ for information on
what BTests actually look like.
* If the unit test requires a known-good baseline output against which
future tests will be compared (via ``btest-diff``), then that baseline
output will need to live in the ``Baseline`` directory. Manually
adding that is possible, but it's easier to just use the ``-u`` or
``-U`` options of ``btest`` to do it for you (using ``btest -d`` on a
test for which no baseline exists will show you the output so it can
be verified first before adding/updating the baseline output).
The essential components of a new test include:
* A test file in a subdirectory of /testing/btest. This can be a
sub-subdirectory, as the search for test files is recursive from the
directories listed as "TestDirs" in btest.cfg.
* A baseline for the output of your test. Although the baseline will be stored
in testing/btest/Baseline/, you should allow btest to copy the correct files
to that location rather than copying them manually (see below).
If you create a new top-level testing directory for collecting related
tests, then you'll need to add it to the list of ``TestDirs`` in
``btest.cfg``. Do this only if your test really doesn't fit logically in
any of the extant directories.
If you create a new subdirectory under testing/btest, you'll need to add it to the
list of "TestDirs" in btest.cfg. Do this only if your test really doesn't fit
logically in any of the extant directories.
While developing your test, you can specify the "-t" command-line option to make
BTest preserve the testing/btest/.tmp directory. This directory holds the output
from your test run; you can inspect it in place to ensure it is correct and as
expected.
Once you are satisfied with the results in testing/btest/.tmp you can make BTest
store this output as the Baseline for the test by specifying the "-U" command-
line option.
When you are ready to commit your test to git, be sure the testing/btest/.tmp
directory is deleted, and use "git status" to ensure you correctly identify all
of the files that should be committed to the repository.
Note that any new test you add this way will automatically be included in the
testing done in the NMI automated build & test environment.
Note that any new test you add this way will automatically be included
in the testing done in the NMI automated build & test environment.
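
As a concrete illustration of the workflow described above, adding a new test
might look like this (a sketch; the test name is hypothetical):

    btest -d coverage/my-new-test   # run it and show the output (no baseline yet)
    btest -t coverage/my-new-test   # rerun, keeping .tmp/ around for inspection
    btest -U coverage/my-new-test   # record the current output as the Baseline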

View file

@ -1,5 +1,5 @@
[btest]
TestDirs = doc bifs language core scripts istate
TestDirs = doc bifs language core scripts istate coverage
TmpDir = %(testbase)s/.tmp
BaselineDir = %(testbase)s/Baseline
IgnoreDirs = .svn CVS .tmp
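
With coverage added to TestDirs, the collected tests now run as part of the
default suite, or can be run alone (a usage sketch):

    btest coverage    # run just the coverage tests
    btest             # run everything listed in TestDirs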

View file

@ -1,5 +1,7 @@
# This test is meant to cover whether the set of scripts that get loaded by
# default in bare mode matches a baseline of known defaults.
# default in bare mode matches a baseline of known defaults. The baseline
# should only need updating if something new is @load'd from init-bare.bro
# (or from an @load'd descendant of it).
#
# As the output has absolute paths in it, we need to remove the common
# prefix to make the test work everywhere. That's what the sed magic
@ -7,6 +9,6 @@
# @TEST-EXEC: bro -b misc/loaded-scripts
# @TEST-EXEC: test -e loaded_scripts.log
# @TEST-EXEC: cat loaded_scripts.log | egrep -v '#' | awk 'NR>1{print $2}' | sed -e ':a' -e '$!N' -e 's/^\(.*\).*\n\1.*/\1/' -e 'ta' >prefix
# @TEST-EXEC: cat loaded_scripts.log | egrep -v '#' | awk 'NR>0{print $2}' | sed -e ':a' -e '$!N' -e 's/^\(.*\).*\n\1.*/\1/' -e 'ta' >prefix
# @TEST-EXEC: cat loaded_scripts.log | sed "s#`cat prefix`##g" >canonified_loaded_scripts.log
# @TEST-EXEC: btest-diff canonified_loaded_scripts.log
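
The "sed magic" in the lines above computes the longest common prefix of all
the logged paths, which is then stripped so the baseline is
location-independent. A toy illustration (the paths are hypothetical):

    printf '%s\n' /opt/bro/share/bro/base/init-bare.bro \
                  /opt/bro/share/bro/base/utils/site.bro |
        sed -e ':a' -e '$!N' -e 's/^\(.*\).*\n\1.*/\1/' -e 'ta'
    # prints the common prefix: /opt/bro/share/bro/base/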

View file

@ -1,6 +1,9 @@
# Makes sure any given policy script in the scripts/ tree can be loaded in
# bare mode. btest-bg-run/btest-bg-wait are used to kill off scripts that
# block after loading, e.g. start listening on a socket.
# Makes sure any given bro script in the scripts/ tree can be loaded in
# bare mode without error. btest-bg-run/btest-bg-wait are used to kill off
# scripts that block after loading, e.g. start listening on a socket.
#
# Commonly, this test may fail if one forgets to @load some base/ scripts
# when writing a new bro script.
#
# @TEST-EXEC: test -d $DIST/scripts
# @TEST-EXEC: for script in `find $DIST/scripts -name \*\.bro`; do echo $script;if [[ "$script" =~ listen-clear|listen-ssl|controllee ]]; then rm -rf load_attempt .bgprocs; btest-bg-run load_attempt bro -b $script; btest-bg-wait -k 2; cat load_attempt/.stderr >>allerrors; else bro -b $script 2>>allerrors; fi done || exit 0
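
That one-line @TEST-EXEC is dense; expanded for readability, it amounts to:

    for script in `find $DIST/scripts -name \*\.bro`; do
        echo $script
        if [[ "$script" =~ listen-clear|listen-ssl|controllee ]]; then
            # These scripts block after loading (they listen or wait for a
            # controller), so run them in the background and kill them off
            # after a short timeout, then collect any errors they printed.
            rm -rf load_attempt .bgprocs
            btest-bg-run load_attempt bro -b $script
            btest-bg-wait -k 2
            cat load_attempt/.stderr >>allerrors
        else
            bro -b $script 2>>allerrors
        fi
    done || exit 0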

View file

@ -1,5 +1,7 @@
# This test is meant to cover whether the set of scripts that get loaded by
# default matches a baseline of known defaults.
# default matches a baseline of known defaults. When new scripts are
# added to the scripts/base/ directory, the baseline will usually just need
# to be updated.
#
# As the output has absolute paths in it, we need to remove the common
# prefix to make the test work everywhere. That's what the sed magic
@ -7,6 +9,6 @@
# @TEST-EXEC: bro misc/loaded-scripts
# @TEST-EXEC: test -e loaded_scripts.log
# @TEST-EXEC: cat loaded_scripts.log | egrep -v '#' | awk 'NR>1{print $2}' | sed -e ':a' -e '$!N' -e 's/^\(.*\).*\n\1.*/\1/' -e 'ta' >prefix
# @TEST-EXEC: cat loaded_scripts.log | egrep -v '#' | awk 'NR>0{print $2}' | sed -e ':a' -e '$!N' -e 's/^\(.*\).*\n\1.*/\1/' -e 'ta' >prefix
# @TEST-EXEC: cat loaded_scripts.log | sed "s#`cat prefix`##g" >canonified_loaded_scripts.log
# @TEST-EXEC: btest-diff canonified_loaded_scripts.log

View file

@ -1,5 +1,5 @@
# This tests that we're generating policy script documentation for all the
# available policy scripts. If this fails, then the genDocSources.sh needs
# This tests that we're generating bro script documentation for all the
# available bro scripts. If this fails, then the genDocSources.sh needs
# to be run to produce a new DocSourcesList.cmake or genDocSources.sh needs
# to be updated to blacklist undesired scripts.
#

View file

@ -1,11 +1,18 @@
# Makes sure that all base/* scripts are loaded by default via init-default.bro;
# and that all scripts loaded there actually exist.
#
# This test will fail if a new bro script is added under the scripts/base/
# directory and it is not also added as an @load in base/init-default.bro.
# In some cases a script under base/ is only loaded for a particular bro
# configuration (e.g. cluster operation); the missing_loads baseline can be
# adjusted to tolerate that.
#@TEST-EXEC: test -d $DIST/scripts/base
#@TEST-EXEC: test -e $DIST/scripts/base/init-default.bro
#@TEST-EXEC: ( cd $DIST/scripts/base && find . -name '*.bro' ) | sort >"all scripts found"
#@TEST-EXEC: bro misc/loaded-scripts
#@TEST-EXEC: cat loaded_scripts.log | egrep -v '/build/|/loaded-scripts.bro|#' | awk 'NR>1{print $2}' | sed 's#/./#/#g' >loaded_scripts.log.tmp
#@TEST-EXEC: cat loaded_scripts.log | egrep -v '/build/|/loaded-scripts.bro|#' | awk 'NR>0{print $2}' | sed 's#/./#/#g' >loaded_scripts.log.tmp
#@TEST-EXEC: cat loaded_scripts.log.tmp | sed -e ':a' -e '$!N' -e 's/^\(.*\).*\n\1.*/\1/' -e 'ta' >prefix
#@TEST-EXEC: cat loaded_scripts.log.tmp | sed "s#`cat prefix`#./#g" | sort >init-default.bro
#@TEST-EXEC: diff -u "all scripts found" init-default.bro 1>&2
#@TEST-EXEC: diff -u "all scripts found" init-default.bro | egrep "^-[^-]" > missing_loads
#@TEST-EXEC: btest-diff missing_loads
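
The final diff/egrep pair is what produces the missing_loads baseline shown
earlier: diff -u marks scripts that exist on disk but aren't @load'd with a
single leading "-", and the "^-[^-]" pattern keeps those lines while skipping
diff's "---" file header. A toy example (file contents are hypothetical):

    $ cat "all scripts found"
    ./frameworks/notice/cluster.bro
    ./frameworks/notice/main.bro
    $ cat init-default.bro
    ./frameworks/notice/main.bro
    $ diff -u "all scripts found" init-default.bro | egrep "^-[^-]"
    -./frameworks/notice/cluster.bro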

View file

@ -1,5 +1,9 @@
# Makes sure that all policy/* scripts are loaded in test-all-policy.bro; and that
# all scripts loaded there actually exist.
# Makes sure that all policy/* scripts are loaded in
# scripts/test-all-policy.bro and that all scripts loaded there actually exist.
#
# This test will fail if new bro scripts are added to the scripts/policy/
# directory. Correcting that just involves updating scripts/test-all-policy.bro
# to @load the new bro scripts.
@TEST-EXEC: test -e $DIST/scripts/test-all-policy.bro
@TEST-EXEC: test -d $DIST/scripts