Removing DataSeries and ElasticSearch from configure script.

Robin Sommer 2014-08-13 21:16:01 -07:00
parent 58f3a715f2
commit f884fc6c11
4 changed files with 12 additions and 33 deletions

@@ -1 +1 @@
-Subproject commit 92351e44ee59e424546148ecb6a292ca6d625e75
+Subproject commit 2e19a879bd022e419016bb16803ee237afe00f12

@@ -1 +1 @@
-Subproject commit 9617273c1e81257c71b3b92a893437e5ea0e8547
+Subproject commit cf6617b1a7831ea1398fd87ca4a400ff1f583b50

configure (vendored)

@@ -39,8 +39,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
     --disable-auxtools       don't build or install auxiliary tools
     --disable-perftools      don't try to build with Google Perftools
     --disable-python         don't try to build python bindings for broccoli
-    --disable-dataseries     don't use the optional DataSeries log writer
-    --disable-elasticsearch  don't use the optional ElasticSearch log writer

   Required Packages in Non-Standard Locations:
     --with-openssl=PATH      path to OpenSSL install root
@@ -62,9 +60,6 @@ Usage: $0 [OPTION]... [VAR=VALUE]...
     --with-ruby-lib=PATH     path to ruby library
     --with-ruby-inc=PATH     path to ruby headers
     --with-swig=PATH         path to SWIG executable
-    --with-dataseries=PATH   path to DataSeries and Lintel libraries
-    --with-xml2=PATH         path to libxml2 installation (for DataSeries)
-    --with-curl=PATH         path to libcurl install root (for ElasticSearch)

   Packaging Options (for developers):
     --binary-package         toggle special logic for binary packaging
@@ -183,12 +178,6 @@ while [ $# -ne 0 ]; do
         --enable-ruby)
             append_cache_entry DISABLE_RUBY_BINDINGS BOOL false
             ;;
-        --disable-dataseries)
-            append_cache_entry DISABLE_DATASERIES BOOL true
-            ;;
-        --disable-elasticsearch)
-            append_cache_entry DISABLE_ELASTICSEARCH BOOL true
-            ;;
         --with-openssl=*)
             append_cache_entry OpenSSL_ROOT_DIR PATH $optarg
             ;;
@@ -243,16 +232,6 @@ while [ $# -ne 0 ]; do
         --with-swig=*)
             append_cache_entry SWIG_EXECUTABLE PATH $optarg
             ;;
-        --with-dataseries=*)
-            append_cache_entry DataSeries_ROOT_DIR PATH $optarg
-            append_cache_entry Lintel_ROOT_DIR PATH $optarg
-            ;;
-        --with-xml2=*)
-            append_cache_entry LibXML2_ROOT_DIR PATH $optarg
-            ;;
-        --with-curl=*)
-            append_cache_entry LibCURL_ROOT_DIR PATH $optarg
-            ;;
         --binary-package)
             append_cache_entry BINARY_PACKAGING_MODE BOOL true
             ;;


@@ -38,7 +38,7 @@ Bro's logging interface is built around three main abstractions:
 Writers
     A writer defines the actual output format for the information
     being logged. At the moment, Bro comes with only one type of
     writer, which produces tab separated ASCII files. In the
     future we will add further writers, like for binary output and
     direct logging into a database.
@@ -98,7 +98,7 @@ Note the fields that are set for the filter:
 ``include``
     A set limiting the fields to the ones given. The names
     correspond to those in the :bro:type:`Conn::Info` record, with
     sub-records unrolled by concatenating fields (separated with
     dots).

 Using the code above, you will now get a new log file ``origs.log``
@@ -155,7 +155,7 @@ that returns the desired path:
         {
         local filter: Log::Filter = [$name="conn-split", $path_func=split_log, $include=set("ts", "id.orig_h")];
         Log::add_filter(Conn::LOG, filter);
         }

 Running this will now produce two files, ``local.log`` and
 ``remote.log``, with the corresponding entries. One could extend this
@@ -263,7 +263,7 @@ specific destination exceeds a certain duration:
 .. code:: bro

     redef enum Notice::Type += {
         ## Indicates that a connection remained established longer
         ## than 5 minutes.
         Long_Conn_Found
     };
@@ -271,8 +271,8 @@ specific destination exceeds a certain duration:
     event Conn::log_conn(rec: Conn::Info)
         {
         if ( rec$duration > 5mins )
             NOTICE([$note=Long_Conn_Found,
                     $msg=fmt("unusually long conn to %s", rec$id$resp_h),
                     $id=rec$id]);
         }
@@ -335,11 +335,11 @@ example for the ``Foo`` module:
         # Define a hook event. By convention, this is called
         # "log_<stream>".
         global log_foo: event(rec: Info);
     }

 # This event should be handled at a higher priority so that when
 # users modify your stream later and they do it at priority 0,
 # their code runs after this.
 event bro_init() &priority=5
     {
@@ -356,7 +356,7 @@ it easily accessible across event handlers:
     foo: Info &optional;
     }

 Now you can use the :bro:id:`Log::write` method to output log records and
 save the logged ``Foo::Info`` record into the connection record:

 .. code:: bro
@@ -387,4 +387,4 @@ Bro supports the following built-in output formats other than ASCII:
     logging-input-sqlite

 Further formats are available as external plugins.