Split long lines in input framework docs

This commit is contained in:
Daniel Thayer 2015-08-21 16:30:51 -05:00
parent ac9552a0cf
commit ab8a8d3ef3
3 changed files with 66 additions and 53 deletions

View file

@ -23,17 +23,18 @@ In contrast to the ASCII reader and writer, the SQLite plugins have not yet
seen extensive use in production environments. While we are not aware
of any issues with them, we urge caution when using them
in production environments. There could be lingering issues which only occur
when the plugins are used with large amounts of data or in high-load
environments.

Logging Data into SQLite Databases
==================================

Logging support for SQLite is available in all Bro installations starting with
version 2.2. There is no need to load any additional scripts or to perform any
compile-time configuration.

Sending data from existing logging streams to SQLite is rather straightforward.
You have to define a filter which specifies SQLite as the writer.

The following example code adds SQLite as a filter for the connection log:
@ -44,15 +45,15 @@ The following example code adds SQLite as a filter for the connection log:
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-conn-filter.bro

Bro will create the database file ``/var/db/conn.sqlite`` if it does not
already exist. It will also create a table with the name ``conn`` (if it
does not exist) and start appending connection information to the table.

At the moment, SQLite databases are not rotated the same way ASCII log
files are. You have to take care to create them in an adequate location.

If you examine the resulting SQLite database, the schema will contain the
same fields that are present in the ASCII log files::

# sqlite3 /var/db/conn.sqlite
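The same inspection can also be scripted; the following is a minimal sketch in Python with the standard ``sqlite3`` module, using an in-memory stand-in for ``/var/db/conn.sqlite`` and a hypothetical subset of the conn fields (the real table has many more columns):

```python
import sqlite3

# Stand-in for /var/db/conn.sqlite built in memory; the column set here is
# a hypothetical subset of the fields a real conn log would contain.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE conn (ts DOUBLE, uid TEXT, proto TEXT)")

# Fetch the stored schema, much like the sqlite3 shell's .schema command.
schema = db.execute(
    "SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'conn'"
).fetchone()[0]
print(schema)  # → CREATE TABLE conn (ts DOUBLE, uid TEXT, proto TEXT)
```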
@ -75,27 +76,31 @@ from being created, you can remove the default filter:
Log::remove_filter(Conn::LOG, "default");

To create a custom SQLite log file, you have to create a new log stream
that contains just the information you want to commit to the database.
Please refer to the :ref:`framework-logging` documentation on how to
create custom log streams.
Reading Data from SQLite Databases
==================================

Like logging support, support for reading data from SQLite databases is
built into Bro starting with version 2.2.

Just as with the text-based input readers (please refer to the
:ref:`framework-input` documentation for them and for basic information
on how to use the input framework), the SQLite reader can be used to
read data - in this case the result of SQL queries - into tables or into
events.

Reading Data into Tables
------------------------

To read data from a SQLite database, we first have to tell Bro how the
resulting data will be structured. For this example, we assume a SQLite
database which contains host IP addresses and the user accounts that are
allowed to log into a specific machine.

The SQLite commands to create the schema are as follows::
@ -107,8 +112,8 @@ The SQLite commands to create the schema are as follows::
insert into machines_to_users values ('192.168.17.2', 'bernhard');
insert into machines_to_users values ('192.168.17.3', 'seth,matthias');

After creating a file called ``hosts.sqlite`` with this content, we can
read the resulting table into Bro:

.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-table.bro
@ -117,22 +122,25 @@ into Bro:
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-table.bro

Afterwards, that table can be used to check logins into hosts against
the available user list.
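Outside of Bro, the same lookup can be sketched with Python's standard ``sqlite3`` module. Note that the column names ``host`` and ``users`` are assumptions here, since the ``CREATE TABLE`` statement is not shown in the hunk above; only the two insert statements are taken from the documentation:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # in-memory stand-in for hosts.sqlite
# Assumed schema; the actual CREATE TABLE statement is not shown above.
db.execute("CREATE TABLE machines_to_users (host TEXT PRIMARY KEY, users TEXT)")
db.execute("INSERT INTO machines_to_users VALUES ('192.168.17.2', 'bernhard')")
db.execute("INSERT INTO machines_to_users VALUES ('192.168.17.3', 'seth,matthias')")

def allowed(host, user):
    # Mirror of the table lookup Bro performs: fetch the comma-separated
    # user list for the host and check membership.
    row = db.execute("SELECT users FROM machines_to_users WHERE host = ?",
                     (host,)).fetchone()
    return row is not None and user in row[0].split(",")

print(allowed("192.168.17.2", "bernhard"))  # → True
print(allowed("192.168.17.2", "seth"))      # → False
```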
Turning Data into Events
------------------------

The second mode is to use the SQLite reader to output the input data as events.
Typically there are two reasons to do this. First, when the structure of
the input data is too complicated for a direct table import. In this case,
the data can be read into an event which can then create the necessary
data structures in Bro in scriptland.

The second reason is that the dataset is too big to hold in memory. In
this case, the checks can be performed on demand when Bro encounters a
situation where it needs additional information.

An example of this would be a huge internal database with malware
hashes. Live database queries could be used to check sporadically
occurring downloads against the database.

The SQLite commands to create the schema are as follows::
@ -151,9 +159,10 @@ The SQLite commands to create the schema are as follows::
insert into malware_hashes values ('73f45106968ff8dc51fba105fa91306af1ff6666', 'ftp-trace');

The following code uses the file-analysis framework to get the sha1 hashes
of files that are transmitted over the network. For each hash, a SQL query
is run against SQLite. If the query returns a result, we have a hit
against our malware database and output the matching hash.

.. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-events.bro
@ -162,5 +171,5 @@ returns with a result, we had a hit against our malware-database and output the
# Make sure this parses correctly at least.
@TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-events.bro

If you run this script against the trace in
``testing/btest/Traces/ftp/ipv4.trace``, you will get one hit.
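The per-hash query itself can be sketched in Python with the standard ``sqlite3`` module. The column names ``hash`` and ``description`` are assumptions, since the ``CREATE TABLE`` statement is not shown in this hunk; the inserted row is the one from the documentation:

```python
import sqlite3

db = sqlite3.connect(":memory:")  # in-memory stand-in for the malware database
# Assumed schema; only the insert below appears in the documentation.
db.execute("CREATE TABLE malware_hashes (hash TEXT PRIMARY KEY, description TEXT)")
db.execute("INSERT INTO malware_hashes VALUES "
           "('73f45106968ff8dc51fba105fa91306af1ff6666', 'ftp-trace')")

def check_hash(sha1):
    # One query per observed file hash; a returned row is a hit.
    return db.execute("SELECT description FROM malware_hashes WHERE hash = ?",
                      (sha1,)).fetchone()

hit = check_hash("73f45106968ff8dc51fba105fa91306af1ff6666")
print(hit)  # → ('ftp-trace',)
```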

View file

@ -73,22 +73,23 @@ export {
idx: any;

## Record that defines the values used as the elements of the table.
## If this is undefined, then *destination* must be a set.
val: any &optional;

## Defines if the value of the table is a record (default), or a single value.
## When this is set to false, then *val* can only contain one element.
want_record: bool &default=T;

## The event that is raised each time a value is added to, changed in or
## removed from the table. The event will receive an Input::Event enum
## as the first argument, the *idx* record as the second argument and
## the value (record) as the third argument.
ev: any &optional; # event containing idx, val as values.

## Predicate function that can decide if an insertion, update or removal
## should really be executed. Parameters are the same as for the event.
## If true is returned, the update is performed. If false is returned,
## it is skipped.
pred: function(typ: Input::Event, left: any, right: any): bool &optional;

## A key/value table that will be passed on to the reader.
@ -123,8 +124,9 @@ export {
## If this is set to true (default), the event receives all fields in a single record value.
want_record: bool &default=T;

## The event that is raised each time a new line is received from the
## reader. The event will receive an Input::Event enum as the first
## element, and the fields as the following arguments.
ev: any;

## A key/value table that will be passed on to the reader.

View file

@ -11,7 +11,9 @@ export {
##
## name: name of the input stream.
## source: source of the input stream.
## exit_code: exit code of the program, or number of the signal that forced
##            the program to exit.
## signal_exit: false when program exited normally, true when program was
##              forced to exit by a signal.
global process_finished: event(name: string, source:string, exit_code:count, signal_exit:bool);
}