Mirror of https://github.com/zeek/zeek.git, synced 2025-10-02 06:38:20 +00:00
Split long lines in input framework docs
parent ac9552a0cf
commit ab8a8d3ef3
3 changed files with 66 additions and 53 deletions
@@ -23,17 +23,18 @@ In contrast to the ASCII reader and writer, the SQLite plugins have not yet
 seen extensive use in production environments. While we are not aware
 of any issues with them, we urge to caution when using them
 in production environments. There could be lingering issues which only occur
-when the plugins are used with high amounts of data or in high-load environments.
+when the plugins are used with high amounts of data or in high-load
+environments.
 
 Logging Data into SQLite Databases
 ==================================
 
 Logging support for SQLite is available in all Bro installations starting with
-version 2.2. There is no need to load any additional scripts or for any compile-time
-configurations.
+version 2.2. There is no need to load any additional scripts or for any
+compile-time configurations.
 
-Sending data from existing logging streams to SQLite is rather straightforward. You
-have to define a filter which specifies SQLite as the writer.
+Sending data from existing logging streams to SQLite is rather straightforward.
+You have to define a filter which specifies SQLite as the writer.
 
 The following example code adds SQLite as a filter for the connection log:
 
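The ``sqlite-conn-filter.bro`` script referenced here is pulled in via ``btest-include`` and is not part of this diff. A minimal sketch of such a filter, assuming the stock ``Conn::LOG`` stream, the SQLite writer, and the ``/var/db/conn.sqlite`` database described in the following paragraphs, might look like::

   event bro_init()
       {
       # Illustrative only: the filter name and database path follow the
       # surrounding text, not necessarily the shipped example script.
       # $path is given without the .sqlite extension, on the assumption
       # that the writer appends it to produce /var/db/conn.sqlite.
       local filter: Log::Filter = [$name="sqlite",
                                    $path="/var/db/conn",
                                    $writer=Log::WRITER_SQLITE];
       Log::add_filter(Conn::LOG, filter);
       }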
@@ -44,15 +45,15 @@ The following example code adds SQLite as a filter for the connection log:
 # Make sure this parses correctly at least.
 @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-conn-filter.bro
 
-Bro will create the database file ``/var/db/conn.sqlite``, if it does not already exist.
-It will also create a table with the name ``conn`` (if it does not exist) and start
-appending connection information to the table.
+Bro will create the database file ``/var/db/conn.sqlite``, if it does not
+already exist. It will also create a table with the name ``conn`` (if it
+does not exist) and start appending connection information to the table.
 
-At the moment, SQLite databases are not rotated the same way ASCII log-files are. You
-have to take care to create them in an adequate location.
+At the moment, SQLite databases are not rotated the same way ASCII log-files
+are. You have to take care to create them in an adequate location.
 
-If you examine the resulting SQLite database, the schema will contain the same fields
-that are present in the ASCII log files::
+If you examine the resulting SQLite database, the schema will contain the
+same fields that are present in the ASCII log files::
 
 # sqlite3 /var/db/conn.sqlite
 
@@ -75,27 +76,31 @@ from being created, you can remove the default filter:
 Log::remove_filter(Conn::LOG, "default");
 
 
-To create a custom SQLite log file, you have to create a new log stream that contains
-just the information you want to commit to the database. Please refer to the
-:ref:`framework-logging` documentation on how to create custom log streams.
+To create a custom SQLite log file, you have to create a new log stream
+that contains just the information you want to commit to the database.
+Please refer to the :ref:`framework-logging` documentation on how to
+create custom log streams.
 
 Reading Data from SQLite Databases
 ==================================
 
-Like logging support, support for reading data from SQLite databases is built into Bro starting
-with version 2.2.
+Like logging support, support for reading data from SQLite databases is
+built into Bro starting with version 2.2.
 
-Just as with the text-based input readers (please refer to the :ref:`framework-input`
-documentation for them and for basic information on how to use the input-framework), the SQLite reader
-can be used to read data - in this case the result of SQL queries - into tables or into events.
+Just as with the text-based input readers (please refer to the
+:ref:`framework-input` documentation for them and for basic information
+on how to use the input framework), the SQLite reader can be used to
+read data - in this case the result of SQL queries - into tables or into
+events.
 
 Reading Data into Tables
 ------------------------
 
-To read data from a SQLite database, we first have to provide Bro with the information, how
-the resulting data will be structured. For this example, we expect that we have a SQLite database,
-which contains host IP addresses and the user accounts that are allowed to log into a specific
-machine.
+To read data from a SQLite database, we first have to provide Bro with
+the information, how the resulting data will be structured. For this
+example, we expect that we have a SQLite database, which contains
+host IP addresses and the user accounts that are allowed to log into
+a specific machine.
 
 The SQLite commands to create the schema are as follows::
 
@@ -107,8 +112,8 @@ The SQLite commands to create the schema are as follows::
 insert into machines_to_users values ('192.168.17.2', 'bernhard');
 insert into machines_to_users values ('192.168.17.3', 'seth,matthias');
 
-After creating a file called ``hosts.sqlite`` with this content, we can read the resulting table
-into Bro:
+After creating a file called ``hosts.sqlite`` with this content, we can
+read the resulting table into Bro:
 
 .. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-table.bro
 
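``sqlite-read-table.bro`` is likewise only included by reference. A sketch of such a read into a Bro table, assuming the ``machines_to_users`` schema above and a ``query`` config option understood by the SQLite reader, might look like::

   type Idx: record {
       host: addr;
   };

   type Val: record {
       users: string;
   };

   global hostslist: table[addr] of Val = table();

   event bro_init()
       {
       # Source path given without the .sqlite extension (assumed to be
       # appended by the reader); the query selects the whole table.
       Input::add_table([$source="/var/db/hosts",
                         $name="hosts",
                         $idx=Idx,
                         $val=Val,
                         $destination=hostslist,
                         $reader=Input::READER_SQLITE,
                         $config=table(["query"] = "select * from machines_to_users;")]);
       # One-shot import: close the stream once the data has been read.
       Input::remove("hosts");
       }

Once the stream has finished reading (signaled by ``Input::end_of_data``), ``hostslist[192.168.17.2]$users`` would contain ``"bernhard"``.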
@@ -117,22 +122,25 @@ into Bro:
 # Make sure this parses correctly at least.
 @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-table.bro
 
-Afterwards, that table can be used to check logins into hosts against the available
-userlist.
+Afterwards, that table can be used to check logins into hosts against
+the available userlist.
 
 Turning Data into Events
 ------------------------
 
-The second mode is to use the SQLite reader to output the input data as events. Typically there
-are two reasons to do this. First, when the structure of the input data is too complicated
-for a direct table import. In this case, the data can be read into an event which can then
-create the necessary data structures in Bro in scriptland.
+The second mode is to use the SQLite reader to output the input data as events.
+Typically there are two reasons to do this. First, when the structure of
+the input data is too complicated for a direct table import. In this case,
+the data can be read into an event which can then create the necessary
+data structures in Bro in scriptland.
 
-The second reason is, that the dataset is too big to hold it in memory. In this case, the checks
-can be performed on-demand, when Bro encounters a situation where it needs additional information.
+The second reason is, that the dataset is too big to hold it in memory. In
+this case, the checks can be performed on-demand, when Bro encounters a
+situation where it needs additional information.
 
-An example for this would be an internal huge database with malware hashes. Live database queries
-could be used to check the sporadically happening downloads against the database.
+An example for this would be an internal huge database with malware
+hashes. Live database queries could be used to check the sporadically
+happening downloads against the database.
 
 The SQLite commands to create the schema are as follows::
 
@@ -151,9 +159,10 @@ The SQLite commands to create the schema are as follows::
 insert into malware_hashes values ('73f45106968ff8dc51fba105fa91306af1ff6666', 'ftp-trace');
 
 
-The following code uses the file-analysis framework to get the sha1 hashes of files that are
-transmitted over the network. For each hash, a SQL-query is run against SQLite. If the query
-returns with a result, we had a hit against our malware-database and output the matching hash.
+The following code uses the file-analysis framework to get the sha1 hashes
+of files that are transmitted over the network. For each hash, a SQL-query
+is run against SQLite. If the query returns with a result, we had a hit
+against our malware-database and output the matching hash.
 
 .. btest-include:: ${DOC_ROOT}/frameworks/sqlite-read-events.bro
 
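``sqlite-read-events.bro`` is also only referenced here. A sketch of the pattern the text describes, with hypothetical names and a per-hash query passed through the reader's ``config`` table, might look like the following. The handler below takes the stream description as its first argument, which is what current versions of the input framework pass; the doc comments further down in this commit describe the ``Input::Event`` value as the first argument instead, so the exact signature should be checked against the version in use::

   type MalwareVal: record {
       hash: string;
       description: string;
   };

   event malware_hit(description: Input::EventDescription, tpe: Input::Event, r: MalwareVal)
       {
       print fmt("malware hit: %s (%s)", r$hash, r$description);
       }

   event file_new(f: fa_file)
       {
       # Ask the file analysis framework to compute SHA1 hashes.
       Files::add_analyzer(f, Files::ANALYZER_SHA1);
       }

   event file_hash(f: fa_file, kind: string, hash: string)
       {
       if ( kind != "sha1" )
           return;

       # One short-lived event stream per hash; malware_hit is only raised
       # if the query returns a row, i.e. the hash is in the database.
       Input::add_event([$source="/var/db/malware",
                         $name=hash,
                         $fields=MalwareVal,
                         $want_record=T,
                         $ev=malware_hit,
                         $reader=Input::READER_SQLITE,
                         $config=table(["query"] = fmt("select * from malware_hashes where hash = '%s';", hash))]);
       }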
@@ -162,5 +171,5 @@ returns with a result, we had a hit against our malware-database and output the
 # Make sure this parses correctly at least.
 @TEST-EXEC: bro ${DOC_ROOT}/frameworks/sqlite-read-events.bro
 
-If you run this script against the trace in ``testing/btest/Traces/ftp/ipv4.trace``, you
-will get one hit.
+If you run this script against the trace in
+``testing/btest/Traces/ftp/ipv4.trace``, you will get one hit.

@@ -73,22 +73,23 @@ export {
 idx: any;
 
 ## Record that defines the values used as the elements of the table.
-## If this is undefined, then *destination* has to be a set.
+## If this is undefined, then *destination* must be a set.
 val: any &optional;
 
 ## Defines if the value of the table is a record (default), or a single value.
 ## When this is set to false, then *val* can only contain one element.
 want_record: bool &default=T;
 
-## The event that is raised each time a value is added to, changed in or removed
-## from the table. The event will receive an Input::Event enum as the first
-## argument, the *idx* record as the second argument and the value (record) as the
-## third argument.
+## The event that is raised each time a value is added to, changed in or
+## removed from the table. The event will receive an Input::Event enum
+## as the first argument, the *idx* record as the second argument and
+## the value (record) as the third argument.
 ev: any &optional; # event containing idx, val as values.
 
-## Predicate function that can decide if an insertion, update or removal should
-## really be executed. Parameters are the same as for the event. If true is
-## returned, the update is performed. If false is returned, it is skipped.
+## Predicate function that can decide if an insertion, update or removal
+## should really be executed. Parameters are the same as for the event.
+## If true is returned, the update is performed. If false is returned,
+## it is skipped.
 pred: function(typ: Input::Event, left: any, right: any): bool &optional;
 
 ## A key/value table that will be passed on the reader.
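The ``pred`` field documented above can be illustrated with a small, hypothetical table stream that only admits entries for local hosts; the predicate follows the signature declared here, with concrete ``Idx``/``Val`` types in place of ``any``::

   type Idx: record {
       host: addr;
   };

   type Val: record {
       users: string;
   };

   global local_hosts: table[addr] of Val = table();

   # Returning T performs the insertion, update or removal; returning F
   # skips it, as described in the comment above.
   function only_local(typ: Input::Event, left: Idx, right: Val): bool
       {
       return left$host in 192.168.0.0/16;
       }

   event bro_init()
       {
       # hosts.log is a hypothetical input file read with the default reader.
       Input::add_table([$source="hosts.log",
                         $name="hosts",
                         $idx=Idx,
                         $val=Val,
                         $destination=local_hosts,
                         $pred=only_local]);
       }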
@@ -123,8 +124,9 @@ export {
 ## If this is set to true (default), the event receives all fields in a single record value.
 want_record: bool &default=T;
 
-## The event that is raised each time a new line is received from the reader.
-## The event will receive an Input::Event enum as the first element, and the fields as the following arguments.
+## The event that is raised each time a new line is received from the
+## reader. The event will receive an Input::Event enum as the first
+## element, and the fields as the following arguments.
 ev: any;
 
 ## A key/value table that will be passed on the reader.

@@ -11,7 +11,9 @@ export {
 ##
 ## name: name of the input stream.
 ## source: source of the input stream.
-## exit_code: exit code of the program, or number of the signal that forced the program to exit.
-## signal_exit: false when program exited normally, true when program was forced to exit by a signal.
+## exit_code: exit code of the program, or number of the signal that forced
+## the program to exit.
+## signal_exit: false when program exited normally, true when program was
+## forced to exit by a signal.
 global process_finished: event(name: string, source:string, exit_code:count, signal_exit:bool);
 }
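A handler sketch for the ``process_finished`` event declared above, assuming it is exported from the ``Input`` module as the surrounding ``export`` block suggests, might look like::

   event Input::process_finished(name: string, source: string, exit_code: count, signal_exit: bool)
       {
       if ( signal_exit )
           print fmt("input stream %s (%s) was terminated by signal %d", name, source, exit_code);
       else if ( exit_code != 0 )
           print fmt("input stream %s (%s) exited with code %d", name, source, exit_code);
       }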