Working on DataSeries support.

- The option to use integers instead of doubles was ignored.

- Renaming script-level options to remove the ds_ prefix.

- Log rotation didn't work.

- A set of simple unit tests.
This commit is contained in:
Robin Sommer 2012-04-09 17:30:57 -07:00
parent 952b6b293a
commit 7131feefbc
16 changed files with 1001 additions and 128 deletions

@@ -10,18 +10,18 @@ export {
## 'lzo' -- LZO compression. Very fast decompression times.
## 'gz' -- GZIP compression. Slower than LZF, but also produces smaller output.
## 'bz2' -- BZIP2 compression. Slower than GZIP, but also produces smaller output.
-const ds_compression = "lzf" &redef;
+const compression = "lzf" &redef;
## The extent buffer size.
## Larger values here lead to better compression and more efficient writes, but
## also increase the lag between the time events are received and the time they
## are actually written to disk.
-const ds_extent_size = 65536 &redef;
+const extent_size = 65536 &redef;
## Should we dump the XML schema we use for this ds file to disk?
## If yes, the XML schema shares the name of the logfile, but has
## an XML ending.
-const ds_dump_schema = T &redef;
+const dump_schema = F &redef;
## How many threads should DataSeries spawn to perform compression?
## Note that this dictates the number of threads per log stream. If
@@ -31,7 +31,7 @@ export {
## Default value is 1, which will spawn one thread / core / stream.
##
## MAX is 128, MIN is 1.
-const ds_num_threads = 1 &redef;
+const num_threads = 1 &redef;
## Should time be stored as an integer or a double?
## Storing time as a double leads to possible precision issues and
@@ -41,7 +41,7 @@ export {
## when working with the raw DataSeries format.
##
## Double timestamps are used by default.
-const ds_use_integer = F &redef;
+const use_integer_for_time = F &redef;
}
# Default function to postprocess a rotated DataSeries log file. It moves the
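Since all of these constants carry `&redef`, a site can override them from its own scripts rather than editing this file. A minimal sketch of how that might look after the rename; the `LogDataSeries` module name is an assumption, as the enclosing `module` declaration is not visible in this diff:

```zeek
# Hypothetical site-local overrides for the DataSeries writer options.
# Module name LogDataSeries is assumed; adjust to the actual module.
redef LogDataSeries::compression = "gz";          # smaller output, slower writes
redef LogDataSeries::extent_size = 131072;        # larger extents compress better
redef LogDataSeries::dump_schema = T;             # also write the XML schema file
redef LogDataSeries::num_threads = 2;             # compression threads per stream
redef LogDataSeries::use_integer_for_time = T;    # avoid double-precision issues
```

Note that `dump_schema` and `use_integer_for_time` default to `F` after this commit, so both must be redefined explicitly if the old `ds_dump_schema = T` behavior is desired.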