Broker had changed the semantics of remote logging: it sent over the
original Bro record containing the values to be logged, which on the
receiving side would then pass through the logging framework normally,
including triggering filters and events. The old communication system
however, special-cases logs: it sends already processed log entries,
just as they go into the log files, and without any receiver-side
filtering etc. This is more efficient, as it short-cuts the processing
path, and also avoids the more expensive Val serialization. It also
lets the sender determine the specifics of what gets logged (and how).
This commit changes Broker over to now use the same semantics as the
old communication system.
TODOs:
- The new Broker code doesn't have consistent #ifdefs yet.
- Right now, when a new log receiver connects, all existing logs
are broadcast out again to all current clients. That doesn't do
any harm, but is unnecessary. Need to add a way to send the
existing logs to just the new client.
* origin/topic/seth/log-framework-ext:
Log extensions: series of small fixes and new tests.
Change the function for log extension to take a path only and update tests.
Final changes to log framework ext code.
Add logging framework metadata mechanism.
Add unrolling separator & field name map to logging framework.
The extensions now work with optional types, as well as with complex types
(like subrecords). Not returning a record in the ext_func no longer
crashes Bro.
The default_ext_func was switched to return void in
cases where no extension record is defined (was bool).
I also got rid of the offsets in the indices - with the rest of the
implementation, that was not really necessary and made the code more
complex.
The "metadata" functionality has been renamed to "ext" to
represent that the logs are being extended. The function that
returns the record which is used to extend the log now receives
a log filter as its single argument.
The field name "unrolling" has been renamed to "scope", so the variable
names now look like this: "Log::default_scope_sep"
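As a rough sketch of defining a custom extension function (the record and
function names here are hypothetical, and the path-only signature follows
the later commit listed above):
  type Extension: record {
      write_ts: time   &log;
      node:     string &log;
  };
  function add_extension(path: string): Extension
      {
      local e: Extension = [$write_ts = network_time(),
                            $node = peer_description];
      return e;
      }
  redef Log::default_ext_func = add_extension;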
- When a log record is being "unrolled" (sub-records flattened
out into a single record), it's now possible to choose the
character/string to separate the outer name from the inner
name. This can be used to work around the problems
with ElasticSearch 2.0 not supporting dots "." in field names.
This value can be provided per filter, as well as through a global
default.
- Log fields can be renamed by providing a table per-filter
(or a global default) to rename fields for any log writer.
The name translation is performed after unrolling, so the
keys in the field name table must use whatever separator is being
used for field names.
For example, if the unrolling separator were set to "*":
  redef Log::default_unrolling_sep = "*";
The field name map would need to reflect it:
  redef Log::default_field_name_map = {
      ["id*orig_h"] = "src",
      ["id*orig_p"] = "src_port",
      ["id*resp_h"] = "dst",
      ["id*resp_p"] = "dst_port",
  };
It now works a bit differently than before: whether to send a remote log
write is now a property of the logging stream, not the logging filter,
and it's now up to the receiver-side filters to instantiate the desired
writer. I.e., the sender now has no say in what the receiver should use
as the log writer backend.
Under the new style of remote logging, the "Log::enable_remote_logging"
option is repurposed to set the default behavior for new logging
streams. There's also "Comm::{enable,disable}_remote_logging()" to
explicitly set the desired behavior for a given logging stream. To
receive remote logs, one calls "Comm::subscribe_to_logs(<topic>)", where
senders implicitly use topics of the form "bro/log/<stream id>".
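As a rough sketch of the script-level usage (the exact signatures and the
literal topic string here are assumptions based on the names above):
  event bro_init()
      {
      # Receive Conn logs that remote peers publish; the topic string
      # assumes the stream id renders as "Conn::LOG".
      Comm::subscribe_to_logs("bro/log/Conn::LOG");
      # Keep HTTP log writes local on this node.
      Comm::disable_remote_logging(HTTP::LOG);
      }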
It doesn't do anything other than forward to FlushBuffers().
This is just for consistency in terminate_bro(), where components get
their Terminate() called so that the main code doesn't need to know
anything more specific about what particular action to take at
shutdown.
* origin/topic/bernhard/thread-cleanup:
and just to be really sure - always make threads go through OnWaitForStop
hopefully finally fix last interesting race-condition
it is apparently getting a bit late for changes to important code...
spoke too soon (forgot to comment the line back in again).
Change thread shutdown again to also work with input framework.
Changing semantics of thread stop methods.
Support for cleaning up threads that have terminated.
Seems to work, tests pass, but not really verified.
Major change 1:
The finished flag in MsgThread was replaced by two flags:
child_finished and main_finished.
child_finished is set by the child thread and means that the processing
loop is stopped immediately (it is no longer needed, no new input
messages will be processed, and if the loop keeps running there is an
ugly delay on shutdown). (This took me a while to realize...)
main_finished is set by a message that is sent back by the child
to the main thread when Finished() is called (and child_finished
is set). When main_finished is set, processing of output messages
stops, but all messages that the child thread pushed into the queue
before calling Finish() are still processed.
Change 2:
The logging terminate call was replaced by a smaller call that just
flushes out the cache held by the main thread. This call
has to be made before thread shutdown is initiated - otherwise
the threads will be shut down before all messages are pushed
to them. (This also took me a while to realize...)
Change 3:
The input framework now actually calls its stop methods correctly (everything
was prepared, but the function call was missing).
Once a BasicThread leaves its run() method, it is now marked for
cleanup, and the ThreadMgr will soon join it to release the OS
resources.
Also, adding a function Log::remove_stream() that removes a logging
stream, stopping all writer threads that are associated with it.
Note, however, that removing a *filter* from a stream still doesn't
clean up any threads. The problem is that because of the output paths
potentially being created dynamically it's unclear if the writer
thread will still be needed in the future. We could clean writers
up with timeouts, but that doesn't sound great either. So for now, the
only way to reliably clean up logging threads is to remove the entire
stream.
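As a minimal sketch (the exact signature of Log::remove_stream() is an
assumption here, taking the stream's ID enum; Weird::LOG is just an
example stream):
  event bro_init()
      {
      # Stop logging weird.log entirely; removing the stream also shuts
      # down the writer threads associated with it.
      Log::remove_stream(Weird::LOG);
      }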
Also note that cleanup doesn't work with input threads yet, which
don't seem to terminate (at least in the case I tried).
- 'when' statements were problematic when used in a function/event/hook
that had local variables with an assigned function value (see the
sketch after this list). This was because 'when' blocks operate on a
clone of the frame, the cloning process serializes locals, and the
serialization of functions had an infinite cycle in it
(ID -> BroFunc -> ID -> BroFunc ...). The ID was only used for the
function name and type information, so refactoring Func and subclasses
to depend on those two things instead fixes the issue.
- 'return when' blocks, specifically, didn't work whenever execution
of the containing function's body did another function call before
reaching the 'return when' block, because of an assertion. This was
due to logic in CallExpr::Eval always clearing the CallExpr
associated with the Frame after doing the call, instead of restoring
any previous CallExpr, which the code in Trigger::Eval expected to
have available.
- An assert could be reached when the condition of a 'when' statement
depended on checking the value of global state variables. The assert
in Trigger::QueueTrigger that checks that the Trigger isn't disabled
would get hit because Trigger::Eval/Timeout disable themselves after
running, but don't unregister themselves from the NotifierRegistry,
which keeps calling QueueTrigger for every state access of the global.
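Purely as an illustration of the first pattern above (all names here are
hypothetical), a body mixing a function-valued local with a 'when' block
looked roughly like this:
  function resolve(host: addr): string
      {
      # A local holding a function value used to break frame cloning.
      local fmt_host = function(a: addr): string { return fmt("%s", a); };
      return when ( local name = lookup_addr(host) )
          {
          return fmt("%s is %s", fmt_host(host), name);
          }
      timeout 5 sec
          {
          return fmt("%s timed out", fmt_host(host));
          }
      }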
If a log filter attempts to write to a path for which a writer is
already instantiated due to remote logging, it will re-use the writer
as long as the fields of the filter and writer are compatible; otherwise
the filter's path will be auto-adjusted so it doesn't conflict with the
existing writer's. Conflicts between two local filters are still always
auto-adjusted even if field types agree (since they could still
be semantically different).
Addresses #842.
There are now two FinishedRotation() methods, one that triggers
post-processing and one that doesn't. There's also insurance built in
against a writer calling neither (or both), in which case we abort
with an internal error.
This changes writer implementations to always respond to rotation
messages in their DoRotate() method, even in failure/no-op cases, using
a new RotationFailedMessage. This tells the manager to decrement its
count of pending rotations.
Addresses #860.
* origin/fastpath:
Small (potential performance) improvement for logging framework.
Script-level rotation postprocessor fix.
update input framework documentation to reflect want_record change.
Fix crash when encountering an InterpreterException in a predicate in logging or input Framework.
make want_record=T the default for events
Input framework: did not contain any error handling for this case.
Logging framework: tried to catch the interpreter exception. However, the exception had already been caught
by the call function and not propagated. Instead, the call returns a null pointer in this case, which
led to a segmentation fault.
Small tweak: I added the "same writer" constraint to the loop
condition as well. Makes sense?
* origin/fastpath:
Change path conflicts between log filters to be auto-corrected.
This change makes it so that when differing logging filters on the same
stream attempt to write to the same writer/path combination, the path
of the filter doing the later write will be automatically adjusted so
that it does not conflict with the other. The path is adjusted by
appending "-N", where N is the smallest integer greater than or equal
to 2 that resolves the path name conflict.
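For illustration, a hypothetical second filter colliding with the default
one (the filter name and path here are made up):
  event bro_init()
      {
      # The default Conn filter already writes to "conn", so this
      # filter's writes get auto-adjusted to the path "conn-2".
      Log::add_filter(Conn::LOG, [$name = "conn-copy", $path = "conn"]);
      }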
Addresses #842.
Since WriterFrontend objects are looked up internally by writer type and
path, and they also expect to write consistent field arguments, it could
be the case that more than one filter of a given stream attempts to
write to the same path (derived either from $path or $path_func fields
of the filter) with the same writer type. This won't work, so now
WriterFrontend objects are bound to the filter that instantiated them so
that we can warn about other filters attempting to write to the
conflicting writer/path and the write can be skipped. Remote logs don't
appear to suffer the same issue due to pre-filtering.
Addresses #842.
I've only tested that it compiles, not whether it still works. The
fact that we don't have any tests for this makes me uneasy ...
* remotes/origin/topic/seth/elasticsearch: (35 commits)
Some documentation updates for elasticsearch plugin.
Temporarily removing the ES timeout because it works with signals and is incompatible with Bro threads.
Changed ES index names to localtime and added a meta index.
New script for easily duplicating logs to ElasticSearch.
Some better elasticsearch reliability.
Fixed small elasticsearch problem in configure output.
Re-adding the needed call to FinishedRotation in the ES writer plugin.
Tiny updates.
Bringing elasticsearch branch up to date with master.
Adding a define to make the stdint C macros available.
Adding an extra header.
Fixed a bug with messed up time value passing to elasticsearch.
Small updates and a little standardization for config.h.in naming.
Bug fixes.
Bug fix and feature.
Forgot to call the parent method for DoHeartBeat.
Changed the escaping method.
Flush logs to ES daemon as Bro is shutting down.
Reduce the batch size to 1000 and add a maximum time interval for batches.
Reworked bulk operation string construction to use ODesc and added json escaping.
...
frameworks.
There were a number of cases that weren't thread-safe. In particular,
we don't use std::string anymore for anything that's passed between
threads (but instead plain old const char*, with manual memory
management).
This is still a check-point commit, I'll do more testing.
Turns out the finish methods weren't called correctly, caused by a
mix-up with method names that all sounded too similar, so the wrong
one ended up being called. I've reworked this by changing the
thread/writer/reader interfaces, which actually also simplifies them
by getting rid of the requirement for writer backends to call their
parent methods (i.e., less opportunity for errors).
This commit also includes the following (because I noticed the problem
above when working on some of these):
- The ASCII log writer now includes "#start <timestamp>" and
"#end <timestamp>" lines in each file. The latter supersedes
Bernhard's "EOF" patch.
This required a number of test updates. The standard canonifier
removes the timestamps, but some tests compare files directly,
which doesn't work if they aren't printing out the same
timestamps (like the comm tests).
- The above required yet another change to the writer API to pass
network_time to methods.
- Renamed ASCII logger "header" options to "meta".
- Fixes #763 "Escape # when first character in log file line".
All btests pass for me on Linux FC15. Will try MacOS next.