Broker had changed the semantics of remote logging: it sent over the
original Bro record containing the values to be logged, which on the
receiving side would then pass through the logging framework normally,
including triggering filters and events. The old communication system,
however, special-cases logs: it sends already-processed log entries,
just as they go into the log files, without any receiver-side
filtering etc. This is more efficient because it short-cuts the
processing path and avoids the more expensive Val serialization. It
also lets the sender determine the specifics of what gets logged (and
how).
This commit changes Broker over to now use the same semantics as the
old communication system.
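To illustrate the difference (with hypothetical types for illustration
only, not Broker's actual API): the previous Broker semantics shipped
something like a full record for receiver-side processing, while the
semantics adopted here ship only the finished entry.

    #include <string>
    #include <vector>

    // Hypothetical types for illustration only; not Broker's actual API.

    // Previous Broker semantics: ship the original record. The receiver
    // deserializes full Vals and re-runs the logging framework (filters,
    // events) on its side.
    struct RemoteRecord {
        std::string stream_id;          // e.g. "Conn::LOG"
        std::vector<std::string> vals;  // stand-in for serialized Vals
    };

    // Semantics adopted by this commit (matching the old communication
    // system): the sender runs filters and renders the entry; the
    // receiver just appends it, with no filtering or event processing.
    struct RemoteLogEntry {
        std::string path;      // e.g. "conn"
        std::string rendered;  // the entry exactly as written to disk
    };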
TODOs:
- The new Broker code doesn't have consistent #ifdefs yet.
- Right now, when a new log receiver connects, all existing logs
  are broadcast out again to all current clients. That doesn't do
  any harm, but it is unnecessary. We need to add a way to send the
  existing logs to just the new client.
* origin/topic/dnthayer/ticket1788:
Fix to_json() to not lose precision for values of type double
Fix the to_json() function for bool, enum, and interval types
Add tests for the to_json() function
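For context on the double-precision fix above: formatting a double
with too few significant digits silently loses information, while 17
significant digits always round-trip an IEEE-754 double. A minimal
standalone illustration (not the actual to_json() implementation):

    #include <cstdio>

    int main() {
        double d = 0.1 + 0.2;       // actually 0.30000000000000004
        std::printf("%.6g\n", d);   // "0.3" -- precision silently lost
        std::printf("%.17g\n", d);  // "0.30000000000000004" -- exact round-trip
        return 0;
    }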
This undoes the changes applied in merge 9db27a6d60
and goes back to the state in the branch as of the merge 5ab3b86.
Getting rid of the additional layer for removing analyzers, and just
keeping them in the set, introduced subtle differences in behavior,
since a few calls were still passed along to them. Skipping all of
these calls with SetSkip introduced yet more subtle behavioral
differences.
There is a small caveat to this implementation: the Ethernet
header that is carried over the tunnel is ignored. If a user
tries to do MAC address logging, it will only show the MAC
addresses of the outer tunnel; the inner MAC addresses are
stripped and not available anywhere.
* origin/topic/robin/file-analysis-fixes:
Adding test with command line that used to trigger a crash.
Cleaning up a couple of comments.
Fix delay in disabling file analyzers.
Fix file analyzer memory management.
The merge changes the functionality around a bit again: instead of
keeping a list of finished analyzers, analyzers are simply set to
skipping when they are removed, and they are cleaned up later on
destruction of the AnalyzerSet.
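A minimal sketch of that pattern (hypothetical names; not the actual
AnalyzerSet code): removal only marks an analyzer as skipping,
delivery bypasses skipping analyzers, and memory is reclaimed only
when the set itself is destroyed.

    #include <memory>
    #include <vector>

    // Hypothetical sketch of skip-on-removal; not Bro's actual code.
    class Analyzer {
    public:
        void SetSkip(bool b) { skip = b; }
        bool Skipping() const { return skip; }
        void Deliver(const char* data, int len) { /* process data */ }
    private:
        bool skip = false;
    };

    class AnalyzerSet {
    public:
        void Remove(Analyzer* a) { a->SetSkip(true); }  // no delete here

        void DeliverAll(const char* data, int len) {
            for ( auto& a : analyzers )
                if ( ! a->Skipping() )  // skipped analyzers see no more data
                    a->Deliver(data, len);
        }

        // Cleanup happens only when the set is destroyed: the
        // unique_ptrs free every analyzer, including the skipped ones.
        ~AnalyzerSet() = default;

    private:
        std::vector<std::unique_ptr<Analyzer>> analyzers;
    };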
BIT-1782 #merged
If connection flipping occurred in the Sessions.cc code (invoked e.g.
when the original SYN is missing), layer 2 flipping was not performed.
This change switches to always using the connection flipping code in
Conn.cc, which performs the switch correctly.
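A generic illustration of the bug class (hypothetical types; not the
actual Sessions.cc/Conn.cc code): flipping only the layer 3/4 state
leaves each MAC address attached to the wrong endpoint, while swapping
the endpoints wholesale keeps layer 2 consistent.

    #include <array>
    #include <cstdint>
    #include <utility>

    // Hypothetical sketch; not Bro's actual connection code.
    struct Endpoint {
        uint32_t ip;
        uint16_t port;
        std::array<uint8_t, 6> mac;  // layer 2 address
    };

    struct Connection {
        Endpoint orig, resp;

        // Buggy flip: only layer 3/4 is swapped, so each MAC now sits
        // with the wrong endpoint.
        void FlipL3L4Only() {
            std::swap(orig.ip, resp.ip);
            std::swap(orig.port, resp.port);
        }

        // Correct flip: swap the endpoints wholesale, layer 2 included.
        void Flip() { std::swap(orig, resp); }
    };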
When a file analyzer signaled being done with data delivery, the
analyzer would only be scheduled for removal at that point, meaning it
could still receive more data until that action actually took effect.
Now we make sure not to send any more data to an analyzer once it has
signaled completion.
File analyzers got deleted immediately once the queue with the
corresponding removal operation got drained. That, however, can happen
while the analyzer is still active: the queue is drained whenever any
of the "special" file analysis events needing immediate attention has
been executed. This fix still schedules the analyzer for deletion at
that time, but postpones the actual operation until the file object
itself is destroyed.
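Sketched generically (hypothetical names; not the actual file-analysis
code), draining the removal queue now only marks the analyzer, and the
delete is deferred to the file object's destructor:

    #include <memory>
    #include <vector>

    // Hypothetical sketch of the deferred deletion; not Bro's code.
    struct FileAnalyzer { bool skip = false; /* analyzer state ... */ };

    struct File {
        // Draining the removal queue used to delete the analyzer right
        // here. Now it only marks it, so in-flight callers stay valid.
        void DrainRemovalQueue(FileAnalyzer* a) { a->skip = true; }

        // The actual deletion is postponed until the File itself goes
        // away: the unique_ptrs free all analyzers, skipped or not.
        ~File() = default;

        std::vector<std::unique_ptr<FileAnalyzer>> analyzers;
    };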
- New fields: extracted_cutoff and extracted_size.
These fields will be null if the file isn't extracted.
- Extended the extraction test to also check the files log.