This feature can be enabled globally for all logs by setting
LogAscii::gzip_level to a value greater than 0.
This feature can also be enabled on a per-log basis by setting gzip-level
in $config to a value greater than 0.
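A minimal sketch of both approaches, assuming the default ASCII log
writer; the log stream, filter name, and compression level below are only
for illustration:

    # Enable gzip compression globally for all ASCII logs.
    redef LogAscii::gzip_level = 3;

    # Enable it for a single log via a filter's $config table.
    event bro_init()
        {
        local f = Log::get_filter(Conn::LOG, "default");
        f$config = table(["gzip-level"] = "3");
        Log::add_filter(Conn::LOG, f);
        }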
The DPD signature missed a few cases that are used for TLS 1.3,
especially when draft versions (which are all that we are seeing at the
moment) are being negotiated.
This fix mostly allows draft versions in the server hello (identified by
7F[version]); since we do not know how many drafts there will be, we are
currently allowing a rather safe upper limit.
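Purely illustrative, not the actual signature change: a script-level
pattern expressing the idea of accepting any 7F draft version up to a
generous, arbitrarily chosen cap:

    # Matches a two-byte version of the form 7F xx, where xx is any draft
    # number up to 0x30 (the cap here is only a placeholder for the sketch).
    const tls13_draft_versions = /\x7f[\x00-\x30]/;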
The changes are now a bit more succinct, with fewer code changes required.
Behavior is tested a little more thoroughly, and a memory problem when
reading incomplete lines was fixed. ReadHeader now also returns
immediately if reading the header failed.
Error messages are now back to what they were before this change when the
new behavior is not used.
I also tweaked the documentation text a bit.
The linker was complaining about linking files that didn't
have any symbols. These were actually empty files, so I just
removed them along with the references to them.
* origin/topic/robin/broker-logging:
Another fix for the new Broker-based remote logging.
Fix some minor issues.
Adding Broker ifdefs for new remote logging code.
Changing semantics of Broker's remote logging to match old communication framework.
By default, the ASCII reader does not fail on errors anymore.
If there is a problem parsing a line, a reporter warning is
written and parsing continues. If the file is missing or can't
be read, the input thread just tries again on the next heartbeat.
Options have been added to recreate the previous behavior...
const InputAscii::fail_on_invalid_lines: bool;
and
const InputAscii::fail_on_file_problem: bool;
Both are set to `F` by default, which makes the input readers
resilient to failures.
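For example, to restore the old fail-fast behavior (assuming both
options are declared &redef), they can simply be set back to `T`:

    # Abort on unparseable lines and on missing/unreadable input files,
    # as the ASCII reader did before this change.
    redef InputAscii::fail_on_invalid_lines = T;
    redef InputAscii::fail_on_file_problem = T;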
- This fixes BIT-1769 by logging all requests even in the absence of a
reply. The way that request and reply matching was handled has been
restructured to mostly ignore the transaction IDs, because they aren't
that helpful for network monitoring and they make the script structure
more complicated.
- Add `framed_addr` field to the radius log to indicate if the radius
server is hinting at an address for the client.
- Add `ttl` field to indicate how quickly the radius server is replying
to the network access server (see the sketch after this list).
- Fix a bunch of indentation inconsistencies.
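A hypothetical sketch of the two new log fields described above; only
the field names come from this entry, while the record name, types, and
comments are assumptions:

    redef record RADIUS::Info += {
        ## Address the server is hinting at for the client, if present.
        framed_addr: addr &log &optional;
        ## How quickly the server replied to the network access server.
        ttl: interval &log &optional;
    };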
Broker had changed the semantics of remote logging: it sent over the
original Bro record containing the values to be logged, which on the
receiving side would then pass through the logging framework normally,
including triggering filters and events. The old communication system,
however, special-cases logs: it sends already processed log entries,
just as they go into the log files, and without any receiver-side
filtering etc. This is more efficient, as it short-cuts the processing
path and avoids the more expensive Val serialization. It also
lets the sender determine the specifics of what gets logged (and how).
This commit changes Broker over to now use the same semantics as the
old communication system.
TODOs:
- The new Broker code doesn't have consistent #ifdefs yet.
- Right now, when a new log receiver connects, all existing logs
are broadcast again to all current clients. That doesn't do any
harm, but it is unnecessary. We need to add a way to send the
existing logs to just the new client.
There is a small caveat to this implementation: the Ethernet
header that is carried over the tunnel is ignored. If a user
tries to do MAC address logging, it will only show the MAC
addresses for the outer tunnel; the inner MAC addresses
are stripped and not available anywhere.
If connection flipping occurred in the Sessions.cc code (invoked e.g. when
the original SYN is missing), layer 2 flipping was not performed. This
change switches to always using the connection flipping code in Conn.cc,
which performs the switch correctly.
- New fields: extracted_cutoff and extracted_size. These fields will be
null if the file isn't extracted (see the sketch below).
- Extended the extraction test to test the files log too.
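A hypothetical sketch of how these fields might be declared; only the
field names come from this entry, while the record name and types are
assumptions:

    redef record Files::Info += {
        ## Whether extraction stopped because a size cutoff was hit.
        extracted_cutoff: bool &log &optional;
        ## Number of bytes written to the extracted file.
        extracted_size: count &log &optional;
    };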