Fixed some Coverity warnings in RemoteSerializer::ProcessLogCreateWriter().
Upon failure, CreateWriterForRemoteLog() frees the "info" and "fields"
pointers, so they are now set to null in order to avoid freeing them
a second time.
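As a rough illustration of the pattern (hypothetical names, not the actual RemoteSerializer code): once the callee has freed the pointers on failure, the caller nulls its copies so a shared error path cannot free them again. Build as plain C++11.

    // Sketch only: 'WriterInfo' and 'Field' stand in for the real types.
    struct WriterInfo { };
    struct Field { };

    // Hypothetical callee: on failure it frees the arguments passed to it.
    static bool CreateWriterForLog(WriterInfo* info, Field* fields)
        {
        delete info;    // pretend creation failed and clean up here
        delete fields;
        return false;
        }

    static bool ProcessLogCreateWriter(WriterInfo* info, Field* fields)
        {
        if ( ! CreateWriterForLog(info, fields) )
            {
            // The callee already freed info/fields; null the local copies so
            // the shared error path below does not free them a second time.
            info = nullptr;
            fields = nullptr;
            goto error;
            }

        return true;

    error:
        delete info;    // no-ops, since both were nulled above
        delete fields;
        return false;
        }

    int main()
        {
        return ProcessLogCreateWriter(new WriterInfo, new Field) ? 0 : 1;
        }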
The changes are now a bit more succinct, with fewer code changes required.
Behavior is tested a bit more thoroughly, and a memory problem
when reading incomplete lines was fixed. ReadHeader also now returns
immediately if reading the header failed.
Error messages are now back to what they were before the change when the
new behavior is not used.
I also tweaked the documentation text a bit.
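A minimal sketch of the ReadHeader behavior described above, using hypothetical names rather than the actual ASCII reader code: if the header line cannot be read, the function returns right away so callers never operate on a half-parsed field list.

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for the reader's header parsing.
    static bool ReadHeader(std::ifstream& file, std::vector<std::string>& fields)
        {
        std::string line;

        if ( ! std::getline(file, line) || line.empty() )
            // Missing or incomplete header: return right away instead of
            // continuing with a half-initialized field list.
            return false;

        std::istringstream s(line);
        std::string field;
        while ( std::getline(s, field, '\t') )
            fields.push_back(field);

        return ! fields.empty();
        }

    int main(int argc, char** argv)
        {
        if ( argc < 2 )
            return 1;

        std::ifstream file(argv[1]);
        std::vector<std::string> fields;

        // If the header could not be read, stop here; nothing else is parsed.
        return ReadHeader(file, fields) ? 0 : 1;
        }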
This moves all threading code in Bro from pthreads to the C++11
primitives, which make for shorter, easier-to-use, and less error-prone
code.
pthreads is still used in two places in Bro currently. BasicThread uses
two bits of functionality that are not available through the C++ API
(setting thread names and setting signal masks). Since all C++
implementations that I am aware of are still built on top of pthreads,
we just use native_handle to access the underlying pthread for these
cases. I do not expect this to lead to problems in the foreseeable
future. If we ever encounter a platform where a different thread
architecture is used, we might have to change that around.
This code is guarded by static_asserts, so we will notice if a platform
uses a different implementation.
SQLite also uses pthreads directly.
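A simplified sketch of the approach (not the actual BasicThread code): the thread itself is a std::thread, the two pthreads-only bits go through native_handle(), and a static_assert catches platforms where the native handle is not a pthread_t. Build with -std=c++11 -pthread.

    #include <csignal>
    #include <pthread.h>
    #include <thread>
    #include <type_traits>

    // If a platform's std::thread is not backed by pthreads, this fails to
    // compile, so we notice before the native_handle() calls below misbehave.
    static_assert(std::is_same<std::thread::native_handle_type, pthread_t>::value,
                  "std::thread is expected to be implemented on top of pthreads");

    int main()
        {
        std::thread t([]{
            // Signal masks are not exposed by the C++ API; use pthreads.
            sigset_t mask;
            sigemptyset(&mask);
            sigaddset(&mask, SIGTERM);
            pthread_sigmask(SIG_BLOCK, &mask, nullptr);

            // ... thread body ...
            });

    #ifdef __linux__
        // Thread names are not exposed by the C++ API either; go through the
        // native handle. (pthread_setname_np is non-portable, hence the guard.)
        pthread_setname_np(t.native_handle(), "bro-worker");
    #endif

        t.join();
        return 0;
        }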
The linker was complaining about linking files that didn't
have any symbols. These were actually empty files, so I just
got rid of them and removed the references to them.
* origin/topic/robin/broker-logging:
Another fix for the new Broker-based remote logging.
Fix some minor issues.
Adding Broker ifdefs for new remote logging code.
Changing semantics of Broker's remote logging to match old communication framework.
By default, the ASCII reader does not fail on errors anymore.
If there is a problem parsing a line, a reporter warning is
written and parsing continues. If the file is missing or can't
be read, the input thread just tries again on the next heartbeat.
Options have been added to recreate the previous behavior:
const InputAscii::fail_on_invalid_lines: bool;
and
const InputAscii::fail_on_file_problem: bool;
They are both set to `F` by default, which makes the input readers
resilient to failures.
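A sketch of the resulting control flow, with hypothetical names standing in for the real reader and options: invalid lines produce a warning and are skipped, and file problems simply make the reader try again on the next heartbeat, unless the corresponding fail_on_* option is set.

    #include <cstdio>
    #include <fstream>
    #include <string>

    // Hypothetical stand-ins for the two options described above.
    static bool fail_on_invalid_lines = false;
    static bool fail_on_file_problem = false;

    // Called on every heartbeat; returns false only if the reader should stop.
    static bool DoUpdate(const std::string& path)
        {
        std::ifstream file(path);

        if ( ! file )
            {
            std::fprintf(stderr, "warning: cannot read %s\n", path.c_str());
            return ! fail_on_file_problem;    // retry on the next heartbeat
            }

        std::string line;
        while ( std::getline(file, line) )
            {
            bool parsed_ok = ! line.empty();  // stands in for real field parsing

            if ( ! parsed_ok )
                {
                std::fprintf(stderr, "warning: could not parse line, skipping\n");
                if ( fail_on_invalid_lines )
                    return false;
                continue;
                }

            // ... hand the parsed line to the input framework ...
            }

        return true;
        }

    int main(int argc, char** argv)
        {
        return (argc > 1 && DoUpdate(argv[1])) ? 0 : 1;
        }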
This works correctly now (as a prototype at least). If a file
disappears, the thread complains once; when the file reappears,
the thread once again begins watching it.
- This fixes BIT-1769 by logging all requests even in the absence of a
  reply. The way that request and reply matching is handled was
  restructured to mostly ignore the transaction IDs, because they
  aren't that helpful for network monitoring and tracking them makes
  the script structure more complicated (see the sketch after this list).
- Add `framed_addr` field to the RADIUS log to indicate if the RADIUS
  server is hinting at an address for the client.
- Add `ttl` field to indicate how quickly the RADIUS server is replying
  to the network access server.
- Fix a bunch of indentation inconsistencies.
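The sketch referenced above, with hypothetical types (the real logic lives in the RADIUS scripts): outstanding requests are kept in per-connection order and logged either when a reply arrives or when they expire without one, instead of being keyed on the transaction ID.

    #include <cstdio>
    #include <deque>
    #include <string>

    // Hypothetical record of one outstanding RADIUS request.
    struct PendingRequest
        {
        std::string username;
        double ts;    // time the request was seen
        };

    // Per-connection in the real scripts; global here to keep the sketch short.
    static std::deque<PendingRequest> pending;

    static void LogRequest(const PendingRequest& req, const char* result, double reply_ts)
        {
        // 'ttl' is simply how long the server took to answer, if it did.
        double ttl = reply_ts > 0 ? reply_ts - req.ts : -1;
        std::printf("%s result=%s ttl=%.3f\n", req.username.c_str(), result, ttl);
        }

    static void Request(const std::string& username, double ts)
        {
        pending.push_back({username, ts});
        }

    static void Reply(const char* result, double ts)
        {
        if ( pending.empty() )
            return;

        // Match the oldest outstanding request rather than the transaction ID.
        LogRequest(pending.front(), result, ts);
        pending.pop_front();
        }

    static void Expire(double now, double timeout)
        {
        // Requests that never got a reply still get logged (BIT-1769).
        while ( ! pending.empty() && now - pending.front().ts > timeout )
            {
            LogRequest(pending.front(), "unanswered", -1);
            pending.pop_front();
            }
        }

    int main()
        {
        Request("alice", 1.0);
        Request("bob", 2.0);
        Reply("Access-Accept", 2.5);    // matches "alice"
        Expire(100.0, 30.0);            // "bob" is logged without a reply
        return 0;
        }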
* topic/seth/krb5-ticket-tracking-merge:
Refactor base krb scripts and update tests.
Add script to log ticket hashes in krb log
Ensure TGS req does not stomp out AP data
Add ciphertext to ticket data structures
Broker had changed the semantics of remote logging: it sent over the
original Bro record containing the values to be logged, which on the
receiving side would then pass through the logging framework normally,
including triggering filters and events. The old communication system,
however, special-cases logs: it sends already-processed log entries,
just as they go into the log files, and without any receiver-side
filtering etc. This is more efficient, as it short-cuts the processing
path and avoids the more expensive Val serialization. It also
lets the sender determine the specifics of what gets logged (and how).
This commit changes Broker over to now use the same semantics as the
old communication system.
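To illustrate the difference, here are two purely hypothetical message shapes (not Broker's actual format): the old-style semantics ship the already-filtered, writer-ready values for a single log write, so the receiver just dispatches them to a writer instead of re-running filters and events over a full record.

    #include <string>
    #include <vector>

    // Record-based semantics (what Broker did before this change): the whole
    // original record travels, and the receiver runs its own filters/events.
    struct SerializedRecord
        {
        std::string type_name;                    // e.g. "Conn::Info"
        std::vector<std::string> field_values;    // every field, unfiltered
        };

    // Old-style semantics (what this change switches Broker to): the entry as
    // it would be written to the log file on the sender, post-filtering.
    struct LogWrite
        {
        std::string stream;               // e.g. "Conn::LOG"
        std::string writer;               // e.g. "ASCII"
        std::string path;                 // e.g. "conn"
        std::vector<std::string> vals;    // writer-ready column values
        };

    // The receiver's job for a LogWrite is just dispatch to the matching
    // writer; no filters or events run on this side.
    static void ReceiveLogWrite(const LogWrite& w)
        {
        (void)w;    // ... WriteToWriter(w.stream, w.writer, w.path, w.vals) ...
        }

    int main()
        {
        ReceiveLogWrite({"Conn::LOG", "ASCII", "conn", {"1.2.3.4", "80", "tcp"}});
        return 0;
        }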
TODOs:
- The new Broker code doesn't have consistent #ifdefs yet.
- Right now, when a new log receiver connects, all existing logs
  are broadcast out again to all current clients. That doesn't do
  any harm, but it is unnecessary. We need to add a way to send the
  existing logs to just the new client.
Re-enable logging, now in policy because it is probably interesting to
no one. We also only log OCSP replies.
Fix all tests.
Fix an issue where OCSP replies were added to the X.509 certificate
list.
With this change, we also parse signed certificate timestamps from OCSP
replies. This introduces a common base class for the OCSP and X509
analyzers, which now share a bit of common code. The event for signed
certificate timestamps is raised by both and thus renamed to:
x509_ocsp_ext_signed_certificate_timestamp
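A structural sketch with hypothetical class and method names (not the actual analyzer sources): the signed-certificate-timestamp parsing lives in a shared base, and both analyzers call into it.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical sketch of the shared-base-class structure.
    class X509CommonBase
        {
    protected:
        // Shared code: parse SCT extension bytes and raise
        // x509_ocsp_ext_signed_certificate_timestamp for each entry found.
        void ParseSignedCertificateTimestamps(const std::vector<std::uint8_t>& ext)
            {
            // Real parsing omitted; just report that the shared path ran.
            std::printf("parsed %zu bytes of SCT data\n", ext.size());
            }
        };

    class X509Analyzer : public X509CommonBase
        {
    public:
        void ParseExtension(const std::vector<std::uint8_t>& ext)
            { ParseSignedCertificateTimestamps(ext); }
        };

    class OCSPAnalyzer : public X509CommonBase
        {
    public:
        void ParseResponseExtension(const std::vector<std::uint8_t>& ext)
            { ParseSignedCertificateTimestamps(ext); }
        };

    int main()
        {
        X509Analyzer x;
        OCSPAnalyzer o;
        x.ParseExtension({0x01, 0x02});
        o.ParseResponseExtension({0x03});
        return 0;
        }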
This makes it much easier for protocols where the MIME type is known in
advance, such as TLS. We no longer have to perform deep
script-level magic.
Instead of having an additional string argument specifying whether we are
sending a request or a reply, we now have ANALYZER_OCSP_REQUEST and
ANALYZER_OCSP_REPLY.
Instead of having one big event that tries to parse all the data into a
huge data structure, we do the more common thing and use a series of
smaller events to parse requests and responses.
The new events are:
ocsp_request -> raised for an ocsp request, giving version and requestor
ocsp_request_certificate -> raised n times per request, once per cert
ocsp_response_status -> raised for each ocsp response, giving status
ocsp_response_bytes -> raised for each ocsp response with information
ocsp_response_certificate -> raised for each cert in an ocsp response