This is a tiny bit evil because it uses parts of the SSL protocol
analyzer in the X.509 certificate parser. That is ultimately the fault
of the protocol, which duplicates the functionality.
* origin/topic/dnthayer/ticket1788:
Fix to_json() to not lose precision for values of type double
Fix the to_json() function for bool, enum, and interval types
Add tests for the to_json() function
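For illustration only (this sketch is not part of the commits above, and
the exact JSON output formatting may differ; the load path is an
assumption), the fixed cases can be exercised like this:

    @load base/utils/json

    event bro_init()
        {
        print to_json(3.14159265358979);   # double: precision should be preserved
        print to_json(T);                  # bool
        print to_json(tcp);                # enum (a transport_proto value)
        print to_json(5 secs);             # interval
        }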
This undoes the changes applied in merge 9db27a6d60
and goes back to the state of the branch as of merge 5ab3b86.
Getting rid of the additional layer for removing analyzers, and just
keeping them in the set, introduced subtle differences in behavior,
since a few calls were still passed along. Skipping all of these with
SetSkip introduced yet more subtle behavioral differences.
This event is the replacement for ssl_application_data, which is removed
in the same commit. It is more generic, contains more information than
ssl_application_data, and is raised for all SSL/TLS messages that are
exchanged before encryption starts.
It is used internally by Bro to determine when a TLS 1.3 session has
been completely established. Apart from that, it can be used, e.g., to
determine the record layer TLS version.
This exposes the record layer version of the fragment in addition to the
content type and the length. The ordering of the arguments in the event
is the same as the ordering in the protocol message (first type, then
version, then length).
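As an illustrative sketch only (the text above does not spell out the
full signature; the event and parameter names below are assumptions,
with the argument order taken from the description above), a handler
for the new event might look like:

    event ssl_plaintext_data(c: connection, is_orig: bool, content_type: count,
                             record_version: count, length: count)
        {
        # 0x0303 is the record-layer version used by TLS 1.2 records.
        if ( record_version == 0x0303 )
            print fmt("%s: plaintext record, type=%d, length=%d",
                      c$uid, content_type, length);
        }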
This also includes a slight change to the analyzer: it no longer calls
the generate function if the event is not used.
There is a small caveat to this implementation. The Ethernet header
that is carried over the tunnel is ignored. If a user tries to do MAC
address logging, it will only show the MAC addresses of the outer
tunnel; the inner MAC addresses are stripped and not available
anywhere.
This change adds the compression methods to the ssl_client_hello event.
Their omission was an oversight from a long time ago.
This change means that the signature of ssl_client_hello changes
slightly and scripts will have to be adjusted; since this is a commonly
used event, the impact might be higher than usual for event changes.
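As a sketch of what adjusted scripts might look like (the surrounding
parameters are assumptions about the signature at the time of this
change; only the added compression-methods argument is described above):

    event ssl_client_hello(c: connection, version: count, possible_ts: time,
                           client_random: string, session_id: string,
                           ciphers: index_vec, comp_methods: index_vec)
        {
        # Offering anything besides the null compression method (0) is rare
        # and worth noting.
        for ( i in comp_methods )
            if ( comp_methods[i] != 0 )
                print fmt("%s offered compression method %d",
                          c$uid, comp_methods[i]);
        }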
* origin/topic/robin/file-analysis-fixes:
Adding test with command line that used to trigger a crash.
Cleaning up a couple of comments.
Fix delay in disabling file analyzers.
Fix file analyzer memory management.
The merge changes the functionality around a bit again: instead of
keeping a list of finished analyzers, analyzers are simply set to
skipping when they are removed, and are cleaned up later on destruction
of the AnalyzerSet.
BIT-1782 #merged
If connection flipping occurred in the Sessions.cc code (invoked, e.g.,
when the original SYN is missing), layer 2 flipping was not performed.
This change switches to always using the connection flipping code in
Conn.cc, which performs the switch correctly.
When a file analyzer signaled being done with data delivery, the
analyzer would only be scheduled for removal at that point, meaning it
could still receive more data until that action actually took effect.
Now we make sure not to send any more data to such an analyzer.
File analyzers got deleted immediately once the queue with the
corresponding removal operation got drained. That, however, can happen
while the analyzer is still doing work: the queue is drained whenever
any of the "special" file analysis events needing immediate attention
has been executed. This fix still schedules the analyzer for deletion
at that point, but postpones the actual operation until the file object
itself is destroyed.
- New fields: extracted_cutoff and extracted_size.
These fields will be null if the file isn't extracted (see the sketch below).
- Extended the extraction test to test the files log too.
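A minimal sketch of how the new fields could be consumed from
script-land (assuming, as the files.log context above suggests, that
they live on Files::Info; the handler and field accesses here are
illustrative, not part of the change):

    event file_state_remove(f: fa_file)
        {
        if ( f?$info && f$info?$extracted_cutoff && f$info$extracted_cutoff )
            print fmt("%s: extraction hit the size limit at %d bytes",
                      f$info$fuid, f$info$extracted_size);
        }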