It turns out that the serial number field in all events was never
populated correctly. Instead, the preceding field (the issuer key hash)
was re-read and repeated in its place.
Closes #1830.
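For illustration, a minimal handler sketch that reads the two fields
separately; it assumes the affected events are the OCSP certificate
events and that the parameter list matches ocsp_request_certificate, so
treat both as assumptions rather than the exact interface of this change.

    # Hedged sketch: parameter names and types are assumed.
    event ocsp_request_certificate(f: fa_file, hashAlgorithm: string,
                                   issuerNameHash: string,
                                   issuerKeyHash: string, serialNumber: string)
        {
        # With the fix, issuerKeyHash and serialNumber carry distinct values.
        print fmt("issuer key hash %s, serial %s",
                  bytestring_to_hexstr(issuerKeyHash),
                  bytestring_to_hexstr(serialNumber));
        }
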
* origin/topic/johanna/ocsp-sct-validate: (82 commits)
Tiny script changes for SSL.
Update CT Log list
SSL: Update OCSP/SCT scripts and documentation.
Revert "add parameter 'status_type' to event ssl_stapled_ocsp"
Revert "parse multiple OCSP stapling responses"
SCT: Fix script error when mime type of file unknown.
SCT: another memory leak in SCT parsing.
SCT validation: fix small memory leak (public keys were not freed)
Change end-of-connection handling for validation
OCSP/TLS/SCT: Fix a number of test failures.
SCT Validate: make caching a bit less aggressive.
SSL: Fix type of ssl validation result
TLS-SCT: compile on old versions of OpenSSL (1.0.1...)
SCT: Add caching support for validation
SCT: Add signed certificate timestamp validation script.
SCT: Allow verification of SCTs in Certs.
SCT: only compare correct OID/NID for Cert/OCSP.
SCT: add validation of proofs for extensions and OCSP.
SCT: pass timestamp as uint64 instead of time
Add CT log information to Bro
...
This is much more complex than the TLS extension and OCSP cases. We
first need to alter the certificate and remove the SCT extension from it
before extracting the tbscert. Furthermore, we need the key hash of the
issuing certificate to be able to validate the proof - which means that
we need a valid certificate chain.
Missing: documentation, nice integration so that we can just add a
script and use this in Bro.
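At the script level, the eventual validation looks roughly like the
sketch below. The bif names sct_verify() and x509_spki_hash(), and their
parameter lists, are assumptions here, not the final interface.

    # Hedged sketch: verify an SCT embedded in a certificate, assuming the
    # validated chain is available and chain[1] is the issuer certificate.
    # The bif is assumed to strip the SCT extension, re-encode the tbscert
    # and check the proof internally; names and signatures are assumptions.
    function verify_cert_sct(chain: vector of opaque of x509, logid: string,
                             log_key: string, timestamp: count,
                             hash_alg: count, signature: string): bool
        {
        if ( |chain| < 2 )
            return F; # without the issuer we cannot compute the key hash

        local issuer_key_hash = x509_spki_hash(chain[1], hash_alg);
        return sct_verify(chain[0], logid, log_key, signature, timestamp,
                          hash_alg, issuer_key_hash);
        }
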
This does not yet work for certificates, because that requires changing
the ASN.1 structure before validation (we need to extract the tbscert
and remove the SCT extension beforehand).
API will change in the future.
With this change, we also parse signed certificate timestamps from OCSP
replies. This introduces a common base class for the OCSP and X509
analyzers, which now share a bit of code. The event for signed
certificate timestamps is raised by both and thus renamed to:
x509_ocsp_ext_signed_certificate_timestamp
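A minimal handler sketch for the renamed event; the parameter list shown
here is an assumption, so check the actual event prototype.

    # Hedged sketch: raised for SCTs in certificates (X509 analyzer) as
    # well as for SCTs in stapled OCSP responses (OCSP analyzer).
    event x509_ocsp_ext_signed_certificate_timestamp(f: fa_file, version: count,
        logid: string, timestamp: count, hash_algorithm: count,
        signature_algorithm: count, signature: string)
        {
        print fmt("SCT from log %s, timestamp %d",
                  bytestring_to_hexstr(logid), timestamp);
        }
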
This makes it much easier for protocols where the mime type is known in
advance, like TLS, for example. We no longer have to perform deep
script-level magic.
Instead of having an additional string argument specifying whether we
are sending a request or a reply, we now have an ANALYZER_OCSP_REQUEST
and an ANALYZER_OCSP_REPLY.
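At the script level, choosing the analyzer now looks roughly like this;
treating Files::ANALYZER_OCSP_REQUEST and Files::ANALYZER_OCSP_REPLY as
the script-side names of these tags is an assumption.

    # Hedged sketch: attach the matching OCSP analyzer by direction.
    function attach_ocsp(f: fa_file, is_request: bool)
        {
        if ( is_request )
            Files::add_analyzer(f, Files::ANALYZER_OCSP_REQUEST);
        else
            Files::add_analyzer(f, Files::ANALYZER_OCSP_REPLY);
        }
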
Instead of having one big event that tries to parse all the data into a
huge data structure, we do the more common thing and use a series of
smaller events to parse requests and responses (a short handler sketch
follows the list below).
The new events are:
ocsp_request -> raised for an OCSP request, giving version and requestor
ocsp_request_certificate -> raised n times per request, once per cert
ocsp_response_status -> raised for each OCSP response, giving the status
ocsp_response_bytes -> raised for each OCSP response, giving its details
ocsp_response_certificate -> raised for each cert in an OCSP response
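A short handler sketch for two of the new events; the parameter lists
are assumptions derived from the descriptions above, not the exact
prototypes.

    # Hedged sketch: parameter lists are assumed from the descriptions above.
    event ocsp_request(f: fa_file, version: count, requestorName: string)
        {
        print fmt("OCSP request, version %d, requestor %s",
                  version, requestorName);
        }

    event ocsp_response_status(f: fa_file, status: string)
        {
        # e.g. "successful" or one of the OCSP error statuses
        print fmt("OCSP response status: %s", status);
        }
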
This is a tiny bit evil because it uses parts of the SSL protocol
analyzer in the X.509 certificate parser; that is the fault of the
protocol, which replicates the functionality.
This undoes the changes applied in merge 9db27a6d60
and goes back to the state in the branch as of the merge 5ab3b86.
Getting rid of the additional layer of removing analyzers and just
keeping them in the set introduced subtle differences in behavior since
a few calls were still passed along. Skipping all of these with SetSkip
introduced yet other subtle behavioral differences.
* origin/topic/robin/file-analysis-fixes:
Adding test with command line that used to trigger a crash.
Cleaning up a couple of comments.
Fix delay in disabling file analyzers.
Fix file analyzer memory management.
The merge again changes the functionality around a bit - instead of
keeping a list of done analyzers, analyzers are simply set to skipping
when they are removed, and cleaned up later when the AnalyzerSet is
destroyed.
BIT-1782 #merged
When a file analyzer signaled being done with data delivery, the
analyzer would only be scheduled for removal at that point, meaning it
could still receive more data until that action actually took effect.
Now we make sure not to send any more data to such an analyzer.
File analyzers got deleted immediately once the queue with the
corresponding removal operation got drained. That, however, can happen
while the analyzer is still doing work: the queue is drained whenever
any of the "special" file analysis events needing immediate attention
has been executed. This fix now only marks the analyzer for deletion
at that time and postpones the actual operation until the file object
itself is destroyed.
At one place in the code, we did not check the correct return code. This
made it possible for a response to be reported as "good" even though the
OCSP reply was not actually signed by the responder in question.
This also instructs OCSP verification to skip certificate chain
validation, which we do ourselves earlier, because the OCSP verify
function cannot do it correctly (there is no way to pass a timestamp).
The "file_extraction_limit" event was passing a Files::AnalyzerArgs
record as an "any" type. This is not right at the least and may
have been causing a crash for a user at worst.
The order in which the plugin initializers are executed is
compiler-dependent. With this change, Tags will always be generated in
alphabetical order, not in compiler-dependent order.
Compiling a plugin required having access to OpenSSL headers because
they were pulled in by Bro headers that the plugin had to include. This
removes the OpenSSL dependency from those Bro headers.
I'm also reverting a4e5591e. This is a different fix for the same
problem, and reverting that commit gives us a test case. :-)
There was a bug in the new parsing code introduced in
708ede22c6, which parsed validity times
incorrectly if they are before the year 2000. In this case, the 2-digit
year was interpreted as being in the 21st century (e.g., 1999 was
parsed as 2099).
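The intended interpretation follows the usual RFC 5280 windowing rule,
sketched here as a hypothetical helper (the function name is made up for
illustration):

    # Hypothetical helper illustrating the two-digit year handling.
    function utc_two_digit_year(yy: count): count
        {
        # RFC 5280: UTCTime years 50-99 map to 1950-1999, 00-49 to 2000-2049.
        return yy >= 50 ? 1900 + yy : 2000 + yy;
        }
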
Broke out the stats collection into a bunch of new Bifs
in stats.bif. Scripts that use stats collection functions
have also been updated. More work to do.
- Removed the gap_report event. It wasn't used anymore
and is functionally no more capable than scheduling events
and using the get_gap_summary bif (see the sketch after this list).
- Added functionality to Dictionaries to count cumulative
numbers of inserts performed. This is further used to
measure the total number of connections of various types.
Previously only the number of active connections was
available.
- The Reassembler base class now tracks active reassembly
size for all subclasses (File/TCP/Frag & unknown).
- Improvements to the stats.log. Mostly, more information.
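As referenced in the gap_report item above, the replacement boils down
to something like the following sketch; the gap_stats field names are
assumptions.

    # Hedged sketch: poll gap statistics via get_gap_summary() instead of
    # relying on the removed gap_report event.
    global report_gaps: event();

    event report_gaps()
        {
        local gs = get_gap_summary();
        print fmt("acked bytes %d, gap bytes %d", gs$ack_bytes, gs$gap_bytes);
        schedule 1min { report_gaps() };
        }

    event bro_init()
        {
        schedule 1min { report_gaps() };
        }
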
The generalizedtime support for certificates now fits more
seamlessly into how the rest of the code is structured and performs the
different processing for UTC and generalized times at the beginning,
when checking for them.
The test does not output the common name anymore, since the output
format might change across OpenSSL versions (the serial is output
instead).
I also added a bit more error checking for the UTC time case.
These changes should be safe -- testing the failure cases proves a bit
difficult at the moment because OpenSSL seems to fix the values that are
present in the original ASN.1 before passing them on to us. It is thus
not easily possible to trigger the error cases from scriptland.
This also means that a lot of the new error cases we try to catch here
can probably never happen.