* origin/topic/jsiwek/file-signatures:
File type detection changes and fix http.log {orig,resp}_fuids fields.
Various minor changes related to file mime type detection.
Refactor common MIME magic matching code.
Replace libmagic w/ Bro signatures for file MIME type identification.
Conflicts:
scripts/base/init-default.bro
testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log
testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log
BIT-1143 #merged
Add parsing of several more types to SAN extension.
Make error messages of x509 file analyzer more useful.
Fix file ID generation.
You apparently have to be very careful which EndOfFile function of
the file analysis framework you call... otherwise it might try
to close a different file ID. This took me quite a while to find.
addresses BIT-953, BIT-760, BIT-1150
- Improve or just remove some file magic signatures ported from libmagic
that were too general and matched incorrectly too often.
- Fix MHR script's use of fa_file$mime_type before checking if it's
initialized. It may be uninitialized if no signatures match.
- The "fa_file" record now contains a "mime_types" field listing the MIME
types of all magic signatures that matched the file content (the
"mime_type" field is just a shortcut for the strongest match).
Put some methods in file_analysis::Manager that can perform the
matching process and return MIME type results. This also helps
centralize the management and re-use of a signature matcher object.
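For reference, the signature-based detection uses Bro's signature language with
the new file-magic and file-mime conditions. The sketch below is illustrative
rather than one of the shipped signatures (the strength value is arbitrary), and
the mime/strength field names in the script snippet assume the mime_match record
that accompanies fa_file:

  signature file-gif {
      # Match GIF87a/GIF89a headers and report the MIME type with a strength.
      file-magic /^GIF8[79]a/
      file-mime "image/gif", 100
  }

  event file_new(f: fa_file)
      {
      # mime_type is &optional and stays unset when no signature matches,
      # so accesses need a ?$ guard -- the bug fixed in the MHR script.
      if ( f?$mime_type )
          print fmt("strongest match for %s: %s", f$id, f$mime_type);

      # mime_types lists every match, not just the strongest one.
      if ( f?$mime_types )
          for ( i in f$mime_types )
              print fmt("  candidate %s (strength %d)",
                        f$mime_types[i]$mime, f$mime_types[i]$strength);
      }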
* origin/topic/jsiwek/http-file-id-caching:
Revert use of HTTP file ID caching for gaps range request content.
Extend file analysis API to allow file ID caching, adapt HTTP to use it.
BIT-1125 #merged
This allows an analyzer to either provide file IDs associated with some
file content or to cache a file ID that was already determined by
script-layer logic so that subsequent calls to the file analysis
interface can bypass costly detours through the script layer. This can
yield a decent performance improvement for analyzers that are able to
take advantage of it and deal with streaming content (like HTTP).
- The reassembly behavior can be modified per-file by enabling or
disabling the reassembler and/or modifying the size of the reassembly
buffer; see the sketch below.
- Changed the file extraction analyzer to use the stream to avoid
issues with the chunk-based approach not immediately triggering
the file_new event due to MIME type detection delay. Early chunks
frequently got lost before this change.
- Generally things are working now and I'd consider this in testing.
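A sketch of the per-file reassembly control mentioned above, assuming the
script-layer entry points carry the names they have in later releases
(Files::enable_reassembly and Files::set_reassembly_buffer_size):

  event file_new(f: fa_file)
      {
      # Turn reassembly on for just this file and cap its reassembly buffer
      # at 1 MB; both calls affect only the fa_file instance passed in.
      Files::enable_reassembly(f);
      Files::set_reassembly_buffer_size(f, 1024 * 1024);
      }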
- Enable the manager to associate analyzers with a MIME type. With that,
one can now enable all analyzers for, e.g., "image/gif". This is
exposed to script-land as
Files::add_analyzers_for_mime_type(f: fa_file, mtype: string)
For MIME types identified via libmagic, this happens automatically
(via the file_new() handler in files/main.bro); see the sketch after
this list.
- Extend the analyzer API to better match that of protocol analyzers:
- Adding unique analyzer IDs so that we can refer to instances
from script-land.
- Adding subtypes to Components so that a single analyzer
implementation can support different types of analyzers
internally.
- Add an analyzer method SetTag() that allows setting the tag after
construction.
- Adding Init() and Done() methods for consistency with what other
classes offer.
- Add debug logging to the file_analysis stream.
TODO: test cases missing for the new script-land functionality.
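A minimal sketch of the Files::add_analyzers_for_mime_type() entry point named
above, mirroring what the file_new() handler in files/main.bro is described as
doing automatically:

  event file_new(f: fa_file)
      {
      # Attach every analyzer that was associated with the detected type,
      # e.g. all analyzers tied to "image/gif".
      if ( f?$mime_type )
          Files::add_analyzers_for_mime_type(f, f$mime_type);
      }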
Made some class templates for code that seemed duplicated between
file/protocol tags and managers. Seems like it helps a bit and
hopefully can also be used to transition other things that have
enum value "tags" (e.g. logging writers, input readers) to the
plugin system.
This cleans up internals of how analyzer instances get identified by the
tag plus any args given to it and doesn't change script code a user
would write.
- Fix examples/references in the file analysis how-to/usage doc.
- Add Broxygen-generated docs for file analyzer plugins.
- Break FTP::Info type declaration out into its own file to get
rid of some circular dependencies (between s/b/p/ftp/main and
s/b/p/ftp/utils).
- Remove script-layer data input interface (will be managed directly
by input framework later).
- Only track files internally by file ID hash. The chance of collision is
too small to justify also tracking the unique file string.
Thanks to git this merge was less troublesome than I was afraid it
would be. Not all tests pass yet though (and file hashes have changed
unfortunately).
Conflicts:
cmake
doc/scripts/DocSourcesList.cmake
scripts/base/init-bare.bro
scripts/base/protocols/ftp/main.bro
scripts/base/protocols/irc/dcc-send.bro
scripts/test-all-policy.bro
src/AnalyzerTags.h
src/CMakeLists.txt
src/analyzer/Analyzer.cc
src/analyzer/protocol/file/File.cc
src/analyzer/protocol/file/File.h
src/analyzer/protocol/http/HTTP.cc
src/analyzer/protocol/http/HTTP.h
src/analyzer/protocol/mime/MIME.cc
src/event.bif
src/main.cc
src/util-config.h.in
testing/btest/Baseline/coverage.bare-load-baseline/canonified_loaded_scripts.log
testing/btest/Baseline/coverage.default-load-baseline/canonified_loaded_scripts.log
testing/btest/Baseline/istate.events-ssl/receiver.http.log
testing/btest/Baseline/istate.events-ssl/sender.http.log
testing/btest/Baseline/istate.events/receiver.http.log
testing/btest/Baseline/istate.events/sender.http.log
And added an event called "event_queue_flush_point" to mark where that
occurred in the event stream. The FAF now uses an explicit event queue
flush instead of buffering input in order to wait for a file handle to
be returned from the script layer.
- FileAnalysis::Info is now just a record used for logging; the fa_file
record type is defined in init-bare.bro as the analogue of a
connection record.
- Starting to transfer policy hook triggers and analyzer results to
events.
This reverts commit fc267d010d.
There were some diffs caused by this in external test suites that I'm
unsure about; I'm going to go over the optimizations more closely in
a different branch.
When a file handle is needed and the last event in the queue is also
a get_file_handle event with the same arguments, instead of queueing
a new event, just remember to cache/re-use the resulting handle from
the previous event. This depends on get_file_handle handlers not
changing global state that is also used to derive the file handle
string.
This is in contrast to synchronous function calls, which don't work well
because the function call can see script-layer state that doesn't reflect
the state as it will be at that point in the event/network stream.
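The handler-side constraint is easiest to see in a sketch; the event signature
and the set_file_handle() BIF are shown as they look in later releases, and the
HTTP-style handle construction is purely illustrative:

  event get_file_handle(tag: Analyzer::Tag, c: connection, is_orig: bool)
      {
      if ( tag != Analyzer::ANALYZER_HTTP )
          return;

      # The handle must be a pure function of the event arguments; if global
      # state fed into it, a handle cached for an identical (tag, c, is_orig)
      # request could go stale.
      set_file_handle(cat(tag, c$start_time, c$id$orig_h, c$id$orig_p, is_orig));
      }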
Other misc:
- Remove HTTP::MD5 notice.
- Add "last_active" field to FileAnalysis::Info record.
- Replace "conn_uids", "conn_ids" fields in FileAnalysis::Info record
with just a "conns" field containing full connection records.
- The http-methods unit test is failing now, but I think it will be
fixed once I change the file handle callback mechanism to use events
instead.
A retry happens on every new input and also periodically based on a
timer. If a file handle is returned at those times, the input is
forwarded for analysis; otherwise it keeps retrying until a timeout
threshold is reached.
The framework now cycles through callbacks based on a table indexed
by analyzer tags, or the special case of service strings if a given
analyzer is overloaded for multiple protocols (FTP/IRC data). This
lets each protocol script bundle implement the callback locally and
reduces the FAF's external dependencies.
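For illustration, this is roughly how a protocol script plugs its callback into
that table; Files::register_protocol() and the record constructor below use the
names the API settled on in later releases, so treat them as assumptions here:

  function smtp_file_handle(c: connection, is_orig: bool): string
      {
      # Derive the handle purely from connection state, keeping the logic
      # local to the protocol's own script bundle.
      return cat(Analyzer::ANALYZER_SMTP, c$start_time, c$id$orig_h, is_orig);
      }

  event bro_init()
      {
      Files::register_protocol(Analyzer::ANALYZER_SMTP,
                               [$get_file_handle = smtp_file_handle]);
      }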
The add_action, remove_action, and stop BIFs now go through a queue to
ensure that modifications are made at well-defined times and don't end
up invalidating loop iterators.
The Info record now uses a "table[ActionArgs] of ActionResults", which
allows for simultaneous actions of a given type as long as other args
(fields in the ActionArgs record) are different.