- Add a timeout flag to file_analysis.log so it's easy to tell which
files have had at least one timeout triggered.
- Fix ftp-data service tag not being set for reused connections.
- Fix HTTP::Incorrect_File_Type: the MIME types returned by the FAF still
have the charset parameter in them, but the HTTP::mime_types_extensions
table does not, and it requires an exact string match. (Still ugly; see
the sketch after this list.)
- Add TRIGGER_NEW_CONN to track files going over multiple connections.
- Add an initial file/mime type guess for non-linear file transfers.
- Fix a case where file/mime type detection would never be attempted
if the start of the file was a content gap.
- Improve mime type tracking of HTTP byte-range/partial-content,
even if the requests are pipelined or over multiple connections.
- I changed the modbus.events test because having the baseline output
be 80+ MB is nuts and it was sensitive to connection record redefs.
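A minimal sketch of the kind of normalization the HTTP::Incorrect_File_Type
charset issue above calls for (illustrative only, not necessarily the exact
fix applied here), using the script-layer sub() BIF:

    event bro_init()
        {
        # Strip any "; charset=..." parameter suffix so the value matches
        # the plain entries in HTTP::mime_types_extensions.
        local mt = sub("text/html; charset=UTF-8", /;.*/, "");
        print mt;   # prints "text/html"
        }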
The notable difference here is that ftp.log now logs the PORT, PASV,
EPRT, and EPSV commands by default, as well as a separate line for
ftp-data channels for which file extraction was requested.
This difference isn't a direct result of now doing the file extraction
through the file analysis framework; it's just that I noticed even the
old way of tracking the extracted-file name didn't work right, and this
was the approach I came up with so that a locally extracted file can be
associated with a data channel, and that data channel in turn with a
control channel.
This contrasts with using synchronous function calls, which don't work
well because a function call can see script-layer state that doesn't
reflect the state as it will be at the corresponding point in the
event/network stream.
Other misc:
- Remove HTTP::MD5 notice.
- Add "last_active" field to FileAnalysis::Info record.
- Replace "conn_uids", "conn_ids" fields in FileAnalysis::Info record
with just a "conns" fields containing full connection records.
- The http-methods unit test is failing now, but I think it will be
fixed once I change the file handle callback mechanism to use events
instead.
* origin/topic/bernhard/base64:
and re-enable caching of extracted certs
and add base64 bif tests.
re-unify classes
and modernize script.
add base64-encode functionality and bif.
Closes #965.
A retry happens on every new input and also periodically based on a
timer. If a file handle is returned at those times, the input is
forwarded for analysis; otherwise it keeps retrying until a timeout
threshold is reached.
The framework now cycles through callbacks based on a table indexed
by analyzer tags, or the special case of service strings if a given
analyzer is overloaded for multiple protocols (FTP/IRC data). This
lets each protocol script bundle implement the callback locally and
reduces the FAF's external dependencies.
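Conceptually, the dispatch table looks something like the following sketch
(the type and variable names here are assumptions for illustration, not the
framework's actual identifiers):

    # Map an analyzer tag to the callback that builds a file handle for it.
    # Analyzers shared by several protocols (e.g. FTP/IRC data) would instead
    # be keyed on a service string in a second table.
    global handle_callbacks:
        table[AnalyzerTag] of function(c: connection, is_orig: bool): string;

Each protocol's script bundle then registers its own entry, keeping the
handle logic next to the rest of that protocol's scripts.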
For files that go over a single connection, add the connection's start
time to the handle, so the file id will always differ even if the same
connection parameters are later reused to transfer a file (whether the
same one or a different one).
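A hedged sketch of such a handle builder, assuming a cat()-style
concatenation for the FTP data channel (the exact function name and how it
gets registered differ in the real scripts):

    function get_ftp_data_handle(c: connection, is_orig: bool): string
        {
        # Including c$start_time keeps the handle (and thus the file id)
        # unique even if the same 4-tuple is reused for a later transfer.
        return cat("FTP_DATA", c$start_time, c$id, is_orig);
        }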
So much nicer!
Closes #954.
* origin/topic/seth/notice-framework-updates:
Update notice framework documentation to represent the new reality.
Complete removal of the old table based notice policy mechanism.
Updates for the notices framework.
This allows replacing an ugly openssl call in one of the policy
scripts with a still-ugly-but-less-so call to base64_encode.
I do not know if I split the Base64 classes in a "smart" way... :)
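For reference, a small usage sketch of the BIF pair, assuming the
encode_base64()/decode_base64() names used in later releases:

    event bro_init()
        {
        print encode_base64("hello, world");     # aGVsbG8sIHdvcmxk
        print decode_base64("aGVsbG8sIHdvcmxk"); # hello, world
        }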
The add_action, remove_action, and stop BIFs now go through a queue to
ensure that modifications are made at well-defined times and don't end
up invalidating loop iterators.
The Info record now uses a "table[ActionArgs] of ActionResults", which
allows for simultaneous actions of a given type as long as other args
(fields in the ActionArgs record) are different.
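The shape of that change, as a hedged sketch with made-up field names (the
real ActionArgs/ActionResults records carry more than this):

    type ActionArgs: record {
        act: string;                         # stand-in for the action tag
        extract_filename: string &optional;  # differing args = separate entry
    };

    type ActionResults: record {
        md5: string &optional;
    };

    # Stand-in for the relevant part of FileAnalysis::Info.
    type Info: record {
        actions: table[ActionArgs] of ActionResults;
    };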
Added the file extraction action and did other misc. cleanup. Most of
the minimal core features/support for file analysis should be working at
this point; now it's just a matter of fleshing things out.
- Moved the Notice::notice event and Notice::policy table to both be hooks.
- Renamed the old Notice::policy to Notice::policy_table and documented it as deprecated.
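With the hook-based mechanism, tweaking notice handling looks like the
following (a standard example of the new style; the notice type is just for
illustration and needs the corresponding script loaded):

    @load protocols/ssl/validate-certs

    hook Notice::policy(n: Notice::Info) &priority=5
        {
        # E-mail invalid-certificate notices in addition to logging them.
        if ( n$note == SSL::Invalid_Server_Cert )
            add n$actions[Notice::ACTION_EMAIL];
        }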
Added a generic gtpv1_message event generated for any GTP message type.
Added specific events for the create/update/delete PDP context
request/response messages.
Addresses #934.
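A hedged handler sketch for the generic event, assuming it delivers the
connection plus a parsed GTPv1 header record (the field names here are
assumptions):

    event gtpv1_message(c: connection, hdr: gtpv1_hdr)
        {
        # Report every GTPv1 message type seen on the outer tunnel connection.
        print fmt("GTPv1 msg type %d from %s", hdr$msg_type, c$id$orig_h);
        }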
* origin/topic/matthias/notary:
Small cosmetic changes.
Give log buffer the correct name.
Simplify delayed logging of SSL records.
Implement delay-token style SSL logging.
More style tweaks: replace spaces with tabs.
Factor notary code into separate file.
Adhere to Bro coding style guidelines.
Enhance ssl.log with information from notary.
Closes #928.
* topic/robin/exit-after-terminate:
Updating submodule(s).
Fixing exit-after-terminate when used with bare mode.
New option exit_only_after_terminate to prevent Bro from exiting.
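Usage is a one-line redef, for example to keep Bro running for input
sources that outlive the initial read:

    # Don't exit when there is no more input; wait for an explicit
    # call to terminate() instead.
    redef exit_only_after_terminate = T;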
These cases should be avoidable by fixing the scripts where they occur,
and catching them can also help find typos that would otherwise lead to
unintentional runtime behavior.
Adding this already revealed several scripts where a field in an inlined
record was never removed after a code refactor.
* origin/topic/bernhard/input-logging-commmon-functions:
add the last of Robin's suggestions (separate info-struct for constructors).
port memory leak fix from master
harmonize function naming
move AsciiInputOutput over to threading
and thinking about it, ascii-io doesn't need the separator
change constructors
and factor stuff out of the input framework too.
factor out ascii input/output.
std::string accessors to escape_sequence functionality
intermediate commit - it has been over a month since I touched this...
I cleaned up the AsciiInputOutput class somewhat, including renaming
it to AsciiFormatter, renaming some of its methods, and turning the
static methods into members for consistency.
Closes #929.
Moved this functionality to be internal instead of in the script-layer
event handlers. The issue with the latter is that bad things can happen
between the time a reporter event handler is dispatched and the time it
is executed, and if Bro crashes in that window, the message may never be
seen/logged.
Addresses #930 (and revisits #836).
This commit moves the notary script into the policy directory, along with some
architectural changes: the main SSL script now has functionality to add and
remove tokens for a given record. When adding a token, the script delays the
logging until the token has been removed or until the record exceeds a maximum
delay time.
As before, the base SSL script stores all records sequentially and buffers even
non-delayed records for the sake of having an ordered log file. If this
turns out not to be so important, we can easily revert to simpler logic.
(This is still WIP; some debugging statements still linger.)
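A hedged sketch of the add/remove-token pattern described above (the helper
names are assumptions, not necessarily the script's actual API):

    event ssl_established(c: connection)
        {
        # Hold back this connection's ssl.log line until the notary result
        # has been attached or the maximum delay is exceeded.
        SSL::delay_log(c$ssl, "notary");
        }

    # Called from wherever the notary response gets processed (illustrative).
    function notary_done(info: SSL::Info)
        {
        SSL::undelay_log(info, "notary");
        }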