When a file handle is needed and the last event in the queue is already
a get_file_handle event with the same arguments, don't queue a new
event; instead, cache and re-use the handle produced by the previous
event. This depends on get_file_handle handlers not changing global
state that is also used to derive the file handle string.
This contrasts with using synchronous function calls, which doesn't
work well because the function call can see script-layer state that
doesn't yet reflect what the state will be at the corresponding point
in the event/network stream.
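To make the re-use condition concrete, here is a minimal C++ sketch of
the deduplication check. GetFileHandleEvent, RequestFileHandle, and
pending_uses are hypothetical stand-ins, not Bro's actual internals:

    #include <deque>
    #include <string>
    #include <tuple>

    // Hypothetical stand-in for a queued get_file_handle event.
    struct GetFileHandleEvent {
        std::string analyzer;   // analyzer tag
        std::string conn_id;    // connection identifier
        bool is_orig;           // direction flag
        int pending_uses = 1;   // inputs waiting on this event's handle

        bool SameArgs(const GetFileHandleEvent& o) const
            {
            return std::tie(analyzer, conn_id, is_orig) ==
                   std::tie(o.analyzer, o.conn_id, o.is_orig);
            }
    };

    static std::deque<GetFileHandleEvent> event_queue;

    // Instead of always queueing, piggyback on the tail event when its
    // arguments match: its resulting handle gets cached and re-used.
    void RequestFileHandle(const GetFileHandleEvent& ev)
        {
        if ( ! event_queue.empty() && event_queue.back().SameArgs(ev) )
            ++event_queue.back().pending_uses;  // re-use, don't re-queue
        else
            event_queue.push_back(ev);
        }

The key point is that a duplicate request never re-enters the queue; it
just piggybacks on the pending event's eventual result.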
Other misc:
- Remove HTTP::MD5 notice.
- Add "last_active" field to FileAnalysis::Info record.
- Replace "conn_uids", "conn_ids" fields in FileAnalysis::Info record
with just a "conns" fields containing full connection records.
- The http-methods unit test is failing now, but I think it will be
fixed once I change the file handle callback mechanism to use events
instead.
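Sketched as a C++ stand-in (the actual record is defined at the Bro
script layer; the types and elided fields here are illustrative only),
the Info changes from the list above amount to:

    #include <ctime>
    #include <vector>

    struct Connection { /* full connection record, elided */ };

    // Illustrative stand-in for FileAnalysis::Info after the change.
    struct Info {
        std::time_t last_active;        // new: time of last activity
        std::vector<Connection> conns;  // replaces conn_uids/conn_ids
                                        // with full connection records
        // ... other fields unchanged ...
    };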
A retry happens on every new input and also periodically based on a
timer. If a file handle is available at those times, the input is
forwarded for analysis; otherwise, retrying continues until a timeout
threshold is reached.
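A rough C++ sketch of that retry loop; TryGetHandle, ForwardInput, and
the 15-second threshold are all hypothetical:

    #include <iostream>
    #include <optional>
    #include <string>

    // Hypothetical: returns a handle if the pending get_file_handle
    // event has produced one yet, otherwise nothing.
    static std::optional<std::string> TryGetHandle() { return std::nullopt; }

    // Hypothetical: hand the buffered input over for analysis.
    static void ForwardInput(const std::string& handle)
        { std::cout << "forwarding under handle " << handle << "\n"; }

    struct FileHandleRequest {
        double created;                           // time request was made
        static constexpr double kTimeout = 15.0;  // illustrative threshold

        // Called on every new input and from a periodic timer. Returns
        // true once the request is finished (forwarded or timed out).
        bool Retry(double now) const
            {
            if ( auto handle = TryGetHandle() )
                {
                ForwardInput(*handle);
                return true;
                }
            return now - created > kTimeout;  // give up past the threshold
            }
    };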
The framework now dispatches to callbacks via a table indexed by
analyzer tags, or, as a special case, by service strings when a given
analyzer is overloaded for multiple protocols (FTP/IRC data). This
lets each protocol script bundle implement the callback locally and
reduces the FAF's external dependencies.
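The dispatch might look roughly like the following C++; the string
keys and the HandleCallback signature are illustrative (in Bro the
table and callbacks live at the script layer):

    #include <functional>
    #include <map>
    #include <string>

    using HandleCallback =
        std::function<std::string(const std::string& conn, bool is_orig)>;

    // Callbacks keyed by analyzer tag; an analyzer overloaded for
    // multiple protocols registers under a service string instead.
    static std::map<std::string, HandleCallback> handle_callbacks;

    void RegisterCallback(const std::string& key, HandleCallback cb)
        { handle_callbacks[key] = std::move(cb); }

    std::string GetHandle(const std::string& tag, const std::string& service,
                          const std::string& conn, bool is_orig)
        {
        // Prefer the analyzer tag; fall back to the service string.
        auto it = handle_callbacks.find(tag);
        if ( it == handle_callbacks.end() )
            it = handle_callbacks.find(service);
        return it != handle_callbacks.end() ? it->second(conn, is_orig)
                                            : std::string();
        }

Falling back from tag to service string is what lets a single analyzer
serve both FTP and IRC data transfers.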
The add_action, remove_action, and stop BIFs now go through a queue to
ensure that modifications are made at well-defined times and don't end
up invalidating loop iterators.
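The queuing idea, sketched in C++ (ActionSet and its methods are
hypothetical; the real BIFs operate on core-side state):

    #include <functional>
    #include <set>
    #include <vector>

    using Action = int;  // stand-in for an analysis action

    struct ActionSet {
        std::set<Action> actions;
        std::vector<std::function<void()>> pending;  // queued modifications

        // add_action/remove_action/stop funnel through here instead of
        // mutating 'actions' directly.
        void QueueAdd(Action a)
            { pending.push_back([this, a] { actions.insert(a); }); }
        void QueueRemove(Action a)
            { pending.push_back([this, a] { actions.erase(a); }); }

        // Visit every action; callbacks may queue modifications, but the
        // set itself only changes after the loop, at a well-defined time.
        template <typename F>
        void ForEachAction(F f)
            {
            for ( const auto& a : actions )
                f(a);  // may queue add/remove; iterators stay valid

            for ( auto& mod : pending )
                mod();
            pending.clear();
            }
    };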
The Info record now uses a "table[ActionArgs] of ActionResults", which
allows for simultaneous actions of a given type as long as other args
(fields in the ActionArgs record) are different.
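In C++ terms, an illustrative stand-in for that table (the field names
are guesses based on the record names in this note):

    #include <map>
    #include <string>
    #include <tuple>

    // Stand-ins for the script-layer ActionArgs/ActionResults records.
    struct ActionArgs {
        int act;                // the action type
        std::string extra_arg;  // e.g. an extraction filename

        bool operator<(const ActionArgs& o) const
            { return std::tie(act, extra_arg) < std::tie(o.act, o.extra_arg); }
    };

    struct ActionResults { /* per-action result fields, elided */ };

    // Keying on the full argument record (not just the action type)
    // permits simultaneous actions of the same type with different args.
    using ActionTable = std::map<ActionArgs, ActionResults>;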