* send end_of_data event for all kinds of streams
* send process_finished event containing the exit code of the child process for executed programs
* move raw-tests to a separate directory
* expose the name of the input stream to readers
* better handling of some error cases in the raw reader
* new force_kill option for the raw reader, which SIGKILLs processes on exit
The ordering in which events arrive in the main loop is a bit peculiar at the
moment. The process_finished event arrives in scriptland before all of the
other events, even though it should be sent last; I have not yet fully figured
that out. The sketch below shows the new events in use.
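To illustrate, here is a minimal scriptland sketch of consuming the new
events. The event names and signatures (Input::end_of_data,
InputRaw::process_finished) follow the changes above; passing force_kill
through the stream's config table is an assumption.

    type Val: record {
        s: string;
    };

    event one_line(description: Input::EventDescription, tpe: Input::Event,
                   s: string)
        {
        print fmt("output line: %s", s);
        }

    # Sent for all kinds of streams once all current data is delivered.
    event Input::end_of_data(name: string, source: string)
        {
        print fmt("end of data on stream %s (%s)", name, source);
        }

    # Sent for executed programs once the child process exits.
    event InputRaw::process_finished(name: string, source: string,
                                     exit_code: count, signal_exit: bool)
        {
        print fmt("%s finished with exit code %d", source, exit_code);
        }

    event bro_init()
        {
        # A trailing '|' makes the raw reader execute the source as a program.
        Input::add_event([$source="ls /tmp |", $reader=Input::READER_RAW,
                         $mode=Input::STREAM, $name="ls", $fields=Val,
                         $ev=one_line, $want_record=F,
                         # Assumed config key for the new SIGKILL-on-exit option:
                         $config=table(["force_kill"] = "T")]);
        }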
* origin/topic/bernhard/base64:
Re-enable caching of extracted certs.
Add base64 bif tests.
Re-unify classes.
Modernize script.
Add base64-encode functionality and bif.
Closes #965.
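A quick sketch of the new bif in use, round-tripping through the existing
decode_base64():

    event bro_init()
        {
        local enc = encode_base64("Hello, Bro!");
        print enc;                  # SGVsbG8sIEJybyE=
        print decode_base64(enc);   # Hello, Bro!
        }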
* origin/topic/seth/software-version-updates2:
Correctly handle DNS lookups for software version ranges.
Improvements to vulnerable software detection.
Update software version parsing and comparison to account for a third numeric subversion.
Closes #938.
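As a rough sketch of what the third numeric subversion means for
comparisons: Software::cmp_versions() orders versions like strcmp. The
minor2 field name is an assumption based on the commit subject.

    @load base/frameworks/software

    event bro_init()
        {
        # Assumed field name minor2 for the third numeric subversion,
        # i.e. the "3" in "1.2.3".
        local v1 = Software::Version($major=1, $minor=2, $minor2=3);
        local v2 = Software::Version($major=1, $minor=2, $minor2=4);

        # cmp_versions() returns -1, 0, or 1.
        print Software::cmp_versions(v1, v2);   # -1: v1 is older
        }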
PrepareStop() is now SignalStop() and just signals a thread that it
should terminate. After that's called, WaitForStop() (formerly Stop())
waits for it to actually finish processing.
When stopping writers during operation, we now no longer wait for them
to finish.
Once a BasicThread leaves its run() method, it is now marked for
cleanup, and the ThreadMgr will soon join it to release the OS
resources.
Also added a function Log::remove_stream() that removes a logging
stream, stopping all writer threads associated with it.
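For example, a script can now tear down a stream and its writer threads
in one call:

    event bro_init()
        {
        # Disables conn.log and stops the writer threads behind it.
        Log::remove_stream(Conn::LOG);
        }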
Note, however, that removing a *filter* from a stream still doesn't
clean up any threads. The problem is that because output paths can be
created dynamically, it's unclear whether a writer thread will still be
needed in the future. We could clean writers up with timeouts, but that
doesn't sound great either. So for now, the only way to reliably clean
up logging threads is to remove the entire stream.
Also note that cleanup doesn't work with input threads yet, which
don't seem to terminate (at least in the case I tried).
A retry happens on every new input and also periodically based on a
timer. If a file handle is returned at those times, the input is
forwarded for analysis; otherwise, it keeps retrying until a timeout
threshold is reached.
The framework now cycles through callbacks based on a table indexed by
analyzer tags or, as a special case, by service strings when a given
analyzer is overloaded for multiple protocols (FTP/IRC data). This
lets each protocol script bundle implement the callback locally and
reduces the FAF's external dependencies.
For files that go over a single connection, the connection start time is
added to the handle, so the file id will always differ even if the same
connection parameters are later reused to transfer a file (the same one
or a different one); see the sketch below.
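A sketch of what such a locally implemented callback could look like.
The registration entry point is written here as
FileAnalysis::register_protocol(), which is an assumed name (and the
analyzer tag spelling may vary by version); id_string() and cat() are
standard utility functions, and the connection start time goes into the
handle as described above.

    @load base/utils/conn-ids

    function http_file_handle(c: connection, is_orig: bool): string
        {
        # Including c$start_time makes the handle (and thus the file id)
        # unique even if the same 5-tuple is later reused.
        return cat(ANALYZER_HTTP, c$start_time, id_string(c$id), is_orig);
        }

    event bro_init()
        {
        # Hypothetical registration call; the real entry point may differ.
        FileAnalysis::register_protocol(ANALYZER_HTTP, http_file_handle);
        }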
At first glance, this kind of seems to work. On Mac OS X you need a
newer SQLite than the system-installed one; the hanging problem only
occurs with the system version...