- New, expanded API.
- Calculations moved into plugins.
- Scripts using measurement framework ported.
- Updated the script-land queue implementation to make it more generic.
I've used the opportunity to also clean up DPD's expect_connection()
infrastructure, and renamed that bif to schedule_analyzer(), which
seems more appropriate. One can now also schedule more than one
analyzer per connection.
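For illustration, a minimal sketch of how scheduling might look from
script-land; the argument shape (endpoints, port, analyzer tag,
timeout) is assumed to mirror the old expect_connection(), and the
FTP_DATA tag is just an illustrative example:

    function expect_data_channel(c: connection, data_port: port)
        {
        # Schedule the FTP_DATA analyzer for the expected data
        # connection. A second call with a different tag would attach
        # an additional analyzer to the same expected connection.
        schedule_analyzer(c$id$orig_h, c$id$resp_h, data_port,
                          Analyzer::ANALYZER_FTP_DATA, 5 min);
        }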
TODOs:
- "make install" is probably broken.
- Broxygen is probably broken for plugin-defined events.
- event groups are broken (do we want to keep them?)
- parallel btest is broken, but I'm not sure why ...
  (tests all pass individually, but there are lots of errors when
  running in parallel; must be related to the *.bif restructuring).
- Document API for src/plugin/*
- Document API for src/analyzer/Analyzer.h
- Document API for scripts/base/frameworks/analyzer
There's now a new directory "src/protocols/", and the plan is for each
protocol analyzer to eventually have its own subdirectory in there
that contains everything it defines (C++/pac/bif). The infrastructure
to make that happen is in place, and two analyzers have been
converted to the new model, HTTP and SSL; there's no further
HTTP/SSL-specific code anywhere else in the core anymore (I believe :-)
Further changes:
- -N lists available plugins, -NN lists more details on what these
plugins provide (analyzers, bif elements). (The latter does not
work for analyzers that haven't been converted yet).
- *.bif.bro files now go into scripts/base/bif/, with
  scripts/base/bif/plugins/ for bif files provided by plugins.
- I've factored out the bifcl/binpac CMake magic from
src/CMakeLists.txt to cmake/{BifCl,Binpac}
- There's a new cmake/BroPlugin that contains magic to allow
plugins to have a simple CMakeLists.txt. The hope is that
eventually the same CMakeLists.txt can be used for compiling a
plugin either statically or dynamically.
- bifcl has a new option -c that changes the code it generates so
that it can be used with a plugin.
TODOs:
- "make install" is probably broken.
- Broxygen is probably broken for plugin-defined events.
- event groups are broken (do we want to keep them?)
- Add a timeout flag to file_analysis.log so it's easy to tell which
  files have had at least one timeout trigger.
- Fix ftp-data service tag not being set for reused connections.
- Fix HTTP::Incorrect_File_Type: mime types returned by the FAF still
  have the charset in them, but the HTTP::mime_types_extensions table
  does not, and the lookup requires an exact string match. (still
  ugly; see the sketch after this list)
- Add TRIGGER_NEW_CONN to track files going over multiple connections.
- Add an initial file/mime type guess for non-linear file transfers.
- Fix a case where file/mime type detection would never be attempted
if the start of the file was a content gap.
- Improve mime type tracking of HTTP byte-range/partial-content,
even if the requests are pipelined or over multiple connections.
- I changed the modbus.events test because having the baseline output
be 80+ MB is nuts and it was sensitive to connection record redefs.
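One possible direction for the HTTP::Incorrect_File_Type mismatch
above is to strip the charset parameter before the table lookup; this
is a sketch only, not the actual fix:

    function base_mime_type(mt: string): string
        {
        # Strip any "; charset=..." parameter so the exact-match lookup
        # against HTTP::mime_types_extensions can succeed.
        return strip(sub(mt, /;.*/, ""));
        }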
The notable difference here is that ftp.log now logs by default the
PORT, PASV, EPRT, and EPSV commands, as well as a separate line for
ftp-data channels in which file extraction was requested.
This difference isn't a direct result of now doing the file extraction
through the file analysis framework; it's because even the old way of
tracking extracted-file names didn't work right, and this was the way
I came up with to associate a locally extracted file with a data
channel, and that data channel with its control channel.
All tests pass with one exception: some Broxygen tests are broken
because dpd_config doesn't exist anymore. Need to update the mechanism
for auto-documenting well-known ports.
This is a larger internal change that moves the analyzer
infrastructure to a more flexible model where the available analyzers
don't need to be hardcoded at compile time anymore. While currently
they actually still are, this will in the future enable external
analyzer plugins. For now, it does already add the capability to
dynamically enable/disable analyzers from script-land, replacing the
old Analyzer::Available() methods.
There are three major parts going into this:
- A new plugin infrastructure in src/plugin. This is independent
  of analyzers and will eventually support plugins for other parts
  of Bro as well (think: readers and writers). The goal is that
  plugins can alternatively be compiled in statically or loaded
  dynamically at runtime from a shared library. While the latter
  isn't there yet, there'll be almost no code change for a plugin
  to make it dynamic later (hopefully :)
- New analyzer infrastructure in src/analyzer. I've moved a number
of analyzer-related classes here, including Analyzer and DPM;
the latter now renamed to Analyzer::Manager. More will move here
later. Currently, there's only one plugin here, which provides
*all* existing analyzers. We can modularize this further in the
future (or not).
- A new script interface in base/frameworks/analyzer. I think that
  this will eventually replace the dpd framework, but for now
  that's still there as well, though some parts have moved over.
I've also removed the dpd_config table; ports are now configured via
the analyzer framework. For example, for SSH:
    const ports = { 22/tcp } &redef;

    event bro_init() &priority=5
        {
        ...
        Analyzer::register_for_ports(Analyzer::ANALYZER_SSH, ports);
        }
As you can see, the old ANALYZER_SSH-style constants have moved into
an enum in the Analyzer namespace.
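The dynamic enable/disable capability mentioned above is exposed to
script-land as well; a minimal sketch, assuming the functions take
one of these analyzer tags (the SYSLOG tag is just an example):

    event bro_init()
        {
        # Turn an analyzer off at startup; Analyzer::enable_analyzer()
        # would turn it back on.
        Analyzer::disable_analyzer(Analyzer::ANALYZER_SYSLOG);
        }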
This is all hardly tested right now, and not everything works yet.
There's also a lot more cleanup to do (moving more classes around;
removing no longer used functionality; documenting script and C++
interfaces; regression tests). But it seems to generally work with a
small trace at least.
The debug stream "dpm" shows more about the loaded/enabled analyzers.
A new option -N lists loaded plugins and what they provide (including
those compiled in statically; i.e., right now it outputs all the
analyzers).
This is all not cast in stone yet; for some things we need to see if
they make sense this way. Feedback welcome.
This is in contrast to using synchronous function calls, which don't
work well because a function call can see script-layer state that
doesn't yet reflect the state as it will be at that point in the
event/network stream.
Other misc:
- Remove HTTP::MD5 notice.
- Add "last_active" field to FileAnalysis::Info record.
- Replace "conn_uids", "conn_ids" fields in FileAnalysis::Info record
with just a "conns" fields containing full connection records.
- The http-methods unit test is failing now, but I think it will be
fixed once I change the file handle callback mechanism to use events
instead.
* send end_of_data event for all kinds of streams
* send process_finished event containing the exit code of the child process for executed programs
* move raw-tests to separate directory
* expose name of input stream to readers
* better handling of some error cases in raw reader
* new force_kill option for raw reader which SIGKILLs processes on exit
The order in which events arrive in the main loop is a bit peculiar at the moment.
The process_finished event arrives in script-land before all of the other events, even though
it should be sent last. I have not yet fully figured that out.
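For reference, a minimal sketch of driving a program through the raw
reader; the stream setup follows the input framework, but passing
force_kill through the stream's config table and the exact shape of
process_finished are assumptions:

    type Val: record {
        s: string;
    };

    event raw_line(desc: Input::EventDescription, tpe: Input::Event,
                   s: string)
        {
        print fmt("got line: %s", s);
        }

    event InputRaw::process_finished(name: string, source: string,
                                     exit_code: count, signal_exit: bool)
        {
        print fmt("%s exited with code %d", source, exit_code);
        }

    event bro_init()
        {
        # The trailing "|" tells the raw reader to execute the source
        # as a program and read its stdout.
        Input::add_event([$source="/usr/bin/uptime |",
                          $reader=Input::READER_RAW,
                          $mode=Input::STREAM,
                          $name="uptime",
                          $fields=Val,
                          $ev=raw_line,
                          $want_record=F,
                          $config=table(["force_kill"] = "T")]);
        }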
* origin/topic/bernhard/base64:
and re-enable caching of extracted certs
and add base64 bif tests.
re-unify classes
and modernize script.
add base64-encode functionality and bif.
Closes #965.
Once a BasicThread leaves its run() method, the thread is now marked
for cleanup, and the ThreadMgr will soon join it to release its OS
resources.
Also, this adds a function Log::remove_stream() that removes a logging
stream, stopping all writer threads that are associated with it.
Note, however, that removing a *filter* from a stream still doesn't
clean up any threads. The problem is that because output paths can
be created dynamically, it's unclear whether a writer thread will
still be needed in the future. We could clean writers up with
timeouts, but that doesn't sound great either. So for now, the only
way to reliably clean up logging threads is to remove the entire
stream.
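A minimal sketch of removing a stream, using the Conn stream as an
example:

    event bro_init()
        {
        # Removes the stream and stops all writer threads associated
        # with it.
        Log::remove_stream(Conn::LOG);
        }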
Also note that cleanup doesn't work with input threads yet, which
don't seem to terminate (at least in the case I tried).
A retry happens on every new input and also periodically based on a
timer. If a file handle is returned at those times, the input is
forwarded for analysis; otherwise it keeps retrying until a timeout
threshold is reached.
The framework now cycles through callbacks based on a table indexed
by analyzer tags, or the special case of service strings if a given
analyzer is overloaded for multiple protocols (FTP/IRC data). This
lets each protocol script bundle implement the callback locally and
reduces the FAF's external dependencies.
For files that go over a single connection, add the connection start
time to the handle, so the file id will always differ even if the same
connection parameters are later used to transfer a file (the same one
or a different one).
So much nicer!
Closes #954.
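For illustration, a hypothetical per-protocol file handle callback
along these lines; the function name and the registration mechanism
are assumptions, not the framework's actual identifiers:

    # id_string() comes from base/utils/conn-ids.
    function get_http_file_handle(c: connection, is_orig: bool): string
        {
        # Including c$start_time makes the handle unique even if the
        # same 5-tuple is later reused for another transfer.
        return fmt("%s %s %s %s", Analyzer::ANALYZER_HTTP, c$start_time,
                   id_string(c$id), is_orig);
        }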
* origin/topic/seth/notice-framework-updates:
Update notice framework documentation to represent the new reality.
Complete removal of the old table based notice policy mechanism.
Updates for the notices framework.
This allows replacing an ugly openssl call in one of the policy
scripts with a still-but-less-ugly call to base64_encode.
I do not know if I split the Base64 classes in a "smart" way... :)
The add_action, remove_action, and stop BIFs now go through a queue to
ensure that modifications are made at well-defined times and don't end
up invalidating loop iterators.
The Info record now uses a "table[ActionArgs] of ActionResults", which
allows for simultaneous actions of a given type as long as other args
(fields in the ActionArgs record) are different.
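A minimal sketch of the idea; the ActionArgs field and enum names
here are illustrative assumptions, not necessarily the record's
actual layout:

    function count_extract_actions(): count
        {
        local acts: table[FileAnalysis::ActionArgs] of count;
        # Two EXTRACT actions coexist because their args differ.
        acts[FileAnalysis::ActionArgs($act=FileAnalysis::ACTION_EXTRACT,
                                      $extract_filename="copy1.dat")] = 1;
        acts[FileAnalysis::ActionArgs($act=FileAnalysis::ACTION_EXTRACT,
                                      $extract_filename="copy2.dat")] = 2;
        return |acts|;  # two distinct entries
        }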