Since the command-line option for reading NetFlow went away, this code
has been neither used nor tested. We might bring it back later, but for
now I'd rather remove it than keep dead code around that seems to
suggest we support it.
* origin/topic/vladg/file-analysis-exe-analyzer: (31 commits)
Tweak the PE OS versions based on real-world traffic.
Update pe/main.bro to use register_for_mime_types, ensuring it will also work with the upcoming Files framework changes.
A bit of final core-level cleanup.
A bit of final script cleanup.
Update baselines.
Add a btest for the PE analyzer.
Add a PE memleak test, and fix a memleak.
Documentation and a bit of overall cleanup.
Add data about which tables are present.
Remove the .idata parsing, as it can be more complicated in some cases.
Fix a PE analyzer failure where the IAT isn't aligned with a section boundary.
PE: Rehash the log a bit.
Make base_of_data optional.
Fix support for PE32+ files.
PE Analyzer cleanup.
Checkpoint - Import Address Table being parsed.
Some changes to fix PE analyzer on master.
Parse PE section headers.
Updated PE analyzer to work with changes in master.
In progress checkpoint. Things are starting to work.
...
BIT-1369 #merged
* origin/topic/seth/more-file-type-ident-fixes:
File API updates complete.
Fixes for file type identification.
API changes to file analysis mime type detection.
Make HTTP 206 reassembly require ETags by default.
More file type identification improvements.
Fix an issue with files having gaps before the bof_buffer is filled.
Fix an issue with packet loss in http file reporting.
Adding WOFF fonts to file type identification.
Extended JSON matching and added OCSP responses.
Another large signature update.
More signature updates.
Even more file type ident clean up.
Lots of fixes for file type identification.
BIT-1368 #merged
Noticed these gave warnings due to a missing namespace, but rather than
fix them I'm just removing them, since they reference names in the same
module/file that will appear inches away from each other in the final
output.
- Backed out the ETag changes. The real world is more complicated
  than just using ETags to identify the same file.
- A bit of code simplification in the http base scripts.
- Test updates (more existing small problems were identified!).
* origin/topic/johanna/conn-threshold:
Wrap threshold stuff up - fix two small bugs and update baselines.
Update GridFTP analyzer to use connection thresholding instead of polling.
Add a high-level API for thresholding that holds lists of thresholds and raises an event for each threshold exactly once.
Allow setting packet and byte thresholds for connections.
BIT-1377 #merged
One can now add an option "offset" to the config map. Positive offsets
are interpreted to be from the beginning of the file, negative ones from
the end of the file (-1 is the end of the file).
This only works for the raw reader in streaming or manual mode; it does
not work with executables.
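For illustration, a minimal sketch of passing the new option through the
Input framework's config map (the path, record type and event name here
are made up for the example; only the "offset" key comes from this
change):

  type Val: record {
      s: string;
  };

  event raw_line(description: Input::EventDescription, tpe: Input::Event, s: string)
      {
      print s;
      }

  event bro_init()
      {
      Input::add_event([$source="/var/log/messages", $reader=Input::READER_RAW,
                        $mode=Input::STREAM, $name="tail-messages", $fields=Val,
                        $ev=raw_line,
                        # "-1" means: start reading at the end of the file.
                        $config=table(["offset"] = "-1")]);
      }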
Addresses BIT-985
This extends the ConnSize analyzer to be able to raise events when each
direction of a connection crosses a certain number of bytes or packets.
Thresholds are set using
set_conn_bytes_threshold(c$id, [num-bytes], [direction]);
and
set_conn_packets_threshold(c$id, [num-packets], [direction]);
respectively.
They raise the events
event conn_bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool)
and
event conn_packets_threshold_crossed(c: connection, threshold: count, is_orig: bool)
respectively.
Current thresholds can be examined using get_conn_bytes_threshold and
get_conn_packets_threshold.
Currently, only one threshold can be set per connection.
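A rough usage sketch, assuming the [direction] argument is the same
is_orig boolean that the events carry (the concrete numbers are
arbitrary examples):

  event connection_established(c: connection)
      {
      # Watch the originator side of every new connection.
      set_conn_bytes_threshold(c$id, 1048576, T);
      set_conn_packets_threshold(c$id, 1000, T);
      }

  event conn_bytes_threshold_crossed(c: connection, threshold: count, is_orig: bool)
      {
      print fmt("%s crossed %d bytes (orig: %s)", c$uid, threshold, is_orig);
      }

  event conn_packets_threshold_crossed(c: connection, threshold: count, is_orig: bool)
      {
      print fmt("%s crossed %d packets (orig: %s)", c$uid, threshold, is_orig);
      }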
This also fixes a bug where child packet analyzers of the TCP analyzer
were not found using FindChild.
Under remote communication overload conditions, the child->parent
chunked IO may start rejecting chunks once it is over the hard cap. Some
messages are made up of two chunks; accepting the first part but
rejecting the second can put the parent in a bad state, and the next two
chunks it reads are then likely to cause errors.
This patch removes the rejection functionality completely and so now
relies solely on shutting down remote peer connections to alleviate
temporary overload conditions. The "chunked_io_buffer_soft_cap" script
variable can now be used to tune when this shutdown starts happening,
and the default setting is now double what it used to be. Under constant
overload conditions, communication.log should keep stating "queue to
parent filling up; shutting down heaviest connection".
An alternative to removing the hard cap rejection code completely could
be to ensure that messages consisting of a pair of chunks never have
their second chunk rejected when it is written.
Addresses BIT-1376
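For reference, tuning the cap from site policy would look roughly like
this (assuming the variable is declared &redef; the unit and value here
are assumptions for the example, not the shipped default):

  # Raise or lower the point at which the heaviest remote connection
  # gets shut down.
  redef chunked_io_buffer_soft_cap = 800000;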
Note: loading external-ca-list.bro in the external tests increases
execution times by 1-2%; if I remove that @load, times go back to
normal, so this doesn't seem to indicate a problem.
* origin/topic/johanna/ca-list:
Update Mozilla CA list.
BIT-1375 #merged
Turns out that in error situations, the final finish message might not
reach the writer anymore, as communication between the threads will be
shut down. Instead of aborting, we now just clean up in that case and
proceed. This doesn't change any other behaviour; the original error
check was in place mostly to help debug the data flow between the
threads anyway.
Addresses BIT-1331.