zeek/testing/btest/core/tcp/large-file-reassembly.bro
Jon Siwek 2b3c2bd394 Fix reassembly of data w/ sizes beyond 32-bit capacities (BIT-348).
The main change is that reassembly code (e.g. for TCP) now uses
int64/uint64 (signedness is situational) data types in place of int
types in order to support delivering data to analyzers once a transfer
passes 2GB thresholds.  There are also changes in logic that accompany
the change in data types, e.g. fixes for TCP sequence space arithmetic
inconsistencies.
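
As a rough, standalone illustration (not code from Zeek's reassembler; the
names below are hypothetical), the representable range is what matters: a
32-bit signed offset tops out just under 2GB, while 64-bit types keep
offset arithmetic exact well beyond that point:

    // Illustrative sketch only: shows the 2GB ceiling of a 32-bit signed
    // offset versus a 64-bit offset that keeps counting past it.
    #include <cstdint>
    #include <cstdio>

    int main()
        {
        int32_t old_offset = INT32_MAX;  // 2147483647 bytes: the 2GB-1 ceiling
        uint64_t new_offset = static_cast<uint64_t>(INT32_MAX) + 4096;  // no overflow

        std::printf("32-bit ceiling: %d, 64-bit offset: %llu\n",
                    static_cast<int>(old_offset),
                    static_cast<unsigned long long>(new_offset));
        return 0;
        }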

Another significant change is in the Analyzer API: the *Packet and
*Undelivered methods now use a uint64 in place of an int for the
relative sequence space offset parameter.
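
A rough sketch of the shape of that change (simplified; parameter names and
the exact declarations in Zeek's Analyzer class may differ):

    #include <cstdint>

    class IP_Hdr;  // stand-in for Zeek's IP header type

    class Analyzer
        {
    public:
        virtual ~Analyzer() = default;

        // 'seq' (the relative sequence-space offset) was previously a
        // plain 'int'; as a 64-bit value it can describe deliveries
        // past the 2GB mark without wrapping.
        virtual void DeliverPacket(int len, const unsigned char* data,
                                   bool orig, uint64_t seq,
                                   const IP_Hdr* ip, int caplen) = 0;

        virtual void Undelivered(uint64_t seq, int len, bool orig) = 0;
        };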
2014-04-09 13:03:24 -05:00


# @TEST-EXEC: bro -r $TRACES/ftp/bigtransfer.pcap %INPUT >out
# @TEST-EXEC: btest-diff out
# @TEST-EXEC: btest-diff files.log
# @TEST-EXEC: btest-diff conn.log

# The pcap has been truncated on purpose, so it contains large gaps by
# design; they shouldn't trigger the "skip deliveries" code paths because
# this test still needs to know about the payloads being delivered around
# critical boundaries (e.g. 32-bit TCP sequence wraparound and 32-bit
# data offsets).
redef tcp_excessive_data_without_further_acks = 0;
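
# Report every chunk the DATA_EVENT analyzer delivers: its size, its byte
# offset within the file, and the raw content, so deliveries around the
# large-offset boundaries show up in the baseline.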
event file_chunk(f: fa_file, data: string, off: count)
	{
	print "file_chunk", |data|, off, data;
	}
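
# Attach the data-event file analyzer to each new file so its content is
# handed back to the script layer via file_chunk.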
event file_new(f: fa_file)
	{
	Files::add_analyzer(f, Files::ANALYZER_DATA_EVENT,
	                    [$chunk_event=file_chunk]);
	}