zeek/scripts/policy/misc/stats.bro
Jon Siwek 9967aea52c Integrate new Broxygen functionality into Sphinx.
Add a "broxygen" domain Sphinx extension w/ directives to allow
on-the-fly documentation to be generated w/ Bro and included in files.

This means all autogenerated reST docs are now done by Bro.  The odd
CMake/Python glue scripts which used to generate some portions are now
gone.  Bro and the Sphinx extension handle checking for outdated docs
themselves.

Parallel builds of the `make doc` target should now work (mostly because
I don't think there are any tasks that can be done in parallel anymore).

Overall, this seems to simplify things and make the Broxygen-generated
portions of the documentation visible/traceable from the main Sphinx
source tree.  The one odd thing still is that per-script documentation
is rsync'd in to a shadow copy of the Sphinx source tree within the
build dir.  This is less elegant than using the new broxygen extension
to make per-script docs, but rsync is faster and simpler.  Simpler as in
less code because it seems like, in the best case, I'd need to write a
custom Sphinx Builder to be able to get that to even work.
2013-11-21 14:34:32 -06:00


##! Log memory/packet/lag statistics. Differs from
##! :doc:`/scripts/policy/misc/profiling.bro` in that this
##! is lighter-weight (much less info, and less load to generate).
@load base/frameworks/notice

module Stats;

export {
	redef enum Log::ID += { LOG };

	## How often stats are reported.
	const stats_report_interval = 1min &redef;

	type Info: record {
		## Timestamp for the measurement.
		ts:            time     &log;
		## Peer that generated this log. Mostly for clusters.
		peer:          string   &log;
		## Amount of memory currently in use in MB.
		mem:           count    &log;
		## Number of packets processed since the last stats interval.
		pkts_proc:     count    &log;
		## Number of events processed since the last stats interval.
		events_proc:   count    &log;
		## Number of events that have been queued since the last stats
		## interval.
		events_queued: count    &log;

		## Lag between the wall clock and packet timestamps if reading
		## live traffic.
		lag:           interval &log &optional;
		## Number of packets received since the last stats interval if
		## reading live traffic.
		pkts_recv:     count    &log &optional;
		## Number of packets dropped since the last stats interval if
		## reading live traffic.
		pkts_dropped:  count    &log &optional;
		## Number of packets seen on the link since the last stats
		## interval if reading live traffic.
		pkts_link:     count    &log &optional;
	};

	## Event to catch stats as they are written to the logging stream.
	global log_stats: event(rec: Info);
}
event bro_init() &priority=5
	{
	Log::create_stream(Stats::LOG, [$columns=Info, $ev=log_stats]);
	}
event check_stats(last_ts: time, last_ns: NetStats, last_res: bro_resources)
	{
	local now = current_time();
	local ns = net_stats();
	local res = resource_usage();

	if ( bro_is_terminating() )
		# No more stats will be written or scheduled when Bro is
		# shutting down.
		return;

	local info: Info = [$ts=now, $peer=peer_description, $mem=res$mem/1000000,
	                    $pkts_proc=res$num_packets - last_res$num_packets,
	                    $events_proc=res$num_events_dispatched - last_res$num_events_dispatched,
	                    $events_queued=res$num_events_queued - last_res$num_events_queued];

	if ( reading_live_traffic() )
		{
		info$lag = now - network_time();
		# Someone's going to have to explain what this is and add a field to the Info record.
		# info$util = 100.0*((res$user_time + res$system_time) - (last_res$user_time + last_res$system_time))/(now-last_ts);
		info$pkts_recv = ns$pkts_recvd - last_ns$pkts_recvd;
		info$pkts_dropped = ns$pkts_dropped - last_ns$pkts_dropped;
		info$pkts_link = ns$pkts_link - last_ns$pkts_link;
		}

	Log::write(Stats::LOG, info);
	schedule stats_report_interval { check_stats(now, ns, res) };
	}
event bro_init()
	{
	schedule stats_report_interval { check_stats(current_time(), net_stats(), resource_usage()) };
	}