Currently, a lot of interpreter runtime errors, such as an access to
an unset optional record field, cause Bro to abort with an internal
error. This is an experimental branch that turns such errors into
non-fatal runtime errors by internally raising exceptions. These are
caught upstream and processing continues afterwards.
For now, not many errors actually raise exceptions (the example above
does though). We'll need to go through them eventually and adapt the
current Internal() calls (and potentially others). More generally, at
some point we should clean up the interpreter error handling (unifying
errors reported at parse time and runtime, and switching to exceptions
for all Expr/Stmt/Vals). But that's a larger change, left for later.
The main question for now is whether this code is already helpful enough to
go into 2.0. It will quite likely prevent a number of crashes due to
script errors.
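As a rough script-level sketch (type and field names made up), this is
the kind of access that used to abort Bro and now just triggers a
non-fatal runtime error:

type Foo: record {
    a: count;
    b: string &optional;
};

event bro_init()
    {
    local f: Foo = [$a=1];
    print f$b;    # b was never set: runtime error, processing continues
    }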
If possible, the list elements now get promoted to the yield type of
the vector. There was also a problem with the value returned by the
record constructor expression's eval being completely unref'd, since
the vector element assignment function doesn't ref the element -- so I
changed it to ref freshly constructed values before assigning them to
the vector.
Addresses #485.
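As a rough sketch of the promotion (assuming the vector() constructor
syntax), the count elements here now get promoted to the vector's
double yield type:

event bro_init()
    {
    local v: vector of double = vector(1, 2, 3);
    print v;
    }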
The communication subsystem is now disabled until a new BiF,
enable_communication(), is called. The base scripts do this
automatically when either a Communication::Node is defined, or Bro is
asked to listen for incoming connections.
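A minimal sketch of turning it on by hand, in case one doesn't want to
rely on the base scripts doing it automatically:

event bro_init()
    {
    enable_communication();
    }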
The Logger class is now in charge of reporting all errors, warnings,
informational messages, weirds, and syslogs. All other components
route their messages through the global bro_logger singleton.
The Logger class comes with these reporting methods:
void Message(const char* fmt, ...);
void Warning(const char* fmt, ...);
void Error(const char* fmt, ...);
void FatalError(const char* fmt, ...); // Terminates Bro.
void Weird(const char* name);
[ .. some more Weird() variants ... ]
void Syslog(const char* fmt, ...);
void InternalWarning(const char* fmt, ...);
void InternalError(const char* fmt, ...); // Terminates Bro.
See Logger.h for more information on these.
Generally, the reporting now works as follows:
- All non-fatal messages are reported in one of two ways:
(1) At startup (i.e., before we start processing packets),
they are logged to stderr.
(2) During processing, they turn into events:
event log_message%(msg: string, location: string%);
event log_warning%(msg: string, location: string%);
event log_error%(msg: string, location: string%);
The script level can then handle them as desired; see the
sketch after this list.
If we don't have an event handler, we fall back to
reporting on stderr.
- All fatal errors are logged to stderr and Bro terminates
immediately.
- Syslog(msg) directly syslogs, but doesn't do anything else.
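As a rough sketch of handling one of the new events at the script
level (the handler body is made up):

event log_error(msg: string, location: string)
    {
    print fmt("script error at %s: %s", location, msg);
    }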
The three main types of messages can also be generated on the
scripting layer via new Log::* bifs:
Log::error(msg: string);
Log::warning(msg: string);
Log::message(msg: string);
These pass through the bro_logger as well and thus are handled in the
same way. Their output includes location information.
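For example (message text made up):

event bro_init()
    {
    Log::warning("this is just a test warning");
    Log::error(fmt("unexpected value: %d", 42));
    }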
More changes:
- Removed the alarm statement and the alarm_hook event.
- Adapted lots of locations to use the bro_logger, including some
of the messages that were previously either just written to
stdout, or even funneled through the alarm mechanism.
- No distinction anymore between Error() and RunTime(). There's
now only one class of errors; the line was quite blurred already
anyway.
- util.h: all the error()/warn()/message()/run_time()/pinpoint()
functions are gone. Use the bro_logger instead now.
- Script errors are formatted a bit differently due to the
changes. What I've seen so far looks ok to me, but let me know
if there's something odd.
Notes:
- The default handlers for the new log_* events are just dummy
implementations for now since we need to integrate all this into
the new scripts anyway.
- I'm not too happy with the names of the Logger class and its
instance bro_logger. We now have a LogMgr as well, which makes
this all a bit confusing. But I didn't have a good idea for
better names so I stuck with them for now.
Perhaps we should merge Logger and LogMgr?
This is obviously a change that breaks backwards-compatibility. I hope
I caught all cases where vectors are used ...
I've completely removed the VECTOR_MIN constant. Turns out it wasn't
really honored anyway: some code pieces were hard-coding the 1-based
indexing regardless ...
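Assuming the incompatible change here is that vector indices now start
at 0 rather than at the old 1-based minimum that VECTOR_MIN encoded,
element access looks like this (a sketch, not taken from the original
message):

event bro_init()
    {
    local v = vector("a", "b", "c");
    print v[0];    # first element
    }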
"delete x$y" now resets record field "x" back to its original state if
it is either &optional or has a &default. "delete" may not be used
with non-optional/default fields.
with the field.
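A rough sketch (type and field names made up):

type Foo: record {
    a: string &optional;
    b: count &default=42;
};

event bro_init()
    {
    local f: Foo = [$a="x", $b=7];
    delete f$a;    # a is unset again
    delete f$b;    # b reverts to its &default of 42
    }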
This works now:
type X: record {
    a: table[string] of bool &default=table( ["foo"] = T );
    b: table[string] of bool &default=table();
    c: set[string] &default=set("A", "B", "C");
    d: set[string] &default=set();
};
I think previously the intent was to associate &default with the
table/set (i.e., to define the default value for non-existing indices).
However, that was already not working: the error checking was
reporting type mismatches. So this shouldn't break anything and makes
things more consistent.
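So, hypothetically, with the type X above, reading a field that was
never assigned yields that field's own default value:

event bro_init()
    {
    local x: X;
    print x$c;    # the default set("A", "B", "C")
    print x$a;    # the default table with ["foo"] = T
    }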
* topic/robin/record-coercion:
Fixing a bug with nested record ctors.
Enabling automatic coercion from record type A to record type B as long as A has all the fields that B has.
Conflicts:
src/Expr.cc
- Fixing a crash with an invalid pointer.
- Fixing a namespacing problem with is_ftp_data_conn() and check_relay_3().
- Fixing the do-we-have-an-event-handler-defined check.
Standard test-suite passes.
Seth, I think you can give it a try now ...
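As a rough sketch of the coercion enabled by the merged branch (types
and function made up): a value of type A can be passed where B is
expected, because A provides all of B's fields.

type B: record {
    x: count;
};

type A: record {
    x: count;
    y: string &optional;
};

function takes_b(b: B)
    {
    print b;
    }

event bro_init()
    {
    local a: A = [$x=1, $y="extra"];
    takes_b(a);    # automatically coerced from A to B
    }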
logging framework.
- To enable passing a type into a bif, there's now a new
BroType-derived class TypeType and a corresponding TYPE_TYPE tag.
With that, a Val can now have a type as its value.
This is experimental for now.
- RecordVals get a new method CoerceTo() to coerce their value into
another record type with the usual semantics. Most of the code in
there was previously in RecordConstructorExpr::InitVal(), which now
calls the new CoerceTo() method.
table/set indices.
This addresses #367. In principle, the fix is quite straightforward.
However, it turns out that sometimes record fields lost their
attributes on assignment, and then the hashing could no longer decide
whether a field was optional or not. So that needed to be fixed as
well.
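A rough sketch of what this enables (names made up): a record with an
&optional field can be used as a table index, and the hashing now
handles both the set and the unset state of that field.

type Key: record {
    a: string;
    b: count &optional;
};

global t: table[Key] of string;

event bro_init()
    {
    local k1: Key = [$a="x"];
    local k2: Key = [$a="x", $b=1];
    t[k1] = "b unset";
    t[k2] = "b set";
    print t;
    }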