We need this to send opaques through Broker, and we also leverage it
for cloning them. The serialization methods now produce Broker data
instances directly and no longer go through the binary formatter.
Summary of the new API for types derived from OpaqueVal:
- Add DECLARE_OPAQUE_VALUE(<class>) to the class declaration
- Add IMPLEMENT_OPAQUE_VALUE(<class>) to the class's implementation file
- Implement these two methods, which are declared by the first macro
  (see the sketch below):
  - broker::data DoSerialize() const
  - bool DoUnserialize(const broker::data& data)
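To illustrate, here is a minimal sketch of a derived type. The
CounterVal class, its state, and the Broker accessor used in
DoUnserialize are assumptions made up for this example; constructor
details are elided:

    // CounterVal.h -- hypothetical opaque type; includes the new header.
    #include "OpaqueVal.h"

    class CounterVal : public OpaqueVal {
    public:
        uint64_t counter = 0; // the state we want to round-trip

        DECLARE_OPAQUE_VALUE(CounterVal)
    };

    // CounterVal.cc
    IMPLEMENT_OPAQUE_VALUE(CounterVal)

    broker::data CounterVal::DoSerialize() const
        {
        // Encode the state directly as Broker data; no binary formatter.
        return broker::data(counter);
        }

    bool CounterVal::DoUnserialize(const broker::data& data)
        {
        // Assumed accessor: get_if returns a pointer to the held value,
        // or null if the wire format doesn't match.
        auto c = broker::get_if<uint64_t>(&data);

        if ( ! c )
            return false;

        counter = *c;
        return true;
        }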
This machinery should work correctly from dynamic plugins as well.
OpaqueVal also provides a default implementation of DoClone() that
goes through serialization. Derived classes can provide a more
efficient version if they want.
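For instance, the hypothetical CounterVal sketched above could clone
itself by copying its state directly instead of taking the
serialization round-trip (the DoClone signature and the CloneState
helper shown here are assumptions):

    Val* CounterVal::DoClone(CloneState* state)
        {
        // Direct copy is cheaper than serialize-then-deserialize.
        auto copy = new CounterVal();
        copy->counter = counter;
        return state->NewClone(this, copy);
        }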
The declaration of the "OpaqueVal" class has moved into the header
file "OpaqueVal.h", along with the new serialization infrastructure.
This breaks existing code that relies on the old location, but since
the API is changing anyway, that seems fine.
This adds an internal BiF,
"Broker::__opaque_clone_through_serialization", that does what the
name says: it deep-copies an opaque value by serializing and then
deserializing it. That can be used to test the new functionality from
btests.
Not quite done yet. TODO:
- Not all tests pass yet:
    [  0%] language.named-set-ctors ... failed
    [ 16%] language.copy-all-opaques ... failed
    [ 33%] language.set-type-checking ... failed
    [ 50%] language.table-init-container-ctors ... failed
    [ 66%] coverage.sphinx-zeekygen-docs ... failed
    [ 83%] scripts.base.frameworks.sumstats.basic-cluster ... failed

  (Some of the serialization may still be buggy.)
- Clean up the code a bit more.
Note: this compiles, but you cannot run Bro anymore; it crashes
immediately with a null-pointer access. The reason is that the
required clone functionality does not work anymore.
The receiver side will wrap the data as a Broker::Data value, which
can then be type-checked and cast to a specific Bro type via the 'is'
and 'as' operators. For example:
Sender:

    Broker::publish("topic", my_event, "hello");

Receiver:

    event my_event(arg: any)
        {
        if ( arg is string )
            print arg as string;
        }
Broker had changed the semantics of remote logging: it sent over the
original Bro record containing the values to be logged, which on the
receiving side would then pass through the logging framework normally,
including triggering filters and events. The old communication system,
however, special-cases logs: it sends already-processed log entries,
just as they go into the log files, without any receiver-side
filtering. This is more efficient, as it short-cuts the processing
path and avoids the more expensive Val serialization. It also lets the
sender determine the specifics of what gets logged (and how). This
commit changes Broker over to use the same semantics as the old
communication system.
TODOs:
- The new Broker code doesn't have consistent #ifdefs yet.
- Right now, when a new log receiver connects, all existing logs
  are broadcast out again to all current clients. That doesn't do
  any harm, but it is unnecessary. Need to add a way to send the
  existing logs to just the new client.
When receiving a remote log via broker, there was a bug that would
prevent a log from being written if the log record contained a field
without the &log attribute that was followed by a field with the &log
attribute.
Updated a test case to catch this error.