This changes the basic agent-management model to one in which the configurations
received from the client define not just the data cluster, but also the set of
acceptable instances. Unless connectivity already exists, the controller
establishes peerings with new agents that listen for connections, or waits for
agents that connect to the controller to check in.
Once all required agents are available, the controller triggers the new
notify_agents_ready event, an agent/controller-level "cluster-is-ready"
event. The controller also uses this event to submit a pending config update to
the now-ready instances.
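A minimal sketch of reacting to this event; the module path and the argument (a
set of ready instance names) are assumptions about the exported API:

    # Hypothetical handler; the actual event name and signature may differ.
    event ClusterController::API::notify_agents_ready(instances: set[string])
        {
        print fmt("cluster ready, %d instance(s) checked in", |instances|);
        }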
Previously, the controller/agent layout had to be defined via static
configuration. A configuration update can now include new instances that the
controller will connect to, assuming those instances run a listening agent.
Request records for configuration updates now store the full configuration. The
ClusterController::Request module now provides a to_string() function for
rendering requests to a string.
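For example (a sketch; how the request record "req" is obtained from the
tracking table is an assumption):

    print ClusterController::Request::to_string(req);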
* topic/christian/cluster-controller:
Add a cluster controller testcase for agent-controller checkin
Add zeek-client via new submodule
Update baselines affected by cluster controller changes
Introduce cluster controller and cluster agent scripting
Establish a separate init script when using the supervisor
Add optional bare-mode boolean flag to Supervisor's node configuration
Add support for making the supervisor listen for requests
Add support for setting environment variables via supervisor
This is a preliminary implementation of a subset of the functionality set out in
our cluster controller architecture. The controller is the central management
node, existing once in any Zeek cluster. The agent is a node that runs once per
instance, where an instance will commonly be a physical machine. The agent in
turn manages the "data cluster", i.e. the traditional notion of a Zeek cluster
with manager, worker nodes, etc.
Agent and controller live in the policy folder, and are activated when loading
policy/frameworks/cluster/agent and policy/frameworks/cluster/controller,
respectively. Both run in nodes forked by the supervisor. When Zeek doesn't use
the supervisor, they do nothing. Otherwise, boot.zeek instructs the supervisor
to create the respective node, running main.zeek.
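For instance, a controller can be activated like this (a sketch; running Zeek
with -j enables the supervisor):

    # Invoked as: zeek -j <this script>
    # boot.zeek then asks the supervisor to fork the controller node,
    # which runs main.zeek.
    @load policy/frameworks/cluster/controller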
Both controller and agent have their own config.zeek with relevant knobs. For
both, controller/types.zeek provides common data types, and controller/log.zeek
provides basic logging (without relaying to a logger node, since no such node
may exist).
A primitive request-tracking abstraction can be found in controller/request.zeek
to track outstanding request events and their subsequent responses.
This commit changes the SSL and X.509 logging formats to something that,
hopefully, slowly approaches what they will look like in the future.
The X.509 log is not yet deduplicated; this will come in the future.
This commit introduces two new options that determine whether certificate
issuers and subjects are still logged in ssl.log. The default is to log the
host certificate's subject and issuer, but to remove client-certificate
information, since client certificates are rarely used nowadays.
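Re-enabling client-certificate logging might look as follows (a sketch; the
option name is an assumption, so check the SSL scripts for the exact
identifier):

    # Hypothetical option name; see scripts/base/protocols/ssl for the real one.
    redef SSL::log_include_client_certificate_subject_issuer = T;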
This adds a "policy" hook into the logging framework's streams and
filters to replace the existing log filter predicates. The hook
signature is as follows:
    hook(rec: any, id: Log::ID, filter: Log::Filter);
The logging manager invokes the hooks on each log record. Hook handlers can
veto a record via a break statement and may modify it in place. Log filters
inherit the stream-level hook, but can override or remove the hook as needed
(see the sketch after the example below).
The distribution's existing log streams now come with pre-defined hooks that
users can add handlers to. By convention, these hooks are named "log_policy",
with additional suffixes when a module provides multiple streams. The
following adds a handler to the Conn module's default log policy hook:
    hook Conn::log_policy(rec: Conn::Info, id: Log::ID, filter: Log::Filter)
        {
        if ( some_veto_reason(rec) )
            break;
        }
By default, this handler will get invoked for any log filter
associated with the Conn::LOG stream.
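To override the inherited hook on a single filter, one can point the filter at
a different hook (a sketch; it assumes the Log::Filter field is named
"policy"):

    hook quiet_policy(rec: any, id: Log::ID, filter: Log::Filter)
        {
        # Veto every record passing through this particular filter.
        break;
        }

    event zeek_init()
        {
        local f = Log::get_filter(Conn::LOG, "default");
        f$policy = quiet_policy;
        Log::add_filter(Conn::LOG, f);
        }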
The existing predicates are deprecated for removal in 4.1 but continue
to work.
This adds two new functions, `Conn::register_removal_hook()` and
`Conn::unregister_removal_hook()`, for registering a hook function to be
called back during `connection_state_remove`. The benefit of the hook-callback
approach is better scalability: it avoids the overhead of unrelated protocols
having to dispatch no-op `connection_state_remove` handlers.
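A sketch of the intended usage (the hook's exact type is an assumption):

    hook my_removal_hook(c: connection)
        {
        # Runs once when this particular connection gets removed.
        print fmt("cleaning up state for %s", c$uid);
        }

    event new_connection(c: connection)
        {
        Conn::register_removal_hook(c, my_removal_hook);
        }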
ACTION_DROP is not only part of the catch-and-release subsystem.
Also, historically ACTION_DROP has been bundled with ACTION_LOG, ACTION_ALARM,
ACTION_EMAIL, etc., and it's helpful that this verb remains in
base/frameworks/notice/main.zeek.
``NetControl::DROP`` had 3 conflicting definitions that could potentially
be used incorrectly without any warnings or type-checking errors.
Such enum redefinition conflicts are now caught and treated as errors,
so the ``NetControl::DROP`` enums had to be renamed:
* The use as enum of type ``Log::ID`` is renamed to ``NetControl::DROP_LOG``
* The use as enum of type ``NetControl::CatchReleaseInfo`` is renamed to
``NetControl::DROP_REQUESTED``
* The use as enum of type ``NetControl::RuleType`` is unchanged and still
named ``NetControl::DROP``
For event/hook handlers that had a previous declaration, any &default
arguments on the handler are ineffective. Only &default uses in the initial
prototype's arguments take effect (including when the handler itself is the
site of the declaration).
And switch Zeek's base scripts over to using it in place of
"connection_state_remove". The difference between the two is
that "connection_state_remove" is raised for all connections while
"successful_connection_remove" excludes TCP connections that were never
established (just SYN packets). This change can bring performance benefits
for some use cases.
There's also a new event called ``connection_successful`` and a new
``connection`` record field named "successful" to help indicate this new
property of connections.
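A minimal sketch of a handler using the new event:

    event successful_connection_remove(c: connection)
        {
        # Never-established TCP connections don't reach this handler;
        # c$successful reflects the new connection property.
        print fmt("removing established connection %s", c$uid);
        }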
These are no longer loaded by default due to the performance impact they
cause simply by being loaded (they have event handlers for commonly
generated events) and they aren't generally useful enough to justify it.
This also installs symlinks from "bro" and "bro-config" to a wrapper
script that prints a deprecation warning.
The btests pass, but this is still WIP. broctl renaming is still
missing.
#239
* All "Broxygen" usages have been replaced in
code, documentation, filenames, etc.
* Sphinx roles/directives like ":bro:see" are now ":zeek:see"
* The "--broxygen" command-line option is now "--zeexygen"
* 'master' of https://github.com/hosom/zeek:
Normalize the intel seen filename for smb.
load smb-filenames in scripts/policy/frameworks/intel/seen/__load__.bro
Add SMB::IN_FILE_NAME to Intel::Where enum
Support filenames for SMB files
I added a test case