* topic/christian/external-testsuite-tweaks:
Add helpers for syncing commit files with external testsuites
Fix typo in update-timing target for external testsuites
rand64bit called random() four times to generate one 64-bit number. There is
no reason to do this: random() is essentially guaranteed to return a 32-bit
number.
This also adds a static check to make sure that it does.
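A minimal sketch of that pattern, assuming hypothetical code (the exact
helper name and static check in the codebase may differ):

    #include <climits>
    #include <cstdint>
    #include <cstdlib>

    // POSIX random() returns a long in [0, 2^31 - 1]; the static check
    // documents the assumption that one call yields a 32-bit value.
    static_assert(sizeof(long) * CHAR_BIT >= 32,
                  "random() result must fit into 32 bits");

    uint64_t rand64bit() {
        // Two calls suffice to fill 64 bits instead of four.
        uint64_t hi = static_cast<uint64_t>(random());
        uint64_t lo = static_cast<uint64_t>(random());
        return (hi << 32) | lo;
    }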
This provides "make sync-repos" to check out all locally available testsuites at
the commits indicated in their commit files, and "make sync-commits" to update
the commit files to the HEADs of the local testsuite repos.
This also adds the commit -> repo sync to the Makefile's init target, so that
initialization always lands on the right version, and removes the
corresponding explicit checkout from the CI repo setup.
* origin/topic/timw/modernize-cpp-headers:
Code modernization: Convert from deprecated C standard library headers
Bump cmake submodule for run-clang-tidy fix [skip ci] [nomail]
* origin/topic/timw/2021-signal-handler-deadlock:
Mark bools in BasicThread as atomic to avoid data races
Avoid calling DBG_LOG during signal handling
Fixes for iosource::Manager for deadlocks during shutdown
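A hedged sketch of the atomic-flag pattern behind these commits, with
illustrative names (not the actual BasicThread API):

    #include <atomic>

    class ThreadSketch {
    public:
        // Callable from any thread: stores to std::atomic<bool> are
        // well-defined, where plain bool writes would be a data race.
        void SignalStop() { terminating.store(true); }
        bool Terminating() const { return terminating.load(); }

    private:
        std::atomic<bool> started{false};
        std::atomic<bool> terminating{false};
    };

    // Signal handlers may only perform async-signal-safe operations, which
    // rules out logging such as DBG_LOG; setting a lock-free atomic flag
    // and acting on it after the handler returns is safe.
    static std::atomic<bool> got_signal{false};
    extern "C" void on_signal(int) {
        got_signal.store(true, std::memory_order_relaxed);
    }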
This PR changes the way in which the SSL analyzer tracks the direction
of connections. So far, the SSL analyzer assumed that the originator of
a connection would send the client hello (and other associated
client-side events), and that the responder would be the SSL server.
In some circumstances this is not true, and the initiator of a
connection is the server, with the responder being the client. Until
now, this confused some of the internal state-keeping logic and could
lead to mis-parsing of extensions.
This reversal of roles can happen in DTLS if a connection uses STUN,
and potentially in some StartTLS protocols.
This PR tracks the direction of a TLS connection using the hello
request, client hello and server hello handshake messages. Furthermore,
it changes the SSL events from providing is_orig to providing is_client,
where is_client is true for the client side of a connection. Since the
argument positions in the events have not changed, old scripts will
continue to work seamlessly; the new semantics are what everyone
writing SSL scripts will have expected in any case.
There is a new event that is raised when a connection's direction flips.
A weird is raised if a flip happens repeatedly.
Addresses GH-2198.
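A rough C++ sketch of the direction-tracking logic described above; all
names (DirectionTracker, Process, IsClient) are hypothetical, not the
analyzer's actual code:

    #include <cstdio>

    enum class HandshakeMsg { HelloRequest, ClientHello, ServerHello };

    class DirectionTracker {
    public:
        // 'from_orig': the message came from the connection's originator.
        void Process(HandshakeMsg msg, bool from_orig) {
            // The sender of a client hello is the client; the sender of a
            // server hello or hello request is the server.
            bool sender_is_client = (msg == HandshakeMsg::ClientHello);
            bool new_client_is_orig = sender_is_client ? from_orig : !from_orig;

            if (new_client_is_orig != client_is_orig) {
                client_is_orig = new_client_is_orig;
                if (++flips > 1)
                    printf("weird: SSL direction flipped repeatedly\n");
                else
                    printf("event: connection direction flipped\n");
            }
        }

        // Maps the transport-level is_orig onto the TLS-level is_client
        // that the events now provide in place of is_orig.
        bool IsClient(bool is_orig) const { return is_orig == client_is_orig; }

    private:
        bool client_is_orig = true; // the old assumption: originator == client
        int flips = 0;
    };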
This adds restart request/response event pairs that restart nodes in the running
Zeek cluster. The implementation is very similar to get_id_value, which also
involves distributing a list of nodes to agents and aggregating the responses.
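A simplified C++ sketch of the fan-out/aggregate pattern that get_id_value
and the new restart events share (illustrative only; the framework itself
is written in Zeek script):

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    // One pending multi-agent request: results accumulate as agents respond,
    // and the caller's callback fires once every agent has reported back.
    struct MultiAgentRequest {
        size_t pending = 0;
        std::vector<std::string> results;
        std::function<void(const std::vector<std::string>&)> on_done;

        void HandleResponse(std::string result) {
            results.push_back(std::move(result));
            if (--pending == 0 && on_done)
                on_done(results);
        }
    };

    int main() {
        MultiAgentRequest req;
        req.pending = 2; // say the restart request went out to two agents
        req.on_done = [](const std::vector<std::string>& rs) {
            printf("restart_response: aggregated %zu agent results\n", rs.size());
        };
        req.HandleResponse("agent-1: ok");
        req.HandleResponse("agent-2: ok");
    }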
This declares our helper functions for sending events to the Supervisor, and
makes them return the created request objects so the caller can modify
them. It also adds a helper for restart and status requests, uses the helpers
throughout the module, and makes all handlers more resilient when Supervisor
events other than the agent's own arrive.
The controller now logs its deployment attempt of a persisted configuration at
startup. This is generally helpful to have recorded, and it also explains the
timeout message that the underlying request produces in case of failure.
For the case of a running cluster with no connected agents, use the
g_instances_known table instead of g_instances. The latter reflects the contents
of the last deployed config, not the agents actually attached at the moment.
The timeout result wasn't actually stored in requests timing out in the
agent. (So far, that affects only deployment requests.) This also logs the
timeout of any request state, similar to the controller.
No functional change, just a consistency tweak. Since the agent and controller
send response events via Broker::publish(), the arguments aren't named, so this
only affects the API definition.
* topic/christian/management-deploy: (21 commits)
Management framework: bump external cluster testsuite
Management framework: bump zeek-client
Management framework: rename set_configuration events to stage_configuration
Management framework: trigger deployment when instances are ready
Management framework: more resilient node shutdown upon deployment
Management framework: re-trigger deployment upon controller launch
Management framework: move most deployment handling to internal function
Management framework: distinguish internally and externally requested deployments
Management framework: track instances by their Broker IDs
Management framework: tweak Supervisor event logging
Management framework: make helper function a local
Management framework: rename "log_level" to "level"
Management framework: add "finish" callback to requests
Management framework: add a helper for rendering result vectors to a string
Management framework: agents now skip re-deployment of current config
Management framework: suppress notify_agent_hello upon Supervisor peering
Management framework: introduce state machine for configs and persist them
Management framework: introduce deployment API in controller
Management framework: rename agent "set_configuration" to "deploy"
Management framework: consistency fixes to the Result record
...
So far, the user had to configure with --enable-zeek-client to trigger
installation of the client (from auxil/zeek-client). This flips the default
around so that the installation can be disabled instead, and removes
--enable-zeek-client from the Docker build in CI, where we've already been
using it to let the cluster testsuite run tests with that image.
More resilience: when an agent restarts, it checks in with the controller. If
the controller has deployed a config, this check-in may lead to an internal
notify_agents_ready event. At that point, we now trigger a deployment when one
isn't already in progress. This ensures that any agents not yet running the
current cluster will start to do so, and does nothing when those agents already
run it, since they ignore the request in that case.
When agents had to terminate existing Zeek cluster nodes at the beginning of a
new deployment, they previously used their internal state to look up the nodes
and fired off requests to the Supervisor to shut them down. This had a problem:
when an agent restarts unexpectedly, it has no internal state, and when it then
tries to create nodes that already exist, the Supervisor complains with error
messages.
To avoid this, the agent now tears down all Supervisor-managed nodes other than
agents and controllers. To do so, it first needs to query the Supervisor for
the current node status, which means there are now two such status requests: one
upon deployment, and one during get_nodes requests. To disambiguate these
contexts in the SupervisorControl::status_request/response transactions, we use
the finish() callback in the corresponding request state to continue execution
as needed.
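A hedged C++ sketch of the finish() idea, with hypothetical request ids and
handler names (the real implementation lives in the framework's Zeek
scripts):

    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>

    // Request state carries a per-request finish() continuation, so one
    // status_response handler can resume whichever context (deployment
    // teardown vs. get_nodes) issued the status request.
    struct Request {
        std::function<void(const std::string& status)> finish;
    };

    std::map<std::string, Request> g_requests;

    void OnStatusResponse(const std::string& reqid, const std::string& status) {
        auto it = g_requests.find(reqid);
        if (it == g_requests.end())
            return; // not a request of ours; ignore
        if (it->second.finish)
            it->second.finish(status); // continue in the issuing context
        g_requests.erase(it);
    }

    int main() {
        g_requests["deploy-1"].finish = [](const std::string& s) {
            printf("deployment: tear down nodes, status: %s\n", s.c_str());
        };
        g_requests["nodes-1"].finish = [](const std::string& s) {
            printf("get_nodes: report status: %s\n", s.c_str());
        };
        OnStatusResponse("deploy-1", "3 nodes running");
        OnStatusResponse("nodes-1", "3 nodes running");
    }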
A resilience feature: when a booting controller has a previously deployed
configuration (just reloaded from persistent state), it now triggers a
deployment. If the agents are running something else at this point, this
restores the controller's understanding of what's deployed; if they still run
this configuration, it does nothing, since agents ignore deployment of a
configuration they already run.
The controller now runs most of a config deployment via an internal function,
allowing it to be called from multiple places instead of just the deploy_request
event handler.