This function conflated checking the return value of MMDB_lookup_sockaddr() with
testing the returned result.found_entry flag when that call succeeds. Both
checks need to happen.
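For reference, a hedged sketch of how both checks fit together with the
libmaxminddb API (the wrapper function and error reporting are illustrative,
not the actual Zeek code):

    #include <maxminddb.h>
    #include <cstdio>

    // Returns true only if the lookup ran successfully AND found an entry.
    bool lookup_ok(MMDB_s* mmdb, const struct sockaddr* sa) {
        int mmdb_error = MMDB_SUCCESS;
        MMDB_lookup_result_s result = MMDB_lookup_sockaddr(mmdb, sa, &mmdb_error);

        if ( mmdb_error != MMDB_SUCCESS ) {
            // The call itself failed, e.g. due to a corrupt database.
            std::fprintf(stderr, "lookup failed: %s\n", MMDB_strerror(mmdb_error));
            return false;
        }

        if ( ! result.found_entry )
            // The call succeeded, but the database has no entry for this address.
            return false;

        // Only now is it safe to work with result.entry.
        return true;
    }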
This introduces a new hook into the Intel::seen() function that allows
users to directly interact with the result of a find() call via external
scripts.
This should solve the use-case brought up by @chrisanag1985 in
discussion #3256: Recording and acting on "no intel match found".
@Canon88 was recently asking on Slack about enabling HTTP logging for a
given connection only when an Intel match occurred and found that the
Intel::match() event would only occur on the manager. The
Intel::match_remote() event might serve as a workaround, but it possibly runs
a bit too late, and it is also just an internal "detail" event that might not
be stable.
Another internal use case revolved around enabling packet recording
based on Intel matches which necessarily needs to happen on the worker
where the match happened. The proposed workaround is similar to the above
using Intel::match_remote().
This hook also provides an opportunity to rate-limit heavy-hitter intel
items locally on the worker nodes, or even to replace the event approach
currently used with a customized one.
- With `broker::data`, we always have actual `std::string` objects that
we can pass to C functions expecting a null-terminated string.
However, `broker::variant` will return a `std::string_view` where we
previously received a `std::string`. Hence, we add an extra level of
indirection that ensures views are converted to null-terminated strings,
and we also use `c_str()` where we previously used `data()`. The former
is not present on a `std::string_view`, so using this member function
acts as an extra level of insurance that we do not accidentally pass the
bytes of a view to a C function. (A sketch of this pattern follows after
this list.)
- Switch from error and status views to actual error and status objects.
The view types from Broker only work with `broker::data` and thus
won't be available with `broker::variant`.
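As an illustrative C++ sketch of the pattern (helper names made up, not the
actual Broker manager code):

    #include <string>
    #include <string_view>

    // Materialize a (possibly non-null-terminated) view into an owning
    // std::string whose c_str() is guaranteed to be null-terminated.
    inline std::string to_owned(std::string_view sv) { return std::string{sv}; }

    // c_api is a stand-in for any C function expecting a 'const char*'.
    void pass_to_c(std::string_view sv, void (*c_api)(const char*)) {
        std::string s = to_owned(sv);
        c_api(s.c_str()); // std::string_view has no c_str(), so a raw view
                          // can't accidentally end up here.
    }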
A continuation frame has the same type as the first frame of the message,
but that information was neither kept nor used, resulting in the payload of
continuation frames not being forwarded. The pcap was created with a fake
Python server and a bit of message crafting.
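Per RFC 6455, continuation frames carry opcode 0x0 and inherit the type of
the message's first frame. A minimal sketch of tracking that (illustrative
C++, not the actual BinPac code):

    #include <cstdint>

    struct MessageState {
        uint8_t effective_opcode = 0; // type of the fragmented message in flight

        // Returns the opcode that governs how this frame's payload is handled.
        uint8_t OpcodeFor(uint8_t frame_opcode, bool fin) {
            if ( frame_opcode != 0x0 )           // first frame of a message:
                effective_opcode = frame_opcode; // remember its type
            uint8_t result = effective_opcode;   // continuations reuse it
            if ( fin )
                effective_opcode = 0;            // message complete, reset
            return result;
        }
    };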
I'm always a bit worried about using sed -E anywhere, because the canonifiers
give the impression it won't work consistently everywhere. My manpage says
sed -E should be preferred for portability, so let's remove the
sed -r / sed -E differentiation, assuming it's just a thing from the past.
The implementation assumed that arg is null-terminated. Due to
the ContentLineAnalyzer wrongly being in plain delivery mode, this
assumption was violated. It shouldn't happen anymore, but protect
against it anyhow.
When resetting the BDAT state, we also need to switch the ContentLine
analyzer back into line mode, otherwise we're feeding plain delivery
data through ProcessLine(), possibly violating some assumptions about
null termination.
Do it for both ContentLineAnalyzers - only one of them will be in plain
delivery mode anyhow, but we don't keep state about which one it was.
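A rough sketch of the shape of that reset (illustrative only; member and
method names are assumptions, not necessarily the exact Zeek API):

    // When discarding BDAT state, put both ContentLine analyzers back into
    // line mode. Only one of them was in plain delivery mode, but since we
    // don't track which one, reset both.
    for ( auto* cl : { cl_orig, cl_resp } )
        if ( cl )
            cl->SetPlainDelivery(0); // zero remaining bytes => back to line mode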
Initially this reused SMTP_IN_DATA, but it was separated into SMTP_IN_BDAT
to avoid spurious EndData() calls upon a server's reply. The client should
usually still continue to send the full in-flight chunk.
* topic/timw/more-string-view-usage:
Change to use ToStdStringView() in a few other BIFs
Convert remove_prefix/suffix BIFs to use std::string_view
Rework starts_with BIF similarly to ends_with changes in 1649e3e7cc
* origin/topic/awelzel/3424-http-upgrade-websocket-v1:
websocket: Handle breaking from WebSocket::configure_analyzer()
websocket: Address review feedback for BinPac code
fuzzers: Add WebSocket fuzzer
websocket: Fix crash for fragmented messages
websocket: Verify Sec-WebSocket-Key/Accept headers and review feedback
btest/websocket: Test for coalesced reply-ping
HTTP/CONNECT: Also weird on extra data in reply
HTTP/Upgrade: Weird when more data is available
ContentLine: Add GetDeliverStreamRemainingLength() accessor
HTTP: Drain event queue after instantiating upgrade analyzer
btest/http: Explain switching-protocols test change as comment
WebSocket: Introduce new analyzer and log
HTTP: Add mechanism to instantiate Upgrade analyzer
It immediately found an issue with &transient, but it has been fairly
stable thereafter.
This is a separate fuzzer implementation because there's a custom Configure()
call for the analyzer, and all other analyzers are disabled so we don't
fuzz unrelated protocols.
The &transient attribute does not work well with $element, as that apparently
won't be available within &until anymore.
Found after a few seconds while building out the fuzzer.
Don't log them; they are random and arbitrary in the normal case. Users
can do the following to log them if wanted.
redef record WebSocket::Info$client_key += { &log };
redef record WebSocket::Info$server_accept += { &log };
Add a constructed PCAP where the HTTP/websocket server sends a WebSocket
ping message directly within the same packet as the HTTP reply. Ensure this
is interpreted the same as if the WebSocket message arrives in a separate
packet following the HTTP reply.
For the server side this should work; for the client side we'd need to
suspend parsing, as we currently cannot quite know whether a pipelined HTTP
request follows or already upgraded-protocol data, and we don't have
"suspend parsing" functionality here.
After an HTTP upgrade to another protocol, create a weird if the packet
that contains the HTTP reply *also* already contains some additional data
belonging to the upgraded-to protocol.
Helper to get information from the ContentLine analyzer about
bytes still pending to be delivered. In certain cases this can
be a signal for weirdness.
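For illustration, the kind of check this enables (simplified; the member and
weird names here are placeholders):

    // Bytes still buffered in the ContentLine analyzer at upgrade time were
    // never consumed by HTTP - treat that as a weird.
    if ( content_line->GetDeliverStreamRemainingLength() > 0 )
        Weird("data_after_http_upgrade"); // placeholder weird name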
With configurability through script-land comes the drawback that we
actually need to execute event handlers in the middle of the parsing
process. This might not be the best model, but the script-side
configurability it enables is kind of nice.
This explicit call only matters here when the HTTP reply is
directly followed by some WebSocket message data within the
same network packet, otherwise the queue is drained once the
packet has been completely processed anyhow.
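In code, that boils down to a single explicit drain of the global event
queue once the upgrade analyzer is in place, along these lines (sketch):

    // Run all handlers queued so far (the HTTP reply and upgrade events)
    // before the upgraded-protocol analyzer sees the trailing bytes from
    // the same packet.
    zeek::event_mgr.Drain();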
DPD enables HTTP based on the content of the WebSocket frames. However,
it's not HTTP: the protocol is x-kaazing-handshake, and the server sends
some form of status/acknowledge to the client first, so the HTTP analyzer
receives that as the first bytes of the response and bails, oh well.
This adds a new WebSocket analyzer that is enabled with the HTTP upgrade
mechanism introduced previously. It is a first implementation in BinPac with
manual chunking of frame payload. Configuration of the analyzer is sketched
via the new websocket_handshake() event and a configuration BiF called
WebSocket::__configure_analyzer(). In short, script land collects WebSocket
related HTTP headers and can forward these to the analyzer to change its
parsing behavior at websocket_handshake() time. For now, however, there's
no actual logic that would change behavior based on agreed upon extensions
exchanged via HTTP headers (e.g. frame compression). WebSocket::Configure()
simply attaches a PIA_TCP analyzer to the WebSocket analyzer for dynamic
protocol detection (or a custom analyzer if set). The added pcaps show this
in action for tunneled ssh, http and https using wstunnel. One test pcap is
Broker's WebSocket traffic from our own test suite, the other is the
Jupyter websocket traffic from the ticket/discussion.
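The DPD wiring described above amounts to roughly the following sketch
(simplified and out of context; the real logic lives in the analyzer's
configure path):

    // Attach a PIA_TCP child analyzer so the reassembled frame payload runs
    // through dynamic protocol detection - unless script-land configured a
    // specific analyzer to use instead.
    auto* pia = new zeek::analyzer::pia::PIA_TCP(Conn());
    AddChildAnalyzer(pia);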
This commit further adds a basic websocket.log that aggregates the
WebSocket-specific (Sec-WebSocket-*) headers into a single log.
Closes #3424