A URI containing a bracketed or non-bracketed IPv6 address of the form
http://[::1]:42 was previously split on the first colon for port extraction,
causing a subsequent to_count() call to fail. Harden this to check for
digits in the last :[0-9]+ component.
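For illustration, a rough script-level analogue of the hardened check (the
actual fix lives in the core's URI handling; the helper name is hypothetical):
```
# Hypothetical helper sketching the hardened logic: only a trailing
# ":<digits>" component is treated as a port, so a bare "[::1]" yields none.
function port_from_netloc(netloc: string): count
	{
	local m = find_last(netloc, /:[0-9]+/);

	# Require the ":<digits>" run to sit at the very end of the authority.
	if ( m == "" || ! ends_with(netloc, m) )
		return 0;

	# Drop the leading ':' before converting the digits.
	return to_count(sub_bytes(m, 2, |m| - 1));
	}
```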
Fixes #4842
Add full support for RFC 9460's SvcParams list.
Amend the existing `dns_svcb_rr` record with a vector of new
`dns_svcb_param` records containing aptly typed SvcParamKey and
SvcParamValue pairs. Example output:
```
@load base/protocols/dns
event dns_HTTPS(c: connection, msg: dns_msg, ans: dns_answer, https: dns_svcb_rr) {
	for ( _, param in https$svc_params )
		print to_json(param);  # filter uninitialised values
}
```
```
$ dig https cloudflare-ech.com +short | tr [:space:] \\n
1
.
alpn="h3,h2"
ipv4hint=104.18.10.118,104.18.11.118
ech=AEX+DQBBHgAgACBGL2e9TiFwjK/w1Zg9AmRm7mgXHz3PjffP0mTFNMxmDQAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA=
ipv6hint=2606:4700::6812:a76,2606:4700::6812:b76
```
```
{"key":1,"alpn":["h3","h2"]}
{"key":4,"hint":["104.18.10.118","104.18.11.118"]}
{"key":5,"ech":"AEX+DQBBHgAgACBGL2e9TiFwjK/w1Zg9AmRm7mgXHz3PjffP0mTFNMxmDQAEAAEAAQASY2xvdWRmbGFyZS1lY2guY29tAAA="}
{"key":6,"hint":["2606:4700::6812:a76","2606:4700::6812:b76"]}
```
Values with malformed data or belonging to invalid/reserved keys are
passed as raw bytes in network order for script-level inspection.
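To skip uninitialised values explicitly, presence can be tested per field; a
sketch, with the optional field names (alpn, hint, ech) inferred from the JSON
output above rather than taken from the actual record definition:
```
# Sketch only: the optional field names are an assumption based on the
# JSON output shown above.
event dns_HTTPS(c: connection, msg: dns_msg, ans: dns_answer, https: dns_svcb_rr) {
	for ( _, param in https$svc_params )
		if ( param?$alpn )
			print fmt("ALPN: %s", join_string_vec(param$alpn, ","));
}
```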
Follow-up to "Initial Support to DNS SVCB/HTTPS RR":
https://github.com/zeek/zeek/pull/1808
* 'master' of https://github.com/blightzero/zeek:
Changed the behavior of var-extraction-uri.zeek (policy/protocols/http) to extract only the URI parameter names: do not include the path in the first parameter name, and only extract URI vars if parameters actually exist.
Previously, Zeek treated the receipt of `AuthenticationOk` as a
successful login. However, according to the PostgreSQL
Frontend/Backend Protocol, the startup phase is not complete until
the server sends `ReadyForQuery`. It is still possible for the server
to emit an `ErrorResponse` (e.g. ERRCODE_INVALID_AUTHORIZATION_SPECIFICATION)
after `AuthenticationOk` but before `ReadyForQuery`.
This change updates the PostgreSQL analyzer to defer reporting login
success until `ReadyForQuery` is observed. This prevents false
positives in cases where authentication succeeds but session startup
fails.
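As a rough sketch of the intended ordering (the event names below are
hypothetical placeholders, not the analyzer's actual events):
```
# Hypothetical event names; they only illustrate the ordering described above.
global auth_ok: set[conn_id];

event postgresql_authentication_ok(c: connection)
	{
	# Authentication succeeded, but startup is not finished yet, so hold off.
	add auth_ok[c$id];
	}

event postgresql_ready_for_query(c: connection)
	{
	# Only now is the startup phase complete; report the successful login.
	if ( c$id in auth_ok )
		print fmt("successful login on %s", c$id$orig_h);
	}
```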
A user provided an SMB2 pcap with the reserved1 field of a ReadResponse
set to 1 instead of 0. This confused the padding computation because this
byte was included in the offset. Properly split data_offset and reserved1
into individual byte fields.
Closes #4730
The EDNS rcode was incorrectly calculated. The extended rcode is formed
by combining the OPT record's 8-bit extended-rcode field as the upper 8
bits with the existing 4-bit header rcode as the lower 4 bits.
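In other words (values and variable names made up for illustration):
```
event zeek_init()
	{
	# The OPT record's 8-bit extended-rcode supplies the upper bits, the
	# header's 4-bit rcode the lower four; multiplying by 16 is a 4-bit shift.
	local ext_rcode = 1;
	local hdr_rcode = 0;
	local full_rcode = ext_rcode * 16 + hdr_rcode;
	print full_rcode;  # 16, i.e. BADVERS rather than NOERROR
	}
```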
This also adds a new trace with an extended rcode, and a testcase
parsing it.
Reported by dwhitemv25.
Fixes GH-4656
This adds a new PacketAnalyzer::PPPoE::session_id bif, which extracts
the PPPoE session ID from the current packet.
Furthermore, a new policy script is added which adds the PPPoE session
ID to the connection log.
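A minimal usage sketch (whether new_connection is the right hook is an
assumption; the shipped policy script is the reference):
```
# Minimal sketch: grab the PPPoE session ID while the triggering packet
# is still the current one.
event new_connection(c: connection)
	{
	local sid = PacketAnalyzer::PPPoE::session_id();
	print fmt("PPPoE session id for %s: %s", c$uid, sid);
	}
```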
Related to GH-4602
The backend does not serve entries that are expired but still present, so to
a user they do not exist. When users put new data over such an entry, their
expectation is that the value is overwritten, even if overwriting was not
explicitly requested.
The SQLite storage backend implements expiration by hand, and garbage
collection is done in `DoExpire`. This previously relied exclusively on
gets not running within `Storage::expire_interval` of the put; otherwise
we would potentially serve expired entries.
With this patch we explicitly check that entries are not expired before
serving them, so that the SQLite backend should never serve expired
entries.
* origin/topic/awelzel/4605-conn-id-context:
NEWS: Adapt for conn_id$ctx introduction
conn_key/fivetuple: Drop support for non conn_id records
Conn: Move conn_id init and flip to IPBasedConnKey
IPBasedConnKey: Add GetTransportProto() helper
input/Manager: Ignore empty record types
external: Bump commit hashes for external suites
ip/vlan_fivetuple: Populate nested conn_id_context, not conn_id
ConnKey: Extend DoPopulateConnIdVal() with ctx
btest: Update tests and baselines after adding ctx to conn_id
init-bare: Add conn_id_ctx to conn_id
Somewhere, record types with zero fields apparently get the optional
attribute. The input/sqlite/basic test failed, complaining that ctx is
optional. It isn't optional, and when it has zero fields we can just
ignore it.
Also adds an input framework test with an explicit empty record type.
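For illustration, the shape of the case in question (type names hypothetical):
```
# Hypothetical types mirroring the situation: a destination record that
# embeds a record type with zero fields; the input framework now simply
# ignores the empty `ctx` field.
type Ctx: record { };

type Val: record {
	msg: string;
	ctx: Ctx;
};
```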
Closes #4504
Messages are not typical responses, so they need special handling. This
differs between RESP2 and RESP3, so this is the first instance where
the script layer needs to tell the difference.
Using a label named "endpoint" is not intuitive and requires explaining to
users that it's really just the Cluster::node value. Change the label to
"node" so that we don't need to do the explaining.
This probably breaks some existing users of the Prometheus metrics, but after
looking more at metrics recently, "endpoint" really has become a thorn in my side.
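Illustratively, a scrape line changes along these lines (metric name and
value made up):
```
# before
zeek_active_sessions{endpoint="worker-01"} 42
# after
zeek_active_sessions{node="worker-01"} 42
```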
This commit fixes the parsing of the data field in the SSL analyzer. Previously,
this field contained two extra bytes at the beginning, which hold the
length of the following data.
Now, the data passed to the event only contains the actual value of the
session ticket.
The Spicy analyzer already contains the correct handling of this field,
and does not need to be updated. A test that uses the event and
exhibited the bug was added.
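Assuming the event in question is ssl_session_ticket_handshake, a minimal
handler for eyeballing the new behaviour could look like this:
```
# `ticket` now carries only the session ticket bytes, without the leading
# two-byte length field.
event ssl_session_ticket_handshake(c: connection, ticket_lifetime_hint: count, ticket: string)
	{
	print fmt("session ticket: %d bytes, lifetime hint %d s", |ticket|, ticket_lifetime_hint);
	}
```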
In the past, we used a default canonifier, which removes everything that
looks like a timestamp from log files. The goal of this is to prevent
logs from changing, e.g., due to local system times ending up in log
files.
This, however, also has the side-effect of removing information that is
parsed from protocols and that probably should be part of our tests.
There is at least one test (1999 certificates) where the entire test
output was essentially removed by the canonifier.
GH-4521 was similarly masked by this.
This commit changes the default canonifier, so that only the first
timestamp in a line is removed. This should skip timestamps that are
likely to change while keeping timestamps that are parsed
from protocol information.
A pass has been made over the tests, with some additional adjustments
for cases which require the old canonifier.
There are some cases in which we probably could go further and not
remove timestamps at all - that, however, seems like a follow-up
project.
Now the not_before/not_after fields output GMT-based times.
Also adds a new btest diff canonifier which only removes the first
timestamp in a line.
Fixes GH-4521
When looking up the postprocessor function from shadow files, id::find_func()
would abort() if the function wasn't available, instead of falling back
to the default postprocessor.
Fix by using id::find() and checking the type explicitly, also adding a
strict type check while at it.
This issue was tickled by loading the json-streaming-logs package,
Zeek creating shadow files containing its custom postprocessor function,
then restarting Zeek without the package loaded.
Closes #4562