For a long time it has been known that attacks against the keyserver
network were easy due to the lack of cryptographic verification on
signatures, as well as the lack of other checks and balances. Add the
ability to actually verify signatures, allowing those that are not
valid (or that come from non-present keys) to be removed.
Remove v5 keyid support when libnettle not present
v5 key fingerprints are SHA256 based. We have fallbacks for MD5 + SHA1
when libnettle is not present, but there's no intent to provide a SHA256
fallback, and we're close to the point where support for building
without libnettle will be removed entirely.
Move key database backends into their own directory
These logically sit together and are separate from the core code, so
move them off to a separate directory. There are various bits of support
that should be hived off the core libonak and moved in here too, but that
can wait until later.
Cleanup postinst to avoid recursive chown of database
The recursive chown of /var/lib/onak has the potential to be abused
during installation, so just chown the top level directory and then run
the onak that's initialising the DB as the onak user, rather than root.
Add option to only accept updates for existing keys
It's plausible that in a curated keyring scenario a keyserver should
accept updates to existing keys (new signatures, updated expiries, new
subkeys) but not entirely new keys. Add a configuration file option which
enforces this behaviour.
GCC -Wall found a valid issue with a & instead of a && in the keyring DB
backend, a buffer assignment missing a cast and some unused variables in
the DB4 backend.
GCC with the -fsanitize=undefined flag issues complaints like:
runtime error: left shift of 232 by 24 places cannot be represented in
type 'int'
due to the fact we shift the byte values into a 64 bit value. Rather
than adding lots of casts, split out the formation into a set of steps
that shifts only the final 64 bit value, never the raw byte value.
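As a rough sketch of the pattern, with illustrative variable names
rather than the actual onak code: the C integer promotions mean a
shifted byte is shifted as an int, so the shift has to happen on the
64 bit accumulator instead.

    uint64_t keyid = 0;
    int i;

    for (i = 0; i < 8; i++) {
        /* Shift the 64 bit value and OR the byte in; the byte
         * itself is never shifted, so nothing overflows an int. */
        keyid <<= 8;
        keyid |= data[i];
    }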
There are various keys in the wild, such as Evil32 (https://evil32.com/)
which a keyserver is unlikely to want to carry in the general case.
Introduce the capability to have a blacklist of fingerprints which will
not be accepted into the keyserver.
The old .conf config file was intended to be compatible with the ancient
PKS keyserver. All new features are only supported with the .ini format,
and supporting both in the automated tests is becoming awkward. So
deprecate the old style and stop testing it.
If the first new key had no new components we'd remove it from the list
but not actually free it. Fix things up so we free it properly. Found
with gcc's -fsanitize=leak option.
We were building a linked list of words in each UID, but then not
cleaning up this linked list when we were done with it. Found with gcc
-fsanitize=leak.
This is a long-standing memory leak when merging signed packet lists. It
hasn't been observed in the wild because the key merging is normally
done in a transient process. Found using GCC with -fsanitize=leak.
Expose more of calculating the packet signature hash
As preparation for full signature verification, move to calculating the
signature hash and exposing it, and then doing the verification in the
cleaning routine.
v3 keys have long been considered insecure. While we want to retain
support for them there's no reason most keyservers should actually store
them these days. So drop them by default when running cleankeys().
This expanded a 32 bit key ID into a 64 bit key ID. It's mostly
redundant, so encode 64 bit key IDs where necessary and rely on the
fetch fallback to 32 bit elsewhere.
delete_key was deleting a key based on the keyid, which is
insufficiently precise. Move over to the full fingerprint (even though
with some backends this is an effective no-op for now).
Originally key retrieval was all performed on the 64 bit key ID but back
in 2013 support was added for fetching by fingerprint. However not all
the pieces to deal with colliding 64 bit key IDs were added at that time.
This commit improves things by:
* Using the fingerprint to retrieve the key to update in update_keys()
* Returning all keys that share a 64 bit key ID from the DB4 backend,
rather than just the first found.
* Adding a test to ensure multiple keys are returned for a colliding
key lookup.
It also removes the old compatibility code for the DB4 backend with key
lookups by 64 bit key ID.
Note this isn't yet common in the wild; unlike Evil32 it is not possible
to create collisions for arbitrary key IDs.
In failure paths we can leak the memory allocated to hold the directory
path. This isn't really a problem, as we'll exit shortly afterwards, but
scan-build complains and we should really fix it for completeness.
Exit on failure to initialise a dynamic DB backend
The dynamic DB backend does a lot of checks to ensure it can load the
requested backend successfully, but didn't actually check that it
initialised correctly. Bomb out with a graceful error message if this
happens rather than leading the caller to think everything is ok.
Allow the use of an OpenPGP keyring (like keyring.gpg from GnuPG v1) as
a read-only database backend. This is just a collection of keys
concatenated together. The file is mmapped on load and parsed for keys,
then simple linear searches are done for keyids/fingerprints.
Performance with large numbers of keys will not be good.
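A minimal sketch of the load path, assuming POSIX mmap(2); the function
and variable names here are illustrative rather than onak's actual
backend API.

    #include <fcntl.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static uint8_t *map_keyring(const char *path, size_t *len)
    {
        struct stat st;
        uint8_t *base;
        int fd;

        fd = open(path, O_RDONLY);
        if (fd < 0)
            return NULL;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return NULL;
        }
        *len = st.st_size;
        base = mmap(NULL, *len, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);    /* The mapping stays valid after close(). */

        return (base == MAP_FAILED) ? NULL : base;
    }

Each lookup is then a linear walk over the parsed keys in the mapping.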
Rather than only parsing the encoded OID to calculate the key size, make
a helper function in decodekey and use that. This will be useful when
trying to parse key material for signature checks.
Assume if we have Nettle it has all the hashes we need
Older versions of Nettle didn't support the SHA2 functions fully so we
checked for their existence. Switch to assuming they're present if we
have Nettle at all.
Don't require a database connection for a key index
We need an active keydb backend if we want to be able to lookup
UIDs for signatures, but if we don't have one we can still index the
key. Only use keyid2uid when we have a non-NULL dbctx.
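Sketched roughly, with the calling convention simplified and
print_sig_line() a hypothetical stand-in for the index output:

    char *uid = NULL;

    /* Only resolve the signature issuer to a UID when we have a live
     * backend context; without one, still index the key itself. */
    if (dbctx != NULL)
        uid = dbctx->keyid2uid(dbctx, sigid);

    print_sig_line(sigid, (uid != NULL) ? uid : "[unknown]");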
v5 is being specified in RFC4880bis. It uses a slightly different packet
format for public key packets and uses SHA2-256 for fingerprints. Add
basic support for parsing and storing these keys, and some new unit
tests using the v5 test key.
A new subpacket containing the entire fingerprint of the signature
issuer has been added in RFC4880bis. This improves on the old issuer
keyid subpacket type.
The auto* tools are hard to work with, and I'd like to split out various
bits of onak into a shared library useful to other projects as well as
various internal helpers (e.g. a backend DB library). To help with that
move over to CMake as a more modern but still widely available build
system.
Cleanup tests to be able to run from a different directory
runtests assumes it's run from the directory it lives in, and that this
is the build directory. Improve it to cope with a build in a different
directory to the source and work out the correct paths.
As per draft-dkg-openpgp-abuse-resistant-keystore add the ability to
drop "large" packets. These are defined as UIDs more than 1024 bytes
long, UATs greater than 64k in size and all other packets larger than
8383 bytes (the maximum that will fit in a 2 byte new format packet
length). Disabled by default, enable with "check_packet_size" in the
verification config section.
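A rough sketch of the size policy; the constant and function names here
are illustrative, not the actual implementation.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_UID_LEN   1024    /* UIDs over 1024 bytes         */
    #define MAX_UAT_LEN  65536    /* UATs over 64k                */
    #define MAX_PKT_LEN   8383    /* max 2 byte new format length */

    static bool packet_too_large(uint8_t tag, size_t len)
    {
        switch (tag) {
        case 13:        /* UID packet */
            return len > MAX_UID_LEN;
        case 17:        /* UAT packet */
            return len > MAX_UAT_LEN;
        default:
            return len > MAX_PKT_LEN;
        }
    }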
Use a set of policy flags to indicate what key cleaning to perform
To decouple cleankeys.c from the keyserver config, and prepare for an
extension of the policies available, use a set of flags to indicate what
key cleaning to perform.
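For instance, something along these lines; the flag names are
hypothetical rather than the actual ones:

    /* Bitmask of cleaning policies, passed instead of the config. */
    #define ONAK_CLEAN_CHECK_SIGHASH  (1 << 0)
    #define ONAK_CLEAN_LARGE_PACKETS  (1 << 1)
    #define ONAK_CLEAN_DROP_V3_KEYS   (1 << 2)

    /* The keyserver builds the set from its own configuration, so
     * cleankeys.c never has to see the config structure itself: */
    uint64_t policy = ONAK_CLEAN_CHECK_SIGHASH | ONAK_CLEAN_DROP_V3_KEYS;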
Use generic fallback in stacked backend for non core routines
The non-core routines that don't directly return or store a key
structure often fall back to the generic routines under the hood. This
means we can miss propagating a retrieved key up to the top level of the
stack. Avoid this by only calling the non-generic version for the top
layer, then falling back to the generics, which will do the appropriate
store.
The version check on packets was too strict - there are a bunch of
packets that don't have a version (such as the UID). Make these checks
more specific based on the definitions from RFC4880.
(Lesson learned: Do not commit without running the automated tests.)
Although we checked on each round of subpackets that we were still
within the correct length, we weren't checking that the subpacket length
itself fit within the remaining data. Fixes some issues found using
American Fuzzy Lop.
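The shape of the extra check, sketched with hypothetical helper and
error names:

    while (remaining > 0) {
        size_t hdrlen, sublen;

        sublen = subpacket_length(data, &hdrlen);
        /* The new check: this subpacket, header included, must fit
         * in the data we have left. */
        if (hdrlen + sublen > remaining)
            return ONAK_E_INVALID_PKT;
        process_subpacket(data + hdrlen, sublen);
        data += hdrlen + sublen;
        remaining -= hdrlen + sublen;
    }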
At present only PGP packet versions up to 4 are supported. There's no
indication version 5+ will be backwards compatible, so if we see
anything higher it indicates something unsupported. Fixes some issues
found using American Fuzzy Lop.
Throw away invalid packet data when parsing packets
We would detect that a packet wasn't correctly formed, and cope with
attempts to allocate too much memory that failed, when parsing keys.
However the old partial packet structure was still left around. If
we hit an error when parsing an incoming packet make sure it's fully
cleaned up.
Prevent sign extension when parsing large packet sizes
A 2GB+ packet is likely to be a mistake, but in the event it was
legitimate sign extension could result in a much larger amount of
memory being allocated (and probably failing). Fix this by trying
to ensure we're doing an unsigned left shift.
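Roughly, with illustrative names: if the length bytes are held in a
signed type, the promoted shift sign-extends, so cast each byte to an
unsigned type before shifting.

    size_t length;

    /* Casting up to size_t first means a high top byte can't
     * sign-extend into an enormous allocation request. */
    length = ((size_t) data[0] << 24) |
             ((size_t) data[1] << 16) |
             ((size_t) data[2] << 8) |
              (size_t) data[3];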
Move cleankey.o from being an extra object to being part of the core
objects; the HKP backend is using it for starters, and so do various
other bits.
When a key was being updated over keyd it would do a delete and then
a store, which ended up being outside a transaction. Add an update
command so that the backend can do the update fully itself.
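Conceptually the backend side now looks something like this; the
context structure and function names are illustrative:

    static int update_key(struct onak_dbctx *ctx,
                          struct openpgp_publickey *key)
    {
        /* Delete + store as one transaction, rather than two
         * separate operations driven from the keyd client. */
        ctx->starttrans(ctx);
        ctx->delete_key(ctx, key);
        ctx->store_key(ctx, key);

        return ctx->endtrans(ctx);
    }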
The change to the new configuration file format introduced some paper
bag errors in the mail processing script. Fix these, and add a Perl
syntax check into the test target to try and prevent this sort of thing
in future.
When keyd is in use, any backend configuration is ignored by the clients
and keyd is contacted instead. The new config changes failed to correctly
migrate the overriding mechanism for this, and as a result nothing was
using keyd.
Fix use of absolute path in Debian postinst script
The changes to the Debian postinst to update an existing modified
configuration to the new style called onak with a full pathname. This
is contrary to Policy, so drop the path as dpkg should call us with
a sane $PATH.
This backend takes advantage of the new configuration file flexibility
to enable the stacking of multiple backends together, with each being
tried in turn until the desired keys are found. All stores go to the
first configured backend, and fetches from subsequent backends are
also stored in the first backend.
Add "dumpconfig" command to onak to aide configuration file migration
onak is capable of parsing both old and new style configuration files,
but future improvements will only be added to the new style. To aid
users in the migration to the new format add a "dumpconfig" option to
the onak binary which will dump the current configuration in the new
format.
onak's config file format grew from the pksd config style. pksd is long
obsolete, and there are a number of features it would be nice to add to
onak which this config format makes hard to support cleanly. Move to
a .ini style config file, with [section] and name=value definitions. The
old format is still supported and at present an old style config is
searched for first to ensure smooth upgrades, but any new config options
will only be supported by the new style.
Additionally add tests against both old + new config styles for all
backends.
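For illustration, a new style config looks along these lines; the
check_packet_size option is the one documented above, while the layout
is otherwise a made-up fragment rather than a complete real config:

    [verification]
    ; Disabled by default; drop "large" packets when enabled.
    check_packet_size=true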
onak + wotsap were failing to free the memory allocated for the config
file name if it was passed on the command line, and the config structure
cleanups failed to free any configured sock_dir or the actual DB
backend config structure. All of this gets cleaned up by normal program
exit (which is when we do our clean-up anyway), but fix it regardless.
Rather than having each test script hard code the config file that
should be used, have the top level runtests script pass in the filename
to use as the config as a command line parameter.
The indentation in the config file reading function is horrible, so
pull the actual parsing of the line out to its own function, simplifying
the loop to primarily be about reading each line and trimming
whitespace.
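The resulting loop shape, with illustrative names:

    while (fgets(line, sizeof(line), conffile) != NULL) {
        trim_whitespace(line);    /* hypothetical helper */
        if (line[0] == '\0' || line[0] == '#')
            continue;
        /* All of the per-option handling now lives in here. */
        parseconfigline(line);
    }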