Assume if we have Nettle it has all the hashes we need
Older versions of Nettle didn't support the SHA2 functions fully so we
checked for their existence. Switch to assuming they're present if we
have Nettle at all.
Don't require a database connection for a key index
We need an active keydb backend if we want to be able to lookup
UIDs for signatures, but if we don't have one we can still index the
key. Only use keyid2uid when we have a non-NULL dbctx.
The v5 key format is being specified in RFC4880bis. It uses a slightly
different packet format for public key packets and uses SHA2-256 for
fingerprints. Add basic support for parsing and storing these keys, and
some new unit tests using the v5 test key.
A new subpacket containing the entire fingerprint of the signature
issuer has been added in RFC4880bis. This improves on the old issuer
keyid subpacket type.
The auto* tools are hard to work with, and I'd like to split out various
bits of onak into a shared library useful to other projects as well as
various internal helpers (e.g. a backend DB library). To help with that,
move over to CMake as a more modern but still widely available build
system.
Cleanup tests to be able to run from a different directory
runtests assumes it's run from the directory it lives in, and that this
is the build directory. Improve it to be able to cope with a build in a
different directory to the source and work out the correct paths.
As per draft-dkg-openpgp-abuse-resistant-keystore add the ability to
drop "large" packets. These are defined as UIDs more than 1024 bytes
long, UATs greater than 64k in size and all other packets larger than
8383 bytes (the maximum that will fit in a 2 byte new format packet
length). Disabled by default, enable with "check_packet_size" in the
verification config section.
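For illustration only, the option would appear in the new style config
along these lines (the boolean value syntax shown is assumed, not taken
from the shipped sample config):

    [verification]
    check_packet_size=true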
Use a set of policy flags to indicate what key cleaning to perform
To decouple cleankeys.c from the keyserver config, and prepare for an
extension of the policies available, use a set of flags to indicate what
key cleaning to perform.
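A minimal sketch of the idea in C (the flag names and the width of the
flags variable are illustrative, not necessarily what cleankeys.c ends
up using):

    #include <stdint.h>

    /* Hypothetical flag names: one bit per cleaning policy. */
    #define CLEAN_CHECK_SIGHASH  (1 << 0)
    #define CLEAN_LARGE_PACKETS  (1 << 1)
    #define CLEAN_DROP_V3_KEYS   (1 << 2)

    /* Callers combine the policies they want rather than passing the
     * keyserver config structure around. */
    uint64_t policies = CLEAN_CHECK_SIGHASH | CLEAN_LARGE_PACKETS;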
Use generic fallback in stacked backend for non-core routines
The non-core routines that don't directly return or store a key
structure often fall back to the generic routines under the hood. This
means we can miss propagating a retrieved key up to the top level of the
stack. Avoid this by only calling the non-generic version for the top
layer, then falling back to the generics which will do the appropriate
store on fallback.
The version check on packets was too strict - there are a bunch of
packets that don't have a version (such as the UID). Make these checks
more specific, based on the definitions from RFC4880.
(Lesson learned: Do not commit without running the automated tests.)
Although we checked on each round of subpackets that we were still
within the correct length, we weren't checking the subpacket length
itself fit within the remaining data. Fixes some issues found using
American Fuzzy Lop.
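The missing check is roughly the following (a sketch with illustrative
names, not the actual parsing code):

    #include <stdbool.h>
    #include <stddef.h>

    /*
     * Given the total length of the subpacket area and the current
     * offset into it (already known to be inside the area by the
     * existing per-round check), only accept a subpacket whose claimed
     * length fits in the data that remains.
     */
    static bool subpacket_fits(size_t area_len, size_t offset,
                               size_t claimed_len)
    {
        return claimed_len <= area_len - offset;
    }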
At present only PGP packet versions up to 4 are supported. There's no
indication version 5+ will be backwards compatible, so if we see
anything higher it indicates something unsupported. Fixes some issues
found using American Fuzzy Lop.
Throw away invalid packet data when parsing packets
When parsing keys we would detect that a packet wasn't correctly formed,
and handle failed attempts to allocate too much memory. However the old
partial packet structure was still left around. If
we hit an error when parsing an incoming packet make sure it's fully
cleaned up.
Prevent sign extension when parsing large packet sizes
A 2GB+ packet is likely to be a mistake, but in the event it was
legitimate, sign extension could result in a much larger amount of
memory being allocated (and probably failing). Fix this by ensuring
we're doing an unsigned left shift.
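The shape of the fix is to cast each length octet up to an unsigned
type before shifting, something like this sketch (variable and function
names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Assemble a 4 octet new-format packet length without any
     * intermediate signed arithmetic that could sign extend. */
    static size_t packet_length(const uint8_t len[4])
    {
        return ((size_t) len[0] << 24) |
               ((size_t) len[1] << 16) |
               ((size_t) len[2] << 8) |
                (size_t) len[3];
    }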
Move cleankey.o from being an extra object to being part of the core
objects; the HKP backend is using it for starters, and so do various
other bits.
When a key was being updated over keyd it would do a delete and then
a store, which ended up being outside a transaction. Add an update
command so that the backend can do the update fully itself.
The change to the new configuration file format introduced some paper
bag errors in the mail processing script. Fix these, and add a Perl
syntax check into the test target to try and prevent this sort of thing
in future.
When keyd is in use any backend configuration is ignored for the clients
and keyd is contacted instead. The new config changes failed to correctly
migrate the overriding mechanism for this and as a result nothing was
using keyd.
Fix use of absolute path in Debian postinst script
The changes to the Debian postinst to update an existing modified
configuration to the new style called onak with a full pathname. This
is contrary to Policy, so drop the path as dpkg should call us with
a sane $PATH.
This backend takes advantage of the new configuration file flexibility
to enable the stacking of multiple backends together, with each being
tried in turn until the desired keys are found. All stores go to the
first configured backend, and fetches from subsequent backends are
also stored in the first backend.
Add "dumpconfig" command to onak to aid configuration file migration
onak is capable of parsing both old and new style configuration files,
but future improvements will only be added to the new style. To aid
users in the migration to the new format add a "dumpconfig" option to
the onak binary which will dump the current configuration in the new
format.
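A possible use during migration (assuming the dump is written to
standard output; the output filename here is just an example):

    onak dumpconfig > onak.ini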
onak's config file format grew from the pksd config style. pksd is long
obsolete and there are a number of features it would be nice to add to
onak which this config format makes hard to support cleanly. Move to
a .ini style config file, with [section] and name=value definitions. The
old format is still supported and at present an old style config is
searched for first to ensure smooth upgrades, but any new config options
will only be supported by the new style.
Additionally add tests against both old + new config styles for all
backends.
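Purely for illustration, the new style looks something like this; the
section and option names shown are examples rather than a definitive
list:

    [main]
    loglevel=3
    logfile=/var/log/onak.log

    [backend:defaultdb]
    type=db4
    location=/var/lib/onak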
onak + wotsap were failing to free the memory allocated for the config
file name if it was passed on the command line, and the config structure
cleanups failed to free any configured sock_dir or the actual DB
backend config structure. All of this gets cleaned up by normal program
exit (which is when we do our clean-up anyway), but fix it regardless.
Rather than having each test script hard code the config file that
should be used, have the top level runtests script pass the filename
to use as the config in as a command line parameter.
The indentation in the config file reading function is horrible, so
pull the actual parsing of the line out to its own function simplifying
the loop to primarily be about reading each line and trimming white
space.
Parse pks_bin_dir / mail_dir in the C config handling
pks_bin_dir and mail_dir are only used by the Perl that handles
incoming keyserver email, but parse them in the C code rather than
just ignoring them. This helps pave the way for some config file format
changes and tightening up the parsing to complain about unused options.
Fix compilation breakage introduced in last commit
An unrelated change sneaked in as part of the proper splitting out of
backend database configuration - configuration initialisation for config
options that aren't yet present. Remove them.
While database backends have had private context for some time they've
all been using the same configuration details from the global config
structure. Create a new DB specific config structure and initialise
a single instance from the config file. Also modify the DB backend
initialise functions to take this config structure as a parameter.
This will allow in the future for multiple different backends (whether
the same type or different) to be included at the same time.
Switch to C99 struct initialisation for default configuration
Inbound changes to the configuration file handling will change the
config structure around. Switch to using C99 style initialisation, which
is much clearer to read and reduces the risk of assigning values to the
wrong configuration variable.
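A cut-down illustration of the style (the structure and field names
here are invented for the example, not onak's real config structure):

    #include <stdbool.h>
    #include <stddef.h>

    struct example_config {
        int         maxkeys;
        bool        use_keyd;
        const char *sock_dir;
    };

    /* Each default is tied to a named field, so adding or reordering
     * fields can't silently shift values onto the wrong member. */
    static struct example_config defaults = {
        .maxkeys  = 128,
        .use_keyd = false,
        .sock_dir = NULL,
    };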
Switch to using mail_dir for incoming mail lock file
Rather than using the db_dir for the lock file to prevent multiple
onak-mail instances processing incoming requests at the same time, use
the mail_dir directory where the incoming messages are spooled anyway. This
is cleaner and will cope with the potential for multiple DB backends to
be in play in the future.
Rather than putting our socket in $localstatedir/run, use $runstatedir,
which is supported by the Debian autoconf 2.69-9 package and will be in
autoconf 2.70. Fall back to $localstatedir/run when an older version
of autoconf is used.
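One common configure.ac idiom for this fallback, sketched here rather
than quoted from onak's actual configure.ac:

    # Older autoconf doesn't know about --runstatedir at all.
    if test "x$runstatedir" = x; then
        AC_SUBST([runstatedir], ['${localstatedir}/run'])
    fi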
Add config option to specify keyd socket directory
keyd was stashing its Unix domain socket in the DB directory, which
is contrary to convention. Add a sock_dir config option and default it
to ${prefix}/var/run in the sample config file.
GnuPG will incorrectly add a pre-existing subkey that it doesn't
understand (e.g. ECC) to a key. This results in keys with a large
number of identical subkeys. Avoid this by detecting such keys and
de-duplicating the identical subkeys.
(gnupg bug report at https://bugs.gnupg.org/gnupg/issue1962)
Re-order linking for backend plugins to cope with ld --as-needed
The shared libraries for DB4/curl/PostgreSQL should all come after the
object file that uses them so that ld --as-needed can correctly pick up
the required linkage.
Don't build-depend on Debian systemd dev packages for non-Linux architectures
Later versions of systemd aren't built on FreeBSD etc so the Debian build
was failing due to an unmet build-dep on libsystemd-dev; limit this
requirement to Linux archs.
Fixes Debian bug #763924. Thanks to Pino Toscano <pino@debian.org>
Check for sd_listen_fds in libsystemd as well as libsystemd-daemon
Upstream systemd has moved the sd_listen_fds function from
libsystemd-daemon into a combined libsystemd. For Debian this change
happens in the 214-1 package. Make the autoconf script look in both
and change the Debian build-deps to look for either suitably
recent libsystemd-dev or fall back to libsystemd-daemon-dev (to aid
backports).
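The autoconf side of this amounts to searching both library names for
the symbol, along these lines (HAVE_SYSTEMD is an assumed macro name,
not necessarily the one onak defines):

    AC_SEARCH_LIBS([sd_listen_fds], [systemd systemd-daemon],
        [AC_DEFINE([HAVE_SYSTEMD], [1],
            [Define if systemd socket activation is available])])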
The use of strtoul to parse the key ID breaks parsing 64 bit key IDs
on 32 bit platforms. Use strtouq instead, which is defined as returning
a 64 bit value rather than being dependent on the length of "unsigned long".
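A minimal sketch of the difference (the function name is illustrative):

    #include <stdint.h>
    #include <stdlib.h>

    /* Parse a hex key ID into a full 64 bit value. strtoul() only
     * promises "unsigned long", which is 32 bits on many 32 bit
     * platforms; strtouq() always yields a 64 bit quantity. */
    static uint64_t parse_keyid(const char *str)
    {
        return (uint64_t) strtouq(str, NULL, 16);
    }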
Enable use of systemd + socket activation support for Debian package
Now that onak supports systemd socket activation add the appropriate
systemd service files to enable this on Debian installs. In the longer
term I'd like this to be generic for any distro; patches from people
experienced with them are most welcome.
If we have libsystemd-daemon available then enable support for
systemd socket activation of keyd. Even if this support is compiled
in we will fall back to the old behaviour of setting up our own
listening Unix domain socket if systemd is not in use.
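Roughly, the decision looks like this (a simplified sketch; the real
code is conditional on the library being found at build time and still
has to bind and listen on the fallback socket):

    #include <sys/socket.h>
    #include <systemd/sd-daemon.h>

    static int keyd_listen_fd(void)
    {
        int fds = sd_listen_fds(0);

        if (fds >= 1) {
            /* Socket activated: use the fd systemd handed us. */
            return SD_LISTEN_FDS_START;
        }

        /* Not started by systemd; create our own socket as before. */
        return socket(AF_UNIX, SOCK_STREAM, 0);
    }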
Use common function for command sending in keyd client code
Rather than duplicating the checks for a successful write and a non-error
response, use a common function for sending commands to keyd from
the DB backend client code.
Cleanup key array logic to make llvm scan-build happier
We can't have the size != 0 without keys != NULL, but the fact we
checked one at the start and the other later was confusing the static
checks from scan-build. Use size in both places; it keeps scan-build
happy and makes the code a little easier to read.
Make keyd more robust in the face of socket errors
We were mostly ignoring errors back from reads or writes to the keyd
socket. Check that we've read/written as many bytes as we expect and
pull some of this out to common functions for sending keys/replies.
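The write side checks amount to a loop of this shape (not onak's exact
helper):

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Keep writing until the whole buffer is out or a real error hits,
     * rather than assuming a single write() moved every byte. */
    static bool write_all(int fd, const void *buf, size_t len)
    {
        const unsigned char *p = buf;

        while (len > 0) {
            ssize_t done = write(fd, p, len);

            if (done < 0) {
                if (errno == EINTR)
                    continue;
                return false;
            }
            p += done;
            len -= done;
        }

        return true;
    }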
Check return value when writing PID to DB4 upgrade lock file
When writing our PID to the lock file for upgrading the DB4 version
we didn't check it was actually successful. This doesn't matter in
general (because it's the existence or not of the file we care about)
but catch it and error out appropriately anyway.
Fix issue with looking up keys by fingerprint via HKP interface
The index variable for the current parameter was being reused to parse
the fingerprint, resulting in attempting to free an unallocated piece of
memory and crashing. Use a different loop variable instead.
With the changes to the backends to store keys using the full fingerprints
it can be useful to force a key to be re-indexed and thus transitioned to
a fingerprint rather than 64 bit key id index. Ideally we'd want to be
able to do this across the entire backend, but that's a bit heavyweight for
a full keyserver so add the ability to do it for a single key to start with.