Merge tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
author Linus Torvalds <torvalds@linux-foundation.org>
Sat, 21 Sep 2019 17:40:37 +0000 (10:40 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Sat, 21 Sep 2019 17:40:37 +0000 (10:40 -0700)
Pull device mapper updates from Mike Snitzer:

 - crypto and DM crypt advances that allow the crypto API to reclaim
   implementation details that do not belong in DM crypt. The wrapper
   template for ESSIV generation that was factored out will also be used
   by fscrypt in the future.

 - Add root hash pkcs#7 signature verification to the DM verity target.

 - Add a new "clone" DM target that allows for efficient remote
   replication of a device.

 - Enhance DM bufio's cache to be tailored to each client based on use.
   Clients that make heavy use of the cache get more of it, and those
   that use less have reduced cache usage.

 - Add a new DM_GET_TARGET_VERSION ioctl to allow userspace to query the
   version number of a DM target (even if the associated module isn't
   yet loaded).

 - Fix invalid memory access in DM zoned target.

 - Fix the max_discard_sectors limit advertised by the DM raid target;
   it was mistakenly storing the limit in bytes rather than sectors.

 - Small optimizations and cleanups in DM writecache target.

 - Various fixes and cleanups in DM core, DM raid1 and space map portion
   of DM persistent data library.

* tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (22 commits)
  dm: introduce DM_GET_TARGET_VERSION
  dm bufio: introduce a global cache replacement
  dm bufio: remove old-style buffer cleanup
  dm bufio: introduce a global queue
  dm bufio: refactor adjust_total_allocated
  dm bufio: call adjust_total_allocated from __link_buffer and __unlink_buffer
  dm: add clone target
  dm raid: fix updating of max_discard_sectors limit
  dm writecache: skip writecache_wait for pmem mode
  dm stats: use struct_size() helper
  dm crypt: omit parsing of the encapsulated cipher
  dm crypt: switch to ESSIV crypto API template
  crypto: essiv - create wrapper template for ESSIV generation
  dm space map common: remove check for impossible sm_find_free() return value
  dm raid1: use struct_size() with kzalloc()
  dm writecache: optimize performance by sorting the blocks for writeback_all
  dm writecache: add unlikely for getting two block with same LBA
  dm writecache: remove unused member pointer in writeback_struct
  dm zoned: fix invalid memory access
  dm verity: add root hash pkcs#7 signature verification
  ...

27 files changed:
Documentation/admin-guide/device-mapper/dm-clone.rst [new file with mode: 0644]
Documentation/admin-guide/device-mapper/verity.rst
crypto/Kconfig
crypto/Makefile
crypto/essiv.c [new file with mode: 0644]
drivers/md/Kconfig
drivers/md/Makefile
drivers/md/dm-bufio.c
drivers/md/dm-clone-metadata.c [new file with mode: 0644]
drivers/md/dm-clone-metadata.h [new file with mode: 0644]
drivers/md/dm-clone-target.c [new file with mode: 0644]
drivers/md/dm-crypt.c
drivers/md/dm-ioctl.c
drivers/md/dm-raid.c
drivers/md/dm-raid1.c
drivers/md/dm-stats.c
drivers/md/dm-table.c
drivers/md/dm-verity-target.c
drivers/md/dm-verity-verify-sig.c [new file with mode: 0644]
drivers/md/dm-verity-verify-sig.h [new file with mode: 0644]
drivers/md/dm-verity.h
drivers/md/dm-writecache.c
drivers/md/dm-zoned-target.c
drivers/md/dm.c
drivers/md/dm.h
drivers/md/persistent-data/dm-space-map-common.c
include/uapi/linux/dm-ioctl.h

diff --git a/Documentation/admin-guide/device-mapper/dm-clone.rst b/Documentation/admin-guide/device-mapper/dm-clone.rst
new file mode 100644 (file)
index 0000000..b43a34c
--- /dev/null
@@ -0,0 +1,333 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+========
+dm-clone
+========
+
+Introduction
+============
+
+dm-clone is a device mapper target which produces a one-to-one copy of an
+existing, read-only source device into a writable destination device: It
+presents a virtual block device which makes all data appear immediately, and
+redirects reads and writes accordingly.
+
+The main use case of dm-clone is to clone a potentially remote, high-latency,
+read-only, archival-type block device into a writable, fast, primary-type device
+for fast, low-latency I/O. The cloned device is visible/mountable immediately
+and the copy of the source device to the destination device happens in the
+background, in parallel with user I/O.
+
+For example, one could restore an application backup from a read-only copy,
+accessible through a network storage protocol (NBD, Fibre Channel, iSCSI, AoE,
+etc.), into a local SSD or NVMe device, and start using the device immediately,
+without waiting for the restore to complete.
+
+When the cloning completes, the dm-clone table can be removed altogether and be
+replaced, e.g., by a linear table, mapping directly to the destination device.
+
+The dm-clone target reuses the metadata library used by the thin-provisioning
+target.
+
+Glossary
+========
+
+   Hydration
+     The process of filling a region of the destination device with data from
+     the same region of the source device, i.e., copying the region from the
+     source to the destination device.
+
+Once a region gets hydrated, all I/O for it is redirected to the destination
+device.
+
+Design
+======
+
+Sub-devices
+-----------
+
+The target is constructed by passing three devices to it (along with other
+parameters detailed later):
+
+1. A source device - the read-only device that gets cloned and is the source of
+   the hydration.
+
+2. A destination device - the destination of the hydration, which will become a
+   clone of the source device.
+
+3. A small metadata device - it records which regions are already valid in the
+   destination device, i.e., which regions have already been hydrated, or have
+   been written to directly, via user I/O.
+
+The size of the destination device must be at least equal to the size of the
+source device.
+
+Regions
+-------
+
+dm-clone divides the source and destination devices into fixed-sized regions.
+Regions are the unit of hydration, i.e., the minimum amount of data copied from
+the source to the destination device.
+
+The region size is configurable when you first create the dm-clone device. The
+recommended region size is the same as the file system block size, which is
+usually 4KB. The region size must be a power of two between 8 sectors (4KB) and
+2097152 sectors (1GB).
+
+Reads and writes from/to hydrated regions are serviced from the destination
+device.
+
+A read to a not yet hydrated region is serviced directly from the source device.
+
+A write to a not yet hydrated region is delayed until the corresponding region
+has been hydrated; the hydration of that region starts immediately.
+
+Note that a write request with size equal to region size will skip copying of
+the corresponding region from the source device and overwrite the region of the
+destination device directly.
+
+Discards
+--------
+
+dm-clone interprets a discard request to a range that hasn't been hydrated yet
+as a hint to skip hydration of the regions covered by the request, i.e., it
+skips copying the region's data from the source to the destination device, and
+only updates its metadata.
+
+If the destination device supports discards, then by default dm-clone will pass
+down discard requests to it.
+
+Background Hydration
+--------------------
+
+dm-clone copies continuously from the source to the destination device, until
+all of the device has been copied.
+
+Copying data from the source to the destination device uses bandwidth. The user
+can set a throttle to prevent more than a certain amount of copying occurring at
+any one time. Moreover, dm-clone takes into account user I/O traffic going to
+the devices and pauses the background hydration when there is I/O in-flight.
+
+A message `hydration_threshold <#regions>` can be used to set the maximum number
+of regions being copied, the default being 1 region.
+
+dm-clone employs dm-kcopyd for copying portions of the source device to the
+destination device. By default, we issue copy requests of size equal to the
+region size. A message `hydration_batch_size <#regions>` can be used to tune the
+size of these copy requests. Increasing the hydration batch size results in
+dm-clone trying to batch together contiguous regions, so we copy the data in
+batches of this many regions.
+
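+For illustration, assuming a dm-clone device named `clone` (as in the examples
+below), both settings can be tuned at runtime with messages such as the
+following; the values shown are purely illustrative::
+
+    dmsetup message clone 0 hydration_threshold 256
+    dmsetup message clone 0 hydration_batch_size 16
+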
+When the hydration of the destination device finishes, a dm event will be sent
+to user space.
+
+Updating on-disk metadata
+-------------------------
+
+On-disk metadata is committed every time a FLUSH or FUA bio is written. If no
+such requests are made then commits will occur every second. This means the
+dm-clone device behaves like a physical disk that has a volatile write cache. If
+power is lost you may lose some recent writes. The metadata should always be
+consistent in spite of any crash.
+
+Target Interface
+================
+
+Constructor
+-----------
+
+  ::
+
+   clone <metadata dev> <destination dev> <source dev> <region size>
+         [<#feature args> [<feature arg>]* [<#core args> [<core arg>]*]]
+
+ ================ ==============================================================
+ metadata dev     Fast device holding the persistent metadata
+ destination dev  The destination device, where the source will be cloned
+ source dev       Read only device containing the data that gets cloned
+ region size      The size of a region in sectors
+
+ #feature args    Number of feature arguments passed
+ feature args     no_hydration or no_discard_passdown
+
+ #core args       An even number of arguments corresponding to key/value pairs
+                  passed to dm-clone
+ core args        Key/value pairs passed to dm-clone, e.g. `hydration_threshold
+                  256`
+ ================ ==============================================================
+
+Optional feature arguments are:
+
+ ==================== =========================================================
+ no_hydration         Create a dm-clone instance with background hydration
+                      disabled
+ no_discard_passdown  Disable passing down discards to the destination device
+ ==================== =========================================================
+
+Optional core arguments are:
+
+ ================================ ==============================================
+ hydration_threshold <#regions>   Maximum number of regions being copied from
+                                  the source to the destination device at any
+                                  one time, during background hydration.
+ hydration_batch_size <#regions>  During background hydration, try to batch
+                                  together contiguous regions, so we copy data
+                                  from the source to the destination device in
+                                  batches of this many regions.
+ ================================ ==============================================
+
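+For illustration, a constructor line of the following form would create a clone
+target with a 4KB (8-sector) region size, discard passdown disabled and a
+hydration threshold of 128 regions, where /dev/sdc, /dev/sdb and /dev/sda are
+placeholders for the metadata, destination and source devices respectively::
+
+    clone /dev/sdc /dev/sdb /dev/sda 8 1 no_discard_passdown 2 hydration_threshold 128
+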
+Status
+------
+
+  ::
+
+   <metadata block size> <#used metadata blocks>/<#total metadata blocks>
+   <region size> <#hydrated regions>/<#total regions> <#hydrating regions>
+   <#feature args> <feature args>* <#core args> <core args>*
+   <clone metadata mode>
+
+ ======================= =======================================================
+ metadata block size     Fixed block size for each metadata block in sectors
+ #used metadata blocks   Number of metadata blocks used
+ #total metadata blocks  Total number of metadata blocks
+ region size             Configurable region size for the device in sectors
+ #hydrated regions       Number of regions that have finished hydrating
+ #total regions          Total number of regions to hydrate
+ #hydrating regions      Number of regions currently hydrating
+ #feature args           Number of feature arguments to follow
+ feature args            Feature arguments, e.g. `no_hydration`
+ #core args              Even number of core arguments to follow
+ core args               Key/value pairs for tuning the core, e.g.
+                         `hydration_threshold 256`
+ clone metadata mode     ro if read-only, rw if read-write
+
+                         In serious cases where even a read-only mode is deemed
+                         unsafe, no further I/O will be permitted and the status
+                         will just contain the string 'Fail'. If the metadata
+                         mode changes, a dm event will be sent to user space.
+ ======================= =======================================================
+
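+For example, a device that is still hydrating might report a status line like
+the following, where all values are purely illustrative::
+
+    8 72/4096 8 65536/1310720 1 1 no_discard_passdown 2 hydration_threshold 256 rw
+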
+Messages
+--------
+
+  `disable_hydration`
+      Disable the background hydration of the destination device.
+
+  `enable_hydration`
+      Enable the background hydration of the destination device.
+
+  `hydration_threshold <#regions>`
+      Set background hydration threshold.
+
+  `hydration_batch_size <#regions>`
+      Set background hydration batch size.
+
+Examples
+========
+
+Clone a device containing a file system
+---------------------------------------
+
+1. Create the dm-clone device.
+
+   ::
+
+    dmsetup create clone --table "0 1048576000 clone $metadata_dev $dest_dev \
+      $source_dev 8 1 no_hydration"
+
+2. Mount the device and trim the file system. dm-clone interprets the discards
+   sent by the file system and it will not hydrate the unused space.
+
+   ::
+
+    mount /dev/mapper/clone /mnt/cloned-fs
+    fstrim /mnt/cloned-fs
+
+3. Enable background hydration of the destination device.
+
+   ::
+
+    dmsetup message clone 0 enable_hydration
+
+4. When the hydration finishes, we can replace the dm-clone table with a linear
+   table.
+
+   ::
+
+    dmsetup suspend clone
+    dmsetup load clone --table "0 1048576000 linear $dest_dev 0"
+    dmsetup resume clone
+
+   The metadata device is no longer needed and can be safely discarded or reused
+   for other purposes.
+
+Known issues
+============
+
+1. Reads to not-yet-hydrated regions are redirected to the source device. If
+   reading the source device has high latency and the user repeatedly reads from
+   the same regions, this behaviour could degrade performance. We should use
+   these reads as hints to hydrate the relevant regions sooner. Currently, we
+   rely on the page cache to cache these regions, so we hopefully don't end up
+   reading them multiple times from the source device.
+
+2. Release in-core resources, i.e., the bitmaps tracking which regions are
+   hydrated, after the hydration has finished.
+
+3. During background hydration, if we fail to read the source or write to the
+   destination device, we print an error message, but the hydration process
+   continues indefinitely, until it succeeds. We should stop the background
+   hydration after a number of failures and emit a dm event for user space to
+   notice.
+
+Why not...?
+===========
+
+We explored the following alternatives before implementing dm-clone:
+
+1. Use dm-cache with cache size equal to the source device and implement a new
+   cloning policy:
+
+   * The resulting cache device is not a one-to-one mirror of the source device
+     and thus we cannot remove the cache device once cloning completes.
+
+   * dm-cache writes to the source device, which violates our requirement that
+     the source device must be treated as read-only.
+
+   * Caching is semantically different from cloning.
+
+2. Use dm-snapshot with a COW device equal to the source device:
+
+   * dm-snapshot stores its metadata in the COW device, so the resulting device
+     is not a one-to-one mirror of the source device.
+
+   * No background copying mechanism.
+
+   * dm-snapshot needs to commit its metadata whenever a pending exception
+     completes, to ensure snapshot consistency. In the case of cloning, we don't
+     need to be so strict and can rely on committing metadata every time a FLUSH
+     or FUA bio is written, or periodically, like dm-thin and dm-cache do. This
+     improves the performance significantly.
+
+3. Use dm-mirror: The mirror target has a background copying/mirroring
+   mechanism, but it writes to all mirrors, thus violating our requirement that
+   the source device must be treated as read-only.
+
+4. Use dm-thin's external snapshot functionality. This approach is the most
+   promising among all alternatives, as the thinly-provisioned volume is a
+   one-to-one mirror of the source device and handles reads and writes to
+   un-provisioned/not-yet-cloned areas the same way as dm-clone does.
+
+   Still:
+
+   * There is no background copying mechanism, though one could be implemented.
+
+   * Most importantly, we want to support arbitrary block devices as the
+     destination of the cloning process and not restrict ourselves to
+     thinly-provisioned volumes. Thin-provisioning has an inherent metadata
+     overhead, for maintaining the thin volume mappings, which significantly
+     degrades performance.
+
+   Moreover, cloning a device shouldn't force the use of thin-provisioning. On
+   the other hand, if we wish to use thin provisioning, we can just use a thin
+   LV as dm-clone's destination device.
diff --git a/Documentation/admin-guide/device-mapper/verity.rst b/Documentation/admin-guide/device-mapper/verity.rst
index a4d1c1476d72d817d093f1b16698615ddaf0ae80..bb02caa45289443a2b5ea52f0bd83ee8cf46c50b 100644 (file)
@@ -125,6 +125,13 @@ check_at_most_once
     blocks, and a hash block will not be verified any more after all the data
     blocks it covers have been verified anyway.
 
+root_hash_sig_key_desc <key_description>
+    This is the description of the USER_KEY that the kernel will look up to
+    get the PKCS#7 signature of the root hash. The PKCS#7 signature is used to
+    validate the root hash during the creation of the device mapper block
+    device. Verification of the root hash depends on the config option
+    DM_VERITY_VERIFY_ROOTHASH_SIG being set in the kernel.
+
 Theory of operation
 ===================
 
diff --git a/crypto/Kconfig b/crypto/Kconfig
index ad86463de715fece52e108748630d7a3447f3380..9e524044d3128654d59a05f2ee7101d376f13cfc 100644 (file)
@@ -487,6 +487,34 @@ config CRYPTO_ADIANTUM
 
          If unsure, say N.
 
+config CRYPTO_ESSIV
+       tristate "ESSIV support for block encryption"
+       select CRYPTO_AUTHENC
+       help
+         Encrypted salt-sector initialization vector (ESSIV) is an IV
+         generation method that is used in some cases by fscrypt and/or
+         dm-crypt. It uses the hash of the block encryption key as the
+         symmetric key for a block encryption pass applied to the input
+         IV, making low entropy IV sources more suitable for block
+         encryption.
+
+         This driver implements a crypto API template that can be
+         instantiated either as a skcipher or as an aead (depending on the
+         type of the first template argument), and which defers encryption
+         and decryption requests to the encapsulated cipher after applying
+         ESSIV to the input IV. Note that in the aead case, it is assumed
+         that the keys are presented in the same format used by the authenc
+         template, and that the IV appears at the end of the authenticated
+         associated data (AAD) region (which is how dm-crypt uses it).
+
+         Note that the use of ESSIV is not recommended for new deployments,
+         and so this only needs to be enabled when interoperability with
+         existing encrypted volumes or filesystems is required, or when
+         building for a particular system that requires it (e.g., when
+         the SoC in question has accelerated CBC but not XTS, making CBC
+         combined with ESSIV the only feasible mode for h/w accelerated
+         block encryption).
+
 comment "Hash modes"
 
 config CRYPTO_CMAC
diff --git a/crypto/Makefile b/crypto/Makefile
index 0d2cdd523fd982cf532451d07ff32e7a3d6e9802..fcb1ee6797822ad0cd5522666d6f789f4a1e6bc9 100644 (file)
@@ -165,6 +165,7 @@ obj-$(CONFIG_CRYPTO_USER_API_AEAD) += algif_aead.o
 obj-$(CONFIG_CRYPTO_ZSTD) += zstd.o
 obj-$(CONFIG_CRYPTO_OFB) += ofb.o
 obj-$(CONFIG_CRYPTO_ECC) += ecc.o
+obj-$(CONFIG_CRYPTO_ESSIV) += essiv.o
 
 ecdh_generic-y += ecdh.o
 ecdh_generic-y += ecdh_helper.o
diff --git a/crypto/essiv.c b/crypto/essiv.c
new file mode 100644 (file)
index 0000000..a8befc8
--- /dev/null
@@ -0,0 +1,663 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ESSIV skcipher and aead template for block encryption
+ *
+ * This template encapsulates the ESSIV IV generation algorithm used by
+ * dm-crypt and fscrypt, which converts the initial vector for the skcipher
+ * used for block encryption, by encrypting it using the hash of the
+ * skcipher key as encryption key. Usually, the input IV is a 64-bit sector
+ * number in LE representation zero-padded to the size of the IV, but this
+ * is not assumed by this driver.
+ *
+ * The typical use of this template is to instantiate the skcipher
+ * 'essiv(cbc(aes),sha256)', which is the only instantiation used by
+ * fscrypt, and the most relevant one for dm-crypt. However, dm-crypt
+ * also permits ESSIV to be used in combination with the authenc template,
+ * e.g., 'essiv(authenc(hmac(sha256),cbc(aes)),sha256)', in which case
+ * we need to instantiate an aead that accepts the same special key format
+ * as the authenc template, and deals with the way the encrypted IV is
+ * embedded into the AAD area of the aead request. This means the AEAD
+ * flavor produced by this template is tightly coupled to the way dm-crypt
+ * happens to use it.
+ *
+ * Copyright (c) 2019 Linaro, Ltd. <ard.biesheuvel@linaro.org>
+ *
+ * Heavily based on:
+ * adiantum length-preserving encryption mode
+ *
+ * Copyright 2018 Google LLC
+ */
+
+#include <crypto/authenc.h>
+#include <crypto/internal/aead.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+#include "internal.h"
+
+struct essiv_instance_ctx {
+       union {
+               struct crypto_skcipher_spawn    skcipher_spawn;
+               struct crypto_aead_spawn        aead_spawn;
+       } u;
+       char    essiv_cipher_name[CRYPTO_MAX_ALG_NAME];
+       char    shash_driver_name[CRYPTO_MAX_ALG_NAME];
+};
+
+struct essiv_tfm_ctx {
+       union {
+               struct crypto_skcipher  *skcipher;
+               struct crypto_aead      *aead;
+       } u;
+       struct crypto_cipher            *essiv_cipher;
+       struct crypto_shash             *hash;
+       int                             ivoffset;
+};
+
+struct essiv_aead_request_ctx {
+       struct scatterlist              sg[4];
+       u8                              *assoc;
+       struct aead_request             aead_req;
+};
+
+static int essiv_skcipher_setkey(struct crypto_skcipher *tfm,
+                                const u8 *key, unsigned int keylen)
+{
+       struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+       SHASH_DESC_ON_STACK(desc, tctx->hash);
+       u8 salt[HASH_MAX_DIGESTSIZE];
+       int err;
+
+       crypto_skcipher_clear_flags(tctx->u.skcipher, CRYPTO_TFM_REQ_MASK);
+       crypto_skcipher_set_flags(tctx->u.skcipher,
+                                 crypto_skcipher_get_flags(tfm) &
+                                 CRYPTO_TFM_REQ_MASK);
+       err = crypto_skcipher_setkey(tctx->u.skcipher, key, keylen);
+       crypto_skcipher_set_flags(tfm,
+                                 crypto_skcipher_get_flags(tctx->u.skcipher) &
+                                 CRYPTO_TFM_RES_MASK);
+       if (err)
+               return err;
+
+       desc->tfm = tctx->hash;
+       err = crypto_shash_digest(desc, key, keylen, salt);
+       if (err)
+               return err;
+
+       crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
+       crypto_cipher_set_flags(tctx->essiv_cipher,
+                               crypto_skcipher_get_flags(tfm) &
+                               CRYPTO_TFM_REQ_MASK);
+       err = crypto_cipher_setkey(tctx->essiv_cipher, salt,
+                                  crypto_shash_digestsize(tctx->hash));
+       crypto_skcipher_set_flags(tfm,
+                                 crypto_cipher_get_flags(tctx->essiv_cipher) &
+                                 CRYPTO_TFM_RES_MASK);
+
+       return err;
+}
+
+static int essiv_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+                            unsigned int keylen)
+{
+       struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+       SHASH_DESC_ON_STACK(desc, tctx->hash);
+       struct crypto_authenc_keys keys;
+       u8 salt[HASH_MAX_DIGESTSIZE];
+       int err;
+
+       crypto_aead_clear_flags(tctx->u.aead, CRYPTO_TFM_REQ_MASK);
+       crypto_aead_set_flags(tctx->u.aead, crypto_aead_get_flags(tfm) &
+                                           CRYPTO_TFM_REQ_MASK);
+       err = crypto_aead_setkey(tctx->u.aead, key, keylen);
+       crypto_aead_set_flags(tfm, crypto_aead_get_flags(tctx->u.aead) &
+                                  CRYPTO_TFM_RES_MASK);
+       if (err)
+               return err;
+
+       if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) {
+               crypto_aead_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
+               return -EINVAL;
+       }
+
+       desc->tfm = tctx->hash;
+       err = crypto_shash_init(desc) ?:
+             crypto_shash_update(desc, keys.enckey, keys.enckeylen) ?:
+             crypto_shash_finup(desc, keys.authkey, keys.authkeylen, salt);
+       if (err)
+               return err;
+
+       crypto_cipher_clear_flags(tctx->essiv_cipher, CRYPTO_TFM_REQ_MASK);
+       crypto_cipher_set_flags(tctx->essiv_cipher, crypto_aead_get_flags(tfm) &
+                                                   CRYPTO_TFM_REQ_MASK);
+       err = crypto_cipher_setkey(tctx->essiv_cipher, salt,
+                                  crypto_shash_digestsize(tctx->hash));
+       crypto_aead_set_flags(tfm, crypto_cipher_get_flags(tctx->essiv_cipher) &
+                                  CRYPTO_TFM_RES_MASK);
+
+       return err;
+}
+
+static int essiv_aead_setauthsize(struct crypto_aead *tfm,
+                                 unsigned int authsize)
+{
+       struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+
+       return crypto_aead_setauthsize(tctx->u.aead, authsize);
+}
+
+static void essiv_skcipher_done(struct crypto_async_request *areq, int err)
+{
+       struct skcipher_request *req = areq->data;
+
+       skcipher_request_complete(req, err);
+}
+
+static int essiv_skcipher_crypt(struct skcipher_request *req, bool enc)
+{
+       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+       const struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+       struct skcipher_request *subreq = skcipher_request_ctx(req);
+
+       crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv);
+
+       skcipher_request_set_tfm(subreq, tctx->u.skcipher);
+       skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
+                                  req->iv);
+       skcipher_request_set_callback(subreq, skcipher_request_flags(req),
+                                     essiv_skcipher_done, req);
+
+       return enc ? crypto_skcipher_encrypt(subreq) :
+                    crypto_skcipher_decrypt(subreq);
+}
+
+static int essiv_skcipher_encrypt(struct skcipher_request *req)
+{
+       return essiv_skcipher_crypt(req, true);
+}
+
+static int essiv_skcipher_decrypt(struct skcipher_request *req)
+{
+       return essiv_skcipher_crypt(req, false);
+}
+
+static void essiv_aead_done(struct crypto_async_request *areq, int err)
+{
+       struct aead_request *req = areq->data;
+       struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+
+       if (rctx->assoc)
+               kfree(rctx->assoc);
+       aead_request_complete(req, err);
+}
+
+static int essiv_aead_crypt(struct aead_request *req, bool enc)
+{
+       struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+       const struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+       struct essiv_aead_request_ctx *rctx = aead_request_ctx(req);
+       struct aead_request *subreq = &rctx->aead_req;
+       struct scatterlist *src = req->src;
+       int err;
+
+       crypto_cipher_encrypt_one(tctx->essiv_cipher, req->iv, req->iv);
+
+       /*
+        * dm-crypt embeds the sector number and the IV in the AAD region, so
+        * we have to copy the converted IV into the right scatterlist before
+        * we pass it on.
+        */
+       rctx->assoc = NULL;
+       if (req->src == req->dst || !enc) {
+               scatterwalk_map_and_copy(req->iv, req->dst,
+                                        req->assoclen - crypto_aead_ivsize(tfm),
+                                        crypto_aead_ivsize(tfm), 1);
+       } else {
+               u8 *iv = (u8 *)aead_request_ctx(req) + tctx->ivoffset;
+               int ivsize = crypto_aead_ivsize(tfm);
+               int ssize = req->assoclen - ivsize;
+               struct scatterlist *sg;
+               int nents;
+
+               if (ssize < 0)
+                       return -EINVAL;
+
+               nents = sg_nents_for_len(req->src, ssize);
+               if (nents < 0)
+                       return -EINVAL;
+
+               memcpy(iv, req->iv, ivsize);
+               sg_init_table(rctx->sg, 4);
+
+               if (unlikely(nents > 1)) {
+                       /*
+                        * This is a case that rarely occurs in practice, but
+                        * for correctness, we have to deal with it nonetheless.
+                        */
+                       rctx->assoc = kmalloc(ssize, GFP_ATOMIC);
+                       if (!rctx->assoc)
+                               return -ENOMEM;
+
+                       scatterwalk_map_and_copy(rctx->assoc, req->src, 0,
+                                                ssize, 0);
+                       sg_set_buf(rctx->sg, rctx->assoc, ssize);
+               } else {
+                       sg_set_page(rctx->sg, sg_page(req->src), ssize,
+                                   req->src->offset);
+               }
+
+               sg_set_buf(rctx->sg + 1, iv, ivsize);
+               sg = scatterwalk_ffwd(rctx->sg + 2, req->src, req->assoclen);
+               if (sg != rctx->sg + 2)
+                       sg_chain(rctx->sg, 3, sg);
+
+               src = rctx->sg;
+       }
+
+       aead_request_set_tfm(subreq, tctx->u.aead);
+       aead_request_set_ad(subreq, req->assoclen);
+       aead_request_set_callback(subreq, aead_request_flags(req),
+                                 essiv_aead_done, req);
+       aead_request_set_crypt(subreq, src, req->dst, req->cryptlen, req->iv);
+
+       err = enc ? crypto_aead_encrypt(subreq) :
+                   crypto_aead_decrypt(subreq);
+
+       if (rctx->assoc && err != -EINPROGRESS)
+               kfree(rctx->assoc);
+       return err;
+}
+
+static int essiv_aead_encrypt(struct aead_request *req)
+{
+       return essiv_aead_crypt(req, true);
+}
+
+static int essiv_aead_decrypt(struct aead_request *req)
+{
+       return essiv_aead_crypt(req, false);
+}
+
+static int essiv_init_tfm(struct essiv_instance_ctx *ictx,
+                         struct essiv_tfm_ctx *tctx)
+{
+       struct crypto_cipher *essiv_cipher;
+       struct crypto_shash *hash;
+       int err;
+
+       essiv_cipher = crypto_alloc_cipher(ictx->essiv_cipher_name, 0, 0);
+       if (IS_ERR(essiv_cipher))
+               return PTR_ERR(essiv_cipher);
+
+       hash = crypto_alloc_shash(ictx->shash_driver_name, 0, 0);
+       if (IS_ERR(hash)) {
+               err = PTR_ERR(hash);
+               goto err_free_essiv_cipher;
+       }
+
+       tctx->essiv_cipher = essiv_cipher;
+       tctx->hash = hash;
+
+       return 0;
+
+err_free_essiv_cipher:
+       crypto_free_cipher(essiv_cipher);
+       return err;
+}
+
+static int essiv_skcipher_init_tfm(struct crypto_skcipher *tfm)
+{
+       struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+       struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+       struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+       struct crypto_skcipher *skcipher;
+       int err;
+
+       skcipher = crypto_spawn_skcipher(&ictx->u.skcipher_spawn);
+       if (IS_ERR(skcipher))
+               return PTR_ERR(skcipher);
+
+       crypto_skcipher_set_reqsize(tfm, sizeof(struct skcipher_request) +
+                                        crypto_skcipher_reqsize(skcipher));
+
+       err = essiv_init_tfm(ictx, tctx);
+       if (err) {
+               crypto_free_skcipher(skcipher);
+               return err;
+       }
+
+       tctx->u.skcipher = skcipher;
+       return 0;
+}
+
+static int essiv_aead_init_tfm(struct crypto_aead *tfm)
+{
+       struct aead_instance *inst = aead_alg_instance(tfm);
+       struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
+       struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+       struct crypto_aead *aead;
+       unsigned int subreq_size;
+       int err;
+
+       BUILD_BUG_ON(offsetofend(struct essiv_aead_request_ctx, aead_req) !=
+                    sizeof(struct essiv_aead_request_ctx));
+
+       aead = crypto_spawn_aead(&ictx->u.aead_spawn);
+       if (IS_ERR(aead))
+               return PTR_ERR(aead);
+
+       subreq_size = FIELD_SIZEOF(struct essiv_aead_request_ctx, aead_req) +
+                     crypto_aead_reqsize(aead);
+
+       tctx->ivoffset = offsetof(struct essiv_aead_request_ctx, aead_req) +
+                        subreq_size;
+       crypto_aead_set_reqsize(tfm, tctx->ivoffset + crypto_aead_ivsize(aead));
+
+       err = essiv_init_tfm(ictx, tctx);
+       if (err) {
+               crypto_free_aead(aead);
+               return err;
+       }
+
+       tctx->u.aead = aead;
+       return 0;
+}
+
+static void essiv_skcipher_exit_tfm(struct crypto_skcipher *tfm)
+{
+       struct essiv_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+       crypto_free_skcipher(tctx->u.skcipher);
+       crypto_free_cipher(tctx->essiv_cipher);
+       crypto_free_shash(tctx->hash);
+}
+
+static void essiv_aead_exit_tfm(struct crypto_aead *tfm)
+{
+       struct essiv_tfm_ctx *tctx = crypto_aead_ctx(tfm);
+
+       crypto_free_aead(tctx->u.aead);
+       crypto_free_cipher(tctx->essiv_cipher);
+       crypto_free_shash(tctx->hash);
+}
+
+static void essiv_skcipher_free_instance(struct skcipher_instance *inst)
+{
+       struct essiv_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+       crypto_drop_skcipher(&ictx->u.skcipher_spawn);
+       kfree(inst);
+}
+
+static void essiv_aead_free_instance(struct aead_instance *inst)
+{
+       struct essiv_instance_ctx *ictx = aead_instance_ctx(inst);
+
+       crypto_drop_aead(&ictx->u.aead_spawn);
+       kfree(inst);
+}
+
+static bool parse_cipher_name(char *essiv_cipher_name, const char *cra_name)
+{
+       const char *p, *q;
+       int len;
+
+       /* find the last opening parens */
+       p = strrchr(cra_name, '(');
+       if (!p++)
+               return false;
+
+       /* find the first closing parens in the tail of the string */
+       q = strchr(p, ')');
+       if (!q)
+               return false;
+
+       len = q - p;
+       if (len >= CRYPTO_MAX_ALG_NAME)
+               return false;
+
+       memcpy(essiv_cipher_name, p, len);
+       essiv_cipher_name[len] = '\0';
+       return true;
+}
+
+static bool essiv_supported_algorithms(const char *essiv_cipher_name,
+                                      struct shash_alg *hash_alg,
+                                      int ivsize)
+{
+       struct crypto_alg *alg;
+       bool ret = false;
+
+       alg = crypto_alg_mod_lookup(essiv_cipher_name,
+                                   CRYPTO_ALG_TYPE_CIPHER,
+                                   CRYPTO_ALG_TYPE_MASK);
+       if (IS_ERR(alg))
+               return false;
+
+       if (hash_alg->digestsize < alg->cra_cipher.cia_min_keysize ||
+           hash_alg->digestsize > alg->cra_cipher.cia_max_keysize)
+               goto out;
+
+       if (ivsize != alg->cra_blocksize)
+               goto out;
+
+       if (crypto_shash_alg_has_setkey(hash_alg))
+               goto out;
+
+       ret = true;
+
+out:
+       crypto_mod_put(alg);
+       return ret;
+}
+
+static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+       struct crypto_attr_type *algt;
+       const char *inner_cipher_name;
+       const char *shash_name;
+       struct skcipher_instance *skcipher_inst = NULL;
+       struct aead_instance *aead_inst = NULL;
+       struct crypto_instance *inst;
+       struct crypto_alg *base, *block_base;
+       struct essiv_instance_ctx *ictx;
+       struct skcipher_alg *skcipher_alg = NULL;
+       struct aead_alg *aead_alg = NULL;
+       struct crypto_alg *_hash_alg;
+       struct shash_alg *hash_alg;
+       int ivsize;
+       u32 type;
+       int err;
+
+       algt = crypto_get_attr_type(tb);
+       if (IS_ERR(algt))
+               return PTR_ERR(algt);
+
+       inner_cipher_name = crypto_attr_alg_name(tb[1]);
+       if (IS_ERR(inner_cipher_name))
+               return PTR_ERR(inner_cipher_name);
+
+       shash_name = crypto_attr_alg_name(tb[2]);
+       if (IS_ERR(shash_name))
+               return PTR_ERR(shash_name);
+
+       type = algt->type & algt->mask;
+
+       switch (type) {
+       case CRYPTO_ALG_TYPE_BLKCIPHER:
+               skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
+                                       sizeof(*ictx), GFP_KERNEL);
+               if (!skcipher_inst)
+                       return -ENOMEM;
+               inst = skcipher_crypto_instance(skcipher_inst);
+               base = &skcipher_inst->alg.base;
+               ictx = crypto_instance_ctx(inst);
+
+               /* Symmetric cipher, e.g., "cbc(aes)" */
+               crypto_set_skcipher_spawn(&ictx->u.skcipher_spawn, inst);
+               err = crypto_grab_skcipher(&ictx->u.skcipher_spawn,
+                                          inner_cipher_name, 0,
+                                          crypto_requires_sync(algt->type,
+                                                               algt->mask));
+               if (err)
+                       goto out_free_inst;
+               skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn);
+               block_base = &skcipher_alg->base;
+               ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
+               break;
+
+       case CRYPTO_ALG_TYPE_AEAD:
+               aead_inst = kzalloc(sizeof(*aead_inst) +
+                                   sizeof(*ictx), GFP_KERNEL);
+               if (!aead_inst)
+                       return -ENOMEM;
+               inst = aead_crypto_instance(aead_inst);
+               base = &aead_inst->alg.base;
+               ictx = crypto_instance_ctx(inst);
+
+               /* AEAD cipher, e.g., "authenc(hmac(sha256),cbc(aes))" */
+               crypto_set_aead_spawn(&ictx->u.aead_spawn, inst);
+               err = crypto_grab_aead(&ictx->u.aead_spawn,
+                                      inner_cipher_name, 0,
+                                      crypto_requires_sync(algt->type,
+                                                           algt->mask));
+               if (err)
+                       goto out_free_inst;
+               aead_alg = crypto_spawn_aead_alg(&ictx->u.aead_spawn);
+               block_base = &aead_alg->base;
+               if (!strstarts(block_base->cra_name, "authenc(")) {
+                       pr_warn("Only authenc() type AEADs are supported by ESSIV\n");
+                       err = -EINVAL;
+                       goto out_drop_skcipher;
+               }
+               ivsize = aead_alg->ivsize;
+               break;
+
+       default:
+               return -EINVAL;
+       }
+
+       if (!parse_cipher_name(ictx->essiv_cipher_name, block_base->cra_name)) {
+               pr_warn("Failed to parse ESSIV cipher name from skcipher cra_name\n");
+               err = -EINVAL;
+               goto out_drop_skcipher;
+       }
+
+       /* Synchronous hash, e.g., "sha256" */
+       _hash_alg = crypto_alg_mod_lookup(shash_name,
+                                         CRYPTO_ALG_TYPE_SHASH,
+                                         CRYPTO_ALG_TYPE_MASK);
+       if (IS_ERR(_hash_alg)) {
+               err = PTR_ERR(_hash_alg);
+               goto out_drop_skcipher;
+       }
+       hash_alg = __crypto_shash_alg(_hash_alg);
+
+       /* Check the set of algorithms */
+       if (!essiv_supported_algorithms(ictx->essiv_cipher_name, hash_alg,
+                                       ivsize)) {
+               pr_warn("Unsupported essiv instantiation: essiv(%s,%s)\n",
+                       block_base->cra_name, hash_alg->base.cra_name);
+               err = -EINVAL;
+               goto out_free_hash;
+       }
+
+       /* record the driver name so we can instantiate this exact algo later */
+       strlcpy(ictx->shash_driver_name, hash_alg->base.cra_driver_name,
+               CRYPTO_MAX_ALG_NAME);
+
+       /* Instance fields */
+
+       err = -ENAMETOOLONG;
+       if (snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME,
+                    "essiv(%s,%s)", block_base->cra_name,
+                    hash_alg->base.cra_name) >= CRYPTO_MAX_ALG_NAME)
+               goto out_free_hash;
+       if (snprintf(base->cra_driver_name, CRYPTO_MAX_ALG_NAME,
+                    "essiv(%s,%s)", block_base->cra_driver_name,
+                    hash_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+               goto out_free_hash;
+
+       base->cra_flags         = block_base->cra_flags & CRYPTO_ALG_ASYNC;
+       base->cra_blocksize     = block_base->cra_blocksize;
+       base->cra_ctxsize       = sizeof(struct essiv_tfm_ctx);
+       base->cra_alignmask     = block_base->cra_alignmask;
+       base->cra_priority      = block_base->cra_priority;
+
+       if (type == CRYPTO_ALG_TYPE_BLKCIPHER) {
+               skcipher_inst->alg.setkey       = essiv_skcipher_setkey;
+               skcipher_inst->alg.encrypt      = essiv_skcipher_encrypt;
+               skcipher_inst->alg.decrypt      = essiv_skcipher_decrypt;
+               skcipher_inst->alg.init         = essiv_skcipher_init_tfm;
+               skcipher_inst->alg.exit         = essiv_skcipher_exit_tfm;
+
+               skcipher_inst->alg.min_keysize  = crypto_skcipher_alg_min_keysize(skcipher_alg);
+               skcipher_inst->alg.max_keysize  = crypto_skcipher_alg_max_keysize(skcipher_alg);
+               skcipher_inst->alg.ivsize       = ivsize;
+               skcipher_inst->alg.chunksize    = crypto_skcipher_alg_chunksize(skcipher_alg);
+               skcipher_inst->alg.walksize     = crypto_skcipher_alg_walksize(skcipher_alg);
+
+               skcipher_inst->free             = essiv_skcipher_free_instance;
+
+               err = skcipher_register_instance(tmpl, skcipher_inst);
+       } else {
+               aead_inst->alg.setkey           = essiv_aead_setkey;
+               aead_inst->alg.setauthsize      = essiv_aead_setauthsize;
+               aead_inst->alg.encrypt          = essiv_aead_encrypt;
+               aead_inst->alg.decrypt          = essiv_aead_decrypt;
+               aead_inst->alg.init             = essiv_aead_init_tfm;
+               aead_inst->alg.exit             = essiv_aead_exit_tfm;
+
+               aead_inst->alg.ivsize           = ivsize;
+               aead_inst->alg.maxauthsize      = crypto_aead_alg_maxauthsize(aead_alg);
+               aead_inst->alg.chunksize        = crypto_aead_alg_chunksize(aead_alg);
+
+               aead_inst->free                 = essiv_aead_free_instance;
+
+               err = aead_register_instance(tmpl, aead_inst);
+       }
+
+       if (err)
+               goto out_free_hash;
+
+       crypto_mod_put(_hash_alg);
+       return 0;
+
+out_free_hash:
+       crypto_mod_put(_hash_alg);
+out_drop_skcipher:
+       if (type == CRYPTO_ALG_TYPE_BLKCIPHER)
+               crypto_drop_skcipher(&ictx->u.skcipher_spawn);
+       else
+               crypto_drop_aead(&ictx->u.aead_spawn);
+out_free_inst:
+       kfree(skcipher_inst);
+       kfree(aead_inst);
+       return err;
+}
+
+/* essiv(cipher_name, shash_name) */
+static struct crypto_template essiv_tmpl = {
+       .name   = "essiv",
+       .create = essiv_create,
+       .module = THIS_MODULE,
+};
+
+static int __init essiv_module_init(void)
+{
+       return crypto_register_template(&essiv_tmpl);
+}
+
+static void __exit essiv_module_exit(void)
+{
+       crypto_unregister_template(&essiv_tmpl);
+}
+
+subsys_initcall(essiv_module_init);
+module_exit(essiv_module_exit);
+
+MODULE_DESCRIPTION("ESSIV skcipher/aead wrapper for block encryption");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("essiv");
diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
index 3834332f4963e9a333c826e5fcf734451156c718..aa98953f4462e5d233cd156e56c415130b90a106 100644 (file)
@@ -271,6 +271,7 @@ config DM_CRYPT
        depends on BLK_DEV_DM
        select CRYPTO
        select CRYPTO_CBC
+       select CRYPTO_ESSIV
        ---help---
          This device-mapper target allows you to create a device that
          transparently encrypts the data on it. You'll need to activate
@@ -346,6 +347,20 @@ config DM_ERA
          over time.  Useful for maintaining cache coherency when using
          vendor snapshots.
 
+config DM_CLONE
+       tristate "Clone target (EXPERIMENTAL)"
+       depends on BLK_DEV_DM
+       default n
+       select DM_PERSISTENT_DATA
+       ---help---
+         dm-clone produces a one-to-one copy of an existing, read-only source
+         device into a writable destination device. The cloned device is
+         visible/mountable immediately and the copy of the source device to the
+         destination device happens in the background, in parallel with user
+         I/O.
+
+         If unsure, say N.
+
 config DM_MIRROR
        tristate "Mirror target"
        depends on BLK_DEV_DM
@@ -490,6 +505,18 @@ config DM_VERITY
 
          If unsure, say N.
 
+config DM_VERITY_VERIFY_ROOTHASH_SIG
+       def_bool n
+       bool "Verity data device root hash signature verification support"
+       depends on DM_VERITY
+       select SYSTEM_DATA_VERIFICATION
+       help
+         Add the ability for a dm-verity device to be validated if the
+         pre-generated tree of cryptographic checksums passed in has a PKCS#7
+         signature that can validate the root hash of the tree.
+
+         If unsure, say N.
+
 config DM_VERITY_FEC
        bool "Verity forward error correction support"
        depends on DM_VERITY
diff --git a/drivers/md/Makefile b/drivers/md/Makefile
index be7a6eb92abcb47a4371fe6ed435f466124dda7a..d91a7edcd2abf1ed11602c84c8b15a836483b1a2 100644 (file)
@@ -18,6 +18,7 @@ dm-cache-y    += dm-cache-target.o dm-cache-metadata.o dm-cache-policy.o \
                    dm-cache-background-tracker.o
 dm-cache-smq-y   += dm-cache-policy-smq.o
 dm-era-y       += dm-era-target.o
+dm-clone-y     += dm-clone-target.o dm-clone-metadata.o
 dm-verity-y    += dm-verity-target.o
 md-mod-y       += md.o md-bitmap.o
 raid456-y      += raid5.o raid5-cache.o raid5-ppl.o
@@ -65,6 +66,7 @@ obj-$(CONFIG_DM_VERITY)               += dm-verity.o
 obj-$(CONFIG_DM_CACHE)         += dm-cache.o
 obj-$(CONFIG_DM_CACHE_SMQ)     += dm-cache-smq.o
 obj-$(CONFIG_DM_ERA)           += dm-era.o
+obj-$(CONFIG_DM_CLONE)         += dm-clone.o
 obj-$(CONFIG_DM_LOG_WRITES)    += dm-log-writes.o
 obj-$(CONFIG_DM_INTEGRITY)     += dm-integrity.o
 obj-$(CONFIG_DM_ZONED)         += dm-zoned.o
@@ -81,3 +83,7 @@ endif
 ifeq ($(CONFIG_DM_VERITY_FEC),y)
 dm-verity-objs                 += dm-verity-fec.o
 endif
+
+ifeq ($(CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG),y)
+dm-verity-objs                 += dm-verity-verify-sig.o
+endif
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 2a48ea3f1b30d4adfc6581dff3d1cfe1a088b86a..2d519c2235626e4237a074e11d6e03d942914602 100644 (file)
@@ -33,7 +33,8 @@
 
 #define DM_BUFIO_MEMORY_PERCENT                2
 #define DM_BUFIO_VMALLOC_PERCENT       25
-#define DM_BUFIO_WRITEBACK_PERCENT     75
+#define DM_BUFIO_WRITEBACK_RATIO       3
+#define DM_BUFIO_LOW_WATERMARK_RATIO   16
 
 /*
  * Check buffer ages in this interval (seconds)
@@ -132,12 +133,14 @@ enum data_mode {
 struct dm_buffer {
        struct rb_node node;
        struct list_head lru_list;
+       struct list_head global_list;
        sector_t block;
        void *data;
        unsigned char data_mode;                /* DATA_MODE_* */
        unsigned char list_mode;                /* LIST_* */
        blk_status_t read_error;
        blk_status_t write_error;
+       unsigned accessed;
        unsigned hold_count;
        unsigned long state;
        unsigned long last_accessed;
@@ -192,7 +195,11 @@ static unsigned long dm_bufio_cache_size;
  */
 static unsigned long dm_bufio_cache_size_latch;
 
-static DEFINE_SPINLOCK(param_spinlock);
+static DEFINE_SPINLOCK(global_spinlock);
+
+static LIST_HEAD(global_queue);
+
+static unsigned long global_num = 0;
 
 /*
  * Buffers are freed after this timeout
@@ -208,11 +215,6 @@ static unsigned long dm_bufio_current_allocated;
 
 /*----------------------------------------------------------------*/
 
-/*
- * Per-client cache: dm_bufio_cache_size / dm_bufio_client_count
- */
-static unsigned long dm_bufio_cache_size_per_client;
-
 /*
  * The current number of clients.
  */
@@ -224,11 +226,15 @@ static int dm_bufio_client_count;
 static LIST_HEAD(dm_bufio_all_clients);
 
 /*
- * This mutex protects dm_bufio_cache_size_latch,
- * dm_bufio_cache_size_per_client and dm_bufio_client_count
+ * This mutex protects dm_bufio_cache_size_latch and dm_bufio_client_count
  */
 static DEFINE_MUTEX(dm_bufio_clients_lock);
 
+static struct workqueue_struct *dm_bufio_wq;
+static struct delayed_work dm_bufio_cleanup_old_work;
+static struct work_struct dm_bufio_replacement_work;
+
+
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 static void buffer_record_stack(struct dm_buffer *b)
 {
@@ -285,15 +291,23 @@ static void __remove(struct dm_bufio_client *c, struct dm_buffer *b)
 
 /*----------------------------------------------------------------*/
 
-static void adjust_total_allocated(unsigned char data_mode, long diff)
+static void adjust_total_allocated(struct dm_buffer *b, bool unlink)
 {
+       unsigned char data_mode;
+       long diff;
+
        static unsigned long * const class_ptr[DATA_MODE_LIMIT] = {
                &dm_bufio_allocated_kmem_cache,
                &dm_bufio_allocated_get_free_pages,
                &dm_bufio_allocated_vmalloc,
        };
 
-       spin_lock(&param_spinlock);
+       data_mode = b->data_mode;
+       diff = (long)b->c->block_size;
+       if (unlink)
+               diff = -diff;
+
+       spin_lock(&global_spinlock);
 
        *class_ptr[data_mode] += diff;
 
@@ -302,7 +316,19 @@ static void adjust_total_allocated(unsigned char data_mode, long diff)
        if (dm_bufio_current_allocated > dm_bufio_peak_allocated)
                dm_bufio_peak_allocated = dm_bufio_current_allocated;
 
-       spin_unlock(&param_spinlock);
+       b->accessed = 1;
+
+       if (!unlink) {
+               list_add(&b->global_list, &global_queue);
+               global_num++;
+               if (dm_bufio_current_allocated > dm_bufio_cache_size)
+                       queue_work(dm_bufio_wq, &dm_bufio_replacement_work);
+       } else {
+               list_del(&b->global_list);
+               global_num--;
+       }
+
+       spin_unlock(&global_spinlock);
 }
 
 /*
@@ -323,9 +349,6 @@ static void __cache_size_refresh(void)
                              dm_bufio_default_cache_size);
                dm_bufio_cache_size_latch = dm_bufio_default_cache_size;
        }
-
-       dm_bufio_cache_size_per_client = dm_bufio_cache_size_latch /
-                                        (dm_bufio_client_count ? : 1);
 }
 
 /*
@@ -431,8 +454,6 @@ static struct dm_buffer *alloc_buffer(struct dm_bufio_client *c, gfp_t gfp_mask)
                return NULL;
        }
 
-       adjust_total_allocated(b->data_mode, (long)c->block_size);
-
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
        b->stack_len = 0;
 #endif
@@ -446,8 +467,6 @@ static void free_buffer(struct dm_buffer *b)
 {
        struct dm_bufio_client *c = b->c;
 
-       adjust_total_allocated(b->data_mode, -(long)c->block_size);
-
        free_buffer_data(c, b->data, b->data_mode);
        kmem_cache_free(c->slab_buffer, b);
 }
@@ -465,6 +484,8 @@ static void __link_buffer(struct dm_buffer *b, sector_t block, int dirty)
        list_add(&b->lru_list, &c->lru[dirty]);
        __insert(b->c, b);
        b->last_accessed = jiffies;
+
+       adjust_total_allocated(b, false);
 }
 
 /*
@@ -479,6 +500,8 @@ static void __unlink_buffer(struct dm_buffer *b)
        c->n_buffers[b->list_mode]--;
        __remove(b->c, b);
        list_del(&b->lru_list);
+
+       adjust_total_allocated(b, true);
 }
 
 /*
@@ -488,6 +511,8 @@ static void __relink_lru(struct dm_buffer *b, int dirty)
 {
        struct dm_bufio_client *c = b->c;
 
+       b->accessed = 1;
+
        BUG_ON(!c->n_buffers[b->list_mode]);
 
        c->n_buffers[b->list_mode]--;
@@ -906,36 +931,6 @@ static void __write_dirty_buffers_async(struct dm_bufio_client *c, int no_wait,
        }
 }
 
-/*
- * Get writeback threshold and buffer limit for a given client.
- */
-static void __get_memory_limit(struct dm_bufio_client *c,
-                              unsigned long *threshold_buffers,
-                              unsigned long *limit_buffers)
-{
-       unsigned long buffers;
-
-       if (unlikely(READ_ONCE(dm_bufio_cache_size) != dm_bufio_cache_size_latch)) {
-               if (mutex_trylock(&dm_bufio_clients_lock)) {
-                       __cache_size_refresh();
-                       mutex_unlock(&dm_bufio_clients_lock);
-               }
-       }
-
-       buffers = dm_bufio_cache_size_per_client;
-       if (likely(c->sectors_per_block_bits >= 0))
-               buffers >>= c->sectors_per_block_bits + SECTOR_SHIFT;
-       else
-               buffers /= c->block_size;
-
-       if (buffers < c->minimum_buffers)
-               buffers = c->minimum_buffers;
-
-       *limit_buffers = buffers;
-       *threshold_buffers = mult_frac(buffers,
-                                      DM_BUFIO_WRITEBACK_PERCENT, 100);
-}
-
 /*
  * Check if we're over watermark.
  * If we are over threshold_buffers, start freeing buffers.
@@ -944,23 +939,7 @@ static void __get_memory_limit(struct dm_bufio_client *c,
 static void __check_watermark(struct dm_bufio_client *c,
                              struct list_head *write_list)
 {
-       unsigned long threshold_buffers, limit_buffers;
-
-       __get_memory_limit(c, &threshold_buffers, &limit_buffers);
-
-       while (c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY] >
-              limit_buffers) {
-
-               struct dm_buffer *b = __get_unclaimed_buffer(c);
-
-               if (!b)
-                       return;
-
-               __free_buffer_wake(b);
-               cond_resched();
-       }
-
-       if (c->n_buffers[LIST_DIRTY] > threshold_buffers)
+       if (c->n_buffers[LIST_DIRTY] > c->n_buffers[LIST_CLEAN] * DM_BUFIO_WRITEBACK_RATIO)
                __write_dirty_buffers_async(c, 1, write_list);
 }
 
@@ -1841,6 +1820,74 @@ static void __evict_old_buffers(struct dm_bufio_client *c, unsigned long age_hz)
        dm_bufio_unlock(c);
 }
 
+static void do_global_cleanup(struct work_struct *w)
+{
+       struct dm_bufio_client *locked_client = NULL;
+       struct dm_bufio_client *current_client;
+       struct dm_buffer *b;
+       unsigned spinlock_hold_count;
+       unsigned long threshold = dm_bufio_cache_size -
+               dm_bufio_cache_size / DM_BUFIO_LOW_WATERMARK_RATIO;
+       unsigned long loops = global_num * 2;
+
+       mutex_lock(&dm_bufio_clients_lock);
+
+       while (1) {
+               cond_resched();
+
+               spin_lock(&global_spinlock);
+               if (unlikely(dm_bufio_current_allocated <= threshold))
+                       break;
+
+               spinlock_hold_count = 0;
+get_next:
+               if (!loops--)
+                       break;
+               if (unlikely(list_empty(&global_queue)))
+                       break;
+               b = list_entry(global_queue.prev, struct dm_buffer, global_list);
+
+               if (b->accessed) {
+                       b->accessed = 0;
+                       list_move(&b->global_list, &global_queue);
+                       if (likely(++spinlock_hold_count < 16))
+                               goto get_next;
+                       spin_unlock(&global_spinlock);
+                       continue;
+               }
+
+               current_client = b->c;
+               if (unlikely(current_client != locked_client)) {
+                       if (locked_client)
+                               dm_bufio_unlock(locked_client);
+
+                       if (!dm_bufio_trylock(current_client)) {
+                               spin_unlock(&global_spinlock);
+                               dm_bufio_lock(current_client);
+                               locked_client = current_client;
+                               continue;
+                       }
+
+                       locked_client = current_client;
+               }
+
+               spin_unlock(&global_spinlock);
+
+               if (unlikely(!__try_evict_buffer(b, GFP_KERNEL))) {
+                       spin_lock(&global_spinlock);
+                       list_move(&b->global_list, &global_queue);
+                       spin_unlock(&global_spinlock);
+               }
+       }
+
+       spin_unlock(&global_spinlock);
+
+       if (locked_client)
+               dm_bufio_unlock(locked_client);
+
+       mutex_unlock(&dm_bufio_clients_lock);
+}
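
do_global_cleanup() scans the global queue from its tail: a buffer whose accessed bit is
set gets a second chance (the bit is cleared and the buffer moves back to the head, with
the spinlock dropped after every 16 such moves), anything else is evicted via
__try_evict_buffer(), and the scan stops once dm_bufio_current_allocated falls below the
low watermark or the loop budget is exhausted. Below is a stand-alone, user-space model of
that second-chance scan; the list, locking and allocation details are deliberately
simplified and do not mirror the kernel structures.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified buffer: an id, an accessed bit, and a singly linked LRU list. */
struct buf {
        int id;
        bool accessed;
        struct buf *next;               /* toward older buffers */
};

/* Evict from the tail until only "target" buffers remain. */
static struct buf *evict_until(struct buf *head, unsigned int *nr, unsigned int target)
{
        while (*nr > target && head) {
                /* Walk to the oldest buffer, remembering the link to it. */
                struct buf **link = &head;

                while ((*link)->next)
                        link = &(*link)->next;

                struct buf *victim = *link;

                if (victim->accessed) {
                        /* Second chance: clear the bit, move it to the head. */
                        victim->accessed = false;
                        *link = NULL;
                        victim->next = head;
                        head = victim;
                        continue;
                }

                /* Not recently used: evict. */
                *link = NULL;
                free(victim);
                (*nr)--;
        }
        return head;
}

int main(void)
{
        struct buf *head = NULL;
        unsigned int nr = 0;
        int i;

        for (i = 0; i < 5; i++) {
                struct buf *b = malloc(sizeof(*b));

                if (!b)
                        break;
                b->id = i;
                b->accessed = (i % 2 == 0);     /* pretend even ids were just used */
                b->next = head;                 /* newest buffers sit at the head */
                head = b;
                nr++;
        }

        head = evict_until(head, &nr, 2);       /* keeps buffers 4 and 2 */

        while (head) {
                struct buf *b = head;

                head = head->next;
                printf("kept buffer %d\n", b->id);
                free(b);
        }
        return 0;
}
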
+
 static void cleanup_old_buffers(void)
 {
        unsigned long max_age_hz = get_max_age_hz();
@@ -1856,14 +1903,11 @@ static void cleanup_old_buffers(void)
        mutex_unlock(&dm_bufio_clients_lock);
 }
 
-static struct workqueue_struct *dm_bufio_wq;
-static struct delayed_work dm_bufio_work;
-
 static void work_fn(struct work_struct *w)
 {
        cleanup_old_buffers();
 
-       queue_delayed_work(dm_bufio_wq, &dm_bufio_work,
+       queue_delayed_work(dm_bufio_wq, &dm_bufio_cleanup_old_work,
                           DM_BUFIO_WORK_TIMER_SECS * HZ);
 }
 
@@ -1905,8 +1949,9 @@ static int __init dm_bufio_init(void)
        if (!dm_bufio_wq)
                return -ENOMEM;
 
-       INIT_DELAYED_WORK(&dm_bufio_work, work_fn);
-       queue_delayed_work(dm_bufio_wq, &dm_bufio_work,
+       INIT_DELAYED_WORK(&dm_bufio_cleanup_old_work, work_fn);
+       INIT_WORK(&dm_bufio_replacement_work, do_global_cleanup);
+       queue_delayed_work(dm_bufio_wq, &dm_bufio_cleanup_old_work,
                           DM_BUFIO_WORK_TIMER_SECS * HZ);
 
        return 0;
@@ -1919,7 +1964,8 @@ static void __exit dm_bufio_exit(void)
 {
        int bug = 0;
 
-       cancel_delayed_work_sync(&dm_bufio_work);
+       cancel_delayed_work_sync(&dm_bufio_cleanup_old_work);
+       flush_workqueue(dm_bufio_wq);
        destroy_workqueue(dm_bufio_wq);
 
        if (dm_bufio_client_count) {
diff --git a/drivers/md/dm-clone-metadata.c b/drivers/md/dm-clone-metadata.c
new file mode 100644 (file)
index 0000000..6bc8c1d
--- /dev/null
@@ -0,0 +1,964 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
+ */
+
+#include <linux/mm.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+#include <linux/rwsem.h>
+#include <linux/bitops.h>
+#include <linux/bitmap.h>
+#include <linux/device-mapper.h>
+
+#include "persistent-data/dm-bitset.h"
+#include "persistent-data/dm-space-map.h"
+#include "persistent-data/dm-block-manager.h"
+#include "persistent-data/dm-transaction-manager.h"
+
+#include "dm-clone-metadata.h"
+
+#define DM_MSG_PREFIX "clone metadata"
+
+#define SUPERBLOCK_LOCATION 0
+#define SUPERBLOCK_MAGIC 0x8af27f64
+#define SUPERBLOCK_CSUM_XOR 257649492
+
+#define DM_CLONE_MAX_CONCURRENT_LOCKS 5
+
+#define UUID_LEN 16
+
+/* Min and max dm-clone metadata versions supported */
+#define DM_CLONE_MIN_METADATA_VERSION 1
+#define DM_CLONE_MAX_METADATA_VERSION 1
+
+/*
+ * On-disk metadata layout
+ */
+struct superblock_disk {
+       __le32 csum;
+       __le32 flags;
+       __le64 blocknr;
+
+       __u8 uuid[UUID_LEN];
+       __le64 magic;
+       __le32 version;
+
+       __u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
+
+       __le64 region_size;
+       __le64 target_size;
+
+       __le64 bitset_root;
+} __packed;
+
+/*
+ * Region and Dirty bitmaps.
+ *
+ * dm-clone logically splits the source and destination devices in regions of
+ * fixed size. The destination device's regions are gradually hydrated, i.e.,
+ * we copy (clone) the source's regions to the destination device. Eventually,
+ * all regions will get hydrated and all I/O will be served from the
+ * destination device.
+ *
+ * We maintain an on-disk bitmap which tracks the state of each of the
+ * destination device's regions, i.e., whether they are hydrated or not.
+ *
+ * To avoid constantly doing lookups on disk we keep an in-core copy of the
+ * on-disk bitmap, the region_map.
+ *
+ * To further reduce metadata I/O overhead we use a second bitmap, the dmap
+ * (dirty bitmap), which tracks the dirty words, i.e. longs, of the region_map.
+ *
+ * When a region finishes hydrating dm-clone calls
+ * dm_clone_set_region_hydrated(), or for discard requests
+ * dm_clone_cond_set_range(), which sets the corresponding bits in region_map
+ * and dmap.
+ *
+ * During a metadata commit we scan the dmap for dirty region_map words (longs)
+ * and update accordingly the on-disk metadata. Thus, we don't have to flush to
+ * disk the whole region_map. We can just flush the dirty region_map words.
+ *
+ * We use a dirty bitmap, which is smaller than the original region_map, to
+ * reduce the amount of memory accesses during a metadata commit. As dm-bitset
+ * accesses the on-disk bitmap in 64-bit word granularity, there is no
+ * significant benefit in tracking the dirty region_map bits with a smaller
+ * granularity.
+ *
+ * We could update the on-disk bitmap directly when dm-clone calls either
+ * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), but this
+ * inserts significant metadata I/O overhead in dm-clone's I/O path. Also, as
+ * these two functions don't block, we can call them in interrupt context,
+ * e.g., in a hooked overwrite bio's completion routine, and further reduce the
+ * I/O completion latency.
+ *
+ * We maintain two dirty bitmaps. During a metadata commit we atomically swap
+ * the currently used dmap with the unused one. This allows the metadata update
+ * functions to run concurrently with an ongoing commit.
+ */
+struct dirty_map {
+       unsigned long *dirty_words;
+       unsigned int changed;
+};
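
A stand-alone toy model of the scheme described above, using nothing beyond standard C:
setting a region bit also marks the region_map word containing it as dirty, so a commit
only has to rewrite the dirty words instead of the whole bitmap. The duplicated dmap[2] in
struct dm_clone_metadata below is what allows one such dirty map to be flushed while the
other keeps collecting updates.

#include <limits.h>
#include <stdio.h>

#define NR_REGIONS      200
#define WORD_BITS       (sizeof(unsigned long) * CHAR_BIT)
#define NR_WORDS        ((NR_REGIONS + WORD_BITS - 1) / WORD_BITS)

static unsigned long region_map[NR_WORDS];      /* in-core copy of the on-disk bitmap */
static unsigned long dirty_words;               /* one bit per region_map word */

static void set_region_hydrated(unsigned long region_nr)
{
        unsigned long word = region_nr / WORD_BITS;

        region_map[word] |= 1UL << (region_nr % WORD_BITS);
        dirty_words      |= 1UL << word;        /* NR_WORDS fits in one long here */
}

int main(void)
{
        unsigned long w;

        set_region_hydrated(3);
        set_region_hydrated(70);

        /* A commit only needs to write back the words flagged as dirty. */
        for (w = 0; w < NR_WORDS; w++)
                if (dirty_words & (1UL << w))
                        printf("word %lu is dirty: 0x%lx\n", w, region_map[w]);
        return 0;
}
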
+
+struct dm_clone_metadata {
+       /* The metadata block device */
+       struct block_device *bdev;
+
+       sector_t target_size;
+       sector_t region_size;
+       unsigned long nr_regions;
+       unsigned long nr_words;
+
+       /* Spinlock protecting the region and dirty bitmaps. */
+       spinlock_t bitmap_lock;
+       struct dirty_map dmap[2];
+       struct dirty_map *current_dmap;
+
+       /*
+        * In-core copy of the on-disk bitmap, kept to avoid constantly doing
+        * lookups on disk.
+        */
+       unsigned long *region_map;
+
+       /* Protected by bitmap_lock */
+       unsigned int read_only;
+
+       struct dm_block_manager *bm;
+       struct dm_space_map *sm;
+       struct dm_transaction_manager *tm;
+
+       struct rw_semaphore lock;
+
+       struct dm_disk_bitset bitset_info;
+       dm_block_t bitset_root;
+
+       /*
+        * Reading the space map root can fail, so we read it into this
+        * buffer before the superblock is locked and updated.
+        */
+       __u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
+
+       bool hydration_done:1;
+       bool fail_io:1;
+};
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Superblock validation.
+ */
+static void sb_prepare_for_write(struct dm_block_validator *v,
+                                struct dm_block *b, size_t sb_block_size)
+{
+       struct superblock_disk *sb;
+       u32 csum;
+
+       sb = dm_block_data(b);
+       sb->blocknr = cpu_to_le64(dm_block_location(b));
+
+       csum = dm_bm_checksum(&sb->flags, sb_block_size - sizeof(__le32),
+                             SUPERBLOCK_CSUM_XOR);
+       sb->csum = cpu_to_le32(csum);
+}
+
+static int sb_check(struct dm_block_validator *v, struct dm_block *b,
+                   size_t sb_block_size)
+{
+       struct superblock_disk *sb;
+       u32 csum, metadata_version;
+
+       sb = dm_block_data(b);
+
+       if (dm_block_location(b) != le64_to_cpu(sb->blocknr)) {
+               DMERR("Superblock check failed: blocknr %llu, expected %llu",
+                     le64_to_cpu(sb->blocknr),
+                     (unsigned long long)dm_block_location(b));
+               return -ENOTBLK;
+       }
+
+       if (le64_to_cpu(sb->magic) != SUPERBLOCK_MAGIC) {
+               DMERR("Superblock check failed: magic %llu, expected %llu",
+                     le64_to_cpu(sb->magic),
+                     (unsigned long long)SUPERBLOCK_MAGIC);
+               return -EILSEQ;
+       }
+
+       csum = dm_bm_checksum(&sb->flags, sb_block_size - sizeof(__le32),
+                             SUPERBLOCK_CSUM_XOR);
+       if (sb->csum != cpu_to_le32(csum)) {
+               DMERR("Superblock check failed: checksum %u, expected %u",
+                     csum, le32_to_cpu(sb->csum));
+               return -EILSEQ;
+       }
+
+       /* Check metadata version */
+       metadata_version = le32_to_cpu(sb->version);
+       if (metadata_version < DM_CLONE_MIN_METADATA_VERSION ||
+           metadata_version > DM_CLONE_MAX_METADATA_VERSION) {
+               DMERR("Clone metadata version %u found, but only versions between %u and %u supported.",
+                     metadata_version, DM_CLONE_MIN_METADATA_VERSION,
+                     DM_CLONE_MAX_METADATA_VERSION);
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+static struct dm_block_validator sb_validator = {
+       .name = "superblock",
+       .prepare_for_write = sb_prepare_for_write,
+       .check = sb_check
+};
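
The two callbacks implement the usual "checksum everything except the checksum field"
pattern: the checksummed range starts at &sb->flags and is one __le32 shorter than the
block. A small user-space sketch of that pattern follows; the fold() routine below is a
stand-in for dm_bm_checksum() and exists purely for illustration.

#include <stdint.h>
#include <stdio.h>

/* Toy block: the first field holds the checksum of everything after it. */
struct blk {
        uint32_t csum;
        uint32_t flags;
        uint64_t payload;
};

/* Illustrative rotate-and-xor fold; not the kernel's CRC-based dm_bm_checksum(). */
static uint32_t fold(const void *data, size_t len, uint32_t seed)
{
        const uint8_t *p = data;
        uint32_t acc = seed;

        while (len--)
                acc = (acc << 1 | acc >> 31) ^ *p++;
        return acc;
}

int main(void)
{
        struct blk b = { .flags = 1, .payload = 42 };

        /* Writer: cover the block starting at ->flags, skipping csum itself. */
        b.csum = fold(&b.flags, sizeof(b) - sizeof(b.csum), 257649492u);

        /* Reader: recompute over the same range and compare. */
        printf("valid=%d\n",
               b.csum == fold(&b.flags, sizeof(b) - sizeof(b.csum), 257649492u));
        return 0;
}
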
+
+/*
+ * Check whether the superblock is formatted. We consider the superblock to be
+ * formatted if we find any non-zero bytes in it.
+ */
+static int __superblock_all_zeroes(struct dm_block_manager *bm, bool *formatted)
+{
+       int r;
+       unsigned int i, nr_words;
+       struct dm_block *sblock;
+       __le64 *data_le, zero = cpu_to_le64(0);
+
+       /*
+        * We don't use a validator here because the superblock could be all
+        * zeroes.
+        */
+       r = dm_bm_read_lock(bm, SUPERBLOCK_LOCATION, NULL, &sblock);
+       if (r) {
+               DMERR("Failed to read_lock superblock");
+               return r;
+       }
+
+       data_le = dm_block_data(sblock);
+       *formatted = false;
+
+       /* This assumes that the block size is a multiple of 8 bytes */
+       BUG_ON(dm_bm_block_size(bm) % sizeof(__le64));
+       nr_words = dm_bm_block_size(bm) / sizeof(__le64);
+       for (i = 0; i < nr_words; i++) {
+               if (data_le[i] != zero) {
+                       *formatted = true;
+                       break;
+               }
+       }
+
+       dm_bm_unlock(sblock);
+
+       return 0;
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Low-level metadata handling.
+ */
+static inline int superblock_read_lock(struct dm_clone_metadata *cmd,
+                                      struct dm_block **sblock)
+{
+       return dm_bm_read_lock(cmd->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
+}
+
+static inline int superblock_write_lock(struct dm_clone_metadata *cmd,
+                                       struct dm_block **sblock)
+{
+       return dm_bm_write_lock(cmd->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
+}
+
+static inline int superblock_write_lock_zero(struct dm_clone_metadata *cmd,
+                                            struct dm_block **sblock)
+{
+       return dm_bm_write_lock_zero(cmd->bm, SUPERBLOCK_LOCATION, &sb_validator, sblock);
+}
+
+static int __copy_sm_root(struct dm_clone_metadata *cmd)
+{
+       int r;
+       size_t root_size;
+
+       r = dm_sm_root_size(cmd->sm, &root_size);
+       if (r)
+               return r;
+
+       return dm_sm_copy_root(cmd->sm, &cmd->metadata_space_map_root, root_size);
+}
+
+/* Save dm-clone metadata in superblock */
+static void __prepare_superblock(struct dm_clone_metadata *cmd,
+                                struct superblock_disk *sb)
+{
+       sb->flags = cpu_to_le32(0UL);
+
+       /* FIXME: UUID is currently unused */
+       memset(sb->uuid, 0, sizeof(sb->uuid));
+
+       sb->magic = cpu_to_le64(SUPERBLOCK_MAGIC);
+       sb->version = cpu_to_le32(DM_CLONE_MAX_METADATA_VERSION);
+
+       /* Save the metadata space_map root */
+       memcpy(&sb->metadata_space_map_root, &cmd->metadata_space_map_root,
+              sizeof(cmd->metadata_space_map_root));
+
+       sb->region_size = cpu_to_le64(cmd->region_size);
+       sb->target_size = cpu_to_le64(cmd->target_size);
+       sb->bitset_root = cpu_to_le64(cmd->bitset_root);
+}
+
+static int __open_metadata(struct dm_clone_metadata *cmd)
+{
+       int r;
+       struct dm_block *sblock;
+       struct superblock_disk *sb;
+
+       r = superblock_read_lock(cmd, &sblock);
+
+       if (r) {
+               DMERR("Failed to read_lock superblock");
+               return r;
+       }
+
+       sb = dm_block_data(sblock);
+
+       /* Verify that target_size and region_size haven't changed. */
+       if (cmd->region_size != le64_to_cpu(sb->region_size) ||
+           cmd->target_size != le64_to_cpu(sb->target_size)) {
+               DMERR("Region and/or target size don't match the ones in metadata");
+               r = -EINVAL;
+               goto out_with_lock;
+       }
+
+       r = dm_tm_open_with_sm(cmd->bm, SUPERBLOCK_LOCATION,
+                              sb->metadata_space_map_root,
+                              sizeof(sb->metadata_space_map_root),
+                              &cmd->tm, &cmd->sm);
+
+       if (r) {
+               DMERR("dm_tm_open_with_sm failed");
+               goto out_with_lock;
+       }
+
+       dm_disk_bitset_init(cmd->tm, &cmd->bitset_info);
+       cmd->bitset_root = le64_to_cpu(sb->bitset_root);
+
+out_with_lock:
+       dm_bm_unlock(sblock);
+
+       return r;
+}
+
+static int __format_metadata(struct dm_clone_metadata *cmd)
+{
+       int r;
+       struct dm_block *sblock;
+       struct superblock_disk *sb;
+
+       r = dm_tm_create_with_sm(cmd->bm, SUPERBLOCK_LOCATION, &cmd->tm, &cmd->sm);
+       if (r) {
+               DMERR("Failed to create transaction manager");
+               return r;
+       }
+
+       dm_disk_bitset_init(cmd->tm, &cmd->bitset_info);
+
+       r = dm_bitset_empty(&cmd->bitset_info, &cmd->bitset_root);
+       if (r) {
+               DMERR("Failed to create empty on-disk bitset");
+               goto err_with_tm;
+       }
+
+       r = dm_bitset_resize(&cmd->bitset_info, cmd->bitset_root, 0,
+                            cmd->nr_regions, false, &cmd->bitset_root);
+       if (r) {
+               DMERR("Failed to resize on-disk bitset to %lu entries", cmd->nr_regions);
+               goto err_with_tm;
+       }
+
+       /* Flush to disk all blocks, except the superblock */
+       r = dm_tm_pre_commit(cmd->tm);
+       if (r) {
+               DMERR("dm_tm_pre_commit failed");
+               goto err_with_tm;
+       }
+
+       r = __copy_sm_root(cmd);
+       if (r) {
+               DMERR("__copy_sm_root failed");
+               goto err_with_tm;
+       }
+
+       r = superblock_write_lock_zero(cmd, &sblock);
+       if (r) {
+               DMERR("Failed to write_lock superblock");
+               goto err_with_tm;
+       }
+
+       sb = dm_block_data(sblock);
+       __prepare_superblock(cmd, sb);
+       r = dm_tm_commit(cmd->tm, sblock);
+       if (r) {
+               DMERR("Failed to commit superblock");
+               goto err_with_tm;
+       }
+
+       return 0;
+
+err_with_tm:
+       dm_sm_destroy(cmd->sm);
+       dm_tm_destroy(cmd->tm);
+
+       return r;
+}
+
+static int __open_or_format_metadata(struct dm_clone_metadata *cmd, bool may_format_device)
+{
+       int r;
+       bool formatted = false;
+
+       r = __superblock_all_zeroes(cmd->bm, &formatted);
+       if (r)
+               return r;
+
+       if (!formatted)
+               return may_format_device ? __format_metadata(cmd) : -EPERM;
+
+       return __open_metadata(cmd);
+}
+
+static int __create_persistent_data_structures(struct dm_clone_metadata *cmd,
+                                              bool may_format_device)
+{
+       int r;
+
+       /* Create block manager */
+       cmd->bm = dm_block_manager_create(cmd->bdev,
+                                        DM_CLONE_METADATA_BLOCK_SIZE << SECTOR_SHIFT,
+                                        DM_CLONE_MAX_CONCURRENT_LOCKS);
+       if (IS_ERR(cmd->bm)) {
+               DMERR("Failed to create block manager");
+               return PTR_ERR(cmd->bm);
+       }
+
+       r = __open_or_format_metadata(cmd, may_format_device);
+       if (r)
+               dm_block_manager_destroy(cmd->bm);
+
+       return r;
+}
+
+static void __destroy_persistent_data_structures(struct dm_clone_metadata *cmd)
+{
+       dm_sm_destroy(cmd->sm);
+       dm_tm_destroy(cmd->tm);
+       dm_block_manager_destroy(cmd->bm);
+}
+
+/*---------------------------------------------------------------------------*/
+
+static size_t bitmap_size(unsigned long nr_bits)
+{
+       return BITS_TO_LONGS(nr_bits) * sizeof(long);
+}
+
+static int dirty_map_init(struct dm_clone_metadata *cmd)
+{
+       cmd->dmap[0].changed = 0;
+       cmd->dmap[0].dirty_words = kvzalloc(bitmap_size(cmd->nr_words), GFP_KERNEL);
+
+       if (!cmd->dmap[0].dirty_words) {
+               DMERR("Failed to allocate dirty bitmap");
+               return -ENOMEM;
+       }
+
+       cmd->dmap[1].changed = 0;
+       cmd->dmap[1].dirty_words = kvzalloc(bitmap_size(cmd->nr_words), GFP_KERNEL);
+
+       if (!cmd->dmap[1].dirty_words) {
+               DMERR("Failed to allocate dirty bitmap");
+               kvfree(cmd->dmap[0].dirty_words);
+               return -ENOMEM;
+       }
+
+       cmd->current_dmap = &cmd->dmap[0];
+
+       return 0;
+}
+
+static void dirty_map_exit(struct dm_clone_metadata *cmd)
+{
+       kvfree(cmd->dmap[0].dirty_words);
+       kvfree(cmd->dmap[1].dirty_words);
+}
+
+static int __load_bitset_in_core(struct dm_clone_metadata *cmd)
+{
+       int r;
+       unsigned long i;
+       struct dm_bitset_cursor c;
+
+       /* Flush bitset cache */
+       r = dm_bitset_flush(&cmd->bitset_info, cmd->bitset_root, &cmd->bitset_root);
+       if (r)
+               return r;
+
+       r = dm_bitset_cursor_begin(&cmd->bitset_info, cmd->bitset_root, cmd->nr_regions, &c);
+       if (r)
+               return r;
+
+       for (i = 0; ; i++) {
+               if (dm_bitset_cursor_get_value(&c))
+                       __set_bit(i, cmd->region_map);
+               else
+                       __clear_bit(i, cmd->region_map);
+
+               if (i >= (cmd->nr_regions - 1))
+                       break;
+
+               r = dm_bitset_cursor_next(&c);
+
+               if (r)
+                       break;
+       }
+
+       dm_bitset_cursor_end(&c);
+
+       return r;
+}
+
+struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
+                                                sector_t target_size,
+                                                sector_t region_size)
+{
+       int r;
+       struct dm_clone_metadata *cmd;
+
+       cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
+       if (!cmd) {
+               DMERR("Failed to allocate memory for dm-clone metadata");
+               return ERR_PTR(-ENOMEM);
+       }
+
+       cmd->bdev = bdev;
+       cmd->target_size = target_size;
+       cmd->region_size = region_size;
+       cmd->nr_regions = dm_sector_div_up(cmd->target_size, cmd->region_size);
+       cmd->nr_words = BITS_TO_LONGS(cmd->nr_regions);
+
+       init_rwsem(&cmd->lock);
+       spin_lock_init(&cmd->bitmap_lock);
+       cmd->read_only = 0;
+       cmd->fail_io = false;
+       cmd->hydration_done = false;
+
+       cmd->region_map = kvmalloc(bitmap_size(cmd->nr_regions), GFP_KERNEL);
+       if (!cmd->region_map) {
+               DMERR("Failed to allocate memory for region bitmap");
+               r = -ENOMEM;
+               goto out_with_md;
+       }
+
+       r = __create_persistent_data_structures(cmd, true);
+       if (r)
+               goto out_with_region_map;
+
+       r = __load_bitset_in_core(cmd);
+       if (r) {
+               DMERR("Failed to load on-disk region map");
+               goto out_with_pds;
+       }
+
+       r = dirty_map_init(cmd);
+       if (r)
+               goto out_with_pds;
+
+       if (bitmap_full(cmd->region_map, cmd->nr_regions))
+               cmd->hydration_done = true;
+
+       return cmd;
+
+out_with_pds:
+       __destroy_persistent_data_structures(cmd);
+
+out_with_region_map:
+       kvfree(cmd->region_map);
+
+out_with_md:
+       kfree(cmd);
+
+       return ERR_PTR(r);
+}
+
+void dm_clone_metadata_close(struct dm_clone_metadata *cmd)
+{
+       if (!cmd->fail_io)
+               __destroy_persistent_data_structures(cmd);
+
+       dirty_map_exit(cmd);
+       kvfree(cmd->region_map);
+       kfree(cmd);
+}
+
+bool dm_clone_is_hydration_done(struct dm_clone_metadata *cmd)
+{
+       return cmd->hydration_done;
+}
+
+bool dm_clone_is_region_hydrated(struct dm_clone_metadata *cmd, unsigned long region_nr)
+{
+       return dm_clone_is_hydration_done(cmd) || test_bit(region_nr, cmd->region_map);
+}
+
+bool dm_clone_is_range_hydrated(struct dm_clone_metadata *cmd,
+                               unsigned long start, unsigned long nr_regions)
+{
+       unsigned long bit;
+
+       if (dm_clone_is_hydration_done(cmd))
+               return true;
+
+       bit = find_next_zero_bit(cmd->region_map, cmd->nr_regions, start);
+
+       return (bit >= (start + nr_regions));
+}
+
+unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd)
+{
+       return bitmap_weight(cmd->region_map, cmd->nr_regions);
+}
+
+unsigned long dm_clone_find_next_unhydrated_region(struct dm_clone_metadata *cmd,
+                                                  unsigned long start)
+{
+       return find_next_zero_bit(cmd->region_map, cmd->nr_regions, start);
+}
+
+static int __update_metadata_word(struct dm_clone_metadata *cmd, unsigned long word)
+{
+       int r;
+       unsigned long index = word * BITS_PER_LONG;
+       unsigned long max_index = min(cmd->nr_regions, (word + 1) * BITS_PER_LONG);
+
+       while (index < max_index) {
+               if (test_bit(index, cmd->region_map)) {
+                       r = dm_bitset_set_bit(&cmd->bitset_info, cmd->bitset_root,
+                                             index, &cmd->bitset_root);
+
+                       if (r) {
+                               DMERR("dm_bitset_set_bit failed");
+                               return r;
+                       }
+               }
+               index++;
+       }
+
+       return 0;
+}
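
__update_metadata_word() turns one dirty region_map word into on-disk bit updates: word w
covers region numbers w * BITS_PER_LONG up to (w + 1) * BITS_PER_LONG, clamped to
nr_regions for the final, partial word. A quick stand-alone check of that index
arithmetic, assuming 64-bit longs:

#include <stdio.h>

#define EXAMPLE_BITS_PER_LONG 64UL      /* assumption for this illustration */

int main(void)
{
        unsigned long nr_regions = 150;
        unsigned long word = 2;         /* the third region_map word */
        unsigned long first = word * EXAMPLE_BITS_PER_LONG;
        unsigned long last = (word + 1) * EXAMPLE_BITS_PER_LONG;

        if (last > nr_regions)
                last = nr_regions;      /* the min() in the kernel code */

        /* Prints "word 2 covers regions 128..149" */
        printf("word %lu covers regions %lu..%lu\n", word, first, last - 1);
        return 0;
}
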
+
+static int __metadata_commit(struct dm_clone_metadata *cmd)
+{
+       int r;
+       struct dm_block *sblock;
+       struct superblock_disk *sb;
+
+       /* Flush bitset cache */
+       r = dm_bitset_flush(&cmd->bitset_info, cmd->bitset_root, &cmd->bitset_root);
+       if (r) {
+               DMERR("dm_bitset_flush failed");
+               return r;
+       }
+
+       /* Flush to disk all blocks, except the superblock */
+       r = dm_tm_pre_commit(cmd->tm);
+       if (r) {
+               DMERR("dm_tm_pre_commit failed");
+               return r;
+       }
+
+       /* Save the space map root in cmd->metadata_space_map_root */
+       r = __copy_sm_root(cmd);
+       if (r) {
+               DMERR("__copy_sm_root failed");
+               return r;
+       }
+
+       /* Lock the superblock */
+       r = superblock_write_lock_zero(cmd, &sblock);
+       if (r) {
+               DMERR("Failed to write_lock superblock");
+               return r;
+       }
+
+       /* Save the metadata in superblock */
+       sb = dm_block_data(sblock);
+       __prepare_superblock(cmd, sb);
+
+       /* Unlock superblock and commit it to disk */
+       r = dm_tm_commit(cmd->tm, sblock);
+       if (r) {
+               DMERR("Failed to commit superblock");
+               return r;
+       }
+
+       /*
+        * FIXME: Find a more efficient way to check if the hydration is done.
+        */
+       if (bitmap_full(cmd->region_map, cmd->nr_regions))
+               cmd->hydration_done = true;
+
+       return 0;
+}
+
+static int __flush_dmap(struct dm_clone_metadata *cmd, struct dirty_map *dmap)
+{
+       int r;
+       unsigned long word, flags;
+
+       word = 0;
+       do {
+               word = find_next_bit(dmap->dirty_words, cmd->nr_words, word);
+
+               if (word == cmd->nr_words)
+                       break;
+
+               r = __update_metadata_word(cmd, word);
+
+               if (r)
+                       return r;
+
+               __clear_bit(word, dmap->dirty_words);
+               word++;
+       } while (word < cmd->nr_words);
+
+       r = __metadata_commit(cmd);
+
+       if (r)
+               return r;
+
+       /* Update the changed flag */
+       spin_lock_irqsave(&cmd->bitmap_lock, flags);
+       dmap->changed = 0;
+       spin_unlock_irqrestore(&cmd->bitmap_lock, flags);
+
+       return 0;
+}
+
+int dm_clone_metadata_commit(struct dm_clone_metadata *cmd)
+{
+       int r = -EPERM;
+       unsigned long flags;
+       struct dirty_map *dmap, *next_dmap;
+
+       down_write(&cmd->lock);
+
+       if (cmd->fail_io || dm_bm_is_read_only(cmd->bm))
+               goto out;
+
+       /* Get current dirty bitmap */
+       dmap = cmd->current_dmap;
+
+       /* Get next dirty bitmap */
+       next_dmap = (dmap == &cmd->dmap[0]) ? &cmd->dmap[1] : &cmd->dmap[0];
+
+       /*
+        * The last commit failed, so we don't have a clean dirty-bitmap to
+        * use.
+        */
+       if (WARN_ON(next_dmap->changed)) {
+               r = -EINVAL;
+               goto out;
+       }
+
+       /* Swap dirty bitmaps */
+       spin_lock_irqsave(&cmd->bitmap_lock, flags);
+       cmd->current_dmap = next_dmap;
+       spin_unlock_irqrestore(&cmd->bitmap_lock, flags);
+
+       /*
+        * No one is accessing the old dirty bitmap anymore, so we can flush
+        * it.
+        */
+       r = __flush_dmap(cmd, dmap);
+out:
+       up_write(&cmd->lock);
+
+       return r;
+}
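
The commit path swaps the two dirty maps under the bitmap spinlock (while holding the
metadata semaphore for writing) and only then flushes the retired map, so
dm_clone_set_region_hydrated() and dm_clone_cond_set_range() can keep dirtying the other
map while the flush is in progress. A minimal user-space model of that double buffering,
with a single pthread mutex standing in for the kernel's locks:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long maps[2];
static unsigned long *current_map = &maps[0];

/* Updaters only ever touch *current_map, and only under the lock. */
static void mark_dirty(unsigned long bit)
{
        pthread_mutex_lock(&lock);
        *current_map |= 1UL << bit;
        pthread_mutex_unlock(&lock);
}

static void commit(void)
{
        unsigned long *old;

        /* Swap the maps; new updates immediately land in the other one. */
        pthread_mutex_lock(&lock);
        old = current_map;
        current_map = (old == &maps[0]) ? &maps[1] : &maps[0];
        pthread_mutex_unlock(&lock);

        /* "Flush" the retired map; nobody else writes to it any more. */
        printf("flushing 0x%lx\n", *old);
        *old = 0;
}

int main(void)
{
        mark_dirty(1);
        mark_dirty(5);
        commit();               /* prints "flushing 0x22" */
        mark_dirty(0);
        commit();               /* prints "flushing 0x1" */
        return 0;
}
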
+
+int dm_clone_set_region_hydrated(struct dm_clone_metadata *cmd, unsigned long region_nr)
+{
+       int r = 0;
+       struct dirty_map *dmap;
+       unsigned long word, flags;
+
+       word = region_nr / BITS_PER_LONG;
+
+       spin_lock_irqsave(&cmd->bitmap_lock, flags);
+
+       if (cmd->read_only) {
+               r = -EPERM;
+               goto out;
+       }
+
+       dmap = cmd->current_dmap;
+
+       __set_bit(word, dmap->dirty_words);
+       __set_bit(region_nr, cmd->region_map);
+       dmap->changed = 1;
+
+out:
+       spin_unlock_irqrestore(&cmd->bitmap_lock, flags);
+
+       return r;
+}
+
+int dm_clone_cond_set_range(struct dm_clone_metadata *cmd, unsigned long start,
+                           unsigned long nr_regions)
+{
+       int r = 0;
+       struct dirty_map *dmap;
+       unsigned long word, region_nr, flags;
+
+       spin_lock_irqsave(&cmd->bitmap_lock, flags);
+
+       if (cmd->read_only) {
+               r = -EPERM;
+               goto out;
+       }
+
+       dmap = cmd->current_dmap;
+       for (region_nr = start; region_nr < (start + nr_regions); region_nr++) {
+               if (!test_bit(region_nr, cmd->region_map)) {
+                       word = region_nr / BITS_PER_LONG;
+                       __set_bit(word, dmap->dirty_words);
+                       __set_bit(region_nr, cmd->region_map);
+                       dmap->changed = 1;
+               }
+       }
+out:
+       spin_unlock_irqrestore(&cmd->bitmap_lock, flags);
+
+       return r;
+}
+
+/*
+ * WARNING: This must not be called concurrently with either
+ * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), as it changes
+ * cmd->region_map without taking the cmd->bitmap_lock spinlock. The only
+ * exception is after setting the metadata to read-only mode, using
+ * dm_clone_metadata_set_read_only().
+ *
+ * We don't take the spinlock because __load_bitset_in_core() does I/O, so it
+ * may block.
+ */
+int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *cmd)
+{
+       int r = -EINVAL;
+
+       down_write(&cmd->lock);
+
+       if (cmd->fail_io)
+               goto out;
+
+       r = __load_bitset_in_core(cmd);
+out:
+       up_write(&cmd->lock);
+
+       return r;
+}
+
+bool dm_clone_changed_this_transaction(struct dm_clone_metadata *cmd)
+{
+       bool r;
+       unsigned long flags;
+
+       spin_lock_irqsave(&cmd->bitmap_lock, flags);
+       r = cmd->dmap[0].changed || cmd->dmap[1].changed;
+       spin_unlock_irqrestore(&cmd->bitmap_lock, flags);
+
+       return r;
+}
+
+int dm_clone_metadata_abort(struct dm_clone_metadata *cmd)
+{
+       int r = -EPERM;
+
+       down_write(&cmd->lock);
+
+       if (cmd->fail_io || dm_bm_is_read_only(cmd->bm))
+               goto out;
+
+       __destroy_persistent_data_structures(cmd);
+
+       r = __create_persistent_data_structures(cmd, false);
+       if (r) {
+               /* If something went wrong we can neither write nor read the metadata */
+               cmd->fail_io = true;
+       }
+out:
+       up_write(&cmd->lock);
+
+       return r;
+}
+
+void dm_clone_metadata_set_read_only(struct dm_clone_metadata *cmd)
+{
+       unsigned long flags;
+
+       down_write(&cmd->lock);
+
+       spin_lock_irqsave(&cmd->bitmap_lock, flags);
+       cmd->read_only = 1;
+       spin_unlock_irqrestore(&cmd->bitmap_lock, flags);
+
+       if (!cmd->fail_io)
+               dm_bm_set_read_only(cmd->bm);
+
+       up_write(&cmd->lock);
+}
+
+void dm_clone_metadata_set_read_write(struct dm_clone_metadata *cmd)
+{
+       unsigned long flags;
+
+       down_write(&cmd->lock);
+
+       spin_lock_irqsave(&cmd->bitmap_lock, flags);
+       cmd->read_only = 0;
+       spin_unlock_irqrestore(&cmd->bitmap_lock, flags);
+
+       if (!cmd->fail_io)
+               dm_bm_set_read_write(cmd->bm);
+
+       up_write(&cmd->lock);
+}
+
+int dm_clone_get_free_metadata_block_count(struct dm_clone_metadata *cmd,
+                                          dm_block_t *result)
+{
+       int r = -EINVAL;
+
+       down_read(&cmd->lock);
+
+       if (!cmd->fail_io)
+               r = dm_sm_get_nr_free(cmd->sm, result);
+
+       up_read(&cmd->lock);
+
+       return r;
+}
+
+int dm_clone_get_metadata_dev_size(struct dm_clone_metadata *cmd,
+                                  dm_block_t *result)
+{
+       int r = -EINVAL;
+
+       down_read(&cmd->lock);
+
+       if (!cmd->fail_io)
+               r = dm_sm_get_nr_blocks(cmd->sm, result);
+
+       up_read(&cmd->lock);
+
+       return r;
+}
diff --git a/drivers/md/dm-clone-metadata.h b/drivers/md/dm-clone-metadata.h
new file mode 100644 (file)
index 0000000..434bff0
--- /dev/null
@@ -0,0 +1,158 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
+ */
+
+#ifndef DM_CLONE_METADATA_H
+#define DM_CLONE_METADATA_H
+
+#include "persistent-data/dm-block-manager.h"
+#include "persistent-data/dm-space-map-metadata.h"
+
+#define DM_CLONE_METADATA_BLOCK_SIZE DM_SM_METADATA_BLOCK_SIZE
+
+/*
+ * The metadata device is currently limited in size.
+ */
+#define DM_CLONE_METADATA_MAX_SECTORS DM_SM_METADATA_MAX_SECTORS
+
+/*
+ * A metadata device larger than 16GB triggers a warning.
+ */
+#define DM_CLONE_METADATA_MAX_SECTORS_WARNING (16 * (1024 * 1024 * 1024 >> SECTOR_SHIFT))
+
+#define SPACE_MAP_ROOT_SIZE 128
+
+/* dm-clone metadata */
+struct dm_clone_metadata;
+
+/*
+ * Set region status to hydrated.
+ *
+ * @cmd: The dm-clone metadata
+ * @region_nr: The region number
+ *
+ * This function doesn't block, so it's safe to call it from interrupt context.
+ */
+int dm_clone_set_region_hydrated(struct dm_clone_metadata *cmd, unsigned long region_nr);
+
+/*
+ * Set status of all regions in the provided range to hydrated, if not already
+ * hydrated.
+ *
+ * @cmd: The dm-clone metadata
+ * @start: Starting region number
+ * @nr_regions: Number of regions in the range
+ *
+ * This function doesn't block, so it's safe to call it from interrupt context.
+ */
+int dm_clone_cond_set_range(struct dm_clone_metadata *cmd, unsigned long start,
+                           unsigned long nr_regions);
+
+/*
+ * Read existing or create fresh metadata.
+ *
+ * @bdev: The device storing the metadata
+ * @target_size: The target size
+ * @region_size: The region size
+ *
+ * @returns: The dm-clone metadata
+ *
+ * This function reads the superblock of @bdev and checks if it's all zeroes.
+ * If it is, it formats @bdev and creates fresh metadata. If it isn't, it
+ * validates the metadata stored in @bdev.
+ */
+struct dm_clone_metadata *dm_clone_metadata_open(struct block_device *bdev,
+                                                sector_t target_size,
+                                                sector_t region_size);
+
+/*
+ * Free the resources related to metadata management.
+ */
+void dm_clone_metadata_close(struct dm_clone_metadata *cmd);
+
+/*
+ * Commit dm-clone metadata to disk.
+ */
+int dm_clone_metadata_commit(struct dm_clone_metadata *cmd);
+
+/*
+ * Reload the in core copy of the on-disk bitmap.
+ *
+ * This should be used after aborting a metadata transaction and setting the
+ * metadata to read-only, to invalidate the in-core cache and make it match the
+ * on-disk metadata.
+ *
+ * WARNING: It must not be called concurrently with either
+ * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), as it updates
+ * the region bitmap without taking the relevant spinlock. We don't take the
+ * spinlock because dm_clone_reload_in_core_bitset() does I/O, so it may block.
+ *
+ * But, it's safe to use it after calling dm_clone_metadata_set_read_only(),
+ * because the latter sets the metadata to read-only mode. Both
+ * dm_clone_set_region_hydrated() and dm_clone_cond_set_range() refuse to touch
+ * the region bitmap, after calling dm_clone_metadata_set_read_only().
+ */
+int dm_clone_reload_in_core_bitset(struct dm_clone_metadata *cmd);
+
+/*
+ * Check whether dm-clone's metadata changed this transaction.
+ */
+bool dm_clone_changed_this_transaction(struct dm_clone_metadata *cmd);
+
+/*
+ * Abort current metadata transaction and rollback metadata to the last
+ * committed transaction.
+ */
+int dm_clone_metadata_abort(struct dm_clone_metadata *cmd);
+
+/*
+ * Switches the metadata to read-only mode. Once read-only mode has been
+ * entered, the following functions will return -EPERM:
+ *
+ *   dm_clone_metadata_commit()
+ *   dm_clone_set_region_hydrated()
+ *   dm_clone_cond_set_range()
+ *   dm_clone_metadata_abort()
+ */
+void dm_clone_metadata_set_read_only(struct dm_clone_metadata *cmd);
+void dm_clone_metadata_set_read_write(struct dm_clone_metadata *cmd);
+
+/*
+ * Returns true if the hydration of the destination device is finished.
+ */
+bool dm_clone_is_hydration_done(struct dm_clone_metadata *cmd);
+
+/*
+ * Returns true if region @region_nr is hydrated.
+ */
+bool dm_clone_is_region_hydrated(struct dm_clone_metadata *cmd, unsigned long region_nr);
+
+/*
+ * Returns true if all the regions in the range are hydrated.
+ */
+bool dm_clone_is_range_hydrated(struct dm_clone_metadata *cmd,
+                               unsigned long start, unsigned long nr_regions);
+
+/*
+ * Returns the number of hydrated regions.
+ */
+unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd);
+
+/*
+ * Returns the first unhydrated region with region_nr >= @start
+ */
+unsigned long dm_clone_find_next_unhydrated_region(struct dm_clone_metadata *cmd,
+                                                  unsigned long start);
+
+/*
+ * Get the number of free metadata blocks.
+ */
+int dm_clone_get_free_metadata_block_count(struct dm_clone_metadata *cmd, dm_block_t *result);
+
+/*
+ * Get the total number of metadata blocks.
+ */
+int dm_clone_get_metadata_dev_size(struct dm_clone_metadata *cmd, dm_block_t *result);
+
+#endif /* DM_CLONE_METADATA_H */
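
For orientation, here is one plausible way a caller could drive this API end to end. This
is a condensed, hypothetical flow rather than the real consumer (dm-clone-target.c); error
handling is trimmed and the bdev/size arguments are assumed to come from the target's
constructor.

#include <linux/err.h>
#include <linux/blkdev.h>

#include "dm-clone-metadata.h"

static int example_metadata_flow(struct block_device *bdev, sector_t target_size,
                                 sector_t region_size)
{
        struct dm_clone_metadata *cmd;
        int r;

        /* Reads existing metadata, or formats @bdev if its superblock is all zeroes. */
        cmd = dm_clone_metadata_open(bdev, target_size, region_size);
        if (IS_ERR(cmd))
                return PTR_ERR(cmd);

        /* Mark region 0 as copied; this only dirties the in-core bitmaps. */
        r = dm_clone_set_region_hydrated(cmd, 0);

        /* Persist the dirtied words to the metadata device. */
        if (!r)
                r = dm_clone_metadata_commit(cmd);

        /*
         * From here on dm_clone_is_region_hydrated(cmd, 0) returns true and
         * I/O to region 0 can be served entirely from the destination device.
         */
        dm_clone_metadata_close(cmd);
        return r;
}
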
diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
new file mode 100644 (file)
index 0000000..cd6f9e9
--- /dev/null
@@ -0,0 +1,2191 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2019 Arrikto, Inc. All Rights Reserved.
+ */
+
+#include <linux/mm.h>
+#include <linux/bio.h>
+#include <linux/err.h>
+#include <linux/hash.h>
+#include <linux/list.h>
+#include <linux/log2.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/dm-io.h>
+#include <linux/mutex.h>
+#include <linux/atomic.h>
+#include <linux/bitops.h>
+#include <linux/blkdev.h>
+#include <linux/kdev_t.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/jiffies.h>
+#include <linux/mempool.h>
+#include <linux/spinlock.h>
+#include <linux/blk_types.h>
+#include <linux/dm-kcopyd.h>
+#include <linux/workqueue.h>
+#include <linux/backing-dev.h>
+#include <linux/device-mapper.h>
+
+#include "dm.h"
+#include "dm-clone-metadata.h"
+
+#define DM_MSG_PREFIX "clone"
+
+/*
+ * Minimum and maximum allowed region sizes
+ */
+#define MIN_REGION_SIZE (1 << 3)  /* 4KB */
+#define MAX_REGION_SIZE (1 << 21) /* 1GB */
+
+#define MIN_HYDRATIONS 256 /* Size of hydration mempool */
+#define DEFAULT_HYDRATION_THRESHOLD 1 /* 1 region */
+#define DEFAULT_HYDRATION_BATCH_SIZE 1 /* Hydrate in batches of 1 region */
+
+#define COMMIT_PERIOD HZ /* 1 sec */
+
+/*
+ * Hydration hash table size: 1 << HASH_TABLE_BITS
+ */
+#define HASH_TABLE_BITS 15
+
+DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(clone_hydration_throttle,
+       "A percentage of time allocated for hydrating regions");
+
+/* Slab cache for struct dm_clone_region_hydration */
+static struct kmem_cache *_hydration_cache;
+
+/* dm-clone metadata modes */
+enum clone_metadata_mode {
+       CM_WRITE,               /* metadata may be changed */
+       CM_READ_ONLY,           /* metadata may not be changed */
+       CM_FAIL,                /* all metadata I/O fails */
+};
+
+struct hash_table_bucket;
+
+struct clone {
+       struct dm_target *ti;
+       struct dm_target_callbacks callbacks;
+
+       struct dm_dev *metadata_dev;
+       struct dm_dev *dest_dev;
+       struct dm_dev *source_dev;
+
+       unsigned long nr_regions;
+       sector_t region_size;
+       unsigned int region_shift;
+
+       /*
+        * A metadata commit and the actions taken in case it fails should run
+        * as a single atomic step.
+        */
+       struct mutex commit_lock;
+
+       struct dm_clone_metadata *cmd;
+
+       /* Region hydration hash table */
+       struct hash_table_bucket *ht;
+
+       atomic_t ios_in_flight;
+
+       wait_queue_head_t hydration_stopped;
+
+       mempool_t hydration_pool;
+
+       unsigned long last_commit_jiffies;
+
+       /*
+        * We defer incoming WRITE bios for regions that are not hydrated,
+        * until after these regions have been hydrated.
+        *
+        * Also, we defer REQ_FUA and REQ_PREFLUSH bios, until after the
+        * metadata have been committed.
+        */
+       spinlock_t lock;
+       struct bio_list deferred_bios;
+       struct bio_list deferred_discard_bios;
+       struct bio_list deferred_flush_bios;
+       struct bio_list deferred_flush_completions;
+
+       /* Maximum number of regions being copied during background hydration. */
+       unsigned int hydration_threshold;
+
+       /* Number of regions to batch together during background hydration. */
+       unsigned int hydration_batch_size;
+
+       /* Which region to hydrate next */
+       unsigned long hydration_offset;
+
+       atomic_t hydrations_in_flight;
+
+       /*
+        * Save a copy of the table line rather than reconstructing it for the
+        * status.
+        */
+       unsigned int nr_ctr_args;
+       const char **ctr_args;
+
+       struct workqueue_struct *wq;
+       struct work_struct worker;
+       struct delayed_work waker;
+
+       struct dm_kcopyd_client *kcopyd_client;
+
+       enum clone_metadata_mode mode;
+       unsigned long flags;
+};
+
+/*
+ * dm-clone flags
+ */
+#define DM_CLONE_DISCARD_PASSDOWN 0
+#define DM_CLONE_HYDRATION_ENABLED 1
+#define DM_CLONE_HYDRATION_SUSPENDED 2
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Metadata failure handling.
+ */
+static enum clone_metadata_mode get_clone_mode(struct clone *clone)
+{
+       return READ_ONCE(clone->mode);
+}
+
+static const char *clone_device_name(struct clone *clone)
+{
+       return dm_table_device_name(clone->ti->table);
+}
+
+static void __set_clone_mode(struct clone *clone, enum clone_metadata_mode new_mode)
+{
+       const char *descs[] = {
+               "read-write",
+               "read-only",
+               "fail"
+       };
+
+       enum clone_metadata_mode old_mode = get_clone_mode(clone);
+
+       /* Never move out of fail mode */
+       if (old_mode == CM_FAIL)
+               new_mode = CM_FAIL;
+
+       switch (new_mode) {
+       case CM_FAIL:
+       case CM_READ_ONLY:
+               dm_clone_metadata_set_read_only(clone->cmd);
+               break;
+
+       case CM_WRITE:
+               dm_clone_metadata_set_read_write(clone->cmd);
+               break;
+       }
+
+       WRITE_ONCE(clone->mode, new_mode);
+
+       if (new_mode != old_mode) {
+               dm_table_event(clone->ti->table);
+               DMINFO("%s: Switching to %s mode", clone_device_name(clone),
+                      descs[(int)new_mode]);
+       }
+}
+
+static void __abort_transaction(struct clone *clone)
+{
+       const char *dev_name = clone_device_name(clone);
+
+       if (get_clone_mode(clone) >= CM_READ_ONLY)
+               return;
+
+       DMERR("%s: Aborting current metadata transaction", dev_name);
+       if (dm_clone_metadata_abort(clone->cmd)) {
+               DMERR("%s: Failed to abort metadata transaction", dev_name);
+               __set_clone_mode(clone, CM_FAIL);
+       }
+}
+
+static void __reload_in_core_bitset(struct clone *clone)
+{
+       const char *dev_name = clone_device_name(clone);
+
+       if (get_clone_mode(clone) == CM_FAIL)
+               return;
+
+       /* Reload the on-disk bitset */
+       DMINFO("%s: Reloading on-disk bitmap", dev_name);
+       if (dm_clone_reload_in_core_bitset(clone->cmd)) {
+               DMERR("%s: Failed to reload on-disk bitmap", dev_name);
+               __set_clone_mode(clone, CM_FAIL);
+       }
+}
+
+static void __metadata_operation_failed(struct clone *clone, const char *op, int r)
+{
+       DMERR("%s: Metadata operation `%s' failed: error = %d",
+             clone_device_name(clone), op, r);
+
+       __abort_transaction(clone);
+       __set_clone_mode(clone, CM_READ_ONLY);
+
+       /*
+        * dm_clone_reload_in_core_bitset() may run concurrently with either
+        * dm_clone_set_region_hydrated() or dm_clone_cond_set_range(), but
+        * it's safe as we have already set the metadata to read-only mode.
+        */
+       __reload_in_core_bitset(clone);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/* Wake up anyone waiting for region hydrations to stop */
+static inline void wakeup_hydration_waiters(struct clone *clone)
+{
+       wake_up_all(&clone->hydration_stopped);
+}
+
+static inline void wake_worker(struct clone *clone)
+{
+       queue_work(clone->wq, &clone->worker);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * bio helper functions.
+ */
+static inline void remap_to_source(struct clone *clone, struct bio *bio)
+{
+       bio_set_dev(bio, clone->source_dev->bdev);
+}
+
+static inline void remap_to_dest(struct clone *clone, struct bio *bio)
+{
+       bio_set_dev(bio, clone->dest_dev->bdev);
+}
+
+static bool bio_triggers_commit(struct clone *clone, struct bio *bio)
+{
+       return op_is_flush(bio->bi_opf) &&
+               dm_clone_changed_this_transaction(clone->cmd);
+}
+
+/* Get the address of the region in sectors */
+static inline sector_t region_to_sector(struct clone *clone, unsigned long region_nr)
+{
+       return (region_nr << clone->region_shift);
+}
+
+/* Get the region number of the bio */
+static inline unsigned long bio_to_region(struct clone *clone, struct bio *bio)
+{
+       return (bio->bi_iter.bi_sector >> clone->region_shift);
+}
+
+/* Get the region range covered by the bio */
+static void bio_region_range(struct clone *clone, struct bio *bio,
+                            unsigned long *rs, unsigned long *re)
+{
+       *rs = dm_sector_div_up(bio->bi_iter.bi_sector, clone->region_size);
+       *re = bio_end_sector(bio) >> clone->region_shift;
+}
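
bio_region_range() deliberately rounds the start sector up and the end sector down, so
that [*rs, *re) contains only regions fully covered by the bio; a region the bio covers
only partially must not, for example, be discarded. A stand-alone check of the arithmetic,
assuming an 8-sector (4KB) region size:

#include <stdio.h>

int main(void)
{
        unsigned long region_size = 8, region_shift = 3;        /* 8 sectors = 4KB */
        unsigned long bi_sector = 10, end_sector = 50;          /* bio covers [10, 50) */

        unsigned long rs = (bi_sector + region_size - 1) / region_size; /* round up */
        unsigned long re = end_sector >> region_shift;                  /* round down */

        /* Prints "fully covered regions: 2..5" (sectors 16..47) */
        printf("fully covered regions: %lu..%lu\n", rs, re - 1);
        return 0;
}
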
+
+/* Check whether a bio overwrites a region */
+static inline bool is_overwrite_bio(struct clone *clone, struct bio *bio)
+{
+       return (bio_data_dir(bio) == WRITE && bio_sectors(bio) == clone->region_size);
+}
+
+static void fail_bios(struct bio_list *bios, blk_status_t status)
+{
+       struct bio *bio;
+
+       while ((bio = bio_list_pop(bios))) {
+               bio->bi_status = status;
+               bio_endio(bio);
+       }
+}
+
+static void submit_bios(struct bio_list *bios)
+{
+       struct bio *bio;
+       struct blk_plug plug;
+
+       blk_start_plug(&plug);
+
+       while ((bio = bio_list_pop(bios)))
+               generic_make_request(bio);
+
+       blk_finish_plug(&plug);
+}
+
+/*
+ * Submit bio to the underlying device.
+ *
+ * If the bio triggers a commit, delay it, until after the metadata have been
+ * committed.
+ *
+ * NOTE: The bio remapping must be performed by the caller.
+ */
+static void issue_bio(struct clone *clone, struct bio *bio)
+{
+       unsigned long flags;
+
+       if (!bio_triggers_commit(clone, bio)) {
+               generic_make_request(bio);
+               return;
+       }
+
+       /*
+        * If the metadata mode is RO or FAIL we won't be able to commit the
+        * metadata, so we complete the bio with an error.
+        */
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+               bio_io_error(bio);
+               return;
+       }
+
+       /*
+        * Batch together any bios that trigger commits and then issue a single
+        * commit for them in process_deferred_flush_bios().
+        */
+       spin_lock_irqsave(&clone->lock, flags);
+       bio_list_add(&clone->deferred_flush_bios, bio);
+       spin_unlock_irqrestore(&clone->lock, flags);
+
+       wake_worker(clone);
+}
+
+/*
+ * Remap bio to the destination device and submit it.
+ *
+ * If the bio triggers a commit, delay it, until after the metadata have been
+ * committed.
+ */
+static void remap_and_issue(struct clone *clone, struct bio *bio)
+{
+       remap_to_dest(clone, bio);
+       issue_bio(clone, bio);
+}
+
+/*
+ * Issue bios that have been deferred until after their region has finished
+ * hydrating.
+ *
+ * We delegate the bio submission to the worker thread, so this is safe to call
+ * from interrupt context.
+ */
+static void issue_deferred_bios(struct clone *clone, struct bio_list *bios)
+{
+       struct bio *bio;
+       unsigned long flags;
+       struct bio_list flush_bios = BIO_EMPTY_LIST;
+       struct bio_list normal_bios = BIO_EMPTY_LIST;
+
+       if (bio_list_empty(bios))
+               return;
+
+       while ((bio = bio_list_pop(bios))) {
+               if (bio_triggers_commit(clone, bio))
+                       bio_list_add(&flush_bios, bio);
+               else
+                       bio_list_add(&normal_bios, bio);
+       }
+
+       spin_lock_irqsave(&clone->lock, flags);
+       bio_list_merge(&clone->deferred_bios, &normal_bios);
+       bio_list_merge(&clone->deferred_flush_bios, &flush_bios);
+       spin_unlock_irqrestore(&clone->lock, flags);
+
+       wake_worker(clone);
+}
+
+static void complete_overwrite_bio(struct clone *clone, struct bio *bio)
+{
+       unsigned long flags;
+
+       /*
+        * If the bio has the REQ_FUA flag set we must commit the metadata
+        * before signaling its completion.
+        *
+        * complete_overwrite_bio() is only called by hydration_complete(),
+        * after having successfully updated the metadata. This means we don't
+        * need to call dm_clone_changed_this_transaction() to check if the
+        * metadata has changed and thus we can avoid taking the metadata spin
+        * lock.
+        */
+       if (!(bio->bi_opf & REQ_FUA)) {
+               bio_endio(bio);
+               return;
+       }
+
+       /*
+        * If the metadata mode is RO or FAIL we won't be able to commit the
+        * metadata, so we complete the bio with an error.
+        */
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+               bio_io_error(bio);
+               return;
+       }
+
+       /*
+        * Batch together any bios that trigger commits and then issue a single
+        * commit for them in process_deferred_flush_bios().
+        */
+       spin_lock_irqsave(&clone->lock, flags);
+       bio_list_add(&clone->deferred_flush_completions, bio);
+       spin_unlock_irqrestore(&clone->lock, flags);
+
+       wake_worker(clone);
+}
+
+static void trim_bio(struct bio *bio, sector_t sector, unsigned int len)
+{
+       bio->bi_iter.bi_sector = sector;
+       bio->bi_iter.bi_size = to_bytes(len);
+}
+
+static void complete_discard_bio(struct clone *clone, struct bio *bio, bool success)
+{
+       unsigned long rs, re;
+
+       /*
+        * If the destination device supports discards, remap and trim the
+        * discard bio and pass it down. Otherwise complete the bio
+        * immediately.
+        */
+       if (test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags) && success) {
+               remap_to_dest(clone, bio);
+               bio_region_range(clone, bio, &rs, &re);
+               trim_bio(bio, rs << clone->region_shift,
+                        (re - rs) << clone->region_shift);
+               generic_make_request(bio);
+       } else
+               bio_endio(bio);
+}
+
+static void process_discard_bio(struct clone *clone, struct bio *bio)
+{
+       unsigned long rs, re, flags;
+
+       bio_region_range(clone, bio, &rs, &re);
+       BUG_ON(re > clone->nr_regions);
+
+       if (unlikely(rs == re)) {
+               bio_endio(bio);
+               return;
+       }
+
+       /*
+        * The covered regions are already hydrated so we just need to pass
+        * down the discard.
+        */
+       if (dm_clone_is_range_hydrated(clone->cmd, rs, re - rs)) {
+               complete_discard_bio(clone, bio, true);
+               return;
+       }
+
+       /*
+        * If the metadata mode is RO or FAIL we won't be able to update the
+        * metadata for the regions covered by the discard so we just ignore
+        * it.
+        */
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+               bio_endio(bio);
+               return;
+       }
+
+       /*
+        * Defer discard processing.
+        */
+       spin_lock_irqsave(&clone->lock, flags);
+       bio_list_add(&clone->deferred_discard_bios, bio);
+       spin_unlock_irqrestore(&clone->lock, flags);
+
+       wake_worker(clone);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * dm-clone region hydrations.
+ */
+struct dm_clone_region_hydration {
+       struct clone *clone;
+       unsigned long region_nr;
+
+       struct bio *overwrite_bio;
+       bio_end_io_t *overwrite_bio_end_io;
+
+       struct bio_list deferred_bios;
+
+       blk_status_t status;
+
+       /* Used by hydration batching */
+       struct list_head list;
+
+       /* Used by hydration hash table */
+       struct hlist_node h;
+};
+
+/*
+ * Hydration hash table implementation.
+ *
+ * Ideally we would like to use list_bl, which uses bit spin locks and employs
+ * the least significant bit of the list head to lock the corresponding bucket,
+ * reducing the memory overhead for the locks. But, currently, list_bl and bit
+ * spin locks don't support IRQ safe versions. Since we have to take the lock
+ * in both process and interrupt context, we must fall back to using regular
+ * spin locks; one per hash table bucket.
+ */
+struct hash_table_bucket {
+       struct hlist_head head;
+
+       /* Spinlock protecting the bucket */
+       spinlock_t lock;
+};
+
+#define bucket_lock_irqsave(bucket, flags) \
+       spin_lock_irqsave(&(bucket)->lock, flags)
+
+#define bucket_unlock_irqrestore(bucket, flags) \
+       spin_unlock_irqrestore(&(bucket)->lock, flags)
+
+static int hash_table_init(struct clone *clone)
+{
+       unsigned int i, sz;
+       struct hash_table_bucket *bucket;
+
+       sz = 1 << HASH_TABLE_BITS;
+
+       clone->ht = kvmalloc(sz * sizeof(struct hash_table_bucket), GFP_KERNEL);
+       if (!clone->ht)
+               return -ENOMEM;
+
+       for (i = 0; i < sz; i++) {
+               bucket = clone->ht + i;
+
+               INIT_HLIST_HEAD(&bucket->head);
+               spin_lock_init(&bucket->lock);
+       }
+
+       return 0;
+}
+
+static void hash_table_exit(struct clone *clone)
+{
+       kvfree(clone->ht);
+}
+
+static struct hash_table_bucket *get_hash_table_bucket(struct clone *clone,
+                                                      unsigned long region_nr)
+{
+       return &clone->ht[hash_long(region_nr, HASH_TABLE_BITS)];
+}
+
+/*
+ * Search hash table for a hydration with hd->region_nr == region_nr
+ *
+ * NOTE: Must be called with the bucket lock held
+ */
+struct dm_clone_region_hydration *__hash_find(struct hash_table_bucket *bucket,
+                                             unsigned long region_nr)
+{
+       struct dm_clone_region_hydration *hd;
+
+       hlist_for_each_entry(hd, &bucket->head, h) {
+               if (hd->region_nr == region_nr)
+                       return hd;
+       }
+
+       return NULL;
+}
+
+/*
+ * Insert a hydration into the hash table.
+ *
+ * NOTE: Must be called with the bucket lock held.
+ */
+static inline void __insert_region_hydration(struct hash_table_bucket *bucket,
+                                            struct dm_clone_region_hydration *hd)
+{
+       hlist_add_head(&hd->h, &bucket->head);
+}
+
+/*
+ * This function inserts a hydration into the hash table, unless someone else
+ * managed to insert a hydration for the same region first. In the latter case
+ * it returns the existing hydration descriptor for this region.
+ *
+ * NOTE: Must be called with the bucket lock held.
+ */
+static struct dm_clone_region_hydration *
+__find_or_insert_region_hydration(struct hash_table_bucket *bucket,
+                                 struct dm_clone_region_hydration *hd)
+{
+       struct dm_clone_region_hydration *hd2;
+
+       hd2 = __hash_find(bucket, hd->region_nr);
+       if (hd2)
+               return hd2;
+
+       __insert_region_hydration(bucket, hd);
+
+       return hd;
+}
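
A stand-alone model of the allocate-then-find-or-insert pattern this helper enables: the
descriptor is allocated outside the lock, and if another context already published a
hydration for the same region, the fresh allocation is simply discarded. The single
bucket, plain list and pthread mutex below are simplifications for illustration only.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct hydration {
        unsigned long region_nr;
        struct hydration *next;
};

static struct hydration *bucket_head;
static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;

/* Must be called with bucket_lock held, like __find_or_insert_region_hydration(). */
static struct hydration *find_or_insert(struct hydration *hd)
{
        struct hydration *cur;

        for (cur = bucket_head; cur; cur = cur->next)
                if (cur->region_nr == hd->region_nr)
                        return cur;             /* someone else got there first */

        hd->next = bucket_head;
        bucket_head = hd;
        return hd;
}

static void start_hydration(unsigned long region_nr)
{
        struct hydration *hd = malloc(sizeof(*hd));
        struct hydration *winner;

        if (!hd)                                /* unlike the kernel's mempool, malloc can fail */
                return;
        hd->region_nr = region_nr;

        pthread_mutex_lock(&bucket_lock);
        winner = find_or_insert(hd);
        pthread_mutex_unlock(&bucket_lock);

        if (winner != hd) {
                free(hd);                       /* lost the race: drop our copy */
                printf("region %lu is already being hydrated\n", region_nr);
        } else {
                printf("started hydrating region %lu\n", region_nr);
        }
}

int main(void)
{
        start_hydration(7);
        start_hydration(7);     /* second call finds the existing descriptor */
        return 0;
}
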
+
+/*---------------------------------------------------------------------------*/
+
+/* Allocate a hydration */
+static struct dm_clone_region_hydration *alloc_hydration(struct clone *clone)
+{
+       struct dm_clone_region_hydration *hd;
+
+       /*
+        * Allocate a hydration from the hydration mempool.
+        * This might block but it can't fail.
+        */
+       hd = mempool_alloc(&clone->hydration_pool, GFP_NOIO);
+       hd->clone = clone;
+
+       return hd;
+}
+
+static inline void free_hydration(struct dm_clone_region_hydration *hd)
+{
+       mempool_free(hd, &hd->clone->hydration_pool);
+}
+
+/* Initialize a hydration */
+static void hydration_init(struct dm_clone_region_hydration *hd, unsigned long region_nr)
+{
+       hd->region_nr = region_nr;
+       hd->overwrite_bio = NULL;
+       bio_list_init(&hd->deferred_bios);
+       hd->status = 0;
+
+       INIT_LIST_HEAD(&hd->list);
+       INIT_HLIST_NODE(&hd->h);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Update dm-clone's metadata after a region has finished hydrating and remove
+ * hydration from the hash table.
+ */
+static int hydration_update_metadata(struct dm_clone_region_hydration *hd)
+{
+       int r = 0;
+       unsigned long flags;
+       struct hash_table_bucket *bucket;
+       struct clone *clone = hd->clone;
+
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
+               r = -EPERM;
+
+       /* Update the metadata */
+       if (likely(!r) && hd->status == BLK_STS_OK)
+               r = dm_clone_set_region_hydrated(clone->cmd, hd->region_nr);
+
+       bucket = get_hash_table_bucket(clone, hd->region_nr);
+
+       /* Remove hydration from hash table */
+       bucket_lock_irqsave(bucket, flags);
+       hlist_del(&hd->h);
+       bucket_unlock_irqrestore(bucket, flags);
+
+       return r;
+}
+
+/*
+ * Complete a region's hydration:
+ *
+ *     1. Update dm-clone's metadata.
+ *     2. Remove hydration from hash table.
+ *     3. Complete overwrite bio.
+ *     4. Issue deferred bios.
+ *     5. If this was the last hydration, wake up anyone waiting for
+ *        hydrations to finish.
+ */
+static void hydration_complete(struct dm_clone_region_hydration *hd)
+{
+       int r;
+       blk_status_t status;
+       struct clone *clone = hd->clone;
+
+       r = hydration_update_metadata(hd);
+
+       if (hd->status == BLK_STS_OK && likely(!r)) {
+               if (hd->overwrite_bio)
+                       complete_overwrite_bio(clone, hd->overwrite_bio);
+
+               issue_deferred_bios(clone, &hd->deferred_bios);
+       } else {
+               status = r ? BLK_STS_IOERR : hd->status;
+
+               if (hd->overwrite_bio)
+                       bio_list_add(&hd->deferred_bios, hd->overwrite_bio);
+
+               fail_bios(&hd->deferred_bios, status);
+       }
+
+       free_hydration(hd);
+
+       if (atomic_dec_and_test(&clone->hydrations_in_flight))
+               wakeup_hydration_waiters(clone);
+}
+
+static void hydration_kcopyd_callback(int read_err, unsigned long write_err, void *context)
+{
+       blk_status_t status;
+
+       struct dm_clone_region_hydration *tmp, *hd = context;
+       struct clone *clone = hd->clone;
+
+       LIST_HEAD(batched_hydrations);
+
+       if (read_err || write_err) {
+               DMERR_LIMIT("%s: hydration failed", clone_device_name(clone));
+               status = BLK_STS_IOERR;
+       } else {
+               status = BLK_STS_OK;
+       }
+       list_splice_tail(&hd->list, &batched_hydrations);
+
+       hd->status = status;
+       hydration_complete(hd);
+
+       /* Complete batched hydrations */
+       list_for_each_entry_safe(hd, tmp, &batched_hydrations, list) {
+               hd->status = status;
+               hydration_complete(hd);
+       }
+
+       /* Continue background hydration if there is no I/O in flight */
+       if (test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags) &&
+           !atomic_read(&clone->ios_in_flight))
+               wake_worker(clone);
+}
+
+static void hydration_copy(struct dm_clone_region_hydration *hd, unsigned int nr_regions)
+{
+       unsigned long region_start, region_end;
+       sector_t tail_size, region_size, total_size;
+       struct dm_io_region from, to;
+       struct clone *clone = hd->clone;
+
+       region_size = clone->region_size;
+       region_start = hd->region_nr;
+       region_end = region_start + nr_regions - 1;
+
+       total_size = (nr_regions - 1) << clone->region_shift;
+
+       if (region_end == clone->nr_regions - 1) {
+               /*
+                * The last region of the target might be smaller than
+                * region_size.
+                */
+               tail_size = clone->ti->len & (region_size - 1);
+               if (!tail_size)
+                       tail_size = region_size;
+       } else {
+               tail_size = region_size;
+       }
+
+       total_size += tail_size;
+
+       from.bdev = clone->source_dev->bdev;
+       from.sector = region_to_sector(clone, region_start);
+       from.count = total_size;
+
+       to.bdev = clone->dest_dev->bdev;
+       to.sector = from.sector;
+       to.count = from.count;
+
+       /* Issue copy */
+       atomic_add(nr_regions, &clone->hydrations_in_flight);
+       dm_kcopyd_copy(clone->kcopyd_client, &from, 1, &to, 0,
+                      hydration_kcopyd_callback, hd);
+}
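
Only the last region of the target can be shorter than region_size, so the copy covers (nr_regions - 1) full regions plus a tail. A standalone sketch of that arithmetic, assuming (as the target enforces) that region_size is a power of two; the names and the sector_t typedef are illustrative:

  #include <assert.h>
  #include <stdio.h>

  typedef unsigned long long sector_t;

  /* Sectors copied for nr_regions starting at region_start, mirroring hydration_copy(). */
  static sector_t copy_size(sector_t target_len, sector_t region_size,
                            unsigned long nr_total_regions,
                            unsigned long region_start, unsigned int nr_regions)
  {
      unsigned long region_end = region_start + nr_regions - 1;
      unsigned int region_shift = __builtin_ctzll(region_size);
      sector_t total = (sector_t)(nr_regions - 1) << region_shift;
      sector_t tail = region_size;

      if (region_end == nr_total_regions - 1) {
          /* Last region: only the part up to the end of the target. */
          tail = target_len & (region_size - 1);
          if (!tail)
              tail = region_size;
      }

      return total + tail;
  }

  int main(void)
  {
      /* 100-sector target, 8-sector regions -> 13 regions, the last one is 4 sectors. */
      assert(copy_size(100, 8, 13, 0, 1) == 8);
      assert(copy_size(100, 8, 13, 12, 1) == 4);
      assert(copy_size(100, 8, 13, 10, 3) == 8 + 8 + 4);
      printf("ok\n");
      return 0;
  }
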
+
+static void overwrite_endio(struct bio *bio)
+{
+       struct dm_clone_region_hydration *hd = bio->bi_private;
+
+       bio->bi_end_io = hd->overwrite_bio_end_io;
+       hd->status = bio->bi_status;
+
+       hydration_complete(hd);
+}
+
+static void hydration_overwrite(struct dm_clone_region_hydration *hd, struct bio *bio)
+{
+       /*
+        * We don't need to save and restore bio->bi_private because device
+        * mapper core generates a new bio for us to use, with clean
+        * bi_private.
+        */
+       hd->overwrite_bio = bio;
+       hd->overwrite_bio_end_io = bio->bi_end_io;
+
+       bio->bi_end_io = overwrite_endio;
+       bio->bi_private = hd;
+
+       atomic_inc(&hd->clone->hydrations_in_flight);
+       generic_make_request(bio);
+}
+
+/*
+ * Hydrate bio's region.
+ *
+ * This function starts the hydration of the bio's region and puts the bio in
+ * the list of deferred bios for this region. If, by the time this function is
+ * called, the region has already finished hydrating, the bio is submitted to
+ * the destination device instead.
+ *
+ * NOTE: The bio remapping must be performed by the caller.
+ */
+static void hydrate_bio_region(struct clone *clone, struct bio *bio)
+{
+       unsigned long flags;
+       unsigned long region_nr;
+       struct hash_table_bucket *bucket;
+       struct dm_clone_region_hydration *hd, *hd2;
+
+       region_nr = bio_to_region(clone, bio);
+       bucket = get_hash_table_bucket(clone, region_nr);
+
+       bucket_lock_irqsave(bucket, flags);
+
+       hd = __hash_find(bucket, region_nr);
+       if (hd) {
+               /* Someone else is hydrating the region */
+               bio_list_add(&hd->deferred_bios, bio);
+               bucket_unlock_irqrestore(bucket, flags);
+               return;
+       }
+
+       if (dm_clone_is_region_hydrated(clone->cmd, region_nr)) {
+               /* The region has been hydrated */
+               bucket_unlock_irqrestore(bucket, flags);
+               issue_bio(clone, bio);
+               return;
+       }
+
+       /*
+        * We must allocate a hydration descriptor and start the hydration of
+        * the corresponding region.
+        */
+       bucket_unlock_irqrestore(bucket, flags);
+
+       hd = alloc_hydration(clone);
+       hydration_init(hd, region_nr);
+
+       bucket_lock_irqsave(bucket, flags);
+
+       /* Check if the region has been hydrated in the meantime. */
+       if (dm_clone_is_region_hydrated(clone->cmd, region_nr)) {
+               bucket_unlock_irqrestore(bucket, flags);
+               free_hydration(hd);
+               issue_bio(clone, bio);
+               return;
+       }
+
+       hd2 = __find_or_insert_region_hydration(bucket, hd);
+       if (hd2 != hd) {
+               /* Someone else started the region's hydration. */
+               bio_list_add(&hd2->deferred_bios, bio);
+               bucket_unlock_irqrestore(bucket, flags);
+               free_hydration(hd);
+               return;
+       }
+
+       /*
+        * If the metadata mode is RO or FAIL then there is no point starting a
+        * hydration, since we will not be able to update the metadata when the
+        * hydration finishes.
+        */
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+               hlist_del(&hd->h);
+               bucket_unlock_irqrestore(bucket, flags);
+               free_hydration(hd);
+               bio_io_error(bio);
+               return;
+       }
+
+       /*
+        * Start region hydration.
+        *
+        * If a bio overwrites a region, i.e., its size is equal to the
+        * region's size, then we don't need to copy the region from the source
+        * to the destination device.
+        */
+       if (is_overwrite_bio(clone, bio)) {
+               bucket_unlock_irqrestore(bucket, flags);
+               hydration_overwrite(hd, bio);
+       } else {
+               bio_list_add(&hd->deferred_bios, bio);
+               bucket_unlock_irqrestore(bucket, flags);
+               hydration_copy(hd, 1);
+       }
+}
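
The notable part of hydrate_bio_region() is its locking shape: allocating the descriptor may sleep, so it happens with the bucket lock dropped, and everything checked before the allocation is re-checked once the lock is retaken. A minimal userspace sketch of that drop-allocate-recheck pattern, with hypothetical stand-ins for the dm-clone primitives:

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>
  #include <stdlib.h>

  static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;
  static bool region_hydrated;   /* stand-in for dm_clone_is_region_hydrated() */
  static void *in_progress;      /* stand-in for a __hash_find() hit */

  static void start_hydration(void)
  {
      void *hd;

      pthread_mutex_lock(&bucket_lock);
      if (in_progress || region_hydrated) {      /* fast paths */
          pthread_mutex_unlock(&bucket_lock);
          return;
      }
      pthread_mutex_unlock(&bucket_lock);

      hd = malloc(64);           /* may sleep, like mempool_alloc(GFP_NOIO) */

      pthread_mutex_lock(&bucket_lock);
      if (region_hydrated || in_progress) {      /* state may have changed meanwhile */
          pthread_mutex_unlock(&bucket_lock);
          free(hd);
          return;
      }
      in_progress = hd;          /* we won the race; the copy would start here */
      pthread_mutex_unlock(&bucket_lock);
  }

  int main(void)
  {
      start_hydration();
      printf("hydration started: %d\n", in_progress != NULL);
      return 0;
  }
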
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Background hydrations.
+ */
+
+/*
+ * Batch region hydrations.
+ *
+ * To better utilize device bandwidth we batch together the hydration of
+ * adjacent regions. This allows us to use small region sizes, e.g. 4KB, which
+ * are good for small, random write performance (whole-region overwrites of
+ * un-hydrated regions skip the copy), while still issuing big copy requests to
+ * kcopyd and achieving high hydration bandwidth.
+ */
+struct batch_info {
+       struct dm_clone_region_hydration *head;
+       unsigned int nr_batched_regions;
+};
+
+static void __batch_hydration(struct batch_info *batch,
+                             struct dm_clone_region_hydration *hd)
+{
+       struct clone *clone = hd->clone;
+       unsigned int max_batch_size = READ_ONCE(clone->hydration_batch_size);
+
+       if (batch->head) {
+               /* Try to extend the current batch */
+               if (batch->nr_batched_regions < max_batch_size &&
+                   (batch->head->region_nr + batch->nr_batched_regions) == hd->region_nr) {
+                       list_add_tail(&hd->list, &batch->head->list);
+                       batch->nr_batched_regions++;
+                       hd = NULL;
+               }
+
+               /* Check if we should issue the current batch */
+               if (batch->nr_batched_regions >= max_batch_size || hd) {
+                       hydration_copy(batch->head, batch->nr_batched_regions);
+                       batch->head = NULL;
+                       batch->nr_batched_regions = 0;
+               }
+       }
+
+       if (!hd)
+               return;
+
+       /* We treat max batch sizes of zero and one equivalently */
+       if (max_batch_size <= 1) {
+               hydration_copy(hd, 1);
+               return;
+       }
+
+       /* Start a new batch */
+       BUG_ON(!list_empty(&hd->list));
+       batch->head = hd;
+       batch->nr_batched_regions = 1;
+}
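
A compact userspace sketch of the same batching policy, slightly simplified (a full batch is flushed on the next call rather than immediately); issue_copy(), add_region() and the region list are made up for illustration:

  #include <stdio.h>

  struct batch {
      unsigned long start;       /* first region of the current batch */
      unsigned int nr;           /* 0 means "no open batch" */
  };

  static void issue_copy(unsigned long start, unsigned int nr)
  {
      printf("copy regions %lu..%lu (%u)\n", start, start + nr - 1, nr);
  }

  /* Same policy as __batch_hydration(): extend if adjacent, else flush and restart. */
  static void add_region(struct batch *b, unsigned long region, unsigned int max)
  {
      if (b->nr && b->nr < max && region == b->start + b->nr) {
          b->nr++;
          return;
      }
      if (b->nr)
          issue_copy(b->start, b->nr);
      if (max <= 1) {            /* batch sizes 0 and 1 behave the same */
          issue_copy(region, 1);
          b->nr = 0;
          return;
      }
      b->start = region;
      b->nr = 1;
  }

  int main(void)
  {
      struct batch b = { 0, 0 };
      unsigned long regions[] = { 10, 11, 12, 14, 15, 40 };
      unsigned int i;

      for (i = 0; i < sizeof(regions) / sizeof(regions[0]); i++)
          add_region(&b, regions[i], 32);
      if (b.nr)
          issue_copy(b.start, b.nr);   /* flush the last open batch, like do_hydration() */
      return 0;
  }
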
+
+static unsigned long __start_next_hydration(struct clone *clone,
+                                           unsigned long offset,
+                                           struct batch_info *batch)
+{
+       unsigned long flags;
+       struct hash_table_bucket *bucket;
+       struct dm_clone_region_hydration *hd;
+       unsigned long nr_regions = clone->nr_regions;
+
+       hd = alloc_hydration(clone);
+
+       /* Try to find a region to hydrate. */
+       do {
+               offset = dm_clone_find_next_unhydrated_region(clone->cmd, offset);
+               if (offset == nr_regions)
+                       break;
+
+               bucket = get_hash_table_bucket(clone, offset);
+               bucket_lock_irqsave(bucket, flags);
+
+               if (!dm_clone_is_region_hydrated(clone->cmd, offset) &&
+                   !__hash_find(bucket, offset)) {
+                       hydration_init(hd, offset);
+                       __insert_region_hydration(bucket, hd);
+                       bucket_unlock_irqrestore(bucket, flags);
+
+                       /* Batch hydration */
+                       __batch_hydration(batch, hd);
+
+                       return (offset + 1);
+               }
+
+               bucket_unlock_irqrestore(bucket, flags);
+
+       } while (++offset < nr_regions);
+
+       if (hd)
+               free_hydration(hd);
+
+       return offset;
+}
+
+/*
+ * This function searches for regions that still reside in the source device
+ * and starts their hydration.
+ */
+static void do_hydration(struct clone *clone)
+{
+       unsigned int current_volume;
+       unsigned long offset, nr_regions = clone->nr_regions;
+
+       struct batch_info batch = {
+               .head = NULL,
+               .nr_batched_regions = 0,
+       };
+
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
+               return;
+
+       if (dm_clone_is_hydration_done(clone->cmd))
+               return;
+
+       /*
+        * Avoid race with device suspension.
+        */
+       atomic_inc(&clone->hydrations_in_flight);
+
+       /*
+        * Make sure atomic_inc() is ordered before test_bit(), otherwise we
+        * might race with clone_postsuspend() and start a region hydration
+        * after the target has been suspended.
+        *
+        * This is paired with the smp_mb__after_atomic() in
+        * clone_postsuspend().
+        */
+       smp_mb__after_atomic();
+
+       offset = clone->hydration_offset;
+       while (likely(!test_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags)) &&
+              !atomic_read(&clone->ios_in_flight) &&
+              test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags) &&
+              offset < nr_regions) {
+               current_volume = atomic_read(&clone->hydrations_in_flight);
+               current_volume += batch.nr_batched_regions;
+
+               if (current_volume > READ_ONCE(clone->hydration_threshold))
+                       break;
+
+               offset = __start_next_hydration(clone, offset, &batch);
+       }
+
+       if (batch.head)
+               hydration_copy(batch.head, batch.nr_batched_regions);
+
+       if (offset >= nr_regions)
+               offset = 0;
+
+       clone->hydration_offset = offset;
+
+       if (atomic_dec_and_test(&clone->hydrations_in_flight))
+               wakeup_hydration_waiters(clone);
+}
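
The loop above self-throttles: it stops issuing new copies once the in-flight hydrations plus the not-yet-issued batch exceed hydration_threshold, and it backs off entirely while user I/O is in flight. A tiny sketch of just the volume check (names are illustrative):

  #include <stdbool.h>
  #include <stdio.h>

  /* Mirror of the loop condition in do_hydration(): keep going only while the
   * "volume" of outstanding work stays at or below the threshold. */
  static bool may_start_more(unsigned int hydrations_in_flight,
                             unsigned int batched_not_issued,
                             unsigned int hydration_threshold)
  {
      return hydrations_in_flight + batched_not_issued <= hydration_threshold;
  }

  int main(void)
  {
      printf("%d\n", may_start_more(6, 2, 8));   /* 8 <= 8 -> keep going */
      printf("%d\n", may_start_more(7, 2, 8));   /* 9 >  8 -> back off  */
      return 0;
  }
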
+
+/*---------------------------------------------------------------------------*/
+
+static bool need_commit_due_to_time(struct clone *clone)
+{
+       return !time_in_range(jiffies, clone->last_commit_jiffies,
+                             clone->last_commit_jiffies + COMMIT_PERIOD);
+}
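
need_commit_due_to_time() leans on jiffies-safe comparisons, so it keeps working across counter wrap-around. A userspace sketch of the underlying wrap-safe test, not the kernel's time_in_range() itself:

  #include <stdbool.h>
  #include <stdio.h>

  /* Wrap-safe "a is after b" for free-running unsigned counters. */
  static bool time_after_ul(unsigned long a, unsigned long b)
  {
      return (long)(b - a) < 0;
  }

  /* Commit if more than `period` ticks have passed since the last commit. */
  static bool need_commit(unsigned long now, unsigned long last_commit,
                          unsigned long period)
  {
      return time_after_ul(now, last_commit + period);
  }

  int main(void)
  {
      printf("%d\n", need_commit(1000, 900, 50));                  /* 1 */
      printf("%d\n", need_commit(1000, 990, 50));                  /* 0 */
      /* Still correct when the counter has wrapped past zero. */
      printf("%d\n", need_commit(10, (unsigned long)-40, 100));    /* 0 */
      return 0;
  }
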
+
+/*
+ * A non-zero return indicates read-only or fail mode.
+ */
+static int commit_metadata(struct clone *clone)
+{
+       int r = 0;
+
+       mutex_lock(&clone->commit_lock);
+
+       if (!dm_clone_changed_this_transaction(clone->cmd))
+               goto out;
+
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY)) {
+               r = -EPERM;
+               goto out;
+       }
+
+       r = dm_clone_metadata_commit(clone->cmd);
+
+       if (unlikely(r)) {
+               __metadata_operation_failed(clone, "dm_clone_metadata_commit", r);
+               goto out;
+       }
+
+       if (dm_clone_is_hydration_done(clone->cmd))
+               dm_table_event(clone->ti->table);
+out:
+       mutex_unlock(&clone->commit_lock);
+
+       return r;
+}
+
+static void process_deferred_discards(struct clone *clone)
+{
+       int r = -EPERM;
+       struct bio *bio;
+       struct blk_plug plug;
+       unsigned long rs, re, flags;
+       struct bio_list discards = BIO_EMPTY_LIST;
+
+       spin_lock_irqsave(&clone->lock, flags);
+       bio_list_merge(&discards, &clone->deferred_discard_bios);
+       bio_list_init(&clone->deferred_discard_bios);
+       spin_unlock_irqrestore(&clone->lock, flags);
+
+       if (bio_list_empty(&discards))
+               return;
+
+       if (unlikely(get_clone_mode(clone) >= CM_READ_ONLY))
+               goto out;
+
+       /* Update the metadata */
+       bio_list_for_each(bio, &discards) {
+               bio_region_range(clone, bio, &rs, &re);
+               /*
+                * A discard request might cover regions that have already been
+                * hydrated. There is no need to update the metadata for these
+                * regions.
+                */
+               r = dm_clone_cond_set_range(clone->cmd, rs, re - rs);
+
+               if (unlikely(r))
+                       break;
+       }
+out:
+       blk_start_plug(&plug);
+       while ((bio = bio_list_pop(&discards)))
+               complete_discard_bio(clone, bio, r == 0);
+       blk_finish_plug(&plug);
+}
+
+static void process_deferred_bios(struct clone *clone)
+{
+       unsigned long flags;
+       struct bio_list bios = BIO_EMPTY_LIST;
+
+       spin_lock_irqsave(&clone->lock, flags);
+       bio_list_merge(&bios, &clone->deferred_bios);
+       bio_list_init(&clone->deferred_bios);
+       spin_unlock_irqrestore(&clone->lock, flags);
+
+       if (bio_list_empty(&bios))
+               return;
+
+       submit_bios(&bios);
+}
+
+static void process_deferred_flush_bios(struct clone *clone)
+{
+       struct bio *bio;
+       unsigned long flags;
+       struct bio_list bios = BIO_EMPTY_LIST;
+       struct bio_list bio_completions = BIO_EMPTY_LIST;
+
+       /*
+        * If there are any deferred flush bios, we must commit the metadata
+        * before issuing them or signaling their completion.
+        */
+       spin_lock_irqsave(&clone->lock, flags);
+       bio_list_merge(&bios, &clone->deferred_flush_bios);
+       bio_list_init(&clone->deferred_flush_bios);
+
+       bio_list_merge(&bio_completions, &clone->deferred_flush_completions);
+       bio_list_init(&clone->deferred_flush_completions);
+       spin_unlock_irqrestore(&clone->lock, flags);
+
+       if (bio_list_empty(&bios) && bio_list_empty(&bio_completions) &&
+           !(dm_clone_changed_this_transaction(clone->cmd) && need_commit_due_to_time(clone)))
+               return;
+
+       if (commit_metadata(clone)) {
+               bio_list_merge(&bios, &bio_completions);
+
+               while ((bio = bio_list_pop(&bios)))
+                       bio_io_error(bio);
+
+               return;
+       }
+
+       clone->last_commit_jiffies = jiffies;
+
+       while ((bio = bio_list_pop(&bio_completions)))
+               bio_endio(bio);
+
+       while ((bio = bio_list_pop(&bios)))
+               generic_make_request(bio);
+}
+
+static void do_worker(struct work_struct *work)
+{
+       struct clone *clone = container_of(work, typeof(*clone), worker);
+
+       process_deferred_bios(clone);
+       process_deferred_discards(clone);
+
+       /*
+        * process_deferred_flush_bios():
+        *
+        *   - Commit metadata
+        *
+        *   - Process deferred REQ_FUA completions
+        *
+        *   - Process deferred REQ_PREFLUSH bios
+        */
+       process_deferred_flush_bios(clone);
+
+       /* Background hydration */
+       do_hydration(clone);
+}
+
+/*
+ * Commit periodically so that not too much unwritten data builds up.
+ *
+ * Also, restart background hydration, if it has been stopped by in-flight I/O.
+ */
+static void do_waker(struct work_struct *work)
+{
+       struct clone *clone = container_of(to_delayed_work(work), struct clone, waker);
+
+       wake_worker(clone);
+       queue_delayed_work(clone->wq, &clone->waker, COMMIT_PERIOD);
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Target methods
+ */
+static int clone_map(struct dm_target *ti, struct bio *bio)
+{
+       struct clone *clone = ti->private;
+       unsigned long region_nr;
+
+       atomic_inc(&clone->ios_in_flight);
+
+       if (unlikely(get_clone_mode(clone) == CM_FAIL))
+               return DM_MAPIO_KILL;
+
+       /*
+        * REQ_PREFLUSH bios carry no data:
+        *
+        * - Commit metadata, if changed
+        *
+        * - Pass down to destination device
+        */
+       if (bio->bi_opf & REQ_PREFLUSH) {
+               remap_and_issue(clone, bio);
+               return DM_MAPIO_SUBMITTED;
+       }
+
+       bio->bi_iter.bi_sector = dm_target_offset(ti, bio->bi_iter.bi_sector);
+
+       /*
+        * dm-clone interprets discards and performs a fast hydration of the
+        * discarded regions, i.e., we skip the copy from the source device and
+        * just mark the regions as hydrated.
+        */
+       if (bio_op(bio) == REQ_OP_DISCARD) {
+               process_discard_bio(clone, bio);
+               return DM_MAPIO_SUBMITTED;
+       }
+
+       /*
+        * If the bio's region is hydrated, redirect it to the destination
+        * device.
+        *
+        * If the region is not hydrated and the bio is a READ, redirect it to
+        * the source device.
+        *
+        * Else, defer WRITE bio until after its region has been hydrated and
+        * start the region's hydration immediately.
+        */
+       region_nr = bio_to_region(clone, bio);
+       if (dm_clone_is_region_hydrated(clone->cmd, region_nr)) {
+               remap_and_issue(clone, bio);
+               return DM_MAPIO_SUBMITTED;
+       } else if (bio_data_dir(bio) == READ) {
+               remap_to_source(clone, bio);
+               return DM_MAPIO_REMAPPED;
+       }
+
+       remap_to_dest(clone, bio);
+       hydrate_bio_region(clone, bio);
+
+       return DM_MAPIO_SUBMITTED;
+}
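
Stripped of the bookkeeping, clone_map() is a small routing decision. An illustrative summary as a pure function, not the kernel API (the enum and parameter names are made up):

  #include <stdbool.h>
  #include <stdio.h>

  enum action {
      ISSUE_TO_DEST_VIA_WORKER,  /* flush: commit metadata first */
      FAST_HYDRATE_DISCARD,
      REMAP_TO_DEST,             /* region already hydrated */
      REMAP_TO_SOURCE,           /* read of an un-hydrated region */
      DEFER_AND_HYDRATE,         /* write to an un-hydrated region */
  };

  static enum action route_bio(bool is_flush, bool is_discard,
                               bool region_hydrated, bool is_read)
  {
      if (is_flush)
          return ISSUE_TO_DEST_VIA_WORKER;
      if (is_discard)
          return FAST_HYDRATE_DISCARD;
      if (region_hydrated)
          return REMAP_TO_DEST;
      if (is_read)
          return REMAP_TO_SOURCE;
      return DEFER_AND_HYDRATE;
  }

  int main(void)
  {
      printf("%d\n", route_bio(false, false, false, true));   /* 3: read from source */
      printf("%d\n", route_bio(false, false, false, false));  /* 4: hydrate, then write */
      return 0;
  }
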
+
+static int clone_endio(struct dm_target *ti, struct bio *bio, blk_status_t *error)
+{
+       struct clone *clone = ti->private;
+
+       atomic_dec(&clone->ios_in_flight);
+
+       return DM_ENDIO_DONE;
+}
+
+static void emit_flags(struct clone *clone, char *result, unsigned int maxlen,
+                      ssize_t *sz_ptr)
+{
+       ssize_t sz = *sz_ptr;
+       unsigned int count;
+
+       count = !test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+       count += !test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+
+       DMEMIT("%u ", count);
+
+       if (!test_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags))
+               DMEMIT("no_hydration ");
+
+       if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags))
+               DMEMIT("no_discard_passdown ");
+
+       *sz_ptr = sz;
+}
+
+static void emit_core_args(struct clone *clone, char *result,
+                          unsigned int maxlen, ssize_t *sz_ptr)
+{
+       ssize_t sz = *sz_ptr;
+       unsigned int count = 4;
+
+       DMEMIT("%u hydration_threshold %u hydration_batch_size %u ", count,
+              READ_ONCE(clone->hydration_threshold),
+              READ_ONCE(clone->hydration_batch_size));
+
+       *sz_ptr = sz;
+}
+
+/*
+ * Status format:
+ *
+ * <metadata block size> <#used metadata blocks>/<#total metadata blocks>
+ * <clone region size> <#hydrated regions>/<#total regions> <#hydrating regions>
+ * <#features> <features>* <#core args> <core args>* <clone metadata mode>
+ */
+static void clone_status(struct dm_target *ti, status_type_t type,
+                        unsigned int status_flags, char *result,
+                        unsigned int maxlen)
+{
+       int r;
+       unsigned int i;
+       ssize_t sz = 0;
+       dm_block_t nr_free_metadata_blocks = 0;
+       dm_block_t nr_metadata_blocks = 0;
+       char buf[BDEVNAME_SIZE];
+       struct clone *clone = ti->private;
+
+       switch (type) {
+       case STATUSTYPE_INFO:
+               if (get_clone_mode(clone) == CM_FAIL) {
+                       DMEMIT("Fail");
+                       break;
+               }
+
+               /* Commit to ensure statistics aren't out-of-date */
+               if (!(status_flags & DM_STATUS_NOFLUSH_FLAG) && !dm_suspended(ti))
+                       (void) commit_metadata(clone);
+
+               r = dm_clone_get_free_metadata_block_count(clone->cmd, &nr_free_metadata_blocks);
+
+               if (r) {
+                       DMERR("%s: dm_clone_get_free_metadata_block_count returned %d",
+                             clone_device_name(clone), r);
+                       goto error;
+               }
+
+               r = dm_clone_get_metadata_dev_size(clone->cmd, &nr_metadata_blocks);
+
+               if (r) {
+                       DMERR("%s: dm_clone_get_metadata_dev_size returned %d",
+                             clone_device_name(clone), r);
+                       goto error;
+               }
+
+               DMEMIT("%u %llu/%llu %llu %lu/%lu %u ",
+                      DM_CLONE_METADATA_BLOCK_SIZE,
+                      (unsigned long long)(nr_metadata_blocks - nr_free_metadata_blocks),
+                      (unsigned long long)nr_metadata_blocks,
+                      (unsigned long long)clone->region_size,
+                      dm_clone_nr_of_hydrated_regions(clone->cmd),
+                      clone->nr_regions,
+                      atomic_read(&clone->hydrations_in_flight));
+
+               emit_flags(clone, result, maxlen, &sz);
+               emit_core_args(clone, result, maxlen, &sz);
+
+               switch (get_clone_mode(clone)) {
+               case CM_WRITE:
+                       DMEMIT("rw");
+                       break;
+               case CM_READ_ONLY:
+                       DMEMIT("ro");
+                       break;
+               case CM_FAIL:
+                       DMEMIT("Fail");
+               }
+
+               break;
+
+       case STATUSTYPE_TABLE:
+               format_dev_t(buf, clone->metadata_dev->bdev->bd_dev);
+               DMEMIT("%s ", buf);
+
+               format_dev_t(buf, clone->dest_dev->bdev->bd_dev);
+               DMEMIT("%s ", buf);
+
+               format_dev_t(buf, clone->source_dev->bdev->bd_dev);
+               DMEMIT("%s", buf);
+
+               for (i = 0; i < clone->nr_ctr_args; i++)
+                       DMEMIT(" %s", clone->ctr_args[i]);
+       }
+
+       return;
+
+error:
+       DMEMIT("Error");
+}
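
For orientation, a STATUSTYPE_INFO line following the format documented above might look like this; every number below is invented purely for illustration:

  4096 24/262144 8 1024/131072 1 1 no_discard_passdown 4 hydration_threshold 1 hydration_batch_size 1 rw
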
+
+static int clone_is_congested(struct dm_target_callbacks *cb, int bdi_bits)
+{
+       struct request_queue *dest_q, *source_q;
+       struct clone *clone = container_of(cb, struct clone, callbacks);
+
+       source_q = bdev_get_queue(clone->source_dev->bdev);
+       dest_q = bdev_get_queue(clone->dest_dev->bdev);
+
+       return (bdi_congested(dest_q->backing_dev_info, bdi_bits) |
+               bdi_congested(source_q->backing_dev_info, bdi_bits));
+}
+
+static sector_t get_dev_size(struct dm_dev *dev)
+{
+       return i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT;
+}
+
+/*---------------------------------------------------------------------------*/
+
+/*
+ * Construct a clone device mapping:
+ *
+ * clone <metadata dev> <destination dev> <source dev> <region size>
+ *     [<#feature args> [<feature arg>]* [<#core args> [key value]*]]
+ *
+ * metadata dev: Fast device holding the persistent metadata
+ * destination dev: The destination device, which will become a clone of the
+ *                  source device
+ * source dev: The read-only source device that gets cloned
+ * region size: dm-clone unit size in sectors
+ *
+ * #feature args: Number of feature arguments passed
+ * feature args: E.g. no_hydration, no_discard_passdown
+ *
+ * #core arguments: An even number of core arguments
+ * core arguments: Key/value pairs for tuning the core
+ *                E.g. 'hydration_threshold 256'
+ */
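
For illustration, a hypothetical table line for a 1048576000-sector device, with /dev/sdb as the metadata device, /dev/sdc as the destination, /dev/sda as the read-only source, 4KB (8-sector) regions, background hydration disabled and a raised hydration threshold, could look like:

  0 1048576000 clone /dev/sdb /dev/sdc /dev/sda 8 1 no_hydration 2 hydration_threshold 512
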
+static int parse_feature_args(struct dm_arg_set *as, struct clone *clone)
+{
+       int r;
+       unsigned int argc;
+       const char *arg_name;
+       struct dm_target *ti = clone->ti;
+
+       const struct dm_arg args = {
+               .min = 0,
+               .max = 2,
+               .error = "Invalid number of feature arguments"
+       };
+
+       /* No feature arguments supplied */
+       if (!as->argc)
+               return 0;
+
+       r = dm_read_arg_group(&args, as, &argc, &ti->error);
+       if (r)
+               return r;
+
+       while (argc) {
+               arg_name = dm_shift_arg(as);
+               argc--;
+
+               if (!strcasecmp(arg_name, "no_hydration")) {
+                       __clear_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+               } else if (!strcasecmp(arg_name, "no_discard_passdown")) {
+                       __clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+               } else {
+                       ti->error = "Invalid feature argument";
+                       return -EINVAL;
+               }
+       }
+
+       return 0;
+}
+
+static int parse_core_args(struct dm_arg_set *as, struct clone *clone)
+{
+       int r;
+       unsigned int argc;
+       unsigned int value;
+       const char *arg_name;
+       struct dm_target *ti = clone->ti;
+
+       const struct dm_arg args = {
+               .min = 0,
+               .max = 4,
+               .error = "Invalid number of core arguments"
+       };
+
+       /* Initialize core arguments */
+       clone->hydration_batch_size = DEFAULT_HYDRATION_BATCH_SIZE;
+       clone->hydration_threshold = DEFAULT_HYDRATION_THRESHOLD;
+
+       /* No core arguments supplied */
+       if (!as->argc)
+               return 0;
+
+       r = dm_read_arg_group(&args, as, &argc, &ti->error);
+       if (r)
+               return r;
+
+       if (argc & 1) {
+               ti->error = "Number of core arguments must be even";
+               return -EINVAL;
+       }
+
+       while (argc) {
+               arg_name = dm_shift_arg(as);
+               argc -= 2;
+
+               if (!strcasecmp(arg_name, "hydration_threshold")) {
+                       if (kstrtouint(dm_shift_arg(as), 10, &value)) {
+                               ti->error = "Invalid value for argument `hydration_threshold'";
+                               return -EINVAL;
+                       }
+                       clone->hydration_threshold = value;
+               } else if (!strcasecmp(arg_name, "hydration_batch_size")) {
+                       if (kstrtouint(dm_shift_arg(as), 10, &value)) {
+                               ti->error = "Invalid value for argument `hydration_batch_size'";
+                               return -EINVAL;
+                       }
+                       clone->hydration_batch_size = value;
+               } else {
+                       ti->error = "Invalid core argument";
+                       return -EINVAL;
+               }
+       }
+
+       return 0;
+}
+
+static int parse_region_size(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+       int r;
+       unsigned int region_size;
+       struct dm_arg arg;
+
+       arg.min = MIN_REGION_SIZE;
+       arg.max = MAX_REGION_SIZE;
+       arg.error = "Invalid region size";
+
+       r = dm_read_arg(&arg, as, &region_size, error);
+       if (r)
+               return r;
+
+       /* Check region size is a power of 2 */
+       if (!is_power_of_2(region_size)) {
+               *error = "Region size is not a power of 2";
+               return -EINVAL;
+       }
+
+       /* Validate the region size against the device logical block size */
+       if (region_size % (bdev_logical_block_size(clone->source_dev->bdev) >> 9) ||
+           region_size % (bdev_logical_block_size(clone->dest_dev->bdev) >> 9)) {
+               *error = "Region size is not a multiple of device logical block size";
+               return -EINVAL;
+       }
+
+       clone->region_size = region_size;
+
+       return 0;
+}
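
A standalone sketch of the same two validity checks (the min/max bounds are handled separately by dm_read_arg()); sizes are in 512-byte sectors and the function name is made up:

  #include <stdbool.h>
  #include <stdio.h>

  static bool region_size_valid(unsigned int region_size,
                                unsigned int source_lbs_bytes,
                                unsigned int dest_lbs_bytes)
  {
      if (region_size == 0 || (region_size & (region_size - 1)))
          return false;  /* not a power of two */

      if (region_size % (source_lbs_bytes >> 9) ||
          region_size % (dest_lbs_bytes >> 9))
          return false;  /* finer-grained than a device logical block */

      return true;
  }

  int main(void)
  {
      printf("%d\n", region_size_valid(8, 512, 4096));   /* 4KB regions, 4Kn dest: ok */
      printf("%d\n", region_size_valid(4, 512, 4096));   /* 2KB regions, 4Kn dest: no */
      printf("%d\n", region_size_valid(12, 512, 512));   /* not a power of two: no */
      return 0;
  }
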
+
+static int validate_nr_regions(unsigned long n, char **error)
+{
+       /*
+        * dm_bitset restricts us to 2^32 regions. test_bit & co. restrict us
+        * further to 2^31 regions.
+        */
+       if (n > (1UL << 31)) {
+               *error = "Too many regions. Consider increasing the region size";
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+static int parse_metadata_dev(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+       int r;
+       sector_t metadata_dev_size;
+       char b[BDEVNAME_SIZE];
+
+       r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
+                         &clone->metadata_dev);
+       if (r) {
+               *error = "Error opening metadata device";
+               return r;
+       }
+
+       metadata_dev_size = get_dev_size(clone->metadata_dev);
+       if (metadata_dev_size > DM_CLONE_METADATA_MAX_SECTORS_WARNING)
+               DMWARN("Metadata device %s is larger than %u sectors: excess space will not be used.",
+                      bdevname(clone->metadata_dev->bdev, b), DM_CLONE_METADATA_MAX_SECTORS);
+
+       return 0;
+}
+
+static int parse_dest_dev(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+       int r;
+       sector_t dest_dev_size;
+
+       r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ | FMODE_WRITE,
+                         &clone->dest_dev);
+       if (r) {
+               *error = "Error opening destination device";
+               return r;
+       }
+
+       dest_dev_size = get_dev_size(clone->dest_dev);
+       if (dest_dev_size < clone->ti->len) {
+               dm_put_device(clone->ti, clone->dest_dev);
+               *error = "Device size larger than destination device";
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+static int parse_source_dev(struct clone *clone, struct dm_arg_set *as, char **error)
+{
+       int r;
+       sector_t source_dev_size;
+
+       r = dm_get_device(clone->ti, dm_shift_arg(as), FMODE_READ,
+                         &clone->source_dev);
+       if (r) {
+               *error = "Error opening source device";
+               return r;
+       }
+
+       source_dev_size = get_dev_size(clone->source_dev);
+       if (source_dev_size < clone->ti->len) {
+               dm_put_device(clone->ti, clone->source_dev);
+               *error = "Device size larger than source device";
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+static int copy_ctr_args(struct clone *clone, int argc, const char **argv, char **error)
+{
+       unsigned int i;
+       const char **copy;
+
+       copy = kcalloc(argc, sizeof(*copy), GFP_KERNEL);
+       if (!copy)
+               goto error;
+
+       for (i = 0; i < argc; i++) {
+               copy[i] = kstrdup(argv[i], GFP_KERNEL);
+
+               if (!copy[i]) {
+                       while (i--)
+                               kfree(copy[i]);
+                       kfree(copy);
+                       goto error;
+               }
+       }
+
+       clone->nr_ctr_args = argc;
+       clone->ctr_args = copy;
+       return 0;
+
+error:
+       *error = "Failed to allocate memory for table line";
+       return -ENOMEM;
+}
+
+static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+       int r;
+       struct clone *clone;
+       struct dm_arg_set as;
+
+       if (argc < 4) {
+               ti->error = "Invalid number of arguments";
+               return -EINVAL;
+       }
+
+       as.argc = argc;
+       as.argv = argv;
+
+       clone = kzalloc(sizeof(*clone), GFP_KERNEL);
+       if (!clone) {
+               ti->error = "Failed to allocate clone structure";
+               return -ENOMEM;
+       }
+
+       clone->ti = ti;
+
+       /* Initialize dm-clone flags */
+       __set_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+       __set_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
+       __set_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+
+       r = parse_metadata_dev(clone, &as, &ti->error);
+       if (r)
+               goto out_with_clone;
+
+       r = parse_dest_dev(clone, &as, &ti->error);
+       if (r)
+               goto out_with_meta_dev;
+
+       r = parse_source_dev(clone, &as, &ti->error);
+       if (r)
+               goto out_with_dest_dev;
+
+       r = parse_region_size(clone, &as, &ti->error);
+       if (r)
+               goto out_with_source_dev;
+
+       clone->region_shift = __ffs(clone->region_size);
+       clone->nr_regions = dm_sector_div_up(ti->len, clone->region_size);
+
+       r = validate_nr_regions(clone->nr_regions, &ti->error);
+       if (r)
+               goto out_with_source_dev;
+
+       r = dm_set_target_max_io_len(ti, clone->region_size);
+       if (r) {
+               ti->error = "Failed to set max io len";
+               goto out_with_source_dev;
+       }
+
+       r = parse_feature_args(&as, clone);
+       if (r)
+               goto out_with_source_dev;
+
+       r = parse_core_args(&as, clone);
+       if (r)
+               goto out_with_source_dev;
+
+       /* Load metadata */
+       clone->cmd = dm_clone_metadata_open(clone->metadata_dev->bdev, ti->len,
+                                           clone->region_size);
+       if (IS_ERR(clone->cmd)) {
+               ti->error = "Failed to load metadata";
+               r = PTR_ERR(clone->cmd);
+               goto out_with_source_dev;
+       }
+
+       __set_clone_mode(clone, CM_WRITE);
+
+       if (get_clone_mode(clone) != CM_WRITE) {
+               ti->error = "Unable to get write access to metadata, please check/repair metadata";
+               r = -EPERM;
+               goto out_with_metadata;
+       }
+
+       clone->last_commit_jiffies = jiffies;
+
+       /* Allocate hydration hash table */
+       r = hash_table_init(clone);
+       if (r) {
+               ti->error = "Failed to allocate hydration hash table";
+               goto out_with_metadata;
+       }
+
+       atomic_set(&clone->ios_in_flight, 0);
+       init_waitqueue_head(&clone->hydration_stopped);
+       spin_lock_init(&clone->lock);
+       bio_list_init(&clone->deferred_bios);
+       bio_list_init(&clone->deferred_discard_bios);
+       bio_list_init(&clone->deferred_flush_bios);
+       bio_list_init(&clone->deferred_flush_completions);
+       clone->hydration_offset = 0;
+       atomic_set(&clone->hydrations_in_flight, 0);
+
+       clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
+       if (!clone->wq) {
+               ti->error = "Failed to allocate workqueue";
+               r = -ENOMEM;
+               goto out_with_ht;
+       }
+
+       INIT_WORK(&clone->worker, do_worker);
+       INIT_DELAYED_WORK(&clone->waker, do_waker);
+
+       clone->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle);
+       if (IS_ERR(clone->kcopyd_client)) {
+               r = PTR_ERR(clone->kcopyd_client);
+               goto out_with_wq;
+       }
+
+       r = mempool_init_slab_pool(&clone->hydration_pool, MIN_HYDRATIONS,
+                                  _hydration_cache);
+       if (r) {
+               ti->error = "Failed to create dm_clone_region_hydration memory pool";
+               goto out_with_kcopyd;
+       }
+
+       /* Save a copy of the table line */
+       r = copy_ctr_args(clone, argc - 3, (const char **)argv + 3, &ti->error);
+       if (r)
+               goto out_with_mempool;
+
+       mutex_init(&clone->commit_lock);
+       clone->callbacks.congested_fn = clone_is_congested;
+       dm_table_add_target_callbacks(ti->table, &clone->callbacks);
+
+       /* Enable flushes */
+       ti->num_flush_bios = 1;
+       ti->flush_supported = true;
+
+       /* Enable discards */
+       ti->discards_supported = true;
+       ti->num_discard_bios = 1;
+
+       ti->private = clone;
+
+       return 0;
+
+out_with_mempool:
+       mempool_exit(&clone->hydration_pool);
+out_with_kcopyd:
+       dm_kcopyd_client_destroy(clone->kcopyd_client);
+out_with_wq:
+       destroy_workqueue(clone->wq);
+out_with_ht:
+       hash_table_exit(clone);
+out_with_metadata:
+       dm_clone_metadata_close(clone->cmd);
+out_with_source_dev:
+       dm_put_device(ti, clone->source_dev);
+out_with_dest_dev:
+       dm_put_device(ti, clone->dest_dev);
+out_with_meta_dev:
+       dm_put_device(ti, clone->metadata_dev);
+out_with_clone:
+       kfree(clone);
+
+       return r;
+}
+
+static void clone_dtr(struct dm_target *ti)
+{
+       unsigned int i;
+       struct clone *clone = ti->private;
+
+       mutex_destroy(&clone->commit_lock);
+
+       for (i = 0; i < clone->nr_ctr_args; i++)
+               kfree(clone->ctr_args[i]);
+       kfree(clone->ctr_args);
+
+       mempool_exit(&clone->hydration_pool);
+       dm_kcopyd_client_destroy(clone->kcopyd_client);
+       destroy_workqueue(clone->wq);
+       hash_table_exit(clone);
+       dm_clone_metadata_close(clone->cmd);
+       dm_put_device(ti, clone->source_dev);
+       dm_put_device(ti, clone->dest_dev);
+       dm_put_device(ti, clone->metadata_dev);
+
+       kfree(clone);
+}
+
+/*---------------------------------------------------------------------------*/
+
+static void clone_postsuspend(struct dm_target *ti)
+{
+       struct clone *clone = ti->private;
+
+       /*
+        * To successfully suspend the device:
+        *
+        *      - We cancel the delayed work for periodic commits and wait for
+        *        it to finish.
+        *
+        *      - We stop the background hydration, i.e. we prevent new region
+        *        hydrations from starting.
+        *
+        *      - We wait for any in-flight hydrations to finish.
+        *
+        *      - We flush the workqueue.
+        *
+        *      - We commit the metadata.
+        */
+       cancel_delayed_work_sync(&clone->waker);
+
+       set_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
+
+       /*
+        * Make sure set_bit() is ordered before atomic_read(), otherwise we
+        * might race with do_hydration() and miss some started region
+        * hydrations.
+        *
+        * This is paired with smp_mb__after_atomic() in do_hydration().
+        */
+       smp_mb__after_atomic();
+
+       wait_event(clone->hydration_stopped, !atomic_read(&clone->hydrations_in_flight));
+       flush_workqueue(clone->wq);
+
+       (void) commit_metadata(clone);
+}
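
The flag/counter pairing between clone_postsuspend() and do_hydration() guarantees that either the suspend path observes the in-flight increment or the hydration path observes the suspended flag. A userspace analogue using C11 seq_cst atomics, which already imply the ordering the kernel gets from its explicit barriers; purely illustrative:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static atomic_uint hydrations_in_flight;
  static atomic_bool hydration_suspended;

  /* Analogue of do_hydration()'s entry: announce work, then check the flag. */
  static bool try_start_hydration(void)
  {
      atomic_fetch_add(&hydrations_in_flight, 1);
      if (atomic_load(&hydration_suspended)) {
          atomic_fetch_sub(&hydrations_in_flight, 1);
          return false;          /* suspend already in progress */
      }
      return true;               /* caller decrements when the hydration completes */
  }

  /* Analogue of clone_postsuspend(): raise the flag, then wait for the counter. */
  static void suspend(void)
  {
      atomic_store(&hydration_suspended, true);
      while (atomic_load(&hydrations_in_flight))
          ;                      /* the kernel sleeps on a waitqueue instead */
  }

  int main(void)
  {
      printf("started: %d\n", try_start_hydration());
      atomic_fetch_sub(&hydrations_in_flight, 1);
      suspend();
      printf("started after suspend: %d\n", try_start_hydration());
      return 0;
  }
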
+
+static void clone_resume(struct dm_target *ti)
+{
+       struct clone *clone = ti->private;
+
+       clear_bit(DM_CLONE_HYDRATION_SUSPENDED, &clone->flags);
+       do_waker(&clone->waker.work);
+}
+
+static bool bdev_supports_discards(struct block_device *bdev)
+{
+       struct request_queue *q = bdev_get_queue(bdev);
+
+       return (q && blk_queue_discard(q));
+}
+
+/*
+ * If discard_passdown was enabled verify that the destination device supports
+ * discards. Disable discard_passdown if not.
+ */
+static void disable_passdown_if_not_supported(struct clone *clone)
+{
+       struct block_device *dest_dev = clone->dest_dev->bdev;
+       struct queue_limits *dest_limits = &bdev_get_queue(dest_dev)->limits;
+       const char *reason = NULL;
+       char buf[BDEVNAME_SIZE];
+
+       if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags))
+               return;
+
+       if (!bdev_supports_discards(dest_dev))
+               reason = "discard unsupported";
+       else if (dest_limits->max_discard_sectors < clone->region_size)
+               reason = "max discard sectors smaller than a region";
+
+       if (reason) {
+               DMWARN("Destination device (%s) %s: Disabling discard passdown.",
+                      bdevname(dest_dev, buf), reason);
+               clear_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags);
+       }
+}
+
+static void set_discard_limits(struct clone *clone, struct queue_limits *limits)
+{
+       struct block_device *dest_bdev = clone->dest_dev->bdev;
+       struct queue_limits *dest_limits = &bdev_get_queue(dest_bdev)->limits;
+
+       if (!test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags)) {
+               /* No passdown is done so we set our own virtual limits */
+               limits->discard_granularity = clone->region_size << SECTOR_SHIFT;
+               limits->max_discard_sectors = round_down(UINT_MAX >> SECTOR_SHIFT, clone->region_size);
+               return;
+       }
+
+       /*
+        * clone_iterate_devices() is stacking both the source and destination
+        * device limits but discards aren't passed to the source device, so
+        * inherit destination's limits.
+        */
+       limits->max_discard_sectors = dest_limits->max_discard_sectors;
+       limits->max_hw_discard_sectors = dest_limits->max_hw_discard_sectors;
+       limits->discard_granularity = dest_limits->discard_granularity;
+       limits->discard_alignment = dest_limits->discard_alignment;
+       limits->discard_misaligned = dest_limits->discard_misaligned;
+       limits->max_discard_segments = dest_limits->max_discard_segments;
+}
+
+static void clone_io_hints(struct dm_target *ti, struct queue_limits *limits)
+{
+       struct clone *clone = ti->private;
+       u64 io_opt_sectors = limits->io_opt >> SECTOR_SHIFT;
+
+       /*
+        * If the system-determined stacked limits are compatible with
+        * dm-clone's region size (io_opt is a factor) do not override them.
+        */
+       if (io_opt_sectors < clone->region_size ||
+           do_div(io_opt_sectors, clone->region_size)) {
+               blk_limits_io_min(limits, clone->region_size << SECTOR_SHIFT);
+               blk_limits_io_opt(limits, clone->region_size << SECTOR_SHIFT);
+       }
+
+       disable_passdown_if_not_supported(clone);
+       set_discard_limits(clone, limits);
+}
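
The io_opt test above keeps the stacked limits only when io_opt is a non-zero multiple of the region size; a tiny sketch of that condition (names are illustrative):

  #include <stdbool.h>
  #include <stdio.h>

  /* io_opt is left alone only if it is a non-zero multiple of the region size. */
  static bool io_opt_compatible(unsigned long long io_opt_sectors,
                                unsigned long long region_size)
  {
      return io_opt_sectors >= region_size && (io_opt_sectors % region_size) == 0;
  }

  int main(void)
  {
      printf("%d\n", io_opt_compatible(64, 8));  /* keep the stacked limits */
      printf("%d\n", io_opt_compatible(6, 8));   /* override with the region size */
      return 0;
  }
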
+
+static int clone_iterate_devices(struct dm_target *ti,
+                                iterate_devices_callout_fn fn, void *data)
+{
+       int ret;
+       struct clone *clone = ti->private;
+       struct dm_dev *dest_dev = clone->dest_dev;
+       struct dm_dev *source_dev = clone->source_dev;
+
+       ret = fn(ti, source_dev, 0, ti->len, data);
+       if (!ret)
+               ret = fn(ti, dest_dev, 0, ti->len, data);
+       return ret;
+}
+
+/*
+ * dm-clone message functions.
+ */
+static void set_hydration_threshold(struct clone *clone, unsigned int nr_regions)
+{
+       WRITE_ONCE(clone->hydration_threshold, nr_regions);
+
+       /*
+        * If user space sets hydration_threshold to zero then the hydration
+        * will stop. If at a later time the hydration_threshold is increased
+        * we must restart the hydration process by waking up the worker.
+        */
+       wake_worker(clone);
+}
+
+static void set_hydration_batch_size(struct clone *clone, unsigned int nr_regions)
+{
+       WRITE_ONCE(clone->hydration_batch_size, nr_regions);
+}
+
+static void enable_hydration(struct clone *clone)
+{
+       if (!test_and_set_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags))
+               wake_worker(clone);
+}
+
+static void disable_hydration(struct clone *clone)
+{
+       clear_bit(DM_CLONE_HYDRATION_ENABLED, &clone->flags);
+}
+
+static int clone_message(struct dm_target *ti, unsigned int argc, char **argv,
+                        char *result, unsigned int maxlen)
+{
+       struct clone *clone = ti->private;
+       unsigned int value;
+
+       if (!argc)
+               return -EINVAL;
+
+       if (!strcasecmp(argv[0], "enable_hydration")) {
+               enable_hydration(clone);
+               return 0;
+       }
+
+       if (!strcasecmp(argv[0], "disable_hydration")) {
+               disable_hydration(clone);
+               return 0;
+       }
+
+       if (argc != 2)
+               return -EINVAL;
+
+       if (!strcasecmp(argv[0], "hydration_threshold")) {
+               if (kstrtouint(argv[1], 10, &value))
+                       return -EINVAL;
+
+               set_hydration_threshold(clone, value);
+
+               return 0;
+       }
+
+       if (!strcasecmp(argv[0], "hydration_batch_size")) {
+               if (kstrtouint(argv[1], 10, &value))
+                       return -EINVAL;
+
+               set_hydration_batch_size(clone, value);
+
+               return 0;
+       }
+
+       DMERR("%s: Unsupported message `%s'", clone_device_name(clone), argv[0]);
+       return -EINVAL;
+}
+
+static struct target_type clone_target = {
+       .name = "clone",
+       .version = {1, 0, 0},
+       .module = THIS_MODULE,
+       .ctr = clone_ctr,
+       .dtr =  clone_dtr,
+       .map = clone_map,
+       .end_io = clone_endio,
+       .postsuspend = clone_postsuspend,
+       .resume = clone_resume,
+       .status = clone_status,
+       .message = clone_message,
+       .io_hints = clone_io_hints,
+       .iterate_devices = clone_iterate_devices,
+};
+
+/*---------------------------------------------------------------------------*/
+
+/* Module functions */
+static int __init dm_clone_init(void)
+{
+       int r;
+
+       _hydration_cache = KMEM_CACHE(dm_clone_region_hydration, 0);
+       if (!_hydration_cache)
+               return -ENOMEM;
+
+       r = dm_register_target(&clone_target);
+       if (r < 0) {
+               DMERR("Failed to register clone target");
+               return r;
+       }
+
+       return 0;
+}
+
+static void __exit dm_clone_exit(void)
+{
+       dm_unregister_target(&clone_target);
+
+       kmem_cache_destroy(_hydration_cache);
+       _hydration_cache = NULL;
+}
+
+/* Module hooks */
+module_init(dm_clone_init);
+module_exit(dm_clone_exit);
+
+MODULE_DESCRIPTION(DM_NAME " clone target");
+MODULE_AUTHOR("Nikos Tsironis <ntsironis@arrikto.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index d5216bcc464960b4fc748415c1c8e32abc523321..f87f6495652f5966ab2fce72b252a2016b711186 100644 (file)
@@ -98,11 +98,6 @@ struct crypt_iv_operations {
                    struct dm_crypt_request *dmreq);
 };
 
-struct iv_essiv_private {
-       struct crypto_shash *hash_tfm;
-       u8 *salt;
-};
-
 struct iv_benbi_private {
        int shift;
 };
@@ -120,10 +115,6 @@ struct iv_tcw_private {
        u8 *whitening;
 };
 
-struct iv_eboiv_private {
-       struct crypto_cipher *tfm;
-};
-
 /*
  * Crypt: maps a linear range of a block device
  * and encrypts / decrypts at the same time.
@@ -152,26 +143,21 @@ struct crypt_config {
        struct task_struct *write_thread;
        struct rb_root write_tree;
 
-       char *cipher;
        char *cipher_string;
        char *cipher_auth;
        char *key_string;
 
        const struct crypt_iv_operations *iv_gen_ops;
        union {
-               struct iv_essiv_private essiv;
                struct iv_benbi_private benbi;
                struct iv_lmk_private lmk;
                struct iv_tcw_private tcw;
-               struct iv_eboiv_private eboiv;
        } iv_gen_private;
        u64 iv_offset;
        unsigned int iv_size;
        unsigned short int sector_size;
        unsigned char sector_shift;
 
-       /* ESSIV: struct crypto_cipher *essiv_tfm */
-       void *iv_private;
        union {
                struct crypto_skcipher **tfms;
                struct crypto_aead **tfms_aead;
@@ -329,157 +315,15 @@ static int crypt_iv_plain64be_gen(struct crypt_config *cc, u8 *iv,
        return 0;
 }
 
-/* Initialise ESSIV - compute salt but no local memory allocations */
-static int crypt_iv_essiv_init(struct crypt_config *cc)
-{
-       struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-       SHASH_DESC_ON_STACK(desc, essiv->hash_tfm);
-       struct crypto_cipher *essiv_tfm;
-       int err;
-
-       desc->tfm = essiv->hash_tfm;
-
-       err = crypto_shash_digest(desc, cc->key, cc->key_size, essiv->salt);
-       shash_desc_zero(desc);
-       if (err)
-               return err;
-
-       essiv_tfm = cc->iv_private;
-
-       err = crypto_cipher_setkey(essiv_tfm, essiv->salt,
-                           crypto_shash_digestsize(essiv->hash_tfm));
-       if (err)
-               return err;
-
-       return 0;
-}
-
-/* Wipe salt and reset key derived from volume key */
-static int crypt_iv_essiv_wipe(struct crypt_config *cc)
-{
-       struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-       unsigned salt_size = crypto_shash_digestsize(essiv->hash_tfm);
-       struct crypto_cipher *essiv_tfm;
-       int r, err = 0;
-
-       memset(essiv->salt, 0, salt_size);
-
-       essiv_tfm = cc->iv_private;
-       r = crypto_cipher_setkey(essiv_tfm, essiv->salt, salt_size);
-       if (r)
-               err = r;
-
-       return err;
-}
-
-/* Allocate the cipher for ESSIV */
-static struct crypto_cipher *alloc_essiv_cipher(struct crypt_config *cc,
-                                               struct dm_target *ti,
-                                               const u8 *salt,
-                                               unsigned int saltsize)
-{
-       struct crypto_cipher *essiv_tfm;
-       int err;
-
-       /* Setup the essiv_tfm with the given salt */
-       essiv_tfm = crypto_alloc_cipher(cc->cipher, 0, 0);
-       if (IS_ERR(essiv_tfm)) {
-               ti->error = "Error allocating crypto tfm for ESSIV";
-               return essiv_tfm;
-       }
-
-       if (crypto_cipher_blocksize(essiv_tfm) != cc->iv_size) {
-               ti->error = "Block size of ESSIV cipher does "
-                           "not match IV size of block cipher";
-               crypto_free_cipher(essiv_tfm);
-               return ERR_PTR(-EINVAL);
-       }
-
-       err = crypto_cipher_setkey(essiv_tfm, salt, saltsize);
-       if (err) {
-               ti->error = "Failed to set key for ESSIV cipher";
-               crypto_free_cipher(essiv_tfm);
-               return ERR_PTR(err);
-       }
-
-       return essiv_tfm;
-}
-
-static void crypt_iv_essiv_dtr(struct crypt_config *cc)
-{
-       struct crypto_cipher *essiv_tfm;
-       struct iv_essiv_private *essiv = &cc->iv_gen_private.essiv;
-
-       crypto_free_shash(essiv->hash_tfm);
-       essiv->hash_tfm = NULL;
-
-       kzfree(essiv->salt);
-       essiv->salt = NULL;
-
-       essiv_tfm = cc->iv_private;
-
-       if (essiv_tfm)
-               crypto_free_cipher(essiv_tfm);
-
-       cc->iv_private = NULL;
-}
-
-static int crypt_iv_essiv_ctr(struct crypt_config *cc, struct dm_target *ti,
-                             const char *opts)
-{
-       struct crypto_cipher *essiv_tfm = NULL;
-       struct crypto_shash *hash_tfm = NULL;
-       u8 *salt = NULL;
-       int err;
-
-       if (!opts) {
-               ti->error = "Digest algorithm missing for ESSIV mode";
-               return -EINVAL;
-       }
-
-       /* Allocate hash algorithm */
-       hash_tfm = crypto_alloc_shash(opts, 0, 0);
-       if (IS_ERR(hash_tfm)) {
-               ti->error = "Error initializing ESSIV hash";
-               err = PTR_ERR(hash_tfm);
-               goto bad;
-       }
-
-       salt = kzalloc(crypto_shash_digestsize(hash_tfm), GFP_KERNEL);
-       if (!salt) {
-               ti->error = "Error kmallocing salt storage in ESSIV";
-               err = -ENOMEM;
-               goto bad;
-       }
-
-       cc->iv_gen_private.essiv.salt = salt;
-       cc->iv_gen_private.essiv.hash_tfm = hash_tfm;
-
-       essiv_tfm = alloc_essiv_cipher(cc, ti, salt,
-                                      crypto_shash_digestsize(hash_tfm));
-       if (IS_ERR(essiv_tfm)) {
-               crypt_iv_essiv_dtr(cc);
-               return PTR_ERR(essiv_tfm);
-       }
-       cc->iv_private = essiv_tfm;
-
-       return 0;
-
-bad:
-       if (hash_tfm && !IS_ERR(hash_tfm))
-               crypto_free_shash(hash_tfm);
-       kfree(salt);
-       return err;
-}
-
 static int crypt_iv_essiv_gen(struct crypt_config *cc, u8 *iv,
                              struct dm_crypt_request *dmreq)
 {
-       struct crypto_cipher *essiv_tfm = cc->iv_private;
-
+       /*
+        * ESSIV encryption of the IV is now handled by the crypto API,
+        * so just pass the plain sector number here.
+        */
        memset(iv, 0, cc->iv_size);
        *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector);
-       crypto_cipher_encrypt_one(essiv_tfm, iv, iv);
 
        return 0;
 }
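
With ESSIV moved into a crypto API template, dm-crypt no longer derives the IV key itself: the generator above just emits the little-endian sector number, and the layering happens when the skcipher is allocated under a composed algorithm name. My understanding is that the template takes the inner mode and the digest as parameters, roughly as sketched below; treat the exact name syntax as an assumption and the helper as hypothetical:

  #include <stdio.h>

  /*
   * Hypothetical helper: compose an ESSIV algorithm name the way a table
   * cipher such as "aes-cbc-essiv:sha256" might be translated for the
   * crypto API template. Assumed format: essiv(<skcipher>,<digest>).
   */
  static int essiv_alg_name(char *buf, size_t len,
                            const char *skcipher, const char *digest)
  {
      int n = snprintf(buf, len, "essiv(%s,%s)", skcipher, digest);

      return (n < 0 || (size_t)n >= len) ? -1 : 0;
  }

  int main(void)
  {
      char name[128];

      if (!essiv_alg_name(name, sizeof(name), "cbc(aes)", "sha256"))
          printf("%s\n", name);  /* essiv(cbc(aes),sha256) */
      return 0;
  }
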
@@ -847,65 +691,47 @@ static int crypt_iv_random_gen(struct crypt_config *cc, u8 *iv,
        return 0;
 }
 
-static void crypt_iv_eboiv_dtr(struct crypt_config *cc)
-{
-       struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
-
-       crypto_free_cipher(eboiv->tfm);
-       eboiv->tfm = NULL;
-}
-
 static int crypt_iv_eboiv_ctr(struct crypt_config *cc, struct dm_target *ti,
                            const char *opts)
 {
-       struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
-       struct crypto_cipher *tfm;
-
-       tfm = crypto_alloc_cipher(cc->cipher, 0, 0);
-       if (IS_ERR(tfm)) {
-               ti->error = "Error allocating crypto tfm for EBOIV";
-               return PTR_ERR(tfm);
+       if (test_bit(CRYPT_MODE_INTEGRITY_AEAD, &cc->cipher_flags)) {
+               ti->error = "AEAD transforms not supported for EBOIV";
+               return -EINVAL;
        }
 
-       if (crypto_cipher_blocksize(tfm) != cc->iv_size) {
+       if (crypto_skcipher_blocksize(any_tfm(cc)) != cc->iv_size) {
                ti->error = "Block size of EBOIV cipher does "
                            "not match IV size of block cipher";
-               crypto_free_cipher(tfm);
                return -EINVAL;
        }
 
-       eboiv->tfm = tfm;
        return 0;
 }
 
-static int crypt_iv_eboiv_init(struct crypt_config *cc)
+static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
+                           struct dm_crypt_request *dmreq)
 {
-       struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
+       u8 buf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(__le64));
+       struct skcipher_request *req;
+       struct scatterlist src, dst;
+       struct crypto_wait wait;
        int err;
 
-       err = crypto_cipher_setkey(eboiv->tfm, cc->key, cc->key_size);
-       if (err)
-               return err;
+       req = skcipher_request_alloc(any_tfm(cc), GFP_KERNEL | GFP_NOFS);
+       if (!req)
+               return -ENOMEM;
 
-       return 0;
-}
+       memset(buf, 0, cc->iv_size);
+       *(__le64 *)buf = cpu_to_le64(dmreq->iv_sector * cc->sector_size);
 
-static int crypt_iv_eboiv_wipe(struct crypt_config *cc)
-{
-       /* Called after cc->key is set to random key in crypt_wipe() */
-       return crypt_iv_eboiv_init(cc);
-}
+       sg_init_one(&src, page_address(ZERO_PAGE(0)), cc->iv_size);
+       sg_init_one(&dst, iv, cc->iv_size);
+       skcipher_request_set_crypt(req, &src, &dst, cc->iv_size, buf);
+       skcipher_request_set_callback(req, 0, crypto_req_done, &wait);
+       err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
+       skcipher_request_free(req);
 
-static int crypt_iv_eboiv_gen(struct crypt_config *cc, u8 *iv,
-                           struct dm_crypt_request *dmreq)
-{
-       struct iv_eboiv_private *eboiv = &cc->iv_gen_private.eboiv;
-
-       memset(iv, 0, cc->iv_size);
-       *(__le64 *)iv = cpu_to_le64(dmreq->iv_sector * cc->sector_size);
-       crypto_cipher_encrypt_one(eboiv->tfm, iv, iv);
-
-       return 0;
+       return err;
 }
 
 static const struct crypt_iv_operations crypt_iv_plain_ops = {
@@ -921,10 +747,6 @@ static const struct crypt_iv_operations crypt_iv_plain64be_ops = {
 };
 
 static const struct crypt_iv_operations crypt_iv_essiv_ops = {
-       .ctr       = crypt_iv_essiv_ctr,
-       .dtr       = crypt_iv_essiv_dtr,
-       .init      = crypt_iv_essiv_init,
-       .wipe      = crypt_iv_essiv_wipe,
        .generator = crypt_iv_essiv_gen
 };
 
@@ -962,9 +784,6 @@ static struct crypt_iv_operations crypt_iv_random_ops = {
 
 static struct crypt_iv_operations crypt_iv_eboiv_ops = {
        .ctr       = crypt_iv_eboiv_ctr,
-       .dtr       = crypt_iv_eboiv_dtr,
-       .init      = crypt_iv_eboiv_init,
-       .wipe      = crypt_iv_eboiv_wipe,
        .generator = crypt_iv_eboiv_gen
 };
 
@@ -2320,7 +2139,6 @@ static void crypt_dtr(struct dm_target *ti)
        if (cc->dev)
                dm_put_device(ti, cc->dev);
 
-       kzfree(cc->cipher);
        kzfree(cc->cipher_string);
        kzfree(cc->key_string);
        kzfree(cc->cipher_auth);
@@ -2401,52 +2219,6 @@ static int crypt_ctr_ivmode(struct dm_target *ti, const char *ivmode)
        return 0;
 }
 
-/*
- * Workaround to parse cipher algorithm from crypto API spec.
- * The cc->cipher is currently used only in ESSIV.
- * This should be probably done by crypto-api calls (once available...)
- */
-static int crypt_ctr_blkdev_cipher(struct crypt_config *cc)
-{
-       const char *alg_name = NULL;
-       char *start, *end;
-
-       if (crypt_integrity_aead(cc)) {
-               alg_name = crypto_tfm_alg_name(crypto_aead_tfm(any_tfm_aead(cc)));
-               if (!alg_name)
-                       return -EINVAL;
-               if (crypt_integrity_hmac(cc)) {
-                       alg_name = strchr(alg_name, ',');
-                       if (!alg_name)
-                               return -EINVAL;
-               }
-               alg_name++;
-       } else {
-               alg_name = crypto_tfm_alg_name(crypto_skcipher_tfm(any_tfm(cc)));
-               if (!alg_name)
-                       return -EINVAL;
-       }
-
-       start = strchr(alg_name, '(');
-       end = strchr(alg_name, ')');
-
-       if (!start && !end) {
-               cc->cipher = kstrdup(alg_name, GFP_KERNEL);
-               return cc->cipher ? 0 : -ENOMEM;
-       }
-
-       if (!start || !end || ++start >= end)
-               return -EINVAL;
-
-       cc->cipher = kzalloc(end - start + 1, GFP_KERNEL);
-       if (!cc->cipher)
-               return -ENOMEM;
-
-       strncpy(cc->cipher, start, end - start);
-
-       return 0;
-}
-
 /*
  * Workaround to parse HMAC algorithm from AEAD crypto API spec.
  * The HMAC is needed to calculate tag size (HMAC digest size).
@@ -2490,7 +2262,7 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
                                char **ivmode, char **ivopts)
 {
        struct crypt_config *cc = ti->private;
-       char *tmp, *cipher_api;
+       char *tmp, *cipher_api, buf[CRYPTO_MAX_ALG_NAME];
        int ret = -EINVAL;
 
        cc->tfms_count = 1;
@@ -2516,9 +2288,32 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
        /* The rest is crypto API spec */
        cipher_api = tmp;
 
+       /* Alloc AEAD, can be used only in new format. */
+       if (crypt_integrity_aead(cc)) {
+               ret = crypt_ctr_auth_cipher(cc, cipher_api);
+               if (ret < 0) {
+                       ti->error = "Invalid AEAD cipher spec";
+                       return -ENOMEM;
+               }
+       }
+
        if (*ivmode && !strcmp(*ivmode, "lmk"))
                cc->tfms_count = 64;
 
+       if (*ivmode && !strcmp(*ivmode, "essiv")) {
+               if (!*ivopts) {
+                       ti->error = "Digest algorithm missing for ESSIV mode";
+                       return -EINVAL;
+               }
+               ret = snprintf(buf, CRYPTO_MAX_ALG_NAME, "essiv(%s,%s)",
+                              cipher_api, *ivopts);
+               if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) {
+                       ti->error = "Cannot allocate cipher string";
+                       return -ENOMEM;
+               }
+               cipher_api = buf;
+       }
+
        cc->key_parts = cc->tfms_count;
 
        /* Allocate cipher */
@@ -2528,23 +2323,11 @@ static int crypt_ctr_cipher_new(struct dm_target *ti, char *cipher_in, char *key
                return ret;
        }
 
-       /* Alloc AEAD, can be used only in new format. */
-       if (crypt_integrity_aead(cc)) {
-               ret = crypt_ctr_auth_cipher(cc, cipher_api);
-               if (ret < 0) {
-                       ti->error = "Invalid AEAD cipher spec";
-                       return -ENOMEM;
-               }
+       if (crypt_integrity_aead(cc))
                cc->iv_size = crypto_aead_ivsize(any_tfm_aead(cc));
-       } else
+       else
                cc->iv_size = crypto_skcipher_ivsize(any_tfm(cc));
 
-       ret = crypt_ctr_blkdev_cipher(cc);
-       if (ret < 0) {
-               ti->error = "Cannot allocate cipher string";
-               return -ENOMEM;
-       }
-
        return 0;
 }
 
@@ -2579,10 +2362,6 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
        }
        cc->key_parts = cc->tfms_count;
 
-       cc->cipher = kstrdup(cipher, GFP_KERNEL);
-       if (!cc->cipher)
-               goto bad_mem;
-
        chainmode = strsep(&tmp, "-");
        *ivmode = strsep(&tmp, ":");
        *ivopts = tmp;
@@ -2605,9 +2384,19 @@ static int crypt_ctr_cipher_old(struct dm_target *ti, char *cipher_in, char *key
        if (!cipher_api)
                goto bad_mem;
 
-       ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
-                      "%s(%s)", chainmode, cipher);
-       if (ret < 0) {
+       if (*ivmode && !strcmp(*ivmode, "essiv")) {
+               if (!*ivopts) {
+                       ti->error = "Digest algorithm missing for ESSIV mode";
+                       kfree(cipher_api);
+                       return -EINVAL;
+               }
+               ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+                              "essiv(%s(%s),%s)", chainmode, cipher, *ivopts);
+       } else {
+               ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
+                              "%s(%s)", chainmode, cipher);
+       }
+       if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME) {
                kfree(cipher_api);
                goto bad_mem;
        }
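
For illustration only, and not part of the patch: a minimal userspace sketch of the
algorithm-name composition the constructor paths above now perform. An old-format spec
such as "aes-cbc-essiv:sha256" (or the equivalent new-format spec) ends up requesting
the "essiv(cbc(aes),sha256)" template, so ESSIV IV derivation moves into the crypto
layer and dm-crypt only fills in the plain sector number.

/*
 * Sketch: compose the "essiv(...)" template name the way the updated
 * crypt_ctr_cipher_old() does.  CRYPTO_MAX_ALG_NAME is redefined locally and
 * the inputs are hard-coded, purely for the demo.
 */
#include <stdio.h>

#define CRYPTO_MAX_ALG_NAME 128

int main(void)
{
        char cipher_api[CRYPTO_MAX_ALG_NAME];
        const char *chainmode = "cbc", *cipher = "aes", *ivopts = "sha256";
        int ret;

        /* "aes-cbc-essiv:sha256" -> "essiv(cbc(aes),sha256)" */
        ret = snprintf(cipher_api, CRYPTO_MAX_ALG_NAME,
                       "essiv(%s(%s),%s)", chainmode, cipher, ivopts);
        if (ret < 0 || ret >= CRYPTO_MAX_ALG_NAME)
                return 1;       /* name would not fit */

        printf("%s\n", cipher_api);     /* essiv(cbc(aes),sha256) */
        return 0;
}
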
index 1e03bc89e20f68ce25c7e0c60e178c1985cec8bf..ac83f5002ce5fe3a0483ea5630e389a9f29576ae 100644 (file)
@@ -601,17 +601,27 @@ static void list_version_get_info(struct target_type *tt, void *param)
     info->vers = align_ptr(((void *) ++info->vers) + strlen(tt->name) + 1);
 }
 
-static int list_versions(struct file *filp, struct dm_ioctl *param, size_t param_size)
+static int __list_versions(struct dm_ioctl *param, size_t param_size, const char *name)
 {
        size_t len, needed = 0;
        struct dm_target_versions *vers;
        struct vers_iter iter_info;
+       struct target_type *tt = NULL;
+
+       if (name) {
+               tt = dm_get_target_type(name);
+               if (!tt)
+                       return -EINVAL;
+       }
 
        /*
         * Loop through all the devices working out how much
         * space we need.
         */
-       dm_target_iterate(list_version_get_needed, &needed);
+       if (!tt)
+               dm_target_iterate(list_version_get_needed, &needed);
+       else
+               list_version_get_needed(tt, &needed);
 
        /*
         * Grab our output buffer.
@@ -632,13 +642,28 @@ static int list_versions(struct file *filp, struct dm_ioctl *param, size_t param
        /*
         * Now loop through filling out the names & versions.
         */
-       dm_target_iterate(list_version_get_info, &iter_info);
+       if (!tt)
+               dm_target_iterate(list_version_get_info, &iter_info);
+       else
+               list_version_get_info(tt, &iter_info);
        param->flags |= iter_info.flags;
 
  out:
+       if (tt)
+               dm_put_target_type(tt);
        return 0;
 }
 
+static int list_versions(struct file *filp, struct dm_ioctl *param, size_t param_size)
+{
+       return __list_versions(param, param_size, NULL);
+}
+
+static int get_target_version(struct file *filp, struct dm_ioctl *param, size_t param_size)
+{
+       return __list_versions(param, param_size, param->name);
+}
+
 static int check_name(const char *name)
 {
        if (strchr(name, '/')) {
@@ -1592,7 +1617,7 @@ static int target_message(struct file *filp, struct dm_ioctl *param, size_t para
        }
 
        ti = dm_table_find_target(table, tmsg->sector);
-       if (!dm_target_is_valid(ti)) {
+       if (!ti) {
                DMWARN("Target message sector outside device.");
                r = -EINVAL;
        } else if (ti->type->message)
@@ -1664,6 +1689,7 @@ static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
                {DM_TARGET_MSG_CMD, 0, target_message},
                {DM_DEV_SET_GEOMETRY_CMD, 0, dev_set_geometry},
                {DM_DEV_ARM_POLL, IOCTL_FLAGS_NO_PARAMS, dev_arm_poll},
+               {DM_GET_TARGET_VERSION, 0, get_target_version},
        };
 
        if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
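
A rough userspace sketch, not part of the patch and with error handling trimmed, of how
the new DM_GET_TARGET_VERSION ioctl can be driven: the caller puts a target type name in
dm_ioctl.name and reads back the same dm_target_versions payload that DM_LIST_VERSIONS
produces, restricted to that single target. Field usage follows the usual dm_ioctl
conventions; treat it as a sketch rather than a reference client.

/*
 * Sketch: query the version of a single DM target type.  Assumes a
 * <linux/dm-ioctl.h> that already defines DM_GET_TARGET_VERSION.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dm-ioctl.h>

int main(void)
{
        char buf[16384];
        struct dm_ioctl *dmi = (struct dm_ioctl *)buf;
        struct dm_target_versions *v;
        int fd = open("/dev/mapper/control", O_RDWR);

        if (fd < 0)
                return 1;

        memset(buf, 0, sizeof(buf));
        dmi->version[0] = DM_VERSION_MAJOR;
        dmi->version[1] = DM_VERSION_MINOR;
        dmi->version[2] = DM_VERSION_PATCHLEVEL;
        dmi->data_size = sizeof(buf);
        dmi->data_start = sizeof(*dmi);
        strncpy(dmi->name, "verity", sizeof(dmi->name) - 1);   /* target type */

        if (ioctl(fd, DM_GET_TARGET_VERSION, dmi) < 0) {
                close(fd);
                return 1;
        }

        v = (struct dm_target_versions *)(buf + dmi->data_start);
        printf("%s %u.%u.%u\n", v->name,
               v->version[0], v->version[1], v->version[2]);
        close(fd);
        return 0;
}
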
index 1f933dd197cdf82e14fc242290e5a3268f478b5f..b0aa595e4375d8689ea338d2ca676a1870d2bc1b 100644 (file)
@@ -3738,18 +3738,18 @@ static int raid_iterate_devices(struct dm_target *ti,
 static void raid_io_hints(struct dm_target *ti, struct queue_limits *limits)
 {
        struct raid_set *rs = ti->private;
-       unsigned int chunk_size = to_bytes(rs->md.chunk_sectors);
+       unsigned int chunk_size_bytes = to_bytes(rs->md.chunk_sectors);
 
-       blk_limits_io_min(limits, chunk_size);
-       blk_limits_io_opt(limits, chunk_size * mddev_data_stripes(rs));
+       blk_limits_io_min(limits, chunk_size_bytes);
+       blk_limits_io_opt(limits, chunk_size_bytes * mddev_data_stripes(rs));
 
        /*
         * RAID1 and RAID10 personalities require bio splitting,
         * RAID0/4/5/6 don't and process large discard bios properly.
         */
        if (rs_is_raid1(rs) || rs_is_raid10(rs)) {
-               limits->discard_granularity = chunk_size;
-               limits->max_discard_sectors = chunk_size;
+               limits->discard_granularity = chunk_size_bytes;
+               limits->max_discard_sectors = rs->md.chunk_sectors;
        }
 }
 
index 5a51151f680d6b0d552a80f549dfce0ede0d6b24..089aed57e0836342a461c6cb12d9e54207797009 100644 (file)
@@ -878,12 +878,9 @@ static struct mirror_set *alloc_context(unsigned int nr_mirrors,
                                        struct dm_target *ti,
                                        struct dm_dirty_log *dl)
 {
-       size_t len;
-       struct mirror_set *ms = NULL;
-
-       len = sizeof(*ms) + (sizeof(ms->mirror[0]) * nr_mirrors);
+       struct mirror_set *ms =
+               kzalloc(struct_size(ms, mirror, nr_mirrors), GFP_KERNEL);
 
-       ms = kzalloc(len, GFP_KERNEL);
        if (!ms) {
                ti->error = "Cannot allocate mirror context";
                return NULL;
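
For this kzalloc() conversion and the dm-stats hunk that follows, a small standalone
sketch, with toy types rather than the real dm structures, of what struct_size()
computes: the size of the containing structure plus n trailing flexible-array elements.
The kernel helper from <linux/overflow.h> additionally saturates to SIZE_MAX instead of
wrapping if the multiplication or addition would overflow.

/* Sketch: the allocation size struct_size(ms, mirror, nr) stands for (toy types). */
#include <stdio.h>
#include <stddef.h>

struct mirror_demo { int dev; };
struct mirror_set_demo {
        int nr_mirrors;
        struct mirror_demo mirror[];    /* flexible array member */
};

int main(void)
{
        size_t nr = 3;
        size_t len = sizeof(struct mirror_set_demo) + nr * sizeof(struct mirror_demo);

        printf("%zu bytes for %zu mirrors\n", len, nr);
        return 0;
}
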
index 45b92a3d9d8e17a493c11df5cbd85b0caacacf36..71417048256af1d3be307b9d6938aa0f6e07ee4b 100644 (file)
@@ -262,7 +262,7 @@ static int dm_stats_create(struct dm_stats *stats, sector_t start, sector_t end,
        if (n_entries != (size_t)n_entries || !(size_t)(n_entries + 1))
                return -EOVERFLOW;
 
-       shared_alloc_size = sizeof(struct dm_stat) + (size_t)n_entries * sizeof(struct dm_stat_shared);
+       shared_alloc_size = struct_size(s, stat_shared, n_entries);
        if ((shared_alloc_size - sizeof(struct dm_stat)) / sizeof(struct dm_stat_shared) != n_entries)
                return -EOVERFLOW;
 
index 8820931ec7d2d70cf90d1c1f65fbdb8881fd559a..52e049554f5cdf6fd33a2b7e5997e531b8449759 100644 (file)
@@ -163,10 +163,8 @@ static int alloc_targets(struct dm_table *t, unsigned int num)
 
        /*
         * Allocate both the target array and offset array at once.
-        * Append an empty entry to catch sectors beyond the end of
-        * the device.
         */
-       n_highs = (sector_t *) dm_vcalloc(num + 1, sizeof(struct dm_target) +
+       n_highs = (sector_t *) dm_vcalloc(num, sizeof(struct dm_target) +
                                          sizeof(sector_t));
        if (!n_highs)
                return -ENOMEM;
@@ -1359,7 +1357,7 @@ struct dm_target *dm_table_get_target(struct dm_table *t, unsigned int index)
 /*
  * Search the btree for the correct target.
  *
- * Caller should check returned pointer with dm_target_is_valid()
+ * Caller should check returned pointer for NULL
  * to trap I/O beyond end of device.
  */
 struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
@@ -1368,7 +1366,7 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
        sector_t *node;
 
        if (unlikely(sector >= dm_table_get_size(t)))
-               return &t->targets[t->num_targets];
+               return NULL;
 
        for (l = 0; l < t->depth; l++) {
                n = get_child(n, k);
index ea24ff0612e3a358c6aef10b4657f27f83fb9572..4fb33e7562c5244724edd24a6898ee421b0b2908 100644 (file)
@@ -15,7 +15,7 @@
 
 #include "dm-verity.h"
 #include "dm-verity-fec.h"
-
+#include "dm-verity-verify-sig.h"
 #include <linux/module.h>
 #include <linux/reboot.h>
 
@@ -33,7 +33,8 @@
 #define DM_VERITY_OPT_IGN_ZEROES       "ignore_zero_blocks"
 #define DM_VERITY_OPT_AT_MOST_ONCE     "check_at_most_once"
 
-#define DM_VERITY_OPTS_MAX             (2 + DM_VERITY_OPTS_FEC)
+#define DM_VERITY_OPTS_MAX             (2 + DM_VERITY_OPTS_FEC + \
+                                        DM_VERITY_ROOT_HASH_VERIFICATION_OPTS)
 
 static unsigned dm_verity_prefetch_cluster = DM_VERITY_DEFAULT_PREFETCH_SIZE;
 
@@ -713,6 +714,8 @@ static void verity_status(struct dm_target *ti, status_type_t type,
                        args++;
                if (v->validated_blocks)
                        args++;
+               if (v->signature_key_desc)
+                       args += DM_VERITY_ROOT_HASH_VERIFICATION_OPTS;
                if (!args)
                        return;
                DMEMIT(" %u", args);
@@ -734,6 +737,9 @@ static void verity_status(struct dm_target *ti, status_type_t type,
                if (v->validated_blocks)
                        DMEMIT(" " DM_VERITY_OPT_AT_MOST_ONCE);
                sz = verity_fec_status_table(v, sz, result, maxlen);
+               if (v->signature_key_desc)
+                       DMEMIT(" " DM_VERITY_ROOT_HASH_VERIFICATION_OPT_SIG_KEY
+                               " %s", v->signature_key_desc);
                break;
        }
 }
@@ -799,6 +805,8 @@ static void verity_dtr(struct dm_target *ti)
 
        verity_fec_dtr(v);
 
+       kfree(v->signature_key_desc);
+
        kfree(v);
 }
 
@@ -854,7 +862,8 @@ out:
        return r;
 }
 
-static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v)
+static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v,
+                                struct dm_verity_sig_opts *verify_args)
 {
        int r;
        unsigned argc;
@@ -903,6 +912,14 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v)
                        if (r)
                                return r;
                        continue;
+               } else if (verity_verify_is_sig_opt_arg(arg_name)) {
+                       r = verity_verify_sig_parse_opt_args(as, v,
+                                                            verify_args,
+                                                            &argc, arg_name);
+                       if (r)
+                               return r;
+                       continue;
+
                }
 
                ti->error = "Unrecognized verity feature request";
@@ -929,6 +946,7 @@ static int verity_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v)
 static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
 {
        struct dm_verity *v;
+       struct dm_verity_sig_opts verify_args = {0};
        struct dm_arg_set as;
        unsigned int num;
        unsigned long long num_ll;
@@ -936,6 +954,7 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
        int i;
        sector_t hash_position;
        char dummy;
+       char *root_hash_digest_to_validate;
 
        v = kzalloc(sizeof(struct dm_verity), GFP_KERNEL);
        if (!v) {
@@ -1069,6 +1088,7 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
                r = -EINVAL;
                goto bad;
        }
+       root_hash_digest_to_validate = argv[8];
 
        if (strcmp(argv[9], "-")) {
                v->salt_size = strlen(argv[9]) / 2;
@@ -1094,11 +1114,20 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
                as.argc = argc;
                as.argv = argv;
 
-               r = verity_parse_opt_args(&as, v);
+               r = verity_parse_opt_args(&as, v, &verify_args);
                if (r < 0)
                        goto bad;
        }
 
+       /* Root hash signature is an optional parameter */
+       r = verity_verify_root_hash(root_hash_digest_to_validate,
+                                   strlen(root_hash_digest_to_validate),
+                                   verify_args.sig,
+                                   verify_args.sig_size);
+       if (r < 0) {
+               ti->error = "Root hash verification failed";
+               goto bad;
+       }
        v->hash_per_block_bits =
                __fls((1 << v->hash_dev_block_bits) / v->digest_size);
 
@@ -1164,9 +1193,13 @@ static int verity_ctr(struct dm_target *ti, unsigned argc, char **argv)
        ti->per_io_data_size = roundup(ti->per_io_data_size,
                                       __alignof__(struct dm_verity_io));
 
+       verity_verify_sig_opts_cleanup(&verify_args);
+
        return 0;
 
 bad:
+
+       verity_verify_sig_opts_cleanup(&verify_args);
        verity_dtr(ti);
 
        return r;
@@ -1174,7 +1207,7 @@ bad:
 
 static struct target_type verity_target = {
        .name           = "verity",
-       .version        = {1, 4, 0},
+       .version        = {1, 5, 0},
        .module         = THIS_MODULE,
        .ctr            = verity_ctr,
        .dtr            = verity_dtr,
diff --git a/drivers/md/dm-verity-verify-sig.c b/drivers/md/dm-verity-verify-sig.c
new file mode 100644 (file)
index 0000000..614e43d
--- /dev/null
@@ -0,0 +1,133 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Microsoft Corporation.
+ *
+ * Author:  Jaskaran Singh Khurana <jaskarankhurana@linux.microsoft.com>
+ *
+ */
+#include <linux/device-mapper.h>
+#include <linux/verification.h>
+#include <keys/user-type.h>
+#include <linux/module.h>
+#include "dm-verity.h"
+#include "dm-verity-verify-sig.h"
+
+#define DM_VERITY_VERIFY_ERR(s) DM_VERITY_ROOT_HASH_VERIFICATION " " s
+
+static bool require_signatures;
+module_param(require_signatures, bool, false);
+MODULE_PARM_DESC(require_signatures,
+               "Verify the roothash of dm-verity hash tree");
+
+#define DM_VERITY_IS_SIG_FORCE_ENABLED() \
+       (require_signatures != false)
+
+bool verity_verify_is_sig_opt_arg(const char *arg_name)
+{
+       return (!strcasecmp(arg_name,
+                           DM_VERITY_ROOT_HASH_VERIFICATION_OPT_SIG_KEY));
+}
+
+static int verity_verify_get_sig_from_key(const char *key_desc,
+                                       struct dm_verity_sig_opts *sig_opts)
+{
+       struct key *key;
+       const struct user_key_payload *ukp;
+       int ret = 0;
+
+       key = request_key(&key_type_user,
+                       key_desc, NULL);
+       if (IS_ERR(key))
+               return PTR_ERR(key);
+
+       down_read(&key->sem);
+
+       ukp = user_key_payload_locked(key);
+       if (!ukp) {
+               ret = -EKEYREVOKED;
+               goto end;
+       }
+
+       sig_opts->sig = kmalloc(ukp->datalen, GFP_KERNEL);
+       if (!sig_opts->sig) {
+               ret = -ENOMEM;
+               goto end;
+       }
+       sig_opts->sig_size = ukp->datalen;
+
+       memcpy(sig_opts->sig, ukp->data, sig_opts->sig_size);
+
+end:
+       up_read(&key->sem);
+       key_put(key);
+
+       return ret;
+}
+
+int verity_verify_sig_parse_opt_args(struct dm_arg_set *as,
+                                    struct dm_verity *v,
+                                    struct dm_verity_sig_opts *sig_opts,
+                                    unsigned int *argc,
+                                    const char *arg_name)
+{
+       struct dm_target *ti = v->ti;
+       int ret = 0;
+       const char *sig_key = NULL;
+
+       if (!*argc) {
+               ti->error = DM_VERITY_VERIFY_ERR("Signature key not specified");
+               return -EINVAL;
+       }
+
+       sig_key = dm_shift_arg(as);
+       (*argc)--;
+
+       ret = verity_verify_get_sig_from_key(sig_key, sig_opts);
+       if (ret < 0)
+               ti->error = DM_VERITY_VERIFY_ERR("Invalid key specified");
+
+       v->signature_key_desc = kstrdup(sig_key, GFP_KERNEL);
+       if (!v->signature_key_desc)
+               return -ENOMEM;
+
+       return ret;
+}
+
+/*
+ * verity_verify_root_hash - Verify the root hash of the verity hash device
+ *                          using builtin trusted keys.
+ *
+ * @root_hash: For verity, the roothash/data to be verified.
+ * @root_hash_len: Size of the roothash/data to be verified.
+ * @sig_data: The trusted signature that verifies the roothash/data.
+ * @sig_len: Size of the signature.
+ *
+ */
+int verity_verify_root_hash(const void *root_hash, size_t root_hash_len,
+                           const void *sig_data, size_t sig_len)
+{
+       int ret;
+
+       if (!root_hash || root_hash_len == 0)
+               return -EINVAL;
+
+       if (!sig_data  || sig_len == 0) {
+               if (DM_VERITY_IS_SIG_FORCE_ENABLED())
+                       return -ENOKEY;
+               else
+                       return 0;
+       }
+
+       ret = verify_pkcs7_signature(root_hash, root_hash_len, sig_data,
+                               sig_len, NULL, VERIFYING_UNSPECIFIED_SIGNATURE,
+                               NULL, NULL);
+
+       return ret;
+}
+
+void verity_verify_sig_opts_cleanup(struct dm_verity_sig_opts *sig_opts)
+{
+       kfree(sig_opts->sig);
+       sig_opts->sig = NULL;
+       sig_opts->sig_size = 0;
+}
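
A hedged userspace sketch of how a root hash signature might be staged for the lookup
above: verity_verify_get_sig_from_key() does a request_key() for a "user" type key, so
the caller is expected to have loaded the raw PKCS#7 blob as a user key reachable from
its keyrings before loading the verity table (the session keyring is used here; which
keyring to use is a deployment choice, not something this patch mandates). The key
description is then passed as the root_hash_sig_key_desc optional argument. The key name
and file path below are made up for the example.

/*
 * Sketch: stage a PKCS#7 signature as a "user" key so the
 * root_hash_sig_key_desc lookup can find it.  Link with -lkeyutils.
 * "verity:foo" and the signature path are placeholders.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <keyutils.h>

int main(void)
{
        char sig[4096];
        ssize_t len;
        key_serial_t key;
        int fd = open("/path/to/roothash.p7s", O_RDONLY);

        if (fd < 0)
                return 1;
        len = read(fd, sig, sizeof(sig));
        close(fd);
        if (len <= 0)
                return 1;

        key = add_key("user", "verity:foo", sig, len, KEY_SPEC_SESSION_KEYRING);
        if (key < 0) {
                perror("add_key");
                return 1;
        }
        printf("loaded key %ld; pass \"root_hash_sig_key_desc verity:foo\" as an optional verity table argument\n",
               (long)key);
        return 0;
}
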
diff --git a/drivers/md/dm-verity-verify-sig.h b/drivers/md/dm-verity-verify-sig.h
new file mode 100644 (file)
index 0000000..19b1547
--- /dev/null
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Microsoft Corporation.
+ *
+ * Author:  Jaskaran Singh Khurana <jaskarankhurana@linux.microsoft.com>
+ *
+ */
+#ifndef DM_VERITY_SIG_VERIFICATION_H
+#define DM_VERITY_SIG_VERIFICATION_H
+
+#define DM_VERITY_ROOT_HASH_VERIFICATION "DM Verity Sig Verification"
+#define DM_VERITY_ROOT_HASH_VERIFICATION_OPT_SIG_KEY "root_hash_sig_key_desc"
+
+struct dm_verity_sig_opts {
+       unsigned int sig_size;
+       u8 *sig;
+};
+
+#ifdef CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG
+
+#define DM_VERITY_ROOT_HASH_VERIFICATION_OPTS 2
+
+int verity_verify_root_hash(const void *data, size_t data_len,
+                           const void *sig_data, size_t sig_len);
+bool verity_verify_is_sig_opt_arg(const char *arg_name);
+
+int verity_verify_sig_parse_opt_args(struct dm_arg_set *as, struct dm_verity *v,
+                                   struct dm_verity_sig_opts *sig_opts,
+                                   unsigned int *argc, const char *arg_name);
+
+void verity_verify_sig_opts_cleanup(struct dm_verity_sig_opts *sig_opts);
+
+#else
+
+#define DM_VERITY_ROOT_HASH_VERIFICATION_OPTS 0
+
+static inline int verity_verify_root_hash(const void *data, size_t data_len,
+                                          const void *sig_data, size_t sig_len)
+{
+       return 0;
+}
+
+static inline bool verity_verify_is_sig_opt_arg(const char *arg_name)
+{
+       return false;
+}
+
+static inline int verity_verify_sig_parse_opt_args(struct dm_arg_set *as,
+                                                   struct dm_verity *v,
+                                                   struct dm_verity_sig_opts *sig_opts,
+                                                   unsigned int *argc,
+                                                   const char *arg_name)
+{
+       return -EINVAL;
+}
+
+static inline void verity_verify_sig_opts_cleanup(struct dm_verity_sig_opts *sig_opts)
+{
+}
+
+#endif /* CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG */
+#endif /* DM_VERITY_SIG_VERIFICATION_H */
index eeaf940aef6d1d3644810029b58e0989d71ca3a2..641b9e3a399b79e8da7f8b7c73af5a012e6198f5 100644 (file)
@@ -63,6 +63,8 @@ struct dm_verity {
 
        struct dm_verity_fec *fec;      /* forward error correction */
        unsigned long *validated_blocks; /* bitset blocks validated */
+
+       char *signature_key_desc; /* signature keyring reference */
 };
 
 struct dm_verity_io {
index 1cb137f0ef9d7f4b265779563273ed82d136eee3..d06b8aa41e261847263f05f2ca74979f72ef45f3 100644 (file)
@@ -190,7 +190,6 @@ struct writeback_struct {
        struct dm_writecache *wc;
        struct wc_entry **wc_list;
        unsigned wc_list_n;
-       struct page *page;
        struct wc_entry *wc_list_inline[WB_LIST_INLINE];
        struct bio bio;
 };
@@ -727,7 +726,8 @@ static void writecache_flush(struct dm_writecache *wc)
        }
        writecache_commit_flushed(wc);
 
-       writecache_wait_for_ios(wc, WRITE);
+       if (!WC_MODE_PMEM(wc))
+               writecache_wait_for_ios(wc, WRITE);
 
        wc->seq_count++;
        pmem_assign(sb(wc)->seq_count, cpu_to_le64(wc->seq_count));
@@ -1561,7 +1561,7 @@ static void writecache_writeback(struct work_struct *work)
 {
        struct dm_writecache *wc = container_of(work, struct dm_writecache, writeback_work);
        struct blk_plug plug;
-       struct wc_entry *e, *f, *g;
+       struct wc_entry *f, *g, *e = NULL;
        struct rb_node *node, *next_node;
        struct list_head skipped;
        struct writeback_list wbl;
@@ -1598,7 +1598,14 @@ restart:
                        break;
                }
 
-               e = container_of(wc->lru.prev, struct wc_entry, lru);
+               if (unlikely(wc->writeback_all)) {
+                       if (unlikely(!e)) {
+                               writecache_flush(wc);
+                               e = container_of(rb_first(&wc->tree), struct wc_entry, rb_node);
+                       } else
+                               e = g;
+               } else
+                       e = container_of(wc->lru.prev, struct wc_entry, lru);
                BUG_ON(e->write_in_progress);
                if (unlikely(!writecache_entry_is_committed(wc, e))) {
                        writecache_flush(wc);
@@ -1629,8 +1636,8 @@ restart:
                        if (unlikely(!next_node))
                                break;
                        g = container_of(next_node, struct wc_entry, rb_node);
-                       if (read_original_sector(wc, g) ==
-                           read_original_sector(wc, f)) {
+                       if (unlikely(read_original_sector(wc, g) ==
+                           read_original_sector(wc, f))) {
                                f = g;
                                continue;
                        }
@@ -1659,8 +1666,14 @@ restart:
                        g->wc_list_contiguous = BIO_MAX_PAGES;
                        f = g;
                        e->wc_list_contiguous++;
-                       if (unlikely(e->wc_list_contiguous == BIO_MAX_PAGES))
+                       if (unlikely(e->wc_list_contiguous == BIO_MAX_PAGES)) {
+                               if (unlikely(wc->writeback_all)) {
+                                       next_node = rb_next(&f->rb_node);
+                                       if (likely(next_node))
+                                               g = container_of(next_node, struct wc_entry, rb_node);
+                               }
                                break;
+                       }
                }
                cond_resched();
        }
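
The writeback_all path above now walks wc->tree in key order instead of always taking
the LRU tail, so a full writeback is issued in roughly ascending device order. For
reference, a minimal kernel-style fragment (toy entry type, not a standalone program) of
that in-order rbtree walk pattern using the stock <linux/rbtree.h> helpers.

/* Sketch: in-order walk over an rbtree keyed by sector, lowest key first. */
#include <linux/rbtree.h>
#include <linux/types.h>
#include <linux/printk.h>

struct demo_entry {
        struct rb_node rb_node;
        sector_t sector;
};

static void demo_walk_sorted(struct rb_root *root)
{
        struct rb_node *node;

        for (node = rb_first(root); node; node = rb_next(node)) {
                struct demo_entry *e = rb_entry(node, struct demo_entry, rb_node);

                pr_info("entry at sector %llu\n", (unsigned long long)e->sector);
        }
}
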
index 31478fef6032daf0c84eb8db29263816f91117b3..d3bcc4197f5dd7e6c44e227a86aa896b952f22d9 100644 (file)
@@ -134,8 +134,6 @@ static int dmz_submit_bio(struct dmz_target *dmz, struct dm_zone *zone,
 
        refcount_inc(&bioctx->ref);
        generic_make_request(clone);
-       if (clone->bi_status == BLK_STS_IOERR)
-               return -EIO;
 
        if (bio_op(bio) == REQ_OP_WRITE && dmz_is_seq(zone))
                zone->wp_block += nr_blocks;
index d0beef033e2f50148df8d3afdab450a0edb8c532..1a5e328c443a997a48e36a0397afbe27ee8c8e4b 100644 (file)
@@ -457,7 +457,7 @@ static int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
                return -EIO;
 
        tgt = dm_table_find_target(map, sector);
-       if (!dm_target_is_valid(tgt)) {
+       if (!tgt) {
                ret = -EIO;
                goto out;
        }
@@ -1072,7 +1072,7 @@ static struct dm_target *dm_dax_get_live_target(struct mapped_device *md,
                return NULL;
 
        ti = dm_table_find_target(map, sector);
-       if (!dm_target_is_valid(ti))
+       if (!ti)
                return NULL;
 
        return ti;
@@ -1572,7 +1572,7 @@ static int __split_and_process_non_flush(struct clone_info *ci)
        int r;
 
        ti = dm_table_find_target(ci->map, ci->sector);
-       if (!dm_target_is_valid(ti))
+       if (!ti)
                return -EIO;
 
        if (__process_abnormal_io(ci, ti, &r))
@@ -1748,7 +1748,7 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
 
        if (!ti) {
                ti = dm_table_find_target(map, bio->bi_iter.bi_sector);
-               if (unlikely(!ti || !dm_target_is_valid(ti))) {
+               if (unlikely(!ti)) {
                        bio_io_error(bio);
                        return ret;
                }
index 0475673337f3ad06942376ff31810239dcc6876b..d7c4f6606b5fca1e2b360a807ca67f60973de3f8 100644 (file)
@@ -85,11 +85,6 @@ struct target_type *dm_get_immutable_target_type(struct mapped_device *md);
 
 int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
 
-/*
- * To check the return value from dm_table_find_target().
- */
-#define dm_target_is_valid(t) ((t)->table)
-
 /*
  * To check whether the target type is bio-based or not (request-based).
  */
index b8a62188f6be5906630983ef8fe183c9ba68ef2f..bd68f6fef69482e914cf71f043f8eb198414454f 100644 (file)
@@ -369,10 +369,6 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
                         */
                        dm_tm_unlock(ll->tm, blk);
                        continue;
-
-               } else if (r < 0) {
-                       dm_tm_unlock(ll->tm, blk);
-                       return r;
                }
 
                dm_tm_unlock(ll->tm, blk);
index f396a82dfd3e61bb1810f4cacf6fcb9b9eefa37b..2df8ceca1f9b8479ff0537d421baec643a7de2cc 100644 (file)
@@ -243,6 +243,7 @@ enum {
        DM_TARGET_MSG_CMD,
        DM_DEV_SET_GEOMETRY_CMD,
        DM_DEV_ARM_POLL_CMD,
+       DM_GET_TARGET_VERSION_CMD,
 };
 
 #define DM_IOCTL 0xfd
@@ -265,14 +266,15 @@ enum {
 #define DM_TABLE_STATUS  _IOWR(DM_IOCTL, DM_TABLE_STATUS_CMD, struct dm_ioctl)
 
 #define DM_LIST_VERSIONS _IOWR(DM_IOCTL, DM_LIST_VERSIONS_CMD, struct dm_ioctl)
+#define DM_GET_TARGET_VERSION _IOWR(DM_IOCTL, DM_GET_TARGET_VERSION_CMD, struct dm_ioctl)
 
 #define DM_TARGET_MSG   _IOWR(DM_IOCTL, DM_TARGET_MSG_CMD, struct dm_ioctl)
 #define DM_DEV_SET_GEOMETRY    _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
 
 #define DM_VERSION_MAJOR       4
-#define DM_VERSION_MINOR       40
+#define DM_VERSION_MINOR       41
 #define DM_VERSION_PATCHLEVEL  0
-#define DM_VERSION_EXTRA       "-ioctl (2019-01-18)"
+#define DM_VERSION_EXTRA       "-ioctl (2019-09-16)"
 
 /* Status bits */
 #define DM_READONLY_FLAG       (1 << 0) /* In/Out */