Merge tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
author Linus Torvalds <torvalds@linux-foundation.org>
Fri, 3 Nov 2023 02:15:30 +0000 (16:15 -1000)
committer Linus Torvalds <torvalds@linux-foundation.org>
Fri, 3 Nov 2023 02:15:30 +0000 (16:15 -1000)
Pull crypto updates from Herbert Xu:
 "API:
   - Add virtual-address based lskcipher interface
   - Optimise ahash/shash performance in light of costly indirect calls
   - Remove ahash alignmask attribute

  Algorithms:
   - Improve AES/XTS performance of 6-way unrolling for ppc
   - Remove some uses of obsolete algorithms (md4, md5, sha1)
   - Add FIPS 202 SHA-3 support in pkcs1pad
   - Add fast path for single-page messages in adiantum
   - Remove zlib-deflate

  Drivers:
   - Add support for S4 in meson RNG driver
   - Add STM32MP13x support in stm32
   - Add hwrng interface support in qcom-rng
   - Add support for deflate algorithm in hisilicon/zip"
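
The headline addition above is the virtual-address based lskcipher interface
(see crypto/lskcipher.c in the file list below). As a minimal sketch only,
assuming the helpers introduced by that file (crypto_alloc_lskcipher(),
crypto_lskcipher_setkey(), crypto_lskcipher_encrypt()) mirror their skcipher
counterparts, a caller working on linear buffers rather than scatterlists
might look like:

	#include <crypto/skcipher.h>
	#include <linux/err.h>

	/* Sketch: one-shot CBC-AES encryption over virtual addresses;
	 * no request objects and no scatterlists involved.  For cbc,
	 * len must be a multiple of the block size. */
	static int lskcipher_demo(const u8 *key, unsigned int keylen,
				  const u8 *src, u8 *dst,
				  unsigned int len, u8 *iv)
	{
		struct crypto_lskcipher *tfm;
		int err;

		tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_lskcipher_setkey(tfm, key, keylen);
		if (!err)
			err = crypto_lskcipher_encrypt(tfm, src, dst, len, iv);

		crypto_free_lskcipher(tfm);
		return err;
	}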

* tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (283 commits)
  crypto: adiantum - flush destination page before unmapping
  crypto: testmgr - move pkcs1pad(rsa,sha3-*) to correct place
  Documentation/module-signing.txt: bring up to date
  module: enable automatic module signing with FIPS 202 SHA-3
  crypto: asymmetric_keys - allow FIPS 202 SHA-3 signatures
  crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support
  crypto: FIPS 202 SHA-3 register in hash info for IMA
  x509: Add OIDs for FIPS 202 SHA-3 hash and signatures
  crypto: ahash - optimize performance when wrapping shash
  crypto: ahash - check for shash type instead of not ahash type
  crypto: hash - move "ahash wrapping shash" functions to ahash.c
  crypto: talitos - stop using crypto_ahash::init
  crypto: chelsio - stop using crypto_ahash::init
  crypto: ahash - improve file comment
  crypto: ahash - remove struct ahash_request_priv
  crypto: ahash - remove crypto_ahash_alignmask
  crypto: gcm - stop using alignmask of ahash
  crypto: chacha20poly1305 - stop using alignmask of ahash
  crypto: ccm - stop using alignmask of ahash
  net: ipv6: stop checking crypto_ahash_alignmask
  ...

275 files changed:
Documentation/ABI/testing/debugfs-driver-qat
Documentation/ABI/testing/sysfs-driver-qat
Documentation/ABI/testing/sysfs-driver-qat_ras [new file with mode: 0644]
Documentation/ABI/testing/sysfs-driver-qat_rl [new file with mode: 0644]
Documentation/admin-guide/module-signing.rst
Documentation/crypto/devel-algos.rst
Documentation/devicetree/bindings/crypto/fsl-imx-sahara.yaml
Documentation/devicetree/bindings/crypto/qcom,inline-crypto-engine.yaml
Documentation/devicetree/bindings/crypto/qcom,prng.yaml
Documentation/devicetree/bindings/rng/amlogic,meson-rng.yaml
Documentation/devicetree/bindings/rng/st,stm32-rng.yaml
MAINTAINERS
arch/arm/crypto/nhpoly1305-neon-glue.c
arch/arm64/crypto/nhpoly1305-neon-glue.c
arch/arm64/crypto/sha1-ce-core.S
arch/arm64/crypto/sha1-ce-glue.c
arch/arm64/crypto/sha2-ce-core.S
arch/arm64/crypto/sha2-ce-glue.c
arch/arm64/crypto/sha256-glue.c
arch/arm64/crypto/sha512-ce-core.S
arch/arm64/crypto/sha512-ce-glue.c
arch/arm64/crypto/sha512-glue.c
arch/loongarch/crypto/crc32-loongarch.c
arch/mips/crypto/crc32-mips.c
arch/sparc/crypto/crc32c_glue.c
arch/x86/crypto/aesni-intel_asm.S
arch/x86/crypto/aesni-intel_avx-x86_64.S
arch/x86/crypto/aesni-intel_glue.c
arch/x86/crypto/nhpoly1305-avx2-glue.c
arch/x86/crypto/nhpoly1305-sse2-glue.c
arch/x86/crypto/sha1_ssse3_glue.c
arch/x86/crypto/sha256_ssse3_glue.c
certs/Kconfig
crypto/Kconfig
crypto/Makefile
crypto/adiantum.c
crypto/aead.c
crypto/ahash.c
crypto/api.c
crypto/arc4.c
crypto/asymmetric_keys/Kconfig
crypto/asymmetric_keys/Makefile
crypto/asymmetric_keys/mscode_parser.c
crypto/asymmetric_keys/pkcs7.asn1
crypto/asymmetric_keys/pkcs7_parser.c
crypto/asymmetric_keys/pkcs8.asn1
crypto/asymmetric_keys/public_key.c
crypto/asymmetric_keys/selftest.c
crypto/asymmetric_keys/signature.c
crypto/asymmetric_keys/x509.asn1
crypto/asymmetric_keys/x509_akid.asn1
crypto/asymmetric_keys/x509_cert_parser.c
crypto/asymmetric_keys/x509_parser.h
crypto/asymmetric_keys/x509_public_key.c
crypto/authenc.c
crypto/authencesn.c
crypto/cbc.c
crypto/ccm.c
crypto/chacha20poly1305.c
crypto/cmac.c
crypto/cryptd.c
crypto/crypto_engine.c
crypto/ctr.c
crypto/cts.c
crypto/deflate.c
crypto/drbg.c
crypto/ecb.c
crypto/essiv.c
crypto/gcm.c
crypto/hash.h
crypto/hash_info.c
crypto/hctr2.c
crypto/hmac.c
crypto/jitterentropy-kcapi.c
crypto/jitterentropy.c
crypto/jitterentropy.h
crypto/lrw.c
crypto/lskcipher.c [new file with mode: 0644]
crypto/pcrypt.c
crypto/rsa-pkcs1pad.c
crypto/rsaprivkey.asn1
crypto/rsapubkey.asn1
crypto/shash.c
crypto/skcipher.c
crypto/skcipher.h [new file with mode: 0644]
crypto/testmgr.c
crypto/testmgr.h
crypto/vmac.c
crypto/xcbc.c
crypto/xts.c
drivers/char/hw_random/bcm2835-rng.c
drivers/char/hw_random/core.c
drivers/char/hw_random/geode-rng.c
drivers/char/hw_random/hisi-rng.c
drivers/char/hw_random/imx-rngc.c
drivers/char/hw_random/ks-sa-rng.c
drivers/char/hw_random/meson-rng.c
drivers/char/hw_random/mpfs-rng.c
drivers/char/hw_random/n2-drv.c
drivers/char/hw_random/nomadik-rng.c
drivers/char/hw_random/octeon-rng.c
drivers/char/hw_random/st-rng.c
drivers/char/hw_random/stm32-rng.c
drivers/char/hw_random/xgene-rng.c
drivers/char/hw_random/xiphera-trng.c
drivers/crypto/Kconfig
drivers/crypto/allwinner/sun4i-ss/sun4i-ss-core.c
drivers/crypto/allwinner/sun8i-ce/sun8i-ce-core.c
drivers/crypto/allwinner/sun8i-ss/sun8i-ss-core.c
drivers/crypto/amcc/crypto4xx_core.c
drivers/crypto/amlogic/amlogic-gxl-core.c
drivers/crypto/aspeed/aspeed-acry.c
drivers/crypto/aspeed/aspeed-hace.c
drivers/crypto/atmel-aes.c
drivers/crypto/atmel-sha.c
drivers/crypto/atmel-tdes.c
drivers/crypto/axis/artpec6_crypto.c
drivers/crypto/bcm/cipher.c
drivers/crypto/caam/caamalg.c
drivers/crypto/caam/caamalg_qi2.c
drivers/crypto/caam/jr.c
drivers/crypto/cavium/nitrox/nitrox_hal.c
drivers/crypto/ccp/dbc.c
drivers/crypto/ccp/dbc.h
drivers/crypto/ccp/psp-dev.c
drivers/crypto/ccp/psp-dev.h
drivers/crypto/ccp/sev-dev.c
drivers/crypto/ccp/sp-dev.h
drivers/crypto/ccp/sp-pci.c
drivers/crypto/ccp/sp-platform.c
drivers/crypto/ccp/tee-dev.c
drivers/crypto/ccp/tee-dev.h
drivers/crypto/ccree/cc_driver.c
drivers/crypto/chelsio/chcr_algo.c
drivers/crypto/exynos-rng.c
drivers/crypto/gemini/sl3516-ce-core.c
drivers/crypto/hifn_795x.c
drivers/crypto/hisilicon/debugfs.c
drivers/crypto/hisilicon/hpre/hpre_crypto.c
drivers/crypto/hisilicon/hpre/hpre_main.c
drivers/crypto/hisilicon/qm.c
drivers/crypto/hisilicon/qm_common.h
drivers/crypto/hisilicon/sec/sec_drv.c
drivers/crypto/hisilicon/sec2/sec_crypto.c
drivers/crypto/hisilicon/sec2/sec_main.c
drivers/crypto/hisilicon/trng/trng.c
drivers/crypto/hisilicon/zip/zip_crypto.c
drivers/crypto/hisilicon/zip/zip_main.c
drivers/crypto/img-hash.c
drivers/crypto/inside-secure/safexcel.c
drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
drivers/crypto/intel/keembay/keembay-ocs-aes-core.c
drivers/crypto/intel/keembay/keembay-ocs-ecc.c
drivers/crypto/intel/keembay/keembay-ocs-hcu-core.c
drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.h
drivers/crypto/intel/qat/qat_4xxx/adf_drv.c
drivers/crypto/intel/qat/qat_c3xxx/adf_c3xxx_hw_data.c
drivers/crypto/intel/qat/qat_c3xxx/adf_drv.c
drivers/crypto/intel/qat/qat_c3xxxvf/adf_drv.c
drivers/crypto/intel/qat/qat_c62x/adf_c62x_hw_data.c
drivers/crypto/intel/qat/qat_c62x/adf_drv.c
drivers/crypto/intel/qat/qat_c62xvf/adf_drv.c
drivers/crypto/intel/qat/qat_common/Makefile
drivers/crypto/intel/qat/qat_common/adf_accel_devices.h
drivers/crypto/intel/qat/qat_common/adf_admin.c
drivers/crypto/intel/qat/qat_common/adf_admin.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_aer.c
drivers/crypto/intel/qat/qat_common/adf_cfg_services.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_cfg_services.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_cfg_strings.h
drivers/crypto/intel/qat/qat_common/adf_clock.c
drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_common_drv.h
drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
drivers/crypto/intel/qat/qat_common/adf_fw_counters.c
drivers/crypto/intel/qat/qat_common/adf_gen4_hw_data.h
drivers/crypto/intel/qat/qat_common/adf_gen4_pm.c
drivers/crypto/intel/qat/qat_common/adf_gen4_pm.h
drivers/crypto/intel/qat/qat_common/adf_gen4_pm_debugfs.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_gen4_ras.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_gen4_timer.c
drivers/crypto/intel/qat/qat_common/adf_heartbeat.c
drivers/crypto/intel/qat/qat_common/adf_heartbeat_dbgfs.c
drivers/crypto/intel/qat/qat_common/adf_init.c
drivers/crypto/intel/qat/qat_common/adf_isr.c
drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_rl.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_rl.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_rl_admin.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_rl_admin.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_sysfs.c
drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.c [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.h [new file with mode: 0644]
drivers/crypto/intel/qat/qat_common/adf_transport_debug.c
drivers/crypto/intel/qat/qat_common/icp_qat_fw_init_admin.h
drivers/crypto/intel/qat/qat_common/icp_qat_hw.h
drivers/crypto/intel/qat/qat_common/qat_algs_send.c
drivers/crypto/intel/qat/qat_common/qat_comp_algs.c
drivers/crypto/intel/qat/qat_common/qat_uclo.c
drivers/crypto/intel/qat/qat_dh895xcc/adf_dh895xcc_hw_data.c
drivers/crypto/intel/qat/qat_dh895xcc/adf_drv.c
drivers/crypto/intel/qat/qat_dh895xccvf/adf_drv.c
drivers/crypto/marvell/cesa/cesa.c
drivers/crypto/mxs-dcp.c
drivers/crypto/n2_core.c
drivers/crypto/omap-aes.c
drivers/crypto/omap-des.c
drivers/crypto/omap-sham.c
drivers/crypto/qce/core.c
drivers/crypto/qcom-rng.c
drivers/crypto/rockchip/rk3288_crypto.c
drivers/crypto/rockchip/rk3288_crypto_ahash.c
drivers/crypto/s5p-sss.c
drivers/crypto/sa2ul.c
drivers/crypto/sahara.c
drivers/crypto/starfive/jh7110-hash.c
drivers/crypto/stm32/stm32-crc32.c
drivers/crypto/stm32/stm32-cryp.c
drivers/crypto/stm32/stm32-hash.c
drivers/crypto/talitos.c
drivers/crypto/vmx/aesp8-ppc.pl
drivers/crypto/xilinx/zynqmp-aes-gcm.c
drivers/crypto/xilinx/zynqmp-sha.c
drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.c
drivers/net/ethernet/chelsio/inline_crypto/ch_ipsec/chcr_ipsec.h
drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
drivers/net/wireguard/cookie.c
drivers/net/wireguard/netlink.c
drivers/net/wireguard/noise.c
fs/crypto/keysetup_v1.c
fs/smb/server/ksmbd_spnego_negtokeninit.asn1
fs/smb/server/ksmbd_spnego_negtokentarg.asn1
fs/ubifs/auth.c
fs/ubifs/replay.c
fs/ubifs/ubifs.h
include/crypto/aead.h
include/crypto/akcipher.h
include/crypto/algapi.h
include/crypto/engine.h
include/crypto/hash.h
include/crypto/hash_info.h
include/crypto/internal/hash.h
include/crypto/internal/skcipher.h
include/crypto/sig.h
include/crypto/skcipher.h
include/linux/crypto.h
include/linux/hisi_acc_qm.h
include/linux/hw_random.h
include/linux/oid_registry.h
include/linux/units.h
include/linux/verification.h
include/uapi/linux/hash_info.h
kernel/module/Kconfig
kernel/padata.c
net/bluetooth/smp.c
net/ceph/messenger_v2.c
net/ipv4/ah4.c
net/ipv4/netfilter/nf_nat_snmp_basic.asn1
net/ipv6/ah6.c
net/mptcp/subflow.c
net/sunrpc/auth_gss/gss_krb5_crypto.c
net/sunrpc/auth_gss/gss_krb5_unseal.c
net/xfrm/Kconfig
net/xfrm/xfrm_algo.c
security/integrity/evm/evm_main.c
security/keys/encrypted-keys/encrypted.c
tools/crypto/ccp/dbc.c
tools/crypto/ccp/dbc.py
tools/crypto/ccp/test_dbc.py

diff --git a/Documentation/ABI/testing/debugfs-driver-qat b/Documentation/ABI/testing/debugfs-driver-qat
index 6731ffacc5f0c6a299344667d5a01151bae080b1..b2db010d851eeb67febe0dda7f5ca0dc5e7d1ff9 100644
@@ -1,4 +1,4 @@
-What:          /sys/kernel/debug/qat_<device>_<BDF>/qat/fw_counters
+What:          /sys/kernel/debug/qat_<device>_<BDF>/fw_counters
 Date:          November 2023
 KernelVersion: 6.6
 Contact:       qat-linux@intel.com
@@ -59,3 +59,25 @@ Description: (RO) Read returns the device health status.
 
                The driver does not monitor for Heartbeat. It is left for a user
                to poll the status periodically.
+
+What:          /sys/kernel/debug/qat_<device>_<BDF>/pm_status
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:   (RO) Read returns power management information specific to the
+               QAT device.
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/kernel/debug/qat_<device>_<BDF>/cnv_errors
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:   (RO) Read returns, for each Acceleration Engine (AE), the number
+               of errors and the type of the last error detected by the device
+               when performing verified compression.
+               Reported counters::
+
+                       <N>: Number of Compress and Verify (CnV) errors and type
+                            of the last CnV error detected by Acceleration
+                            Engine N.
diff --git a/Documentation/ABI/testing/sysfs-driver-qat b/Documentation/ABI/testing/sysfs-driver-qat
index ef6d6c57105efbf199495f3c9718a2ae9e0eb68f..bbf329cf0d67bc10c5e3b74fdf7a1e34317688ef 100644
@@ -29,6 +29,8 @@ Description:  (RW) Reports the current configuration of the QAT device.
                  services
                * asym;sym: identical to sym;asym
                * dc: the device is configured for running compression services
+               * dcc: identical to dc but enables the dc chaining feature,
+                 hash then compression. If this is not required, choose dc
                * sym: the device is configured for running symmetric crypto
                  services
                * asym: the device is configured for running asymmetric crypto
@@ -93,3 +95,49 @@ Description: (RW) This configuration option provides a way to force the device i
                        0
 
                This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat/rp2srv
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RW) This attribute provides a way for a user to query a
+               specific ring pair for the type of service that it is currently
+               configured for.
+
+               When written to, the value is cached and used to perform the
+               read operation. Allowed values are in the range 0 to N-1, where
+               N is the max number of ring pairs supported by a device. This
+               can be queried using the attribute qat/num_rps.
+
+               A read returns the service associated with the queried ring pair.
+
+               The values are:
+
+               * dc: the ring pair is configured for running compression services
+               * sym: the ring pair is configured for running symmetric crypto
+                 services
+               * asym: the ring pair is configured for running asymmetric crypto
+                 services
+
+               Example usage::
+
+                       # echo 1 > /sys/bus/pci/devices/<BDF>/qat/rp2srv
+                       # cat /sys/bus/pci/devices/<BDF>/qat/rp2srv
+                       sym
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat/num_rps
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RO) Returns the number of ring pairs that a single device has.
+
+               Example usage::
+
+                       # cat /sys/bus/pci/devices/<BDF>/qat/num_rps
+                       64
+
+               This attribute is only available for qat_4xxx devices.
diff --git a/Documentation/ABI/testing/sysfs-driver-qat_ras b/Documentation/ABI/testing/sysfs-driver-qat_ras
new file mode 100644
index 0000000..176dea1
--- /dev/null
@@ -0,0 +1,41 @@
+What:          /sys/bus/pci/devices/<BDF>/qat_ras/errors_correctable
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:   (RO) Reports the number of correctable errors detected by the device.
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_ras/errors_nonfatal
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:   (RO) Reports the number of non-fatal errors detected by the device.
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_ras/errors_fatal
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:   (RO) Reports the number of fatal errors detected by the device.
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_ras/reset_error_counters
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:   (WO) Writing to this file resets all error counters of a device.
+
+               The following example shows how to reset the counters::
+
+                       # echo 1 > /sys/bus/pci/devices/<BDF>/qat_ras/reset_error_counters
+                       # cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_correctable
+                       0
+                       # cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_nonfatal
+                       0
+                       # cat /sys/bus/pci/devices/<BDF>/qat_ras/errors_fatal
+                       0
+
+               This attribute is only available for qat_4xxx devices.
diff --git a/Documentation/ABI/testing/sysfs-driver-qat_rl b/Documentation/ABI/testing/sysfs-driver-qat_rl
new file mode 100644
index 0000000..8c282ae
--- /dev/null
@@ -0,0 +1,226 @@
+What:          /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (WO) This attribute is used to perform an operation on an SLA.
+               The supported operations are: add, update, rm, rm_all, and get.
+
+               Input values must be filled through the associated attribute in
+               this group before a write to this file.
+               If the operation completes successfully, the associated
+               attributes will be updated.
+               The associated attributes are: cir, pir, srv, rp, and id.
+
+               Supported operations:
+
+               * add: Creates a new SLA with the provided inputs from user.
+                       * Inputs: cir, pir, srv, and rp
+                       * Output: id
+
+               * get: Returns the configuration of the SLA specified in the id attribute
+                       * Inputs: id
+                       * Outputs: cir, pir, srv, and rp
+
+               * update: Updates the SLA with new values set in the following attributes
+                       * Inputs: id, cir, and pir
+
+               * rm: Removes the SLA specified in the id attribute.
+                       * Inputs: id
+
+               * rm_all: Removes all the configured SLAs.
+                       * Inputs: None
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_rl/rp
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RW) When read, reports the current assigned ring pairs for the
+               queried SLA.
+               When written to, configures the ring pairs associated with a new SLA.
+
+               The value is a 64-bit bit mask and is written/displayed in hex.
+               Each bit of this mask represents a single ring pair, i.e.
+               bit 0 == ring pair id 0; bit 2 == ring pair id 2.
+
+               Selected ring pairs must be assigned to a single service,
+               i.e. the one provided with the srv attribute. The service
+               assigned to a certain ring pair can be checked by querying
+               the attribute qat/rp2srv.
+
+               The maximum number of ring pairs is 4 per SLA.
+
+               Applicability in sla_op:
+
+               * WRITE: add operation
+               * READ: get operation
+
+               Example usage::
+
+                       ## Read
+                       # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/rp
+                       0x5
+
+                       ## Write
+                       # echo 0x5 > /sys/bus/pci/devices/<BDF>/qat_rl/rp
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_rl/id
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RW) If written to, the value is used to retrieve a particular
+               SLA and operate on it.
+               This is valid only for the following operations: update, rm,
+               and get.
+               A read of this attribute is only guaranteed to return valid
+               data after an add or get operation.
+
+               Applicability in sla_op:
+
+               * WRITE: rm and update operations
+               * READ: add and get operations
+
+               Example usage::
+
+                       ## Read
+                       ## Set attributes e.g. cir, pir, srv, etc
+                       # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/id
+                       4
+
+                       ## Write
+                       # echo 7 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+                       # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/rp
+                       0x5  ## ring pair ID 0 and ring pair ID 2
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_rl/cir
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RW) Committed information rate (CIR). Rate guaranteed to be
+               achieved by a particular SLA. The value is expressed in
+               permille scale, i.e. 1000 refers to the maximum device
+               throughput for a selected service.
+
+               After sending a "get" to sla_op, this will be populated with the
+               CIR for that queried SLA.
+               Write to this file before sending an "add/update" sla_op, to set
+               the SLA to the specified value.
+
+               Applicability in sla_op:
+
+               * WRITE: add and update operations
+               * READ: get operation
+
+               Example usage::
+
+                       ## Write
+                       # echo 500 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
+                       # echo "add" /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+
+                       ## Read
+                       # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+                       # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/cir
+                       500
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_rl/pir
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RW) Peak information rate (PIR). The maximum rate that can be
+               achieved by that particular SLA. An SLA can reach a value
+               between CIR and PIR when the device is not fully utilized by
+               requests from other users (assigned to different SLAs).
+
+               After sending a "get" to sla_op, this will be populated with the
+               PIR for that queried SLA.
+               Write to this file before sending an "add/update" sla_op, to set
+               the SLA to the specified value.
+
+               Applicability in sla_op:
+
+               * WRITE: add and update operations
+               * READ: get operation
+
+               Example usage::
+
+                       ## Write
+                       # echo 750 > /sys/bus/pci/devices/<BDF>/qat_rl/pir
+                       # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+
+                       ## Read
+                       # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+                       # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/pir
+                       750
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_rl/srv
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RW) Service (SRV). Represents the service (sym, asym, dc)
+               associated with an SLA.
+               Can be written to or queried to set/show the SRV type for an SLA.
+               The SRV attribute is used to specify the SRV type before adding
+               an SLA. After an SLA is configured, it reports the service
+               associated with that SLA.
+
+               Applicability in sla_op:
+
+               * WRITE: add and update operations
+               * READ: get operation
+
+               Example usage::
+
+                       ## Write
+                       # echo "dc" > /sys/bus/pci/devices/<BDF>/qat_rl/srv
+                       # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/id
+                       4
+
+                       ## Read
+                       # echo 4 > /sys/bus/pci/devices/<BDF>/qat_rl/id
+                       # echo "get" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/srv
+                       dc
+
+               This attribute is only available for qat_4xxx devices.
+
+What:          /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+Date:          January 2024
+KernelVersion: 6.7
+Contact:       qat-linux@intel.com
+Description:
+               (RW) Returns the remaining capability for a particular
+               service/SLA, i.e. the remaining rate that a new SLA can be
+               set to, or by which an existing SLA can be increased. Write
+               a service name to this file to select the service to query.
+
+               Example usage::
+
+                       # echo "asym" > /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+                       250
+                       # echo 250 > /sys/bus/pci/devices/<BDF>/qat_rl/cir
+                       # echo "add" > /sys/bus/pci/devices/<BDF>/qat_rl/sla_op
+                       # cat /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
+                       0
+
+               This attribute is only available for qat_4xxx devices.
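
Taken together, the qat_rl attributes above form a small fill-inputs,
trigger, read-outputs protocol. As a purely illustrative userspace sketch
(the BDF, rates and ring-pair mask below are made-up values, not defaults,
and error handling is minimal):

	/* Sketch: create an SLA through the qat_rl sysfs group. */
	#include <stdio.h>

	#define RL "/sys/bus/pci/devices/0000:6b:00.0/qat_rl/" /* hypothetical BDF */

	static int wr(const char *attr, const char *val)
	{
		char path[256];
		FILE *f;

		snprintf(path, sizeof(path), RL "%s", attr);
		f = fopen(path, "w");
		if (!f)
			return -1;
		fputs(val, f);
		return fclose(f);
	}

	int main(void)
	{
		char id[16];
		FILE *f;

		/* Fill the inputs, then trigger the "add" operation. */
		if (wr("cir", "500") || wr("pir", "750") ||
		    wr("srv", "sym") || wr("rp", "0x5") || /* ring pairs 0 and 2 */
		    wr("sla_op", "add"))
			return 1;

		/* On success, the id attribute holds the new SLA's identifier. */
		f = fopen(RL "id", "r");
		if (f && fgets(id, sizeof(id), f))
			printf("new SLA id: %s", id);
		if (f)
			fclose(f);
		return 0;
	}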
diff --git a/Documentation/admin-guide/module-signing.rst b/Documentation/admin-guide/module-signing.rst
index 2898b270329785b57f4525b986d62259210018d4..a8667a777490a8047e5cc3d0a6dba0bcd443eb82 100644
@@ -28,10 +28,10 @@ trusted userspace bits.
 
 This facility uses X.509 ITU-T standard certificates to encode the public keys
 involved.  The signatures are not themselves encoded in any industrial standard
-type.  The facility currently only supports the RSA public key encryption
-standard (though it is pluggable and permits others to be used).  The possible
-hash algorithms that can be used are SHA-1, SHA-224, SHA-256, SHA-384, and
-SHA-512 (the algorithm is selected by data in the signature).
+type.  The built-in facility currently only supports the RSA and NIST P-384
+ECDSA public key signing standards (though it is pluggable and permits others to be
+used).  The possible hash algorithms that can be used are SHA-2 and SHA-3 of
+sizes 256, 384, and 512 (the algorithm is selected by data in the signature).
 
 
 ==========================
@@ -81,11 +81,12 @@ This has a number of options available:
      sign the modules with:
 
         =============================== ==========================================
-       ``CONFIG_MODULE_SIG_SHA1``      :menuselection:`Sign modules with SHA-1`
-       ``CONFIG_MODULE_SIG_SHA224``    :menuselection:`Sign modules with SHA-224`
        ``CONFIG_MODULE_SIG_SHA256``    :menuselection:`Sign modules with SHA-256`
        ``CONFIG_MODULE_SIG_SHA384``    :menuselection:`Sign modules with SHA-384`
        ``CONFIG_MODULE_SIG_SHA512``    :menuselection:`Sign modules with SHA-512`
+       ``CONFIG_MODULE_SIG_SHA3_256``  :menuselection:`Sign modules with SHA3-256`
+       ``CONFIG_MODULE_SIG_SHA3_384``  :menuselection:`Sign modules with SHA3-384`
+       ``CONFIG_MODULE_SIG_SHA3_512``  :menuselection:`Sign modules with SHA3-512`
         =============================== ==========================================
 
      The algorithm selected here will also be built into the kernel (rather
@@ -145,6 +146,10 @@ into vmlinux) using parameters in the::
 
 file (which is also generated if it does not already exist).
 
+One can select between RSA (``MODULE_SIG_KEY_TYPE_RSA``) and ECDSA
+(``MODULE_SIG_KEY_TYPE_ECDSA``) to generate either an RSA 4k or a NIST
+P-384 keypair.
+
 It is strongly recommended that you provide your own x509.genkey file.
 
 Most notably, in the x509.genkey file, the req_distinguished_name section
diff --git a/Documentation/crypto/devel-algos.rst b/Documentation/crypto/devel-algos.rst
index 3506899ef83e394cd24463dbfcd170496c44ab2d..9b7782f4f6e0a8c2b8a0edf4faa8c1278975e025 100644
@@ -235,6 +235,4 @@ Specifics Of Asynchronous HASH Transformation
 
 Some of the drivers will want to use the Generic ScatterWalk in case the
 implementation needs to be fed separate chunks of the scatterlist which
-contains the input data. The buffer containing the resulting hash will
-always be properly aligned to .cra_alignmask so there is no need to
-worry about this.
+contains the input data.
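
For context, the "Generic ScatterWalk" referred to above is the helper set in
include/crypto/scatterwalk.h. A rough sketch of the usual walk loop, under the
assumption that the long-standing scatterwalk_map()/scatterwalk_done() helpers
are used as in existing hash drivers (this is not code from this patch):

	#include <crypto/scatterwalk.h>

	/* Sketch: feed a scatterlist to a hash engine in mappable chunks. */
	static void demo_hash_sg(struct scatterlist *sg, unsigned int nbytes)
	{
		struct scatter_walk walk;

		scatterwalk_start(&walk, sg);
		while (nbytes) {
			unsigned int n = scatterwalk_clamp(&walk, nbytes);
			void *vaddr = scatterwalk_map(&walk);

			/* ... feed n bytes at vaddr to the engine ... */

			scatterwalk_unmap(vaddr);
			scatterwalk_advance(&walk, n);
			/* moves on to the next page/entry when needed */
			scatterwalk_done(&walk, 0, nbytes - n);
			nbytes -= n;
		}
	}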
diff --git a/Documentation/devicetree/bindings/crypto/fsl-imx-sahara.yaml b/Documentation/devicetree/bindings/crypto/fsl-imx-sahara.yaml
index d531f3af3ea45294bb8fce996a4395bd3528892b..41df80bcdcd9d914de48681bc92fb9c18851266f 100644
@@ -4,7 +4,7 @@
 $id: http://devicetree.org/schemas/crypto/fsl-imx-sahara.yaml#
 $schema: http://devicetree.org/meta-schemas/core.yaml#
 
-title: Freescale SAHARA Cryptographic Accelerator included in some i.MX chips
+title: Freescale SAHARA Cryptographic Accelerator
 
 maintainers:
   - Steffen Trumtrar <s.trumtrar@pengutronix.de>
@@ -19,19 +19,56 @@ properties:
     maxItems: 1
 
   interrupts:
-    maxItems: 1
+    items:
+      - description: SAHARA Interrupt for Host 0
+      - description: SAHARA Interrupt for Host 1
+    minItems: 1
+
+  clocks:
+    items:
+      - description: Sahara IPG clock
+      - description: Sahara AHB clock
+
+  clock-names:
+    items:
+      - const: ipg
+      - const: ahb
 
 required:
   - compatible
   - reg
   - interrupts
+  - clocks
+  - clock-names
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - fsl,imx53-sahara
+    then:
+      properties:
+        interrupts:
+          minItems: 2
+          maxItems: 2
+    else:
+      properties:
+        interrupts:
+          maxItems: 1
 
 additionalProperties: false
 
 examples:
   - |
+    #include <dt-bindings/clock/imx27-clock.h>
+
     crypto@10025000 {
         compatible = "fsl,imx27-sahara";
-        reg = < 0x10025000 0x800>;
+        reg = <0x10025000 0x800>;
         interrupts = <75>;
+        clocks = <&clks IMX27_CLK_SAHARA_IPG_GATE>,
+                 <&clks IMX27_CLK_SAHARA_AHB_GATE>;
+        clock-names = "ipg", "ahb";
     };
diff --git a/Documentation/devicetree/bindings/crypto/qcom,inline-crypto-engine.yaml b/Documentation/devicetree/bindings/crypto/qcom,inline-crypto-engine.yaml
index 7da9aa82d8374e9b0717981ab9e83a783106fe34..ca4f7d1cefaa99064aea48fe0abf8b98d4f58908 100644
@@ -13,6 +13,7 @@ properties:
   compatible:
     items:
       - enum:
+          - qcom,sa8775p-inline-crypto-engine
           - qcom,sm8450-inline-crypto-engine
           - qcom,sm8550-inline-crypto-engine
       - const: qcom,inline-crypto-engine
diff --git a/Documentation/devicetree/bindings/crypto/qcom,prng.yaml b/Documentation/devicetree/bindings/crypto/qcom,prng.yaml
index bb42f4588b40a7c8251a8c5e4bf31f28cd1cf9f8..13070db0f70ccca500f941144d4eedf9ae2e747a 100644
@@ -11,9 +11,17 @@ maintainers:
 
 properties:
   compatible:
-    enum:
-      - qcom,prng  # 8916 etc.
-      - qcom,prng-ee  # 8996 and later using EE
+    oneOf:
+      - enum:
+          - qcom,prng  # 8916 etc.
+          - qcom,prng-ee  # 8996 and later using EE
+      - items:
+          - enum:
+              - qcom,sa8775p-trng
+              - qcom,sc7280-trng
+              - qcom,sm8450-trng
+              - qcom,sm8550-trng
+          - const: qcom,trng
 
   reg:
     maxItems: 1
@@ -28,8 +36,18 @@ properties:
 required:
   - compatible
   - reg
-  - clocks
-  - clock-names
+
+allOf:
+  - if:
+      not:
+        properties:
+          compatible:
+            contains:
+              const: qcom,trng
+    then:
+      required:
+        - clocks
+        - clock-names
 
 additionalProperties: false
 
diff --git a/Documentation/devicetree/bindings/rng/amlogic,meson-rng.yaml b/Documentation/devicetree/bindings/rng/amlogic,meson-rng.yaml
index 457a6e43d810fc9ec825f2ae7f18ca6b54bd9916..afa52af442a7420c9045e6ab7ba2f4469f878523 100644
@@ -14,6 +14,7 @@ properties:
   compatible:
     enum:
       - amlogic,meson-rng
+      - amlogic,meson-s4-rng
 
   reg:
     maxItems: 1
diff --git a/Documentation/devicetree/bindings/rng/st,stm32-rng.yaml b/Documentation/devicetree/bindings/rng/st,stm32-rng.yaml
index 187b172d0cca5ccb47ffced6d88d10f90a13b02e..717f6b321f884d39dc197796da8a4e33e59c3f1d 100644
@@ -15,7 +15,9 @@ maintainers:
 
 properties:
   compatible:
-    const: st,stm32-rng
+    enum:
+      - st,stm32-rng
+      - st,stm32mp13-rng
 
   reg:
     maxItems: 1
@@ -30,11 +32,27 @@ properties:
     type: boolean
     description: If set enable the clock detection management
 
+  st,rng-lock-conf:
+    type: boolean
+    description: If set, the RNG configuration in RNG_CR, RNG_HTCR and
+                  RNG_NSCR will be locked.
+
 required:
   - compatible
   - reg
   - clocks
 
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
+              - st,stm32-rng
+    then:
+      properties:
+        st,rng-lock-conf: false
+
 additionalProperties: false
 
 examples:
diff --git a/MAINTAINERS b/MAINTAINERS
index b4046e7097874006be806475dc119877c1ea137f..7ddf1db587c1a193604ce83079ba24347131f3e6 100644
@@ -908,7 +908,7 @@ F:  drivers/crypto/ccp/
 F:     include/linux/ccp.h
 
 AMD CRYPTOGRAPHIC COPROCESSOR (CCP) DRIVER - SEV SUPPORT
-M:     Brijesh Singh <brijesh.singh@amd.com>
+M:     Ashish Kalra <ashish.kalra@amd.com>
 M:     Tom Lendacky <thomas.lendacky@amd.com>
 L:     linux-crypto@vger.kernel.org
 S:     Supported
diff --git a/arch/arm/crypto/nhpoly1305-neon-glue.c b/arch/arm/crypto/nhpoly1305-neon-glue.c
index e93e41ff265665a094efd8cc2c791bef5479e7dd..62cf7ccdde736082c0aa5bd48f41e2b6960f9308 100644
@@ -34,6 +34,14 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
        return 0;
 }
 
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+                                 const u8 *src, unsigned int srclen, u8 *out)
+{
+       return crypto_nhpoly1305_init(desc) ?:
+              nhpoly1305_neon_update(desc, src, srclen) ?:
+              crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
        .base.cra_name          = "nhpoly1305",
        .base.cra_driver_name   = "nhpoly1305-neon",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
        .init                   = crypto_nhpoly1305_init,
        .update                 = nhpoly1305_neon_update,
        .final                  = crypto_nhpoly1305_final,
+       .digest                 = nhpoly1305_neon_digest,
        .setkey                 = crypto_nhpoly1305_setkey,
        .descsize               = sizeof(struct nhpoly1305_state),
 };
diff --git a/arch/arm64/crypto/nhpoly1305-neon-glue.c b/arch/arm64/crypto/nhpoly1305-neon-glue.c
index cd882c35d9252d8608d69e4fe0d9d6978681fb37..e4a0b463f080e0a095293ac101103f2cd59f49ac 100644
@@ -34,6 +34,14 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
        return 0;
 }
 
+static int nhpoly1305_neon_digest(struct shash_desc *desc,
+                                 const u8 *src, unsigned int srclen, u8 *out)
+{
+       return crypto_nhpoly1305_init(desc) ?:
+              nhpoly1305_neon_update(desc, src, srclen) ?:
+              crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
        .base.cra_name          = "nhpoly1305",
        .base.cra_driver_name   = "nhpoly1305-neon",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
        .init                   = crypto_nhpoly1305_init,
        .update                 = nhpoly1305_neon_update,
        .final                  = crypto_nhpoly1305_final,
+       .digest                 = nhpoly1305_neon_digest,
        .setkey                 = crypto_nhpoly1305_setkey,
        .descsize               = sizeof(struct nhpoly1305_state),
 };
diff --git a/arch/arm64/crypto/sha1-ce-core.S b/arch/arm64/crypto/sha1-ce-core.S
index 889ca0f8972b3736c044a1f80bdccf5ddc41e4df..9b1f2d82a6feae09aa7bd4b0f25a82918d515db5 100644
        .endm
 
        /*
-        * int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
-        *                       int blocks)
+        * int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+        *                         int blocks)
         */
-SYM_FUNC_START(sha1_ce_transform)
+SYM_FUNC_START(__sha1_ce_transform)
        /* load round constants */
        loadrc          k0.4s, 0x5a827999, w6
        loadrc          k1.4s, 0x6ed9eba1, w6
@@ -147,4 +147,4 @@ CPU_LE(     rev32           v11.16b, v11.16b        )
        str             dgb, [x0, #16]
        mov             w0, w2
        ret
-SYM_FUNC_END(sha1_ce_transform)
+SYM_FUNC_END(__sha1_ce_transform)
diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c
index 71fa4f1122d747b9d67fd0b95944609a89aadad5..1dd93e1fcb39a276a7be635878986c907d63afaf 100644
@@ -29,18 +29,19 @@ struct sha1_ce_state {
 extern const u32 sha1_ce_offsetof_count;
 extern const u32 sha1_ce_offsetof_finalize;
 
-asmlinkage int sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
-                                int blocks);
+asmlinkage int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
+                                  int blocks);
 
-static void __sha1_ce_transform(struct sha1_state *sst, u8 const *src,
-                               int blocks)
+static void sha1_ce_transform(struct sha1_state *sst, u8 const *src,
+                             int blocks)
 {
        while (blocks) {
                int rem;
 
                kernel_neon_begin();
-               rem = sha1_ce_transform(container_of(sst, struct sha1_ce_state,
-                                                    sst), src, blocks);
+               rem = __sha1_ce_transform(container_of(sst,
+                                                      struct sha1_ce_state,
+                                                      sst), src, blocks);
                kernel_neon_end();
                src += (blocks - rem) * SHA1_BLOCK_SIZE;
                blocks = rem;
@@ -59,7 +60,7 @@ static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
                return crypto_sha1_update(desc, data, len);
 
        sctx->finalize = 0;
-       sha1_base_do_update(desc, data, len, __sha1_ce_transform);
+       sha1_base_do_update(desc, data, len, sha1_ce_transform);
 
        return 0;
 }
@@ -79,9 +80,9 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
         */
        sctx->finalize = finalize;
 
-       sha1_base_do_update(desc, data, len, __sha1_ce_transform);
+       sha1_base_do_update(desc, data, len, sha1_ce_transform);
        if (!finalize)
-               sha1_base_do_finalize(desc, __sha1_ce_transform);
+               sha1_base_do_finalize(desc, sha1_ce_transform);
        return sha1_base_finish(desc, out);
 }
 
@@ -93,7 +94,7 @@ static int sha1_ce_final(struct shash_desc *desc, u8 *out)
                return crypto_sha1_finup(desc, NULL, 0, out);
 
        sctx->finalize = 0;
-       sha1_base_do_finalize(desc, __sha1_ce_transform);
+       sha1_base_do_finalize(desc, sha1_ce_transform);
        return sha1_base_finish(desc, out);
 }
 
diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S
index 491179922f49808f1144a7a313b3eb647067d17e..fce84d88ddb2cce382ac6cc3d86dda6e6bb80e43 100644
        .word           0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2
 
        /*
-        * void sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
-        *                        int blocks)
+        * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+        *                           int blocks)
         */
        .text
-SYM_FUNC_START(sha2_ce_transform)
+SYM_FUNC_START(__sha256_ce_transform)
        /* load round constants */
        adr_l           x8, .Lsha2_rcon
        ld1             { v0.4s- v3.4s}, [x8], #64
@@ -154,4 +154,4 @@ CPU_LE(     rev32           v19.16b, v19.16b        )
 3:     st1             {dgav.4s, dgbv.4s}, [x0]
        mov             w0, w2
        ret
-SYM_FUNC_END(sha2_ce_transform)
+SYM_FUNC_END(__sha256_ce_transform)
diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c
index c57a6119fefc586d9ac1264efaff8159bfe6aec3..0a44d2e7ee1f7b1d5da894b6229a75c3a3c7bde4 100644
@@ -30,18 +30,19 @@ struct sha256_ce_state {
 extern const u32 sha256_ce_offsetof_count;
 extern const u32 sha256_ce_offsetof_finalize;
 
-asmlinkage int sha2_ce_transform(struct sha256_ce_state *sst, u8 const *src,
-                                int blocks);
+asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src,
+                                    int blocks);
 
-static void __sha2_ce_transform(struct sha256_state *sst, u8 const *src,
+static void sha256_ce_transform(struct sha256_state *sst, u8 const *src,
                                int blocks)
 {
        while (blocks) {
                int rem;
 
                kernel_neon_begin();
-               rem = sha2_ce_transform(container_of(sst, struct sha256_ce_state,
-                                                    sst), src, blocks);
+               rem = __sha256_ce_transform(container_of(sst,
+                                                        struct sha256_ce_state,
+                                                        sst), src, blocks);
                kernel_neon_end();
                src += (blocks - rem) * SHA256_BLOCK_SIZE;
                blocks = rem;
@@ -55,8 +56,8 @@ const u32 sha256_ce_offsetof_finalize = offsetof(struct sha256_ce_state,
 
 asmlinkage void sha256_block_data_order(u32 *digest, u8 const *src, int blocks);
 
-static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
-                                     int blocks)
+static void sha256_arm64_transform(struct sha256_state *sst, u8 const *src,
+                                  int blocks)
 {
        sha256_block_data_order(sst->state, src, blocks);
 }
@@ -68,10 +69,10 @@ static int sha256_ce_update(struct shash_desc *desc, const u8 *data,
 
        if (!crypto_simd_usable())
                return sha256_base_do_update(desc, data, len,
-                               __sha256_block_data_order);
+                                            sha256_arm64_transform);
 
        sctx->finalize = 0;
-       sha256_base_do_update(desc, data, len, __sha2_ce_transform);
+       sha256_base_do_update(desc, data, len, sha256_ce_transform);
 
        return 0;
 }
@@ -85,8 +86,8 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
        if (!crypto_simd_usable()) {
                if (len)
                        sha256_base_do_update(desc, data, len,
-                               __sha256_block_data_order);
-               sha256_base_do_finalize(desc, __sha256_block_data_order);
+                                             sha256_arm64_transform);
+               sha256_base_do_finalize(desc, sha256_arm64_transform);
                return sha256_base_finish(desc, out);
        }
 
@@ -96,9 +97,9 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data,
         */
        sctx->finalize = finalize;
 
-       sha256_base_do_update(desc, data, len, __sha2_ce_transform);
+       sha256_base_do_update(desc, data, len, sha256_ce_transform);
        if (!finalize)
-               sha256_base_do_finalize(desc, __sha2_ce_transform);
+               sha256_base_do_finalize(desc, sha256_ce_transform);
        return sha256_base_finish(desc, out);
 }
 
@@ -107,15 +108,22 @@ static int sha256_ce_final(struct shash_desc *desc, u8 *out)
        struct sha256_ce_state *sctx = shash_desc_ctx(desc);
 
        if (!crypto_simd_usable()) {
-               sha256_base_do_finalize(desc, __sha256_block_data_order);
+               sha256_base_do_finalize(desc, sha256_arm64_transform);
                return sha256_base_finish(desc, out);
        }
 
        sctx->finalize = 0;
-       sha256_base_do_finalize(desc, __sha2_ce_transform);
+       sha256_base_do_finalize(desc, sha256_ce_transform);
        return sha256_base_finish(desc, out);
 }
 
+static int sha256_ce_digest(struct shash_desc *desc, const u8 *data,
+                           unsigned int len, u8 *out)
+{
+       sha256_base_init(desc);
+       return sha256_ce_finup(desc, data, len, out);
+}
+
 static int sha256_ce_export(struct shash_desc *desc, void *out)
 {
        struct sha256_ce_state *sctx = shash_desc_ctx(desc);
@@ -155,6 +163,7 @@ static struct shash_alg algs[] = { {
        .update                 = sha256_ce_update,
        .final                  = sha256_ce_final,
        .finup                  = sha256_ce_finup,
+       .digest                 = sha256_ce_digest,
        .export                 = sha256_ce_export,
        .import                 = sha256_ce_import,
        .descsize               = sizeof(struct sha256_ce_state),
diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c
index 9b5c86e07a9af3db024c34bedeedf88a7c94b447..35356987cc1e0b4d9fdb5cfe93dbd4b334e2538c 100644
@@ -27,8 +27,8 @@ asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
                                        unsigned int num_blks);
 EXPORT_SYMBOL(sha256_block_data_order);
 
-static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
-                                     int blocks)
+static void sha256_arm64_transform(struct sha256_state *sst, u8 const *src,
+                                  int blocks)
 {
        sha256_block_data_order(sst->state, src, blocks);
 }
@@ -36,8 +36,8 @@ static void __sha256_block_data_order(struct sha256_state *sst, u8 const *src,
 asmlinkage void sha256_block_neon(u32 *digest, const void *data,
                                  unsigned int num_blks);
 
-static void __sha256_block_neon(struct sha256_state *sst, u8 const *src,
-                               int blocks)
+static void sha256_neon_transform(struct sha256_state *sst, u8 const *src,
+                                 int blocks)
 {
        sha256_block_neon(sst->state, src, blocks);
 }
@@ -45,17 +45,15 @@ static void __sha256_block_neon(struct sha256_state *sst, u8 const *src,
 static int crypto_sha256_arm64_update(struct shash_desc *desc, const u8 *data,
                                      unsigned int len)
 {
-       return sha256_base_do_update(desc, data, len,
-                                    __sha256_block_data_order);
+       return sha256_base_do_update(desc, data, len, sha256_arm64_transform);
 }
 
 static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *data,
                                     unsigned int len, u8 *out)
 {
        if (len)
-               sha256_base_do_update(desc, data, len,
-                                     __sha256_block_data_order);
-       sha256_base_do_finalize(desc, __sha256_block_data_order);
+               sha256_base_do_update(desc, data, len, sha256_arm64_transform);
+       sha256_base_do_finalize(desc, sha256_arm64_transform);
 
        return sha256_base_finish(desc, out);
 }
@@ -98,7 +96,7 @@ static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
 
        if (!crypto_simd_usable())
                return sha256_base_do_update(desc, data, len,
-                               __sha256_block_data_order);
+                               sha256_arm64_transform);
 
        while (len > 0) {
                unsigned int chunk = len;
@@ -114,7 +112,7 @@ static int sha256_update_neon(struct shash_desc *desc, const u8 *data,
                                sctx->count % SHA256_BLOCK_SIZE;
 
                kernel_neon_begin();
-               sha256_base_do_update(desc, data, chunk, __sha256_block_neon);
+               sha256_base_do_update(desc, data, chunk, sha256_neon_transform);
                kernel_neon_end();
                data += chunk;
                len -= chunk;
@@ -128,13 +126,13 @@ static int sha256_finup_neon(struct shash_desc *desc, const u8 *data,
        if (!crypto_simd_usable()) {
                if (len)
                        sha256_base_do_update(desc, data, len,
-                               __sha256_block_data_order);
-               sha256_base_do_finalize(desc, __sha256_block_data_order);
+                               sha256_arm64_transform);
+               sha256_base_do_finalize(desc, sha256_arm64_transform);
        } else {
                if (len)
                        sha256_update_neon(desc, data, len);
                kernel_neon_begin();
-               sha256_base_do_finalize(desc, __sha256_block_neon);
+               sha256_base_do_finalize(desc, sha256_neon_transform);
                kernel_neon_end();
        }
        return sha256_base_finish(desc, out);
diff --git a/arch/arm64/crypto/sha512-ce-core.S b/arch/arm64/crypto/sha512-ce-core.S
index b6a3a36e15f58cf98c7829bc2ad746349d23a74e..91ef68b15fcc65273b94374193698ae291ba2fc7 100644
        .endm
 
        /*
-        * void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
-        *                        int blocks)
+        * int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+        *                           int blocks)
         */
        .text
-SYM_FUNC_START(sha512_ce_transform)
+SYM_FUNC_START(__sha512_ce_transform)
        /* load state */
        ld1             {v8.2d-v11.2d}, [x0]
 
@@ -203,4 +203,4 @@ CPU_LE(     rev64           v19.16b, v19.16b        )
 3:     st1             {v8.2d-v11.2d}, [x0]
        mov             w0, w2
        ret
-SYM_FUNC_END(sha512_ce_transform)
+SYM_FUNC_END(__sha512_ce_transform)
diff --git a/arch/arm64/crypto/sha512-ce-glue.c b/arch/arm64/crypto/sha512-ce-glue.c
index 94cb7580deb7b6bb28e1f074cc606e10446fd780..f3431fc6231540724b82b5f0eb8441f4f53b4c61 100644
@@ -26,27 +26,27 @@ MODULE_LICENSE("GPL v2");
 MODULE_ALIAS_CRYPTO("sha384");
 MODULE_ALIAS_CRYPTO("sha512");
 
-asmlinkage int sha512_ce_transform(struct sha512_state *sst, u8 const *src,
-                                  int blocks);
+asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+                                    int blocks);
 
 asmlinkage void sha512_block_data_order(u64 *digest, u8 const *src, int blocks);
 
-static void __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
-                                 int blocks)
+static void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
+                               int blocks)
 {
        while (blocks) {
                int rem;
 
                kernel_neon_begin();
-               rem = sha512_ce_transform(sst, src, blocks);
+               rem = __sha512_ce_transform(sst, src, blocks);
                kernel_neon_end();
                src += (blocks - rem) * SHA512_BLOCK_SIZE;
                blocks = rem;
        }
 }
 
-static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
-                                     int blocks)
+static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
+                                  int blocks)
 {
        sha512_block_data_order(sst->state, src, blocks);
 }
@@ -54,8 +54,8 @@ static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
 static int sha512_ce_update(struct shash_desc *desc, const u8 *data,
                            unsigned int len)
 {
-       sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform
-                                                  : __sha512_block_data_order;
+       sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
+                                                  : sha512_arm64_transform;
 
        sha512_base_do_update(desc, data, len, fn);
        return 0;
@@ -64,8 +64,8 @@ static int sha512_ce_update(struct shash_desc *desc, const u8 *data,
 static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
                           unsigned int len, u8 *out)
 {
-       sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform
-                                                  : __sha512_block_data_order;
+       sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
+                                                  : sha512_arm64_transform;
 
        sha512_base_do_update(desc, data, len, fn);
        sha512_base_do_finalize(desc, fn);
@@ -74,8 +74,8 @@ static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
 
 static int sha512_ce_final(struct shash_desc *desc, u8 *out)
 {
-       sha512_block_fn *fn = crypto_simd_usable() ? __sha512_ce_transform
-                                                  : __sha512_block_data_order;
+       sha512_block_fn *fn = crypto_simd_usable() ? sha512_ce_transform
+                                                  : sha512_arm64_transform;
 
        sha512_base_do_finalize(desc, fn);
        return sha512_base_finish(desc, out);
diff --git a/arch/arm64/crypto/sha512-glue.c b/arch/arm64/crypto/sha512-glue.c
index 2acff1c7df5d7699d22674d23ead6a57e59e6c0e..62f129dea83d89b5767f093d11bcd06d8a3c0f87 100644
@@ -23,8 +23,8 @@ asmlinkage void sha512_block_data_order(u64 *digest, const void *data,
                                        unsigned int num_blks);
 EXPORT_SYMBOL(sha512_block_data_order);
 
-static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
-                                     int blocks)
+static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
+                                  int blocks)
 {
        sha512_block_data_order(sst->state, src, blocks);
 }
@@ -32,17 +32,15 @@ static void __sha512_block_data_order(struct sha512_state *sst, u8 const *src,
 static int sha512_update(struct shash_desc *desc, const u8 *data,
                         unsigned int len)
 {
-       return sha512_base_do_update(desc, data, len,
-                                    __sha512_block_data_order);
+       return sha512_base_do_update(desc, data, len, sha512_arm64_transform);
 }
 
 static int sha512_finup(struct shash_desc *desc, const u8 *data,
                        unsigned int len, u8 *out)
 {
        if (len)
-               sha512_base_do_update(desc, data, len,
-                                     __sha512_block_data_order);
-       sha512_base_do_finalize(desc, __sha512_block_data_order);
+               sha512_base_do_update(desc, data, len, sha512_arm64_transform);
+       sha512_base_do_finalize(desc, sha512_arm64_transform);
 
        return sha512_base_finish(desc, out);
 }
diff --git a/arch/loongarch/crypto/crc32-loongarch.c b/arch/loongarch/crypto/crc32-loongarch.c
index 1f2a2c3839bcbf8664269f8117e4660cf320a176..a49e507af38c0e667092bfe7839da5391a7734e1 100644
@@ -239,7 +239,6 @@ static struct shash_alg crc32_alg = {
                .cra_priority           =       300,
                .cra_flags              =       CRYPTO_ALG_OPTIONAL_KEY,
                .cra_blocksize          =       CHKSUM_BLOCK_SIZE,
-               .cra_alignmask          =       0,
                .cra_ctxsize            =       sizeof(struct chksum_ctx),
                .cra_module             =       THIS_MODULE,
                .cra_init               =       chksum_cra_init,
@@ -261,7 +260,6 @@ static struct shash_alg crc32c_alg = {
                .cra_priority           =       300,
                .cra_flags              =       CRYPTO_ALG_OPTIONAL_KEY,
                .cra_blocksize          =       CHKSUM_BLOCK_SIZE,
-               .cra_alignmask          =       0,
                .cra_ctxsize            =       sizeof(struct chksum_ctx),
                .cra_module             =       THIS_MODULE,
                .cra_init               =       chksumc_cra_init,
diff --git a/arch/mips/crypto/crc32-mips.c b/arch/mips/crypto/crc32-mips.c
index 3e4f5ba104f89a42fddd04a70e85f947d9ba80d4..ec6d58008f8e10ad3df8256404f1ba17dc8cc9d9 100644
@@ -290,7 +290,6 @@ static struct shash_alg crc32_alg = {
                .cra_priority           =       300,
                .cra_flags              =       CRYPTO_ALG_OPTIONAL_KEY,
                .cra_blocksize          =       CHKSUM_BLOCK_SIZE,
-               .cra_alignmask          =       0,
                .cra_ctxsize            =       sizeof(struct chksum_ctx),
                .cra_module             =       THIS_MODULE,
                .cra_init               =       chksum_cra_init,
@@ -312,7 +311,6 @@ static struct shash_alg crc32c_alg = {
                .cra_priority           =       300,
                .cra_flags              =       CRYPTO_ALG_OPTIONAL_KEY,
                .cra_blocksize          =       CHKSUM_BLOCK_SIZE,
-               .cra_alignmask          =       0,
                .cra_ctxsize            =       sizeof(struct chksum_ctx),
                .cra_module             =       THIS_MODULE,
                .cra_init               =       chksum_cra_init,
index 82efb7f81c2887fb52675305f2d9b7e1b594f337..688db0dcb97d92d746c3c51c194760bb804bd493 100644 (file)
@@ -20,6 +20,7 @@
 
 #include <asm/pstate.h>
 #include <asm/elf.h>
+#include <asm/unaligned.h>
 
 #include "opcodes.h"
 
@@ -35,7 +36,7 @@ static int crc32c_sparc64_setkey(struct crypto_shash *hash, const u8 *key,
 
        if (keylen != sizeof(u32))
                return -EINVAL;
-       *mctx = le32_to_cpup((__le32 *)key);
+       *mctx = get_unaligned_le32(key);
        return 0;
 }
 
@@ -51,18 +52,26 @@ static int crc32c_sparc64_init(struct shash_desc *desc)
 
 extern void crc32c_sparc64(u32 *crcp, const u64 *data, unsigned int len);
 
-static void crc32c_compute(u32 *crcp, const u64 *data, unsigned int len)
+static u32 crc32c_compute(u32 crc, const u8 *data, unsigned int len)
 {
-       unsigned int asm_len;
-
-       asm_len = len & ~7U;
-       if (asm_len) {
-               crc32c_sparc64(crcp, data, asm_len);
-               data += asm_len / 8;
-               len -= asm_len;
+       unsigned int n = -(uintptr_t)data & 7;
+
+       if (n) {
+               /* Data isn't 8-byte aligned.  Align it. */
+               n = min(n, len);
+               crc = __crc32c_le(crc, data, n);
+               data += n;
+               len -= n;
+       }
+       n = len & ~7U;
+       if (n) {
+               crc32c_sparc64(&crc, (const u64 *)data, n);
+               data += n;
+               len -= n;
        }
        if (len)
-               *crcp = __crc32c_le(*crcp, (const unsigned char *) data, len);
+               crc = __crc32c_le(crc, data, len);
+       return crc;
 }
 
 static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data,
@@ -70,19 +79,14 @@ static int crc32c_sparc64_update(struct shash_desc *desc, const u8 *data,
 {
        u32 *crcp = shash_desc_ctx(desc);
 
-       crc32c_compute(crcp, (const u64 *) data, len);
-
+       *crcp = crc32c_compute(*crcp, data, len);
        return 0;
 }
 
-static int __crc32c_sparc64_finup(u32 *crcp, const u8 *data, unsigned int len,
-                                 u8 *out)
+static int __crc32c_sparc64_finup(const u32 *crcp, const u8 *data,
+                                 unsigned int len, u8 *out)
 {
-       u32 tmp = *crcp;
-
-       crc32c_compute(&tmp, (const u64 *) data, len);
-
-       *(__le32 *) out = ~cpu_to_le32(tmp);
+       put_unaligned_le32(~crc32c_compute(*crcp, data, len), out);
        return 0;
 }
 
@@ -96,7 +100,7 @@ static int crc32c_sparc64_final(struct shash_desc *desc, u8 *out)
 {
        u32 *crcp = shash_desc_ctx(desc);
 
-       *(__le32 *) out = ~cpu_to_le32p(crcp);
+       put_unaligned_le32(~*crcp, out);
        return 0;
 }
 
@@ -135,7 +139,6 @@ static struct shash_alg alg = {
                .cra_flags              =       CRYPTO_ALG_OPTIONAL_KEY,
                .cra_blocksize          =       CHKSUM_BLOCK_SIZE,
                .cra_ctxsize            =       sizeof(u32),
-               .cra_alignmask          =       7,
                .cra_module             =       THIS_MODULE,
                .cra_init               =       crc32c_sparc64_cra_init,
        }
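
With cra_alignmask gone from this algorithm (last hunk), the sparc glue now
absorbs misaligned input itself: hash the bytes up to the next 8-byte
boundary with the generic byte-wise CRC, run the assembly core over the
aligned bulk, and finish the tail byte-wise.  The same head/bulk/tail split
in isolation, with my_crc32c_8byte() standing in for crc32c_sparc64():

    #include <linux/crc32.h>    /* __crc32c_le() */
    #include <linux/minmax.h>   /* min() */

    void my_crc32c_8byte(u32 *crcp, const u64 *data, unsigned int len);

    static u32 my_crc32c(u32 crc, const u8 *data, unsigned int len)
    {
            unsigned int n = -(uintptr_t)data & 7; /* bytes to 8-byte boundary */

            if (n) {                               /* unaligned head */
                    n = min(n, len);
                    crc = __crc32c_le(crc, data, n);
                    data += n;
                    len -= n;
            }
            n = len & ~7U;                         /* aligned bulk */
            if (n) {
                    my_crc32c_8byte(&crc, (const u64 *)data, n);
                    data += n;
                    len -= n;
            }
            if (len)                               /* short tail */
                    crc = __crc32c_le(crc, data, len);
            return crc;
    }
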
index 3ac7487ecad2d3f0248ce95a2f8688e75b20895c..187f913cc2390b132ace105cc1420fc1280d9d34 100644 (file)
@@ -672,7 +672,7 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
        add     %r13, %r10
        # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
        sub     $16, %r10
-       # Determine if if partial block is not being filled and
+       # Determine if partial block is not being filled and
        # shift mask accordingly
        jge     .L_no_extra_mask_1_\@
        sub     %r10, %r12
@@ -708,7 +708,7 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
        add     %r13, %r10
        # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
        sub     $16, %r10
-       # Determine if if partial block is not being filled and
+       # Determine if partial block is not being filled and
        # shift mask accordingly
        jge     .L_no_extra_mask_2_\@
        sub     %r10, %r12
index 46cddd78857bd9eb2782d62d36853ece8fb21cd6..74dd230973cf9ec6bd044c4441ec713c6a6e991d 100644 (file)
@@ -753,7 +753,7 @@ VARIABLE_OFFSET = 16*8
         add    %r13, %r10
         # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
         sub    $16, %r10
-        # Determine if if partial block is not being filled and
+        # Determine if partial block is not being filled and
         # shift mask accordingly
         jge    .L_no_extra_mask_1_\@
         sub    %r10, %r12
@@ -789,7 +789,7 @@ VARIABLE_OFFSET = 16*8
         add    %r13, %r10
         # Set r10 to be the amount of data left in CYPH_PLAIN_IN after filling
         sub    $16, %r10
-        # Determine if if partial block is not being filled and
+        # Determine if partial block is not being filled and
         # shift mask accordingly
         jge    .L_no_extra_mask_2_\@
         sub    %r10, %r12
index 39d6a62ac62778339a3efd5bbe5a0096a05f1521..b1d90c25975afb8ac8e33db740efbae58d6f9294 100644 (file)
@@ -61,8 +61,8 @@ struct generic_gcmaes_ctx {
 };
 
 struct aesni_xts_ctx {
-       u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
-       u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
+       struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
+       struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
 };
 
 #define GCM_BLOCK_LEN 16
@@ -80,6 +80,13 @@ struct gcm_context_data {
        u8 hash_keys[GCM_BLOCK_LEN * 16];
 };
 
+static inline void *aes_align_addr(void *addr)
+{
+       if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
+               return addr;
+       return PTR_ALIGN(addr, AESNI_ALIGN);
+}
+
 asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
                             unsigned int key_len);
 asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
@@ -201,32 +208,24 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2);
 static inline struct
 aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
 {
-       unsigned long align = AESNI_ALIGN;
-
-       if (align <= crypto_tfm_ctx_alignment())
-               align = 1;
-       return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+       return aes_align_addr(crypto_aead_ctx(tfm));
 }
 
 static inline struct
 generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
 {
-       unsigned long align = AESNI_ALIGN;
-
-       if (align <= crypto_tfm_ctx_alignment())
-               align = 1;
-       return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+       return aes_align_addr(crypto_aead_ctx(tfm));
 }
 #endif
 
 static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
 {
-       unsigned long addr = (unsigned long)raw_ctx;
-       unsigned long align = AESNI_ALIGN;
+       return aes_align_addr(raw_ctx);
+}
 
-       if (align <= crypto_tfm_ctx_alignment())
-               align = 1;
-       return (struct crypto_aes_ctx *)ALIGN(addr, align);
+static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
+{
+       return aes_align_addr(crypto_skcipher_ctx(tfm));
 }
 
 static int aes_set_key_common(struct crypto_aes_ctx *ctx,
@@ -881,7 +880,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
 static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
                            unsigned int keylen)
 {
-       struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+       struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
        int err;
 
        err = xts_verify_key(tfm, key, keylen);
@@ -891,19 +890,18 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
        keylen /= 2;
 
        /* first half of xts-key is for crypt */
-       err = aes_set_key_common(aes_ctx(ctx->raw_crypt_ctx), key, keylen);
+       err = aes_set_key_common(&ctx->crypt_ctx, key, keylen);
        if (err)
                return err;
 
        /* second half of xts-key is for tweak */
-       return aes_set_key_common(aes_ctx(ctx->raw_tweak_ctx), key + keylen,
-                                 keylen);
+       return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen);
 }
 
 static int xts_crypt(struct skcipher_request *req, bool encrypt)
 {
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-       struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+       struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
        int tail = req->cryptlen % AES_BLOCK_SIZE;
        struct skcipher_request subreq;
        struct skcipher_walk walk;
@@ -939,7 +937,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
        kernel_fpu_begin();
 
        /* calculate first value of T */
-       aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv);
+       aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
 
        while (walk.nbytes > 0) {
                int nbytes = walk.nbytes;
@@ -948,11 +946,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
                        nbytes &= ~(AES_BLOCK_SIZE - 1);
 
                if (encrypt)
-                       aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_encrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          nbytes, walk.iv);
                else
-                       aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_decrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          nbytes, walk.iv);
                kernel_fpu_end();
@@ -980,11 +978,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
 
                kernel_fpu_begin();
                if (encrypt)
-                       aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_encrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          walk.nbytes, walk.iv);
                else
-                       aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+                       aesni_xts_decrypt(&ctx->crypt_ctx,
                                          walk.dst.virt.addr, walk.src.virt.addr,
                                          walk.nbytes, walk.iv);
                kernel_fpu_end();
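
The three open-coded "round the ctx pointer up to AESNI_ALIGN unless the API
already guarantees it" computations collapse into aes_align_addr(), and
struct aesni_xts_ctx gains properly typed members instead of raw byte
arrays, so the per-call aes_ctx() fixups on the XTS paths disappear.  The
helper pattern in isolation (MY_ALIGN standing in for AESNI_ALIGN, which is
16 in this driver):

    #include <crypto/algapi.h>  /* crypto_tfm_ctx_alignment() */
    #include <linux/align.h>    /* PTR_ALIGN() */

    #define MY_ALIGN 16

    static inline void *my_align_ctx(void *addr)
    {
            /* If the API already hands out sufficiently aligned ctx
             * memory, use the pointer as-is ... */
            if (crypto_tfm_ctx_alignment() >= MY_ALIGN)
                    return addr;
            /* ... otherwise round up; cra_ctxsize must include the slack. */
            return PTR_ALIGN(addr, MY_ALIGN);
    }
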
index 46b036204ed918dc047e51f5448941a546484418..c3a872f4d6a773305aca2d06d3e5b9f731d2bf84 100644 (file)
@@ -34,6 +34,14 @@ static int nhpoly1305_avx2_update(struct shash_desc *desc,
        return 0;
 }
 
+static int nhpoly1305_avx2_digest(struct shash_desc *desc,
+                                 const u8 *src, unsigned int srclen, u8 *out)
+{
+       return crypto_nhpoly1305_init(desc) ?:
+              nhpoly1305_avx2_update(desc, src, srclen) ?:
+              crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
        .base.cra_name          = "nhpoly1305",
        .base.cra_driver_name   = "nhpoly1305-avx2",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
        .init                   = crypto_nhpoly1305_init,
        .update                 = nhpoly1305_avx2_update,
        .final                  = crypto_nhpoly1305_final,
+       .digest                 = nhpoly1305_avx2_digest,
        .setkey                 = crypto_nhpoly1305_setkey,
        .descsize               = sizeof(struct nhpoly1305_state),
 };
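
Wiring up .digest gives one-shot users a single entry point instead of three
indirect calls through init/update/final, which is the point of the
indirect-call work in this pull.  The gcc "a ?: b" extension keeps the
composition terse: it evaluates to a when a is non-zero (an error code), so
the chain stops at the first failing step.  In the abstract, with my_*
placeholders:

    /* Declarations only; each step returns 0 on success. */
    static int my_init(struct shash_desc *desc);
    static int my_update(struct shash_desc *desc, const u8 *src, unsigned int len);
    static int my_final(struct shash_desc *desc, u8 *out);

    static int my_digest(struct shash_desc *desc, const u8 *src,
                         unsigned int srclen, u8 *out)
    {
            /* "a ?: b" short-circuits on the first non-zero (error) value. */
            return my_init(desc) ?:
                   my_update(desc, src, srclen) ?:
                   my_final(desc, out);
    }

The identical shape recurs in the sse2 glue below and in the sha256 glue
further down.
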
index 4a4970d751076826a5ae52293c00ce1c128de52d..a268a8439a5c98a1fa460ec1043cd55d68d02c6c 100644 (file)
@@ -34,6 +34,14 @@ static int nhpoly1305_sse2_update(struct shash_desc *desc,
        return 0;
 }
 
+static int nhpoly1305_sse2_digest(struct shash_desc *desc,
+                                 const u8 *src, unsigned int srclen, u8 *out)
+{
+       return crypto_nhpoly1305_init(desc) ?:
+              nhpoly1305_sse2_update(desc, src, srclen) ?:
+              crypto_nhpoly1305_final(desc, out);
+}
+
 static struct shash_alg nhpoly1305_alg = {
        .base.cra_name          = "nhpoly1305",
        .base.cra_driver_name   = "nhpoly1305-sse2",
@@ -44,6 +52,7 @@ static struct shash_alg nhpoly1305_alg = {
        .init                   = crypto_nhpoly1305_init,
        .update                 = nhpoly1305_sse2_update,
        .final                  = crypto_nhpoly1305_final,
+       .digest                 = nhpoly1305_sse2_digest,
        .setkey                 = crypto_nhpoly1305_setkey,
        .descsize               = sizeof(struct nhpoly1305_state),
 };
index 44340a1139e0b7cd57be7ee46491199be33ecd31..959afa705e95ca16699df719964bf222a10784e8 100644 (file)
 #include <linux/types.h>
 #include <crypto/sha1.h>
 #include <crypto/sha1_base.h>
+#include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+       X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+       X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+       X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
+       {}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int sha1_update(struct shash_desc *desc, const u8 *data,
                             unsigned int len, sha1_block_fn *sha1_xform)
 {
@@ -301,6 +310,9 @@ static inline void unregister_sha1_ni(void) { }
 
 static int __init sha1_ssse3_mod_init(void)
 {
+       if (!x86_match_cpu(module_cpu_ids))
+               return -ENODEV;
+
        if (register_sha1_ssse3())
                goto fail;
 
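
The new module_cpu_ids table does two jobs: x86_match_cpu() lets the init
function bail out cleanly on CPUs that have none of the usable features, and
MODULE_DEVICE_TABLE() exports the list so udev autoloads the module only on
matching hardware.  A stripped-down sketch:

    #include <linux/module.h>
    #include <asm/cpu_device_id.h>

    static const struct x86_cpu_id my_cpu_ids[] = {
            X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
            {}
    };
    MODULE_DEVICE_TABLE(x86cpu, my_cpu_ids);

    static int __init my_mod_init(void)
    {
            if (!x86_match_cpu(my_cpu_ids))
                    return -ENODEV;  /* no usable feature on this CPU */
            /* ... register algorithms here ... */
            return 0;
    }
    module_init(my_mod_init);
    MODULE_LICENSE("GPL");
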
index 3a5f6be7dbba4e5af1cc7e0103ff6cc9ebbf73b2..4c0383a90e1147eef0253d51889f56e557581700 100644 (file)
 #include <crypto/sha2.h>
 #include <crypto/sha256_base.h>
 #include <linux/string.h>
+#include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
 asmlinkage void sha256_transform_ssse3(struct sha256_state *state,
                                       const u8 *data, int blocks);
 
+static const struct x86_cpu_id module_cpu_ids[] = {
+       X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
+       X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
+       X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
+       {}
+};
+MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
+
 static int _sha256_update(struct shash_desc *desc, const u8 *data,
                          unsigned int len, sha256_block_fn *sha256_xform)
 {
@@ -98,12 +107,20 @@ static int sha256_ssse3_final(struct shash_desc *desc, u8 *out)
        return sha256_ssse3_finup(desc, NULL, 0, out);
 }
 
+static int sha256_ssse3_digest(struct shash_desc *desc, const u8 *data,
+             unsigned int len, u8 *out)
+{
+       return sha256_base_init(desc) ?:
+              sha256_ssse3_finup(desc, data, len, out);
+}
+
 static struct shash_alg sha256_ssse3_algs[] = { {
        .digestsize     =       SHA256_DIGEST_SIZE,
        .init           =       sha256_base_init,
        .update         =       sha256_ssse3_update,
        .final          =       sha256_ssse3_final,
        .finup          =       sha256_ssse3_finup,
+       .digest         =       sha256_ssse3_digest,
        .descsize       =       sizeof(struct sha256_state),
        .base           =       {
                .cra_name       =       "sha256",
@@ -163,12 +180,20 @@ static int sha256_avx_final(struct shash_desc *desc, u8 *out)
        return sha256_avx_finup(desc, NULL, 0, out);
 }
 
+static int sha256_avx_digest(struct shash_desc *desc, const u8 *data,
+                     unsigned int len, u8 *out)
+{
+       return sha256_base_init(desc) ?:
+              sha256_avx_finup(desc, data, len, out);
+}
+
 static struct shash_alg sha256_avx_algs[] = { {
        .digestsize     =       SHA256_DIGEST_SIZE,
        .init           =       sha256_base_init,
        .update         =       sha256_avx_update,
        .final          =       sha256_avx_final,
        .finup          =       sha256_avx_finup,
+       .digest         =       sha256_avx_digest,
        .descsize       =       sizeof(struct sha256_state),
        .base           =       {
                .cra_name       =       "sha256",
@@ -239,12 +264,20 @@ static int sha256_avx2_final(struct shash_desc *desc, u8 *out)
        return sha256_avx2_finup(desc, NULL, 0, out);
 }
 
+static int sha256_avx2_digest(struct shash_desc *desc, const u8 *data,
+                     unsigned int len, u8 *out)
+{
+       return sha256_base_init(desc) ?:
+              sha256_avx2_finup(desc, data, len, out);
+}
+
 static struct shash_alg sha256_avx2_algs[] = { {
        .digestsize     =       SHA256_DIGEST_SIZE,
        .init           =       sha256_base_init,
        .update         =       sha256_avx2_update,
        .final          =       sha256_avx2_final,
        .finup          =       sha256_avx2_finup,
+       .digest         =       sha256_avx2_digest,
        .descsize       =       sizeof(struct sha256_state),
        .base           =       {
                .cra_name       =       "sha256",
@@ -314,12 +347,20 @@ static int sha256_ni_final(struct shash_desc *desc, u8 *out)
        return sha256_ni_finup(desc, NULL, 0, out);
 }
 
+static int sha256_ni_digest(struct shash_desc *desc, const u8 *data,
+                     unsigned int len, u8 *out)
+{
+       return sha256_base_init(desc) ?:
+              sha256_ni_finup(desc, data, len, out);
+}
+
 static struct shash_alg sha256_ni_algs[] = { {
        .digestsize     =       SHA256_DIGEST_SIZE,
        .init           =       sha256_base_init,
        .update         =       sha256_ni_update,
        .final          =       sha256_ni_final,
        .finup          =       sha256_ni_finup,
+       .digest         =       sha256_ni_digest,
        .descsize       =       sizeof(struct sha256_state),
        .base           =       {
                .cra_name       =       "sha256",
@@ -366,6 +407,9 @@ static inline void unregister_sha256_ni(void) { }
 
 static int __init sha256_ssse3_mod_init(void)
 {
+       if (!x86_match_cpu(module_cpu_ids))
+               return -ENODEV;
+
        if (register_sha256_ssse3())
                goto fail;
 
index 62036974367c49b5f2da9967e39b1b13841ac22d..78307dc25559148a08985a868f296621b3922768 100644 (file)
@@ -30,9 +30,11 @@ config MODULE_SIG_KEY_TYPE_RSA
 config MODULE_SIG_KEY_TYPE_ECDSA
        bool "ECDSA"
        select CRYPTO_ECDSA
+       depends on !(MODULE_SIG_SHA256 || MODULE_SIG_SHA3_256)
        help
-        Use an elliptic curve key (NIST P384) for module signing. Consider
-        using a strong hash like sha256 or sha384 for hashing modules.
+        Use an elliptic curve key (NIST P384) for module signing. Use
+        a strong hash of the same or higher bit length, i.e. sha384 or
+        sha512, for hashing modules.
 
         Note: Remove all ECDSA signing keys, e.g. certs/signing_key.pem,
         when falling back to building Linux 5.14 and older kernels.
index 650b1b3620d8183ca5b0f5937af39b154078bd19..bbf51d55724e3fdb1a1dc339860396e5f767e8fc 100644 (file)
@@ -85,6 +85,7 @@ config CRYPTO_SKCIPHER
        tristate
        select CRYPTO_SKCIPHER2
        select CRYPTO_ALGAPI
+       select CRYPTO_ECB
 
 config CRYPTO_SKCIPHER2
        tristate
@@ -689,7 +690,7 @@ config CRYPTO_CTS
 
 config CRYPTO_ECB
        tristate "ECB (Electronic Codebook)"
-       select CRYPTO_SKCIPHER
+       select CRYPTO_SKCIPHER2
        select CRYPTO_MANAGER
        help
          ECB (Electronic Codebook) mode (NIST SP800-38A)
@@ -1296,6 +1297,66 @@ config CRYPTO_JITTERENTROPY
 
          See https://www.chronox.de/jent.html
 
+choice
+       prompt "CPU Jitter RNG Memory Size"
+       default CRYPTO_JITTERENTROPY_MEMSIZE_2
+       depends on CRYPTO_JITTERENTROPY
+       help
+         The Jitter RNG measures the execution time of memory accesses.
+         Multiple consecutive memory accesses are performed. If the memory
+         size fits into a cache (e.g. L1), only the memory access timing
+         to that cache is measured. The closer the cache is to the CPU,
+         the fewer timing variations are measured and thus the less
+         entropy is obtained. Hence, if the memory size fits into the L1
+         cache, the obtained entropy is less than if it fits within
+         L1 + L2, which in turn is less than if it fits within
+         L1 + L2 + L3. By selecting a different memory size, the entropy
+         rate produced by the Jitter RNG can therefore be tuned.
+
+       config CRYPTO_JITTERENTROPY_MEMSIZE_2
+               bool "2048 Bytes (default)"
+
+       config CRYPTO_JITTERENTROPY_MEMSIZE_128
+               bool "128 kBytes"
+
+       config CRYPTO_JITTERENTROPY_MEMSIZE_1024
+               bool "1024 kBytes"
+
+       config CRYPTO_JITTERENTROPY_MEMSIZE_8192
+               bool "8192 kBytes"
+endchoice
+
+config CRYPTO_JITTERENTROPY_MEMORY_BLOCKS
+       int
+       default 64 if CRYPTO_JITTERENTROPY_MEMSIZE_2
+       default 512 if CRYPTO_JITTERENTROPY_MEMSIZE_128
+       default 1024 if CRYPTO_JITTERENTROPY_MEMSIZE_1024
+       default 4096 if CRYPTO_JITTERENTROPY_MEMSIZE_8192
+
+config CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE
+       int
+       default 32 if CRYPTO_JITTERENTROPY_MEMSIZE_2
+       default 256 if CRYPTO_JITTERENTROPY_MEMSIZE_128
+       default 1024 if CRYPTO_JITTERENTROPY_MEMSIZE_1024
+       default 2048 if CRYPTO_JITTERENTROPY_MEMSIZE_8192
+
+config CRYPTO_JITTERENTROPY_OSR
+       int "CPU Jitter RNG Oversampling Rate"
+       range 1 15
+       default 1
+       depends on CRYPTO_JITTERENTROPY
+       help
+         The Jitter RNG allows the specification of an oversampling rate
+         (OSR). The Jitter RNG operation requires a fixed number of timing
+         measurements to produce one output block of random numbers. That
+         number of measurements is multiplied by the OSR value to generate
+         one output block; the timing measurement is thus oversampled by
+         the OSR factor. The oversampling allows the Jitter RNG to operate
+         on hardware whose timers deliver only a limited amount of entropy
+         (e.g. a coarse timer) by setting the OSR to a higher value. The
+         trade-off is that the Jitter RNG then requires more time to
+         generate random numbers.
+
 config CRYPTO_JITTERENTROPY_TESTINTERFACE
        bool "CPU Jitter RNG Test Interface"
        depends on CRYPTO_JITTERENTROPY
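
For reference, the memory the Jitter RNG walks is MEMORY_BLOCKS *
MEMORY_BLOCKSIZE, and the OSR linearly multiplies the number of timing
measurements per output block.  A quick userspace check of the table above
(plain C, not kernel code):

    #include <stdio.h>

    int main(void)
    {
            static const struct { unsigned int blocks, blocksize; } m[] = {
                    {   64,   32 },  /* MEMSIZE_2:    2048 bytes  */
                    {  512,  256 },  /* MEMSIZE_128:  128 kBytes  */
                    { 1024, 1024 },  /* MEMSIZE_1024: 1024 kBytes */
                    { 4096, 2048 },  /* MEMSIZE_8192: 8192 kBytes */
            };

            for (unsigned int i = 0; i < 4; i++)
                    printf("%u bytes\n", m[i].blocks * m[i].blocksize);
            return 0;
    }
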
index 953a7e105e58c837d927d21a0358581e94f86857..5ac6876f935a3f9e2eb6f98f2f0dbf1c2d66931b 100644 (file)
@@ -16,7 +16,11 @@ obj-$(CONFIG_CRYPTO_ALGAPI2) += crypto_algapi.o
 obj-$(CONFIG_CRYPTO_AEAD2) += aead.o
 obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
 
-obj-$(CONFIG_CRYPTO_SKCIPHER2) += skcipher.o
+crypto_skcipher-y += lskcipher.o
+crypto_skcipher-y += skcipher.o
+
+obj-$(CONFIG_CRYPTO_SKCIPHER2) += crypto_skcipher.o
+
 obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
 obj-$(CONFIG_CRYPTO_ECHAINIV) += echainiv.o
 
index c33ba22a66389cfdf06878d0bf288935ae5746c3..60f3883b736aa823e7106fcb90baa853c79d5a26 100644 (file)
@@ -245,10 +245,9 @@ static void adiantum_hash_header(struct skcipher_request *req)
 
 /* Hash the left-hand part (the "bulk") of the message using NHPoly1305 */
 static int adiantum_hash_message(struct skcipher_request *req,
-                                struct scatterlist *sgl, le128 *digest)
+                                struct scatterlist *sgl, unsigned int nents,
+                                le128 *digest)
 {
-       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-       const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
        struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
        const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
        struct shash_desc *hash_desc = &rctx->u.hash_desc;
@@ -256,14 +255,11 @@ static int adiantum_hash_message(struct skcipher_request *req,
        unsigned int i, n;
        int err;
 
-       hash_desc->tfm = tctx->hash;
-
        err = crypto_shash_init(hash_desc);
        if (err)
                return err;
 
-       sg_miter_start(&miter, sgl, sg_nents(sgl),
-                      SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+       sg_miter_start(&miter, sgl, nents, SG_MITER_FROM_SG | SG_MITER_ATOMIC);
        for (i = 0; i < bulk_len; i += n) {
                sg_miter_next(&miter);
                n = min_t(unsigned int, miter.length, bulk_len - i);
@@ -285,6 +281,8 @@ static int adiantum_finish(struct skcipher_request *req)
        const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
        struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
        const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+       struct scatterlist *dst = req->dst;
+       const unsigned int dst_nents = sg_nents(dst);
        le128 digest;
        int err;
 
@@ -298,13 +296,32 @@ static int adiantum_finish(struct skcipher_request *req)
         *      enc: C_R = C_M - H_{K_H}(T, C_L)
         *      dec: P_R = P_M - H_{K_H}(T, P_L)
         */
-       err = adiantum_hash_message(req, req->dst, &digest);
-       if (err)
-               return err;
-       le128_add(&digest, &digest, &rctx->header_hash);
-       le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
-       scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->dst,
-                                bulk_len, BLOCKCIPHER_BLOCK_SIZE, 1);
+       rctx->u.hash_desc.tfm = tctx->hash;
+       le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
+       if (dst_nents == 1 && dst->offset + req->cryptlen <= PAGE_SIZE) {
+               /* Fast path for single-page destination */
+               struct page *page = sg_page(dst);
+               void *virt = kmap_local_page(page) + dst->offset;
+
+               err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
+                                         (u8 *)&digest);
+               if (err) {
+                       kunmap_local(virt);
+                       return err;
+               }
+               le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
+               memcpy(virt + bulk_len, &rctx->rbuf.bignum, sizeof(le128));
+               flush_dcache_page(page);
+               kunmap_local(virt);
+       } else {
+               /* Slow path that works for any destination scatterlist */
+               err = adiantum_hash_message(req, dst, dst_nents, &digest);
+               if (err)
+                       return err;
+               le128_sub(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
+               scatterwalk_map_and_copy(&rctx->rbuf.bignum, dst,
+                                        bulk_len, sizeof(le128), 1);
+       }
        return 0;
 }
 
@@ -324,6 +341,8 @@ static int adiantum_crypt(struct skcipher_request *req, bool enc)
        const struct adiantum_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
        struct adiantum_request_ctx *rctx = skcipher_request_ctx(req);
        const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+       struct scatterlist *src = req->src;
+       const unsigned int src_nents = sg_nents(src);
        unsigned int stream_len;
        le128 digest;
        int err;
@@ -339,12 +358,24 @@ static int adiantum_crypt(struct skcipher_request *req, bool enc)
         *      dec: C_M = C_R + H_{K_H}(T, C_L)
         */
        adiantum_hash_header(req);
-       err = adiantum_hash_message(req, req->src, &digest);
+       rctx->u.hash_desc.tfm = tctx->hash;
+       if (src_nents == 1 && src->offset + req->cryptlen <= PAGE_SIZE) {
+               /* Fast path for single-page source */
+               void *virt = kmap_local_page(sg_page(src)) + src->offset;
+
+               err = crypto_shash_digest(&rctx->u.hash_desc, virt, bulk_len,
+                                         (u8 *)&digest);
+               memcpy(&rctx->rbuf.bignum, virt + bulk_len, sizeof(le128));
+               kunmap_local(virt);
+       } else {
+               /* Slow path that works for any source scatterlist */
+               err = adiantum_hash_message(req, src, src_nents, &digest);
+               scatterwalk_map_and_copy(&rctx->rbuf.bignum, src,
+                                        bulk_len, sizeof(le128), 0);
+       }
        if (err)
                return err;
-       le128_add(&digest, &digest, &rctx->header_hash);
-       scatterwalk_map_and_copy(&rctx->rbuf.bignum, req->src,
-                                bulk_len, BLOCKCIPHER_BLOCK_SIZE, 0);
+       le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &rctx->header_hash);
        le128_add(&rctx->rbuf.bignum, &rctx->rbuf.bignum, &digest);
 
        /* If encrypting, encrypt P_M with the block cipher to get C_M */
@@ -468,7 +499,7 @@ static void adiantum_free_instance(struct skcipher_instance *inst)
  * Check for a supported set of inner algorithms.
  * See the comment at the beginning of this file.
  */
-static bool adiantum_supported_algorithms(struct skcipher_alg *streamcipher_alg,
+static bool adiantum_supported_algorithms(struct skcipher_alg_common *streamcipher_alg,
                                          struct crypto_alg *blockcipher_alg,
                                          struct shash_alg *hash_alg)
 {
@@ -494,7 +525,7 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
        const char *nhpoly1305_name;
        struct skcipher_instance *inst;
        struct adiantum_instance_ctx *ictx;
-       struct skcipher_alg *streamcipher_alg;
+       struct skcipher_alg_common *streamcipher_alg;
        struct crypto_alg *blockcipher_alg;
        struct shash_alg *hash_alg;
        int err;
@@ -514,7 +545,7 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
                                   crypto_attr_alg_name(tb[1]), 0, mask);
        if (err)
                goto err_free_inst;
-       streamcipher_alg = crypto_spawn_skcipher_alg(&ictx->streamcipher_spawn);
+       streamcipher_alg = crypto_spawn_skcipher_alg_common(&ictx->streamcipher_spawn);
 
        /* Block cipher, e.g. "aes" */
        err = crypto_grab_cipher(&ictx->blockcipher_spawn,
@@ -561,8 +592,7 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
 
        inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
        inst->alg.base.cra_ctxsize = sizeof(struct adiantum_tfm_ctx);
-       inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask |
-                                      hash_alg->base.cra_alignmask;
+       inst->alg.base.cra_alignmask = streamcipher_alg->base.cra_alignmask;
        /*
         * The block cipher is only invoked once per message, so for long
         * messages (e.g. sectors for disk encryption) its performance doesn't
@@ -578,8 +608,8 @@ static int adiantum_create(struct crypto_template *tmpl, struct rtattr **tb)
        inst->alg.decrypt = adiantum_decrypt;
        inst->alg.init = adiantum_init_tfm;
        inst->alg.exit = adiantum_exit_tfm;
-       inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(streamcipher_alg);
-       inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(streamcipher_alg);
+       inst->alg.min_keysize = streamcipher_alg->min_keysize;
+       inst->alg.max_keysize = streamcipher_alg->max_keysize;
        inst->alg.ivsize = TWEAK_SIZE;
 
        inst->free = adiantum_free_instance;
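
The reworked adiantum_crypt()/adiantum_finish() take a fast path when the
whole message sits in a single-entry scatterlist within one page: map it
once with kmap_local_page(), hash the bulk with a one-shot
crypto_shash_digest(), and copy the final block directly, calling
flush_dcache_page() before unmapping on the write side.  The gating test,
sketched on its own:

    #include <linux/highmem.h>      /* kmap_local_page() */
    #include <linux/scatterlist.h>

    /* True when all `len` bytes live in one page of a one-entry list,
     * so they can be reached through a single kmap_local_page(). */
    static bool my_single_page(struct scatterlist *sg, unsigned int len)
    {
            return sg_nents(sg) == 1 && sg->offset + len <= PAGE_SIZE;
    }
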
index d5ba204ebdbfa6c5bf90ff87d9f3d55e637c741d..54906633566a2357789d619916ec3a099c935064 100644 (file)
@@ -269,6 +269,12 @@ struct crypto_aead *crypto_alloc_aead(const char *alg_name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_alloc_aead);
 
+int crypto_has_aead(const char *alg_name, u32 type, u32 mask)
+{
+       return crypto_type_has_alg(alg_name, &crypto_aead_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_has_aead);
+
 static int aead_prepare_alg(struct aead_alg *alg)
 {
        struct crypto_istat_aead *istat = aead_get_stat(alg);
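
crypto_has_aead() fills a gap alongside the existing crypto_has_*() probes:
callers can check whether an AEAD is instantiable before paying for a full
allocation.  A trivial sketch of the intended use:

    #include <crypto/aead.h>

    static bool my_gcm_available(void)
    {
            /* Non-zero when "gcm(aes)" can be instantiated. */
            return crypto_has_aead("gcm(aes)", 0, 0);
    }
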
index 709ef09407991374e633c3d015ffca9c3bdaa72a..deee55f939dc8c0a12efd205007eb6693da17eb6 100644 (file)
@@ -2,8 +2,12 @@
 /*
  * Asynchronous Cryptographic Hash operations.
  *
- * This is the asynchronous version of hash.c with notification of
- * completion via a callback.
+ * This is the implementation of the ahash (asynchronous hash) API.  It differs
+ * from shash (synchronous hash) in that ahash supports asynchronous operations,
+ * and it hashes data from scatterlists instead of virtually addressed buffers.
+ *
+ * The ahash API provides access to both ahash and shash algorithms.  The shash
+ * API only provides access to shash algorithms.
  *
  * Copyright (c) 2008 Loc Ho <lho@amcc.com>
  */
 
 #include "hash.h"
 
-static const struct crypto_type crypto_ahash_type;
+#define CRYPTO_ALG_TYPE_AHASH_MASK     0x0000000e
 
-struct ahash_request_priv {
-       crypto_completion_t complete;
-       void *data;
-       u8 *result;
-       u32 flags;
-       void *ubuf[] CRYPTO_MINALIGN_ATTR;
-};
+static inline struct crypto_istat_hash *ahash_get_stat(struct ahash_alg *alg)
+{
+       return hash_get_stat(&alg->halg);
+}
+
+static inline int crypto_ahash_errstat(struct ahash_alg *alg, int err)
+{
+       if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+               return err;
+
+       if (err && err != -EINPROGRESS && err != -EBUSY)
+               atomic64_inc(&ahash_get_stat(alg)->err_cnt);
+
+       return err;
+}
+
+/*
+ * For an ahash tfm that is using an shash algorithm (instead of an ahash
+ * algorithm), this returns the underlying shash tfm.
+ */
+static inline struct crypto_shash *ahash_to_shash(struct crypto_ahash *tfm)
+{
+       return *(struct crypto_shash **)crypto_ahash_ctx(tfm);
+}
+
+static inline struct shash_desc *prepare_shash_desc(struct ahash_request *req,
+                                                   struct crypto_ahash *tfm)
+{
+       struct shash_desc *desc = ahash_request_ctx(req);
+
+       desc->tfm = ahash_to_shash(tfm);
+       return desc;
+}
+
+int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
+{
+       struct crypto_hash_walk walk;
+       int nbytes;
+
+       for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
+            nbytes = crypto_hash_walk_done(&walk, nbytes))
+               nbytes = crypto_shash_update(desc, walk.data, nbytes);
+
+       return nbytes;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_update);
+
+int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
+{
+       struct crypto_hash_walk walk;
+       int nbytes;
+
+       nbytes = crypto_hash_walk_first(req, &walk);
+       if (!nbytes)
+               return crypto_shash_final(desc, req->result);
+
+       do {
+               nbytes = crypto_hash_walk_last(&walk) ?
+                        crypto_shash_finup(desc, walk.data, nbytes,
+                                           req->result) :
+                        crypto_shash_update(desc, walk.data, nbytes);
+               nbytes = crypto_hash_walk_done(&walk, nbytes);
+       } while (nbytes > 0);
+
+       return nbytes;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_finup);
+
+int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
+{
+       unsigned int nbytes = req->nbytes;
+       struct scatterlist *sg;
+       unsigned int offset;
+       int err;
+
+       if (nbytes &&
+           (sg = req->src, offset = sg->offset,
+            nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
+               void *data;
+
+               data = kmap_local_page(sg_page(sg));
+               err = crypto_shash_digest(desc, data + offset, nbytes,
+                                         req->result);
+               kunmap_local(data);
+       } else
+               err = crypto_shash_init(desc) ?:
+                     shash_ahash_finup(req, desc);
+
+       return err;
+}
+EXPORT_SYMBOL_GPL(shash_ahash_digest);
+
+static void crypto_exit_ahash_using_shash(struct crypto_tfm *tfm)
+{
+       struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
+
+       crypto_free_shash(*ctx);
+}
+
+static int crypto_init_ahash_using_shash(struct crypto_tfm *tfm)
+{
+       struct crypto_alg *calg = tfm->__crt_alg;
+       struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
+       struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
+       struct crypto_shash *shash;
+
+       if (!crypto_mod_get(calg))
+               return -EAGAIN;
+
+       shash = crypto_create_tfm(calg, &crypto_shash_type);
+       if (IS_ERR(shash)) {
+               crypto_mod_put(calg);
+               return PTR_ERR(shash);
+       }
+
+       crt->using_shash = true;
+       *ctx = shash;
+       tfm->exit = crypto_exit_ahash_using_shash;
+
+       crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
+                                   CRYPTO_TFM_NEED_KEY);
+       crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
+
+       return 0;
+}
 
 static int hash_walk_next(struct crypto_hash_walk *walk)
 {
-       unsigned int alignmask = walk->alignmask;
        unsigned int offset = walk->offset;
        unsigned int nbytes = min(walk->entrylen,
                                  ((unsigned int)(PAGE_SIZE)) - offset);
 
        walk->data = kmap_local_page(walk->pg);
        walk->data += offset;
-
-       if (offset & alignmask) {
-               unsigned int unaligned = alignmask + 1 - (offset & alignmask);
-
-               if (nbytes > unaligned)
-                       nbytes = unaligned;
-       }
-
        walk->entrylen -= nbytes;
        return nbytes;
 }
@@ -71,23 +184,8 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
 
 int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
 {
-       unsigned int alignmask = walk->alignmask;
-
        walk->data -= walk->offset;
 
-       if (walk->entrylen && (walk->offset & alignmask) && !err) {
-               unsigned int nbytes;
-
-               walk->offset = ALIGN(walk->offset, alignmask + 1);
-               nbytes = min(walk->entrylen,
-                            (unsigned int)(PAGE_SIZE - walk->offset));
-               if (nbytes) {
-                       walk->entrylen -= nbytes;
-                       walk->data += walk->offset;
-                       return nbytes;
-               }
-       }
-
        kunmap_local(walk->data);
        crypto_yield(walk->flags);
 
@@ -119,7 +217,6 @@ int crypto_hash_walk_first(struct ahash_request *req,
                return 0;
        }
 
-       walk->alignmask = crypto_ahash_alignmask(crypto_ahash_reqtfm(req));
        walk->sg = req->src;
        walk->flags = req->base.flags;
 
@@ -127,67 +224,64 @@ int crypto_hash_walk_first(struct ahash_request *req,
 }
 EXPORT_SYMBOL_GPL(crypto_hash_walk_first);
 
-static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key,
-                               unsigned int keylen)
-{
-       unsigned long alignmask = crypto_ahash_alignmask(tfm);
-       int ret;
-       u8 *buffer, *alignbuffer;
-       unsigned long absize;
-
-       absize = keylen + alignmask;
-       buffer = kmalloc(absize, GFP_KERNEL);
-       if (!buffer)
-               return -ENOMEM;
-
-       alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
-       memcpy(alignbuffer, key, keylen);
-       ret = tfm->setkey(tfm, alignbuffer, keylen);
-       kfree_sensitive(buffer);
-       return ret;
-}
-
 static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key,
                          unsigned int keylen)
 {
        return -ENOSYS;
 }
 
-static void ahash_set_needkey(struct crypto_ahash *tfm)
+static void ahash_set_needkey(struct crypto_ahash *tfm, struct ahash_alg *alg)
 {
-       const struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-
-       if (tfm->setkey != ahash_nosetkey &&
-           !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
+       if (alg->setkey != ahash_nosetkey &&
+           !(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
                crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
 }
 
 int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
                        unsigned int keylen)
 {
-       unsigned long alignmask = crypto_ahash_alignmask(tfm);
-       int err;
+       if (likely(tfm->using_shash)) {
+               struct crypto_shash *shash = ahash_to_shash(tfm);
+               int err;
 
-       if ((unsigned long)key & alignmask)
-               err = ahash_setkey_unaligned(tfm, key, keylen);
-       else
-               err = tfm->setkey(tfm, key, keylen);
-
-       if (unlikely(err)) {
-               ahash_set_needkey(tfm);
-               return err;
+               err = crypto_shash_setkey(shash, key, keylen);
+               if (unlikely(err)) {
+                       crypto_ahash_set_flags(tfm,
+                                              crypto_shash_get_flags(shash) &
+                                              CRYPTO_TFM_NEED_KEY);
+                       return err;
+               }
+       } else {
+               struct ahash_alg *alg = crypto_ahash_alg(tfm);
+               int err;
+
+               err = alg->setkey(tfm, key, keylen);
+               if (unlikely(err)) {
+                       ahash_set_needkey(tfm, alg);
+                       return err;
+               }
        }
-
        crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
        return 0;
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
 
+int crypto_ahash_init(struct ahash_request *req)
+{
+       struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+       if (likely(tfm->using_shash))
+               return crypto_shash_init(prepare_shash_desc(req, tfm));
+       if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+               return -ENOKEY;
+       return crypto_ahash_alg(tfm)->init(req);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_init);
+
 static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
                          bool has_state)
 {
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-       unsigned long alignmask = crypto_ahash_alignmask(tfm);
        unsigned int ds = crypto_ahash_digestsize(tfm);
        struct ahash_request *subreq;
        unsigned int subreq_size;
@@ -201,7 +295,6 @@ static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
        reqsize = ALIGN(reqsize, crypto_tfm_ctx_alignment());
        subreq_size += reqsize;
        subreq_size += ds;
-       subreq_size += alignmask & ~(crypto_tfm_ctx_alignment() - 1);
 
        flags = ahash_request_flags(req);
        gfp = (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?  GFP_KERNEL : GFP_ATOMIC;
@@ -213,7 +306,6 @@ static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt,
        ahash_request_set_callback(subreq, flags, cplt, req);
 
        result = (u8 *)(subreq + 1) + reqsize;
-       result = PTR_ALIGN(result, alignmask + 1);
 
        ahash_request_set_crypt(subreq, req->src, result, req->nbytes);
 
@@ -249,100 +341,78 @@ static void ahash_restore_req(struct ahash_request *req, int err)
        kfree_sensitive(subreq);
 }
 
-static void ahash_op_unaligned_done(void *data, int err)
-{
-       struct ahash_request *areq = data;
-
-       if (err == -EINPROGRESS)
-               goto out;
-
-       /* First copy req->result into req->priv.result */
-       ahash_restore_req(areq, err);
-
-out:
-       /* Complete the ORIGINAL request. */
-       ahash_request_complete(areq, err);
-}
-
-static int ahash_op_unaligned(struct ahash_request *req,
-                             int (*op)(struct ahash_request *),
-                             bool has_state)
-{
-       int err;
-
-       err = ahash_save_req(req, ahash_op_unaligned_done, has_state);
-       if (err)
-               return err;
-
-       err = op(req->priv);
-       if (err == -EINPROGRESS || err == -EBUSY)
-               return err;
-
-       ahash_restore_req(req, err);
-
-       return err;
-}
-
-static int crypto_ahash_op(struct ahash_request *req,
-                          int (*op)(struct ahash_request *),
-                          bool has_state)
+int crypto_ahash_update(struct ahash_request *req)
 {
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-       unsigned long alignmask = crypto_ahash_alignmask(tfm);
-       int err;
+       struct ahash_alg *alg;
 
-       if ((unsigned long)req->result & alignmask)
-               err = ahash_op_unaligned(req, op, has_state);
-       else
-               err = op(req);
+       if (likely(tfm->using_shash))
+               return shash_ahash_update(req, ahash_request_ctx(req));
 
-       return crypto_hash_errstat(crypto_hash_alg_common(tfm), err);
+       alg = crypto_ahash_alg(tfm);
+       if (IS_ENABLED(CONFIG_CRYPTO_STATS))
+               atomic64_add(req->nbytes, &ahash_get_stat(alg)->hash_tlen);
+       return crypto_ahash_errstat(alg, alg->update(req));
 }
+EXPORT_SYMBOL_GPL(crypto_ahash_update);
 
 int crypto_ahash_final(struct ahash_request *req)
 {
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-       struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+       struct ahash_alg *alg;
 
-       if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-               atomic64_inc(&hash_get_stat(alg)->hash_cnt);
+       if (likely(tfm->using_shash))
+               return crypto_shash_final(ahash_request_ctx(req), req->result);
 
-       return crypto_ahash_op(req, tfm->final, true);
+       alg = crypto_ahash_alg(tfm);
+       if (IS_ENABLED(CONFIG_CRYPTO_STATS))
+               atomic64_inc(&ahash_get_stat(alg)->hash_cnt);
+       return crypto_ahash_errstat(alg, alg->final(req));
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_final);
 
 int crypto_ahash_finup(struct ahash_request *req)
 {
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-       struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+       struct ahash_alg *alg;
+
+       if (likely(tfm->using_shash))
+               return shash_ahash_finup(req, ahash_request_ctx(req));
 
+       alg = crypto_ahash_alg(tfm);
        if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-               struct crypto_istat_hash *istat = hash_get_stat(alg);
+               struct crypto_istat_hash *istat = ahash_get_stat(alg);
 
                atomic64_inc(&istat->hash_cnt);
                atomic64_add(req->nbytes, &istat->hash_tlen);
        }
-
-       return crypto_ahash_op(req, tfm->finup, true);
+       return crypto_ahash_errstat(alg, alg->finup(req));
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_finup);
 
 int crypto_ahash_digest(struct ahash_request *req)
 {
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-       struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+       struct ahash_alg *alg;
+       int err;
+
+       if (likely(tfm->using_shash))
+               return shash_ahash_digest(req, prepare_shash_desc(req, tfm));
 
+       alg = crypto_ahash_alg(tfm);
        if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
-               struct crypto_istat_hash *istat = hash_get_stat(alg);
+               struct crypto_istat_hash *istat = ahash_get_stat(alg);
 
                atomic64_inc(&istat->hash_cnt);
                atomic64_add(req->nbytes, &istat->hash_tlen);
        }
 
        if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-               return crypto_hash_errstat(alg, -ENOKEY);
+               err = -ENOKEY;
+       else
+               err = alg->digest(req);
 
-       return crypto_ahash_op(req, tfm->digest, false);
+       return crypto_ahash_errstat(alg, err);
 }
 EXPORT_SYMBOL_GPL(crypto_ahash_digest);
 
@@ -367,7 +437,7 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
 
        subreq->base.complete = ahash_def_finup_done2;
 
-       err = crypto_ahash_reqtfm(req)->final(subreq);
+       err = crypto_ahash_alg(crypto_ahash_reqtfm(req))->final(subreq);
        if (err == -EINPROGRESS || err == -EBUSY)
                return err;
 
@@ -404,13 +474,35 @@ static int ahash_def_finup(struct ahash_request *req)
        if (err)
                return err;
 
-       err = tfm->update(req->priv);
+       err = crypto_ahash_alg(tfm)->update(req->priv);
        if (err == -EINPROGRESS || err == -EBUSY)
                return err;
 
        return ahash_def_finup_finish1(req, err);
 }
 
+int crypto_ahash_export(struct ahash_request *req, void *out)
+{
+       struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+       if (likely(tfm->using_shash))
+               return crypto_shash_export(ahash_request_ctx(req), out);
+       return crypto_ahash_alg(tfm)->export(req, out);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_export);
+
+int crypto_ahash_import(struct ahash_request *req, const void *in)
+{
+       struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+
+       if (likely(tfm->using_shash))
+               return crypto_shash_import(prepare_shash_desc(req, tfm), in);
+       if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+               return -ENOKEY;
+       return crypto_ahash_alg(tfm)->import(req, in);
+}
+EXPORT_SYMBOL_GPL(crypto_ahash_import);
+
 static void crypto_ahash_exit_tfm(struct crypto_tfm *tfm)
 {
        struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
@@ -424,25 +516,12 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
        struct crypto_ahash *hash = __crypto_ahash_cast(tfm);
        struct ahash_alg *alg = crypto_ahash_alg(hash);
 
-       hash->setkey = ahash_nosetkey;
-
        crypto_ahash_set_statesize(hash, alg->halg.statesize);
 
-       if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
-               return crypto_init_shash_ops_async(tfm);
+       if (tfm->__crt_alg->cra_type == &crypto_shash_type)
+               return crypto_init_ahash_using_shash(tfm);
 
-       hash->init = alg->init;
-       hash->update = alg->update;
-       hash->final = alg->final;
-       hash->finup = alg->finup ?: ahash_def_finup;
-       hash->digest = alg->digest;
-       hash->export = alg->export;
-       hash->import = alg->import;
-
-       if (alg->setkey) {
-               hash->setkey = alg->setkey;
-               ahash_set_needkey(hash);
-       }
+       ahash_set_needkey(hash, alg);
 
        if (alg->exit_tfm)
                tfm->exit = crypto_ahash_exit_tfm;
@@ -452,7 +531,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
 
 static unsigned int crypto_ahash_extsize(struct crypto_alg *alg)
 {
-       if (alg->cra_type != &crypto_ahash_type)
+       if (alg->cra_type == &crypto_shash_type)
                return sizeof(struct crypto_shash *);
 
        return crypto_alg_extsize(alg);
@@ -560,19 +639,21 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
        if (IS_ERR(nhash))
                return nhash;
 
-       nhash->init = hash->init;
-       nhash->update = hash->update;
-       nhash->final = hash->final;
-       nhash->finup = hash->finup;
-       nhash->digest = hash->digest;
-       nhash->export = hash->export;
-       nhash->import = hash->import;
-       nhash->setkey = hash->setkey;
        nhash->reqsize = hash->reqsize;
        nhash->statesize = hash->statesize;
 
-       if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
-               return crypto_clone_shash_ops_async(nhash, hash);
+       if (likely(hash->using_shash)) {
+               struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
+               struct crypto_shash *shash;
+
+               shash = crypto_clone_shash(ahash_to_shash(hash));
+               if (IS_ERR(shash)) {
+                       err = PTR_ERR(shash);
+                       goto out_free_nhash;
+               }
+               *nctx = shash;
+               return nhash;
+       }
 
        err = -ENOSYS;
        alg = crypto_ahash_alg(hash);
@@ -606,6 +687,11 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
        base->cra_type = &crypto_ahash_type;
        base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
 
+       if (!alg->finup)
+               alg->finup = ahash_def_finup;
+       if (!alg->setkey)
+               alg->setkey = ahash_nosetkey;
+
        return 0;
 }
 
@@ -677,10 +763,10 @@ bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
 {
        struct crypto_alg *alg = &halg->base;
 
-       if (alg->cra_type != &crypto_ahash_type)
+       if (alg->cra_type == &crypto_shash_type)
                return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));
 
-       return __crypto_ahash_alg(alg)->setkey != NULL;
+       return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
 }
 EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);
 
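
After this rework an ahash tfm either dispatches straight to an ahash_alg
or, when the resolved algorithm is an shash, wraps it via the new
using_shash flag and the shash_ahash_*() helpers above; the per-tfm
function pointers are gone.  Callers are unaffected.  A hedged sketch of a
synchronous one-shot digest through the ahash API (error paths trimmed; the
buffer should not live on the stack, since some implementations DMA from
it):

    #include <crypto/hash.h>
    #include <linux/scatterlist.h>

    static int my_sha256_digest(const void *buf, unsigned int len, u8 *out)
    {
            struct crypto_ahash *tfm = crypto_alloc_ahash("sha256", 0, 0);
            struct ahash_request *req;
            struct scatterlist sg;
            DECLARE_CRYPTO_WAIT(wait);
            int err;

            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);
            req = ahash_request_alloc(tfm, GFP_KERNEL);
            if (!req) {
                    crypto_free_ahash(tfm);
                    return -ENOMEM;
            }
            sg_init_one(&sg, buf, len);
            ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                       crypto_req_done, &wait);
            ahash_request_set_crypt(req, &sg, out, len);
            err = crypto_wait_req(crypto_ahash_digest(req), &wait);
            ahash_request_free(req);
            crypto_free_ahash(tfm);
            return err;
    }
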
index b9cc0c906efe0706fadb8ba9629ac09e652dd99c..7f402107f0cc88863cd9d65904f83a590889a2a1 100644 (file)
@@ -389,7 +389,7 @@ EXPORT_SYMBOL_GPL(crypto_shoot_alg);
 struct crypto_tfm *__crypto_alloc_tfmgfp(struct crypto_alg *alg, u32 type,
                                         u32 mask, gfp_t gfp)
 {
-       struct crypto_tfm *tfm = NULL;
+       struct crypto_tfm *tfm;
        unsigned int tfm_size;
        int err = -ENOMEM;
 
index 3254dcc3436889d0b34427add7558bbd90491dbe..eb3590dc92826c8dc20591c55c30648977f68f44 100644 (file)
@@ -7,7 +7,6 @@
  * Jon Oberheide <jon@oberheide.org>
  */
 
-#include <crypto/algapi.h>
 #include <crypto/arc4.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/sched.h>
 
-static int crypto_arc4_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
+static int crypto_arc4_setkey(struct crypto_lskcipher *tfm, const u8 *in_key,
                              unsigned int key_len)
 {
-       struct arc4_ctx *ctx = crypto_skcipher_ctx(tfm);
+       struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
        return arc4_setkey(ctx, in_key, key_len);
 }
 
-static int crypto_arc4_crypt(struct skcipher_request *req)
+static int crypto_arc4_crypt(struct crypto_lskcipher *tfm, const u8 *src,
+                            u8 *dst, unsigned nbytes, u8 *iv, bool final)
 {
-       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-       struct arc4_ctx *ctx = crypto_skcipher_ctx(tfm);
-       struct skcipher_walk walk;
-       int err;
+       struct arc4_ctx *ctx = crypto_lskcipher_ctx(tfm);
 
-       err = skcipher_walk_virt(&walk, req, false);
-
-       while (walk.nbytes > 0) {
-               arc4_crypt(ctx, walk.dst.virt.addr, walk.src.virt.addr,
-                          walk.nbytes);
-               err = skcipher_walk_done(&walk, 0);
-       }
-
-       return err;
+       arc4_crypt(ctx, dst, src, nbytes);
+       return 0;
 }
 
-static int crypto_arc4_init(struct crypto_skcipher *tfm)
+static int crypto_arc4_init(struct crypto_lskcipher *tfm)
 {
        pr_warn_ratelimited("\"%s\" (%ld) uses obsolete ecb(arc4) skcipher\n",
                            current->comm, (unsigned long)current->pid);
@@ -49,33 +39,29 @@ static int crypto_arc4_init(struct crypto_skcipher *tfm)
        return 0;
 }
 
-static struct skcipher_alg arc4_alg = {
-       /*
-        * For legacy reasons, this is named "ecb(arc4)", not "arc4".
-        * Nevertheless it's actually a stream cipher, not a block cipher.
-        */
-       .base.cra_name          =       "ecb(arc4)",
-       .base.cra_driver_name   =       "ecb(arc4)-generic",
-       .base.cra_priority      =       100,
-       .base.cra_blocksize     =       ARC4_BLOCK_SIZE,
-       .base.cra_ctxsize       =       sizeof(struct arc4_ctx),
-       .base.cra_module        =       THIS_MODULE,
-       .min_keysize            =       ARC4_MIN_KEY_SIZE,
-       .max_keysize            =       ARC4_MAX_KEY_SIZE,
-       .setkey                 =       crypto_arc4_setkey,
-       .encrypt                =       crypto_arc4_crypt,
-       .decrypt                =       crypto_arc4_crypt,
-       .init                   =       crypto_arc4_init,
+static struct lskcipher_alg arc4_alg = {
+       .co.base.cra_name               =       "arc4",
+       .co.base.cra_driver_name        =       "arc4-generic",
+       .co.base.cra_priority           =       100,
+       .co.base.cra_blocksize          =       ARC4_BLOCK_SIZE,
+       .co.base.cra_ctxsize            =       sizeof(struct arc4_ctx),
+       .co.base.cra_module             =       THIS_MODULE,
+       .co.min_keysize                 =       ARC4_MIN_KEY_SIZE,
+       .co.max_keysize                 =       ARC4_MAX_KEY_SIZE,
+       .setkey                         =       crypto_arc4_setkey,
+       .encrypt                        =       crypto_arc4_crypt,
+       .decrypt                        =       crypto_arc4_crypt,
+       .init                           =       crypto_arc4_init,
 };
 
 static int __init arc4_init(void)
 {
-       return crypto_register_skcipher(&arc4_alg);
+       return crypto_register_lskcipher(&arc4_alg);
 }
 
 static void __exit arc4_exit(void)
 {
-       crypto_unregister_skcipher(&arc4_alg);
+       crypto_unregister_lskcipher(&arc4_alg);
 }
 
 subsys_initcall(arc4_init);
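The hunk above doubles as a template for the new virtual-address based lskcipher interface: the setkey/encrypt/decrypt hooks receive plain pointers and a length, so a pure software cipher no longer needs an skcipher_walk over scatterlists. Below is a minimal, hedged registration sketch modeled on the converted arc4 code; the "toy" XOR cipher and every name in it are hypothetical, for illustration only, and not a real or secure algorithm.

#include <crypto/internal/skcipher.h>
#include <linux/module.h>
#include <linux/string.h>

struct toy_ctx {
	u8 key[16];
};

static int toy_setkey(struct crypto_lskcipher *tfm, const u8 *key,
		      unsigned int keylen)
{
	struct toy_ctx *ctx = crypto_lskcipher_ctx(tfm);

	if (keylen != sizeof(ctx->key))
		return -EINVAL;
	memcpy(ctx->key, key, keylen);
	return 0;
}

/* Stream-cipher style: works on virtual addresses, no walk required. */
static int toy_crypt(struct crypto_lskcipher *tfm, const u8 *src, u8 *dst,
		     unsigned nbytes, u8 *iv, bool final)
{
	struct toy_ctx *ctx = crypto_lskcipher_ctx(tfm);
	unsigned int i;

	for (i = 0; i < nbytes; i++)	/* NOT cryptographically secure */
		dst[i] = src[i] ^ ctx->key[i % sizeof(ctx->key)];
	return 0;
}

static struct lskcipher_alg toy_alg = {
	.co.base.cra_name		= "toy",
	.co.base.cra_driver_name	= "toy-generic",
	.co.base.cra_priority		= 100,
	.co.base.cra_blocksize		= 1,
	.co.base.cra_ctxsize		= sizeof(struct toy_ctx),
	.co.base.cra_module		= THIS_MODULE,
	.co.min_keysize			= 16,
	.co.max_keysize			= 16,
	.setkey				= toy_setkey,
	.encrypt			= toy_crypt,
	.decrypt			= toy_crypt,
};

static int __init toy_mod_init(void)
{
	return crypto_register_lskcipher(&toy_alg);
}

static void __exit toy_mod_exit(void)
{
	crypto_unregister_lskcipher(&toy_alg);
}

subsys_initcall(toy_mod_init);
module_exit(toy_mod_exit);
MODULE_LICENSE("GPL");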
index 1ef3b46d6f6e5ca2b2da611bf35dd9bb29b2c1cf..59ec726b7c770e1064b5e5cd4e58b79880d21cfe 100644
@@ -76,7 +76,7 @@ config SIGNED_PE_FILE_VERIFICATION
          signed PE binary.
 
 config FIPS_SIGNATURE_SELFTEST
-       bool "Run FIPS selftests on the X.509+PKCS7 signature verification"
+       tristate "Run FIPS selftests on the X.509+PKCS7 signature verification"
        help
          This option causes some selftests to be run on the signature
          verification code, using some built in data.  This is required
@@ -84,5 +84,6 @@ config FIPS_SIGNATURE_SELFTEST
        depends on KEYS
        depends on ASYMMETRIC_KEY_TYPE
        depends on PKCS7_MESSAGE_PARSER=X509_CERTIFICATE_PARSER
+       depends on X509_CERTIFICATE_PARSER
 
 endif # ASYMMETRIC_KEY_TYPE
index 0d1fa1b692c6b23ae7508802b68fcf52d6dd9cd7..1a273d6df3ebf4e41da89b619d9fd870bddda342 100644
@@ -22,7 +22,8 @@ x509_key_parser-y := \
        x509_cert_parser.o \
        x509_loader.o \
        x509_public_key.o
-x509_key_parser-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += selftest.o
+obj-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += x509_selftest.o
+x509_selftest-y += selftest.o
 
 $(obj)/x509_cert_parser.o: \
        $(obj)/x509.asn1.h \
index 839591ad21ac04992d23b657664a0f7e9560c323..05402ef8964ed41332f121919660376dc7e43e6a 100644
@@ -75,15 +75,6 @@ int mscode_note_digest_algo(void *context, size_t hdrlen,
 
        oid = look_up_OID(value, vlen);
        switch (oid) {
-       case OID_md4:
-               ctx->digest_algo = "md4";
-               break;
-       case OID_md5:
-               ctx->digest_algo = "md5";
-               break;
-       case OID_sha1:
-               ctx->digest_algo = "sha1";
-               break;
        case OID_sha256:
                ctx->digest_algo = "sha256";
                break;
@@ -93,8 +84,14 @@ int mscode_note_digest_algo(void *context, size_t hdrlen,
        case OID_sha512:
                ctx->digest_algo = "sha512";
                break;
-       case OID_sha224:
-               ctx->digest_algo = "sha224";
+       case OID_sha3_256:
+               ctx->digest_algo = "sha3-256";
+               break;
+       case OID_sha3_384:
+               ctx->digest_algo = "sha3-384";
+               break;
+       case OID_sha3_512:
+               ctx->digest_algo = "sha3-512";
                break;
 
        case OID__NR:
index 1eca740b816ace4680870df9e3a7885b8ba47538..28e1f4a41c14426666277b97c8014455c451928b 100644
@@ -1,3 +1,10 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2009 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc5652#section-3
+
 PKCS7ContentInfo ::= SEQUENCE {
        contentType     ContentType ({ pkcs7_check_content_type }),
        content         [0] EXPLICIT SignedData OPTIONAL
index 277482bb17777148935430aac9391406934b9a24..5b08c50722d0f512f9605b5d3375b90cff119778 100644
@@ -227,15 +227,6 @@ int pkcs7_sig_note_digest_algo(void *context, size_t hdrlen,
        struct pkcs7_parse_context *ctx = context;
 
        switch (ctx->last_oid) {
-       case OID_md4:
-               ctx->sinfo->sig->hash_algo = "md4";
-               break;
-       case OID_md5:
-               ctx->sinfo->sig->hash_algo = "md5";
-               break;
-       case OID_sha1:
-               ctx->sinfo->sig->hash_algo = "sha1";
-               break;
        case OID_sha256:
                ctx->sinfo->sig->hash_algo = "sha256";
                break;
@@ -257,6 +248,15 @@ int pkcs7_sig_note_digest_algo(void *context, size_t hdrlen,
        case OID_gost2012Digest512:
                ctx->sinfo->sig->hash_algo = "streebog512";
                break;
+       case OID_sha3_256:
+               ctx->sinfo->sig->hash_algo = "sha3-256";
+               break;
+       case OID_sha3_384:
+               ctx->sinfo->sig->hash_algo = "sha3-384";
+               break;
+       case OID_sha3_512:
+               ctx->sinfo->sig->hash_algo = "sha3-512";
+               break;
        default:
                printk("Unsupported digest algo: %u\n", ctx->last_oid);
                return -ENOPKG;
@@ -278,11 +278,13 @@ int pkcs7_sig_note_pkey_algo(void *context, size_t hdrlen,
                ctx->sinfo->sig->pkey_algo = "rsa";
                ctx->sinfo->sig->encoding = "pkcs1";
                break;
-       case OID_id_ecdsa_with_sha1:
        case OID_id_ecdsa_with_sha224:
        case OID_id_ecdsa_with_sha256:
        case OID_id_ecdsa_with_sha384:
        case OID_id_ecdsa_with_sha512:
+       case OID_id_ecdsa_with_sha3_256:
+       case OID_id_ecdsa_with_sha3_384:
+       case OID_id_ecdsa_with_sha3_512:
                ctx->sinfo->sig->pkey_algo = "ecdsa";
                ctx->sinfo->sig->encoding = "x962";
                break;
index 702c41a3c7137a0fa69640cf743633b4d950767f..a2a8af2633d80ebbfc7e553e7a1ab2dcd0a2ac84 100644
@@ -1,3 +1,9 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2010 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc5958#section-2
 --
 -- This is the unencrypted variant
 --
index 1dcab27986a6a1b54b3957e08047f5c0ba05dfa6..e5f22691febd599d9a1eeb3310c19c762e715cd8 100644
@@ -115,11 +115,13 @@ software_key_determine_akcipher(const struct public_key *pkey,
                 */
                if (!hash_algo)
                        return -EINVAL;
-               if (strcmp(hash_algo, "sha1") != 0 &&
-                   strcmp(hash_algo, "sha224") != 0 &&
+               if (strcmp(hash_algo, "sha224") != 0 &&
                    strcmp(hash_algo, "sha256") != 0 &&
                    strcmp(hash_algo, "sha384") != 0 &&
-                   strcmp(hash_algo, "sha512") != 0)
+                   strcmp(hash_algo, "sha512") != 0 &&
+                   strcmp(hash_algo, "sha3-256") != 0 &&
+                   strcmp(hash_algo, "sha3-384") != 0 &&
+                   strcmp(hash_algo, "sha3-512") != 0)
                        return -EINVAL;
        } else if (strcmp(pkey->pkey_algo, "sm2") == 0) {
                if (strcmp(encoding, "raw") != 0)
index fa0bf7f2428495c14146bddafd838d1144aa1999..c50da7ef90ae999e12e4ff38c1dddcbc9d2482e4 100644
@@ -4,10 +4,11 @@
  * Written by David Howells (dhowells@redhat.com)
  */
 
-#include <linux/kernel.h>
+#include <crypto/pkcs7.h>
 #include <linux/cred.h>
+#include <linux/kernel.h>
 #include <linux/key.h>
-#include <crypto/pkcs7.h>
+#include <linux/module.h>
 #include "x509_parser.h"
 
 struct certs_test {
@@ -175,7 +176,7 @@ static const struct certs_test certs_tests[] __initconst = {
        TEST(certs_selftest_1_data, certs_selftest_1_pkcs7),
 };
 
-int __init fips_signature_selftest(void)
+static int __init fips_signature_selftest(void)
 {
        struct key *keyring;
        int ret, i;
@@ -222,3 +223,9 @@ int __init fips_signature_selftest(void)
        key_put(keyring);
        return 0;
 }
+
+late_initcall(fips_signature_selftest);
+
+MODULE_DESCRIPTION("X.509 self tests");
+MODULE_AUTHOR("Red Hat, Inc.");
+MODULE_LICENSE("GPL");
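Taken together, the Kconfig (bool to tristate), Makefile and selftest.c hunks above follow the usual recipe for letting a built-in-only self test also build as a loadable module (x509_selftest.ko): the test object gets its own obj-$(CONFIG_...) entry, its entry point becomes static and runs via late_initcall(), and the MODULE_* metadata is added, so nothing has to be called from x509_public_key.c any more. A condensed, hypothetical sketch of that pattern:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

/* Self-registering test: works built-in or as a module. */
static int __init example_selftest(void)
{
	pr_info("example selftest passed\n");
	return 0;
}
late_initcall(example_selftest);

MODULE_DESCRIPTION("Example modular self test");
MODULE_LICENSE("GPL");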
index 2deff81f8af50bfed8159b72d119e95d35dbe510..398983be77e8bc4ee844f63b457188be9f71a1b5 100644
@@ -115,7 +115,7 @@ EXPORT_SYMBOL_GPL(decrypt_blob);
  * Sign the specified data blob using the private key specified by params->key.
  * The signature is wrapped in an encoding if params->encoding is specified
  * (eg. "pkcs1").  If the encoding needs to know the digest type, this can be
- * passed through params->hash_algo (eg. "sha1").
+ * passed through params->hash_algo (eg. "sha512").
  *
  * Returns the length of the data placed in the signature buffer or an error.
  */
index 92d59c32f96a8e6ae132212f51ad0453fefdd59c..feb9573cacce07e2a0595296e2e1c47d21605b38 100644
@@ -1,3 +1,10 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2008 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc5280#section-4
+
 Certificate ::= SEQUENCE {
        tbsCertificate          TBSCertificate ({ x509_note_tbs_certificate }),
        signatureAlgorithm      AlgorithmIdentifier,
index 1a33231a75a89d9148aeb489cda686eb46abe1c5..0f8355cf1907800aa9d09dca66f33a469e28c644 100644
@@ -1,3 +1,8 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2008 IETF Trust and the persons identified as authors
+-- of the code
+--
 -- X.509 AuthorityKeyIdentifier
 -- rfc5280 section 4.2.1.1
 
@@ -14,15 +19,15 @@ CertificateSerialNumber ::= INTEGER ({ x509_akid_note_serial })
 GeneralNames ::= SEQUENCE OF GeneralName
 
 GeneralName ::= CHOICE {
-       otherName                       [0] ANY,
-       rfc822Name                      [1] IA5String,
-       dNSName                         [2] IA5String,
+       otherName                       [0] IMPLICIT OtherName,
+       rfc822Name                      [1] IMPLICIT IA5String,
+       dNSName                         [2] IMPLICIT IA5String,
        x400Address                     [3] ANY,
        directoryName                   [4] Name ({ x509_akid_note_name }),
-       ediPartyName                    [5] ANY,
-       uniformResourceIdentifier       [6] IA5String,
-       iPAddress                       [7] OCTET STRING,
-       registeredID                    [8] OBJECT IDENTIFIER
+       ediPartyName                    [5] IMPLICIT EDIPartyName,
+       uniformResourceIdentifier       [6] IMPLICIT IA5String,
+       iPAddress                       [7] IMPLICIT OCTET STRING,
+       registeredID                    [8] IMPLICIT OBJECT IDENTIFIER
        }
 
 Name ::= SEQUENCE OF RelativeDistinguishedName
@@ -33,3 +38,13 @@ AttributeValueAssertion ::= SEQUENCE {
        attributeType           OBJECT IDENTIFIER ({ x509_note_OID }),
        attributeValue          ANY ({ x509_extract_name_segment })
        }
+
+OtherName ::= SEQUENCE {
+       type-id                 OBJECT IDENTIFIER,
+       value                   [0] ANY
+       }
+
+EDIPartyName ::= SEQUENCE {
+       nameAssigner            [0] ANY OPTIONAL,
+       partyName               [1] ANY
+       }
index 0a7049b470c1812a710b9815a052e02b253e8f44..487204d394266e74be91e1b47beb25eecfdc8f54 100644
@@ -195,19 +195,9 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
        pr_debug("PubKey Algo: %u\n", ctx->last_oid);
 
        switch (ctx->last_oid) {
-       case OID_md2WithRSAEncryption:
-       case OID_md3WithRSAEncryption:
        default:
                return -ENOPKG; /* Unsupported combination */
 
-       case OID_md4WithRSAEncryption:
-               ctx->cert->sig->hash_algo = "md4";
-               goto rsa_pkcs1;
-
-       case OID_sha1WithRSAEncryption:
-               ctx->cert->sig->hash_algo = "sha1";
-               goto rsa_pkcs1;
-
        case OID_sha256WithRSAEncryption:
                ctx->cert->sig->hash_algo = "sha256";
                goto rsa_pkcs1;
@@ -224,9 +214,17 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
                ctx->cert->sig->hash_algo = "sha224";
                goto rsa_pkcs1;
 
-       case OID_id_ecdsa_with_sha1:
-               ctx->cert->sig->hash_algo = "sha1";
-               goto ecdsa;
+       case OID_id_rsassa_pkcs1_v1_5_with_sha3_256:
+               ctx->cert->sig->hash_algo = "sha3-256";
+               goto rsa_pkcs1;
+
+       case OID_id_rsassa_pkcs1_v1_5_with_sha3_384:
+               ctx->cert->sig->hash_algo = "sha3-384";
+               goto rsa_pkcs1;
+
+       case OID_id_rsassa_pkcs1_v1_5_with_sha3_512:
+               ctx->cert->sig->hash_algo = "sha3-512";
+               goto rsa_pkcs1;
 
        case OID_id_ecdsa_with_sha224:
                ctx->cert->sig->hash_algo = "sha224";
@@ -244,6 +242,18 @@ int x509_note_sig_algo(void *context, size_t hdrlen, unsigned char tag,
                ctx->cert->sig->hash_algo = "sha512";
                goto ecdsa;
 
+       case OID_id_ecdsa_with_sha3_256:
+               ctx->cert->sig->hash_algo = "sha3-256";
+               goto ecdsa;
+
+       case OID_id_ecdsa_with_sha3_384:
+               ctx->cert->sig->hash_algo = "sha3-384";
+               goto ecdsa;
+
+       case OID_id_ecdsa_with_sha3_512:
+               ctx->cert->sig->hash_algo = "sha3-512";
+               goto ecdsa;
+
        case OID_gost2012Signature256:
                ctx->cert->sig->hash_algo = "streebog256";
                goto ecrdsa;
index a299c9c56f409ef4600f5b1c8d6270f0084b16bc..97a886cbe01c3de4271eddbe6e28cf9ff7432389 100644
@@ -40,15 +40,6 @@ struct x509_certificate {
        bool            blacklisted;
 };
 
-/*
- * selftest.c
- */
-#ifdef CONFIG_FIPS_SIGNATURE_SELFTEST
-extern int __init fips_signature_selftest(void);
-#else
-static inline int fips_signature_selftest(void) { return 0; }
-#endif
-
 /*
  * x509_cert_parser.c
  */
index 7c71db3ac23d487b36bc4b2c115d5dae4d15be6e..6a4f00be22fc10b55b500fd0d9e1b4ffae50fc48 100644
@@ -262,15 +262,9 @@ static struct asymmetric_key_parser x509_key_parser = {
 /*
  * Module stuff
  */
-extern int __init certs_selftest(void);
 static int __init x509_key_init(void)
 {
-       int ret;
-
-       ret = register_asymmetric_key_parser(&x509_key_parser);
-       if (ret < 0)
-               return ret;
-       return fips_signature_selftest();
+       return register_asymmetric_key_parser(&x509_key_parser);
 }
 
 static void __exit x509_key_exit(void)
index 3326c7343e8673b6adc6d22292c67aecfa1d55c6..3aaf3ab4e360fa6754fbecde6043e499adde98c0 100644
@@ -141,9 +141,6 @@ static int crypto_authenc_genicv(struct aead_request *req, unsigned int flags)
        u8 *hash = areq_ctx->tail;
        int err;
 
-       hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
-                          crypto_ahash_alignmask(auth) + 1);
-
        ahash_request_set_tfm(ahreq, auth);
        ahash_request_set_crypt(ahreq, req->dst, hash,
                                req->assoclen + req->cryptlen);
@@ -286,9 +283,6 @@ static int crypto_authenc_decrypt(struct aead_request *req)
        u8 *hash = areq_ctx->tail;
        int err;
 
-       hash = (u8 *)ALIGN((unsigned long)hash + crypto_ahash_alignmask(auth),
-                          crypto_ahash_alignmask(auth) + 1);
-
        ahash_request_set_tfm(ahreq, auth);
        ahash_request_set_crypt(ahreq, req->src, hash,
                                req->assoclen + req->cryptlen - authsize);
@@ -373,9 +367,9 @@ static int crypto_authenc_create(struct crypto_template *tmpl,
        u32 mask;
        struct aead_instance *inst;
        struct authenc_instance_ctx *ctx;
+       struct skcipher_alg_common *enc;
        struct hash_alg_common *auth;
        struct crypto_alg *auth_base;
-       struct skcipher_alg *enc;
        int err;
 
        err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
@@ -398,10 +392,9 @@ static int crypto_authenc_create(struct crypto_template *tmpl,
                                   crypto_attr_alg_name(tb[2]), 0, mask);
        if (err)
                goto err_free_inst;
-       enc = crypto_spawn_skcipher_alg(&ctx->enc);
+       enc = crypto_spawn_skcipher_alg_common(&ctx->enc);
 
-       ctx->reqoff = ALIGN(2 * auth->digestsize + auth_base->cra_alignmask,
-                           auth_base->cra_alignmask + 1);
+       ctx->reqoff = 2 * auth->digestsize;
 
        err = -ENAMETOOLONG;
        if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
@@ -418,12 +411,11 @@ static int crypto_authenc_create(struct crypto_template *tmpl,
        inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
                                      auth_base->cra_priority;
        inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
-       inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
-                                      enc->base.cra_alignmask;
+       inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
        inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_ctx);
 
-       inst->alg.ivsize = crypto_skcipher_alg_ivsize(enc);
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(enc);
+       inst->alg.ivsize = enc->ivsize;
+       inst->alg.chunksize = enc->chunksize;
        inst->alg.maxauthsize = auth->digestsize;
 
        inst->alg.init = crypto_authenc_init_tfm;
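Two simplifications recur through this and the following template conversions: struct skcipher_alg_common exposes ivsize, chunksize and the key sizes as plain fields, replacing the crypto_skcipher_alg_*() accessor helpers, and with the ahash alignmask removed, hash buffers such as areq_ctx->tail no longer need the ALIGN()/PTR_ALIGN() arithmetic. A condensed excerpt of the new ->create() idiom, reusing the variable names from the authenc hunk above:

	struct skcipher_alg_common *enc;

	enc = crypto_spawn_skcipher_alg_common(&ctx->enc);

	/* Properties are plain fields now - no accessor helpers. */
	inst->alg.ivsize = enc->ivsize;
	inst->alg.chunksize = enc->chunksize;
	inst->alg.base.cra_alignmask = enc->base.cra_alignmask;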
index 91424e791d5c77ccc9105bcc3fa54e8d4f92fa4c..2cc933e2f79011076be98ed53354b3363b3d0fb8 100644
@@ -87,11 +87,8 @@ static int crypto_authenc_esn_genicv_tail(struct aead_request *req,
                                          unsigned int flags)
 {
        struct crypto_aead *authenc_esn = crypto_aead_reqtfm(req);
-       struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
        struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
-       struct crypto_ahash *auth = ctx->auth;
-       u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail,
-                            crypto_ahash_alignmask(auth) + 1);
+       u8 *hash = areq_ctx->tail;
        unsigned int authsize = crypto_aead_authsize(authenc_esn);
        unsigned int assoclen = req->assoclen;
        unsigned int cryptlen = req->cryptlen;
@@ -122,8 +119,7 @@ static int crypto_authenc_esn_genicv(struct aead_request *req,
        struct authenc_esn_request_ctx *areq_ctx = aead_request_ctx(req);
        struct crypto_authenc_esn_ctx *ctx = crypto_aead_ctx(authenc_esn);
        struct crypto_ahash *auth = ctx->auth;
-       u8 *hash = PTR_ALIGN((u8 *)areq_ctx->tail,
-                            crypto_ahash_alignmask(auth) + 1);
+       u8 *hash = areq_ctx->tail;
        struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
        unsigned int authsize = crypto_aead_authsize(authenc_esn);
        unsigned int assoclen = req->assoclen;
@@ -224,8 +220,7 @@ static int crypto_authenc_esn_decrypt_tail(struct aead_request *req,
        struct skcipher_request *skreq = (void *)(areq_ctx->tail +
                                                  ctx->reqoff);
        struct crypto_ahash *auth = ctx->auth;
-       u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail,
-                             crypto_ahash_alignmask(auth) + 1);
+       u8 *ohash = areq_ctx->tail;
        unsigned int cryptlen = req->cryptlen - authsize;
        unsigned int assoclen = req->assoclen;
        struct scatterlist *dst = req->dst;
@@ -272,8 +267,7 @@ static int crypto_authenc_esn_decrypt(struct aead_request *req)
        struct ahash_request *ahreq = (void *)(areq_ctx->tail + ctx->reqoff);
        unsigned int authsize = crypto_aead_authsize(authenc_esn);
        struct crypto_ahash *auth = ctx->auth;
-       u8 *ohash = PTR_ALIGN((u8 *)areq_ctx->tail,
-                             crypto_ahash_alignmask(auth) + 1);
+       u8 *ohash = areq_ctx->tail;
        unsigned int assoclen = req->assoclen;
        unsigned int cryptlen = req->cryptlen;
        u8 *ihash = ohash + crypto_ahash_digestsize(auth);
@@ -344,8 +338,7 @@ static int crypto_authenc_esn_init_tfm(struct crypto_aead *tfm)
        ctx->enc = enc;
        ctx->null = null;
 
-       ctx->reqoff = ALIGN(2 * crypto_ahash_digestsize(auth),
-                           crypto_ahash_alignmask(auth) + 1);
+       ctx->reqoff = 2 * crypto_ahash_digestsize(auth);
 
        crypto_aead_set_reqsize(
                tfm,
@@ -390,9 +383,9 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
        u32 mask;
        struct aead_instance *inst;
        struct authenc_esn_instance_ctx *ctx;
+       struct skcipher_alg_common *enc;
        struct hash_alg_common *auth;
        struct crypto_alg *auth_base;
-       struct skcipher_alg *enc;
        int err;
 
        err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_AEAD, &mask);
@@ -415,7 +408,7 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
                                   crypto_attr_alg_name(tb[2]), 0, mask);
        if (err)
                goto err_free_inst;
-       enc = crypto_spawn_skcipher_alg(&ctx->enc);
+       enc = crypto_spawn_skcipher_alg_common(&ctx->enc);
 
        err = -ENAMETOOLONG;
        if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
@@ -431,12 +424,11 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
        inst->alg.base.cra_priority = enc->base.cra_priority * 10 +
                                      auth_base->cra_priority;
        inst->alg.base.cra_blocksize = enc->base.cra_blocksize;
-       inst->alg.base.cra_alignmask = auth_base->cra_alignmask |
-                                      enc->base.cra_alignmask;
+       inst->alg.base.cra_alignmask = enc->base.cra_alignmask;
        inst->alg.base.cra_ctxsize = sizeof(struct crypto_authenc_esn_ctx);
 
-       inst->alg.ivsize = crypto_skcipher_alg_ivsize(enc);
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(enc);
+       inst->alg.ivsize = enc->ivsize;
+       inst->alg.chunksize = enc->chunksize;
        inst->alg.maxauthsize = auth->digestsize;
 
        inst->alg.init = crypto_authenc_esn_init_tfm;
index 6c03e96b945f6672109cbdc1c59a90cacb7b4190..28345b8d921c6a81cd6e7fde0b50ba618ca08143 100644
@@ -5,8 +5,6 @@
  * Copyright (c) 2006-2016 Herbert Xu <herbert@gondor.apana.org.au>
  */
 
-#include <crypto/algapi.h>
-#include <crypto/internal/cipher.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/log2.h>
 #include <linux/module.h>
 
-static int crypto_cbc_encrypt_segment(struct skcipher_walk *walk,
-                                     struct crypto_skcipher *skcipher)
+static int crypto_cbc_encrypt_segment(struct crypto_lskcipher *tfm,
+                                     const u8 *src, u8 *dst, unsigned nbytes,
+                                     u8 *iv)
 {
-       unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-       void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-       unsigned int nbytes = walk->nbytes;
-       u8 *src = walk->src.virt.addr;
-       u8 *dst = walk->dst.virt.addr;
-       struct crypto_cipher *cipher;
-       struct crypto_tfm *tfm;
-       u8 *iv = walk->iv;
-
-       cipher = skcipher_cipher_simple(skcipher);
-       tfm = crypto_cipher_tfm(cipher);
-       fn = crypto_cipher_alg(cipher)->cia_encrypt;
+       unsigned int bsize = crypto_lskcipher_blocksize(tfm);
 
-       do {
+       for (; nbytes >= bsize; src += bsize, dst += bsize, nbytes -= bsize) {
                crypto_xor(iv, src, bsize);
-               fn(tfm, dst, iv);
+               crypto_lskcipher_encrypt(tfm, iv, dst, bsize, NULL);
                memcpy(iv, dst, bsize);
-
-               src += bsize;
-               dst += bsize;
-       } while ((nbytes -= bsize) >= bsize);
+       }
 
        return nbytes;
 }
 
-static int crypto_cbc_encrypt_inplace(struct skcipher_walk *walk,
-                                     struct crypto_skcipher *skcipher)
+static int crypto_cbc_encrypt_inplace(struct crypto_lskcipher *tfm,
+                                     u8 *src, unsigned nbytes, u8 *oiv)
 {
-       unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-       void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-       unsigned int nbytes = walk->nbytes;
-       u8 *src = walk->src.virt.addr;
-       struct crypto_cipher *cipher;
-       struct crypto_tfm *tfm;
-       u8 *iv = walk->iv;
-
-       cipher = skcipher_cipher_simple(skcipher);
-       tfm = crypto_cipher_tfm(cipher);
-       fn = crypto_cipher_alg(cipher)->cia_encrypt;
+       unsigned int bsize = crypto_lskcipher_blocksize(tfm);
+       u8 *iv = oiv;
+
+       if (nbytes < bsize)
+               goto out;
 
        do {
                crypto_xor(src, iv, bsize);
-               fn(tfm, src, src);
+               crypto_lskcipher_encrypt(tfm, src, src, bsize, NULL);
                iv = src;
 
                src += bsize;
        } while ((nbytes -= bsize) >= bsize);
 
-       memcpy(walk->iv, iv, bsize);
+       memcpy(oiv, iv, bsize);
 
+out:
        return nbytes;
 }
 
-static int crypto_cbc_encrypt(struct skcipher_request *req)
+static int crypto_cbc_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+                             u8 *dst, unsigned len, u8 *iv, bool final)
 {
-       struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
-       struct skcipher_walk walk;
-       int err;
+       struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+       struct crypto_lskcipher *cipher = *ctx;
+       int rem;
 
-       err = skcipher_walk_virt(&walk, req, false);
+       if (src == dst)
+               rem = crypto_cbc_encrypt_inplace(cipher, dst, len, iv);
+       else
+               rem = crypto_cbc_encrypt_segment(cipher, src, dst, len, iv);
 
-       while (walk.nbytes) {
-               if (walk.src.virt.addr == walk.dst.virt.addr)
-                       err = crypto_cbc_encrypt_inplace(&walk, skcipher);
-               else
-                       err = crypto_cbc_encrypt_segment(&walk, skcipher);
-               err = skcipher_walk_done(&walk, err);
-       }
-
-       return err;
+       return rem && final ? -EINVAL : rem;
 }
 
-static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk,
-                                     struct crypto_skcipher *skcipher)
+static int crypto_cbc_decrypt_segment(struct crypto_lskcipher *tfm,
+                                     const u8 *src, u8 *dst, unsigned nbytes,
+                                     u8 *oiv)
 {
-       unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-       void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-       unsigned int nbytes = walk->nbytes;
-       u8 *src = walk->src.virt.addr;
-       u8 *dst = walk->dst.virt.addr;
-       struct crypto_cipher *cipher;
-       struct crypto_tfm *tfm;
-       u8 *iv = walk->iv;
-
-       cipher = skcipher_cipher_simple(skcipher);
-       tfm = crypto_cipher_tfm(cipher);
-       fn = crypto_cipher_alg(cipher)->cia_decrypt;
+       unsigned int bsize = crypto_lskcipher_blocksize(tfm);
+       const u8 *iv = oiv;
+
+       if (nbytes < bsize)
+               goto out;
 
        do {
-               fn(tfm, dst, src);
+               crypto_lskcipher_decrypt(tfm, src, dst, bsize, NULL);
                crypto_xor(dst, iv, bsize);
                iv = src;
 
@@ -114,83 +84,72 @@ static int crypto_cbc_decrypt_segment(struct skcipher_walk *walk,
                dst += bsize;
        } while ((nbytes -= bsize) >= bsize);
 
-       memcpy(walk->iv, iv, bsize);
+       memcpy(oiv, iv, bsize);
 
+out:
        return nbytes;
 }
 
-static int crypto_cbc_decrypt_inplace(struct skcipher_walk *walk,
-                                     struct crypto_skcipher *skcipher)
+static int crypto_cbc_decrypt_inplace(struct crypto_lskcipher *tfm,
+                                     u8 *src, unsigned nbytes, u8 *iv)
 {
-       unsigned int bsize = crypto_skcipher_blocksize(skcipher);
-       void (*fn)(struct crypto_tfm *, u8 *, const u8 *);
-       unsigned int nbytes = walk->nbytes;
-       u8 *src = walk->src.virt.addr;
+       unsigned int bsize = crypto_lskcipher_blocksize(tfm);
        u8 last_iv[MAX_CIPHER_BLOCKSIZE];
-       struct crypto_cipher *cipher;
-       struct crypto_tfm *tfm;
 
-       cipher = skcipher_cipher_simple(skcipher);
-       tfm = crypto_cipher_tfm(cipher);
-       fn = crypto_cipher_alg(cipher)->cia_decrypt;
+       if (nbytes < bsize)
+               goto out;
 
        /* Start of the last block. */
        src += nbytes - (nbytes & (bsize - 1)) - bsize;
        memcpy(last_iv, src, bsize);
 
        for (;;) {
-               fn(tfm, src, src);
+               crypto_lskcipher_decrypt(tfm, src, src, bsize, NULL);
                if ((nbytes -= bsize) < bsize)
                        break;
                crypto_xor(src, src - bsize, bsize);
                src -= bsize;
        }
 
-       crypto_xor(src, walk->iv, bsize);
-       memcpy(walk->iv, last_iv, bsize);
+       crypto_xor(src, iv, bsize);
+       memcpy(iv, last_iv, bsize);
 
+out:
        return nbytes;
 }
 
-static int crypto_cbc_decrypt(struct skcipher_request *req)
+static int crypto_cbc_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+                             u8 *dst, unsigned len, u8 *iv, bool final)
 {
-       struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
-       struct skcipher_walk walk;
-       int err;
+       struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+       struct crypto_lskcipher *cipher = *ctx;
+       int rem;
 
-       err = skcipher_walk_virt(&walk, req, false);
+       if (src == dst)
+               rem = crypto_cbc_decrypt_inplace(cipher, dst, len, iv);
+       else
+               rem = crypto_cbc_decrypt_segment(cipher, src, dst, len, iv);
 
-       while (walk.nbytes) {
-               if (walk.src.virt.addr == walk.dst.virt.addr)
-                       err = crypto_cbc_decrypt_inplace(&walk, skcipher);
-               else
-                       err = crypto_cbc_decrypt_segment(&walk, skcipher);
-               err = skcipher_walk_done(&walk, err);
-       }
-
-       return err;
+       return rem && final ? -EINVAL : rem;
 }
 
 static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
-       struct skcipher_instance *inst;
-       struct crypto_alg *alg;
+       struct lskcipher_instance *inst;
        int err;
 
-       inst = skcipher_alloc_instance_simple(tmpl, tb);
+       inst = lskcipher_alloc_instance_simple(tmpl, tb);
        if (IS_ERR(inst))
                return PTR_ERR(inst);
 
-       alg = skcipher_ialg_simple(inst);
-
        err = -EINVAL;
-       if (!is_power_of_2(alg->cra_blocksize))
+       if (!is_power_of_2(inst->alg.co.base.cra_blocksize))
                goto out_free_inst;
 
        inst->alg.encrypt = crypto_cbc_encrypt;
        inst->alg.decrypt = crypto_cbc_decrypt;
 
-       err = skcipher_register_instance(tmpl, inst);
+       err = lskcipher_register_instance(tmpl, inst);
        if (err) {
 out_free_inst:
                inst->free(inst);
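In the lskcipher calling convention used above, the encrypt/decrypt hooks return the number of unprocessed tail bytes, and a leftover partial block is an error only on the final call - hence the "rem && final ? -EINVAL : rem" idiom. From the caller's side the API stays a plain virtual-address one. A hedged one-shot usage sketch; the function and its parameters are hypothetical, and for CBC the length must be a whole number of blocks:

#include <crypto/skcipher.h>
#include <linux/err.h>

static int example_cbc_encrypt(const u8 *key, const u8 *src, u8 *dst,
			       unsigned int len)
{
	struct crypto_lskcipher *tfm;
	u8 iv[16] = { 0 };
	int err;

	/* No request objects or scatterlists needed. */
	tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_lskcipher_setkey(tfm, key, 16);
	if (!err)
		err = crypto_lskcipher_encrypt(tfm, src, dst, len, iv);

	crypto_free_lskcipher(tfm);
	return err;
}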
index a9453129c51cb7d1de295caf1cf61bdab13cb126..36f0acec32e196e0d086631851b36c192963924b 100644
@@ -56,6 +56,7 @@ struct cbcmac_tfm_ctx {
 
 struct cbcmac_desc_ctx {
        unsigned int len;
+       u8 dg[];
 };
 
 static inline struct crypto_ccm_req_priv_ctx *crypto_ccm_reqctx(
@@ -447,10 +448,10 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
                                    const char *ctr_name,
                                    const char *mac_name)
 {
+       struct skcipher_alg_common *ctr;
        u32 mask;
        struct aead_instance *inst;
        struct ccm_instance_ctx *ictx;
-       struct skcipher_alg *ctr;
        struct hash_alg_common *mac;
        int err;
 
@@ -478,13 +479,12 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
                                   ctr_name, 0, mask);
        if (err)
                goto err_free_inst;
-       ctr = crypto_spawn_skcipher_alg(&ictx->ctr);
+       ctr = crypto_spawn_skcipher_alg_common(&ictx->ctr);
 
        /* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
        err = -EINVAL;
        if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
-           crypto_skcipher_alg_ivsize(ctr) != 16 ||
-           ctr->base.cra_blocksize != 1)
+           ctr->ivsize != 16 || ctr->base.cra_blocksize != 1)
                goto err_free_inst;
 
        /* ctr and cbcmac must use the same underlying block cipher. */
@@ -504,10 +504,9 @@ static int crypto_ccm_create_common(struct crypto_template *tmpl,
        inst->alg.base.cra_priority = (mac->base.cra_priority +
                                       ctr->base.cra_priority) / 2;
        inst->alg.base.cra_blocksize = 1;
-       inst->alg.base.cra_alignmask = mac->base.cra_alignmask |
-                                      ctr->base.cra_alignmask;
+       inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
        inst->alg.ivsize = 16;
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(ctr);
+       inst->alg.chunksize = ctr->chunksize;
        inst->alg.maxauthsize = 16;
        inst->alg.base.cra_ctxsize = sizeof(struct crypto_ccm_ctx);
        inst->alg.init = crypto_ccm_init_tfm;
@@ -786,10 +785,9 @@ static int crypto_cbcmac_digest_init(struct shash_desc *pdesc)
 {
        struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
        int bs = crypto_shash_digestsize(pdesc->tfm);
-       u8 *dg = (u8 *)ctx + crypto_shash_descsize(pdesc->tfm) - bs;
 
        ctx->len = 0;
-       memset(dg, 0, bs);
+       memset(ctx->dg, 0, bs);
 
        return 0;
 }
@@ -802,18 +800,17 @@ static int crypto_cbcmac_digest_update(struct shash_desc *pdesc, const u8 *p,
        struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
        struct crypto_cipher *tfm = tctx->child;
        int bs = crypto_shash_digestsize(parent);
-       u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs;
 
        while (len > 0) {
                unsigned int l = min(len, bs - ctx->len);
 
-               crypto_xor(dg + ctx->len, p, l);
+               crypto_xor(&ctx->dg[ctx->len], p, l);
                ctx->len +=l;
                len -= l;
                p += l;
 
                if (ctx->len == bs) {
-                       crypto_cipher_encrypt_one(tfm, dg, dg);
+                       crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg);
                        ctx->len = 0;
                }
        }
@@ -828,12 +825,11 @@ static int crypto_cbcmac_digest_final(struct shash_desc *pdesc, u8 *out)
        struct cbcmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
        struct crypto_cipher *tfm = tctx->child;
        int bs = crypto_shash_digestsize(parent);
-       u8 *dg = (u8 *)ctx + crypto_shash_descsize(parent) - bs;
 
        if (ctx->len)
-               crypto_cipher_encrypt_one(tfm, dg, dg);
+               crypto_cipher_encrypt_one(tfm, ctx->dg, ctx->dg);
 
-       memcpy(out, dg, bs);
+       memcpy(out, ctx->dg, bs);
        return 0;
 }
 
@@ -890,8 +886,7 @@ static int cbcmac_create(struct crypto_template *tmpl, struct rtattr **tb)
        inst->alg.base.cra_blocksize = 1;
 
        inst->alg.digestsize = alg->cra_blocksize;
-       inst->alg.descsize = ALIGN(sizeof(struct cbcmac_desc_ctx),
-                                  alg->cra_alignmask + 1) +
+       inst->alg.descsize = sizeof(struct cbcmac_desc_ctx) +
                             alg->cra_blocksize;
 
        inst->alg.base.cra_ctxsize = sizeof(struct cbcmac_tfm_ctx);
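With the shash alignmask attribute gone, cbcmac's per-request state can end in a flexible array member sized through descsize, instead of being located with "(u8 *)ctx + descsize - bs" pointer arithmetic; the cmac hunks further down apply the same treatment to its consts[]/odds[] state. A distilled, hypothetical sketch of the pattern:

#include <crypto/internal/hash.h>

struct mac_desc_ctx {
	unsigned int len;
	u8 dg[];	/* one cipher block of running state */
};

static void example_size_desc(struct shash_instance *inst,
			      struct crypto_alg *alg)
{
	/* Reserve the flexible-array block at instance creation time. */
	inst->alg.descsize = sizeof(struct mac_desc_ctx) +
			     alg->cra_blocksize;
}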
index 3a905c5d8f53511776e0885ce8297851356ee471..9e4651330852b51415bf43700d56d831fb2c8725 100644
@@ -558,7 +558,7 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
        u32 mask;
        struct aead_instance *inst;
        struct chachapoly_instance_ctx *ctx;
-       struct skcipher_alg *chacha;
+       struct skcipher_alg_common *chacha;
        struct hash_alg_common *poly;
        int err;
 
@@ -579,7 +579,7 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
                                   crypto_attr_alg_name(tb[1]), 0, mask);
        if (err)
                goto err_free_inst;
-       chacha = crypto_spawn_skcipher_alg(&ctx->chacha);
+       chacha = crypto_spawn_skcipher_alg_common(&ctx->chacha);
 
        err = crypto_grab_ahash(&ctx->poly, aead_crypto_instance(inst),
                                crypto_attr_alg_name(tb[2]), 0, mask);
@@ -591,7 +591,7 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
        if (poly->digestsize != POLY1305_DIGEST_SIZE)
                goto err_free_inst;
        /* Need 16-byte IV size, including Initial Block Counter value */
-       if (crypto_skcipher_alg_ivsize(chacha) != CHACHA_IV_SIZE)
+       if (chacha->ivsize != CHACHA_IV_SIZE)
                goto err_free_inst;
        /* Not a stream cipher? */
        if (chacha->base.cra_blocksize != 1)
@@ -610,12 +610,11 @@ static int chachapoly_create(struct crypto_template *tmpl, struct rtattr **tb,
        inst->alg.base.cra_priority = (chacha->base.cra_priority +
                                       poly->base.cra_priority) / 2;
        inst->alg.base.cra_blocksize = 1;
-       inst->alg.base.cra_alignmask = chacha->base.cra_alignmask |
-                                      poly->base.cra_alignmask;
+       inst->alg.base.cra_alignmask = chacha->base.cra_alignmask;
        inst->alg.base.cra_ctxsize = sizeof(struct chachapoly_ctx) +
                                     ctx->saltlen;
        inst->alg.ivsize = ivsize;
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(chacha);
+       inst->alg.chunksize = chacha->chunksize;
        inst->alg.maxauthsize = POLY1305_DIGEST_SIZE;
        inst->alg.init = chachapoly_init;
        inst->alg.exit = chachapoly_exit;
index fce6b0f58e88e70e513387a9eb1d466b7dac72c1..c7aa3665b076e4012d7b9e93119f58c9145bd6be 100644
@@ -28,7 +28,7 @@
  */
 struct cmac_tfm_ctx {
        struct crypto_cipher *child;
-       u8 ctx[];
+       __be64 consts[];
 };
 
 /*
@@ -44,17 +44,15 @@ struct cmac_tfm_ctx {
  */
 struct cmac_desc_ctx {
        unsigned int len;
-       u8 ctx[];
+       u8 odds[];
 };
 
 static int crypto_cmac_digest_setkey(struct crypto_shash *parent,
                                     const u8 *inkey, unsigned int keylen)
 {
-       unsigned long alignmask = crypto_shash_alignmask(parent);
        struct cmac_tfm_ctx *ctx = crypto_shash_ctx(parent);
        unsigned int bs = crypto_shash_blocksize(parent);
-       __be64 *consts = PTR_ALIGN((void *)ctx->ctx,
-                                  (alignmask | (__alignof__(__be64) - 1)) + 1);
+       __be64 *consts = ctx->consts;
        u64 _const[2];
        int i, err = 0;
        u8 msb_mask, gfmask;
@@ -104,10 +102,9 @@ static int crypto_cmac_digest_setkey(struct crypto_shash *parent,
 
 static int crypto_cmac_digest_init(struct shash_desc *pdesc)
 {
-       unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm);
        struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
        int bs = crypto_shash_blocksize(pdesc->tfm);
-       u8 *prev = PTR_ALIGN((void *)ctx->ctx, alignmask + 1) + bs;
+       u8 *prev = &ctx->odds[bs];
 
        ctx->len = 0;
        memset(prev, 0, bs);
@@ -119,12 +116,11 @@ static int crypto_cmac_digest_update(struct shash_desc *pdesc, const u8 *p,
                                     unsigned int len)
 {
        struct crypto_shash *parent = pdesc->tfm;
-       unsigned long alignmask = crypto_shash_alignmask(parent);
        struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent);
        struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
        struct crypto_cipher *tfm = tctx->child;
        int bs = crypto_shash_blocksize(parent);
-       u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1);
+       u8 *odds = ctx->odds;
        u8 *prev = odds + bs;
 
        /* checking the data can fill the block */
@@ -165,14 +161,11 @@ static int crypto_cmac_digest_update(struct shash_desc *pdesc, const u8 *p,
 static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out)
 {
        struct crypto_shash *parent = pdesc->tfm;
-       unsigned long alignmask = crypto_shash_alignmask(parent);
        struct cmac_tfm_ctx *tctx = crypto_shash_ctx(parent);
        struct cmac_desc_ctx *ctx = shash_desc_ctx(pdesc);
        struct crypto_cipher *tfm = tctx->child;
        int bs = crypto_shash_blocksize(parent);
-       u8 *consts = PTR_ALIGN((void *)tctx->ctx,
-                              (alignmask | (__alignof__(__be64) - 1)) + 1);
-       u8 *odds = PTR_ALIGN((void *)ctx->ctx, alignmask + 1);
+       u8 *odds = ctx->odds;
        u8 *prev = odds + bs;
        unsigned int offset = 0;
 
@@ -191,7 +184,7 @@ static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out)
        }
 
        crypto_xor(prev, odds, bs);
-       crypto_xor(prev, consts + offset, bs);
+       crypto_xor(prev, (const u8 *)tctx->consts + offset, bs);
 
        crypto_cipher_encrypt_one(tfm, out, prev);
 
@@ -241,7 +234,6 @@ static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb)
        struct shash_instance *inst;
        struct crypto_cipher_spawn *spawn;
        struct crypto_alg *alg;
-       unsigned long alignmask;
        u32 mask;
        int err;
 
@@ -273,23 +265,14 @@ static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb)
        if (err)
                goto err_free_inst;
 
-       alignmask = alg->cra_alignmask;
-       inst->alg.base.cra_alignmask = alignmask;
        inst->alg.base.cra_priority = alg->cra_priority;
        inst->alg.base.cra_blocksize = alg->cra_blocksize;
+       inst->alg.base.cra_ctxsize = sizeof(struct cmac_tfm_ctx) +
+                                    alg->cra_blocksize * 2;
 
        inst->alg.digestsize = alg->cra_blocksize;
-       inst->alg.descsize =
-               ALIGN(sizeof(struct cmac_desc_ctx), crypto_tfm_ctx_alignment())
-               + (alignmask & ~(crypto_tfm_ctx_alignment() - 1))
-               + alg->cra_blocksize * 2;
-
-       inst->alg.base.cra_ctxsize =
-               ALIGN(sizeof(struct cmac_tfm_ctx), crypto_tfm_ctx_alignment())
-               + ((alignmask | (__alignof__(__be64) - 1)) &
-                  ~(crypto_tfm_ctx_alignment() - 1))
-               + alg->cra_blocksize * 2;
-
+       inst->alg.descsize = sizeof(struct cmac_desc_ctx) +
+                            alg->cra_blocksize * 2;
        inst->alg.init = crypto_cmac_digest_init;
        inst->alg.update = crypto_cmac_digest_update;
        inst->alg.final = crypto_cmac_digest_final;
index bbcc368b6a5513d486ae0ec36dd1918337c0ecd2..31d022d47f7a063181d05a54435d4385b0695183 100644
@@ -377,7 +377,7 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
 {
        struct skcipherd_instance_ctx *ctx;
        struct skcipher_instance *inst;
-       struct skcipher_alg *alg;
+       struct skcipher_alg_common *alg;
        u32 type;
        u32 mask;
        int err;
@@ -396,17 +396,17 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
        if (err)
                goto err_free_inst;
 
-       alg = crypto_spawn_skcipher_alg(&ctx->spawn);
+       alg = crypto_spawn_skcipher_alg_common(&ctx->spawn);
        err = cryptd_init_instance(skcipher_crypto_instance(inst), &alg->base);
        if (err)
                goto err_free_inst;
 
        inst->alg.base.cra_flags |= CRYPTO_ALG_ASYNC |
                (alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
-       inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
-       inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
-       inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+       inst->alg.ivsize = alg->ivsize;
+       inst->alg.chunksize = alg->chunksize;
+       inst->alg.min_keysize = alg->min_keysize;
+       inst->alg.max_keysize = alg->max_keysize;
 
        inst->alg.base.cra_ctxsize = sizeof(struct cryptd_skcipher_ctx);
 
@@ -929,7 +929,7 @@ static int cryptd_create(struct crypto_template *tmpl, struct rtattr **tb)
                return PTR_ERR(algt);
 
        switch (algt->type & algt->mask & CRYPTO_ALG_TYPE_MASK) {
-       case CRYPTO_ALG_TYPE_SKCIPHER:
+       case CRYPTO_ALG_TYPE_LSKCIPHER:
                return cryptd_create_skcipher(tmpl, tb, algt, &queue);
        case CRYPTO_ALG_TYPE_HASH:
                return cryptd_create_hash(tmpl, tb, algt, &queue);
index 108d9d55c509b54c6b894ba3b08e717e019c554f..e60a0eb628e8a03a49c389d142f8f4394f9f5c1c 100644
@@ -552,20 +552,16 @@ EXPORT_SYMBOL_GPL(crypto_engine_alloc_init);
 /**
  * crypto_engine_exit - free the resources of the hardware engine on exit
  * @engine: the hardware engine to be freed
- *
- * Return 0 for success.
  */
-int crypto_engine_exit(struct crypto_engine *engine)
+void crypto_engine_exit(struct crypto_engine *engine)
 {
        int ret;
 
        ret = crypto_engine_stop(engine);
        if (ret)
-               return ret;
+               return;
 
        kthread_destroy_worker(engine->kworker);
-
-       return 0;
 }
 EXPORT_SYMBOL_GPL(crypto_engine_exit);
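crypto_engine_exit() now returns void - a failure to stop the engine is handled inside the engine code rather than propagated - so teardown paths can call it unconditionally. A hedged sketch of a typical caller; the device structure and field names are hypothetical:

#include <crypto/engine.h>

struct example_dev {
	struct crypto_engine *engine;
};

static void example_drv_teardown(struct example_dev *dd)
{
	/* No error code to propagate any more. */
	crypto_engine_exit(dd->engine);
}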
 
index 23c698b220130423e7261298234b345ddc230310..1420496062d57d6c200aa3ebe7e895ace3446e4f 100644
@@ -258,8 +258,8 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
                                 struct rtattr **tb)
 {
        struct skcipher_instance *inst;
-       struct skcipher_alg *alg;
        struct crypto_skcipher_spawn *spawn;
+       struct skcipher_alg_common *alg;
        u32 mask;
        int err;
 
@@ -278,11 +278,11 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
        if (err)
                goto err_free_inst;
 
-       alg = crypto_spawn_skcipher_alg(spawn);
+       alg = crypto_spawn_skcipher_alg_common(spawn);
 
        /* We only support 16-byte blocks. */
        err = -EINVAL;
-       if (crypto_skcipher_alg_ivsize(alg) != CTR_RFC3686_BLOCK_SIZE)
+       if (alg->ivsize != CTR_RFC3686_BLOCK_SIZE)
                goto err_free_inst;
 
        /* Not a stream cipher? */
@@ -303,11 +303,9 @@ static int crypto_rfc3686_create(struct crypto_template *tmpl,
        inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
 
        inst->alg.ivsize = CTR_RFC3686_IV_SIZE;
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
-       inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) +
-                               CTR_RFC3686_NONCE_SIZE;
-       inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) +
-                               CTR_RFC3686_NONCE_SIZE;
+       inst->alg.chunksize = alg->chunksize;
+       inst->alg.min_keysize = alg->min_keysize + CTR_RFC3686_NONCE_SIZE;
+       inst->alg.max_keysize = alg->max_keysize + CTR_RFC3686_NONCE_SIZE;
 
        inst->alg.setkey = crypto_rfc3686_setkey;
        inst->alg.encrypt = crypto_rfc3686_crypt;
index 8f604f6554b1c3e2799abb1e02fa1e5d58d570e4..f5b42156b6c724a4e9b5bd3f50bfc3f37949da41 100644
@@ -324,8 +324,8 @@ static void crypto_cts_free(struct skcipher_instance *inst)
 static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
        struct crypto_skcipher_spawn *spawn;
+       struct skcipher_alg_common *alg;
        struct skcipher_instance *inst;
-       struct skcipher_alg *alg;
        u32 mask;
        int err;
 
@@ -344,10 +344,10 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
        if (err)
                goto err_free_inst;
 
-       alg = crypto_spawn_skcipher_alg(spawn);
+       alg = crypto_spawn_skcipher_alg_common(spawn);
 
        err = -EINVAL;
-       if (crypto_skcipher_alg_ivsize(alg) != alg->base.cra_blocksize)
+       if (alg->ivsize != alg->base.cra_blocksize)
                goto err_free_inst;
 
        if (strncmp(alg->base.cra_name, "cbc(", 4))
@@ -363,9 +363,9 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
        inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
 
        inst->alg.ivsize = alg->base.cra_blocksize;
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
-       inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
-       inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
+       inst->alg.chunksize = alg->chunksize;
+       inst->alg.min_keysize = alg->min_keysize;
+       inst->alg.max_keysize = alg->max_keysize;
 
        inst->alg.base.cra_ctxsize = sizeof(struct crypto_cts_ctx);
 
index b2a46f6dc961e71d3fd9d8806be1c564264c7f03..6e31e0db0e8659f00dbbb4ef2c396096d3e676ec 100644
@@ -39,24 +39,20 @@ struct deflate_ctx {
        struct z_stream_s decomp_stream;
 };
 
-static int deflate_comp_init(struct deflate_ctx *ctx, int format)
+static int deflate_comp_init(struct deflate_ctx *ctx)
 {
        int ret = 0;
        struct z_stream_s *stream = &ctx->comp_stream;
 
        stream->workspace = vzalloc(zlib_deflate_workspacesize(
-                                   MAX_WBITS, MAX_MEM_LEVEL));
+                                   -DEFLATE_DEF_WINBITS, MAX_MEM_LEVEL));
        if (!stream->workspace) {
                ret = -ENOMEM;
                goto out;
        }
-       if (format)
-               ret = zlib_deflateInit(stream, 3);
-       else
-               ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
-                                       -DEFLATE_DEF_WINBITS,
-                                       DEFLATE_DEF_MEMLEVEL,
-                                       Z_DEFAULT_STRATEGY);
+       ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
+                               -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL,
+                               Z_DEFAULT_STRATEGY);
        if (ret != Z_OK) {
                ret = -EINVAL;
                goto out_free;
@@ -68,7 +64,7 @@ out_free:
        goto out;
 }
 
-static int deflate_decomp_init(struct deflate_ctx *ctx, int format)
+static int deflate_decomp_init(struct deflate_ctx *ctx)
 {
        int ret = 0;
        struct z_stream_s *stream = &ctx->decomp_stream;
@@ -78,10 +74,7 @@ static int deflate_decomp_init(struct deflate_ctx *ctx, int format)
                ret = -ENOMEM;
                goto out;
        }
-       if (format)
-               ret = zlib_inflateInit(stream);
-       else
-               ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
+       ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
        if (ret != Z_OK) {
                ret = -EINVAL;
                goto out_free;
@@ -105,21 +98,21 @@ static void deflate_decomp_exit(struct deflate_ctx *ctx)
        vfree(ctx->decomp_stream.workspace);
 }
 
-static int __deflate_init(void *ctx, int format)
+static int __deflate_init(void *ctx)
 {
        int ret;
 
-       ret = deflate_comp_init(ctx, format);
+       ret = deflate_comp_init(ctx);
        if (ret)
                goto out;
-       ret = deflate_decomp_init(ctx, format);
+       ret = deflate_decomp_init(ctx);
        if (ret)
                deflate_comp_exit(ctx);
 out:
        return ret;
 }
 
-static void *gen_deflate_alloc_ctx(struct crypto_scomp *tfm, int format)
+static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
 {
        struct deflate_ctx *ctx;
        int ret;
@@ -128,7 +121,7 @@ static void *gen_deflate_alloc_ctx(struct crypto_scomp *tfm, int format)
        if (!ctx)
                return ERR_PTR(-ENOMEM);
 
-       ret = __deflate_init(ctx, format);
+       ret = __deflate_init(ctx);
        if (ret) {
                kfree(ctx);
                return ERR_PTR(ret);
@@ -137,21 +130,11 @@ static void *gen_deflate_alloc_ctx(struct crypto_scomp *tfm, int format)
        return ctx;
 }
 
-static void *deflate_alloc_ctx(struct crypto_scomp *tfm)
-{
-       return gen_deflate_alloc_ctx(tfm, 0);
-}
-
-static void *zlib_deflate_alloc_ctx(struct crypto_scomp *tfm)
-{
-       return gen_deflate_alloc_ctx(tfm, 1);
-}
-
 static int deflate_init(struct crypto_tfm *tfm)
 {
        struct deflate_ctx *ctx = crypto_tfm_ctx(tfm);
 
-       return __deflate_init(ctx, 0);
+       return __deflate_init(ctx);
 }
 
 static void __deflate_exit(void *ctx)
@@ -286,7 +269,7 @@ static struct crypto_alg alg = {
        .coa_decompress         = deflate_decompress } }
 };
 
-static struct scomp_alg scomp[] = { {
+static struct scomp_alg scomp = {
        .alloc_ctx              = deflate_alloc_ctx,
        .free_ctx               = deflate_free_ctx,
        .compress               = deflate_scompress,
@@ -296,17 +279,7 @@ static struct scomp_alg scomp[] = { {
                .cra_driver_name = "deflate-scomp",
                .cra_module      = THIS_MODULE,
        }
-}, {
-       .alloc_ctx              = zlib_deflate_alloc_ctx,
-       .free_ctx               = deflate_free_ctx,
-       .compress               = deflate_scompress,
-       .decompress             = deflate_sdecompress,
-       .base                   = {
-               .cra_name       = "zlib-deflate",
-               .cra_driver_name = "zlib-deflate-scomp",
-               .cra_module      = THIS_MODULE,
-       }
-} };
+};
 
 static int __init deflate_mod_init(void)
 {
@@ -316,7 +289,7 @@ static int __init deflate_mod_init(void)
        if (ret)
                return ret;
 
-       ret = crypto_register_scomps(scomp, ARRAY_SIZE(scomp));
+       ret = crypto_register_scomp(&scomp);
        if (ret) {
                crypto_unregister_alg(&alg);
                return ret;
@@ -328,7 +301,7 @@ static int __init deflate_mod_init(void)
 static void __exit deflate_mod_fini(void)
 {
        crypto_unregister_alg(&alg);
-       crypto_unregister_scomps(scomp, ARRAY_SIZE(scomp));
+       crypto_unregister_scomp(&scomp);
 }
 
 subsys_initcall(deflate_mod_init);
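With "zlib-deflate" deleted, only raw "deflate" remains and the scomp registration collapses from an array to a single crypto_register_scomp() call. Any in-kernel user that previously allocated "zlib-deflate" has to request raw "deflate" instead, adding zlib framing itself if it needs it. A hedged allocation sketch through the acomp front end; the helper name is hypothetical:

#include <crypto/acompress.h>

static struct crypto_acomp *example_alloc_deflate(void)
{
	/* "zlib-deflate" no longer exists; request raw DEFLATE. */
	return crypto_alloc_acomp("deflate", 0, 0);
}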
index ff4ebbc68efab1b2956e42784bb5c9cc1df785cb..e01f8c7769d036da4e15f48c25a86a9c44aec545 100644
@@ -1698,7 +1698,7 @@ static int drbg_init_hash_kernel(struct drbg_state *drbg)
        sdesc->shash.tfm = tfm;
        drbg->priv_data = sdesc;
 
-       return crypto_shash_alignmask(tfm);
+       return 0;
 }
 
 static int drbg_fini_hash_kernel(struct drbg_state *drbg)
index 71fbb0543d64ea9aeef38ed84b4b5d16d3be9892..cc7625d1a475e8d3db8aa8a1b733640b910a3a41 100644
  * Copyright (c) 2006 Herbert Xu <herbert@gondor.apana.org.au>
  */
 
-#include <crypto/algapi.h>
 #include <crypto/internal/cipher.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/slab.h>
 
-static int crypto_ecb_crypt(struct skcipher_request *req,
-                           struct crypto_cipher *cipher,
+static int crypto_ecb_crypt(struct crypto_cipher *cipher, const u8 *src,
+                           u8 *dst, unsigned nbytes, bool final,
                            void (*fn)(struct crypto_tfm *, u8 *, const u8 *))
 {
        const unsigned int bsize = crypto_cipher_blocksize(cipher);
-       struct skcipher_walk walk;
-       unsigned int nbytes;
-       int err;
-
-       err = skcipher_walk_virt(&walk, req, false);
 
-       while ((nbytes = walk.nbytes) != 0) {
-               const u8 *src = walk.src.virt.addr;
-               u8 *dst = walk.dst.virt.addr;
+       while (nbytes >= bsize) {
+               fn(crypto_cipher_tfm(cipher), dst, src);
 
-               do {
-                       fn(crypto_cipher_tfm(cipher), dst, src);
+               src += bsize;
+               dst += bsize;
 
-                       src += bsize;
-                       dst += bsize;
-               } while ((nbytes -= bsize) >= bsize);
-
-               err = skcipher_walk_done(&walk, nbytes);
+               nbytes -= bsize;
        }
 
-       return err;
+       return nbytes && final ? -EINVAL : nbytes;
 }
 
-static int crypto_ecb_encrypt(struct skcipher_request *req)
+static int crypto_ecb_encrypt2(struct crypto_lskcipher *tfm, const u8 *src,
+                              u8 *dst, unsigned len, u8 *iv, bool final)
 {
-       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-       struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
+       struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+       struct crypto_cipher *cipher = *ctx;
 
-       return crypto_ecb_crypt(req, cipher,
+       return crypto_ecb_crypt(cipher, src, dst, len, final,
                                crypto_cipher_alg(cipher)->cia_encrypt);
 }
 
-static int crypto_ecb_decrypt(struct skcipher_request *req)
+static int crypto_ecb_decrypt2(struct crypto_lskcipher *tfm, const u8 *src,
+                              u8 *dst, unsigned len, u8 *iv, bool final)
 {
-       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-       struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
+       struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+       struct crypto_cipher *cipher = *ctx;
 
-       return crypto_ecb_crypt(req, cipher,
+       return crypto_ecb_crypt(cipher, src, dst, len, final,
                                crypto_cipher_alg(cipher)->cia_decrypt);
 }
 
-static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb)
+static int lskcipher_setkey_simple2(struct crypto_lskcipher *tfm,
+                                   const u8 *key, unsigned int keylen)
+{
+       struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+       struct crypto_cipher *cipher = *ctx;
+
+       crypto_cipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+       crypto_cipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
+                               CRYPTO_TFM_REQ_MASK);
+       return crypto_cipher_setkey(cipher, key, keylen);
+}
+
+static int lskcipher_init_tfm_simple2(struct crypto_lskcipher *tfm)
+{
+       struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
+       struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+       struct crypto_cipher_spawn *spawn;
+       struct crypto_cipher *cipher;
+
+       spawn = lskcipher_instance_ctx(inst);
+       cipher = crypto_spawn_cipher(spawn);
+       if (IS_ERR(cipher))
+               return PTR_ERR(cipher);
+
+       *ctx = cipher;
+       return 0;
+}
+
+static void lskcipher_exit_tfm_simple2(struct crypto_lskcipher *tfm)
+{
+       struct crypto_cipher **ctx = crypto_lskcipher_ctx(tfm);
+
+       crypto_free_cipher(*ctx);
+}
+
+static void lskcipher_free_instance_simple2(struct lskcipher_instance *inst)
+{
+       crypto_drop_cipher(lskcipher_instance_ctx(inst));
+       kfree(inst);
+}
+
+static struct lskcipher_instance *lskcipher_alloc_instance_simple2(
+       struct crypto_template *tmpl, struct rtattr **tb)
+{
+       struct crypto_cipher_spawn *spawn;
+       struct lskcipher_instance *inst;
+       struct crypto_alg *cipher_alg;
+       u32 mask;
+       int err;
+
+       err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
+       if (err)
+               return ERR_PTR(err);
+
+       inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+       if (!inst)
+               return ERR_PTR(-ENOMEM);
+       spawn = lskcipher_instance_ctx(inst);
+
+       err = crypto_grab_cipher(spawn, lskcipher_crypto_instance(inst),
+                                crypto_attr_alg_name(tb[1]), 0, mask);
+       if (err)
+               goto err_free_inst;
+       cipher_alg = crypto_spawn_cipher_alg(spawn);
+
+       err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
+                                 cipher_alg);
+       if (err)
+               goto err_free_inst;
+
+       inst->free = lskcipher_free_instance_simple2;
+
+       /* Default algorithm properties, can be overridden */
+       inst->alg.co.base.cra_blocksize = cipher_alg->cra_blocksize;
+       inst->alg.co.base.cra_alignmask = cipher_alg->cra_alignmask;
+       inst->alg.co.base.cra_priority = cipher_alg->cra_priority;
+       inst->alg.co.min_keysize = cipher_alg->cra_cipher.cia_min_keysize;
+       inst->alg.co.max_keysize = cipher_alg->cra_cipher.cia_max_keysize;
+       inst->alg.co.ivsize = cipher_alg->cra_blocksize;
+
+       /* Use struct crypto_cipher * by default, can be overridden */
+       inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_cipher *);
+       inst->alg.setkey = lskcipher_setkey_simple2;
+       inst->alg.init = lskcipher_init_tfm_simple2;
+       inst->alg.exit = lskcipher_exit_tfm_simple2;
+
+       return inst;
+
+err_free_inst:
+       lskcipher_free_instance_simple2(inst);
+       return ERR_PTR(err);
+}
+
+static int crypto_ecb_create2(struct crypto_template *tmpl, struct rtattr **tb)
 {
-       struct skcipher_instance *inst;
+       struct lskcipher_instance *inst;
        int err;
 
-       inst = skcipher_alloc_instance_simple(tmpl, tb);
+       inst = lskcipher_alloc_instance_simple2(tmpl, tb);
        if (IS_ERR(inst))
                return PTR_ERR(inst);
 
-       inst->alg.ivsize = 0; /* ECB mode doesn't take an IV */
+       /* ECB mode doesn't take an IV */
+       inst->alg.co.ivsize = 0;
+
+       inst->alg.encrypt = crypto_ecb_encrypt2;
+       inst->alg.decrypt = crypto_ecb_decrypt2;
+
+       err = lskcipher_register_instance(tmpl, inst);
+       if (err)
+               inst->free(inst);
+
+       return err;
+}
+
+static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+       struct crypto_lskcipher_spawn *spawn;
+       struct lskcipher_alg *cipher_alg;
+       struct lskcipher_instance *inst;
+       int err;
+
+       inst = lskcipher_alloc_instance_simple(tmpl, tb);
+       if (IS_ERR(inst)) {
+               err = crypto_ecb_create2(tmpl, tb);
+               return err;
+       }
+
+       spawn = lskcipher_instance_ctx(inst);
+       cipher_alg = crypto_lskcipher_spawn_alg(spawn);
+
+       /* ECB mode doesn't take an IV */
+       inst->alg.co.ivsize = 0;
+       if (cipher_alg->co.ivsize)
+               return -EINVAL;
 
-       inst->alg.encrypt = crypto_ecb_encrypt;
-       inst->alg.decrypt = crypto_ecb_decrypt;
+       inst->alg.co.base.cra_ctxsize = cipher_alg->co.base.cra_ctxsize;
+       inst->alg.setkey = cipher_alg->setkey;
+       inst->alg.encrypt = cipher_alg->encrypt;
+       inst->alg.decrypt = cipher_alg->decrypt;
+       inst->alg.init = cipher_alg->init;
+       inst->alg.exit = cipher_alg->exit;
 
-       err = skcipher_register_instance(tmpl, inst);
+       err = lskcipher_register_instance(tmpl, inst);
        if (err)
                inst->free(inst);
 
@@ -102,3 +223,4 @@ module_exit(crypto_ecb_module_exit);
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("ECB block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("ecb");
+MODULE_IMPORT_NS(CRYPTO_INTERNAL);
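
With ecb now registered as an lskcipher, callers can feed it linear buffers
directly instead of building scatterlists. A minimal sketch of a consumer,
assuming a kernel with this series applied; the "ecb(aes)" instantiation and
the demo function are illustrative, not part of the patch, and len must be a
whole number of AES blocks for the final=true path to succeed:

    #include <crypto/skcipher.h>	/* lskcipher API lives beside skcipher */
    #include <linux/err.h>

    static int ecb_lskcipher_demo(const u8 *key, unsigned int keylen,
                                  u8 *buf, unsigned int len)
    {
            struct crypto_lskcipher *tfm;
            int err;

            tfm = crypto_alloc_lskcipher("ecb(aes)", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_lskcipher_setkey(tfm, key, keylen);
            if (!err)
                    /* in-place encrypt; ECB takes no IV, hence NULL */
                    err = crypto_lskcipher_encrypt(tfm, buf, buf, len, NULL);

            crypto_free_lskcipher(tfm);
            return err;
    }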
index f7d4ef4837e5410698661972639f45ae99dc1465..e63fc6442e3201a3334f260315374aa6f12c5d73 100644 (file)
@@ -442,6 +442,7 @@ out:
 
 static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
+       struct skcipher_alg_common *skcipher_alg = NULL;
        struct crypto_attr_type *algt;
        const char *inner_cipher_name;
        const char *shash_name;
@@ -450,7 +451,6 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
        struct crypto_instance *inst;
        struct crypto_alg *base, *block_base;
        struct essiv_instance_ctx *ictx;
-       struct skcipher_alg *skcipher_alg = NULL;
        struct aead_alg *aead_alg = NULL;
        struct crypto_alg *_hash_alg;
        struct shash_alg *hash_alg;
@@ -475,7 +475,7 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
        mask = crypto_algt_inherited_mask(algt);
 
        switch (type) {
-       case CRYPTO_ALG_TYPE_SKCIPHER:
+       case CRYPTO_ALG_TYPE_LSKCIPHER:
                skcipher_inst = kzalloc(sizeof(*skcipher_inst) +
                                        sizeof(*ictx), GFP_KERNEL);
                if (!skcipher_inst)
@@ -489,9 +489,10 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
                                           inner_cipher_name, 0, mask);
                if (err)
                        goto out_free_inst;
-               skcipher_alg = crypto_spawn_skcipher_alg(&ictx->u.skcipher_spawn);
+               skcipher_alg = crypto_spawn_skcipher_alg_common(
+                       &ictx->u.skcipher_spawn);
                block_base = &skcipher_alg->base;
-               ivsize = crypto_skcipher_alg_ivsize(skcipher_alg);
+               ivsize = skcipher_alg->ivsize;
                break;
 
        case CRYPTO_ALG_TYPE_AEAD:
@@ -574,18 +575,17 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
        base->cra_alignmask     = block_base->cra_alignmask;
        base->cra_priority      = block_base->cra_priority;
 
-       if (type == CRYPTO_ALG_TYPE_SKCIPHER) {
+       if (type == CRYPTO_ALG_TYPE_LSKCIPHER) {
                skcipher_inst->alg.setkey       = essiv_skcipher_setkey;
                skcipher_inst->alg.encrypt      = essiv_skcipher_encrypt;
                skcipher_inst->alg.decrypt      = essiv_skcipher_decrypt;
                skcipher_inst->alg.init         = essiv_skcipher_init_tfm;
                skcipher_inst->alg.exit         = essiv_skcipher_exit_tfm;
 
-               skcipher_inst->alg.min_keysize  = crypto_skcipher_alg_min_keysize(skcipher_alg);
-               skcipher_inst->alg.max_keysize  = crypto_skcipher_alg_max_keysize(skcipher_alg);
+               skcipher_inst->alg.min_keysize  = skcipher_alg->min_keysize;
+               skcipher_inst->alg.max_keysize  = skcipher_alg->max_keysize;
                skcipher_inst->alg.ivsize       = ivsize;
-               skcipher_inst->alg.chunksize    = crypto_skcipher_alg_chunksize(skcipher_alg);
-               skcipher_inst->alg.walksize     = crypto_skcipher_alg_walksize(skcipher_alg);
+               skcipher_inst->alg.chunksize    = skcipher_alg->chunksize;
 
                skcipher_inst->free             = essiv_skcipher_free_instance;
 
@@ -616,7 +616,7 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
 out_free_hash:
        crypto_mod_put(_hash_alg);
 out_drop_skcipher:
-       if (type == CRYPTO_ALG_TYPE_SKCIPHER)
+       if (type == CRYPTO_ALG_TYPE_LSKCIPHER)
                crypto_drop_skcipher(&ictx->u.skcipher_spawn);
        else
                crypto_drop_aead(&ictx->u.aead_spawn);
index 4ba624450c3ffb51234174bfacec393f4c8c1526..84f7c23d14e48354c953f21cda4e7423e21932f2 100644 (file)
@@ -576,10 +576,10 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
                                    const char *ctr_name,
                                    const char *ghash_name)
 {
+       struct skcipher_alg_common *ctr;
        u32 mask;
        struct aead_instance *inst;
        struct gcm_instance_ctx *ctx;
-       struct skcipher_alg *ctr;
        struct hash_alg_common *ghash;
        int err;
 
@@ -607,13 +607,12 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
                                   ctr_name, 0, mask);
        if (err)
                goto err_free_inst;
-       ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
+       ctr = crypto_spawn_skcipher_alg_common(&ctx->ctr);
 
        /* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
        err = -EINVAL;
        if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
-           crypto_skcipher_alg_ivsize(ctr) != 16 ||
-           ctr->base.cra_blocksize != 1)
+           ctr->ivsize != 16 || ctr->base.cra_blocksize != 1)
                goto err_free_inst;
 
        err = -ENAMETOOLONG;
@@ -630,11 +629,10 @@ static int crypto_gcm_create_common(struct crypto_template *tmpl,
        inst->alg.base.cra_priority = (ghash->base.cra_priority +
                                       ctr->base.cra_priority) / 2;
        inst->alg.base.cra_blocksize = 1;
-       inst->alg.base.cra_alignmask = ghash->base.cra_alignmask |
-                                      ctr->base.cra_alignmask;
+       inst->alg.base.cra_alignmask = ctr->base.cra_alignmask;
        inst->alg.base.cra_ctxsize = sizeof(struct crypto_gcm_ctx);
        inst->alg.ivsize = GCM_AES_IV_SIZE;
-       inst->alg.chunksize = crypto_skcipher_alg_chunksize(ctr);
+       inst->alg.chunksize = ctr->chunksize;
        inst->alg.maxauthsize = 16;
        inst->alg.init = crypto_gcm_init_tfm;
        inst->alg.exit = crypto_gcm_exit_tfm;
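
This gcm hunk shows the conversion pattern that repeats across the templates
in this pull (essiv above, hctr2 and lrw below): grab struct
skcipher_alg_common from the spawn and read the properties as plain fields,
rather than calling per-field accessors on the full struct skcipher_alg. In
sketch form, with an illustrative spawn name:

    /* before: accessor helpers on the full skcipher_alg */
    struct skcipher_alg *alg = crypto_spawn_skcipher_alg(&ctx->spawn);
    unsigned int ivsize = crypto_skcipher_alg_ivsize(alg);

    /* after: the common subset, read directly */
    struct skcipher_alg_common *calg =
            crypto_spawn_skcipher_alg_common(&ctx->spawn);
    unsigned int ivsize = calg->ivsize;	/* same for ->min_keysize etc. */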
index 7e6c1a948692fbacd6b2652ad84a2e8ba3efcf48..93f6ba0df263e51e8b4c35927c2873cdb565c138 100644 (file)
 
 #include "internal.h"
 
+static inline struct crypto_istat_hash *hash_get_stat(
+       struct hash_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+       return &alg->stat;
+#else
+       return NULL;
+#endif
+}
+
 static inline int crypto_hash_report_stat(struct sk_buff *skb,
                                          struct crypto_alg *alg,
                                          const char *type)
@@ -31,9 +41,7 @@ static inline int crypto_hash_report_stat(struct sk_buff *skb,
        return nla_put(skb, CRYPTOCFGA_STAT_HASH, sizeof(rhash), &rhash);
 }
 
-int crypto_init_shash_ops_async(struct crypto_tfm *tfm);
-struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
-                                                 struct crypto_ahash *hash);
+extern const struct crypto_type crypto_shash_type;
 
 int hash_prepare_alg(struct hash_alg_common *alg);
 
index a49ff96bde7784f9095ac7ecab8ff65124b7b17c..9a467638c9713064fb3d3cdcca7e8fcf9e4b22aa 100644 (file)
@@ -29,6 +29,9 @@ const char *const hash_algo_name[HASH_ALGO__LAST] = {
        [HASH_ALGO_SM3_256]     = "sm3",
        [HASH_ALGO_STREEBOG_256] = "streebog256",
        [HASH_ALGO_STREEBOG_512] = "streebog512",
+       [HASH_ALGO_SHA3_256]    = "sha3-256",
+       [HASH_ALGO_SHA3_384]    = "sha3-384",
+       [HASH_ALGO_SHA3_512]    = "sha3-512",
 };
 EXPORT_SYMBOL_GPL(hash_algo_name);
 
@@ -53,5 +56,8 @@ const int hash_digest_size[HASH_ALGO__LAST] = {
        [HASH_ALGO_SM3_256]     = SM3256_DIGEST_SIZE,
        [HASH_ALGO_STREEBOG_256] = STREEBOG256_DIGEST_SIZE,
        [HASH_ALGO_STREEBOG_512] = STREEBOG512_DIGEST_SIZE,
+       [HASH_ALGO_SHA3_256]    = SHA3_256_DIGEST_SIZE,
+       [HASH_ALGO_SHA3_384]    = SHA3_384_DIGEST_SIZE,
+       [HASH_ALGO_SHA3_512]    = SHA3_512_DIGEST_SIZE,
 };
 EXPORT_SYMBOL_GPL(hash_digest_size);
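
With these entries in place, code that works from enum hash_algo (IMA, module
signing, the x509 changes elsewhere in this series) can resolve the new SHA-3
variants by index. A small sketch of the table lookup the two arrays support;
the demo function is hypothetical:

    #include <crypto/hash_info.h>
    #include <linux/printk.h>

    static void sha3_lookup_demo(void)
    {
            enum hash_algo algo = HASH_ALGO_SHA3_384;

            /* prints: "sha3-384 produces 48-byte digests" */
            pr_info("%s produces %d-byte digests\n",
                    hash_algo_name[algo], hash_digest_size[algo]);
    }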
index 6f4c1884d0e96c0a2bfd7e9d836dc181f0775abd..87e7547ad18623b8f75c6c33f49116e60410a133 100644 (file)
@@ -406,10 +406,10 @@ static int hctr2_create_common(struct crypto_template *tmpl,
                               const char *xctr_name,
                               const char *polyval_name)
 {
+       struct skcipher_alg_common *xctr_alg;
        u32 mask;
        struct skcipher_instance *inst;
        struct hctr2_instance_ctx *ictx;
-       struct skcipher_alg *xctr_alg;
        struct crypto_alg *blockcipher_alg;
        struct shash_alg *polyval_alg;
        char blockcipher_name[CRYPTO_MAX_ALG_NAME];
@@ -431,7 +431,7 @@ static int hctr2_create_common(struct crypto_template *tmpl,
                                   xctr_name, 0, mask);
        if (err)
                goto err_free_inst;
-       xctr_alg = crypto_spawn_skcipher_alg(&ictx->xctr_spawn);
+       xctr_alg = crypto_spawn_skcipher_alg_common(&ictx->xctr_spawn);
 
        err = -EINVAL;
        if (strncmp(xctr_alg->base.cra_name, "xctr(", 5))
@@ -485,8 +485,7 @@ static int hctr2_create_common(struct crypto_template *tmpl,
        inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
        inst->alg.base.cra_ctxsize = sizeof(struct hctr2_tfm_ctx) +
                                     polyval_alg->statesize * 2;
-       inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask |
-                                      polyval_alg->base.cra_alignmask;
+       inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask;
        /*
         * The hash function is called twice, so it is weighted higher than the
         * xctr and blockcipher.
@@ -500,8 +499,8 @@ static int hctr2_create_common(struct crypto_template *tmpl,
        inst->alg.decrypt = hctr2_decrypt;
        inst->alg.init = hctr2_init_tfm;
        inst->alg.exit = hctr2_exit_tfm;
-       inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(xctr_alg);
-       inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(xctr_alg);
+       inst->alg.min_keysize = xctr_alg->min_keysize;
+       inst->alg.max_keysize = xctr_alg->max_keysize;
        inst->alg.ivsize = TWEAK_SIZE;
 
        inst->free = hctr2_free_instance;
index ea93f4c55f251bd96aca41046b62eaca611739a5..7cec25ff988915aa33497c58787f3a17130ea915 100644 (file)
 
 struct hmac_ctx {
        struct crypto_shash *hash;
+       /* Contains 'u8 ipad[statesize];', then 'u8 opad[statesize];' */
+       u8 pads[];
 };
 
-static inline void *align_ptr(void *p, unsigned int align)
-{
-       return (void *)ALIGN((unsigned long)p, align);
-}
-
-static inline struct hmac_ctx *hmac_ctx(struct crypto_shash *tfm)
-{
-       return align_ptr(crypto_shash_ctx_aligned(tfm) +
-                        crypto_shash_statesize(tfm) * 2,
-                        crypto_tfm_ctx_alignment());
-}
-
 static int hmac_setkey(struct crypto_shash *parent,
                       const u8 *inkey, unsigned int keylen)
 {
        int bs = crypto_shash_blocksize(parent);
        int ds = crypto_shash_digestsize(parent);
        int ss = crypto_shash_statesize(parent);
-       char *ipad = crypto_shash_ctx_aligned(parent);
-       char *opad = ipad + ss;
-       struct hmac_ctx *ctx = align_ptr(opad + ss,
-                                        crypto_tfm_ctx_alignment());
-       struct crypto_shash *hash = ctx->hash;
+       struct hmac_ctx *tctx = crypto_shash_ctx(parent);
+       struct crypto_shash *hash = tctx->hash;
+       u8 *ipad = &tctx->pads[0];
+       u8 *opad = &tctx->pads[ss];
        SHASH_DESC_ON_STACK(shash, hash);
        unsigned int i;
 
@@ -94,16 +83,18 @@ static int hmac_export(struct shash_desc *pdesc, void *out)
 static int hmac_import(struct shash_desc *pdesc, const void *in)
 {
        struct shash_desc *desc = shash_desc_ctx(pdesc);
-       struct hmac_ctx *ctx = hmac_ctx(pdesc->tfm);
+       const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
 
-       desc->tfm = ctx->hash;
+       desc->tfm = tctx->hash;
 
        return crypto_shash_import(desc, in);
 }
 
 static int hmac_init(struct shash_desc *pdesc)
 {
-       return hmac_import(pdesc, crypto_shash_ctx_aligned(pdesc->tfm));
+       const struct hmac_ctx *tctx = crypto_shash_ctx(pdesc->tfm);
+
+       return hmac_import(pdesc, &tctx->pads[0]);
 }
 
 static int hmac_update(struct shash_desc *pdesc,
@@ -119,7 +110,8 @@ static int hmac_final(struct shash_desc *pdesc, u8 *out)
        struct crypto_shash *parent = pdesc->tfm;
        int ds = crypto_shash_digestsize(parent);
        int ss = crypto_shash_statesize(parent);
-       char *opad = crypto_shash_ctx_aligned(parent) + ss;
+       const struct hmac_ctx *tctx = crypto_shash_ctx(parent);
+       const u8 *opad = &tctx->pads[ss];
        struct shash_desc *desc = shash_desc_ctx(pdesc);
 
        return crypto_shash_final(desc, out) ?:
@@ -134,7 +126,8 @@ static int hmac_finup(struct shash_desc *pdesc, const u8 *data,
        struct crypto_shash *parent = pdesc->tfm;
        int ds = crypto_shash_digestsize(parent);
        int ss = crypto_shash_statesize(parent);
-       char *opad = crypto_shash_ctx_aligned(parent) + ss;
+       const struct hmac_ctx *tctx = crypto_shash_ctx(parent);
+       const u8 *opad = &tctx->pads[ss];
        struct shash_desc *desc = shash_desc_ctx(pdesc);
 
        return crypto_shash_finup(desc, data, nbytes, out) ?:
@@ -147,7 +140,7 @@ static int hmac_init_tfm(struct crypto_shash *parent)
        struct crypto_shash *hash;
        struct shash_instance *inst = shash_alg_instance(parent);
        struct crypto_shash_spawn *spawn = shash_instance_ctx(inst);
-       struct hmac_ctx *ctx = hmac_ctx(parent);
+       struct hmac_ctx *tctx = crypto_shash_ctx(parent);
 
        hash = crypto_spawn_shash(spawn);
        if (IS_ERR(hash))
@@ -156,14 +149,14 @@ static int hmac_init_tfm(struct crypto_shash *parent)
        parent->descsize = sizeof(struct shash_desc) +
                           crypto_shash_descsize(hash);
 
-       ctx->hash = hash;
+       tctx->hash = hash;
        return 0;
 }
 
 static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src)
 {
-       struct hmac_ctx *sctx = hmac_ctx(src);
-       struct hmac_ctx *dctx = hmac_ctx(dst);
+       struct hmac_ctx *sctx = crypto_shash_ctx(src);
+       struct hmac_ctx *dctx = crypto_shash_ctx(dst);
        struct crypto_shash *hash;
 
        hash = crypto_clone_shash(sctx->hash);
@@ -176,9 +169,9 @@ static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src)
 
 static void hmac_exit_tfm(struct crypto_shash *parent)
 {
-       struct hmac_ctx *ctx = hmac_ctx(parent);
+       struct hmac_ctx *tctx = crypto_shash_ctx(parent);
 
-       crypto_free_shash(ctx->hash);
+       crypto_free_shash(tctx->hash);
 }
 
 static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
@@ -225,15 +218,10 @@ static int hmac_create(struct crypto_template *tmpl, struct rtattr **tb)
 
        inst->alg.base.cra_priority = alg->cra_priority;
        inst->alg.base.cra_blocksize = alg->cra_blocksize;
-       inst->alg.base.cra_alignmask = alg->cra_alignmask;
+       inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) + (ss * 2);
 
-       ss = ALIGN(ss, alg->cra_alignmask + 1);
        inst->alg.digestsize = ds;
        inst->alg.statesize = ss;
-
-       inst->alg.base.cra_ctxsize = sizeof(struct hmac_ctx) +
-                                    ALIGN(ss * 2, crypto_tfm_ctx_alignment());
-
        inst->alg.init = hmac_init;
        inst->alg.update = hmac_update;
        inst->alg.final = hmac_final;
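
The net effect of this hmac rewrite is a single flat allocation with no
runtime alignment arithmetic: struct hmac_ctx is followed directly by ipad and
opad, each statesize (ss) bytes, and cra_ctxsize is sized as
sizeof(struct hmac_ctx) + ss * 2 to match. A sketch of the resulting layout;
the two accessor helpers are illustrative, not part of the patch:

    /*
     * [ struct hmac_ctx ][ ipad: ss bytes ][ opad: ss bytes ]
     *                      ^ pads[0]         ^ pads[ss]
     */
    static inline u8 *hmac_ipad(struct hmac_ctx *tctx)
    {
            return &tctx->pads[0];
    }

    static inline u8 *hmac_opad(struct hmac_ctx *tctx, int ss)
    {
            return &tctx->pads[ss];
    }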
index 7d1463a1562acbe7493b51c71e07d040eeb2301b..76edbf8af0ac782c942f27de7173f1eac57cfefd 100644 (file)
  * Helper function
  ***************************************************************************/
 
+void *jent_kvzalloc(unsigned int len)
+{
+       return kvzalloc(len, GFP_KERNEL);
+}
+
+void jent_kvzfree(void *ptr, unsigned int len)
+{
+       memzero_explicit(ptr, len);
+       kvfree(ptr);
+}
+
 void *jent_zalloc(unsigned int len)
 {
        return kzalloc(len, GFP_KERNEL);
@@ -245,7 +256,9 @@ static int jent_kcapi_init(struct crypto_tfm *tfm)
        crypto_shash_init(sdesc);
        rng->sdesc = sdesc;
 
-       rng->entropy_collector = jent_entropy_collector_alloc(1, 0, sdesc);
+       rng->entropy_collector =
+               jent_entropy_collector_alloc(CONFIG_CRYPTO_JITTERENTROPY_OSR, 0,
+                                            sdesc);
        if (!rng->entropy_collector) {
                ret = -ENOMEM;
                goto err;
@@ -334,7 +347,7 @@ static int __init jent_mod_init(void)
 
        desc->tfm = tfm;
        crypto_shash_init(desc);
-       ret = jent_entropy_init(desc);
+       ret = jent_entropy_init(CONFIG_CRYPTO_JITTERENTROPY_OSR, 0, desc, NULL);
        shash_desc_zero(desc);
        crypto_free_shash(tfm);
        if (ret) {
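
For callers, the practical consequence of the reworked health tests is the
split that jent_read_entropy() reports further down: -2 means the startup
tests were re-run and passed, so the instance may be retried later, while -3
marks a permanently failed instance that must never be used again. A hedged
sketch of a consumer; the wrapper function and its errno mapping are
illustrative, not taken from this diff:

    #include <linux/errno.h>

    static int jitter_read_checked(struct rand_data *ec, unsigned char *data,
                                   unsigned int len)
    {
            int ret = jent_read_entropy(ec, data, len);

            switch (ret) {
            case -3:	/* permanent health failure: abandon this instance */
                    return -EFAULT;
            case -2:	/* intermittent failure, re-test passed: retry later */
                    return -EAGAIN;
            case -1:	/* invalid parameters */
                    return -EINVAL;
            default:
                    return ret;	/* 0 on success */
            }
    }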
index fe9c233ec76930230a354b4697bc611d9bb16047..26a9048bc893d9e906bff0c088e7e38d208884d3 100644 (file)
@@ -72,11 +72,13 @@ struct rand_data {
        __u64 prev_time;                /* SENSITIVE Previous time stamp */
        __u64 last_delta;               /* SENSITIVE stuck test */
        __s64 last_delta2;              /* SENSITIVE stuck test */
+
+       unsigned int flags;             /* Flags used to initialize */
        unsigned int osr;               /* Oversample rate */
-#define JENT_MEMORY_BLOCKS 64
-#define JENT_MEMORY_BLOCKSIZE 32
 #define JENT_MEMORY_ACCESSLOOPS 128
-#define JENT_MEMORY_SIZE (JENT_MEMORY_BLOCKS*JENT_MEMORY_BLOCKSIZE)
+#define JENT_MEMORY_SIZE                                               \
+       (CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS *                    \
+        CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE)
        unsigned char *mem;     /* Memory access location with size of
                                 * memblocks * memblocksize */
        unsigned int memlocation; /* Pointer to byte in *mem */
@@ -88,16 +90,9 @@ struct rand_data {
        /* Repetition Count Test */
        unsigned int rct_count;                 /* Number of stuck values */
 
-       /* Intermittent health test failure threshold of 2^-30 */
-       /* From an SP800-90B perspective, this RCT cutoff value is equal to 31. */
-       /* However, our RCT implementation starts at 1, so we subtract 1 here. */
-#define JENT_RCT_CUTOFF                (31 - 1)        /* Taken from SP800-90B sec 4.4.1 */
-#define JENT_APT_CUTOFF                325                     /* Taken from SP800-90B sec 4.4.2 */
-       /* Permanent health test failure threshold of 2^-60 */
-       /* From an SP800-90B perspective, this RCT cutoff value is equal to 61. */
-       /* However, our RCT implementation starts at 1, so we subtract 1 here. */
-#define JENT_RCT_CUTOFF_PERMANENT      (61 - 1)
-#define JENT_APT_CUTOFF_PERMANENT      355
+       /* Adaptive Proportion Test cutoff values */
+       unsigned int apt_cutoff; /* Intermittent health test failure */
+       unsigned int apt_cutoff_permanent; /* Permanent health test failure */
 #define JENT_APT_WINDOW_SIZE   512     /* Data window size */
        /* LSB of time stamp to process */
 #define JENT_APT_LSB           16
@@ -105,6 +100,8 @@ struct rand_data {
        unsigned int apt_observations;  /* Number of collected observations */
        unsigned int apt_count;         /* APT counter */
        unsigned int apt_base;          /* APT base reference */
+       unsigned int health_failure;    /* Record health failure */
+
        unsigned int apt_base_set:1;    /* APT base reference set? */
 };
 
@@ -122,6 +119,16 @@ struct rand_data {
                                   * zero). */
 #define JENT_ESTUCK            8 /* Too many stuck results during init. */
 #define JENT_EHEALTH           9 /* Health test failed during initialization */
+#define JENT_ERCT             10 /* RCT failed during initialization */
+#define JENT_EHASH            11 /* Hash self test failed */
+#define JENT_EMEM             12 /* Can't allocate memory for initialization */
+
+#define JENT_RCT_FAILURE       1 /* Failure in RCT health test. */
+#define JENT_APT_FAILURE       2 /* Failure in APT health test. */
+#define JENT_PERMANENT_FAILURE_SHIFT   16
+#define JENT_PERMANENT_FAILURE(x)      ((x) << JENT_PERMANENT_FAILURE_SHIFT)
+#define JENT_RCT_FAILURE_PERMANENT     JENT_PERMANENT_FAILURE(JENT_RCT_FAILURE)
+#define JENT_APT_FAILURE_PERMANENT     JENT_PERMANENT_FAILURE(JENT_APT_FAILURE)
 
 /*
  * The output n bits can receive more than n bits of min entropy, of course,
@@ -147,6 +154,48 @@ struct rand_data {
  * This test complies with SP800-90B section 4.4.2.
  ***************************************************************************/
 
+/*
+ * See the SP 800-90B comment #10b for the corrected cutoff for the SP 800-90B
+ * APT.
+ * http://www.untruth.org/~josh/sp80090b/UL%20SP800-90B-final%20comments%20v1.9%2020191212.pdf
+ * In the syntax of R, this is C = 2 + qbinom(1 − 2^(−30), 511, 2^(-1/osr)).
+ * (The original formula wasn't correct because the first symbol must
+ * necessarily have been observed, so there is no chance of observing 0 of these
+ * symbols.)
+ *
+ * For alpha < 2^-53, R cannot be used, as it uses a float data type without
+ * arbitrary precision. A SageMath script is used to calculate those cutoff
+ * values.
+ *
+ * For any value above 14, this yields the maximal allowable value of 512
+ * (by FIPS 140-2 IG 7.19 Resolution # 16, we cannot choose a cutoff value that
+ * renders the test unable to fail).
+ */
+static const unsigned int jent_apt_cutoff_lookup[15] = {
+       325, 422, 459, 477, 488, 494, 499, 502,
+       505, 507, 508, 509, 510, 511, 512 };
+static const unsigned int jent_apt_cutoff_permanent_lookup[15] = {
+       355, 447, 479, 494, 502, 507, 510, 512,
+       512, 512, 512, 512, 512, 512, 512 };
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+static void jent_apt_init(struct rand_data *ec, unsigned int osr)
+{
+       /*
+        * Establish the apt_cutoff based on the presumed entropy rate of
+        * 1/osr.
+        */
+       if (osr >= ARRAY_SIZE(jent_apt_cutoff_lookup)) {
+               ec->apt_cutoff = jent_apt_cutoff_lookup[
+                       ARRAY_SIZE(jent_apt_cutoff_lookup) - 1];
+               ec->apt_cutoff_permanent = jent_apt_cutoff_permanent_lookup[
+                       ARRAY_SIZE(jent_apt_cutoff_permanent_lookup) - 1];
+       } else {
+               ec->apt_cutoff = jent_apt_cutoff_lookup[osr - 1];
+               ec->apt_cutoff_permanent =
+                               jent_apt_cutoff_permanent_lookup[osr - 1];
+       }
+}
 /*
  * Reset the APT counter
  *
@@ -175,26 +224,22 @@ static void jent_apt_insert(struct rand_data *ec, unsigned int delta_masked)
                return;
        }
 
-       if (delta_masked == ec->apt_base)
+       if (delta_masked == ec->apt_base) {
                ec->apt_count++;
 
+               /* Note, ec->apt_count starts with one. */
+               if (ec->apt_count >= ec->apt_cutoff_permanent)
+                       ec->health_failure |= JENT_APT_FAILURE_PERMANENT;
+               else if (ec->apt_count >= ec->apt_cutoff)
+                       ec->health_failure |= JENT_APT_FAILURE;
+       }
+
        ec->apt_observations++;
 
        if (ec->apt_observations >= JENT_APT_WINDOW_SIZE)
                jent_apt_reset(ec, delta_masked);
 }
 
-/* APT health test failure detection */
-static int jent_apt_permanent_failure(struct rand_data *ec)
-{
-       return (ec->apt_count >= JENT_APT_CUTOFF_PERMANENT) ? 1 : 0;
-}
-
-static int jent_apt_failure(struct rand_data *ec)
-{
-       return (ec->apt_count >= JENT_APT_CUTOFF) ? 1 : 0;
-}
-
 /***************************************************************************
  * Stuck Test and its use as Repetition Count Test
  *
@@ -221,6 +266,30 @@ static void jent_rct_insert(struct rand_data *ec, int stuck)
 {
        if (stuck) {
                ec->rct_count++;
+
+               /*
+                * The cutoff value is based on the following consideration:
+                * alpha = 2^-30 or 2^-60 as recommended in SP800-90B.
+                * In addition, we require an entropy value H of 1/osr as this
+                * is the minimum entropy required to provide full entropy.
+                * Note, we collect (DATA_SIZE_BITS + ENTROPY_SAFETY_FACTOR)*osr
+                * deltas for inserting them into the entropy pool which should
+                * then have (close to) DATA_SIZE_BITS bits of entropy in the
+                * conditioned output.
+                *
+                * Note, ec->rct_count (which equals the value B in the pseudo
+                * code of SP800-90B section 4.4.1) starts with zero. Hence
+                * we need to subtract one from the cutoff value as calculated
+                * following SP800-90B. Thus C = ceil(-log_2(alpha)/H) = 30*osr
+                * or 60*osr.
+                */
+               if ((unsigned int)ec->rct_count >= (60 * ec->osr)) {
+                       ec->rct_count = -1;
+                       ec->health_failure |= JENT_RCT_FAILURE_PERMANENT;
+               } else if ((unsigned int)ec->rct_count >= (30 * ec->osr)) {
+                       ec->rct_count = -1;
+                       ec->health_failure |= JENT_RCT_FAILURE;
+               }
        } else {
                /* Reset RCT */
                ec->rct_count = 0;
@@ -275,26 +344,25 @@ static int jent_stuck(struct rand_data *ec, __u64 current_delta)
        return 0;
 }
 
-/* RCT health test failure detection */
-static int jent_rct_permanent_failure(struct rand_data *ec)
-{
-       return (ec->rct_count >= JENT_RCT_CUTOFF_PERMANENT) ? 1 : 0;
-}
-
-static int jent_rct_failure(struct rand_data *ec)
-{
-       return (ec->rct_count >= JENT_RCT_CUTOFF) ? 1 : 0;
-}
-
-/* Report of health test failures */
-static int jent_health_failure(struct rand_data *ec)
+/*
+ * Report any health test failures
+ *
+ * @ec [in] Reference to entropy collector
+ *
+ * @return a bitmask indicating which tests failed
+ *     0 No health test failure
+ *     1 RCT failure
+ *     2 APT failure
+ *     1<<JENT_PERMANENT_FAILURE_SHIFT RCT permanent failure
+ *     2<<JENT_PERMANENT_FAILURE_SHIFT APT permanent failure
+ */
+static unsigned int jent_health_failure(struct rand_data *ec)
 {
-       return jent_rct_failure(ec) | jent_apt_failure(ec);
-}
+       /* Test is only enabled in FIPS mode */
+       if (!fips_enabled)
+               return 0;
 
-static int jent_permanent_health_failure(struct rand_data *ec)
-{
-       return jent_rct_permanent_failure(ec) | jent_apt_permanent_failure(ec);
+       return ec->health_failure;
 }
 
 /***************************************************************************
@@ -448,7 +516,7 @@ static void jent_memaccess(struct rand_data *ec, __u64 loop_cnt)
  *
  * @return result of stuck test
  */
-static int jent_measure_jitter(struct rand_data *ec)
+static int jent_measure_jitter(struct rand_data *ec, __u64 *ret_current_delta)
 {
        __u64 time = 0;
        __u64 current_delta = 0;
@@ -472,6 +540,10 @@ static int jent_measure_jitter(struct rand_data *ec)
        if (jent_condition_data(ec, current_delta, stuck))
                stuck = 1;
 
+       /* return the raw entropy value */
+       if (ret_current_delta)
+               *ret_current_delta = current_delta;
+
        return stuck;
 }
 
@@ -489,11 +561,11 @@ static void jent_gen_entropy(struct rand_data *ec)
                safety_factor = JENT_ENTROPY_SAFETY_FACTOR;
 
        /* priming of the ->prev_time value */
-       jent_measure_jitter(ec);
+       jent_measure_jitter(ec, NULL);
 
        while (!jent_health_failure(ec)) {
                /* If a stuck measurement is received, repeat measurement */
-               if (jent_measure_jitter(ec))
+               if (jent_measure_jitter(ec, NULL))
                        continue;
 
                /*
@@ -537,11 +609,12 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
                return -1;
 
        while (len > 0) {
-               unsigned int tocopy;
+               unsigned int tocopy, health_test_result;
 
                jent_gen_entropy(ec);
 
-               if (jent_permanent_health_failure(ec)) {
+               health_test_result = jent_health_failure(ec);
+               if (health_test_result > JENT_PERMANENT_FAILURE_SHIFT) {
                        /*
                         * At this point, the Jitter RNG instance is considered
                         * as a failed instance. There is no rerun of the
@@ -549,13 +622,18 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
                         * is assumed to not further use this instance.
                         */
                        return -3;
-               } else if (jent_health_failure(ec)) {
+               } else if (health_test_result) {
                        /*
                         * Perform startup health tests and return permanent
                         * error if it fails.
                         */
-                       if (jent_entropy_init(ec->hash_state))
+                       if (jent_entropy_init(0, 0, NULL, ec)) {
+                               /* Mark the permanent error */
+                               ec->health_failure &=
+                                       JENT_RCT_FAILURE_PERMANENT |
+                                       JENT_APT_FAILURE_PERMANENT;
                                return -3;
+                       }
 
                        return -2;
                }
@@ -592,23 +670,29 @@ struct rand_data *jent_entropy_collector_alloc(unsigned int osr,
                /* Allocate memory for adding variations based on memory
                 * access
                 */
-               entropy_collector->mem = jent_zalloc(JENT_MEMORY_SIZE);
+               entropy_collector->mem = jent_kvzalloc(JENT_MEMORY_SIZE);
                if (!entropy_collector->mem) {
                        jent_zfree(entropy_collector);
                        return NULL;
                }
-               entropy_collector->memblocksize = JENT_MEMORY_BLOCKSIZE;
-               entropy_collector->memblocks = JENT_MEMORY_BLOCKS;
+               entropy_collector->memblocksize =
+                       CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKSIZE;
+               entropy_collector->memblocks =
+                       CONFIG_CRYPTO_JITTERENTROPY_MEMORY_BLOCKS;
                entropy_collector->memaccessloops = JENT_MEMORY_ACCESSLOOPS;
        }
 
        /* verify and set the oversampling rate */
        if (osr == 0)
-               osr = 1; /* minimum sampling rate is 1 */
+               osr = 1; /* H_submitter = 1 / osr */
        entropy_collector->osr = osr;
+       entropy_collector->flags = flags;
 
        entropy_collector->hash_state = hash_state;
 
+       /* Initialize the APT */
+       jent_apt_init(entropy_collector, osr);
+
        /* fill the data pad with non-zero values */
        jent_gen_entropy(entropy_collector);
 
@@ -617,25 +701,39 @@ struct rand_data *jent_entropy_collector_alloc(unsigned int osr,
 
 void jent_entropy_collector_free(struct rand_data *entropy_collector)
 {
-       jent_zfree(entropy_collector->mem);
+       jent_kvzfree(entropy_collector->mem, JENT_MEMORY_SIZE);
        entropy_collector->mem = NULL;
        jent_zfree(entropy_collector);
 }
 
-int jent_entropy_init(void *hash_state)
+int jent_entropy_init(unsigned int osr, unsigned int flags, void *hash_state,
+                     struct rand_data *p_ec)
 {
-       int i;
-       __u64 delta_sum = 0;
-       __u64 old_delta = 0;
-       unsigned int nonstuck = 0;
-       int time_backwards = 0;
-       int count_mod = 0;
-       int count_stuck = 0;
-       struct rand_data ec = { 0 };
-
-       /* Required for RCT */
-       ec.osr = 1;
-       ec.hash_state = hash_state;
+       /*
+        * If the caller provides an allocated ec, reuse it, which implies that
+        * the health test entropy data is used to further fill the available
+        * entropy pool.
+        */
+       struct rand_data *ec = p_ec;
+       int i, time_backwards = 0, ret = 0, ec_free = 0;
+       unsigned int health_test_result;
+
+       if (!ec) {
+               ec = jent_entropy_collector_alloc(osr, flags, hash_state);
+               if (!ec)
+                       return JENT_EMEM;
+               ec_free = 1;
+       } else {
+               /* Reset the APT */
+               jent_apt_reset(ec, 0);
+               /* Ensure that a new APT base is obtained */
+               ec->apt_base_set = 0;
+               /* Reset the RCT */
+               ec->rct_count = 0;
+               /* Reset intermittent, leave permanent health test result */
+               ec->health_failure &= (~JENT_RCT_FAILURE);
+               ec->health_failure &= (~JENT_APT_FAILURE);
+       }
 
        /* We could perform statistical tests here, but the problem is
         * that we only have a few loop counts to do testing. These
@@ -664,31 +762,28 @@ int jent_entropy_init(void *hash_state)
 #define TESTLOOPCOUNT 1024
 #define CLEARCACHE 100
        for (i = 0; (TESTLOOPCOUNT + CLEARCACHE) > i; i++) {
-               __u64 time = 0;
-               __u64 time2 = 0;
-               __u64 delta = 0;
-               unsigned int lowdelta = 0;
-               int stuck;
+               __u64 start_time = 0, end_time = 0, delta = 0;
 
                /* Invoke core entropy collection logic */
-               jent_get_nstime(&time);
-               ec.prev_time = time;
-               jent_condition_data(&ec, time, 0);
-               jent_get_nstime(&time2);
+               jent_measure_jitter(ec, &delta);
+               end_time = ec->prev_time;
+               start_time = ec->prev_time - delta;
 
                /* test whether timer works */
-               if (!time || !time2)
-                       return JENT_ENOTIME;
-               delta = jent_delta(time, time2);
+               if (!start_time || !end_time) {
+                       ret = JENT_ENOTIME;
+                       goto out;
+               }
+
                /*
                 * test whether timer is fine grained enough to provide
                 * delta even when called shortly after each other -- this
                 * implies that we also have a high resolution timer
                 */
-               if (!delta)
-                       return JENT_ECOARSETIME;
-
-               stuck = jent_stuck(&ec, delta);
+               if (!delta || (end_time == start_time)) {
+                       ret = JENT_ECOARSETIME;
+                       goto out;
+               }
 
                /*
                 * up to here we did not modify any variable that will be
@@ -700,49 +795,9 @@ int jent_entropy_init(void *hash_state)
                if (i < CLEARCACHE)
                        continue;
 
-               if (stuck)
-                       count_stuck++;
-               else {
-                       nonstuck++;
-
-                       /*
-                        * Ensure that the APT succeeded.
-                        *
-                        * With the check below that count_stuck must be less
-                        * than 10% of the overall generated raw entropy values
-                        * it is guaranteed that the APT is invoked at
-                        * floor((TESTLOOPCOUNT * 0.9) / 64) == 14 times.
-                        */
-                       if ((nonstuck % JENT_APT_WINDOW_SIZE) == 0) {
-                               jent_apt_reset(&ec,
-                                              delta & JENT_APT_WORD_MASK);
-                       }
-               }
-
-               /* Validate health test result */
-               if (jent_health_failure(&ec))
-                       return JENT_EHEALTH;
-
                /* test whether we have an increasing timer */
-               if (!(time2 > time))
+               if (!(end_time > start_time))
                        time_backwards++;
-
-               /* use 32 bit value to ensure compilation on 32 bit arches */
-               lowdelta = time2 - time;
-               if (!(lowdelta % 100))
-                       count_mod++;
-
-               /*
-                * ensure that we have a varying delta timer which is necessary
-                * for the calculation of entropy -- perform this check
-                * only after the first loop is executed as we need to prime
-                * the old_data value
-                */
-               if (delta > old_delta)
-                       delta_sum += (delta - old_delta);
-               else
-                       delta_sum += (old_delta - delta);
-               old_delta = delta;
        }
 
        /*
@@ -752,31 +807,22 @@ int jent_entropy_init(void *hash_state)
         * should not fail. The value of 3 should cover the NTP case being
         * performed during our test run.
         */
-       if (time_backwards > 3)
-               return JENT_ENOMONOTONIC;
-
-       /*
-        * Variations of deltas of time must on average be larger
-        * than 1 to ensure the entropy estimation
-        * implied with 1 is preserved
-        */
-       if ((delta_sum) <= 1)
-               return JENT_EVARVAR;
+       if (time_backwards > 3) {
+               ret = JENT_ENOMONOTONIC;
+               goto out;
+       }
 
-       /*
-        * Ensure that we have variations in the time stamp below 10 for at
-        * least 10% of all checks -- on some platforms, the counter increments
-        * in multiples of 100, but not always
-        */
-       if ((TESTLOOPCOUNT/10 * 9) < count_mod)
-               return JENT_ECOARSETIME;
+       /* Did we encounter a health test failure? */
+       health_test_result = jent_health_failure(ec);
+       if (health_test_result) {
+               ret = (health_test_result & JENT_RCT_FAILURE) ? JENT_ERCT :
+                                                               JENT_EHEALTH;
+               goto out;
+       }
 
-       /*
-        * If we have more than 90% stuck results, then this Jitter RNG is
-        * likely to not work well.
-        */
-       if ((TESTLOOPCOUNT/10 * 9) < count_stuck)
-               return JENT_ESTUCK;
+out:
+       if (ec_free)
+               jent_entropy_collector_free(ec);
 
-       return 0;
+       return ret;
 }
index 4c92176ea2b1ddf76644ccdc07b10ae4ef2f1012..aa4728675ae245ca2acd5a26b6eb6f6c1cfad9c2 100644 (file)
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
+extern void *jent_kvzalloc(unsigned int len);
+extern void jent_kvzfree(void *ptr, unsigned int len);
 extern void *jent_zalloc(unsigned int len);
 extern void jent_zfree(void *ptr);
 extern void jent_get_nstime(__u64 *out);
@@ -9,7 +11,8 @@ extern int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
 int jent_read_random_block(void *hash_state, char *dst, unsigned int dst_len);
 
 struct rand_data;
-extern int jent_entropy_init(void *hash_state);
+extern int jent_entropy_init(unsigned int osr, unsigned int flags,
+                            void *hash_state, struct rand_data *p_ec);
 extern int jent_read_entropy(struct rand_data *ec, unsigned char *data,
                             unsigned int len);
 
index 59260aefed2807d948c03faf78ccc661616a2eb1..e216fbf2b7866758f35d42c69ec3fb4c6e2ba86d 100644 (file)
@@ -299,8 +299,8 @@ static void lrw_free_instance(struct skcipher_instance *inst)
 static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
        struct crypto_skcipher_spawn *spawn;
+       struct skcipher_alg_common *alg;
        struct skcipher_instance *inst;
-       struct skcipher_alg *alg;
        const char *cipher_name;
        char ecb_name[CRYPTO_MAX_ALG_NAME];
        u32 mask;
@@ -336,13 +336,13 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
        if (err)
                goto err_free_inst;
 
-       alg = crypto_skcipher_spawn_alg(spawn);
+       alg = crypto_spawn_skcipher_alg_common(spawn);
 
        err = -EINVAL;
        if (alg->base.cra_blocksize != LRW_BLOCK_SIZE)
                goto err_free_inst;
 
-       if (crypto_skcipher_alg_ivsize(alg))
+       if (alg->ivsize)
                goto err_free_inst;
 
        err = crypto_inst_setname(skcipher_crypto_instance(inst), "lrw",
@@ -382,10 +382,8 @@ static int lrw_create(struct crypto_template *tmpl, struct rtattr **tb)
                                       (__alignof__(be128) - 1);
 
        inst->alg.ivsize = LRW_BLOCK_SIZE;
-       inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) +
-                               LRW_BLOCK_SIZE;
-       inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) +
-                               LRW_BLOCK_SIZE;
+       inst->alg.min_keysize = alg->min_keysize + LRW_BLOCK_SIZE;
+       inst->alg.max_keysize = alg->max_keysize + LRW_BLOCK_SIZE;
 
        inst->alg.base.cra_ctxsize = sizeof(struct lrw_tfm_ctx);
 
diff --git a/crypto/lskcipher.c b/crypto/lskcipher.c
new file mode 100644 (file)
index 0000000..9edc897
--- /dev/null
@@ -0,0 +1,634 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Linear symmetric key cipher operations.
+ *
+ * Generic encrypt/decrypt wrapper for ciphers.
+ *
+ * Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+
+#include <linux/cryptouser.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <net/netlink.h>
+#include "skcipher.h"
+
+static inline struct crypto_lskcipher *__crypto_lskcipher_cast(
+       struct crypto_tfm *tfm)
+{
+       return container_of(tfm, struct crypto_lskcipher, base);
+}
+
+static inline struct lskcipher_alg *__crypto_lskcipher_alg(
+       struct crypto_alg *alg)
+{
+       return container_of(alg, struct lskcipher_alg, co.base);
+}
+
+static inline struct crypto_istat_cipher *lskcipher_get_stat(
+       struct lskcipher_alg *alg)
+{
+       return skcipher_get_stat_common(&alg->co);
+}
+
+static inline int crypto_lskcipher_errstat(struct lskcipher_alg *alg, int err)
+{
+       struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+       if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+               return err;
+
+       if (err)
+               atomic64_inc(&istat->err_cnt);
+
+       return err;
+}
+
+static int lskcipher_setkey_unaligned(struct crypto_lskcipher *tfm,
+                                     const u8 *key, unsigned int keylen)
+{
+       unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+       struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+       u8 *buffer, *alignbuffer;
+       unsigned long absize;
+       int ret;
+
+       absize = keylen + alignmask;
+       buffer = kmalloc(absize, GFP_ATOMIC);
+       if (!buffer)
+               return -ENOMEM;
+
+       alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
+       memcpy(alignbuffer, key, keylen);
+       ret = cipher->setkey(tfm, alignbuffer, keylen);
+       kfree_sensitive(buffer);
+       return ret;
+}
+
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm, const u8 *key,
+                           unsigned int keylen)
+{
+       unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+       struct lskcipher_alg *cipher = crypto_lskcipher_alg(tfm);
+
+       if (keylen < cipher->co.min_keysize || keylen > cipher->co.max_keysize)
+               return -EINVAL;
+
+       if ((unsigned long)key & alignmask)
+               return lskcipher_setkey_unaligned(tfm, key, keylen);
+       else
+               return cipher->setkey(tfm, key, keylen);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_setkey);
+
+static int crypto_lskcipher_crypt_unaligned(
+       struct crypto_lskcipher *tfm, const u8 *src, u8 *dst, unsigned len,
+       u8 *iv, int (*crypt)(struct crypto_lskcipher *tfm, const u8 *src,
+                            u8 *dst, unsigned len, u8 *iv, bool final))
+{
+       unsigned ivsize = crypto_lskcipher_ivsize(tfm);
+       unsigned bs = crypto_lskcipher_blocksize(tfm);
+       unsigned cs = crypto_lskcipher_chunksize(tfm);
+       int err;
+       u8 *tiv;
+       u8 *p;
+
+       BUILD_BUG_ON(MAX_CIPHER_BLOCKSIZE > PAGE_SIZE ||
+                    MAX_CIPHER_ALIGNMASK >= PAGE_SIZE);
+
+       tiv = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+       if (!tiv)
+               return -ENOMEM;
+
+       memcpy(tiv, iv, ivsize);
+
+       p = kmalloc(PAGE_SIZE, GFP_ATOMIC);
+       err = -ENOMEM;
+       if (!p)
+               goto out;
+
+       while (len >= bs) {
+               unsigned chunk = min((unsigned)PAGE_SIZE, len);
+               int err;
+
+               if (chunk > cs)
+                       chunk &= ~(cs - 1);
+
+               memcpy(p, src, chunk);
+               err = crypt(tfm, p, p, chunk, tiv, true);
+               if (err)
+                       goto out;
+
+               memcpy(dst, p, chunk);
+               src += chunk;
+               dst += chunk;
+               len -= chunk;
+       }
+
+       err = len ? -EINVAL : 0;
+
+out:
+       memcpy(iv, tiv, ivsize);
+       kfree_sensitive(p);
+       kfree_sensitive(tiv);
+       return err;
+}
+
+static int crypto_lskcipher_crypt(struct crypto_lskcipher *tfm, const u8 *src,
+                                 u8 *dst, unsigned len, u8 *iv,
+                                 int (*crypt)(struct crypto_lskcipher *tfm,
+                                              const u8 *src, u8 *dst,
+                                              unsigned len, u8 *iv,
+                                              bool final))
+{
+       unsigned long alignmask = crypto_lskcipher_alignmask(tfm);
+       struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+       int ret;
+
+       if (((unsigned long)src | (unsigned long)dst | (unsigned long)iv) &
+           alignmask) {
+               ret = crypto_lskcipher_crypt_unaligned(tfm, src, dst, len, iv,
+                                                      crypt);
+               goto out;
+       }
+
+       ret = crypt(tfm, src, dst, len, iv, true);
+
+out:
+       return crypto_lskcipher_errstat(alg, ret);
+}
+
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+                            u8 *dst, unsigned len, u8 *iv)
+{
+       struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+       if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+               struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+               atomic64_inc(&istat->encrypt_cnt);
+               atomic64_add(len, &istat->encrypt_tlen);
+       }
+
+       return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->encrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_encrypt);
+
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+                            u8 *dst, unsigned len, u8 *iv)
+{
+       struct lskcipher_alg *alg = crypto_lskcipher_alg(tfm);
+
+       if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
+               struct crypto_istat_cipher *istat = lskcipher_get_stat(alg);
+
+               atomic64_inc(&istat->decrypt_cnt);
+               atomic64_add(len, &istat->decrypt_tlen);
+       }
+
+       return crypto_lskcipher_crypt(tfm, src, dst, len, iv, alg->decrypt);
+}
+EXPORT_SYMBOL_GPL(crypto_lskcipher_decrypt);
+
+static int crypto_lskcipher_crypt_sg(struct skcipher_request *req,
+                                    int (*crypt)(struct crypto_lskcipher *tfm,
+                                                 const u8 *src, u8 *dst,
+                                                 unsigned len, u8 *iv,
+                                                 bool final))
+{
+       struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+       struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+       struct crypto_lskcipher *tfm = *ctx;
+       struct skcipher_walk walk;
+       int err;
+
+       err = skcipher_walk_virt(&walk, req, false);
+
+       while (walk.nbytes) {
+               err = crypt(tfm, walk.src.virt.addr, walk.dst.virt.addr,
+                           walk.nbytes, walk.iv, walk.nbytes == walk.total);
+               err = skcipher_walk_done(&walk, err);
+       }
+
+       return err;
+}
+
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req)
+{
+       struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+       struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+       struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+       return crypto_lskcipher_crypt_sg(req, alg->encrypt);
+}
+
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req)
+{
+       struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
+       struct crypto_lskcipher **ctx = crypto_skcipher_ctx(skcipher);
+       struct lskcipher_alg *alg = crypto_lskcipher_alg(*ctx);
+
+       return crypto_lskcipher_crypt_sg(req, alg->decrypt);
+}
+
+static void crypto_lskcipher_exit_tfm(struct crypto_tfm *tfm)
+{
+       struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+       struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+       alg->exit(skcipher);
+}
+
+static int crypto_lskcipher_init_tfm(struct crypto_tfm *tfm)
+{
+       struct crypto_lskcipher *skcipher = __crypto_lskcipher_cast(tfm);
+       struct lskcipher_alg *alg = crypto_lskcipher_alg(skcipher);
+
+       if (alg->exit)
+               skcipher->base.exit = crypto_lskcipher_exit_tfm;
+
+       if (alg->init)
+               return alg->init(skcipher);
+
+       return 0;
+}
+
+static void crypto_lskcipher_free_instance(struct crypto_instance *inst)
+{
+       struct lskcipher_instance *skcipher =
+               container_of(inst, struct lskcipher_instance, s.base);
+
+       skcipher->free(skcipher);
+}
+
+static void __maybe_unused crypto_lskcipher_show(
+       struct seq_file *m, struct crypto_alg *alg)
+{
+       struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+
+       seq_printf(m, "type         : lskcipher\n");
+       seq_printf(m, "blocksize    : %u\n", alg->cra_blocksize);
+       seq_printf(m, "min keysize  : %u\n", skcipher->co.min_keysize);
+       seq_printf(m, "max keysize  : %u\n", skcipher->co.max_keysize);
+       seq_printf(m, "ivsize       : %u\n", skcipher->co.ivsize);
+       seq_printf(m, "chunksize    : %u\n", skcipher->co.chunksize);
+}
+
+static int __maybe_unused crypto_lskcipher_report(
+       struct sk_buff *skb, struct crypto_alg *alg)
+{
+       struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+       struct crypto_report_blkcipher rblkcipher;
+
+       memset(&rblkcipher, 0, sizeof(rblkcipher));
+
+       strscpy(rblkcipher.type, "lskcipher", sizeof(rblkcipher.type));
+       strscpy(rblkcipher.geniv, "<none>", sizeof(rblkcipher.geniv));
+
+       rblkcipher.blocksize = alg->cra_blocksize;
+       rblkcipher.min_keysize = skcipher->co.min_keysize;
+       rblkcipher.max_keysize = skcipher->co.max_keysize;
+       rblkcipher.ivsize = skcipher->co.ivsize;
+
+       return nla_put(skb, CRYPTOCFGA_REPORT_BLKCIPHER,
+                      sizeof(rblkcipher), &rblkcipher);
+}
+
+static int __maybe_unused crypto_lskcipher_report_stat(
+       struct sk_buff *skb, struct crypto_alg *alg)
+{
+       struct lskcipher_alg *skcipher = __crypto_lskcipher_alg(alg);
+       struct crypto_istat_cipher *istat;
+       struct crypto_stat_cipher rcipher;
+
+       istat = lskcipher_get_stat(skcipher);
+
+       memset(&rcipher, 0, sizeof(rcipher));
+
+       strscpy(rcipher.type, "cipher", sizeof(rcipher.type));
+
+       rcipher.stat_encrypt_cnt = atomic64_read(&istat->encrypt_cnt);
+       rcipher.stat_encrypt_tlen = atomic64_read(&istat->encrypt_tlen);
+       rcipher.stat_decrypt_cnt = atomic64_read(&istat->decrypt_cnt);
+       rcipher.stat_decrypt_tlen = atomic64_read(&istat->decrypt_tlen);
+       rcipher.stat_err_cnt = atomic64_read(&istat->err_cnt);
+
+       return nla_put(skb, CRYPTOCFGA_STAT_CIPHER, sizeof(rcipher), &rcipher);
+}
+
+static const struct crypto_type crypto_lskcipher_type = {
+       .extsize = crypto_alg_extsize,
+       .init_tfm = crypto_lskcipher_init_tfm,
+       .free = crypto_lskcipher_free_instance,
+#ifdef CONFIG_PROC_FS
+       .show = crypto_lskcipher_show,
+#endif
+#if IS_ENABLED(CONFIG_CRYPTO_USER)
+       .report = crypto_lskcipher_report,
+#endif
+#ifdef CONFIG_CRYPTO_STATS
+       .report_stat = crypto_lskcipher_report_stat,
+#endif
+       .maskclear = ~CRYPTO_ALG_TYPE_MASK,
+       .maskset = CRYPTO_ALG_TYPE_MASK,
+       .type = CRYPTO_ALG_TYPE_LSKCIPHER,
+       .tfmsize = offsetof(struct crypto_lskcipher, base),
+};
+
+static void crypto_lskcipher_exit_tfm_sg(struct crypto_tfm *tfm)
+{
+       struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+
+       crypto_free_lskcipher(*ctx);
+}
+
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm)
+{
+       struct crypto_lskcipher **ctx = crypto_tfm_ctx(tfm);
+       struct crypto_alg *calg = tfm->__crt_alg;
+       struct crypto_lskcipher *skcipher;
+
+       if (!crypto_mod_get(calg))
+               return -EAGAIN;
+
+       skcipher = crypto_create_tfm(calg, &crypto_lskcipher_type);
+       if (IS_ERR(skcipher)) {
+               crypto_mod_put(calg);
+               return PTR_ERR(skcipher);
+       }
+
+       *ctx = skcipher;
+       tfm->exit = crypto_lskcipher_exit_tfm_sg;
+
+       return 0;
+}
+
+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+                         struct crypto_instance *inst,
+                         const char *name, u32 type, u32 mask)
+{
+       spawn->base.frontend = &crypto_lskcipher_type;
+       return crypto_grab_spawn(&spawn->base, inst, name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_grab_lskcipher);
+
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+                                               u32 type, u32 mask)
+{
+       return crypto_alloc_tfm(alg_name, &crypto_lskcipher_type, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_alloc_lskcipher);
+
+static int lskcipher_prepare_alg(struct lskcipher_alg *alg)
+{
+       struct crypto_alg *base = &alg->co.base;
+       int err;
+
+       err = skcipher_prepare_alg_common(&alg->co);
+       if (err)
+               return err;
+
+       if (alg->co.chunksize & (alg->co.chunksize - 1))
+               return -EINVAL;
+
+       base->cra_type = &crypto_lskcipher_type;
+       base->cra_flags |= CRYPTO_ALG_TYPE_LSKCIPHER;
+
+       return 0;
+}
+
+int crypto_register_lskcipher(struct lskcipher_alg *alg)
+{
+       struct crypto_alg *base = &alg->co.base;
+       int err;
+
+       err = lskcipher_prepare_alg(alg);
+       if (err)
+               return err;
+
+       return crypto_register_alg(base);
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskcipher);
+
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg)
+{
+       crypto_unregister_alg(&alg->co.base);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskcipher);
+
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count)
+{
+       int i, ret;
+
+       for (i = 0; i < count; i++) {
+               ret = crypto_register_lskcipher(&algs[i]);
+               if (ret)
+                       goto err;
+       }
+
+       return 0;
+
+err:
+       for (--i; i >= 0; --i)
+               crypto_unregister_lskcipher(&algs[i]);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_register_lskciphers);
+
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count)
+{
+       int i;
+
+       for (i = count - 1; i >= 0; --i)
+               crypto_unregister_lskcipher(&algs[i]);
+}
+EXPORT_SYMBOL_GPL(crypto_unregister_lskciphers);
+
+int lskcipher_register_instance(struct crypto_template *tmpl,
+                               struct lskcipher_instance *inst)
+{
+       int err;
+
+       if (WARN_ON(!inst->free))
+               return -EINVAL;
+
+       err = lskcipher_prepare_alg(&inst->alg);
+       if (err)
+               return err;
+
+       return crypto_register_instance(tmpl, lskcipher_crypto_instance(inst));
+}
+EXPORT_SYMBOL_GPL(lskcipher_register_instance);
+
+static int lskcipher_setkey_simple(struct crypto_lskcipher *tfm, const u8 *key,
+                                  unsigned int keylen)
+{
+       struct crypto_lskcipher *cipher = lskcipher_cipher_simple(tfm);
+
+       crypto_lskcipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+       crypto_lskcipher_set_flags(cipher, crypto_lskcipher_get_flags(tfm) &
+                                  CRYPTO_TFM_REQ_MASK);
+       return crypto_lskcipher_setkey(cipher, key, keylen);
+}
+
+static int lskcipher_init_tfm_simple(struct crypto_lskcipher *tfm)
+{
+       struct lskcipher_instance *inst = lskcipher_alg_instance(tfm);
+       struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+       struct crypto_lskcipher_spawn *spawn;
+       struct crypto_lskcipher *cipher;
+
+       spawn = lskcipher_instance_ctx(inst);
+       cipher = crypto_spawn_lskcipher(spawn);
+       if (IS_ERR(cipher))
+               return PTR_ERR(cipher);
+
+       *ctx = cipher;
+       return 0;
+}
+
+static void lskcipher_exit_tfm_simple(struct crypto_lskcipher *tfm)
+{
+       struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+       crypto_free_lskcipher(*ctx);
+}
+
+static void lskcipher_free_instance_simple(struct lskcipher_instance *inst)
+{
+       crypto_drop_lskcipher(lskcipher_instance_ctx(inst));
+       kfree(inst);
+}
+
+/**
+ * lskcipher_alloc_instance_simple - allocate instance of simple block cipher
+ *
+ * Allocate an lskcipher_instance for a simple block cipher mode of operation,
+ * e.g. cbc or ecb.  The instance context will have just a single crypto_spawn,
+ * that for the underlying cipher.  The {min,max}_keysize, ivsize, blocksize,
+ * alignmask, and priority are set from the underlying cipher but can be
+ * overridden if needed.  The tfm context defaults to
+ * struct crypto_lskcipher *, and default ->setkey(), ->init(), and
+ * ->exit() methods are installed.
+ *
+ * @tmpl: the template being instantiated
+ * @tb: the template parameters
+ *
+ * Return: a pointer to the new instance, or an ERR_PTR().  The caller still
+ *        needs to register the instance.
+ */
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+       struct crypto_template *tmpl, struct rtattr **tb)
+{
+       u32 mask;
+       struct lskcipher_instance *inst;
+       struct crypto_lskcipher_spawn *spawn;
+       char ecb_name[CRYPTO_MAX_ALG_NAME];
+       struct lskcipher_alg *cipher_alg;
+       const char *cipher_name;
+       int err;
+
+       err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_LSKCIPHER, &mask);
+       if (err)
+               return ERR_PTR(err);
+
+       cipher_name = crypto_attr_alg_name(tb[1]);
+       if (IS_ERR(cipher_name))
+               return ERR_CAST(cipher_name);
+
+       inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+       if (!inst)
+               return ERR_PTR(-ENOMEM);
+
+       spawn = lskcipher_instance_ctx(inst);
+       err = crypto_grab_lskcipher(spawn,
+                                   lskcipher_crypto_instance(inst),
+                                   cipher_name, 0, mask);
+
+       ecb_name[0] = 0;
+       if (err == -ENOENT && memcmp(tmpl->name, "ecb", 4)) {
+               err = -ENAMETOOLONG;
+               if (snprintf(ecb_name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+                            cipher_name) >= CRYPTO_MAX_ALG_NAME)
+                       goto err_free_inst;
+
+               err = crypto_grab_lskcipher(spawn,
+                                           lskcipher_crypto_instance(inst),
+                                           ecb_name, 0, mask);
+       }
+
+       if (err)
+               goto err_free_inst;
+
+       cipher_alg = crypto_lskcipher_spawn_alg(spawn);
+
+       err = crypto_inst_setname(lskcipher_crypto_instance(inst), tmpl->name,
+                                 &cipher_alg->co.base);
+       if (err)
+               goto err_free_inst;
+
+       if (ecb_name[0]) {
+               int len;
+
+               err = -EINVAL;
+               len = strscpy(ecb_name, &cipher_alg->co.base.cra_name[4],
+                             sizeof(ecb_name));
+               if (len < 2)
+                       goto err_free_inst;
+
+               if (ecb_name[len - 1] != ')')
+                       goto err_free_inst;
+
+               ecb_name[len - 1] = 0;
+
+               err = -ENAMETOOLONG;
+               if (snprintf(inst->alg.co.base.cra_name, CRYPTO_MAX_ALG_NAME,
+                            "%s(%s)", tmpl->name, ecb_name) >=
+                   CRYPTO_MAX_ALG_NAME)
+                       goto err_free_inst;
+
+               if (strcmp(ecb_name, cipher_name) &&
+                   snprintf(inst->alg.co.base.cra_driver_name,
+                            CRYPTO_MAX_ALG_NAME,
+                            "%s(%s)", tmpl->name, cipher_name) >=
+                   CRYPTO_MAX_ALG_NAME)
+                       goto err_free_inst;
+       } else {
+               /* Don't allow nesting. */
+               err = -ELOOP;
+               if (cipher_alg->co.base.cra_flags & CRYPTO_ALG_INSTANCE)
+                       goto err_free_inst;
+       }
+
+       err = -EINVAL;
+       if (cipher_alg->co.ivsize)
+               goto err_free_inst;
+
+       inst->free = lskcipher_free_instance_simple;
+
+       /* Default algorithm properties, can be overridden */
+       inst->alg.co.base.cra_blocksize = cipher_alg->co.base.cra_blocksize;
+       inst->alg.co.base.cra_alignmask = cipher_alg->co.base.cra_alignmask;
+       inst->alg.co.base.cra_priority = cipher_alg->co.base.cra_priority;
+       inst->alg.co.min_keysize = cipher_alg->co.min_keysize;
+       inst->alg.co.max_keysize = cipher_alg->co.max_keysize;
+       inst->alg.co.ivsize = cipher_alg->co.base.cra_blocksize;
+
+       /* Use struct crypto_lskcipher * by default, can be overridden */
+       inst->alg.co.base.cra_ctxsize = sizeof(struct crypto_lskcipher *);
+       inst->alg.setkey = lskcipher_setkey_simple;
+       inst->alg.init = lskcipher_init_tfm_simple;
+       inst->alg.exit = lskcipher_exit_tfm_simple;
+
+       return inst;
+
+err_free_inst:
+       lskcipher_free_instance_simple(inst);
+       return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(lskcipher_alloc_instance_simple);
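
For reference, a mode template's ->create() would typically pair this helper
with lskcipher_register_instance() above.  A minimal sketch, assuming a
hypothetical "mymode" template whose mymode_encrypt() and mymode_decrypt()
handlers follow the lskcipher ->encrypt/->decrypt prototype (both names are
illustrative, not part of this patch):

static int mymode_create(struct crypto_template *tmpl, struct rtattr **tb)
{
        struct lskcipher_instance *inst;
        int err;

        inst = lskcipher_alloc_instance_simple(tmpl, tb);
        if (IS_ERR(inst))
                return PTR_ERR(inst);

        /* Install this mode's handlers; the default ->setkey/->init/->exit
         * installed by the helper are kept as-is. */
        inst->alg.encrypt = mymode_encrypt;
        inst->alg.decrypt = mymode_decrypt;

        err = lskcipher_register_instance(tmpl, inst);
        if (err)
                inst->free(inst);

        return err;
}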
diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
index 8c1d0ca412137f563ca6fdd1da2d35fbb80b15c8..d0d954fe9d54f321f7f483c750836f6aa672779b 100644
@@ -117,6 +117,8 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
        err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
        if (!err)
                return -EINPROGRESS;
+       if (err == -EBUSY)
+               return -EAGAIN;
 
        return err;
 }
@@ -164,6 +166,8 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
        err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
        if (!err)
                return -EINPROGRESS;
+       if (err == -EBUSY)
+               return -EAGAIN;
 
        return err;
 }
diff --git a/crypto/rsa-pkcs1pad.c b/crypto/rsa-pkcs1pad.c
index d2e5e104f8cfe386d0380f5dc469c373bca425ec..cd501195f34a1a4643874b8b8f63cdbeabac1344 100644
@@ -61,6 +61,24 @@ static const u8 rsa_digest_info_sha512[] = {
        0x05, 0x00, 0x04, 0x40
 };
 
+static const u8 rsa_digest_info_sha3_256[] = {
+       0x30, 0x31, 0x30, 0x0d, 0x06, 0x09,
+       0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x08,
+       0x05, 0x00, 0x04, 0x20
+};
+
+static const u8 rsa_digest_info_sha3_384[] = {
+       0x30, 0x41, 0x30, 0x0d, 0x06, 0x09,
+       0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x09,
+       0x05, 0x00, 0x04, 0x30
+};
+
+static const u8 rsa_digest_info_sha3_512[] = {
+       0x30, 0x51, 0x30, 0x0d, 0x06, 0x09,
+       0x60, 0x86, 0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x0A,
+       0x05, 0x00, 0x04, 0x40
+};
+
 static const struct rsa_asn1_template {
        const char      *name;
        const u8        *data;
@@ -74,8 +92,13 @@ static const struct rsa_asn1_template {
        _(sha384),
        _(sha512),
        _(sha224),
-       { NULL }
 #undef _
+#define _(X) { "sha3-" #X, rsa_digest_info_sha3_##X, sizeof(rsa_digest_info_sha3_##X) }
+       _(256),
+       _(384),
+       _(512),
+#undef _
+       { NULL }
 };
 
 static const struct rsa_asn1_template *rsa_lookup_asn1(const char *name)
@@ -687,3 +710,5 @@ struct crypto_template rsa_pkcs1pad_tmpl = {
        .create = pkcs1pad_create,
        .module = THIS_MODULE,
 };
+
+MODULE_ALIAS_CRYPTO("pkcs1pad");
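
The rsa_digest_info_* arrays added above are the DER-encoded DigestInfo
prefixes that EMSA-PKCS1-v1_5 (RFC 8017, section 9.2) places in front of the
raw hash value.  A hedged sketch of that layout over a plain buffer; the real
template performs the equivalent steps on the request's scatterlists, so this
helper is illustrative only:

static void emsa_pkcs1_v1_5_encode(u8 *em, size_t em_len,
                                   const u8 *digest_info, size_t di_len,
                                   const u8 *digest, size_t d_len)
{
        size_t ps_len = em_len - di_len - d_len - 3;

        em[0] = 0x00;                   /* leading zero octet */
        em[1] = 0x01;                   /* block type 1: private-key operation */
        memset(&em[2], 0xff, ps_len);   /* PS: at least eight 0xff octets */
        em[2 + ps_len] = 0x00;          /* separator */
        memcpy(&em[3 + ps_len], digest_info, di_len);    /* DER prefix */
        memcpy(&em[3 + ps_len + di_len], digest, d_len); /* raw hash */
}

For pkcs1pad(rsa,sha3-256), for example, digest_info would point at
rsa_digest_info_sha3_256 with d_len equal to 32.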
diff --git a/crypto/rsaprivkey.asn1 b/crypto/rsaprivkey.asn1
index 4ce06758e8af758de779b276663e5c7d6118e470..76865124a9c716efd87a8d933d238de2476c60b0 100644
@@ -1,3 +1,10 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2016 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc8017#appendix-A.1.2
+
 RsaPrivKey ::= SEQUENCE {
        version         INTEGER,
        n               INTEGER ({ rsa_get_n }),
diff --git a/crypto/rsapubkey.asn1 b/crypto/rsapubkey.asn1
index 725498e461d25fa46706b62810eaea88a2d645c0..0d32b1ca6270f7a76977dd5f0d0e88a9fc03de2f 100644
@@ -1,3 +1,10 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 2016 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc8017#appendix-A.1.1
+
 RsaPubKey ::= SEQUENCE {
        n INTEGER ({ rsa_get_n }),
        e INTEGER ({ rsa_get_e })
diff --git a/crypto/shash.c b/crypto/shash.c
index 1fadb6b59bdcc1afdd61b280813249327b4025d5..d5194221c88cb95bb53aa766c75504ad185b6f3c 100644
 #include <linux/err.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/slab.h>
 #include <linux/seq_file.h>
 #include <linux/string.h>
 #include <net/netlink.h>
 
 #include "hash.h"
 
-#define MAX_SHASH_ALIGNMASK 63
-
-static const struct crypto_type crypto_shash_type;
-
 static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
 {
        return hash_get_stat(&alg->halg);
@@ -28,7 +23,13 @@ static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
 
 static inline int crypto_shash_errstat(struct shash_alg *alg, int err)
 {
-       return crypto_hash_errstat(&alg->halg, err);
+       if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+               return err;
+
+       if (err && err != -EINPROGRESS && err != -EBUSY)
+               atomic64_inc(&shash_get_stat(alg)->err_cnt);
+
+       return err;
 }
 
 int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
@@ -38,27 +39,6 @@ int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(shash_no_setkey);
 
-static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key,
-                                 unsigned int keylen)
-{
-       struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
-       unsigned long absize;
-       u8 *buffer, *alignbuffer;
-       int err;
-
-       absize = keylen + (alignmask & ~(crypto_tfm_ctx_alignment() - 1));
-       buffer = kmalloc(absize, GFP_ATOMIC);
-       if (!buffer)
-               return -ENOMEM;
-
-       alignbuffer = (u8 *)ALIGN((unsigned long)buffer, alignmask + 1);
-       memcpy(alignbuffer, key, keylen);
-       err = shash->setkey(tfm, alignbuffer, keylen);
-       kfree_sensitive(buffer);
-       return err;
-}
-
 static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg)
 {
        if (crypto_shash_alg_needs_key(alg))
@@ -69,14 +49,9 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
                        unsigned int keylen)
 {
        struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
        int err;
 
-       if ((unsigned long)key & alignmask)
-               err = shash_setkey_unaligned(tfm, key, keylen);
-       else
-               err = shash->setkey(tfm, key, keylen);
-
+       err = shash->setkey(tfm, key, keylen);
        if (unlikely(err)) {
                shash_set_needkey(tfm, shash);
                return err;
@@ -87,108 +62,42 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
 }
 EXPORT_SYMBOL_GPL(crypto_shash_setkey);
 
-static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
-                                 unsigned int len)
-{
-       struct crypto_shash *tfm = desc->tfm;
-       struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
-       unsigned int unaligned_len = alignmask + 1 -
-                                    ((unsigned long)data & alignmask);
-       /*
-        * We cannot count on __aligned() working for large values:
-        * https://patchwork.kernel.org/patch/9507697/
-        */
-       u8 ubuf[MAX_SHASH_ALIGNMASK * 2];
-       u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
-       int err;
-
-       if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
-               return -EINVAL;
-
-       if (unaligned_len > len)
-               unaligned_len = len;
-
-       memcpy(buf, data, unaligned_len);
-       err = shash->update(desc, buf, unaligned_len);
-       memset(buf, 0, unaligned_len);
-
-       return err ?:
-              shash->update(desc, data + unaligned_len, len - unaligned_len);
-}
-
 int crypto_shash_update(struct shash_desc *desc, const u8 *data,
                        unsigned int len)
 {
-       struct crypto_shash *tfm = desc->tfm;
-       struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
+       struct shash_alg *shash = crypto_shash_alg(desc->tfm);
        int err;
 
        if (IS_ENABLED(CONFIG_CRYPTO_STATS))
                atomic64_add(len, &shash_get_stat(shash)->hash_tlen);
 
-       if ((unsigned long)data & alignmask)
-               err = shash_update_unaligned(desc, data, len);
-       else
-               err = shash->update(desc, data, len);
+       err = shash->update(desc, data, len);
 
        return crypto_shash_errstat(shash, err);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_update);
 
-static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
-{
-       struct crypto_shash *tfm = desc->tfm;
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
-       struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned int ds = crypto_shash_digestsize(tfm);
-       /*
-        * We cannot count on __aligned() working for large values:
-        * https://patchwork.kernel.org/patch/9507697/
-        */
-       u8 ubuf[MAX_SHASH_ALIGNMASK + HASH_MAX_DIGESTSIZE];
-       u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
-       int err;
-
-       if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
-               return -EINVAL;
-
-       err = shash->final(desc, buf);
-       if (err)
-               goto out;
-
-       memcpy(out, buf, ds);
-
-out:
-       memset(buf, 0, ds);
-       return err;
-}
-
 int crypto_shash_final(struct shash_desc *desc, u8 *out)
 {
-       struct crypto_shash *tfm = desc->tfm;
-       struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
+       struct shash_alg *shash = crypto_shash_alg(desc->tfm);
        int err;
 
        if (IS_ENABLED(CONFIG_CRYPTO_STATS))
                atomic64_inc(&shash_get_stat(shash)->hash_cnt);
 
-       if ((unsigned long)out & alignmask)
-               err = shash_final_unaligned(desc, out);
-       else
-               err = shash->final(desc, out);
+       err = shash->final(desc, out);
 
        return crypto_shash_errstat(shash, err);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_final);
 
-static int shash_finup_unaligned(struct shash_desc *desc, const u8 *data,
-                                unsigned int len, u8 *out)
+static int shash_default_finup(struct shash_desc *desc, const u8 *data,
+                              unsigned int len, u8 *out)
 {
-       return shash_update_unaligned(desc, data, len) ?:
-              shash_final_unaligned(desc, out);
+       struct shash_alg *shash = crypto_shash_alg(desc->tfm);
+
+       return shash->update(desc, data, len) ?:
+              shash->final(desc, out);
 }
 
 int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
@@ -196,7 +105,6 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
 {
        struct crypto_shash *tfm = desc->tfm;
        struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
        int err;
 
        if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
@@ -206,22 +114,19 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data,
                atomic64_add(len, &istat->hash_tlen);
        }
 
-       if (((unsigned long)data | (unsigned long)out) & alignmask)
-               err = shash_finup_unaligned(desc, data, len, out);
-       else
-               err = shash->finup(desc, data, len, out);
-
+       err = shash->finup(desc, data, len, out);
 
        return crypto_shash_errstat(shash, err);
 }
 EXPORT_SYMBOL_GPL(crypto_shash_finup);
 
-static int shash_digest_unaligned(struct shash_desc *desc, const u8 *data,
-                                 unsigned int len, u8 *out)
+static int shash_default_digest(struct shash_desc *desc, const u8 *data,
+                               unsigned int len, u8 *out)
 {
-       return crypto_shash_init(desc) ?:
-              shash_update_unaligned(desc, data, len) ?:
-              shash_final_unaligned(desc, out);
+       struct shash_alg *shash = crypto_shash_alg(desc->tfm);
+
+       return shash->init(desc) ?:
+              shash->finup(desc, data, len, out);
 }
 
 int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
@@ -229,7 +134,6 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
 {
        struct crypto_shash *tfm = desc->tfm;
        struct shash_alg *shash = crypto_shash_alg(tfm);
-       unsigned long alignmask = crypto_shash_alignmask(tfm);
        int err;
 
        if (IS_ENABLED(CONFIG_CRYPTO_STATS)) {
@@ -241,8 +145,6 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
 
        if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                err = -ENOKEY;
-       else if (((unsigned long)data | (unsigned long)out) & alignmask)
-               err = shash_digest_unaligned(desc, data, len, out);
        else
                err = shash->digest(desc, data, len, out);
 
@@ -266,202 +168,34 @@ int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
 }
 EXPORT_SYMBOL_GPL(crypto_shash_tfm_digest);
 
-static int shash_default_export(struct shash_desc *desc, void *out)
-{
-       memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(desc->tfm));
-       return 0;
-}
-
-static int shash_default_import(struct shash_desc *desc, const void *in)
-{
-       memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(desc->tfm));
-       return 0;
-}
-
-static int shash_async_setkey(struct crypto_ahash *tfm, const u8 *key,
-                             unsigned int keylen)
-{
-       struct crypto_shash **ctx = crypto_ahash_ctx(tfm);
-
-       return crypto_shash_setkey(*ctx, key, keylen);
-}
-
-static int shash_async_init(struct ahash_request *req)
-{
-       struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
-       struct shash_desc *desc = ahash_request_ctx(req);
-
-       desc->tfm = *ctx;
-
-       return crypto_shash_init(desc);
-}
-
-int shash_ahash_update(struct ahash_request *req, struct shash_desc *desc)
-{
-       struct crypto_hash_walk walk;
-       int nbytes;
-
-       for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
-            nbytes = crypto_hash_walk_done(&walk, nbytes))
-               nbytes = crypto_shash_update(desc, walk.data, nbytes);
-
-       return nbytes;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_update);
-
-static int shash_async_update(struct ahash_request *req)
-{
-       return shash_ahash_update(req, ahash_request_ctx(req));
-}
-
-static int shash_async_final(struct ahash_request *req)
-{
-       return crypto_shash_final(ahash_request_ctx(req), req->result);
-}
-
-int shash_ahash_finup(struct ahash_request *req, struct shash_desc *desc)
-{
-       struct crypto_hash_walk walk;
-       int nbytes;
-
-       nbytes = crypto_hash_walk_first(req, &walk);
-       if (!nbytes)
-               return crypto_shash_final(desc, req->result);
-
-       do {
-               nbytes = crypto_hash_walk_last(&walk) ?
-                        crypto_shash_finup(desc, walk.data, nbytes,
-                                           req->result) :
-                        crypto_shash_update(desc, walk.data, nbytes);
-               nbytes = crypto_hash_walk_done(&walk, nbytes);
-       } while (nbytes > 0);
-
-       return nbytes;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_finup);
-
-static int shash_async_finup(struct ahash_request *req)
-{
-       struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
-       struct shash_desc *desc = ahash_request_ctx(req);
-
-       desc->tfm = *ctx;
-
-       return shash_ahash_finup(req, desc);
-}
-
-int shash_ahash_digest(struct ahash_request *req, struct shash_desc *desc)
-{
-       unsigned int nbytes = req->nbytes;
-       struct scatterlist *sg;
-       unsigned int offset;
-       int err;
-
-       if (nbytes &&
-           (sg = req->src, offset = sg->offset,
-            nbytes <= min(sg->length, ((unsigned int)(PAGE_SIZE)) - offset))) {
-               void *data;
-
-               data = kmap_local_page(sg_page(sg));
-               err = crypto_shash_digest(desc, data + offset, nbytes,
-                                         req->result);
-               kunmap_local(data);
-       } else
-               err = crypto_shash_init(desc) ?:
-                     shash_ahash_finup(req, desc);
-
-       return err;
-}
-EXPORT_SYMBOL_GPL(shash_ahash_digest);
-
-static int shash_async_digest(struct ahash_request *req)
-{
-       struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
-       struct shash_desc *desc = ahash_request_ctx(req);
-
-       desc->tfm = *ctx;
-
-       return shash_ahash_digest(req, desc);
-}
-
-static int shash_async_export(struct ahash_request *req, void *out)
-{
-       return crypto_shash_export(ahash_request_ctx(req), out);
-}
-
-static int shash_async_import(struct ahash_request *req, const void *in)
-{
-       struct crypto_shash **ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
-       struct shash_desc *desc = ahash_request_ctx(req);
-
-       desc->tfm = *ctx;
-
-       return crypto_shash_import(desc, in);
-}
-
-static void crypto_exit_shash_ops_async(struct crypto_tfm *tfm)
+int crypto_shash_export(struct shash_desc *desc, void *out)
 {
-       struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
-
-       crypto_free_shash(*ctx);
-}
-
-int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
-{
-       struct crypto_alg *calg = tfm->__crt_alg;
-       struct shash_alg *alg = __crypto_shash_alg(calg);
-       struct crypto_ahash *crt = __crypto_ahash_cast(tfm);
-       struct crypto_shash **ctx = crypto_tfm_ctx(tfm);
-       struct crypto_shash *shash;
-
-       if (!crypto_mod_get(calg))
-               return -EAGAIN;
-
-       shash = crypto_create_tfm(calg, &crypto_shash_type);
-       if (IS_ERR(shash)) {
-               crypto_mod_put(calg);
-               return PTR_ERR(shash);
-       }
-
-       *ctx = shash;
-       tfm->exit = crypto_exit_shash_ops_async;
-
-       crt->init = shash_async_init;
-       crt->update = shash_async_update;
-       crt->final = shash_async_final;
-       crt->finup = shash_async_finup;
-       crt->digest = shash_async_digest;
-       if (crypto_shash_alg_has_setkey(alg))
-               crt->setkey = shash_async_setkey;
-
-       crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
-                                   CRYPTO_TFM_NEED_KEY);
-
-       crt->export = shash_async_export;
-       crt->import = shash_async_import;
+       struct crypto_shash *tfm = desc->tfm;
+       struct shash_alg *shash = crypto_shash_alg(tfm);
 
-       crt->reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash);
+       if (shash->export)
+               return shash->export(desc, out);
 
+       memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
        return 0;
 }
+EXPORT_SYMBOL_GPL(crypto_shash_export);
 
-struct crypto_ahash *crypto_clone_shash_ops_async(struct crypto_ahash *nhash,
-                                                 struct crypto_ahash *hash)
+int crypto_shash_import(struct shash_desc *desc, const void *in)
 {
-       struct crypto_shash **nctx = crypto_ahash_ctx(nhash);
-       struct crypto_shash **ctx = crypto_ahash_ctx(hash);
-       struct crypto_shash *shash;
+       struct crypto_shash *tfm = desc->tfm;
+       struct shash_alg *shash = crypto_shash_alg(tfm);
 
-       shash = crypto_clone_shash(*ctx);
-       if (IS_ERR(shash)) {
-               crypto_free_ahash(nhash);
-               return ERR_CAST(shash);
-       }
+       if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+               return -ENOKEY;
 
-       *nctx = shash;
+       if (shash->import)
+               return shash->import(desc, in);
 
-       return nhash;
+       memcpy(shash_desc_ctx(desc), in, crypto_shash_descsize(tfm));
+       return 0;
 }
+EXPORT_SYMBOL_GPL(crypto_shash_import);
 
 static void crypto_shash_exit_tfm(struct crypto_tfm *tfm)
 {
@@ -541,7 +275,7 @@ static int __maybe_unused crypto_shash_report_stat(
        return crypto_hash_report_stat(skb, alg, "shash");
 }
 
-static const struct crypto_type crypto_shash_type = {
+const struct crypto_type crypto_shash_type = {
        .extsize = crypto_alg_extsize,
        .init_tfm = crypto_shash_init_tfm,
        .free = crypto_shash_free_instance,
@@ -626,6 +360,10 @@ int hash_prepare_alg(struct hash_alg_common *alg)
        if (alg->digestsize > HASH_MAX_DIGESTSIZE)
                return -EINVAL;
 
+       /* alignmask is not useful for hashes, so it is not supported. */
+       if (base->cra_alignmask)
+               return -EINVAL;
+
        base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
 
        if (IS_ENABLED(CONFIG_CRYPTO_STATS))
@@ -642,9 +380,6 @@ static int shash_prepare_alg(struct shash_alg *alg)
        if (alg->descsize > HASH_MAX_DESCSIZE)
                return -EINVAL;
 
-       if (base->cra_alignmask > MAX_SHASH_ALIGNMASK)
-               return -EINVAL;
-
        if ((alg->export && !alg->import) || (alg->import && !alg->export))
                return -EINVAL;
 
@@ -655,15 +390,23 @@ static int shash_prepare_alg(struct shash_alg *alg)
        base->cra_type = &crypto_shash_type;
        base->cra_flags |= CRYPTO_ALG_TYPE_SHASH;
 
+       /*
+        * Handle missing optional functions.  For each one we can either
+        * install a default here, or we can leave the pointer as NULL and check
+        * the pointer for NULL in crypto_shash_*(), avoiding an indirect call
+        * when the default behavior is desired.  For ->finup and ->digest we
+        * install defaults, since for optimal performance algorithms should
+        * implement these anyway.  On the other hand, for ->import and
+        * ->export the common case and best performance comes from the simple
+        * memcpy of the shash_desc_ctx, so when those pointers are NULL we
+        * leave them NULL and provide the memcpy with no indirect call.
+        */
        if (!alg->finup)
-               alg->finup = shash_finup_unaligned;
+               alg->finup = shash_default_finup;
        if (!alg->digest)
-               alg->digest = shash_digest_unaligned;
-       if (!alg->export) {
-               alg->export = shash_default_export;
-               alg->import = shash_default_import;
+               alg->digest = shash_default_digest;
+       if (!alg->export)
                alg->halg.statesize = alg->descsize;
-       }
        if (!alg->setkey)
                alg->setkey = shash_no_setkey;
 
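
To illustrate the new convention that a NULL ->export/->import falls back to a
plain memcpy of the shash_desc context, a minimal sketch of saving and
resuming a partial hash state (the function name and the 512-byte state buffer
are assumptions, checked against crypto_shash_statesize()):

static int hash_in_two_steps(struct crypto_shash *tfm,
                             const u8 *p1, unsigned int l1,
                             const u8 *p2, unsigned int l2, u8 *out)
{
        SHASH_DESC_ON_STACK(desc, tfm);
        u8 state[512];  /* assumed to cover this tfm's statesize */
        int err;

        if (crypto_shash_statesize(tfm) > sizeof(state))
                return -EINVAL;

        desc->tfm = tfm;

        err = crypto_shash_init(desc) ?:
              crypto_shash_update(desc, p1, l1) ?:
              crypto_shash_export(desc, state);  /* memcpy if ->export is NULL */
        if (err)
                return err;

        /* ...possibly much later, or from a different descriptor... */
        err = crypto_shash_import(desc, state);  /* memcpy if ->import is NULL */
        if (err)
                return err;

        return crypto_shash_finup(desc, p2, l2, out);
}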
diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 7b275716cf4e3a2c6e39fa3ef87ba22e98c7b1be..ac8b8c04265429b2664dbe164296993560634ec0 100644
@@ -24,8 +24,9 @@
 #include <linux/slab.h>
 #include <linux/string.h>
 #include <net/netlink.h>
+#include "skcipher.h"
 
-#include "internal.h"
+#define CRYPTO_ALG_TYPE_SKCIPHER_MASK  0x0000000e
 
 enum {
        SKCIPHER_WALK_PHYS = 1 << 0,
@@ -43,6 +44,8 @@ struct skcipher_walk_buffer {
        u8 buffer[];
 };
 
+static const struct crypto_type crypto_skcipher_type;
+
 static int skcipher_walk_next(struct skcipher_walk *walk);
 
 static inline void skcipher_map_src(struct skcipher_walk *walk)
@@ -89,11 +92,7 @@ static inline struct skcipher_alg *__crypto_skcipher_alg(
 static inline struct crypto_istat_cipher *skcipher_get_stat(
        struct skcipher_alg *alg)
 {
-#ifdef CONFIG_CRYPTO_STATS
-       return &alg->stat;
-#else
-       return NULL;
-#endif
+       return skcipher_get_stat_common(&alg->co);
 }
 
 static inline int crypto_skcipher_errstat(struct skcipher_alg *alg, int err)
@@ -468,6 +467,7 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
                                  struct skcipher_request *req)
 {
        struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+       struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
 
        walk->total = req->cryptlen;
        walk->nbytes = 0;
@@ -485,10 +485,14 @@ static int skcipher_walk_skcipher(struct skcipher_walk *walk,
                       SKCIPHER_WALK_SLEEP : 0;
 
        walk->blocksize = crypto_skcipher_blocksize(tfm);
-       walk->stride = crypto_skcipher_walksize(tfm);
        walk->ivsize = crypto_skcipher_ivsize(tfm);
        walk->alignmask = crypto_skcipher_alignmask(tfm);
 
+       if (alg->co.base.cra_type != &crypto_skcipher_type)
+               walk->stride = alg->co.chunksize;
+       else
+               walk->stride = alg->walksize;
+
        return skcipher_walk_first(walk);
 }
 
@@ -616,6 +620,17 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
        unsigned long alignmask = crypto_skcipher_alignmask(tfm);
        int err;
 
+       if (cipher->co.base.cra_type != &crypto_skcipher_type) {
+               struct crypto_lskcipher **ctx = crypto_skcipher_ctx(tfm);
+
+               crypto_lskcipher_clear_flags(*ctx, CRYPTO_TFM_REQ_MASK);
+               crypto_lskcipher_set_flags(*ctx,
+                                          crypto_skcipher_get_flags(tfm) &
+                                          CRYPTO_TFM_REQ_MASK);
+               err = crypto_lskcipher_setkey(*ctx, key, keylen);
+               goto out;
+       }
+
        if (keylen < cipher->min_keysize || keylen > cipher->max_keysize)
                return -EINVAL;
 
@@ -624,6 +639,7 @@ int crypto_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
        else
                err = cipher->setkey(tfm, key, keylen);
 
+out:
        if (unlikely(err)) {
                skcipher_set_needkey(tfm);
                return err;
@@ -649,6 +665,8 @@ int crypto_skcipher_encrypt(struct skcipher_request *req)
 
        if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                ret = -ENOKEY;
+       else if (alg->co.base.cra_type != &crypto_skcipher_type)
+               ret = crypto_lskcipher_encrypt_sg(req);
        else
                ret = alg->encrypt(req);
 
@@ -671,6 +689,8 @@ int crypto_skcipher_decrypt(struct skcipher_request *req)
 
        if (crypto_skcipher_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                ret = -ENOKEY;
+       else if (alg->co.base.cra_type != &crypto_skcipher_type)
+               ret = crypto_lskcipher_decrypt_sg(req);
        else
                ret = alg->decrypt(req);
 
@@ -693,6 +713,9 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
 
        skcipher_set_needkey(skcipher);
 
+       if (tfm->__crt_alg->cra_type != &crypto_skcipher_type)
+               return crypto_init_lskcipher_ops_sg(tfm);
+
        if (alg->exit)
                skcipher->base.exit = crypto_skcipher_exit_tfm;
 
@@ -702,6 +725,14 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
        return 0;
 }
 
+static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg)
+{
+       if (alg->cra_type != &crypto_skcipher_type)
+               return sizeof(struct crypto_lskcipher *);
+
+       return crypto_alg_extsize(alg);
+}
+
 static void crypto_skcipher_free_instance(struct crypto_instance *inst)
 {
        struct skcipher_instance *skcipher =
@@ -770,7 +801,7 @@ static int __maybe_unused crypto_skcipher_report_stat(
 }
 
 static const struct crypto_type crypto_skcipher_type = {
-       .extsize = crypto_alg_extsize,
+       .extsize = crypto_skcipher_extsize,
        .init_tfm = crypto_skcipher_init_tfm,
        .free = crypto_skcipher_free_instance,
 #ifdef CONFIG_PROC_FS
@@ -783,7 +814,7 @@ static const struct crypto_type crypto_skcipher_type = {
        .report_stat = crypto_skcipher_report_stat,
 #endif
        .maskclear = ~CRYPTO_ALG_TYPE_MASK,
-       .maskset = CRYPTO_ALG_TYPE_MASK,
+       .maskset = CRYPTO_ALG_TYPE_SKCIPHER_MASK,
        .type = CRYPTO_ALG_TYPE_SKCIPHER,
        .tfmsize = offsetof(struct crypto_skcipher, base),
 };
@@ -834,23 +865,18 @@ int crypto_has_skcipher(const char *alg_name, u32 type, u32 mask)
 }
 EXPORT_SYMBOL_GPL(crypto_has_skcipher);
 
-static int skcipher_prepare_alg(struct skcipher_alg *alg)
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg)
 {
-       struct crypto_istat_cipher *istat = skcipher_get_stat(alg);
+       struct crypto_istat_cipher *istat = skcipher_get_stat_common(alg);
        struct crypto_alg *base = &alg->base;
 
-       if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
-           alg->walksize > PAGE_SIZE / 8)
+       if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8)
                return -EINVAL;
 
        if (!alg->chunksize)
                alg->chunksize = base->cra_blocksize;
-       if (!alg->walksize)
-               alg->walksize = alg->chunksize;
 
-       base->cra_type = &crypto_skcipher_type;
        base->cra_flags &= ~CRYPTO_ALG_TYPE_MASK;
-       base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
 
        if (IS_ENABLED(CONFIG_CRYPTO_STATS))
                memset(istat, 0, sizeof(*istat));
@@ -858,6 +884,27 @@ static int skcipher_prepare_alg(struct skcipher_alg *alg)
        return 0;
 }
 
+static int skcipher_prepare_alg(struct skcipher_alg *alg)
+{
+       struct crypto_alg *base = &alg->base;
+       int err;
+
+       err = skcipher_prepare_alg_common(&alg->co);
+       if (err)
+               return err;
+
+       if (alg->walksize > PAGE_SIZE / 8)
+               return -EINVAL;
+
+       if (!alg->walksize)
+               alg->walksize = alg->chunksize;
+
+       base->cra_type = &crypto_skcipher_type;
+       base->cra_flags |= CRYPTO_ALG_TYPE_SKCIPHER;
+
+       return 0;
+}
+
 int crypto_register_skcipher(struct skcipher_alg *alg)
 {
        struct crypto_alg *base = &alg->base;
diff --git a/crypto/skcipher.h b/crypto/skcipher.h
new file mode 100644
index 0000000..16c9484
--- /dev/null
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Cryptographic API.
+ *
+ * Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
+ */
+#ifndef _LOCAL_CRYPTO_SKCIPHER_H
+#define _LOCAL_CRYPTO_SKCIPHER_H
+
+#include <crypto/internal/skcipher.h>
+#include "internal.h"
+
+static inline struct crypto_istat_cipher *skcipher_get_stat_common(
+       struct skcipher_alg_common *alg)
+{
+#ifdef CONFIG_CRYPTO_STATS
+       return &alg->stat;
+#else
+       return NULL;
+#endif
+}
+
+int crypto_lskcipher_encrypt_sg(struct skcipher_request *req);
+int crypto_lskcipher_decrypt_sg(struct skcipher_request *req);
+int crypto_init_lskcipher_ops_sg(struct crypto_tfm *tfm);
+int skcipher_prepare_alg_common(struct skcipher_alg_common *alg);
+
+#endif /* _LOCAL_CRYPTO_SKCIPHER_H */
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 216878c8bc3d62f8abd6e708acffffae7d09e5df..15c7a3011269b71c22c96a53e9c76622bb64fef8 100644
@@ -408,17 +408,15 @@ static const struct testvec_config default_hash_testvec_configs[] = {
                .finalization_type = FINALIZATION_TYPE_FINAL,
                .key_offset = 1,
        }, {
-               .name = "digest buffer aligned only to alignmask",
+               .name = "digest misaligned buffer",
                .src_divs = {
                        {
                                .proportion_of_total = 10000,
                                .offset = 1,
-                               .offset_relative_to_alignmask = true,
                        },
                },
                .finalization_type = FINALIZATION_TYPE_DIGEST,
                .key_offset = 1,
-               .key_offset_relative_to_alignmask = true,
        }, {
                .name = "init+update+update+final two even splits",
                .src_divs = {
@@ -1275,7 +1273,6 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
                              u8 *hashstate)
 {
        struct crypto_shash *tfm = desc->tfm;
-       const unsigned int alignmask = crypto_shash_alignmask(tfm);
        const unsigned int digestsize = crypto_shash_digestsize(tfm);
        const unsigned int statesize = crypto_shash_statesize(tfm);
        const char *driver = crypto_shash_driver_name(tfm);
@@ -1287,7 +1284,7 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
        /* Set the key, if specified */
        if (vec->ksize) {
                err = do_setkey(crypto_shash_setkey, tfm, vec->key, vec->ksize,
-                               cfg, alignmask);
+                               cfg, 0);
                if (err) {
                        if (err == vec->setkey_error)
                                return 0;
@@ -1304,7 +1301,7 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec,
        }
 
        /* Build the scatterlist for the source data */
-       err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs);
+       err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
        if (err) {
                pr_err("alg: shash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
                       driver, vec_name, cfg->name);
@@ -1459,7 +1456,6 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
                              u8 *hashstate)
 {
        struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-       const unsigned int alignmask = crypto_ahash_alignmask(tfm);
        const unsigned int digestsize = crypto_ahash_digestsize(tfm);
        const unsigned int statesize = crypto_ahash_statesize(tfm);
        const char *driver = crypto_ahash_driver_name(tfm);
@@ -1475,7 +1471,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
        /* Set the key, if specified */
        if (vec->ksize) {
                err = do_setkey(crypto_ahash_setkey, tfm, vec->key, vec->ksize,
-                               cfg, alignmask);
+                               cfg, 0);
                if (err) {
                        if (err == vec->setkey_error)
                                return 0;
@@ -1492,7 +1488,7 @@ static int test_ahash_vec_cfg(const struct hash_testvec *vec,
        }
 
        /* Build the scatterlist for the source data */
-       err = build_hash_sglist(tsgl, vec, cfg, alignmask, divs);
+       err = build_hash_sglist(tsgl, vec, cfg, 0, divs);
        if (err) {
                pr_err("alg: ahash: %s: error preparing scatterlist for test vector %s, cfg=\"%s\"\n",
                       driver, vec_name, cfg->name);
@@ -4963,7 +4959,7 @@ static const struct alg_test_desc alg_test_descs[] = {
                }
        }, {
                .alg = "ecb(arc4)",
-               .generic_driver = "ecb(arc4)-generic",
+               .generic_driver = "arc4-generic",
                .test = alg_test_skcipher,
                .suite = {
                        .cipher = __VECS(arc4_tv_template)
@@ -5460,6 +5456,18 @@ static const struct alg_test_desc alg_test_descs[] = {
                .suite = {
                        .akcipher = __VECS(pkcs1pad_rsa_tv_template)
                }
+       }, {
+               .alg = "pkcs1pad(rsa,sha3-256)",
+               .test = alg_test_null,
+               .fips_allowed = 1,
+       }, {
+               .alg = "pkcs1pad(rsa,sha3-384)",
+               .test = alg_test_null,
+               .fips_allowed = 1,
+       }, {
+               .alg = "pkcs1pad(rsa,sha3-512)",
+               .test = alg_test_null,
+               .fips_allowed = 1,
        }, {
                .alg = "pkcs1pad(rsa,sha384)",
                .test = alg_test_null,
@@ -5772,16 +5780,6 @@ static const struct alg_test_desc alg_test_descs[] = {
                .suite = {
                        .hash = __VECS(xxhash64_tv_template)
                }
-       }, {
-               .alg = "zlib-deflate",
-               .test = alg_test_comp,
-               .fips_allowed = 1,
-               .suite = {
-                       .comp = {
-                               .comp = __VECS(zlib_deflate_comp_tv_template),
-                               .decomp = __VECS(zlib_deflate_decomp_tv_template)
-                       }
-               }
        }, {
                .alg = "zstd",
                .test = alg_test_comp,
@@ -5945,6 +5943,25 @@ test_done:
        return rc;
 
 notest:
+       if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_LSKCIPHER) {
+               char nalg[CRYPTO_MAX_ALG_NAME];
+
+               if (snprintf(nalg, sizeof(nalg), "ecb(%s)", alg) >=
+                   sizeof(nalg))
+                       goto notest2;
+
+               i = alg_find_test(nalg);
+               if (i < 0)
+                       goto notest2;
+
+               if (fips_enabled && !alg_test_descs[i].fips_allowed)
+                       goto non_fips_alg;
+
+               rc = alg_test_skcipher(alg_test_descs + i, driver, type, mask);
+               goto test_done;
+       }
+
+notest2:
        printk(KERN_INFO "alg: No test for %s (%s)\n", alg, driver);
 
        if (type & CRYPTO_ALG_FIPS_INTERNAL)
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 5ca7a412508fbfb239b26230cb07852b62718417..d7e98397549b5be5d5c3b67b2997c3e11dafe43b 100644
@@ -653,30 +653,6 @@ static const struct akcipher_testvec rsa_tv_template[] = {
 static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
        {
        .key =
-       "\x04\xf7\x46\xf8\x2f\x15\xf6\x22\x8e\xd7\x57\x4f\xcc\xe7\xbb\xc1"
-       "\xd4\x09\x73\xcf\xea\xd0\x15\x07\x3d\xa5\x8a\x8a\x95\x43\xe4\x68"
-       "\xea\xc6\x25\xc1\xc1\x01\x25\x4c\x7e\xc3\x3c\xa6\x04\x0a\xe7\x08"
-       "\x98",
-       .key_len = 49,
-       .params =
-       "\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
-       "\xce\x3d\x03\x01\x01",
-       .param_len = 21,
-       .m =
-       "\xcd\xb9\xd2\x1c\xb7\x6f\xcd\x44\xb3\xfd\x63\xea\xa3\x66\x7f\xae"
-       "\x63\x85\xe7\x82",
-       .m_size = 20,
-       .algo = OID_id_ecdsa_with_sha1,
-       .c =
-       "\x30\x35\x02\x19\x00\xba\xe5\x93\x83\x6e\xb6\x3b\x63\xa0\x27\x91"
-       "\xc6\xf6\x7f\xc3\x09\xad\x59\xad\x88\x27\xd6\x92\x6b\x02\x18\x10"
-       "\x68\x01\x9d\xba\xce\x83\x08\xef\x95\x52\x7b\xa0\x0f\xe4\x18\x86"
-       "\x80\x6f\xa5\x79\x77\xda\xd0",
-       .c_size = 55,
-       .public_key_vec = true,
-       .siggen_sigver_test = true,
-       }, {
-       .key =
        "\x04\xb6\x4b\xb1\xd1\xac\xba\x24\x8f\x65\xb2\x60\x00\x90\xbf\xbd"
        "\x78\x05\x73\xe9\x79\x1d\x6f\x7c\x0b\xd2\xc3\x93\xa7\x28\xe1\x75"
        "\xf7\xd5\x95\x1d\x28\x10\xc0\x75\x50\x5c\x1a\x4f\x3f\x8f\xa5\xee"
@@ -780,32 +756,6 @@ static const struct akcipher_testvec ecdsa_nist_p192_tv_template[] = {
 static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
        {
        .key =
-       "\x04\xb9\x7b\xbb\xd7\x17\x64\xd2\x7e\xfc\x81\x5d\x87\x06\x83\x41"
-       "\x22\xd6\x9a\xaa\x87\x17\xec\x4f\x63\x55\x2f\x94\xba\xdd\x83\xe9"
-       "\x34\x4b\xf3\xe9\x91\x13\x50\xb6\xcb\xca\x62\x08\xe7\x3b\x09\xdc"
-       "\xc3\x63\x4b\x2d\xb9\x73\x53\xe4\x45\xe6\x7c\xad\xe7\x6b\xb0\xe8"
-       "\xaf",
-       .key_len = 65,
-       .params =
-       "\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
-       "\xce\x3d\x03\x01\x07",
-       .param_len = 21,
-       .m =
-       "\xc2\x2b\x5f\x91\x78\x34\x26\x09\x42\x8d\x6f\x51\xb2\xc5\xaf\x4c"
-       "\x0b\xde\x6a\x42",
-       .m_size = 20,
-       .algo = OID_id_ecdsa_with_sha1,
-       .c =
-       "\x30\x46\x02\x21\x00\xf9\x25\xce\x9f\x3a\xa6\x35\x81\xcf\xd4\xe7"
-       "\xb7\xf0\x82\x56\x41\xf7\xd4\xad\x8d\x94\x5a\x69\x89\xee\xca\x6a"
-       "\x52\x0e\x48\x4d\xcc\x02\x21\x00\xd7\xe4\xef\x52\x66\xd3\x5b\x9d"
-       "\x8a\xfa\x54\x93\x29\xa7\x70\x86\xf1\x03\x03\xf3\x3b\xe2\x73\xf7"
-       "\xfb\x9d\x8b\xde\xd4\x8d\x6f\xad",
-       .c_size = 72,
-       .public_key_vec = true,
-       .siggen_sigver_test = true,
-       }, {
-       .key =
        "\x04\x8b\x6d\xc0\x33\x8e\x2d\x8b\x67\xf5\xeb\xc4\x7f\xa0\xf5\xd9"
        "\x7b\x03\xa5\x78\x9a\xb5\xea\x14\xe4\x23\xd0\xaf\xd7\x0e\x2e\xa0"
        "\xc9\x8b\xdb\x95\xf8\xb3\xaf\xac\x00\x2c\x2c\x1f\x7a\xfd\x95\x88"
@@ -916,36 +866,6 @@ static const struct akcipher_testvec ecdsa_nist_p256_tv_template[] = {
 
 static const struct akcipher_testvec ecdsa_nist_p384_tv_template[] = {
        {
-       .key = /* secp384r1(sha1) */
-       "\x04\x89\x25\xf3\x97\x88\xcb\xb0\x78\xc5\x72\x9a\x14\x6e\x7a\xb1"
-       "\x5a\xa5\x24\xf1\x95\x06\x9e\x28\xfb\xc4\xb9\xbe\x5a\x0d\xd9\x9f"
-       "\xf3\xd1\x4d\x2d\x07\x99\xbd\xda\xa7\x66\xec\xbb\xea\xba\x79\x42"
-       "\xc9\x34\x89\x6a\xe7\x0b\xc3\xf2\xfe\x32\x30\xbe\xba\xf9\xdf\x7e"
-       "\x4b\x6a\x07\x8e\x26\x66\x3f\x1d\xec\xa2\x57\x91\x51\xdd\x17\x0e"
-       "\x0b\x25\xd6\x80\x5c\x3b\xe6\x1a\x98\x48\x91\x45\x7a\x73\xb0\xc3"
-       "\xf1",
-       .key_len = 97,
-       .params =
-       "\x30\x10\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x05\x2b\x81\x04"
-       "\x00\x22",
-       .param_len = 18,
-       .m =
-       "\x12\x55\x28\xf0\x77\xd5\xb6\x21\x71\x32\x48\xcd\x28\xa8\x25\x22"
-       "\x3a\x69\xc1\x93",
-       .m_size = 20,
-       .algo = OID_id_ecdsa_with_sha1,
-       .c =
-       "\x30\x66\x02\x31\x00\xf5\x0f\x24\x4c\x07\x93\x6f\x21\x57\x55\x07"
-       "\x20\x43\x30\xde\xa0\x8d\x26\x8e\xae\x63\x3f\xbc\x20\x3a\xc6\xf1"
-       "\x32\x3c\xce\x70\x2b\x78\xf1\x4c\x26\xe6\x5b\x86\xcf\xec\x7c\x7e"
-       "\xd0\x87\xd7\xd7\x6e\x02\x31\x00\xcd\xbb\x7e\x81\x5d\x8f\x63\xc0"
-       "\x5f\x63\xb1\xbe\x5e\x4c\x0e\xa1\xdf\x28\x8c\x1b\xfa\xf9\x95\x88"
-       "\x74\xa0\x0f\xbf\xaf\xc3\x36\x76\x4a\xa1\x59\xf1\x1c\xa4\x58\x26"
-       "\x79\x12\x2a\xb7\xc5\x15\x92\xc5",
-       .c_size = 104,
-       .public_key_vec = true,
-       .siggen_sigver_test = true,
-       }, {
        .key = /* secp384r1(sha224) */
        "\x04\x69\x6c\xcf\x62\xee\xd0\x0d\xe5\xb5\x2f\x70\x54\xcf\x26\xa0"
        "\xd9\x98\x8d\x92\x2a\xab\x9b\x11\xcb\x48\x18\xa1\xa9\x0d\xd5\x18"
@@ -35754,81 +35674,6 @@ static const struct comp_testvec deflate_decomp_tv_template[] = {
        },
 };
 
-static const struct comp_testvec zlib_deflate_comp_tv_template[] = {
-       {
-               .inlen  = 70,
-               .outlen = 44,
-               .input  = "Join us now and share the software "
-                       "Join us now and share the software ",
-               .output = "\x78\x5e\xf3\xca\xcf\xcc\x53\x28"
-                         "\x2d\x56\xc8\xcb\x2f\x57\x48\xcc"
-                         "\x4b\x51\x28\xce\x48\x2c\x4a\x55"
-                         "\x28\xc9\x48\x55\x28\xce\x4f\x2b"
-                         "\x29\x07\x71\xbc\x08\x2b\x01\x00"
-                         "\x7c\x65\x19\x3d",
-       }, {
-               .inlen  = 191,
-               .outlen = 129,
-               .input  = "This document describes a compression method based on the DEFLATE"
-                       "compression algorithm.  This document defines the application of "
-                       "the DEFLATE algorithm to the IP Payload Compression Protocol.",
-               .output = "\x78\x5e\x5d\xce\x41\x0a\xc3\x30"
-                         "\x0c\x04\xc0\xaf\xec\x0b\xf2\x87"
-                         "\xd2\xa6\x50\xe8\xc1\x07\x7f\x40"
-                         "\xb1\x95\x5a\x60\x5b\xc6\x56\x0f"
-                         "\xfd\x7d\x93\x1e\x42\xe8\x51\xec"
-                         "\xee\x20\x9f\x64\x20\x6a\x78\x17"
-                         "\xae\x86\xc8\x23\x74\x59\x78\x80"
-                         "\x10\xb4\xb4\xce\x63\x88\x56\x14"
-                         "\xb6\xa4\x11\x0b\x0d\x8e\xd8\x6e"
-                         "\x4b\x8c\xdb\x7c\x7f\x5e\xfc\x7c"
-                         "\xae\x51\x7e\x69\x17\x4b\x65\x02"
-                         "\xfc\x1f\xbc\x4a\xdd\xd8\x7d\x48"
-                         "\xad\x65\x09\x64\x3b\xac\xeb\xd9"
-                         "\xc2\x01\xc0\xf4\x17\x3c\x1c\x1c"
-                         "\x7d\xb2\x52\xc4\xf5\xf4\x8f\xeb"
-                         "\x6a\x1a\x34\x4f\x5f\x2e\x32\x45"
-                         "\x4e",
-       },
-};
-
-static const struct comp_testvec zlib_deflate_decomp_tv_template[] = {
-       {
-               .inlen  = 128,
-               .outlen = 191,
-               .input  = "\x78\x9c\x5d\x8d\x31\x0e\xc2\x30"
-                         "\x10\x04\xbf\xb2\x2f\xc8\x1f\x10"
-                         "\x04\x09\x89\xc2\x85\x3f\x70\xb1"
-                         "\x2f\xf8\x24\xdb\x67\xd9\x47\xc1"
-                         "\xef\x49\x68\x12\x51\xae\x76\x67"
-                         "\xd6\x27\x19\x88\x1a\xde\x85\xab"
-                         "\x21\xf2\x08\x5d\x16\x1e\x20\x04"
-                         "\x2d\xad\xf3\x18\xa2\x15\x85\x2d"
-                         "\x69\xc4\x42\x83\x23\xb6\x6c\x89"
-                         "\x71\x9b\xef\xcf\x8b\x9f\xcf\x33"
-                         "\xca\x2f\xed\x62\xa9\x4c\x80\xff"
-                         "\x13\xaf\x52\x37\xed\x0e\x52\x6b"
-                         "\x59\x02\xd9\x4e\xe8\x7a\x76\x1d"
-                         "\x02\x98\xfe\x8a\x87\x83\xa3\x4f"
-                         "\x56\x8a\xb8\x9e\x8e\x5c\x57\xd3"
-                         "\xa0\x79\xfa\x02\x2e\x32\x45\x4e",
-               .output = "This document describes a compression method based on the DEFLATE"
-                       "compression algorithm.  This document defines the application of "
-                       "the DEFLATE algorithm to the IP Payload Compression Protocol.",
-       }, {
-               .inlen  = 44,
-               .outlen = 70,
-               .input  = "\x78\x9c\xf3\xca\xcf\xcc\x53\x28"
-                         "\x2d\x56\xc8\xcb\x2f\x57\x48\xcc"
-                         "\x4b\x51\x28\xce\x48\x2c\x4a\x55"
-                         "\x28\xc9\x48\x55\x28\xce\x4f\x2b"
-                         "\x29\x07\x71\xbc\x08\x2b\x01\x00"
-                         "\x7c\x65\x19\x3d",
-               .output = "Join us now and share the software "
-                       "Join us now and share the software ",
-       },
-};
-
 /*
  * LZO test vectors (null-terminated strings).
  */
diff --git a/crypto/vmac.c b/crypto/vmac.c
index 4633b2dda1e0a5a4fe7f9f07e4b53468d500c44d..0a1d8efa6c1a6f42f368342f7eadef33e1b546a5 100644
@@ -649,7 +649,6 @@ static int vmac_create(struct crypto_template *tmpl, struct rtattr **tb)
 
        inst->alg.base.cra_priority = alg->cra_priority;
        inst->alg.base.cra_blocksize = alg->cra_blocksize;
-       inst->alg.base.cra_alignmask = alg->cra_alignmask;
 
        inst->alg.base.cra_ctxsize = sizeof(struct vmac_tfm_ctx);
        inst->alg.base.cra_init = vmac_init_tfm;
diff --git a/crypto/xcbc.c b/crypto/xcbc.c
index 6074c5c1da492e0f9045f5ac7b105abf9fa2748c..a9e8ee9c1949cba2b20e55d0d1fd334ad7a20086 100644
@@ -27,7 +27,7 @@ static u_int32_t ks[12] = {0x01010101, 0x01010101, 0x01010101, 0x01010101,
  */
 struct xcbc_tfm_ctx {
        struct crypto_cipher *child;
-       u8 ctx[];
+       u8 consts[];
 };
 
 /*
@@ -43,7 +43,7 @@ struct xcbc_tfm_ctx {
  */
 struct xcbc_desc_ctx {
        unsigned int len;
-       u8 ctx[];
+       u8 odds[];
 };
 
 #define XCBC_BLOCKSIZE 16
@@ -51,9 +51,8 @@ struct xcbc_desc_ctx {
 static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
                                     const u8 *inkey, unsigned int keylen)
 {
-       unsigned long alignmask = crypto_shash_alignmask(parent);
        struct xcbc_tfm_ctx *ctx = crypto_shash_ctx(parent);
-       u8 *consts = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
+       u8 *consts = ctx->consts;
        int err = 0;
        u8 key1[XCBC_BLOCKSIZE];
        int bs = sizeof(key1);
@@ -71,10 +70,9 @@ static int crypto_xcbc_digest_setkey(struct crypto_shash *parent,
 
 static int crypto_xcbc_digest_init(struct shash_desc *pdesc)
 {
-       unsigned long alignmask = crypto_shash_alignmask(pdesc->tfm);
        struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
        int bs = crypto_shash_blocksize(pdesc->tfm);
-       u8 *prev = PTR_ALIGN(&ctx->ctx[0], alignmask + 1) + bs;
+       u8 *prev = &ctx->odds[bs];
 
        ctx->len = 0;
        memset(prev, 0, bs);
@@ -86,12 +84,11 @@ static int crypto_xcbc_digest_update(struct shash_desc *pdesc, const u8 *p,
                                     unsigned int len)
 {
        struct crypto_shash *parent = pdesc->tfm;
-       unsigned long alignmask = crypto_shash_alignmask(parent);
        struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent);
        struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
        struct crypto_cipher *tfm = tctx->child;
        int bs = crypto_shash_blocksize(parent);
-       u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
+       u8 *odds = ctx->odds;
        u8 *prev = odds + bs;
 
        /* checking the data can fill the block */
@@ -132,13 +129,11 @@ static int crypto_xcbc_digest_update(struct shash_desc *pdesc, const u8 *p,
 static int crypto_xcbc_digest_final(struct shash_desc *pdesc, u8 *out)
 {
        struct crypto_shash *parent = pdesc->tfm;
-       unsigned long alignmask = crypto_shash_alignmask(parent);
        struct xcbc_tfm_ctx *tctx = crypto_shash_ctx(parent);
        struct xcbc_desc_ctx *ctx = shash_desc_ctx(pdesc);
        struct crypto_cipher *tfm = tctx->child;
        int bs = crypto_shash_blocksize(parent);
-       u8 *consts = PTR_ALIGN(&tctx->ctx[0], alignmask + 1);
-       u8 *odds = PTR_ALIGN(&ctx->ctx[0], alignmask + 1);
+       u8 *odds = ctx->odds;
        u8 *prev = odds + bs;
        unsigned int offset = 0;
 
@@ -157,7 +152,7 @@ static int crypto_xcbc_digest_final(struct shash_desc *pdesc, u8 *out)
        }
 
        crypto_xor(prev, odds, bs);
-       crypto_xor(prev, consts + offset, bs);
+       crypto_xor(prev, &tctx->consts[offset], bs);
 
        crypto_cipher_encrypt_one(tfm, out, prev);
 
@@ -191,7 +186,6 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
        struct shash_instance *inst;
        struct crypto_cipher_spawn *spawn;
        struct crypto_alg *alg;
-       unsigned long alignmask;
        u32 mask;
        int err;
 
@@ -218,21 +212,15 @@ static int xcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
        if (err)
                goto err_free_inst;
 
-       alignmask = alg->cra_alignmask | 3;
-       inst->alg.base.cra_alignmask = alignmask;
        inst->alg.base.cra_priority = alg->cra_priority;
        inst->alg.base.cra_blocksize = alg->cra_blocksize;
+       inst->alg.base.cra_ctxsize = sizeof(struct xcbc_tfm_ctx) +
+                                    alg->cra_blocksize * 2;
 
        inst->alg.digestsize = alg->cra_blocksize;
-       inst->alg.descsize = ALIGN(sizeof(struct xcbc_desc_ctx),
-                                  crypto_tfm_ctx_alignment()) +
-                            (alignmask &
-                             ~(crypto_tfm_ctx_alignment() - 1)) +
+       inst->alg.descsize = sizeof(struct xcbc_desc_ctx) +
                             alg->cra_blocksize * 2;
 
-       inst->alg.base.cra_ctxsize = ALIGN(sizeof(struct xcbc_tfm_ctx),
-                                          alignmask + 1) +
-                                    alg->cra_blocksize * 2;
        inst->alg.base.cra_init = xcbc_init_tfm;
        inst->alg.base.cra_exit = xcbc_exit_tfm;
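
With the alignmask gone, the xcbc hunk above sizes both contexts as plain structs with flexible array members and indexes the buffers directly, instead of over-allocating and PTR_ALIGN()ing into an opaque ctx[]. A minimal userspace sketch of the new layout (stand-in types, not the kernel structs):

    #include <stdlib.h>
    #include <string.h>

    #define BS 16   /* XCBC_BLOCKSIZE */

    struct tfm_ctx_sketch {
            void *child;              /* stands in for struct crypto_cipher * */
            unsigned char consts[];   /* 2 * BS bytes: the K2 || K3 constants */
    };

    int main(void)
    {
            /* Mirrors cra_ctxsize = sizeof(struct xcbc_tfm_ctx) + bs * 2 */
            struct tfm_ctx_sketch *ctx = malloc(sizeof(*ctx) + 2 * BS);

            if (!ctx)
                    return 1;
            memset(ctx->consts, 0, 2 * BS);
            /* &ctx->consts[0] is K2, &ctx->consts[BS] is K3; no PTR_ALIGN */
            free(ctx);
            return 0;
    }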
 
index 548b302c6c6a00a1ff51011123f1373d6c870dc2..672e1a3f0b0c933f6d1bd54fb597ab68ac02f50a 100644 (file)
@@ -28,7 +28,7 @@ struct xts_tfm_ctx {
 
 struct xts_instance_ctx {
        struct crypto_skcipher_spawn spawn;
-       char name[CRYPTO_MAX_ALG_NAME];
+       struct crypto_cipher_spawn tweak_spawn;
 };
 
 struct xts_request_ctx {
@@ -306,7 +306,7 @@ static int xts_init_tfm(struct crypto_skcipher *tfm)
 
        ctx->child = child;
 
-       tweak = crypto_alloc_cipher(ictx->name, 0, 0);
+       tweak = crypto_spawn_cipher(&ictx->tweak_spawn);
        if (IS_ERR(tweak)) {
                crypto_free_skcipher(ctx->child);
                return PTR_ERR(tweak);
@@ -333,14 +333,16 @@ static void xts_free_instance(struct skcipher_instance *inst)
        struct xts_instance_ctx *ictx = skcipher_instance_ctx(inst);
 
        crypto_drop_skcipher(&ictx->spawn);
+       crypto_drop_cipher(&ictx->tweak_spawn);
        kfree(inst);
 }
 
 static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
+       struct skcipher_alg_common *alg;
+       char name[CRYPTO_MAX_ALG_NAME];
        struct skcipher_instance *inst;
        struct xts_instance_ctx *ctx;
-       struct skcipher_alg *alg;
        const char *cipher_name;
        u32 mask;
        int err;
@@ -363,25 +365,25 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
                                   cipher_name, 0, mask);
        if (err == -ENOENT) {
                err = -ENAMETOOLONG;
-               if (snprintf(ctx->name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+               if (snprintf(name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
                             cipher_name) >= CRYPTO_MAX_ALG_NAME)
                        goto err_free_inst;
 
                err = crypto_grab_skcipher(&ctx->spawn,
                                           skcipher_crypto_instance(inst),
-                                          ctx->name, 0, mask);
+                                          name, 0, mask);
        }
 
        if (err)
                goto err_free_inst;
 
-       alg = crypto_skcipher_spawn_alg(&ctx->spawn);
+       alg = crypto_spawn_skcipher_alg_common(&ctx->spawn);
 
        err = -EINVAL;
        if (alg->base.cra_blocksize != XTS_BLOCK_SIZE)
                goto err_free_inst;
 
-       if (crypto_skcipher_alg_ivsize(alg))
+       if (alg->ivsize)
                goto err_free_inst;
 
        err = crypto_inst_setname(skcipher_crypto_instance(inst), "xts",
@@ -398,31 +400,36 @@ static int xts_create(struct crypto_template *tmpl, struct rtattr **tb)
        if (!strncmp(cipher_name, "ecb(", 4)) {
                int len;
 
-               len = strscpy(ctx->name, cipher_name + 4, sizeof(ctx->name));
+               len = strscpy(name, cipher_name + 4, sizeof(name));
                if (len < 2)
                        goto err_free_inst;
 
-               if (ctx->name[len - 1] != ')')
+               if (name[len - 1] != ')')
                        goto err_free_inst;
 
-               ctx->name[len - 1] = 0;
+               name[len - 1] = 0;
 
                if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
-                            "xts(%s)", ctx->name) >= CRYPTO_MAX_ALG_NAME) {
+                            "xts(%s)", name) >= CRYPTO_MAX_ALG_NAME) {
                        err = -ENAMETOOLONG;
                        goto err_free_inst;
                }
        } else
                goto err_free_inst;
 
+       err = crypto_grab_cipher(&ctx->tweak_spawn,
+                                skcipher_crypto_instance(inst), name, 0, mask);
+       if (err)
+               goto err_free_inst;
+
        inst->alg.base.cra_priority = alg->base.cra_priority;
        inst->alg.base.cra_blocksize = XTS_BLOCK_SIZE;
        inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
                                       (__alignof__(u64) - 1);
 
        inst->alg.ivsize = XTS_BLOCK_SIZE;
-       inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) * 2;
-       inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg) * 2;
+       inst->alg.min_keysize = alg->min_keysize * 2;
+       inst->alg.max_keysize = alg->max_keysize * 2;
 
        inst->alg.base.cra_ctxsize = sizeof(struct xts_tfm_ctx);
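
Two things change in the xts template above: the tweak cipher is now held as a spawn grabbed at instance-creation time, so xts_init_tfm() no longer resolves it by name via crypto_alloc_cipher(), and the scratch name buffer moves off the instance context onto the stack of xts_create(). The name handling itself, stripping the "ecb(...)" wrapper to find the inner cipher, is easy to sanity-check in isolation; a small runnable sketch mirroring the strscpy()/snprintf() logic (inner_cipher_name() is a hypothetical helper):

    #include <stdio.h>
    #include <string.h>

    #define CRYPTO_MAX_ALG_NAME 128

    /* "ecb(aes)" -> "aes"; returns -1 on malformed input */
    static int inner_cipher_name(char name[CRYPTO_MAX_ALG_NAME],
                                 const char *cipher_name)
    {
            int len;

            if (strncmp(cipher_name, "ecb(", 4))
                    return -1;
            len = snprintf(name, CRYPTO_MAX_ALG_NAME, "%s", cipher_name + 4);
            if (len < 2 || len >= CRYPTO_MAX_ALG_NAME)
                    return -1;
            if (name[len - 1] != ')')
                    return -1;
            name[len - 1] = '\0';
            return 0;
    }

    int main(void)
    {
            char name[CRYPTO_MAX_ALG_NAME];

            if (!inner_cipher_name(name, "ecb(aes)"))
                    printf("tweak cipher: %s\n", name);   /* "aes" */
            return 0;
    }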
 
index e19b0f9f48b97f3f9a8964dfc015ad1a6adaffe8..b03e8030062758b061cd5d17e356e7756bfb0ef4 100644 (file)
@@ -70,7 +70,7 @@ static int bcm2835_rng_read(struct hwrng *rng, void *buf, size_t max,
        while ((rng_readl(priv, RNG_STATUS) >> 24) == 0) {
                if (!wait)
                        return 0;
-               hwrng_msleep(rng, 1000);
+               hwrng_yield(rng);
        }
 
        num_words = rng_readl(priv, RNG_STATUS) >> 24;
@@ -149,8 +149,6 @@ static int bcm2835_rng_probe(struct platform_device *pdev)
        if (!priv)
                return -ENOMEM;
 
-       platform_set_drvdata(pdev, priv);
-
        /* map peripheral */
        priv->base = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(priv->base))
index e3598ec9cfca8b6f6e22f200dd7bf4e97423b2c9..420f155d251fb50a4e2e93089388c68b7bf8f66b 100644 (file)
@@ -678,6 +678,12 @@ long hwrng_msleep(struct hwrng *rng, unsigned int msecs)
 }
 EXPORT_SYMBOL_GPL(hwrng_msleep);
 
+long hwrng_yield(struct hwrng *rng)
+{
+       return wait_for_completion_interruptible_timeout(&rng->dying, 1);
+}
+EXPORT_SYMBOL_GPL(hwrng_yield);
+
 static int __init hwrng_modinit(void)
 {
        int ret;
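
Per the hunk above, hwrng_yield() waits on the core's rng->dying completion with a one-jiffy timeout, so a polling driver such as bcm2835-rng re-checks its status register every tick instead of sleeping a full second, and still wakes immediately when the device is torn down. A userspace analogue of that tick-bounded wait on a "dying" flag, purely illustrative (pthread types; 4 ms stands in for one tick at HZ=250):

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    struct rng_sketch {
            pthread_mutex_t lock;
            pthread_cond_t dying_cond;
            bool dying;
    };

    /* Wait at most ~one tick; return true if teardown was signalled. */
    static bool rng_yield_sketch(struct rng_sketch *r)
    {
            struct timespec ts;
            bool dying;

            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_nsec += 4000000L;                 /* one 4 ms "tick" */
            if (ts.tv_nsec >= 1000000000L) {
                    ts.tv_sec += 1;
                    ts.tv_nsec -= 1000000000L;
            }

            pthread_mutex_lock(&r->lock);
            if (!r->dying)
                    pthread_cond_timedwait(&r->dying_cond, &r->lock, &ts);
            dying = r->dying;
            pthread_mutex_unlock(&r->lock);
            return dying;
    }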
index 12fbe809183190209c371a3983b1270e2f3871f1..159baf00a86755d93fd364bd71c553d84899738d 100644 (file)
@@ -58,7 +58,8 @@ struct amd_geode_priv {
 
 static int geode_rng_data_read(struct hwrng *rng, u32 *data)
 {
-       void __iomem *mem = (void __iomem *)rng->priv;
+       struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
+       void __iomem *mem = priv->membase;
 
        *data = readl(mem + GEODE_RNG_DATA_REG);
 
@@ -67,7 +68,8 @@ static int geode_rng_data_read(struct hwrng *rng, u32 *data)
 
 static int geode_rng_data_present(struct hwrng *rng, int wait)
 {
-       void __iomem *mem = (void __iomem *)rng->priv;
+       struct amd_geode_priv *priv = (struct amd_geode_priv *)rng->priv;
+       void __iomem *mem = priv->membase;
        int data, i;
 
        for (i = 0; i < 20; i++) {
index 96438f85cafa7124d85f3db377465e342c8d3038..b6f27566e0ba3b51e2df436661cb318deefcf9ad 100644 (file)
@@ -79,8 +79,6 @@ static int hisi_rng_probe(struct platform_device *pdev)
        if (!rng)
                return -ENOMEM;
 
-       platform_set_drvdata(pdev, rng);
-
        rng->base = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(rng->base))
                return PTR_ERR(rng->base);
index e4b385b01b1134017e37f3a0c0e9903b93492d5b..118a72acb99b40cbe5b8ab2b2b739a04ce6a8b15 100644 (file)
@@ -51,8 +51,8 @@
 
 #define RNGC_ERROR_STATUS_STAT_ERR     0x00000008
 
-#define RNGC_TIMEOUT  3000 /* 3 sec */
-
+#define RNGC_SELFTEST_TIMEOUT 2500 /* us */
+#define RNGC_SEED_TIMEOUT      200 /* ms */
 
 static bool self_test = true;
 module_param(self_test, bool, 0);
@@ -110,7 +110,8 @@ static int imx_rngc_self_test(struct imx_rngc *rngc)
        cmd = readl(rngc->base + RNGC_COMMAND);
        writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND);
 
-       ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
+       ret = wait_for_completion_timeout(&rngc->rng_op_done,
+                                         usecs_to_jiffies(RNGC_SELFTEST_TIMEOUT));
        imx_rngc_irq_mask_clear(rngc);
        if (!ret)
                return -ETIMEDOUT;
@@ -182,7 +183,8 @@ static int imx_rngc_init(struct hwrng *rng)
                cmd = readl(rngc->base + RNGC_COMMAND);
                writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND);
 
-               ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
+               ret = wait_for_completion_timeout(&rngc->rng_op_done,
+                                                 msecs_to_jiffies(RNGC_SEED_TIMEOUT));
                if (!ret) {
                        ret = -ETIMEDOUT;
                        goto err;
index 2f2f21f1b659e0a2749b95f7eb1aab53c63d07dc..dff7b9db7044ce513808e4606db80c71425f0028 100644 (file)
@@ -81,7 +81,6 @@ struct trng_regs {
 };
 
 struct ks_sa_rng {
-       struct device   *dev;
        struct hwrng    rng;
        struct clk      *clk;
        struct regmap   *regmap_cfg;
@@ -113,8 +112,7 @@ static unsigned int refill_delay_ns(unsigned long clk_rate)
 static int ks_sa_rng_init(struct hwrng *rng)
 {
        u32 value;
-       struct device *dev = (struct device *)rng->priv;
-       struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+       struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
        unsigned long clk_rate = clk_get_rate(ks_sa_rng->clk);
 
        /* Enable RNG module */
@@ -153,8 +151,7 @@ static int ks_sa_rng_init(struct hwrng *rng)
 
 static void ks_sa_rng_cleanup(struct hwrng *rng)
 {
-       struct device *dev = (struct device *)rng->priv;
-       struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+       struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
 
        /* Disable RNG */
        writel(0, &ks_sa_rng->reg_rng->control);
@@ -164,8 +161,7 @@ static void ks_sa_rng_cleanup(struct hwrng *rng)
 
 static int ks_sa_rng_data_read(struct hwrng *rng, u32 *data)
 {
-       struct device *dev = (struct device *)rng->priv;
-       struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+       struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
 
        /* Read random data */
        data[0] = readl(&ks_sa_rng->reg_rng->output_l);
@@ -179,8 +175,7 @@ static int ks_sa_rng_data_read(struct hwrng *rng, u32 *data)
 
 static int ks_sa_rng_data_present(struct hwrng *rng, int wait)
 {
-       struct device *dev = (struct device *)rng->priv;
-       struct ks_sa_rng *ks_sa_rng = dev_get_drvdata(dev);
+       struct ks_sa_rng *ks_sa_rng = container_of(rng, struct ks_sa_rng, rng);
        u64 now = ktime_get_ns();
 
        u32     ready;
@@ -217,7 +212,6 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
        if (!ks_sa_rng)
                return -ENOMEM;
 
-       ks_sa_rng->dev = dev;
        ks_sa_rng->rng = (struct hwrng) {
                .name = "ks_sa_hwrng",
                .init = ks_sa_rng_init,
@@ -225,7 +219,6 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
                .data_present = ks_sa_rng_data_present,
                .cleanup = ks_sa_rng_cleanup,
        };
-       ks_sa_rng->rng.priv = (unsigned long)dev;
 
        ks_sa_rng->reg_rng = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(ks_sa_rng->reg_rng))
@@ -235,21 +228,16 @@ static int ks_sa_rng_probe(struct platform_device *pdev)
                syscon_regmap_lookup_by_phandle(dev->of_node,
                                                "ti,syscon-sa-cfg");
 
-       if (IS_ERR(ks_sa_rng->regmap_cfg)) {
-               dev_err(dev, "syscon_node_to_regmap failed\n");
-               return -EINVAL;
-       }
+       if (IS_ERR(ks_sa_rng->regmap_cfg))
+               return dev_err_probe(dev, -EINVAL, "syscon_node_to_regmap failed\n");
 
        pm_runtime_enable(dev);
        ret = pm_runtime_resume_and_get(dev);
        if (ret < 0) {
-               dev_err(dev, "Failed to enable SA power-domain\n");
                pm_runtime_disable(dev);
-               return ret;
+               return dev_err_probe(dev, ret, "Failed to enable SA power-domain\n");
        }
 
-       platform_set_drvdata(pdev, ks_sa_rng);
-
        return devm_hwrng_register(&pdev->dev, &ks_sa_rng->rng);
 }
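
The ks-sa conversion drops both the ->priv back-pointer and the drvdata lookup: struct hwrng is embedded in the driver state, so every callback recovers its context with container_of(). A minimal userspace sketch of the pattern (stand-in types):

    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct hwrng_sketch {
            const char *name;
    };

    struct drv_sketch {
            int state;
            struct hwrng_sketch rng;   /* embedded, as in struct ks_sa_rng */
    };

    static int init_sketch(struct hwrng_sketch *rng)
    {
            struct drv_sketch *d = container_of(rng, struct drv_sketch, rng);

            return d->state;   /* no rng->priv, no dev_get_drvdata() */
    }

    int main(void)
    {
            struct drv_sketch dev = { .state = 42, .rng = { .name = "sketch" } };

            printf("%d\n", init_sketch(&dev.rng));   /* 42 */
            return 0;
    }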
 
index a4eb8e35f13d7d02ee1317fecd3577b555843270..75225eb9fef6265a03d72f62fa0a07a63b548a5f 100644 (file)
 #include <linux/types.h>
 #include <linux/of.h>
 #include <linux/clk.h>
+#include <linux/iopoll.h>
 
-#define RNG_DATA 0x00
+#define RNG_DATA       0x00
+#define RNG_S4_DATA    0x08
+#define RNG_S4_CFG     0x00
+
+#define RUN_BIT                BIT(0)
+#define SEED_READY_STS_BIT     BIT(31)
+
+struct meson_rng_priv {
+       int (*read)(struct hwrng *rng, void *buf, size_t max, bool wait);
+};
 
 struct meson_rng_data {
        void __iomem *base;
        struct hwrng rng;
+       struct device *dev;
 };
 
 static int meson_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
@@ -31,16 +42,62 @@ static int meson_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
        return sizeof(u32);
 }
 
+static int meson_rng_wait_status(void __iomem *cfg_addr, int bit)
+{
+       u32 status = 0;
+       int ret;
+
+       ret = readl_relaxed_poll_timeout_atomic(cfg_addr,
+                                               status, !(status & bit),
+                                               10, 10000);
+       if (ret)
+               return -EBUSY;
+
+       return 0;
+}
+
+static int meson_s4_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
+{
+       struct meson_rng_data *data =
+                       container_of(rng, struct meson_rng_data, rng);
+
+       void __iomem *cfg_addr = data->base + RNG_S4_CFG;
+       int err;
+
+       writel_relaxed(readl_relaxed(cfg_addr) | SEED_READY_STS_BIT, cfg_addr);
+
+       err = meson_rng_wait_status(cfg_addr, SEED_READY_STS_BIT);
+       if (err) {
+               dev_err(data->dev, "Seed isn't ready, try again\n");
+               return err;
+       }
+
+       err = meson_rng_wait_status(cfg_addr, RUN_BIT);
+       if (err) {
+               dev_err(data->dev, "Can't get random number, try again\n");
+               return err;
+       }
+
+       *(u32 *)buf = readl_relaxed(data->base + RNG_S4_DATA);
+
+       return sizeof(u32);
+}
+
 static int meson_rng_probe(struct platform_device *pdev)
 {
        struct device *dev = &pdev->dev;
        struct meson_rng_data *data;
        struct clk *core_clk;
+       const struct meson_rng_priv *priv;
 
        data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
        if (!data)
                return -ENOMEM;
 
+       priv = device_get_match_data(&pdev->dev);
+       if (!priv)
+               return -ENODEV;
+
        data->base = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(data->base))
                return PTR_ERR(data->base);
@@ -51,13 +108,30 @@ static int meson_rng_probe(struct platform_device *pdev)
                                     "Failed to get core clock\n");
 
        data->rng.name = pdev->name;
-       data->rng.read = meson_rng_read;
+       data->rng.read = priv->read;
+
+       data->dev = &pdev->dev;
 
        return devm_hwrng_register(dev, &data->rng);
 }
 
+static const struct meson_rng_priv meson_rng_priv = {
+       .read = meson_rng_read,
+};
+
+static const struct meson_rng_priv meson_rng_priv_s4 = {
+       .read = meson_s4_rng_read,
+};
+
 static const struct of_device_id meson_rng_of_match[] = {
-       { .compatible = "amlogic,meson-rng", },
+       {
+               .compatible = "amlogic,meson-rng",
+               .data = (void *)&meson_rng_priv,
+       },
+       {
+               .compatible = "amlogic,meson-s4-rng",
+               .data = (void *)&meson_rng_priv_s4,
+       },
        {},
 };
 MODULE_DEVICE_TABLE(of, meson_rng_of_match);
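
The S4 support above is a textbook per-compatible dispatch: each of_device_id entry carries a struct meson_rng_priv, and probe picks the right read callback via device_get_match_data(), the same pattern the n2rng and aspeed-hace hunks below adopt. A reduced userspace analogue of the table lookup, illustrative only:

    #include <stdio.h>
    #include <string.h>

    struct ops_sketch {
            const char *(*read)(void);
    };

    static const char *plain_read(void) { return "RNG_DATA @ 0x00"; }
    static const char *s4_read(void)    { return "RNG_S4_DATA @ 0x08"; }

    static const struct ops_sketch plain_ops = { .read = plain_read };
    static const struct ops_sketch s4_ops    = { .read = s4_read };

    /* Stand-in for the of_device_id table + device_get_match_data() */
    static const struct {
            const char *compatible;
            const struct ops_sketch *data;
    } match[] = {
            { "amlogic,meson-rng",    &plain_ops },
            { "amlogic,meson-s4-rng", &s4_ops },
    };

    int main(void)
    {
            const char *compat = "amlogic,meson-s4-rng";

            for (size_t i = 0; i < sizeof(match) / sizeof(match[0]); i++)
                    if (!strcmp(match[i].compatible, compat))
                            printf("%s\n", match[i].data->read());
            return 0;
    }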
index c6972734ae62e8e31b734b10beaa6ceed13af326..0994024daa703248ba01a7b05c06ad88853db382 100644 (file)
@@ -79,8 +79,6 @@ static int mpfs_rng_probe(struct platform_device *pdev)
        rng_priv->rng.read = mpfs_rng_read;
        rng_priv->rng.name = pdev->name;
 
-       platform_set_drvdata(pdev, rng_priv);
-
        ret = devm_hwrng_register(&pdev->dev, &rng_priv->rng);
        if (ret)
                return dev_err_probe(&pdev->dev, ret, "Failed to register MPFS hwrng\n");
index 73e408146420d5573099942520870a8078f0986e..aaae16b98475a202a84653291159732a3ca5e6ce 100644 (file)
@@ -14,7 +14,8 @@
 #include <linux/hw_random.h>
 
 #include <linux/of.h>
-#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
 
 #include <asm/hypervisor.h>
 
@@ -695,20 +696,15 @@ static void n2rng_driver_version(void)
 static const struct of_device_id n2rng_match[];
 static int n2rng_probe(struct platform_device *op)
 {
-       const struct of_device_id *match;
        int err = -ENOMEM;
        struct n2rng *np;
 
-       match = of_match_device(n2rng_match, &op->dev);
-       if (!match)
-               return -EINVAL;
-
        n2rng_driver_version();
        np = devm_kzalloc(&op->dev, sizeof(*np), GFP_KERNEL);
        if (!np)
                goto out;
        np->op = op;
-       np->data = (struct n2rng_template *)match->data;
+       np->data = (struct n2rng_template *)device_get_match_data(&op->dev);
 
        INIT_DELAYED_WORK(&np->work, n2rng_work);
 
index 8c6a40d6ce3dea2ca56398af1df196007cb98919..a2009fc4ad3c1c0691a1189932ada0c3858d5595 100644 (file)
@@ -88,4 +88,5 @@ static struct amba_driver nmk_rng_driver = {
 
 module_amba_driver(nmk_rng_driver);
 
+MODULE_DESCRIPTION("ST-Ericsson Nomadik Random Number Generator");
 MODULE_LICENSE("GPL");
index 8561a09b46814eec46c410f62e98cd334bb8b974..412f544050364d6d76a1d502db6ec65a7e66daa8 100644 (file)
@@ -33,7 +33,7 @@ static int octeon_rng_init(struct hwrng *rng)
        ctl.u64 = 0;
        ctl.s.ent_en = 1; /* Enable the entropy source.  */
        ctl.s.rng_en = 1; /* Enable the RNG hardware.  */
-       cvmx_write_csr((__force u64)p->control_status, ctl.u64);
+       cvmx_write_csr((unsigned long)p->control_status, ctl.u64);
        return 0;
 }
 
@@ -44,14 +44,14 @@ static void octeon_rng_cleanup(struct hwrng *rng)
 
        ctl.u64 = 0;
        /* Disable everything.  */
-       cvmx_write_csr((__force u64)p->control_status, ctl.u64);
+       cvmx_write_csr((unsigned long)p->control_status, ctl.u64);
 }
 
 static int octeon_rng_data_read(struct hwrng *rng, u32 *data)
 {
        struct octeon_rng *p = container_of(rng, struct octeon_rng, ops);
 
-       *data = cvmx_read64_uint32((__force u64)p->result);
+       *data = cvmx_read64_uint32((unsigned long)p->result);
        return sizeof(u32);
 }
 
index 6e9dfac9fc9f4ccbea72a5766a93e3ac399cd613..23749817d83c71f4645ad043cd9581dd1ba6f7e9 100644 (file)
@@ -121,4 +121,5 @@ static struct platform_driver st_rng_driver = {
 module_platform_driver(st_rng_driver);
 
 MODULE_AUTHOR("Pankaj Dev <pankaj.dev@st.com>");
+MODULE_DESCRIPTION("ST Microelectronics HW Random Number Generator");
 MODULE_LICENSE("GPL v2");
index efb6a9f9a11b5c81be5f3fe45c5ae5768eb0ae01..41e1dbea5d2ebb454801e51a0e172c74f94d4be5 100644 (file)
 #include <linux/reset.h>
 #include <linux/slab.h>
 
-#define RNG_CR 0x00
-#define RNG_CR_RNGEN BIT(2)
-#define RNG_CR_CED BIT(5)
-
-#define RNG_SR 0x04
-#define RNG_SR_SEIS BIT(6)
-#define RNG_SR_CEIS BIT(5)
-#define RNG_SR_DRDY BIT(0)
+#define RNG_CR                 0x00
+#define RNG_CR_RNGEN           BIT(2)
+#define RNG_CR_CED             BIT(5)
+#define RNG_CR_CONFIG1         GENMASK(11, 8)
+#define RNG_CR_NISTC           BIT(12)
+#define RNG_CR_CONFIG2         GENMASK(15, 13)
+#define RNG_CR_CLKDIV_SHIFT    16
+#define RNG_CR_CLKDIV          GENMASK(19, 16)
+#define RNG_CR_CONFIG3         GENMASK(25, 20)
+#define RNG_CR_CONDRST         BIT(30)
+#define RNG_CR_CONFLOCK                BIT(31)
+#define RNG_CR_ENTROPY_SRC_MASK        (RNG_CR_CONFIG1 | RNG_CR_NISTC | RNG_CR_CONFIG2 | RNG_CR_CONFIG3)
+#define RNG_CR_CONFIG_MASK     (RNG_CR_ENTROPY_SRC_MASK | RNG_CR_CED | RNG_CR_CLKDIV)
+
+#define RNG_SR                 0x04
+#define RNG_SR_DRDY            BIT(0)
+#define RNG_SR_CECS            BIT(1)
+#define RNG_SR_SECS            BIT(2)
+#define RNG_SR_CEIS            BIT(5)
+#define RNG_SR_SEIS            BIT(6)
+
+#define RNG_DR                 0x08
+
+#define RNG_NSCR               0x0C
+#define RNG_NSCR_MASK          GENMASK(17, 0)
+
+#define RNG_HTCR               0x10
+
+#define RNG_NB_RECOVER_TRIES   3
+
+struct stm32_rng_data {
+       uint    max_clock_rate;
+       u32     cr;
+       u32     nscr;
+       u32     htcr;
+       bool    has_cond_reset;
+};
 
-#define RNG_DR 0x08
+/**
+ * struct stm32_rng_config - RNG configuration data
+ *
+ * @cr:                        RNG configuration. 0 means default hardware RNG configuration.
+ * @nscr:              Noise sources control configuration.
+ * @htcr:              Health tests configuration.
+ */
+struct stm32_rng_config {
+       u32 cr;
+       u32 nscr;
+       u32 htcr;
+};
 
 struct stm32_rng_private {
        struct hwrng rng;
        void __iomem *base;
        struct clk *clk;
        struct reset_control *rst;
+       struct stm32_rng_config pm_conf;
+       const struct stm32_rng_data *data;
        bool ced;
+       bool lock_conf;
 };
 
+/*
+ * Excerpts from the STM32 RNG specification, for RNG versions that support CONDRST.
+ *
+ * When a noise source (or seed) error occurs, the RNG stops generating
+ * random numbers and sets to “1” both SEIS and SECS bits to indicate
+ * that a seed error occurred. (...)
+ *
+ * 1. Software reset by writing CONDRST at 1 and at 0 (see bitfield
+ * description for details). This step is needed only if SECS is set.
+ * Indeed, when SEIS is set and SECS is cleared it means RNG performed
+ * the reset automatically (auto-reset).
+ * 2. If SECS was set in step 1 (no auto-reset) wait for CONDRST
+ * to be cleared in the RNG_CR register, then confirm that SEIS is
+ * cleared in the RNG_SR register. Otherwise just clear SEIS bit in
+ * the RNG_SR register.
+ * 3. If SECS was set in step 1 (no auto-reset) wait for SECS to be
+ * cleared by RNG. The random number generation is now back to normal.
+ */
+static int stm32_rng_conceal_seed_error_cond_reset(struct stm32_rng_private *priv)
+{
+       struct device *dev = (struct device *)priv->rng.priv;
+       u32 sr = readl_relaxed(priv->base + RNG_SR);
+       u32 cr = readl_relaxed(priv->base + RNG_CR);
+       int err;
+
+       if (sr & RNG_SR_SECS) {
+               /* Conceal by resetting the subsystem (step 1.) */
+               writel_relaxed(cr | RNG_CR_CONDRST, priv->base + RNG_CR);
+               writel_relaxed(cr & ~RNG_CR_CONDRST, priv->base + RNG_CR);
+       } else {
+               /* RNG auto-reset (step 2.) */
+               writel_relaxed(sr & ~RNG_SR_SEIS, priv->base + RNG_SR);
+               goto end;
+       }
+
+       err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_CR, cr, !(cr & RNG_CR_CONDRST), 10,
+                                               100000);
+       if (err) {
+               dev_err(dev, "%s: timeout %x\n", __func__, cr);
+               return err;
+       }
+
+       /* Check SEIS is cleared (step 2.) */
+       if (readl_relaxed(priv->base + RNG_SR) & RNG_SR_SEIS)
+               return -EINVAL;
+
+       err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_SR, sr, !(sr & RNG_SR_SECS), 10,
+                                               100000);
+       if (err) {
+               dev_err(dev, "%s: timeout %x\n", __func__, sr);
+               return err;
+       }
+
+end:
+       return 0;
+}
+
+/*
+ * Excerpts from the STM32 RNG specification, for RNG versions without CONDRST support.
+ *
+ * When a noise source (or seed) error occurs, the RNG stops generating
+ * random numbers and sets to “1” both SEIS and SECS bits to indicate
+ * that a seed error occurred. (...)
+ *
+ * The following sequence shall be used to fully recover from a seed
+ * error after the RNG initialization:
+ * 1. Clear the SEIS bit by writing it to “0”.
+ * 2. Read out 12 words from the RNG_DR register, and discard each of
+ * them in order to clean the pipeline.
+ * 3. Confirm that SEIS is still cleared. Random number generation is
+ * back to normal.
+ */
+static int stm32_rng_conceal_seed_error_sw_reset(struct stm32_rng_private *priv)
+{
+       unsigned int i = 0;
+       u32 sr = readl_relaxed(priv->base + RNG_SR);
+
+       writel_relaxed(sr & ~RNG_SR_SEIS, priv->base + RNG_SR);
+
+       for (i = 12; i != 0; i--)
+               (void)readl_relaxed(priv->base + RNG_DR);
+
+       if (readl_relaxed(priv->base + RNG_SR) & RNG_SR_SEIS)
+               return -EINVAL;
+
+       return 0;
+}
+
+static int stm32_rng_conceal_seed_error(struct hwrng *rng)
+{
+       struct stm32_rng_private *priv = container_of(rng, struct stm32_rng_private, rng);
+
+       dev_dbg((struct device *)priv->rng.priv, "Concealing seed error\n");
+
+       if (priv->data->has_cond_reset)
+               return stm32_rng_conceal_seed_error_cond_reset(priv);
+       else
+               return stm32_rng_conceal_seed_error_sw_reset(priv);
+}
+
 static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
 {
-       struct stm32_rng_private *priv =
-           container_of(rng, struct stm32_rng_private, rng);
+       struct stm32_rng_private *priv = container_of(rng, struct stm32_rng_private, rng);
+       unsigned int i = 0;
+       int retval = 0, err = 0;
        u32 sr;
-       int retval = 0;
 
        pm_runtime_get_sync((struct device *) priv->rng.priv);
 
+       if (readl_relaxed(priv->base + RNG_SR) & RNG_SR_SEIS)
+               stm32_rng_conceal_seed_error(rng);
+
        while (max >= sizeof(u32)) {
                sr = readl_relaxed(priv->base + RNG_SR);
-               /* Manage timeout which is based on timer and take */
-               /* care of initial delay time when enabling rng */
+               /*
+                * Manage the timeout, which is timer-based, and account
+                * for the initial delay after enabling the RNG.
+                */
                if (!sr && wait) {
-                       int err;
-
                        err = readl_relaxed_poll_timeout_atomic(priv->base
                                                                   + RNG_SR,
                                                                   sr, sr,
                                                                   10, 50000);
-                       if (err)
+                       if (err) {
                                dev_err((struct device *)priv->rng.priv,
                                        "%s: timeout %x!\n", __func__, sr);
+                               break;
+                       }
+               } else if (!sr) {
+                       /* The FIFO is being filled up */
+                       break;
                }
 
-               /* If error detected or data not ready... */
                if (sr != RNG_SR_DRDY) {
-                       if (WARN_ONCE(sr & (RNG_SR_SEIS | RNG_SR_CEIS),
-                                       "bad RNG status - %x\n", sr))
+                       if (sr & RNG_SR_SEIS) {
+                               err = stm32_rng_conceal_seed_error(rng);
+                               i++;
+                               if (err && i > RNG_NB_RECOVER_TRIES) {
+                                       dev_err((struct device *)priv->rng.priv,
+                                               "Couldn't recover from seed error\n");
+                                       return -ENOTRECOVERABLE;
+                               }
+
+                               continue;
+                       }
+
+                       if (WARN_ONCE((sr & RNG_SR_CEIS), "RNG clock too slow - %x\n", sr))
                                writel_relaxed(0, priv->base + RNG_SR);
-                       break;
                }
 
+               /* Late seed error case: DR being 0 is an error status */
                *(u32 *)data = readl_relaxed(priv->base + RNG_DR);
+               if (!*(u32 *)data) {
+                       err = stm32_rng_conceal_seed_error(rng);
+                       i++;
+                       if (err && i > RNG_NB_RECOVER_TRIES) {
+                               dev_err((struct device *)priv->rng.priv,
+                                       "Couldn't recover from seed error\n");
+                               return -ENOTRECOVERABLE;
+                       }
 
+                       continue;
+               }
+
+               i = 0;
                retval += sizeof(u32);
                data += sizeof(u32);
                max -= sizeof(u32);
@@ -82,54 +256,264 @@ static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
        return retval || !wait ? retval : -EIO;
 }
 
+static uint stm32_rng_clock_freq_restrain(struct hwrng *rng)
+{
+       struct stm32_rng_private *priv =
+           container_of(rng, struct stm32_rng_private, rng);
+       unsigned long clock_rate = 0;
+       uint clock_div = 0;
+
+       clock_rate = clk_get_rate(priv->clk);
+
+       /*
+        * Get the exponent to apply to the CLKDIV field in the RNG_CR
+        * register. No need to handle clock_div > 0xF: it is physically
+        * impossible.
+        */
+       while ((clock_rate >> clock_div) > priv->data->max_clock_rate)
+               clock_div++;
+
+       pr_debug("RNG clk rate : %lu\n", clk_get_rate(priv->clk) >> clock_div);
+
+       return clock_div;
+}
+
 static int stm32_rng_init(struct hwrng *rng)
 {
        struct stm32_rng_private *priv =
            container_of(rng, struct stm32_rng_private, rng);
        int err;
+       u32 reg;
 
        err = clk_prepare_enable(priv->clk);
        if (err)
                return err;
 
-       if (priv->ced)
-               writel_relaxed(RNG_CR_RNGEN, priv->base + RNG_CR);
-       else
-               writel_relaxed(RNG_CR_RNGEN | RNG_CR_CED,
-                              priv->base + RNG_CR);
-
        /* clear error indicators */
        writel_relaxed(0, priv->base + RNG_SR);
 
+       reg = readl_relaxed(priv->base + RNG_CR);
+
+       /*
+        * Keep default RNG configuration if none was specified.
+        * 0 is an invalid value as it disables all entropy sources.
+        */
+       if (priv->data->has_cond_reset && priv->data->cr) {
+               uint clock_div = stm32_rng_clock_freq_restrain(rng);
+
+               reg &= ~RNG_CR_CONFIG_MASK;
+               reg |= RNG_CR_CONDRST | (priv->data->cr & RNG_CR_ENTROPY_SRC_MASK) |
+                      (clock_div << RNG_CR_CLKDIV_SHIFT);
+               if (priv->ced)
+                       reg &= ~RNG_CR_CED;
+               else
+                       reg |= RNG_CR_CED;
+               writel_relaxed(reg, priv->base + RNG_CR);
+
+               /* Health tests and noise control registers */
+               writel_relaxed(priv->data->htcr, priv->base + RNG_HTCR);
+               writel_relaxed(priv->data->nscr & RNG_NSCR_MASK, priv->base + RNG_NSCR);
+
+               reg &= ~RNG_CR_CONDRST;
+               reg |= RNG_CR_RNGEN;
+               if (priv->lock_conf)
+                       reg |= RNG_CR_CONFLOCK;
+
+               writel_relaxed(reg, priv->base + RNG_CR);
+
+               err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_CR, reg,
+                                                       (!(reg & RNG_CR_CONDRST)),
+                                                       10, 50000);
+               if (err) {
+                       dev_err((struct device *)priv->rng.priv,
+                               "%s: timeout %x!\n", __func__, reg);
+                       return -EINVAL;
+               }
+       } else {
+               /* Handle all RNG versions by checking if conditional reset should be set */
+               if (priv->data->has_cond_reset)
+                       reg |= RNG_CR_CONDRST;
+
+               if (priv->ced)
+                       reg &= ~RNG_CR_CED;
+               else
+                       reg |= RNG_CR_CED;
+
+               writel_relaxed(reg, priv->base + RNG_CR);
+
+               if (priv->data->has_cond_reset)
+                       reg &= ~RNG_CR_CONDRST;
+
+               reg |= RNG_CR_RNGEN;
+
+               writel_relaxed(reg, priv->base + RNG_CR);
+       }
+
+       err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_SR, reg,
+                                               reg & RNG_SR_DRDY,
+                                               10, 100000);
+       if (err || (reg & ~RNG_SR_DRDY)) {
+               clk_disable_unprepare(priv->clk);
+               dev_err((struct device *)priv->rng.priv,
+                       "%s: timeout:%x SR: %x!\n", __func__, err, reg);
+               return -EINVAL;
+       }
+
        return 0;
 }
 
-static void stm32_rng_cleanup(struct hwrng *rng)
+static int stm32_rng_remove(struct platform_device *ofdev)
 {
-       struct stm32_rng_private *priv =
-           container_of(rng, struct stm32_rng_private, rng);
+       pm_runtime_disable(&ofdev->dev);
+
+       return 0;
+}
+
+static int __maybe_unused stm32_rng_runtime_suspend(struct device *dev)
+{
+       struct stm32_rng_private *priv = dev_get_drvdata(dev);
+       u32 reg;
 
-       writel_relaxed(0, priv->base + RNG_CR);
+       reg = readl_relaxed(priv->base + RNG_CR);
+       reg &= ~RNG_CR_RNGEN;
+       writel_relaxed(reg, priv->base + RNG_CR);
        clk_disable_unprepare(priv->clk);
+
+       return 0;
 }
 
+static int __maybe_unused stm32_rng_suspend(struct device *dev)
+{
+       struct stm32_rng_private *priv = dev_get_drvdata(dev);
+
+       if (priv->data->has_cond_reset) {
+               priv->pm_conf.nscr = readl_relaxed(priv->base + RNG_NSCR);
+               priv->pm_conf.htcr = readl_relaxed(priv->base + RNG_HTCR);
+       }
+
+       /* Do not save that RNG is enabled as it will be handled at resume */
+       priv->pm_conf.cr = readl_relaxed(priv->base + RNG_CR) & ~RNG_CR_RNGEN;
+
+       writel_relaxed(priv->pm_conf.cr, priv->base + RNG_CR);
+
+       clk_disable_unprepare(priv->clk);
+
+       return 0;
+}
+
+static int __maybe_unused stm32_rng_runtime_resume(struct device *dev)
+{
+       struct stm32_rng_private *priv = dev_get_drvdata(dev);
+       int err;
+       u32 reg;
+
+       err = clk_prepare_enable(priv->clk);
+       if (err)
+               return err;
+
+       /* Clean error indications */
+       writel_relaxed(0, priv->base + RNG_SR);
+
+       reg = readl_relaxed(priv->base + RNG_CR);
+       reg |= RNG_CR_RNGEN;
+       writel_relaxed(reg, priv->base + RNG_CR);
+
+       return 0;
+}
+
+static int __maybe_unused stm32_rng_resume(struct device *dev)
+{
+       struct stm32_rng_private *priv = dev_get_drvdata(dev);
+       int err;
+       u32 reg;
+
+       err = clk_prepare_enable(priv->clk);
+       if (err)
+               return err;
+
+       /* Clean error indications */
+       writel_relaxed(0, priv->base + RNG_SR);
+
+       if (priv->data->has_cond_reset) {
+               /*
+                * The configuration in bits [29:4] must be written in the
+                * same access that sets the RNG_CR_CONDRST bit, otherwise
+                * it is not taken into account. The CONFIGLOCK bit must
+                * also be unset, but that is not handled at the moment.
+                */
+               writel_relaxed(priv->pm_conf.cr | RNG_CR_CONDRST, priv->base + RNG_CR);
+
+               writel_relaxed(priv->pm_conf.nscr, priv->base + RNG_NSCR);
+               writel_relaxed(priv->pm_conf.htcr, priv->base + RNG_HTCR);
+
+               reg = readl_relaxed(priv->base + RNG_CR);
+               reg |= RNG_CR_RNGEN;
+               reg &= ~RNG_CR_CONDRST;
+               writel_relaxed(reg, priv->base + RNG_CR);
+
+               err = readl_relaxed_poll_timeout_atomic(priv->base + RNG_CR, reg,
+                                                       !(reg & RNG_CR_CONDRST), 10, 100000);
+
+               if (err) {
+                       clk_disable_unprepare(priv->clk);
+                       dev_err((struct device *)priv->rng.priv,
+                               "%s: timeout:%x CR: %x!\n", __func__, err, reg);
+                       return -EINVAL;
+               }
+       } else {
+               reg = priv->pm_conf.cr;
+               reg |= RNG_CR_RNGEN;
+               writel_relaxed(reg, priv->base + RNG_CR);
+       }
+
+       return 0;
+}
+
+static const struct dev_pm_ops __maybe_unused stm32_rng_pm_ops = {
+       SET_RUNTIME_PM_OPS(stm32_rng_runtime_suspend,
+                          stm32_rng_runtime_resume, NULL)
+       SET_SYSTEM_SLEEP_PM_OPS(stm32_rng_suspend,
+                               stm32_rng_resume)
+};
+
+static const struct stm32_rng_data stm32mp13_rng_data = {
+       .has_cond_reset = true,
+       .max_clock_rate = 48000000,
+       .cr = 0x00F00D00,
+       .nscr = 0x2B5BB,
+       .htcr = 0x969D,
+};
+
+static const struct stm32_rng_data stm32_rng_data = {
+       .has_cond_reset = false,
+       .max_clock_rate = 3000000,
+};
+
+static const struct of_device_id stm32_rng_match[] = {
+       {
+               .compatible = "st,stm32mp13-rng",
+               .data = &stm32mp13_rng_data,
+       },
+       {
+               .compatible = "st,stm32-rng",
+               .data = &stm32_rng_data,
+       },
+       {},
+};
+MODULE_DEVICE_TABLE(of, stm32_rng_match);
+
 static int stm32_rng_probe(struct platform_device *ofdev)
 {
        struct device *dev = &ofdev->dev;
        struct device_node *np = ofdev->dev.of_node;
        struct stm32_rng_private *priv;
-       struct resource res;
-       int err;
+       struct resource *res;
 
        priv = devm_kzalloc(dev, sizeof(struct stm32_rng_private), GFP_KERNEL);
        if (!priv)
                return -ENOMEM;
 
-       err = of_address_to_resource(np, 0, &res);
-       if (err)
-               return err;
-
-       priv->base = devm_ioremap_resource(dev, &res);
+       priv->base = devm_platform_get_and_ioremap_resource(ofdev, 0, &res);
        if (IS_ERR(priv->base))
                return PTR_ERR(priv->base);
 
@@ -145,14 +529,16 @@ static int stm32_rng_probe(struct platform_device *ofdev)
        }
 
        priv->ced = of_property_read_bool(np, "clock-error-detect");
+       priv->lock_conf = of_property_read_bool(np, "st,rng-lock-conf");
+
+       priv->data = of_device_get_match_data(dev);
+       if (!priv->data)
+               return -ENODEV;
 
        dev_set_drvdata(dev, priv);
 
        priv->rng.name = dev_driver_string(dev);
-#ifndef CONFIG_PM
        priv->rng.init = stm32_rng_init;
-       priv->rng.cleanup = stm32_rng_cleanup;
-#endif
        priv->rng.read = stm32_rng_read;
        priv->rng.priv = (unsigned long) dev;
        priv->rng.quality = 900;
@@ -164,51 +550,10 @@ static int stm32_rng_probe(struct platform_device *ofdev)
        return devm_hwrng_register(dev, &priv->rng);
 }
 
-static int stm32_rng_remove(struct platform_device *ofdev)
-{
-       pm_runtime_disable(&ofdev->dev);
-
-       return 0;
-}
-
-#ifdef CONFIG_PM
-static int stm32_rng_runtime_suspend(struct device *dev)
-{
-       struct stm32_rng_private *priv = dev_get_drvdata(dev);
-
-       stm32_rng_cleanup(&priv->rng);
-
-       return 0;
-}
-
-static int stm32_rng_runtime_resume(struct device *dev)
-{
-       struct stm32_rng_private *priv = dev_get_drvdata(dev);
-
-       return stm32_rng_init(&priv->rng);
-}
-#endif
-
-static const struct dev_pm_ops stm32_rng_pm_ops = {
-       SET_RUNTIME_PM_OPS(stm32_rng_runtime_suspend,
-                          stm32_rng_runtime_resume, NULL)
-       SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
-                               pm_runtime_force_resume)
-};
-
-
-static const struct of_device_id stm32_rng_match[] = {
-       {
-               .compatible = "st,stm32-rng",
-       },
-       {},
-};
-MODULE_DEVICE_TABLE(of, stm32_rng_match);
-
 static struct platform_driver stm32_rng_driver = {
        .driver = {
                .name = "stm32-rng",
-               .pm = &stm32_rng_pm_ops,
+               .pm = pm_ptr(&stm32_rng_pm_ops),
                .of_match_table = stm32_rng_match,
        },
        .probe = stm32_rng_probe,
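
One stm32 detail that benefits from a worked example: stm32_rng_clock_freq_restrain() computes the RNG_CR CLKDIV field as the smallest power-of-two exponent that keeps the RNG clock at or below the per-SoC maximum (48 MHz for stm32mp13_rng_data, 3 MHz for the legacy data). A standalone sketch of that arithmetic:

    #include <stdio.h>

    static unsigned int clkdiv_sketch(unsigned long rate, unsigned long max)
    {
            unsigned int div = 0;

            while ((rate >> div) > max)
                    div++;
            return div;   /* goes into RNG_CR bits [19:16] */
    }

    int main(void)
    {
            /* e.g. a 64 MHz kernel clock against the 48 MHz cap:
             * div = 1, for an effective 32 MHz RNG clock */
            printf("%u\n", clkdiv_sketch(64000000UL, 48000000UL));
            return 0;
    }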
index 99f4e86ac3e9ad67fab5b42b46a30beef6ae1990..7382724bf501c231029d837cf568f73f818922b9 100644 (file)
@@ -321,7 +321,6 @@ static int xgene_rng_probe(struct platform_device *pdev)
                return -ENOMEM;
 
        ctx->dev = &pdev->dev;
-       platform_set_drvdata(pdev, ctx);
 
        ctx->csr_base = devm_platform_ioremap_resource(pdev, 0);
        if (IS_ERR(ctx->csr_base))
index 2c586d1fe8a952b500e4aa2dfc3bf5ce4cc6ff30..4af64f76c8d613189cca8714ee2b5d86651698e6 100644 (file)
@@ -121,8 +121,6 @@ static int xiphera_trng_probe(struct platform_device *pdev)
                return ret;
        }
 
-       platform_set_drvdata(pdev, trng);
-
        return 0;
 }
 
index c761952f0dc6df92e1ee37ad5707bb7539c2cef3..79c3bb9c99c3bf78a4f65db2b648a174e5dedb35 100644 (file)
@@ -601,6 +601,7 @@ config CRYPTO_DEV_QCE_SW_MAX_LEN
 config CRYPTO_DEV_QCOM_RNG
        tristate "Qualcomm Random Number Generator Driver"
        depends on ARCH_QCOM || COMPILE_TEST
+       depends on HW_RANDOM
        select CRYPTO_RNG
        help
          This driver provides support for the Random Number
index 3bcfcfc3708426ae28f967d43f6cbccb2c29960c..890664bd5f0f133552acf833f1cadda0778e02bc 100644 (file)
@@ -49,7 +49,6 @@ static struct sun4i_ss_alg_template ss_algs[] = {
                                .cra_name = "md5",
                                .cra_driver_name = "md5-sun4i-ss",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_blocksize = MD5_HMAC_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct sun4i_req_ctx),
                                .cra_module = THIS_MODULE,
@@ -76,7 +75,6 @@ static struct sun4i_ss_alg_template ss_algs[] = {
                                .cra_name = "sha1",
                                .cra_driver_name = "sha1-sun4i-ss",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_blocksize = SHA1_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct sun4i_req_ctx),
                                .cra_module = THIS_MODULE,
@@ -509,7 +507,7 @@ error_pm:
        return err;
 }
 
-static int sun4i_ss_remove(struct platform_device *pdev)
+static void sun4i_ss_remove(struct platform_device *pdev)
 {
        int i;
        struct sun4i_ss_ctx *ss = platform_get_drvdata(pdev);
@@ -529,7 +527,6 @@ static int sun4i_ss_remove(struct platform_device *pdev)
        }
 
        sun4i_ss_pm_exit(ss);
-       return 0;
 }
 
 static const struct of_device_id a20ss_crypto_of_match_table[] = {
@@ -545,7 +542,7 @@ MODULE_DEVICE_TABLE(of, a20ss_crypto_of_match_table);
 
 static struct platform_driver sun4i_ss_driver = {
        .probe          = sun4i_ss_probe,
-       .remove         = sun4i_ss_remove,
+       .remove_new     = sun4i_ss_remove,
        .driver         = {
                .name           = "sun4i-ss",
                .pm             = &sun4i_ss_pm_ops,
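
The sun4i hunk above, like the sun8i-ce/ss, crypto4xx, meson-gxl, aspeed, and atmel hunks that follow, converts to the void-returning .remove_new callback: the int a platform remove callback used to return was effectively ignored by the driver core, so drivers now log failures instead of returning them. A hedged skeleton of the conversion (in-tree code only; the example_* names are hypothetical):

    static void example_remove(struct platform_device *pdev)
    {
            struct example_dev *d = platform_get_drvdata(pdev);

            example_teardown(d);   /* hypothetical; log errors, don't return them */
    }

    static struct platform_driver example_driver = {
            .probe          = example_probe,
            .remove_new     = example_remove,   /* was .remove, returning int */
            .driver         = {
                    .name   = "example",
            },
    };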
index d4ccd5254280bcf8e2a37df9eae83106507dbff6..0408b2d5d533b856e8c3add8d5f0a24827b9c8d1 100644 (file)
@@ -414,7 +414,6 @@ static struct sun8i_ce_alg_template ce_algs[] = {
                                .cra_name = "md5",
                                .cra_driver_name = "md5-sun8i-ce",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -448,7 +447,6 @@ static struct sun8i_ce_alg_template ce_algs[] = {
                                .cra_name = "sha1",
                                .cra_driver_name = "sha1-sun8i-ce",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -481,7 +479,6 @@ static struct sun8i_ce_alg_template ce_algs[] = {
                                .cra_name = "sha224",
                                .cra_driver_name = "sha224-sun8i-ce",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -514,7 +511,6 @@ static struct sun8i_ce_alg_template ce_algs[] = {
                                .cra_name = "sha256",
                                .cra_driver_name = "sha256-sun8i-ce",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -547,7 +543,6 @@ static struct sun8i_ce_alg_template ce_algs[] = {
                                .cra_name = "sha384",
                                .cra_driver_name = "sha384-sun8i-ce",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -580,7 +575,6 @@ static struct sun8i_ce_alg_template ce_algs[] = {
                                .cra_name = "sha512",
                                .cra_driver_name = "sha512-sun8i-ce",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -1071,7 +1065,7 @@ error_pm:
        return err;
 }
 
-static int sun8i_ce_remove(struct platform_device *pdev)
+static void sun8i_ce_remove(struct platform_device *pdev)
 {
        struct sun8i_ce_dev *ce = platform_get_drvdata(pdev);
 
@@ -1088,7 +1082,6 @@ static int sun8i_ce_remove(struct platform_device *pdev)
        sun8i_ce_free_chanlist(ce, MAXFLOW - 1);
 
        sun8i_ce_pm_exit(ce);
-       return 0;
 }
 
 static const struct of_device_id sun8i_ce_crypto_of_match_table[] = {
@@ -1110,7 +1103,7 @@ MODULE_DEVICE_TABLE(of, sun8i_ce_crypto_of_match_table);
 
 static struct platform_driver sun8i_ce_driver = {
        .probe           = sun8i_ce_probe,
-       .remove          = sun8i_ce_remove,
+       .remove_new      = sun8i_ce_remove,
        .driver          = {
                .name           = "sun8i-ce",
                .pm             = &sun8i_ce_pm_ops,
index 4a9587285c04f51a34958c0941a8979b050aa35f..0dbc0220146c71dc04591a9d75d220623ed729b0 100644 (file)
@@ -322,7 +322,6 @@ static struct sun8i_ss_alg_template ss_algs[] = {
                                .cra_name = "md5",
                                .cra_driver_name = "md5-sun8i-ss",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -355,7 +354,6 @@ static struct sun8i_ss_alg_template ss_algs[] = {
                                .cra_name = "sha1",
                                .cra_driver_name = "sha1-sun8i-ss",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -388,7 +386,6 @@ static struct sun8i_ss_alg_template ss_algs[] = {
                                .cra_name = "sha224",
                                .cra_driver_name = "sha224-sun8i-ss",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -421,7 +418,6 @@ static struct sun8i_ss_alg_template ss_algs[] = {
                                .cra_name = "sha256",
                                .cra_driver_name = "sha256-sun8i-ss",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -455,7 +451,6 @@ static struct sun8i_ss_alg_template ss_algs[] = {
                                .cra_name = "hmac(sha1)",
                                .cra_driver_name = "hmac-sha1-sun8i-ss",
                                .cra_priority = 300,
-                               .cra_alignmask = 3,
                                .cra_flags = CRYPTO_ALG_TYPE_AHASH |
                                        CRYPTO_ALG_ASYNC |
                                        CRYPTO_ALG_NEED_FALLBACK,
@@ -908,7 +903,7 @@ error_pm:
        return err;
 }
 
-static int sun8i_ss_remove(struct platform_device *pdev)
+static void sun8i_ss_remove(struct platform_device *pdev)
 {
        struct sun8i_ss_dev *ss = platform_get_drvdata(pdev);
 
@@ -921,8 +916,6 @@ static int sun8i_ss_remove(struct platform_device *pdev)
        sun8i_ss_free_flows(ss, MAXFLOW - 1);
 
        sun8i_ss_pm_exit(ss);
-
-       return 0;
 }
 
 static const struct of_device_id sun8i_ss_crypto_of_match_table[] = {
@@ -936,7 +929,7 @@ MODULE_DEVICE_TABLE(of, sun8i_ss_crypto_of_match_table);
 
 static struct platform_driver sun8i_ss_driver = {
        .probe           = sun8i_ss_probe,
-       .remove          = sun8i_ss_remove,
+       .remove_new      = sun8i_ss_remove,
        .driver          = {
                .name           = "sun8i-ss",
                .pm             = &sun8i_ss_pm_ops,
index d553f3f1efbeea65c498d212d6dbe2ecab1ab00b..8d53372245ad6bc93121c37c064899fb32c06c9a 100644 (file)
@@ -1507,7 +1507,7 @@ err_alloc_dev:
        return rc;
 }
 
-static int crypto4xx_remove(struct platform_device *ofdev)
+static void crypto4xx_remove(struct platform_device *ofdev)
 {
        struct device *dev = &ofdev->dev;
        struct crypto4xx_core_device *core_dev = dev_get_drvdata(dev);
@@ -1523,8 +1523,6 @@ static int crypto4xx_remove(struct platform_device *ofdev)
        mutex_destroy(&core_dev->rng_lock);
        /* Free all allocated memory */
        crypto4xx_stop_all(core_dev);
-
-       return 0;
 }
 
 static const struct of_device_id crypto4xx_match[] = {
@@ -1539,7 +1537,7 @@ static struct platform_driver crypto4xx_driver = {
                .of_match_table = crypto4xx_match,
        },
        .probe          = crypto4xx_probe,
-       .remove         = crypto4xx_remove,
+       .remove_new     = crypto4xx_remove,
 };
 
 module_platform_driver(crypto4xx_driver);
index da6dfe0f9ac33232fd77c09a26be50d34d91a02c..f54ab0d0b1e852f7bb71d998684d23e58389a0ad 100644 (file)
@@ -299,7 +299,7 @@ error_flow:
        return err;
 }
 
-static int meson_crypto_remove(struct platform_device *pdev)
+static void meson_crypto_remove(struct platform_device *pdev)
 {
        struct meson_dev *mc = platform_get_drvdata(pdev);
 
@@ -312,7 +312,6 @@ static int meson_crypto_remove(struct platform_device *pdev)
        meson_free_chanlist(mc, MAXFLOW - 1);
 
        clk_disable_unprepare(mc->busclk);
-       return 0;
 }
 
 static const struct of_device_id meson_crypto_of_match_table[] = {
@@ -323,7 +322,7 @@ MODULE_DEVICE_TABLE(of, meson_crypto_of_match_table);
 
 static struct platform_driver meson_crypto_driver = {
        .probe           = meson_crypto_probe,
-       .remove          = meson_crypto_remove,
+       .remove_new      = meson_crypto_remove,
        .driver          = {
                .name              = "gxl-crypto",
                .of_match_table = meson_crypto_of_match_table,
index 247c568aa8dfe3eeeb99c985cbd46822cd342ba3..b4613bd4ad964398ef0b123c046391c54f3e3e2e 100644 (file)
@@ -794,7 +794,7 @@ clk_exit:
        return rc;
 }
 
-static int aspeed_acry_remove(struct platform_device *pdev)
+static void aspeed_acry_remove(struct platform_device *pdev)
 {
        struct aspeed_acry_dev *acry_dev = platform_get_drvdata(pdev);
 
@@ -802,15 +802,13 @@ static int aspeed_acry_remove(struct platform_device *pdev)
        crypto_engine_exit(acry_dev->crypt_engine_rsa);
        tasklet_kill(&acry_dev->done_task);
        clk_disable_unprepare(acry_dev->clk);
-
-       return 0;
 }
 
 MODULE_DEVICE_TABLE(of, aspeed_acry_of_matches);
 
 static struct platform_driver aspeed_acry_driver = {
        .probe          = aspeed_acry_probe,
-       .remove         = aspeed_acry_remove,
+       .remove_new     = aspeed_acry_remove,
        .driver         = {
                .name   = KBUILD_MODNAME,
                .of_match_table = aspeed_acry_of_matches,
index 8f7aab82e1d82c8834c7881f12d762526c071b1d..062f2a66dd23992675a625f1a92e5aba7ea7e82c 100644 (file)
 #include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/of_address.h>
-#include <linux/of_device.h>
-#include <linux/of_irq.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
+#include <linux/property.h>
 
 #ifdef CONFIG_CRYPTO_DEV_ASPEED_DEBUG
 #define HACE_DBG(d, fmt, ...)  \
@@ -101,7 +99,6 @@ static const struct of_device_id aspeed_hace_of_matches[] = {
 static int aspeed_hace_probe(struct platform_device *pdev)
 {
        struct aspeed_engine_crypto *crypto_engine;
-       const struct of_device_id *hace_dev_id;
        struct aspeed_engine_hash *hash_engine;
        struct aspeed_hace_dev *hace_dev;
        int rc;
@@ -111,14 +108,13 @@ static int aspeed_hace_probe(struct platform_device *pdev)
        if (!hace_dev)
                return -ENOMEM;
 
-       hace_dev_id = of_match_device(aspeed_hace_of_matches, &pdev->dev);
-       if (!hace_dev_id) {
+       hace_dev->version = (uintptr_t)device_get_match_data(&pdev->dev);
+       if (!hace_dev->version) {
                dev_err(&pdev->dev, "Failed to match hace dev id\n");
                return -EINVAL;
        }
 
        hace_dev->dev = &pdev->dev;
-       hace_dev->version = (unsigned long)hace_dev_id->data;
        hash_engine = &hace_dev->hash_engine;
        crypto_engine = &hace_dev->crypto_engine;
 
@@ -249,7 +245,7 @@ clk_exit:
        return rc;
 }
 
-static int aspeed_hace_remove(struct platform_device *pdev)
+static void aspeed_hace_remove(struct platform_device *pdev)
 {
        struct aspeed_hace_dev *hace_dev = platform_get_drvdata(pdev);
        struct aspeed_engine_crypto *crypto_engine = &hace_dev->crypto_engine;
@@ -264,15 +260,13 @@ static int aspeed_hace_remove(struct platform_device *pdev)
        tasklet_kill(&crypto_engine->done_task);
 
        clk_disable_unprepare(hace_dev->clk);
-
-       return 0;
 }
 
 MODULE_DEVICE_TABLE(of, aspeed_hace_of_matches);
 
 static struct platform_driver aspeed_hace_driver = {
        .probe          = aspeed_hace_probe,
-       .remove         = aspeed_hace_remove,
+       .remove_new     = aspeed_hace_remove,
        .driver         = {
                .name   = KBUILD_MODNAME,
                .of_match_table = aspeed_hace_of_matches,
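Besides the remove conversion, the aspeed_hace hunks swap of_match_device() for device_get_match_data(), which hands back the .data member of the matching of_device_id (or NULL on no match) without an explicit table lookup. A hedged sketch of the idiom, names illustrative:

#include <linux/platform_device.h>
#include <linux/property.h>

static int foo_probe(struct platform_device *pdev)
{
        /*
         * The match data here encodes a non-zero hardware version, so
         * NULL can double as "no match"; drivers whose match data may
         * legitimately be zero need a different check.
         */
        unsigned long version = (uintptr_t)device_get_match_data(&pdev->dev);

        if (!version)
                return -EINVAL;

        return 0;
}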
index 55b5f577b01c84a92a78df73dcb6783c489b6827..d1d93e897892e242a6249479b41b99abae106349 100644 (file)
@@ -2648,7 +2648,7 @@ err_tasklet_kill:
        return err;
 }
 
-static int atmel_aes_remove(struct platform_device *pdev)
+static void atmel_aes_remove(struct platform_device *pdev)
 {
        struct atmel_aes_dev *aes_dd;
 
@@ -2667,13 +2667,11 @@ static int atmel_aes_remove(struct platform_device *pdev)
        atmel_aes_buff_cleanup(aes_dd);
 
        clk_unprepare(aes_dd->iclk);
-
-       return 0;
 }
 
 static struct platform_driver atmel_aes_driver = {
        .probe          = atmel_aes_probe,
-       .remove         = atmel_aes_remove,
+       .remove_new     = atmel_aes_remove,
        .driver         = {
                .name   = "atmel_aes",
                .of_match_table = atmel_aes_dt_ids,
index 3622120add625af2242201e5964ef699def24a5c..f4cd6158a4f7877f3e816c683e22ecacbde16310 100644 (file)
@@ -1300,7 +1300,6 @@ static struct ahash_alg sha_384_512_algs[] = {
        .halg.base.cra_name             = "sha384",
        .halg.base.cra_driver_name      = "atmel-sha384",
        .halg.base.cra_blocksize        = SHA384_BLOCK_SIZE,
-       .halg.base.cra_alignmask        = 0x3,
 
        .halg.digestsize = SHA384_DIGEST_SIZE,
 },
@@ -1308,7 +1307,6 @@ static struct ahash_alg sha_384_512_algs[] = {
        .halg.base.cra_name             = "sha512",
        .halg.base.cra_driver_name      = "atmel-sha512",
        .halg.base.cra_blocksize        = SHA512_BLOCK_SIZE,
-       .halg.base.cra_alignmask        = 0x3,
 
        .halg.digestsize = SHA512_DIGEST_SIZE,
 },
@@ -2680,7 +2678,7 @@ err_tasklet_kill:
        return err;
 }
 
-static int atmel_sha_remove(struct platform_device *pdev)
+static void atmel_sha_remove(struct platform_device *pdev)
 {
        struct atmel_sha_dev *sha_dd = platform_get_drvdata(pdev);
 
@@ -2697,13 +2695,11 @@ static int atmel_sha_remove(struct platform_device *pdev)
                atmel_sha_dma_cleanup(sha_dd);
 
        clk_unprepare(sha_dd->iclk);
-
-       return 0;
 }
 
 static struct platform_driver atmel_sha_driver = {
        .probe          = atmel_sha_probe,
-       .remove         = atmel_sha_remove,
+       .remove_new     = atmel_sha_remove,
        .driver         = {
                .name   = "atmel_sha",
                .of_match_table = atmel_sha_dt_ids,
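The dropped .cra_alignmask lines in the atmel_sha hunks (and in the artpec6 ones below) follow from this series' removal of the ahash alignmask attribute: a non-zero alignmask on an ahash is no longer supported, so drivers handle any alignment requirements internally. Sketch of the slimmed-down template, fields copied from the hunk above:

static struct ahash_alg sha512_alg_sketch = {
        .halg.base.cra_name             = "sha512",
        .halg.base.cra_driver_name      = "atmel-sha512",
        .halg.base.cra_blocksize        = SHA512_BLOCK_SIZE,
        /* no .cra_alignmask: alignmask support was removed for ahash */
        .halg.digestsize                = SHA512_DIGEST_SIZE,
};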
index 099b32a10dd753d8ce5230fd53dba176cad42c87..27b7000e25bc721cae7081397af64b5f49f26cbb 100644 (file)
@@ -1246,7 +1246,7 @@ err_tasklet_kill:
        return err;
 }
 
-static int atmel_tdes_remove(struct platform_device *pdev)
+static void atmel_tdes_remove(struct platform_device *pdev)
 {
        struct atmel_tdes_dev *tdes_dd = platform_get_drvdata(pdev);
 
@@ -1263,13 +1263,11 @@ static int atmel_tdes_remove(struct platform_device *pdev)
                atmel_tdes_dma_cleanup(tdes_dd);
 
        atmel_tdes_buff_cleanup(tdes_dd);
-
-       return 0;
 }
 
 static struct platform_driver atmel_tdes_driver = {
        .probe          = atmel_tdes_probe,
-       .remove         = atmel_tdes_remove,
+       .remove_new     = atmel_tdes_remove,
        .driver         = {
                .name   = "atmel_tdes",
                .of_match_table = atmel_tdes_dt_ids,
index 8493a45e1bd46f0b18c62fb03e6f0451104145ea..ef9fe13ffa593d7e5e996b3a78dd1b102f36d293 100644 (file)
@@ -2635,7 +2635,6 @@ static struct ahash_alg hash_algos[] = {
                                     CRYPTO_ALG_ALLOCATES_MEMORY,
                        .cra_blocksize = SHA1_BLOCK_SIZE,
                        .cra_ctxsize = sizeof(struct artpec6_hashalg_context),
-                       .cra_alignmask = 3,
                        .cra_module = THIS_MODULE,
                        .cra_init = artpec6_crypto_ahash_init,
                        .cra_exit = artpec6_crypto_ahash_exit,
@@ -2659,7 +2658,6 @@ static struct ahash_alg hash_algos[] = {
                                     CRYPTO_ALG_ALLOCATES_MEMORY,
                        .cra_blocksize = SHA256_BLOCK_SIZE,
                        .cra_ctxsize = sizeof(struct artpec6_hashalg_context),
-                       .cra_alignmask = 3,
                        .cra_module = THIS_MODULE,
                        .cra_init = artpec6_crypto_ahash_init,
                        .cra_exit = artpec6_crypto_ahash_exit,
@@ -2684,7 +2682,6 @@ static struct ahash_alg hash_algos[] = {
                                     CRYPTO_ALG_ALLOCATES_MEMORY,
                        .cra_blocksize = SHA256_BLOCK_SIZE,
                        .cra_ctxsize = sizeof(struct artpec6_hashalg_context),
-                       .cra_alignmask = 3,
                        .cra_module = THIS_MODULE,
                        .cra_init = artpec6_crypto_ahash_init_hmac_sha256,
                        .cra_exit = artpec6_crypto_ahash_exit,
@@ -2957,7 +2954,7 @@ free_cache:
        return err;
 }
 
-static int artpec6_crypto_remove(struct platform_device *pdev)
+static void artpec6_crypto_remove(struct platform_device *pdev)
 {
        struct artpec6_crypto *ac = platform_get_drvdata(pdev);
        int irq = platform_get_irq(pdev, 0);
@@ -2977,12 +2974,11 @@ static int artpec6_crypto_remove(struct platform_device *pdev)
 #ifdef CONFIG_DEBUG_FS
        artpec6_crypto_free_debugfs();
 #endif
-       return 0;
 }
 
 static struct platform_driver artpec6_crypto_driver = {
        .probe   = artpec6_crypto_probe,
-       .remove  = artpec6_crypto_remove,
+       .remove_new = artpec6_crypto_remove,
        .driver  = {
                .name  = "artpec6-crypto",
                .of_match_table = artpec6_crypto_of_match,
index 689be70d69c18b955ef58bf30a8fe6e3db12580f..10968ddb146b1540d1283a4ed0ba72758a30cb68 100644 (file)
@@ -4713,7 +4713,7 @@ failure:
        return err;
 }
 
-static int bcm_spu_remove(struct platform_device *pdev)
+static void bcm_spu_remove(struct platform_device *pdev)
 {
        int i;
        struct device *dev = &pdev->dev;
@@ -4751,7 +4751,6 @@ static int bcm_spu_remove(struct platform_device *pdev)
        }
        spu_free_debugfs();
        spu_mb_release(pdev);
-       return 0;
 }
 
 /* ===== Kernel Module API ===== */
@@ -4762,7 +4761,7 @@ static struct platform_driver bcm_spu_pdriver = {
                   .of_match_table = of_match_ptr(bcm_spu_dt_ids),
                   },
        .probe = bcm_spu_probe,
-       .remove = bcm_spu_remove,
+       .remove_new = bcm_spu_remove,
 };
 module_platform_driver(bcm_spu_pdriver);
 
index eba2d750c3b074b4de245c202c60aa12219fc052..066f08a3a040d875aab98bc9f2603fbb2381a3f6 100644 (file)
@@ -575,7 +575,8 @@ static int chachapoly_setkey(struct crypto_aead *aead, const u8 *key,
        if (keylen != CHACHA_KEY_SIZE + saltlen)
                return -EINVAL;
 
-       ctx->cdata.key_virt = key;
+       memcpy(ctx->key, key, keylen);
+       ctx->cdata.key_virt = ctx->key;
        ctx->cdata.keylen = keylen - saltlen;
 
        return chachapoly_set_sh_desc(aead);
index 9156bbe038b7b0820608066e0c428f7daa260ce4..a148ff1f0872c419fc2198f64174d26e45342289 100644 (file)
@@ -641,7 +641,8 @@ static int chachapoly_setkey(struct crypto_aead *aead, const u8 *key,
        if (keylen != CHACHA_KEY_SIZE + saltlen)
                return -EINVAL;
 
-       ctx->cdata.key_virt = key;
+       memcpy(ctx->key, key, keylen);
+       ctx->cdata.key_virt = ctx->key;
        ctx->cdata.keylen = keylen - saltlen;
 
        return chachapoly_set_sh_desc(aead);
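Both caam hunks fix the same lifetime bug: setkey() used to stash the caller's key pointer in ctx->cdata.key_virt, but crypto API callers are free to reuse or free the key buffer as soon as setkey() returns, leaving the driver with a dangling pointer when it later builds descriptors. The fix copies into driver-owned context memory; condensed form, with ctx->key assumed to be sized for key material plus salt:

        /* own the key material before remembering a pointer to it */
        memcpy(ctx->key, key, keylen);
        ctx->cdata.key_virt = ctx->key;
        ctx->cdata.keylen = keylen - saltlen;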
index b1f1b393b98e6ca8a8f1d600b9bb94056adcd060..26eba7de3fb0e4b7b1127a72b8e4a7657bd1895d 100644 (file)
@@ -180,7 +180,7 @@ static int caam_jr_shutdown(struct device *dev)
        return ret;
 }
 
-static int caam_jr_remove(struct platform_device *pdev)
+static void caam_jr_remove(struct platform_device *pdev)
 {
        int ret;
        struct device *jrdev;
@@ -193,11 +193,14 @@ static int caam_jr_remove(struct platform_device *pdev)
                caam_rng_exit(jrdev->parent);
 
        /*
-        * Return EBUSY if job ring already allocated.
+        * If a job ring is still allocated there is trouble ahead. Once
+        * caam_jr_remove() has returned, jrpriv will be freed and the
+        * registers will be unmapped, so any remaining user of the job
+        * ring will probably crash.
         */
        if (atomic_read(&jrpriv->tfm_count)) {
-               dev_err(jrdev, "Device is busy\n");
-               return -EBUSY;
+               dev_alert(jrdev, "Device is busy; consumers might start to crash\n");
+               return;
        }
 
        /* Unregister JR-based RNG & crypto algorithms */
@@ -212,13 +215,6 @@ static int caam_jr_remove(struct platform_device *pdev)
        ret = caam_jr_shutdown(jrdev);
        if (ret)
                dev_err(jrdev, "Failed to shut down job ring\n");
-
-       return ret;
-}
-
-static void caam_jr_platform_shutdown(struct platform_device *pdev)
-{
-       caam_jr_remove(pdev);
 }
 
 /* Main per-ring interrupt handler */
@@ -823,8 +819,8 @@ static struct platform_driver caam_jr_driver = {
                .pm = pm_ptr(&caam_jr_pm_ops),
        },
        .probe       = caam_jr_probe,
-       .remove      = caam_jr_remove,
-       .shutdown    = caam_jr_platform_shutdown,
+       .remove_new  = caam_jr_remove,
+       .shutdown    = caam_jr_remove,
 };
 
 static int __init jr_driver_init(void)
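A side effect of the conversion: caam_jr_remove() now matches the .shutdown prototype, which is why the caam_jr_platform_shutdown() wrapper could be deleted and the same function wired to both hooks above. For reference, the two callbacks in struct platform_driver share a signature:

/*
 * From include/linux/platform_device.h (paraphrased):
 *     void (*remove_new)(struct platform_device *);
 *     void (*shutdown)(struct platform_device *);
 * Identical types, so one teardown function can serve both.
 */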
index 13b137410b75272b36927c3bda03bcfe99eee605..1b5abdb6cc5e15804937077dc6ad0b3acf1aa141 100644 (file)
@@ -647,7 +647,7 @@ void nitrox_get_hwinfo(struct nitrox_device *ndev)
                 ndev->hw.revision_id);
 
        /* copy partname */
-       strncpy(ndev->hw.partname, name, sizeof(ndev->hw.partname));
+       strscpy(ndev->hw.partname, name, sizeof(ndev->hw.partname));
 }
 
 void enable_pf2vf_mbox_interrupts(struct nitrox_device *ndev)
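strscpy() is the safer choice here because, unlike strncpy(), it always NUL-terminates the destination and reports truncation instead of silently leaving an unterminated buffer behind. A small illustration with an invented part name:

        char partname[8];

        /* strncpy(): fills all 8 bytes, no NUL if the source is longer */
        strncpy(partname, "CNN55XX-extra", sizeof(partname));

        /* strscpy(): copies at most 7 chars + NUL, returns -E2BIG if cut */
        if (strscpy(partname, "CNN55XX-extra", sizeof(partname)) < 0)
                pr_debug("partname truncated\n");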
index 839ea14b9a853f1737ad7693b4c719dc024c3de0..d373caab52f886799c4ca58b1e9011dbefcd86ee 100644 (file)
@@ -9,6 +9,7 @@
 
 #include "dbc.h"
 
+#define DBC_DEFAULT_TIMEOUT            (10 * MSEC_PER_SEC)
 struct error_map {
        u32 psp;
        int ret;
@@ -37,22 +38,37 @@ static struct error_map error_codes[] = {
        {0x0,   0x0},
 };
 
-static int send_dbc_cmd(struct psp_dbc_device *dbc_dev,
-                       enum psp_platform_access_msg msg)
+static inline int send_dbc_cmd_thru_ext(struct psp_dbc_device *dbc_dev, int msg)
+{
+       dbc_dev->mbox->ext_req.header.sub_cmd_id = msg;
+
+       return psp_extended_mailbox_cmd(dbc_dev->psp,
+                                       DBC_DEFAULT_TIMEOUT,
+                                       (struct psp_ext_request *)dbc_dev->mbox);
+}
+
+static inline int send_dbc_cmd_thru_pa(struct psp_dbc_device *dbc_dev, int msg)
+{
+       return psp_send_platform_access_msg(msg,
+                                           (struct psp_request *)dbc_dev->mbox);
+}
+
+static int send_dbc_cmd(struct psp_dbc_device *dbc_dev, int msg)
 {
        int ret;
 
-       dbc_dev->mbox->req.header.status = 0;
-       ret = psp_send_platform_access_msg(msg, (struct psp_request *)dbc_dev->mbox);
+       *dbc_dev->result = 0;
+       ret = dbc_dev->use_ext ? send_dbc_cmd_thru_ext(dbc_dev, msg) :
+                                send_dbc_cmd_thru_pa(dbc_dev, msg);
        if (ret == -EIO) {
                int i;
 
                dev_dbg(dbc_dev->dev,
                         "msg 0x%x failed with PSP error: 0x%x\n",
-                        msg, dbc_dev->mbox->req.header.status);
+                        msg, *dbc_dev->result);
 
                for (i = 0; error_codes[i].psp; i++) {
-                       if (dbc_dev->mbox->req.header.status == error_codes[i].psp)
+                       if (*dbc_dev->result == error_codes[i].psp)
                                return error_codes[i].ret;
                }
        }
@@ -64,7 +80,7 @@ static int send_dbc_nonce(struct psp_dbc_device *dbc_dev)
 {
        int ret;
 
-       dbc_dev->mbox->req.header.payload_size = sizeof(dbc_dev->mbox->dbc_nonce);
+       *dbc_dev->payload_size = dbc_dev->header_size + sizeof(struct dbc_user_nonce);
        ret = send_dbc_cmd(dbc_dev, PSP_DYNAMIC_BOOST_GET_NONCE);
        if (ret == -EAGAIN) {
                dev_dbg(dbc_dev->dev, "retrying get nonce\n");
@@ -76,9 +92,9 @@ static int send_dbc_nonce(struct psp_dbc_device *dbc_dev)
 
 static int send_dbc_parameter(struct psp_dbc_device *dbc_dev)
 {
-       dbc_dev->mbox->req.header.payload_size = sizeof(dbc_dev->mbox->dbc_param);
+       struct dbc_user_param *user_param = (struct dbc_user_param *)dbc_dev->payload;
 
-       switch (dbc_dev->mbox->dbc_param.user.msg_index) {
+       switch (user_param->msg_index) {
        case PARAM_SET_FMAX_CAP:
        case PARAM_SET_PWR_CAP:
        case PARAM_SET_GFX_MODE:
@@ -125,8 +141,7 @@ static long dbc_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 
        switch (cmd) {
        case DBCIOCNONCE:
-               if (copy_from_user(&dbc_dev->mbox->dbc_nonce.user, argp,
-                                  sizeof(struct dbc_user_nonce))) {
+               if (copy_from_user(dbc_dev->payload, argp, sizeof(struct dbc_user_nonce))) {
                        ret = -EFAULT;
                        goto unlock;
                }
@@ -135,43 +150,39 @@ static long dbc_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
                if (ret)
                        goto unlock;
 
-               if (copy_to_user(argp, &dbc_dev->mbox->dbc_nonce.user,
-                                sizeof(struct dbc_user_nonce))) {
+               if (copy_to_user(argp, dbc_dev->payload, sizeof(struct dbc_user_nonce))) {
                        ret = -EFAULT;
                        goto unlock;
                }
                break;
        case DBCIOCUID:
-               dbc_dev->mbox->req.header.payload_size = sizeof(dbc_dev->mbox->dbc_set_uid);
-               if (copy_from_user(&dbc_dev->mbox->dbc_set_uid.user, argp,
-                                  sizeof(struct dbc_user_setuid))) {
+               if (copy_from_user(dbc_dev->payload, argp, sizeof(struct dbc_user_setuid))) {
                        ret = -EFAULT;
                        goto unlock;
                }
 
+               *dbc_dev->payload_size = dbc_dev->header_size + sizeof(struct dbc_user_setuid);
                ret = send_dbc_cmd(dbc_dev, PSP_DYNAMIC_BOOST_SET_UID);
                if (ret)
                        goto unlock;
 
-               if (copy_to_user(argp, &dbc_dev->mbox->dbc_set_uid.user,
-                                sizeof(struct dbc_user_setuid))) {
+               if (copy_to_user(argp, dbc_dev->payload, sizeof(struct dbc_user_setuid))) {
                        ret = -EFAULT;
                        goto unlock;
                }
                break;
        case DBCIOCPARAM:
-               if (copy_from_user(&dbc_dev->mbox->dbc_param.user, argp,
-                                  sizeof(struct dbc_user_param))) {
+               if (copy_from_user(dbc_dev->payload, argp, sizeof(struct dbc_user_param))) {
                        ret = -EFAULT;
                        goto unlock;
                }
 
+               *dbc_dev->payload_size = dbc_dev->header_size + sizeof(struct dbc_user_param);
                ret = send_dbc_parameter(dbc_dev);
                if (ret)
                        goto unlock;
 
-               if (copy_to_user(argp, &dbc_dev->mbox->dbc_param.user,
-                                sizeof(struct dbc_user_param)))  {
+               if (copy_to_user(argp, dbc_dev->payload, sizeof(struct dbc_user_param)))  {
                        ret = -EFAULT;
                        goto unlock;
                }
@@ -197,15 +208,12 @@ int dbc_dev_init(struct psp_device *psp)
        struct psp_dbc_device *dbc_dev;
        int ret;
 
-       if (!PSP_FEATURE(psp, DBC))
-               return 0;
-
        dbc_dev = devm_kzalloc(dev, sizeof(*dbc_dev), GFP_KERNEL);
        if (!dbc_dev)
                return -ENOMEM;
 
        BUILD_BUG_ON(sizeof(union dbc_buffer) > PAGE_SIZE);
-       dbc_dev->mbox = (void *)devm_get_free_pages(dev, GFP_KERNEL, 0);
+       dbc_dev->mbox = (void *)devm_get_free_pages(dev, GFP_KERNEL | __GFP_ZERO, 0);
        if (!dbc_dev->mbox) {
                ret = -ENOMEM;
                goto cleanup_dev;
@@ -213,6 +221,20 @@ int dbc_dev_init(struct psp_device *psp)
 
        psp->dbc_data = dbc_dev;
        dbc_dev->dev = dev;
+       dbc_dev->psp = psp;
+
+       if (PSP_CAPABILITY(psp, DBC_THRU_EXT)) {
+               dbc_dev->use_ext = true;
+               dbc_dev->payload_size = &dbc_dev->mbox->ext_req.header.payload_size;
+               dbc_dev->result = &dbc_dev->mbox->ext_req.header.status;
+               dbc_dev->payload = &dbc_dev->mbox->ext_req.buf;
+               dbc_dev->header_size = sizeof(struct psp_ext_req_buffer_hdr);
+       } else {
+               dbc_dev->payload_size = &dbc_dev->mbox->pa_req.header.payload_size;
+               dbc_dev->result = &dbc_dev->mbox->pa_req.header.status;
+               dbc_dev->payload = &dbc_dev->mbox->pa_req.buf;
+               dbc_dev->header_size = sizeof(struct psp_req_buffer_hdr);
+       }
 
        ret = send_dbc_nonce(dbc_dev);
        if (ret == -EACCES) {
index e963099ca38ec62f7646b62e5720242dea101383..e0fecbe92eb1f0c919d4a69ef6f39a24e77eb96a 100644 (file)
 
 struct psp_dbc_device {
        struct device *dev;
+       struct psp_device *psp;
 
        union dbc_buffer *mbox;
 
        struct mutex ioctl_mutex;
 
        struct miscdevice char_dev;
-};
-
-struct dbc_nonce {
-       struct psp_req_buffer_hdr       header;
-       struct dbc_user_nonce           user;
-} __packed;
 
-struct dbc_set_uid {
-       struct psp_req_buffer_hdr       header;
-       struct dbc_user_setuid          user;
-} __packed;
-
-struct dbc_param {
-       struct psp_req_buffer_hdr       header;
-       struct dbc_user_param           user;
-} __packed;
+       /* used to abstract communication path */
+       bool    use_ext;
+       u32     header_size;
+       u32     *payload_size;
+       u32     *result;
+       void    *payload;
+};
 
 union dbc_buffer {
-       struct psp_request              req;
-       struct dbc_nonce                dbc_nonce;
-       struct dbc_set_uid              dbc_set_uid;
-       struct dbc_param                dbc_param;
+       struct psp_request              pa_req;
+       struct psp_ext_request          ext_req;
 };
 
 void dbc_dev_destroy(struct psp_device *psp);
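With the union reduced to the two raw request layouts, the per-command wrapper structs disappear and the ioctl paths size payloads arithmetically instead. Under the definitions above, the extended path counts its own header, e.g. for the nonce command:

        /* psp_ext_req_buffer_hdr is three u32s, i.e. 12 bytes */
        *dbc_dev->payload_size = sizeof(struct psp_ext_req_buffer_hdr) +
                                 sizeof(struct dbc_user_nonce);

while the platform-access path uses sizeof(struct psp_req_buffer_hdr) as its header_size; either way the pointers set up in dbc_dev_init() hide the difference from send_dbc_cmd().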
index d42d7bc623523dad25f4665d18405b499be2bee9..124a2e0c89993786843b88daa609b5dfd86917ef 100644 (file)
@@ -9,6 +9,9 @@
 
 #include <linux/kernel.h>
 #include <linux/irqreturn.h>
+#include <linux/mutex.h>
+#include <linux/bitfield.h>
+#include <linux/delay.h>
 
 #include "sp-dev.h"
 #include "psp-dev.h"
 
 struct psp_device *psp_master;
 
+#define PSP_C2PMSG_17_CMDRESP_CMD      GENMASK(19, 16)
+
+static int psp_mailbox_poll(const void __iomem *cmdresp_reg, unsigned int *cmdresp,
+                           unsigned int timeout_msecs)
+{
+       while (true) {
+               *cmdresp = ioread32(cmdresp_reg);
+               if (FIELD_GET(PSP_CMDRESP_RESP, *cmdresp))
+                       return 0;
+
+               if (!timeout_msecs--)
+                       break;
+
+               usleep_range(1000, 1100);
+       }
+
+       return -ETIMEDOUT;
+}
+
+int psp_mailbox_command(struct psp_device *psp, enum psp_cmd cmd, void *cmdbuff,
+                       unsigned int timeout_msecs, unsigned int *cmdresp)
+{
+       void __iomem *cmdresp_reg, *cmdbuff_lo_reg, *cmdbuff_hi_reg;
+       int ret;
+
+       if (!psp || !psp->vdata || !psp->vdata->cmdresp_reg ||
+           !psp->vdata->cmdbuff_addr_lo_reg || !psp->vdata->cmdbuff_addr_hi_reg)
+               return -ENODEV;
+
+       cmdresp_reg    = psp->io_regs + psp->vdata->cmdresp_reg;
+       cmdbuff_lo_reg = psp->io_regs + psp->vdata->cmdbuff_addr_lo_reg;
+       cmdbuff_hi_reg = psp->io_regs + psp->vdata->cmdbuff_addr_hi_reg;
+
+       mutex_lock(&psp->mailbox_mutex);
+
+       /* Ensure mailbox is ready for a command */
+       ret = -EBUSY;
+       if (psp_mailbox_poll(cmdresp_reg, cmdresp, 0))
+               goto unlock;
+
+       if (cmdbuff) {
+               iowrite32(lower_32_bits(__psp_pa(cmdbuff)), cmdbuff_lo_reg);
+               iowrite32(upper_32_bits(__psp_pa(cmdbuff)), cmdbuff_hi_reg);
+       }
+
+       *cmdresp = FIELD_PREP(PSP_C2PMSG_17_CMDRESP_CMD, cmd);
+       iowrite32(*cmdresp, cmdresp_reg);
+
+       ret = psp_mailbox_poll(cmdresp_reg, cmdresp, timeout_msecs);
+
+unlock:
+       mutex_unlock(&psp->mailbox_mutex);
+
+       return ret;
+}
+
+int psp_extended_mailbox_cmd(struct psp_device *psp, unsigned int timeout_msecs,
+                            struct psp_ext_request *req)
+{
+       unsigned int reg;
+       int ret;
+
+       print_hex_dump_debug("->psp ", DUMP_PREFIX_OFFSET, 16, 2, req,
+                            req->header.payload_size, false);
+
+       ret = psp_mailbox_command(psp, PSP_CMD_TEE_EXTENDED_CMD, (void *)req,
+                                 timeout_msecs, &reg);
+       if (ret) {
+               return ret;
+       } else if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
+               req->header.status = FIELD_GET(PSP_CMDRESP_STS, reg);
+               return -EIO;
+       }
+
+       print_hex_dump_debug("<-psp ", DUMP_PREFIX_OFFSET, 16, 2, req,
+                            req->header.payload_size, false);
+
+       return 0;
+}
+
 static struct psp_device *psp_alloc_struct(struct sp_device *sp)
 {
        struct device *dev = sp->dev;
@@ -74,7 +157,7 @@ static unsigned int psp_get_capability(struct psp_device *psp)
        psp->capability = val;
 
        /* Detect if TSME and SME are both enabled */
-       if (psp->capability & PSP_CAPABILITY_PSP_SECURITY_REPORTING &&
+       if (PSP_CAPABILITY(psp, PSP_SECURITY_REPORTING) &&
            psp->capability & (PSP_SECURITY_TSME_STATUS << PSP_CAPABILITY_PSP_SECURITY_OFFSET) &&
            cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
                dev_notice(psp->dev, "psp: Both TSME and SME are active, SME is unnecessary when TSME is active.\n");
@@ -85,7 +168,7 @@ static unsigned int psp_get_capability(struct psp_device *psp)
 static int psp_check_sev_support(struct psp_device *psp)
 {
        /* Check if device supports SEV feature */
-       if (!(psp->capability & PSP_CAPABILITY_SEV)) {
+       if (!PSP_CAPABILITY(psp, SEV)) {
                dev_dbg(psp->dev, "psp does not support SEV\n");
                return -ENODEV;
        }
@@ -96,7 +179,7 @@ static int psp_check_sev_support(struct psp_device *psp)
 static int psp_check_tee_support(struct psp_device *psp)
 {
        /* Check if device supports TEE feature */
-       if (!(psp->capability & PSP_CAPABILITY_TEE)) {
+       if (!PSP_CAPABILITY(psp, TEE)) {
                dev_dbg(psp->dev, "psp does not support TEE\n");
                return -ENODEV;
        }
@@ -104,23 +187,6 @@ static int psp_check_tee_support(struct psp_device *psp)
        return 0;
 }
 
-static void psp_init_platform_access(struct psp_device *psp)
-{
-       int ret;
-
-       ret = platform_access_dev_init(psp);
-       if (ret) {
-               dev_warn(psp->dev, "platform access init failed: %d\n", ret);
-               return;
-       }
-
-       /* dbc must come after platform access as it tests the feature */
-       ret = dbc_dev_init(psp);
-       if (ret)
-               dev_warn(psp->dev, "failed to init dynamic boost control: %d\n",
-                        ret);
-}
-
 static int psp_init(struct psp_device *psp)
 {
        int ret;
@@ -137,8 +203,19 @@ static int psp_init(struct psp_device *psp)
                        return ret;
        }
 
-       if (psp->vdata->platform_access)
-               psp_init_platform_access(psp);
+       if (psp->vdata->platform_access) {
+               ret = platform_access_dev_init(psp);
+               if (ret)
+                       return ret;
+       }
+
+       /* dbc must come after platform access as it tests the feature */
+       if (PSP_FEATURE(psp, DBC) ||
+           PSP_CAPABILITY(psp, DBC_THRU_EXT)) {
+               ret = dbc_dev_init(psp);
+               if (ret)
+                       return ret;
+       }
 
        return 0;
 }
@@ -164,6 +241,7 @@ int psp_dev_init(struct sp_device *sp)
        }
 
        psp->io_regs = sp->io_map;
+       mutex_init(&psp->mailbox_mutex);
 
        ret = psp_get_capability(psp);
        if (ret)
index 8a4de69399c59abc8271ca346c88c30e0dc2682f..ae582ba637295d57654abf251f5b711d5944e058 100644 (file)
@@ -14,6 +14,9 @@
 #include <linux/list.h>
 #include <linux/bits.h>
 #include <linux/interrupt.h>
+#include <linux/mutex.h>
+#include <linux/psp.h>
+#include <linux/psp-platform-access.h>
 
 #include "sp-dev.h"
 
@@ -33,6 +36,7 @@ struct psp_device {
        struct sp_device *sp;
 
        void __iomem *io_regs;
+       struct mutex mailbox_mutex;
 
        psp_irq_handler_t sev_irq_handler;
        void *sev_irq_data;
@@ -53,6 +57,7 @@ struct psp_device *psp_get_master_device(void);
 
 #define PSP_CAPABILITY_SEV                     BIT(0)
 #define PSP_CAPABILITY_TEE                     BIT(1)
+#define PSP_CAPABILITY_DBC_THRU_EXT            BIT(2)
 #define PSP_CAPABILITY_PSP_SECURITY_REPORTING  BIT(7)
 
 #define PSP_CAPABILITY_PSP_SECURITY_OFFSET     8
@@ -71,4 +76,54 @@ struct psp_device *psp_get_master_device(void);
 #define PSP_SECURITY_HSP_TPM_AVAILABLE         BIT(10)
 #define PSP_SECURITY_ROM_ARMOR_ENFORCED                BIT(11)
 
+/**
+ * enum psp_cmd - PSP mailbox commands
+ * @PSP_CMD_TEE_RING_INIT:     Initialize TEE ring buffer
+ * @PSP_CMD_TEE_RING_DESTROY:  Destroy TEE ring buffer
+ * @PSP_CMD_TEE_EXTENDED_CMD:  Extended command
+ * @PSP_CMD_MAX:               Maximum command id
+ */
+enum psp_cmd {
+       PSP_CMD_TEE_RING_INIT           = 1,
+       PSP_CMD_TEE_RING_DESTROY        = 2,
+       PSP_CMD_TEE_EXTENDED_CMD        = 14,
+       PSP_CMD_MAX                     = 15,
+};
+
+int psp_mailbox_command(struct psp_device *psp, enum psp_cmd cmd, void *cmdbuff,
+                       unsigned int timeout_msecs, unsigned int *cmdresp);
+
+/**
+ * struct psp_ext_req_buffer_hdr - Structure of the extended command header
+ * @payload_size: total payload size
+ * @sub_cmd_id: extended command ID
+ * @status: status of command execution (out)
+ */
+struct psp_ext_req_buffer_hdr {
+       u32 payload_size;
+       u32 sub_cmd_id;
+       u32 status;
+} __packed;
+
+struct psp_ext_request {
+       struct psp_ext_req_buffer_hdr header;
+       void *buf;
+} __packed;
+
+/**
+ * enum psp_sub_cmd - PSP mailbox sub commands
+ * @PSP_SUB_CMD_DBC_GET_NONCE:         Get nonce from DBC
+ * @PSP_SUB_CMD_DBC_SET_UID:           Set UID for DBC
+ * @PSP_SUB_CMD_DBC_GET_PARAMETER:     Get parameter from DBC
+ * @PSP_SUB_CMD_DBC_SET_PARAMETER:     Set parameter for DBC
+ */
+enum psp_sub_cmd {
+       PSP_SUB_CMD_DBC_GET_NONCE       = PSP_DYNAMIC_BOOST_GET_NONCE,
+       PSP_SUB_CMD_DBC_SET_UID         = PSP_DYNAMIC_BOOST_SET_UID,
+       PSP_SUB_CMD_DBC_GET_PARAMETER   = PSP_DYNAMIC_BOOST_GET_PARAMETER,
+       PSP_SUB_CMD_DBC_SET_PARAMETER   = PSP_DYNAMIC_BOOST_SET_PARAMETER,
+};
+
+int psp_extended_mailbox_cmd(struct psp_device *psp, unsigned int timeout_msecs,
+                            struct psp_ext_request *req);
 #endif /* __PSP_DEV_H */
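A hedged sketch of a caller driving the new extended mailbox, assuming a psp pointer in scope and a one-page request buffer reachable through __psp_pa(); the dbc_user_nonce payload (from the uapi psp-dbc.h header) is only an example, and the payload handling is trimmed:

        struct psp_ext_request *req;
        int ret;

        req = (void *)get_zeroed_page(GFP_KERNEL);
        if (!req)
                return -ENOMEM;

        /* payload_size counts the 12-byte header plus the sub-command data */
        req->header.payload_size = sizeof(req->header) +
                                   sizeof(struct dbc_user_nonce);
        req->header.sub_cmd_id = PSP_SUB_CMD_DBC_GET_NONCE;

        ret = psp_extended_mailbox_cmd(psp, 10 * MSEC_PER_SEC, req);
        if (ret == -EIO)        /* command ran but the PSP flagged an error */
                pr_debug("psp status: %#x\n", req->header.status);

        free_page((unsigned long)req);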
index f97166fba9d93061737f51766f7bb3d1e4f2757f..fcaccd0b5a651e995f8135ad39924f8f76b6ec45 100644 (file)
@@ -309,6 +309,7 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
 {
        struct psp_device *psp = psp_master;
        struct sev_device *sev;
+       unsigned int cmdbuff_hi, cmdbuff_lo;
        unsigned int phys_lsb, phys_msb;
        unsigned int reg, ret = 0;
        int buf_len;
@@ -371,6 +372,19 @@ static int __sev_do_cmd_locked(int cmd, void *data, int *psp_ret)
        if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
                dev_dbg(sev->dev, "sev command %#x failed (%#010lx)\n",
                        cmd, FIELD_GET(PSP_CMDRESP_STS, reg));
+
+               /*
+                * PSP firmware may report additional error information in the
+                * command buffer registers on error. Print contents of command
+                * buffer registers if they changed.
+                */
+               cmdbuff_hi = ioread32(sev->io_regs + sev->vdata->cmdbuff_addr_hi_reg);
+               cmdbuff_lo = ioread32(sev->io_regs + sev->vdata->cmdbuff_addr_lo_reg);
+               if (cmdbuff_hi != phys_msb || cmdbuff_lo != phys_lsb) {
+                       dev_dbg(sev->dev, "Additional error information reported in cmdbuff:\n");
+                       dev_dbg(sev->dev, "  cmdbuff hi: %#010x\n", cmdbuff_hi);
+                       dev_dbg(sev->dev, "  cmdbuff lo: %#010x\n", cmdbuff_lo);
+               }
                ret = -EIO;
        } else {
                ret = sev_write_init_ex_file_if_required(cmd);
index 2329ad524b4945b29bac80e1b0843c4de6a72a54..03d5b9e04084828f8437eb78ad8dcb3a5d1ad739 100644 (file)
@@ -30,6 +30,7 @@
 
 #define PLATFORM_FEATURE_DBC           0x1
 
+#define PSP_CAPABILITY(psp, cap) (psp->capability & PSP_CAPABILITY_##cap)
 #define PSP_FEATURE(psp, feat) (psp->vdata && psp->vdata->platform_features & PLATFORM_FEATURE_##feat)
 
 /* Structure to hold CCP device data */
@@ -71,6 +72,9 @@ struct psp_vdata {
        const struct sev_vdata *sev;
        const struct tee_vdata *tee;
        const struct platform_access_vdata *platform_access;
+       const unsigned int cmdresp_reg;
+       const unsigned int cmdbuff_addr_lo_reg;
+       const unsigned int cmdbuff_addr_hi_reg;
        const unsigned int feature_reg;
        const unsigned int inten_reg;
        const unsigned int intsts_reg;
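The new PSP_CAPABILITY() helper mirrors the existing PSP_FEATURE() macro and replaces the open-coded bitmask tests converted throughout the psp hunks above. Trivial illustration:

static bool psp_supports_tee(struct psp_device *psp)
{
        /* expands to: psp->capability & PSP_CAPABILITY_TEE */
        return PSP_CAPABILITY(psp, TEE);
}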
index b6ab56abeb682f89f913558e957ac364d57fbeec..300dda14182b8e08dd89bee6d16862ec6a80270d 100644 (file)
@@ -84,7 +84,7 @@ static umode_t psp_security_is_visible(struct kobject *kobj, struct attribute *a
        struct sp_device *sp = dev_get_drvdata(dev);
        struct psp_device *psp = sp->psp_data;
 
-       if (psp && (psp->capability & PSP_CAPABILITY_PSP_SECURITY_REPORTING))
+       if (psp && PSP_CAPABILITY(psp, PSP_SECURITY_REPORTING))
                return 0444;
 
        return 0;
@@ -135,7 +135,7 @@ static umode_t psp_firmware_is_visible(struct kobject *kobj, struct attribute *a
                val = ioread32(psp->io_regs + psp->vdata->bootloader_info_reg);
 
        if (attr == &dev_attr_tee_version.attr &&
-           psp->capability & PSP_CAPABILITY_TEE &&
+           PSP_CAPABILITY(psp, TEE) &&
            psp->vdata->tee->info_reg)
                val = ioread32(psp->io_regs + psp->vdata->tee->info_reg);
 
@@ -418,18 +418,12 @@ static const struct sev_vdata sevv2 = {
 };
 
 static const struct tee_vdata teev1 = {
-       .cmdresp_reg            = 0x10544,      /* C2PMSG_17 */
-       .cmdbuff_addr_lo_reg    = 0x10548,      /* C2PMSG_18 */
-       .cmdbuff_addr_hi_reg    = 0x1054c,      /* C2PMSG_19 */
        .ring_wptr_reg          = 0x10550,      /* C2PMSG_20 */
        .ring_rptr_reg          = 0x10554,      /* C2PMSG_21 */
        .info_reg               = 0x109e8,      /* C2PMSG_58 */
 };
 
 static const struct tee_vdata teev2 = {
-       .cmdresp_reg            = 0x10944,      /* C2PMSG_17 */
-       .cmdbuff_addr_lo_reg    = 0x10948,      /* C2PMSG_18 */
-       .cmdbuff_addr_hi_reg    = 0x1094c,      /* C2PMSG_19 */
        .ring_wptr_reg          = 0x10950,      /* C2PMSG_20 */
        .ring_rptr_reg          = 0x10954,      /* C2PMSG_21 */
 };
@@ -466,6 +460,9 @@ static const struct psp_vdata pspv2 = {
 static const struct psp_vdata pspv3 = {
        .tee                    = &teev1,
        .platform_access        = &pa_v1,
+       .cmdresp_reg            = 0x10544,      /* C2PMSG_17 */
+       .cmdbuff_addr_lo_reg    = 0x10548,      /* C2PMSG_18 */
+       .cmdbuff_addr_hi_reg    = 0x1054c,      /* C2PMSG_19 */
        .bootloader_info_reg    = 0x109ec,      /* C2PMSG_59 */
        .feature_reg            = 0x109fc,      /* C2PMSG_63 */
        .inten_reg              = 0x10690,      /* P2CMSG_INTEN */
@@ -476,6 +473,9 @@ static const struct psp_vdata pspv3 = {
 static const struct psp_vdata pspv4 = {
        .sev                    = &sevv2,
        .tee                    = &teev1,
+       .cmdresp_reg            = 0x10544,      /* C2PMSG_17 */
+       .cmdbuff_addr_lo_reg    = 0x10548,      /* C2PMSG_18 */
+       .cmdbuff_addr_hi_reg    = 0x1054c,      /* C2PMSG_19 */
        .bootloader_info_reg    = 0x109ec,      /* C2PMSG_59 */
        .feature_reg            = 0x109fc,      /* C2PMSG_63 */
        .inten_reg              = 0x10690,      /* P2CMSG_INTEN */
@@ -485,6 +485,9 @@ static const struct psp_vdata pspv4 = {
 static const struct psp_vdata pspv5 = {
        .tee                    = &teev2,
        .platform_access        = &pa_v2,
+       .cmdresp_reg            = 0x10944,      /* C2PMSG_17 */
+       .cmdbuff_addr_lo_reg    = 0x10948,      /* C2PMSG_18 */
+       .cmdbuff_addr_hi_reg    = 0x1094c,      /* C2PMSG_19 */
        .feature_reg            = 0x109fc,      /* C2PMSG_63 */
        .inten_reg              = 0x10510,      /* P2CMSG_INTEN */
        .intsts_reg             = 0x10514,      /* P2CMSG_INTSTS */
@@ -493,6 +496,9 @@ static const struct psp_vdata pspv5 = {
 static const struct psp_vdata pspv6 = {
        .sev                    = &sevv2,
        .tee                    = &teev2,
+       .cmdresp_reg            = 0x10944,      /* C2PMSG_17 */
+       .cmdbuff_addr_lo_reg    = 0x10948,      /* C2PMSG_18 */
+       .cmdbuff_addr_hi_reg    = 0x1094c,      /* C2PMSG_19 */
        .feature_reg            = 0x109fc,      /* C2PMSG_63 */
        .inten_reg              = 0x10510,      /* P2CMSG_INTEN */
        .intsts_reg             = 0x10514,      /* P2CMSG_INTSTS */
index 7d79a8744f9a6a279307b41ab692b2982f28144d..47330123776015e15c5be1fc7008ce7cbb16295d 100644 (file)
@@ -180,7 +180,7 @@ e_err:
        return ret;
 }
 
-static int sp_platform_remove(struct platform_device *pdev)
+static void sp_platform_remove(struct platform_device *pdev)
 {
        struct device *dev = &pdev->dev;
        struct sp_device *sp = dev_get_drvdata(dev);
@@ -188,8 +188,6 @@ static int sp_platform_remove(struct platform_device *pdev)
        sp_destroy(sp);
 
        dev_notice(dev, "disabled\n");
-
-       return 0;
 }
 
 #ifdef CONFIG_PM
@@ -222,7 +220,7 @@ static struct platform_driver sp_platform_driver = {
 #endif
        },
        .probe = sp_platform_probe,
-       .remove = sp_platform_remove,
+       .remove_new = sp_platform_remove,
 #ifdef CONFIG_PM
        .suspend = sp_platform_suspend,
        .resume = sp_platform_resume,
index 5560bf8329a127eb335683514aee86a08d868162..5e1d80724678d03f7fc75e89c3f50ae867087dde 100644 (file)
@@ -62,26 +62,6 @@ static void tee_free_ring(struct psp_tee_device *tee)
        mutex_destroy(&rb_mgr->mutex);
 }
 
-static int tee_wait_cmd_poll(struct psp_tee_device *tee, unsigned int timeout,
-                            unsigned int *reg)
-{
-       /* ~10ms sleep per loop => nloop = timeout * 100 */
-       int nloop = timeout * 100;
-
-       while (--nloop) {
-               *reg = ioread32(tee->io_regs + tee->vdata->cmdresp_reg);
-               if (FIELD_GET(PSP_CMDRESP_RESP, *reg))
-                       return 0;
-
-               usleep_range(10000, 10100);
-       }
-
-       dev_err(tee->dev, "tee: command timed out, disabling PSP\n");
-       psp_dead = true;
-
-       return -ETIMEDOUT;
-}
-
 static
 struct tee_init_ring_cmd *tee_alloc_cmd_buffer(struct psp_tee_device *tee)
 {
@@ -110,7 +90,6 @@ static int tee_init_ring(struct psp_tee_device *tee)
 {
        int ring_size = MAX_RING_BUFFER_ENTRIES * sizeof(struct tee_ring_cmd);
        struct tee_init_ring_cmd *cmd;
-       phys_addr_t cmd_buffer;
        unsigned int reg;
        int ret;
 
@@ -130,23 +109,15 @@ static int tee_init_ring(struct psp_tee_device *tee)
                return -ENOMEM;
        }
 
-       cmd_buffer = __psp_pa((void *)cmd);
-
        /* Send command buffer details to Trusted OS by writing to
         * CPU-PSP message registers
         */
-
-       iowrite32(lower_32_bits(cmd_buffer),
-                 tee->io_regs + tee->vdata->cmdbuff_addr_lo_reg);
-       iowrite32(upper_32_bits(cmd_buffer),
-                 tee->io_regs + tee->vdata->cmdbuff_addr_hi_reg);
-       iowrite32(TEE_RING_INIT_CMD,
-                 tee->io_regs + tee->vdata->cmdresp_reg);
-
-       ret = tee_wait_cmd_poll(tee, TEE_DEFAULT_TIMEOUT, &reg);
+       ret = psp_mailbox_command(tee->psp, PSP_CMD_TEE_RING_INIT, cmd,
+                                 TEE_DEFAULT_CMD_TIMEOUT, &reg);
        if (ret) {
-               dev_err(tee->dev, "tee: ring init command timed out\n");
+               dev_err(tee->dev, "tee: ring init command timed out, disabling TEE support\n");
                tee_free_ring(tee);
+               psp_dead = true;
                goto free_buf;
        }
 
@@ -174,12 +145,11 @@ static void tee_destroy_ring(struct psp_tee_device *tee)
        if (psp_dead)
                goto free_ring;
 
-       iowrite32(TEE_RING_DESTROY_CMD,
-                 tee->io_regs + tee->vdata->cmdresp_reg);
-
-       ret = tee_wait_cmd_poll(tee, TEE_DEFAULT_TIMEOUT, &reg);
+       ret = psp_mailbox_command(tee->psp, PSP_CMD_TEE_RING_DESTROY, NULL,
+                                 TEE_DEFAULT_CMD_TIMEOUT, &reg);
        if (ret) {
-               dev_err(tee->dev, "tee: ring destroy command timed out\n");
+               dev_err(tee->dev, "tee: ring destroy command timed out, disabling TEE support\n");
+               psp_dead = true;
        } else if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
                dev_err(tee->dev, "tee: ring destroy command failed (%#010lx)\n",
                        FIELD_GET(PSP_CMDRESP_STS, reg));
@@ -370,7 +340,7 @@ int psp_tee_process_cmd(enum tee_cmd_id cmd_id, void *buf, size_t len,
        if (ret)
                return ret;
 
-       ret = tee_wait_cmd_completion(tee, resp, TEE_DEFAULT_TIMEOUT);
+       ret = tee_wait_cmd_completion(tee, resp, TEE_DEFAULT_RING_TIMEOUT);
        if (ret) {
                resp->flag = CMD_RESPONSE_TIMEDOUT;
                return ret;
index 49d26158b71e31635557d971ad19a148b9859043..ea9a2b7c05f577c9919836e68ff92dc7de8bc10e 100644 (file)
 #include <linux/device.h>
 #include <linux/mutex.h>
 
-#define TEE_DEFAULT_TIMEOUT            10
+#define TEE_DEFAULT_CMD_TIMEOUT                (10 * MSEC_PER_SEC)
+#define TEE_DEFAULT_RING_TIMEOUT       10
 #define MAX_BUFFER_SIZE                        988
 
-/**
- * enum tee_ring_cmd_id - TEE interface commands for ring buffer configuration
- * @TEE_RING_INIT_CMD:         Initialize ring buffer
- * @TEE_RING_DESTROY_CMD:      Destroy ring buffer
- * @TEE_RING_MAX_CMD:          Maximum command id
- */
-enum tee_ring_cmd_id {
-       TEE_RING_INIT_CMD               = 0x00010000,
-       TEE_RING_DESTROY_CMD            = 0x00020000,
-       TEE_RING_MAX_CMD                = 0x000F0000,
-};
-
 /**
  * struct tee_init_ring_cmd - Command to init TEE ring buffer
  * @low_addr:  bits [31:0] of the physical address of ring buffer
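Note the units behind the split constant above: going by the removed polling comment earlier ("~10ms sleep per loop => nloop = timeout * 100"), the old TEE_DEFAULT_TIMEOUT counted seconds. Summarised:

/*
 * TEE_DEFAULT_CMD_TIMEOUT  = 10 * MSEC_PER_SEC = 10000 -> milliseconds,
 *                            consumed by psp_mailbox_command()
 * TEE_DEFAULT_RING_TIMEOUT = 10                        -> seconds,
 *                            consumed by tee_wait_cmd_completion()
 * Both therefore still describe the same ten-second budget.
 */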
index 0f0694037dd71240993e3d1cda44e815ea04fdb0..9177b54bb0f58a784088d9d664410ef6105712ec 100644 (file)
@@ -623,7 +623,7 @@ static int ccree_probe(struct platform_device *plat_dev)
        return 0;
 }
 
-static int ccree_remove(struct platform_device *plat_dev)
+static void ccree_remove(struct platform_device *plat_dev)
 {
        struct device *dev = &plat_dev->dev;
 
@@ -632,8 +632,6 @@ static int ccree_remove(struct platform_device *plat_dev)
        cleanup_cc_resources(plat_dev);
 
        dev_info(dev, "ARM ccree device terminated\n");
-
-       return 0;
 }
 
 static struct platform_driver ccree_driver = {
@@ -645,7 +643,7 @@ static struct platform_driver ccree_driver = {
 #endif
        },
        .probe = ccree_probe,
-       .remove = ccree_remove,
+       .remove_new = ccree_remove,
 };
 
 static int __init ccree_init(void)
index 16298ae4a00bfa41ae7eb18a043dde7ce6317d03..177428480c7d16424e5f05dbc5b2553fa1e31803 100644 (file)
@@ -1920,6 +1920,9 @@ err:
        return error;
 }
 
+static int chcr_hmac_init(struct ahash_request *areq);
+static int chcr_sha_init(struct ahash_request *areq);
+
 static int chcr_ahash_digest(struct ahash_request *req)
 {
        struct chcr_ahash_req_ctx *req_ctx = ahash_request_ctx(req);
@@ -1938,7 +1941,11 @@ static int chcr_ahash_digest(struct ahash_request *req)
        req_ctx->rxqidx = cpu % ctx->nrxq;
        put_cpu();
 
-       rtfm->init(req);
+       if (is_hmac(crypto_ahash_tfm(rtfm)))
+               chcr_hmac_init(req);
+       else
+               chcr_sha_init(req);
+
        bs = crypto_tfm_alg_blocksize(crypto_ahash_tfm(rtfm));
        error = chcr_inc_wrcount(dev);
        if (error)
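This chcr hunk is part of the series-wide move away from indirect calls through crypto_ahash::init: the driver now picks the initialiser via its own is_hmac() test, which is why the two forward declarations were added above. The replacement in isolation:

        /* direct, build-time-resolvable calls instead of rtfm->init(req) */
        if (is_hmac(crypto_ahash_tfm(rtfm)))
                chcr_hmac_init(req);
        else
                chcr_sha_init(req);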
index 5d60a4bcb51188c1dbdc7b49e55710caa93cc187..0dd8baf16cb4570c143864f43564e3a92a6c3d1d 100644 (file)
@@ -306,13 +306,11 @@ static int exynos_rng_probe(struct platform_device *pdev)
        return ret;
 }
 
-static int exynos_rng_remove(struct platform_device *pdev)
+static void exynos_rng_remove(struct platform_device *pdev)
 {
        crypto_unregister_rng(&exynos_rng_alg);
 
        exynos_rng_dev = NULL;
-
-       return 0;
 }
 
 static int __maybe_unused exynos_rng_suspend(struct device *dev)
@@ -391,7 +389,7 @@ static struct platform_driver exynos_rng_driver = {
                .of_match_table = exynos_rng_dt_match,
        },
        .probe          = exynos_rng_probe,
-       .remove         = exynos_rng_remove,
+       .remove_new     = exynos_rng_remove,
 };
 
 module_platform_driver(exynos_rng_driver);
index 0f43c6e39bb9d68d148706f4c94b30634e79758e..1d1a889599bb43e213d7d39bac0ee40b1e453f02 100644 (file)
@@ -505,7 +505,7 @@ error_pm:
        return err;
 }
 
-static int sl3516_ce_remove(struct platform_device *pdev)
+static void sl3516_ce_remove(struct platform_device *pdev)
 {
        struct sl3516_ce_dev *ce = platform_get_drvdata(pdev);
 
@@ -518,8 +518,6 @@ static int sl3516_ce_remove(struct platform_device *pdev)
 #ifdef CONFIG_CRYPTO_DEV_SL3516_DEBUG
        debugfs_remove_recursive(ce->dbgfs_dir);
 #endif
-
-       return 0;
 }
 
 static const struct of_device_id sl3516_ce_crypto_of_match_table[] = {
@@ -530,7 +528,7 @@ MODULE_DEVICE_TABLE(of, sl3516_ce_crypto_of_match_table);
 
 static struct platform_driver sl3516_ce_driver = {
        .probe           = sl3516_ce_probe,
-       .remove          = sl3516_ce_remove,
+       .remove_new      = sl3516_ce_remove,
        .driver          = {
                .name           = "sl3516-crypto",
                .pm             = &sl3516_ce_pm_ops,
index 8e4a49b7ab4fba967fd94b69b8a3802dc0191406..7bddc3c786c1a7f3ceead4114e53ea1f8d4f6532 100644 (file)
@@ -2393,9 +2393,13 @@ static int hifn_alg_alloc(struct hifn_device *dev, const struct hifn_alg_templat
        alg->alg = t->skcipher;
        alg->alg.init = hifn_init_tfm;
 
-       snprintf(alg->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", t->name);
-       snprintf(alg->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s-%s",
-                t->drv_name, dev->name);
+       err = -EINVAL;
+       if (snprintf(alg->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
+                    "%s", t->name) >= CRYPTO_MAX_ALG_NAME)
+               goto out_free_alg;
+       if (snprintf(alg->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+                    "%s-%s", t->drv_name, dev->name) >= CRYPTO_MAX_ALG_NAME)
+               goto out_free_alg;
 
        alg->alg.base.cra_priority = 300;
        alg->alg.base.cra_flags = CRYPTO_ALG_KERN_DRIVER_ONLY | CRYPTO_ALG_ASYNC;
@@ -2411,6 +2415,7 @@ static int hifn_alg_alloc(struct hifn_device *dev, const struct hifn_alg_templat
        err = crypto_register_skcipher(&alg->alg);
        if (err) {
                list_del(&alg->entry);
+out_free_alg:
                kfree(alg);
        }
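The hifn fix relies on snprintf()'s return convention: it returns the length the fully formatted string would have needed, so a return value >= the buffer size means the name was silently truncated (the hpre debugfs hunk further down applies the same check). Generic shape, with invented names:

        char name[CRYPTO_MAX_ALG_NAME];

        if (snprintf(name, sizeof(name), "%s-%s",
                     template_name, device_name) >= sizeof(name))
                return -EINVAL; /* name would have been truncated */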
 
index 2cc1591949db7e078846115e5bd646e6afe2e37f..7e8186fe051243f8cc72decbdbf5ddd3bc3cb424 100644 (file)
@@ -137,8 +137,8 @@ static void dump_show(struct hisi_qm *qm, void *info,
 static int qm_sqc_dump(struct hisi_qm *qm, char *s, char *name)
 {
        struct device *dev = &qm->pdev->dev;
-       struct qm_sqc *sqc, *sqc_curr;
-       dma_addr_t sqc_dma;
+       struct qm_sqc *sqc_curr;
+       struct qm_sqc sqc;
        u32 qp_id;
        int ret;
 
@@ -151,35 +151,29 @@ static int qm_sqc_dump(struct hisi_qm *qm, char *s, char *name)
                return -EINVAL;
        }
 
-       sqc = hisi_qm_ctx_alloc(qm, sizeof(*sqc), &sqc_dma);
-       if (IS_ERR(sqc))
-               return PTR_ERR(sqc);
+       ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 1);
+       if (!ret) {
+               dump_show(qm, &sqc, sizeof(struct qm_sqc), name);
 
-       ret = hisi_qm_mb(qm, QM_MB_CMD_SQC, sqc_dma, qp_id, 1);
-       if (ret) {
-               down_read(&qm->qps_lock);
-               if (qm->sqc) {
-                       sqc_curr = qm->sqc + qp_id;
+               return 0;
+       }
 
-                       dump_show(qm, sqc_curr, sizeof(*sqc), "SOFT SQC");
-               }
-               up_read(&qm->qps_lock);
+       down_read(&qm->qps_lock);
+       if (qm->sqc) {
+               sqc_curr = qm->sqc + qp_id;
 
-               goto free_ctx;
+               dump_show(qm, sqc_curr, sizeof(*sqc_curr), "SOFT SQC");
        }
+       up_read(&qm->qps_lock);
 
-       dump_show(qm, sqc, sizeof(*sqc), name);
-
-free_ctx:
-       hisi_qm_ctx_free(qm, sizeof(*sqc), sqc, &sqc_dma);
        return 0;
 }
 
 static int qm_cqc_dump(struct hisi_qm *qm, char *s, char *name)
 {
        struct device *dev = &qm->pdev->dev;
-       struct qm_cqc *cqc, *cqc_curr;
-       dma_addr_t cqc_dma;
+       struct qm_cqc *cqc_curr;
+       struct qm_cqc cqc;
        u32 qp_id;
        int ret;
 
@@ -192,34 +186,29 @@ static int qm_cqc_dump(struct hisi_qm *qm, char *s, char *name)
                return -EINVAL;
        }
 
-       cqc = hisi_qm_ctx_alloc(qm, sizeof(*cqc), &cqc_dma);
-       if (IS_ERR(cqc))
-               return PTR_ERR(cqc);
+       ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp_id, 1);
+       if (!ret) {
+               dump_show(qm, &cqc, sizeof(struct qm_cqc), name);
 
-       ret = hisi_qm_mb(qm, QM_MB_CMD_CQC, cqc_dma, qp_id, 1);
-       if (ret) {
-               down_read(&qm->qps_lock);
-               if (qm->cqc) {
-                       cqc_curr = qm->cqc + qp_id;
+               return 0;
+       }
 
-                       dump_show(qm, cqc_curr, sizeof(*cqc), "SOFT CQC");
-               }
-               up_read(&qm->qps_lock);
+       down_read(&qm->qps_lock);
+       if (qm->cqc) {
+               cqc_curr = qm->cqc + qp_id;
 
-               goto free_ctx;
+               dump_show(qm, cqc_curr, sizeof(*cqc_curr), "SOFT CQC");
        }
+       up_read(&qm->qps_lock);
 
-       dump_show(qm, cqc, sizeof(*cqc), name);
-
-free_ctx:
-       hisi_qm_ctx_free(qm, sizeof(*cqc), cqc, &cqc_dma);
        return 0;
 }
 
 static int qm_eqc_aeqc_dump(struct hisi_qm *qm, char *s, char *name)
 {
        struct device *dev = &qm->pdev->dev;
-       dma_addr_t xeqc_dma;
+       struct qm_aeqc aeqc;
+       struct qm_eqc eqc;
        size_t size;
        void *xeqc;
        int ret;
@@ -233,23 +222,19 @@ static int qm_eqc_aeqc_dump(struct hisi_qm *qm, char *s, char *name)
        if (!strcmp(name, "EQC")) {
                cmd = QM_MB_CMD_EQC;
                size = sizeof(struct qm_eqc);
+               xeqc = &eqc;
        } else {
                cmd = QM_MB_CMD_AEQC;
                size = sizeof(struct qm_aeqc);
+               xeqc = &aeqc;
        }
 
-       xeqc = hisi_qm_ctx_alloc(qm, size, &xeqc_dma);
-       if (IS_ERR(xeqc))
-               return PTR_ERR(xeqc);
-
-       ret = hisi_qm_mb(qm, cmd, xeqc_dma, 0, 1);
+       ret = qm_set_and_get_xqc(qm, cmd, xeqc, 0, 1);
        if (ret)
-               goto err_free_ctx;
+               return ret;
 
        dump_show(qm, xeqc, size, name);
 
-err_free_ctx:
-       hisi_qm_ctx_free(qm, size, xeqc, &xeqc_dma);
        return ret;
 }
 
index 9a1c61be32ccdba55b8d4960b9350f7f705bfc5d..764532a6ca828df884a2420862a3e16c8694136e 100644 (file)
@@ -57,6 +57,9 @@ struct hpre_ctx;
 #define HPRE_DRV_ECDH_MASK_CAP         BIT(2)
 #define HPRE_DRV_X25519_MASK_CAP       BIT(5)
 
+static DEFINE_MUTEX(hpre_algs_lock);
+static unsigned int hpre_available_devs;
+
 typedef void (*hpre_cb)(struct hpre_ctx *ctx, void *sqe);
 
 struct hpre_rsa_ctx {
@@ -2202,11 +2205,17 @@ static void hpre_unregister_x25519(struct hisi_qm *qm)
 
 int hpre_algs_register(struct hisi_qm *qm)
 {
-       int ret;
+       int ret = 0;
+
+       mutex_lock(&hpre_algs_lock);
+       if (hpre_available_devs) {
+               hpre_available_devs++;
+               goto unlock;
+       }
 
        ret = hpre_register_rsa(qm);
        if (ret)
-               return ret;
+               goto unlock;
 
        ret = hpre_register_dh(qm);
        if (ret)
@@ -2220,6 +2229,9 @@ int hpre_algs_register(struct hisi_qm *qm)
        if (ret)
                goto unreg_ecdh;
 
+       hpre_available_devs++;
+       mutex_unlock(&hpre_algs_lock);
+
        return ret;
 
 unreg_ecdh:
@@ -2228,13 +2240,22 @@ unreg_dh:
        hpre_unregister_dh(qm);
 unreg_rsa:
        hpre_unregister_rsa(qm);
+unlock:
+       mutex_unlock(&hpre_algs_lock);
        return ret;
 }
 
 void hpre_algs_unregister(struct hisi_qm *qm)
 {
+       mutex_lock(&hpre_algs_lock);
+       if (--hpre_available_devs)
+               goto unlock;
+
        hpre_unregister_x25519(qm);
        hpre_unregister_ecdh(qm);
        hpre_unregister_dh(qm);
        hpre_unregister_rsa(qm);
+
+unlock:
+       mutex_unlock(&hpre_algs_lock);
 }
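hpre_algs_register()/hpre_algs_unregister() now reference-count probed devices so the algorithms are registered with the crypto API exactly once, however many accelerators come and go. Skeleton of the pattern, simplified and with hypothetical do_real_* helpers:

static DEFINE_MUTEX(algs_lock);
static unsigned int available_devs;

static int algs_register(void)
{
        int ret = 0;

        mutex_lock(&algs_lock);
        if (available_devs++ == 0) {
                ret = do_real_register();       /* hypothetical */
                if (ret)
                        available_devs--;       /* stay at zero on failure */
        }
        mutex_unlock(&algs_lock);
        return ret;
}

static void algs_unregister(void)
{
        mutex_lock(&algs_lock);
        if (--available_devs == 0)
                do_real_unregister();           /* hypothetical */
        mutex_unlock(&algs_lock);
}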
index 39297ce70f441eababf9ad7825c09246882be443..56777099ef69651189f5d7fb7d4881ba54b04f32 100644 (file)
 #define HPRE_VIA_MSI_DSM               1
 #define HPRE_SQE_MASK_OFFSET           8
 #define HPRE_SQE_MASK_LEN              24
+#define HPRE_CTX_Q_NUM_DEF             1
 
 #define HPRE_DFX_BASE          0x301000
 #define HPRE_DFX_COMMON1               0x301400
@@ -433,8 +434,11 @@ static u32 uacce_mode = UACCE_MODE_NOUACCE;
 module_param_cb(uacce_mode, &hpre_uacce_mode_ops, &uacce_mode, 0444);
 MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);
 
+static bool pf_q_num_flag;
 static int pf_q_num_set(const char *val, const struct kernel_param *kp)
 {
+       pf_q_num_flag = true;
+
        return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_HPRE_PF);
 }
 
@@ -1033,7 +1037,7 @@ static int hpre_cluster_debugfs_init(struct hisi_qm *qm)
 
        for (i = 0; i < clusters_num; i++) {
                ret = snprintf(buf, HPRE_DBGFS_VAL_MAX_LEN, "cluster%d", i);
-               if (ret < 0)
+               if (ret >= HPRE_DBGFS_VAL_MAX_LEN)
                        return -EINVAL;
                tmp_d = debugfs_create_dir(buf, qm->debug.debug_root);
 
@@ -1157,6 +1161,8 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
                qm->qp_num = pf_q_num;
                qm->debug.curr_qm_qp_num = pf_q_num;
                qm->qm_list = &hpre_devices;
+               if (pf_q_num_flag)
+                       set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
        }
 
        ret = hisi_qm_init(qm);
@@ -1394,10 +1400,11 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        if (ret)
                dev_warn(&pdev->dev, "init debugfs fail!\n");
 
-       ret = hisi_qm_alg_register(qm, &hpre_devices);
+       hisi_qm_add_list(qm, &hpre_devices);
+       ret = hisi_qm_alg_register(qm, &hpre_devices, HPRE_CTX_Q_NUM_DEF);
        if (ret < 0) {
                pci_err(pdev, "fail to register algs to crypto!\n");
-               goto err_with_qm_start;
+               goto err_qm_del_list;
        }
 
        if (qm->uacce) {
@@ -1419,9 +1426,10 @@ static int hpre_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        return 0;
 
 err_with_alg_register:
-       hisi_qm_alg_unregister(qm, &hpre_devices);
+       hisi_qm_alg_unregister(qm, &hpre_devices, HPRE_CTX_Q_NUM_DEF);
 
-err_with_qm_start:
+err_qm_del_list:
+       hisi_qm_del_list(qm, &hpre_devices);
        hpre_debugfs_exit(qm);
        hisi_qm_stop(qm, QM_NORMAL);
 
@@ -1441,7 +1449,8 @@ static void hpre_remove(struct pci_dev *pdev)
 
        hisi_qm_pm_uninit(qm);
        hisi_qm_wait_task_finish(qm, &hpre_devices);
-       hisi_qm_alg_unregister(qm, &hpre_devices);
+       hisi_qm_alg_unregister(qm, &hpre_devices, HPRE_CTX_Q_NUM_DEF);
+       hisi_qm_del_list(qm, &hpre_devices);
        if (qm->fun_type == QM_HW_PF && qm->vfs_num)
                hisi_qm_sriov_disable(pdev, true);
 
index a99fd589445cef9bcc90dda56c933bc17537f999..18599f3634c3c1bc0adde2402cf8d12c119c0ae1 100644 (file)
@@ -46,7 +46,7 @@
 #define QM_QC_PASID_ENABLE_SHIFT       7
 
 #define QM_SQ_TYPE_MASK                        GENMASK(3, 0)
-#define QM_SQ_TAIL_IDX(sqc)            ((le16_to_cpu((sqc)->w11) >> 6) & 0x1)
+#define QM_SQ_TAIL_IDX(sqc)            ((le16_to_cpu((sqc).w11) >> 6) & 0x1)
 
 /* cqc shift */
 #define QM_CQ_HOP_NUM_SHIFT            0
@@ -58,7 +58,7 @@
 
 #define QM_CQE_PHASE(cqe)              (le16_to_cpu((cqe)->w7) & 0x1)
 #define QM_QC_CQE_SIZE                 4
-#define QM_CQ_TAIL_IDX(cqc)            ((le16_to_cpu((cqc)->w11) >> 6) & 0x1)
+#define QM_CQ_TAIL_IDX(cqc)            ((le16_to_cpu((cqc).w11) >> 6) & 0x1)
 
 /* eqc shift */
 #define QM_EQE_AEQE_SIZE               (2UL << 12)
@@ -69,6 +69,7 @@
 
 #define QM_AEQE_PHASE(aeqe)            ((le32_to_cpu((aeqe)->dw0) >> 16) & 0x1)
 #define QM_AEQE_TYPE_SHIFT             17
+#define QM_AEQE_TYPE_MASK              0xf
 #define QM_AEQE_CQN_MASK               GENMASK(15, 0)
 #define QM_CQ_OVERFLOW                 0
 #define QM_EQ_OVERFLOW                 1
 #define WAIT_PERIOD                    20
 #define REMOVE_WAIT_DELAY              10
 
-#define QM_DRIVER_REMOVING             0
-#define QM_RST_SCHED                   1
 #define QM_QOS_PARAM_NUM               2
 #define QM_QOS_MAX_VAL                 1000
 #define QM_QOS_RATE                    100
 #define QM_MK_SQC_DW3_V2(sqe_sz, sq_depth) \
        ((((u32)sq_depth) - 1) | ((u32)ilog2(sqe_sz) << QM_SQ_SQE_SIZE_SHIFT))
 
-#define INIT_QC_COMMON(qc, base, pasid) do {                   \
-       (qc)->head = 0;                                         \
-       (qc)->tail = 0;                                         \
-       (qc)->base_l = cpu_to_le32(lower_32_bits(base));        \
-       (qc)->base_h = cpu_to_le32(upper_32_bits(base));        \
-       (qc)->dw3 = 0;                                          \
-       (qc)->w8 = 0;                                           \
-       (qc)->rsvd0 = 0;                                        \
-       (qc)->pasid = cpu_to_le16(pasid);                       \
-       (qc)->w11 = 0;                                          \
-       (qc)->rsvd1 = 0;                                        \
-} while (0)
-
 enum vft_type {
        SQC_VFT = 0,
        CQC_VFT,
@@ -687,6 +673,59 @@ int hisi_qm_mb(struct hisi_qm *qm, u8 cmd, dma_addr_t dma_addr, u16 queue,
 }
 EXPORT_SYMBOL_GPL(hisi_qm_mb);
 
+/* op 0: set xqc information to hardware, 1: get xqc information from hardware. */
+int qm_set_and_get_xqc(struct hisi_qm *qm, u8 cmd, void *xqc, u32 qp_id, bool op)
+{
+       struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(qm->pdev));
+       struct qm_mailbox mailbox;
+       dma_addr_t xqc_dma;
+       void *tmp_xqc;
+       size_t size;
+       int ret;
+
+       switch (cmd) {
+       case QM_MB_CMD_SQC:
+               size = sizeof(struct qm_sqc);
+               tmp_xqc = qm->xqc_buf.sqc;
+               xqc_dma = qm->xqc_buf.sqc_dma;
+               break;
+       case QM_MB_CMD_CQC:
+               size = sizeof(struct qm_cqc);
+               tmp_xqc = qm->xqc_buf.cqc;
+               xqc_dma = qm->xqc_buf.cqc_dma;
+               break;
+       case QM_MB_CMD_EQC:
+               size = sizeof(struct qm_eqc);
+               tmp_xqc = qm->xqc_buf.eqc;
+               xqc_dma = qm->xqc_buf.eqc_dma;
+               break;
+       case QM_MB_CMD_AEQC:
+               size = sizeof(struct qm_aeqc);
+               tmp_xqc = qm->xqc_buf.aeqc;
+               xqc_dma = qm->xqc_buf.aeqc_dma;
+               break;
+       default:
+               dev_err(&qm->pdev->dev, "unknown mailbox cmd %u!\n", cmd);
+               return -EINVAL;
+       }
+
+       /* Setting xqc will fail if master OOO is blocked. */
+       if (qm_check_dev_error(pf_qm)) {
+               dev_err(&qm->pdev->dev, "failed to send mailbox since qm is stopped!\n");
+               return -EIO;
+       }
+
+       mutex_lock(&qm->mailbox_lock);
+       if (!op)
+               memcpy(tmp_xqc, xqc, size);
+
+       qm_mb_pre_init(&mailbox, cmd, xqc_dma, qp_id, op);
+       ret = qm_mb_nolock(qm, &mailbox);
+       if (!ret && op)
+               memcpy(xqc, tmp_xqc, size);
+
+       mutex_unlock(&qm->mailbox_lock);
+
+       return ret;
+}
+
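/*
 * Editorial sketch (not part of this patch): expected usage of the new
 * qm_set_and_get_xqc() helper. The caller works on a plain on-stack
 * context; op == 0 writes it to hardware, op == 1 reads it back. The
 * names mirror qm_sq_ctx_cfg() and qm_drain_qp() later in this diff.
 *
 *      struct qm_sqc sqc = {0};
 *      int ret;
 *
 *      ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 0);  // set
 *      ...
 *      ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 1);  // get
 *      if (!ret)
 *              pr_info("sq tail: %u\n", le16_to_cpu(sqc.tail));
 */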
 static void qm_db_v1(struct hisi_qm *qm, u16 qn, u8 cmd, u16 index, u8 priority)
 {
        u64 doorbell;
@@ -849,53 +888,23 @@ static void qm_poll_req_cb(struct hisi_qp *qp)
                qm_db(qm, qp->qp_id, QM_DOORBELL_CMD_CQ,
                      qp->qp_status.cq_head, 0);
                atomic_dec(&qp->qp_status.used);
+
+               cond_resched();
        }
 
        /* set c_flag */
        qm_db(qm, qp->qp_id, QM_DOORBELL_CMD_CQ, qp->qp_status.cq_head, 1);
 }
 
-static int qm_get_complete_eqe_num(struct hisi_qm_poll_data *poll_data)
-{
-       struct hisi_qm *qm = poll_data->qm;
-       struct qm_eqe *eqe = qm->eqe + qm->status.eq_head;
-       u16 eq_depth = qm->eq_depth;
-       int eqe_num = 0;
-       u16 cqn;
-
-       while (QM_EQE_PHASE(eqe) == qm->status.eqc_phase) {
-               cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK;
-               poll_data->qp_finish_id[eqe_num] = cqn;
-               eqe_num++;
-
-               if (qm->status.eq_head == eq_depth - 1) {
-                       qm->status.eqc_phase = !qm->status.eqc_phase;
-                       eqe = qm->eqe;
-                       qm->status.eq_head = 0;
-               } else {
-                       eqe++;
-                       qm->status.eq_head++;
-               }
-
-               if (eqe_num == (eq_depth >> 1) - 1)
-                       break;
-       }
-
-       qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
-
-       return eqe_num;
-}
-
 static void qm_work_process(struct work_struct *work)
 {
        struct hisi_qm_poll_data *poll_data =
                container_of(work, struct hisi_qm_poll_data, work);
        struct hisi_qm *qm = poll_data->qm;
+       u16 eqe_num = poll_data->eqe_num;
        struct hisi_qp *qp;
-       int eqe_num, i;
+       int i;
 
-       /* Get qp id of completed tasks and re-enable the interrupt. */
-       eqe_num = qm_get_complete_eqe_num(poll_data);
        for (i = eqe_num - 1; i >= 0; i--) {
                qp = &qm->qp_array[poll_data->qp_finish_id[i]];
                if (unlikely(atomic_read(&qp->qp_status.flags) == QP_STOP))
@@ -911,39 +920,55 @@ static void qm_work_process(struct work_struct *work)
        }
 }
 
-static bool do_qm_eq_irq(struct hisi_qm *qm)
+static void qm_get_complete_eqe_num(struct hisi_qm *qm)
 {
        struct qm_eqe *eqe = qm->eqe + qm->status.eq_head;
-       struct hisi_qm_poll_data *poll_data;
-       u16 cqn;
+       struct hisi_qm_poll_data *poll_data = NULL;
+       u16 eq_depth = qm->eq_depth;
+       u16 cqn, eqe_num = 0;
 
-       if (!readl(qm->io_base + QM_VF_EQ_INT_SOURCE))
-               return false;
+       if (QM_EQE_PHASE(eqe) != qm->status.eqc_phase) {
+               atomic64_inc(&qm->debug.dfx.err_irq_cnt);
+               qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
+               return;
+       }
 
-       if (QM_EQE_PHASE(eqe) == qm->status.eqc_phase) {
+       cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK;
+       if (unlikely(cqn >= qm->qp_num))
+               return;
+       poll_data = &qm->poll_data[cqn];
+
+       while (QM_EQE_PHASE(eqe) == qm->status.eqc_phase) {
                cqn = le32_to_cpu(eqe->dw0) & QM_EQE_CQN_MASK;
-               poll_data = &qm->poll_data[cqn];
-               queue_work(qm->wq, &poll_data->work);
+               poll_data->qp_finish_id[eqe_num] = cqn;
+               eqe_num++;
+
+               if (qm->status.eq_head == eq_depth - 1) {
+                       qm->status.eqc_phase = !qm->status.eqc_phase;
+                       eqe = qm->eqe;
+                       qm->status.eq_head = 0;
+               } else {
+                       eqe++;
+                       qm->status.eq_head++;
+               }
 
-               return true;
+               if (eqe_num == (eq_depth >> 1) - 1)
+                       break;
        }
 
-       return false;
+       poll_data->eqe_num = eqe_num;
+       queue_work(qm->wq, &poll_data->work);
+       qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
 }
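/*
 * Editorial note: after this rework the hard IRQ context itself walks
 * the event queue, stashes the completion count in poll_data->eqe_num,
 * and only then queues the work item, so qm_work_process() above no
 * longer re-reads the EQ. The EQ doorbell is rung once the batch is
 * collected, or immediately when the phase bit shows no new EQE.
 */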
 
 static irqreturn_t qm_eq_irq(int irq, void *data)
 {
        struct hisi_qm *qm = data;
-       bool ret;
-
-       ret = do_qm_eq_irq(qm);
-       if (ret)
-               return IRQ_HANDLED;
 
-       atomic64_inc(&qm->debug.dfx.err_irq_cnt);
-       qm_db(qm, 0, QM_DOORBELL_CMD_EQ, qm->status.eq_head, 0);
+       /* Get qp id of completed tasks and re-enable the interrupt */
+       qm_get_complete_eqe_num(qm);
 
-       return IRQ_NONE;
+       return IRQ_HANDLED;
 }
 
 static irqreturn_t qm_mb_cmd_irq(int irq, void *data)
@@ -1025,8 +1050,11 @@ static irqreturn_t qm_aeq_thread(int irq, void *data)
        u16 aeq_depth = qm->aeq_depth;
        u32 type, qp_id;
 
+       atomic64_inc(&qm->debug.dfx.aeq_irq_cnt);
+
        while (QM_AEQE_PHASE(aeqe) == qm->status.aeqc_phase) {
-               type = le32_to_cpu(aeqe->dw0) >> QM_AEQE_TYPE_SHIFT;
+               type = (le32_to_cpu(aeqe->dw0) >> QM_AEQE_TYPE_SHIFT) &
+                       QM_AEQE_TYPE_MASK;
                qp_id = le32_to_cpu(aeqe->dw0) & QM_AEQE_CQN_MASK;
 
                switch (type) {
@@ -1062,17 +1090,6 @@ static irqreturn_t qm_aeq_thread(int irq, void *data)
        return IRQ_HANDLED;
 }
 
-static irqreturn_t qm_aeq_irq(int irq, void *data)
-{
-       struct hisi_qm *qm = data;
-
-       atomic64_inc(&qm->debug.dfx.aeq_irq_cnt);
-       if (!readl(qm->io_base + QM_VF_AEQ_INT_SOURCE))
-               return IRQ_NONE;
-
-       return IRQ_WAKE_THREAD;
-}
-
 static void qm_init_qp_status(struct hisi_qp *qp)
 {
        struct hisi_qp_status *qp_status = &qp->qp_status;
@@ -1321,45 +1338,6 @@ static int qm_get_vft_v2(struct hisi_qm *qm, u32 *base, u32 *number)
        return 0;
 }
 
-void *hisi_qm_ctx_alloc(struct hisi_qm *qm, size_t ctx_size,
-                         dma_addr_t *dma_addr)
-{
-       struct device *dev = &qm->pdev->dev;
-       void *ctx_addr;
-
-       ctx_addr = kzalloc(ctx_size, GFP_KERNEL);
-       if (!ctx_addr)
-               return ERR_PTR(-ENOMEM);
-
-       *dma_addr = dma_map_single(dev, ctx_addr, ctx_size, DMA_FROM_DEVICE);
-       if (dma_mapping_error(dev, *dma_addr)) {
-               dev_err(dev, "DMA mapping error!\n");
-               kfree(ctx_addr);
-               return ERR_PTR(-ENOMEM);
-       }
-
-       return ctx_addr;
-}
-
-void hisi_qm_ctx_free(struct hisi_qm *qm, size_t ctx_size,
-                       const void *ctx_addr, dma_addr_t *dma_addr)
-{
-       struct device *dev = &qm->pdev->dev;
-
-       dma_unmap_single(dev, *dma_addr, ctx_size, DMA_FROM_DEVICE);
-       kfree(ctx_addr);
-}
-
-static int qm_dump_sqc_raw(struct hisi_qm *qm, dma_addr_t dma_addr, u16 qp_id)
-{
-       return hisi_qm_mb(qm, QM_MB_CMD_SQC, dma_addr, qp_id, 1);
-}
-
-static int qm_dump_cqc_raw(struct hisi_qm *qm, dma_addr_t dma_addr, u16 qp_id)
-{
-       return hisi_qm_mb(qm, QM_MB_CMD_CQC, dma_addr, qp_id, 1);
-}
-
 static void qm_hw_error_init_v1(struct hisi_qm *qm)
 {
        writel(QM_ABNORMAL_INT_MASK_VALUE, qm->io_base + QM_ABNORMAL_INT_MASK);
@@ -1952,84 +1930,51 @@ static void hisi_qm_release_qp(struct hisi_qp *qp)
 static int qm_sq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
 {
        struct hisi_qm *qm = qp->qm;
-       struct device *dev = &qm->pdev->dev;
        enum qm_hw_ver ver = qm->ver;
-       struct qm_sqc *sqc;
-       dma_addr_t sqc_dma;
-       int ret;
-
-       sqc = kzalloc(sizeof(struct qm_sqc), GFP_KERNEL);
-       if (!sqc)
-               return -ENOMEM;
+       struct qm_sqc sqc = {0};
 
-       INIT_QC_COMMON(sqc, qp->sqe_dma, pasid);
        if (ver == QM_HW_V1) {
-               sqc->dw3 = cpu_to_le32(QM_MK_SQC_DW3_V1(0, 0, 0, qm->sqe_size));
-               sqc->w8 = cpu_to_le16(qp->sq_depth - 1);
+               sqc.dw3 = cpu_to_le32(QM_MK_SQC_DW3_V1(0, 0, 0, qm->sqe_size));
+               sqc.w8 = cpu_to_le16(qp->sq_depth - 1);
        } else {
-               sqc->dw3 = cpu_to_le32(QM_MK_SQC_DW3_V2(qm->sqe_size, qp->sq_depth));
-               sqc->w8 = 0; /* rand_qc */
+               sqc.dw3 = cpu_to_le32(QM_MK_SQC_DW3_V2(qm->sqe_size, qp->sq_depth));
+               sqc.w8 = 0; /* rand_qc */
        }
-       sqc->cq_num = cpu_to_le16(qp_id);
-       sqc->w13 = cpu_to_le16(QM_MK_SQC_W13(0, 1, qp->alg_type));
+       sqc.w13 = cpu_to_le16(QM_MK_SQC_W13(0, 1, qp->alg_type));
+       sqc.base_l = cpu_to_le32(lower_32_bits(qp->sqe_dma));
+       sqc.base_h = cpu_to_le32(upper_32_bits(qp->sqe_dma));
+       sqc.cq_num = cpu_to_le16(qp_id);
+       sqc.pasid = cpu_to_le16(pasid);
 
        if (ver >= QM_HW_V3 && qm->use_sva && !qp->is_in_kernel)
-               sqc->w11 = cpu_to_le16(QM_QC_PASID_ENABLE <<
-                                      QM_QC_PASID_ENABLE_SHIFT);
-
-       sqc_dma = dma_map_single(dev, sqc, sizeof(struct qm_sqc),
-                                DMA_TO_DEVICE);
-       if (dma_mapping_error(dev, sqc_dma)) {
-               kfree(sqc);
-               return -ENOMEM;
-       }
+               sqc.w11 = cpu_to_le16(QM_QC_PASID_ENABLE <<
+                                     QM_QC_PASID_ENABLE_SHIFT);
 
-       ret = hisi_qm_mb(qm, QM_MB_CMD_SQC, sqc_dma, qp_id, 0);
-       dma_unmap_single(dev, sqc_dma, sizeof(struct qm_sqc), DMA_TO_DEVICE);
-       kfree(sqc);
-
-       return ret;
+       return qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 0);
 }
 
 static int qm_cq_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
 {
        struct hisi_qm *qm = qp->qm;
-       struct device *dev = &qm->pdev->dev;
        enum qm_hw_ver ver = qm->ver;
-       struct qm_cqc *cqc;
-       dma_addr_t cqc_dma;
-       int ret;
-
-       cqc = kzalloc(sizeof(struct qm_cqc), GFP_KERNEL);
-       if (!cqc)
-               return -ENOMEM;
+       struct qm_cqc cqc = {0};
 
-       INIT_QC_COMMON(cqc, qp->cqe_dma, pasid);
        if (ver == QM_HW_V1) {
-               cqc->dw3 = cpu_to_le32(QM_MK_CQC_DW3_V1(0, 0, 0,
-                                                       QM_QC_CQE_SIZE));
-               cqc->w8 = cpu_to_le16(qp->cq_depth - 1);
+               cqc.dw3 = cpu_to_le32(QM_MK_CQC_DW3_V1(0, 0, 0, QM_QC_CQE_SIZE));
+               cqc.w8 = cpu_to_le16(qp->cq_depth - 1);
        } else {
-               cqc->dw3 = cpu_to_le32(QM_MK_CQC_DW3_V2(QM_QC_CQE_SIZE, qp->cq_depth));
-               cqc->w8 = 0; /* rand_qc */
+               cqc.dw3 = cpu_to_le32(QM_MK_CQC_DW3_V2(QM_QC_CQE_SIZE, qp->cq_depth));
+               cqc.w8 = 0; /* rand_qc */
        }
-       cqc->dw6 = cpu_to_le32(1 << QM_CQ_PHASE_SHIFT | 1 << QM_CQ_FLAG_SHIFT);
+       cqc.dw6 = cpu_to_le32(1 << QM_CQ_PHASE_SHIFT | 1 << QM_CQ_FLAG_SHIFT);
+       cqc.base_l = cpu_to_le32(lower_32_bits(qp->cqe_dma));
+       cqc.base_h = cpu_to_le32(upper_32_bits(qp->cqe_dma));
+       cqc.pasid = cpu_to_le16(pasid);
 
        if (ver >= QM_HW_V3 && qm->use_sva && !qp->is_in_kernel)
-               cqc->w11 = cpu_to_le16(QM_QC_PASID_ENABLE);
+               cqc.w11 = cpu_to_le16(QM_QC_PASID_ENABLE);
 
-       cqc_dma = dma_map_single(dev, cqc, sizeof(struct qm_cqc),
-                                DMA_TO_DEVICE);
-       if (dma_mapping_error(dev, cqc_dma)) {
-               kfree(cqc);
-               return -ENOMEM;
-       }
-
-       ret = hisi_qm_mb(qm, QM_MB_CMD_CQC, cqc_dma, qp_id, 0);
-       dma_unmap_single(dev, cqc_dma, sizeof(struct qm_cqc), DMA_TO_DEVICE);
-       kfree(cqc);
-
-       return ret;
+       return qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp_id, 0);
 }
 
 static int qm_qp_ctx_cfg(struct hisi_qp *qp, int qp_id, u32 pasid)
@@ -2119,14 +2064,11 @@ static void qp_stop_fail_cb(struct hisi_qp *qp)
  */
 static int qm_drain_qp(struct hisi_qp *qp)
 {
-       size_t size = sizeof(struct qm_sqc) + sizeof(struct qm_cqc);
        struct hisi_qm *qm = qp->qm;
        struct device *dev = &qm->pdev->dev;
-       struct qm_sqc *sqc;
-       struct qm_cqc *cqc;
-       dma_addr_t dma_addr;
-       int ret = 0, i = 0;
-       void *addr;
+       struct qm_sqc sqc;
+       struct qm_cqc cqc;
+       int ret, i = 0;
 
        /* No need to check whether master OOO is blocked. */
        if (qm_check_dev_error(qm))
@@ -2140,44 +2082,32 @@ static int qm_drain_qp(struct hisi_qp *qp)
                return ret;
        }
 
-       addr = hisi_qm_ctx_alloc(qm, size, &dma_addr);
-       if (IS_ERR(addr)) {
-               dev_err(dev, "Failed to alloc ctx for sqc and cqc!\n");
-               return -ENOMEM;
-       }
-
        while (++i) {
-               ret = qm_dump_sqc_raw(qm, dma_addr, qp->qp_id);
+               ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp->qp_id, 1);
                if (ret) {
                        dev_err_ratelimited(dev, "Failed to dump sqc!\n");
-                       break;
+                       return ret;
                }
-               sqc = addr;
 
-               ret = qm_dump_cqc_raw(qm, (dma_addr + sizeof(struct qm_sqc)),
-                                     qp->qp_id);
+               ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp->qp_id, 1);
                if (ret) {
                        dev_err_ratelimited(dev, "Failed to dump cqc!\n");
-                       break;
+                       return ret;
                }
-               cqc = addr + sizeof(struct qm_sqc);
 
-               if ((sqc->tail == cqc->tail) &&
+               if ((sqc.tail == cqc.tail) &&
                    (QM_SQ_TAIL_IDX(sqc) == QM_CQ_TAIL_IDX(cqc)))
                        break;
 
                if (i == MAX_WAIT_COUNTS) {
                        dev_err(dev, "Fail to empty queue %u!\n", qp->qp_id);
-                       ret = -EBUSY;
-                       break;
+                       return -EBUSY;
                }
 
                usleep_range(WAIT_PERIOD_US_MIN, WAIT_PERIOD_US_MAX);
        }
 
-       hisi_qm_ctx_free(qm, size, addr, &dma_addr);
-
-       return ret;
+       return 0;
 }
 
 static int qm_stop_qp_nolock(struct hisi_qp *qp)
@@ -2824,7 +2754,6 @@ static void hisi_qm_pre_init(struct hisi_qm *qm)
        mutex_init(&qm->mailbox_lock);
        init_rwsem(&qm->qps_lock);
        qm->qp_in_used = 0;
-       qm->misc_ctl = false;
        if (test_bit(QM_SUPPORT_RPM, &qm->caps)) {
                if (!acpi_device_power_manageable(ACPI_COMPANION(&pdev->dev)))
                        dev_info(&pdev->dev, "_PS0 and _PR0 are not defined");
@@ -2890,11 +2819,20 @@ static void hisi_qm_unint_work(struct hisi_qm *qm)
        destroy_workqueue(qm->wq);
 }
 
+static void hisi_qm_free_rsv_buf(struct hisi_qm *qm)
+{
+       struct qm_dma *xqc_dma = &qm->xqc_buf.qcdma;
+       struct device *dev = &qm->pdev->dev;
+
+       dma_free_coherent(dev, xqc_dma->size, xqc_dma->va, xqc_dma->dma);
+}
+
 static void hisi_qm_memory_uninit(struct hisi_qm *qm)
 {
        struct device *dev = &qm->pdev->dev;
 
        hisi_qp_memory_uninit(qm, qm->qp_num);
+       hisi_qm_free_rsv_buf(qm);
        if (qm->qdma.va) {
                hisi_qm_cache_wb(qm);
                dma_free_coherent(dev, qm->qdma.size,
@@ -3016,62 +2954,26 @@ static void qm_disable_eq_aeq_interrupts(struct hisi_qm *qm)
 
 static int qm_eq_ctx_cfg(struct hisi_qm *qm)
 {
-       struct device *dev = &qm->pdev->dev;
-       struct qm_eqc *eqc;
-       dma_addr_t eqc_dma;
-       int ret;
-
-       eqc = kzalloc(sizeof(struct qm_eqc), GFP_KERNEL);
-       if (!eqc)
-               return -ENOMEM;
+       struct qm_eqc eqc = {0};
 
-       eqc->base_l = cpu_to_le32(lower_32_bits(qm->eqe_dma));
-       eqc->base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma));
+       eqc.base_l = cpu_to_le32(lower_32_bits(qm->eqe_dma));
+       eqc.base_h = cpu_to_le32(upper_32_bits(qm->eqe_dma));
        if (qm->ver == QM_HW_V1)
-               eqc->dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE);
-       eqc->dw6 = cpu_to_le32(((u32)qm->eq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
-
-       eqc_dma = dma_map_single(dev, eqc, sizeof(struct qm_eqc),
-                                DMA_TO_DEVICE);
-       if (dma_mapping_error(dev, eqc_dma)) {
-               kfree(eqc);
-               return -ENOMEM;
-       }
+               eqc.dw3 = cpu_to_le32(QM_EQE_AEQE_SIZE);
+       eqc.dw6 = cpu_to_le32(((u32)qm->eq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
 
-       ret = hisi_qm_mb(qm, QM_MB_CMD_EQC, eqc_dma, 0, 0);
-       dma_unmap_single(dev, eqc_dma, sizeof(struct qm_eqc), DMA_TO_DEVICE);
-       kfree(eqc);
-
-       return ret;
+       return qm_set_and_get_xqc(qm, QM_MB_CMD_EQC, &eqc, 0, 0);
 }
 
 static int qm_aeq_ctx_cfg(struct hisi_qm *qm)
 {
-       struct device *dev = &qm->pdev->dev;
-       struct qm_aeqc *aeqc;
-       dma_addr_t aeqc_dma;
-       int ret;
-
-       aeqc = kzalloc(sizeof(struct qm_aeqc), GFP_KERNEL);
-       if (!aeqc)
-               return -ENOMEM;
+       struct qm_aeqc aeqc = {0};
 
-       aeqc->base_l = cpu_to_le32(lower_32_bits(qm->aeqe_dma));
-       aeqc->base_h = cpu_to_le32(upper_32_bits(qm->aeqe_dma));
-       aeqc->dw6 = cpu_to_le32(((u32)qm->aeq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
+       aeqc.base_l = cpu_to_le32(lower_32_bits(qm->aeqe_dma));
+       aeqc.base_h = cpu_to_le32(upper_32_bits(qm->aeqe_dma));
+       aeqc.dw6 = cpu_to_le32(((u32)qm->aeq_depth - 1) | (1 << QM_EQC_PHASE_SHIFT));
 
-       aeqc_dma = dma_map_single(dev, aeqc, sizeof(struct qm_aeqc),
-                                 DMA_TO_DEVICE);
-       if (dma_mapping_error(dev, aeqc_dma)) {
-               kfree(aeqc);
-               return -ENOMEM;
-       }
-
-       ret = hisi_qm_mb(qm, QM_MB_CMD_AEQC, aeqc_dma, 0, 0);
-       dma_unmap_single(dev, aeqc_dma, sizeof(struct qm_aeqc), DMA_TO_DEVICE);
-       kfree(aeqc);
-
-       return ret;
+       return qm_set_and_get_xqc(qm, QM_MB_CMD_AEQC, &aeqc, 0, 0);
 }
 
 static int qm_eq_aeq_ctx_cfg(struct hisi_qm *qm)
@@ -4861,63 +4763,48 @@ static void qm_cmd_process(struct work_struct *cmd_process)
 }
 
 /**
- * hisi_qm_alg_register() - Register alg to crypto and add qm to qm_list.
+ * hisi_qm_alg_register() - Register alg to crypto.
  * @qm: The qm to be registered.
  * @qm_list: The qm list.
+ * @guard: Minimum qp_num required to register the algorithms.
  *
- * This function adds qm to qm list, and will register algorithm to
- * crypto when the qm list is empty.
+ * Register the algorithms to crypto when the function's qp_num satisfies the guard.
  */
-int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard)
 {
        struct device *dev = &qm->pdev->dev;
-       int flag = 0;
-       int ret = 0;
-
-       mutex_lock(&qm_list->lock);
-       if (list_empty(&qm_list->list))
-               flag = 1;
-       list_add_tail(&qm->list, &qm_list->list);
-       mutex_unlock(&qm_list->lock);
 
        if (qm->ver <= QM_HW_V2 && qm->use_sva) {
                dev_info(dev, "HW V2 not both use uacce sva mode and hardware crypto algs.\n");
                return 0;
        }
 
-       if (flag) {
-               ret = qm_list->register_to_crypto(qm);
-               if (ret) {
-                       mutex_lock(&qm_list->lock);
-                       list_del(&qm->list);
-                       mutex_unlock(&qm_list->lock);
-               }
+       if (qm->qp_num < guard) {
+               dev_info(dev, "qp_num is less than the task needs.\n");
+               return 0;
        }
 
-       return ret;
+       return qm_list->register_to_crypto(qm);
 }
 EXPORT_SYMBOL_GPL(hisi_qm_alg_register);
 
 /**
- * hisi_qm_alg_unregister() - Unregister alg from crypto and delete qm from
- * qm list.
+ * hisi_qm_alg_unregister() - Unregister alg from crypto.
  * @qm: The qm to be unregistered.
  * @qm_list: The qm list.
+ * @guard: Minimum qp_num required for registration.
  *
- * This function deletes qm from qm list, and will unregister algorithm
- * from crypto when the qm list is empty.
+ * Unregister the algorithms from crypto when the function's qp_num satisfies the guard.
  */
-void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard)
 {
-       mutex_lock(&qm_list->lock);
-       list_del(&qm->list);
-       mutex_unlock(&qm_list->lock);
-
        if (qm->ver <= QM_HW_V2 && qm->use_sva)
                return;
 
-       if (list_empty(&qm_list->list))
-               qm_list->unregister_from_crypto(qm);
+       if (qm->qp_num < guard)
+               return;
+
+       qm_list->unregister_from_crypto(qm);
 }
 EXPORT_SYMBOL_GPL(hisi_qm_alg_unregister);
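/*
 * Editorial sketch (not part of this patch): the new calling convention
 * for drivers, as used by hpre_probe()/sec_probe()/hisi_zip_probe()
 * elsewhere in this diff. List membership and crypto registration are
 * now separate steps, and the guard is the per-driver minimum queue
 * count:
 *
 *      hisi_qm_add_list(qm, &sec_devices);
 *      ret = hisi_qm_alg_register(qm, &sec_devices, ctx_q_num);
 *      if (ret < 0)
 *              goto err_qm_del_list;
 *      ...
 *      hisi_qm_alg_unregister(qm, &sec_devices, ctx_q_num);
 *      hisi_qm_del_list(qm, &sec_devices);
 */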
 
@@ -5013,8 +4900,8 @@ static int qm_register_aeq_irq(struct hisi_qm *qm)
                return 0;
 
        irq_vector = val & QM_IRQ_VECTOR_MASK;
-       ret = request_threaded_irq(pci_irq_vector(pdev, irq_vector), qm_aeq_irq,
-                                                  qm_aeq_thread, 0, qm->dev_name, qm);
+       ret = request_threaded_irq(pci_irq_vector(pdev, irq_vector), NULL,
+                                                  qm_aeq_thread, IRQF_ONESHOT, qm->dev_name, qm);
        if (ret)
                dev_err(&pdev->dev, "failed to request eq irq, ret = %d", ret);
 
@@ -5093,6 +4980,7 @@ free_eq_irq:
 
 static int qm_get_qp_num(struct hisi_qm *qm)
 {
+       struct device *dev = &qm->pdev->dev;
        bool is_db_isolation;
 
        /* VF's qp_num assigned by PF in v2, and VF can get qp_num by vft. */
@@ -5109,13 +4997,21 @@ static int qm_get_qp_num(struct hisi_qm *qm)
        qm->max_qp_num = hisi_qm_get_hw_info(qm, qm_basic_info,
                                             QM_FUNC_MAX_QP_CAP, is_db_isolation);
 
-       /* check if qp number is valid */
-       if (qm->qp_num > qm->max_qp_num) {
-               dev_err(&qm->pdev->dev, "qp num(%u) is more than max qp num(%u)!\n",
+       if (qm->qp_num <= qm->max_qp_num)
+               return 0;
+
+       if (test_bit(QM_MODULE_PARAM, &qm->misc_ctl)) {
+               /* Check whether the qp number set via the module parameter is valid */
+               dev_err(dev, "qp num(%u) is more than max qp num(%u)!\n",
                        qm->qp_num, qm->max_qp_num);
                return -EINVAL;
        }
 
+       dev_info(dev, "Default qp num(%u) is too big, resetting it to the function's max qp num(%u)!\n",
+                qm->qp_num, qm->max_qp_num);
+       qm->qp_num = qm->max_qp_num;
+       qm->debug.curr_qm_qp_num = qm->qp_num;
+
        return 0;
 }
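/*
 * Editorial note: the two paths above distinguish how qp_num was set.
 * A value pinned via the module parameter (for example a hypothetical
 * "modprobe hisi_zip pf_q_num=64") is treated as a hard request and
 * rejected if it exceeds the hardware limit, while the built-in default
 * is silently clamped to the function's maximum.
 */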
 
@@ -5303,6 +5199,36 @@ err_init_qp_mem:
        return ret;
 }
 
+static int hisi_qm_alloc_rsv_buf(struct hisi_qm *qm)
+{
+       struct qm_rsv_buf *xqc_buf = &qm->xqc_buf;
+       struct qm_dma *xqc_dma = &xqc_buf->qcdma;
+       struct device *dev = &qm->pdev->dev;
+       size_t off = 0;
+
+#define QM_XQC_BUF_INIT(xqc_buf, type) do { \
+       (xqc_buf)->type = ((xqc_buf)->qcdma.va + (off)); \
+       (xqc_buf)->type##_dma = (xqc_buf)->qcdma.dma + (off); \
+       off += QMC_ALIGN(sizeof(struct qm_##type)); \
+} while (0)
+
+       xqc_dma->size = QMC_ALIGN(sizeof(struct qm_eqc)) +
+                       QMC_ALIGN(sizeof(struct qm_aeqc)) +
+                       QMC_ALIGN(sizeof(struct qm_sqc)) +
+                       QMC_ALIGN(sizeof(struct qm_cqc));
+       xqc_dma->va = dma_alloc_coherent(dev, xqc_dma->size,
+                                        &xqc_dma->dma, GFP_KERNEL);
+       if (!xqc_dma->va)
+               return -ENOMEM;
+
+       QM_XQC_BUF_INIT(xqc_buf, eqc);
+       QM_XQC_BUF_INIT(xqc_buf, aeqc);
+       QM_XQC_BUF_INIT(xqc_buf, sqc);
+       QM_XQC_BUF_INIT(xqc_buf, cqc);
+
+       return 0;
+}
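/*
 * Editorial sketch: what a single QM_XQC_BUF_INIT() invocation above
 * expands to, taking the eqc slot as an example (off starts at 0):
 *
 *      xqc_buf->eqc = xqc_buf->qcdma.va + 0;
 *      xqc_buf->eqc_dma = xqc_buf->qcdma.dma + 0;
 *      off += QMC_ALIGN(sizeof(struct qm_eqc));
 *
 * Each context type thus gets an aligned slice of one coherent DMA
 * allocation, which is what qm_set_and_get_xqc() hands to the mailbox.
 */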
+
 static int hisi_qm_memory_init(struct hisi_qm *qm)
 {
        struct device *dev = &qm->pdev->dev;
@@ -5344,13 +5270,19 @@ static int hisi_qm_memory_init(struct hisi_qm *qm)
        QM_INIT_BUF(qm, sqc, qm->qp_num);
        QM_INIT_BUF(qm, cqc, qm->qp_num);
 
+       ret = hisi_qm_alloc_rsv_buf(qm);
+       if (ret)
+               goto err_free_qdma;
+
        ret = hisi_qp_alloc_memory(qm);
        if (ret)
-               goto err_alloc_qp_array;
+               goto err_free_reserve_buf;
 
        return 0;
 
-err_alloc_qp_array:
+err_free_reserve_buf:
+       hisi_qm_free_rsv_buf(qm);
+err_free_qdma:
        dma_free_coherent(dev, qm->qdma.size, qm->qdma.va, qm->qdma.dma);
 err_destroy_idr:
        idr_destroy(&qm->qp_idr);
index 1406a422d4551714b6e9453616b1e0e68e7b8449..7b0b15c83ec1205fa69bad6f48f3f8fd9a9353fa 100644 (file)
@@ -4,7 +4,6 @@
 #define QM_COMMON_H
 
 #define QM_DBG_READ_LEN                256
-#define QM_RESETTING           2
 
 struct qm_cqe {
        __le32 rsvd0;
@@ -77,10 +76,7 @@ static const char * const qm_s[] = {
        "init", "start", "close", "stop",
 };
 
-void *hisi_qm_ctx_alloc(struct hisi_qm *qm, size_t ctx_size,
-                       dma_addr_t *dma_addr);
-void hisi_qm_ctx_free(struct hisi_qm *qm, size_t ctx_size,
-                     const void *ctx_addr, dma_addr_t *dma_addr);
+int qm_set_and_get_xqc(struct hisi_qm *qm, u8 cmd, void *xqc, u32 qp_id, bool op);
 void hisi_qm_show_last_dfx_regs(struct hisi_qm *qm);
 void hisi_qm_set_algqos_init(struct hisi_qm *qm);
 
index e1e08993de125136eaa9a3b0c7a5bd1de6261231..afdddf87cc348aca7d8acf129cea2aa905a8cd73 100644 (file)
@@ -1271,7 +1271,7 @@ queues_unconfig:
        return ret;
 }
 
-static int sec_remove(struct platform_device *pdev)
+static void sec_remove(struct platform_device *pdev)
 {
        struct sec_dev_info *info = platform_get_drvdata(pdev);
        int i;
@@ -1287,8 +1287,6 @@ static int sec_remove(struct platform_device *pdev)
        }
 
        sec_base_exit(info);
-
-       return 0;
 }
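/*
 * Editorial note: this and the analogous conversions below move the
 * platform drivers to the void-returning remove callback, wired up via
 * the transitional .remove_new member, so the "return 0" boilerplate
 * can simply be dropped.
 */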
 
 static const __maybe_unused struct of_device_id sec_match[] = {
@@ -1306,7 +1304,7 @@ MODULE_DEVICE_TABLE(acpi, sec_acpi_match);
 
 static struct platform_driver sec_driver = {
        .probe = sec_probe,
-       .remove = sec_remove,
+       .remove_new = sec_remove,
        .driver = {
                .name = "hisi_sec_platform_driver",
                .of_match_table = sec_match,
index 074e50ef512c117bdbe8904393f9150f5dfde5b5..6fcabbc87860a6419b426d31b8a48e8f94e2d75a 100644 (file)
 #define IV_CTR_INIT            0x1
 #define IV_BYTE_OFFSET         0x8
 
+static DEFINE_MUTEX(sec_algs_lock);
+static unsigned int sec_available_devs;
+
 struct sec_skcipher {
        u64 alg_msk;
        struct skcipher_alg alg;
@@ -1011,6 +1014,7 @@ static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
                ret = sec_aead_mac_init(a_req);
                if (unlikely(ret)) {
                        dev_err(dev, "fail to init mac data for ICV!\n");
+                       hisi_acc_sg_buf_unmap(dev, src, req->in);
                        return ret;
                }
        }
@@ -2544,16 +2548,31 @@ err:
 int sec_register_to_crypto(struct hisi_qm *qm)
 {
        u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
-       int ret;
+       int ret = 0;
+
+       mutex_lock(&sec_algs_lock);
+       if (sec_available_devs) {
+               sec_available_devs++;
+               goto unlock;
+       }
 
        ret = sec_register_skcipher(alg_mask);
        if (ret)
-               return ret;
+               goto unlock;
 
        ret = sec_register_aead(alg_mask);
        if (ret)
-               sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
+               goto unreg_skcipher;
 
+       sec_available_devs++;
+       mutex_unlock(&sec_algs_lock);
+
+       return 0;
+
+unreg_skcipher:
+       sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
+unlock:
+       mutex_unlock(&sec_algs_lock);
        return ret;
 }
 
@@ -2561,6 +2580,13 @@ void sec_unregister_from_crypto(struct hisi_qm *qm)
 {
        u64 alg_mask = sec_get_alg_bitmap(qm, SEC_DRV_ALG_BITMAP_HIGH, SEC_DRV_ALG_BITMAP_LOW);
 
+       mutex_lock(&sec_algs_lock);
+       if (--sec_available_devs)
+               goto unlock;
+
        sec_unregister_aead(alg_mask, ARRAY_SIZE(sec_aeads));
        sec_unregister_skcipher(alg_mask, ARRAY_SIZE(sec_skciphers));
+
+unlock:
+       mutex_unlock(&sec_algs_lock);
 }
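/*
 * Editorial note: sec_available_devs acts as a registration refcount
 * under sec_algs_lock: only the first device to arrive registers the
 * skcipher/aead algorithms and only the last one to leave unregisters
 * them, so the crypto API sees each algorithm exactly once no matter
 * how many accelerators are probed.
 */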
index 77f9f131b85035eeb3494a2fc08ff03aecb5ae7b..0e56a47eb86263a97406ac975e3156977717bea5 100644 (file)
@@ -311,8 +311,11 @@ static int sec_diff_regs_show(struct seq_file *s, void *unused)
 }
 DEFINE_SHOW_ATTRIBUTE(sec_diff_regs);
 
+static bool pf_q_num_flag;
 static int sec_pf_q_num_set(const char *val, const struct kernel_param *kp)
 {
+       pf_q_num_flag = true;
+
        return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_SEC_PF);
 }
 
@@ -1120,6 +1123,8 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
                qm->qp_num = pf_q_num;
                qm->debug.curr_qm_qp_num = pf_q_num;
                qm->qm_list = &sec_devices;
+               if (pf_q_num_flag)
+                       set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
        } else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
                /*
                 * have no way to get qm configure in VM in v1 hardware,
@@ -1229,15 +1234,11 @@ static int sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        if (ret)
                pci_warn(pdev, "Failed to init debugfs!\n");
 
-       if (qm->qp_num >= ctx_q_num) {
-               ret = hisi_qm_alg_register(qm, &sec_devices);
-               if (ret < 0) {
-                       pr_err("Failed to register driver to crypto.\n");
-                       goto err_qm_stop;
-               }
-       } else {
-               pci_warn(qm->pdev,
-                       "Failed to use kernel mode, qp not enough!\n");
+       hisi_qm_add_list(qm, &sec_devices);
+       ret = hisi_qm_alg_register(qm, &sec_devices, ctx_q_num);
+       if (ret < 0) {
+               pr_err("Failed to register driver to crypto.\n");
+               goto err_qm_del_list;
        }
 
        if (qm->uacce) {
@@ -1259,9 +1260,9 @@ static int sec_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        return 0;
 
 err_alg_unregister:
-       if (qm->qp_num >= ctx_q_num)
-               hisi_qm_alg_unregister(qm, &sec_devices);
-err_qm_stop:
+       hisi_qm_alg_unregister(qm, &sec_devices, ctx_q_num);
+err_qm_del_list:
+       hisi_qm_del_list(qm, &sec_devices);
        sec_debugfs_exit(qm);
        hisi_qm_stop(qm, QM_NORMAL);
 err_probe_uninit:
@@ -1278,8 +1279,8 @@ static void sec_remove(struct pci_dev *pdev)
 
        hisi_qm_pm_uninit(qm);
        hisi_qm_wait_task_finish(qm, &sec_devices);
-       if (qm->qp_num >= ctx_q_num)
-               hisi_qm_alg_unregister(qm, &sec_devices);
+       hisi_qm_alg_unregister(qm, &sec_devices, ctx_q_num);
+       hisi_qm_del_list(qm, &sec_devices);
 
        if (qm->fun_type == QM_HW_PF && qm->vfs_num)
                hisi_qm_sriov_disable(pdev, true);
index 97e500db0a8259b27f7186453f9153a55ad805d8..451b167bcc73dcd3d70bb4c04a266fa3b5b07c07 100644 (file)
@@ -303,7 +303,7 @@ err_remove_from_list:
        return ret;
 }
 
-static int hisi_trng_remove(struct platform_device *pdev)
+static void hisi_trng_remove(struct platform_device *pdev)
 {
        struct hisi_trng *trng = platform_get_drvdata(pdev);
 
@@ -314,8 +314,6 @@ static int hisi_trng_remove(struct platform_device *pdev)
        if (trng->ver != HISI_TRNG_VER_V1 &&
            atomic_dec_return(&trng_active_devs) == 0)
                crypto_unregister_rng(&hisi_trng_alg);
-
-       return 0;
 }
 
 static const struct acpi_device_id hisi_trng_acpi_match[] = {
@@ -326,7 +324,7 @@ MODULE_DEVICE_TABLE(acpi, hisi_trng_acpi_match);
 
 static struct platform_driver hisi_trng_driver = {
        .probe          = hisi_trng_probe,
-       .remove         = hisi_trng_remove,
+       .remove_new     = hisi_trng_remove,
        .driver         = {
                .name   = "hisi-trng-v2",
                .acpi_match_table = ACPI_PTR(hisi_trng_acpi_match),
index 6608971d10cdc18ef73bcfc9a0af89d825314710..c650c741a18d8ab7ec5ae9ccd547d516132fba17 100644 (file)
 #define HZIP_OUT_SGE_DATA_OFFSET_M             GENMASK(23, 0)
 /* hisi_zip_sqe dw9 */
 #define HZIP_REQ_TYPE_M                                GENMASK(7, 0)
-#define HZIP_ALG_TYPE_ZLIB                     0x02
-#define HZIP_ALG_TYPE_GZIP                     0x03
+#define HZIP_ALG_TYPE_DEFLATE                  0x01
 #define HZIP_BUF_TYPE_M                                GENMASK(11, 8)
-#define HZIP_PBUFFER                           0x0
 #define HZIP_SGL                               0x1
 
-#define HZIP_ZLIB_HEAD_SIZE                    2
-#define HZIP_GZIP_HEAD_SIZE                    10
-
-#define GZIP_HEAD_FHCRC_BIT                    BIT(1)
-#define GZIP_HEAD_FEXTRA_BIT                   BIT(2)
-#define GZIP_HEAD_FNAME_BIT                    BIT(3)
-#define GZIP_HEAD_FCOMMENT_BIT                 BIT(4)
-
-#define GZIP_HEAD_FLG_SHIFT                    3
-#define GZIP_HEAD_FEXTRA_SHIFT                 10
-#define GZIP_HEAD_FEXTRA_XLEN                  2UL
-#define GZIP_HEAD_FHCRC_SIZE                   2
-
-#define HZIP_GZIP_HEAD_BUF                     256
 #define HZIP_ALG_PRIORITY                      300
 #define HZIP_SGL_SGE_NR                                10
 
-#define HZIP_ALG_ZLIB                          GENMASK(1, 0)
-#define HZIP_ALG_GZIP                          GENMASK(3, 2)
+#define HZIP_ALG_DEFLATE                       GENMASK(5, 4)
 
-static const u8 zlib_head[HZIP_ZLIB_HEAD_SIZE] = {0x78, 0x9c};
-static const u8 gzip_head[HZIP_GZIP_HEAD_SIZE] = {
-       0x1f, 0x8b, 0x08, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x03
-};
+static DEFINE_MUTEX(zip_algs_lock);
+static unsigned int zip_available_devs;
 
 enum hisi_zip_alg_type {
        HZIP_ALG_TYPE_COMP = 0,
@@ -59,21 +40,10 @@ enum {
 };
 
 #define COMP_NAME_TO_TYPE(alg_name)                                    \
-       (!strcmp((alg_name), "zlib-deflate") ? HZIP_ALG_TYPE_ZLIB :     \
-        !strcmp((alg_name), "gzip") ? HZIP_ALG_TYPE_GZIP : 0)          \
-
-#define TO_HEAD_SIZE(req_type)                                         \
-       (((req_type) == HZIP_ALG_TYPE_ZLIB) ? sizeof(zlib_head) :       \
-        ((req_type) == HZIP_ALG_TYPE_GZIP) ? sizeof(gzip_head) : 0)    \
-
-#define TO_HEAD(req_type)                                              \
-       (((req_type) == HZIP_ALG_TYPE_ZLIB) ? zlib_head :               \
-        ((req_type) == HZIP_ALG_TYPE_GZIP) ? gzip_head : NULL)         \
+       (!strcmp((alg_name), "deflate") ? HZIP_ALG_TYPE_DEFLATE : 0)
 
 struct hisi_zip_req {
        struct acomp_req *req;
-       u32 sskip;
-       u32 dskip;
        struct hisi_acc_hw_sgl *hw_src;
        struct hisi_acc_hw_sgl *hw_dst;
        dma_addr_t dma_src;
@@ -138,85 +108,8 @@ static u16 sgl_sge_nr = HZIP_SGL_SGE_NR;
 module_param_cb(sgl_sge_nr, &sgl_sge_nr_ops, &sgl_sge_nr, 0444);
 MODULE_PARM_DESC(sgl_sge_nr, "Number of sge in sgl(1-255)");
 
-static u32 get_extra_field_size(const u8 *start)
-{
-       return *((u16 *)start) + GZIP_HEAD_FEXTRA_XLEN;
-}
-
-static u32 get_name_field_size(const u8 *start)
-{
-       return strlen(start) + 1;
-}
-
-static u32 get_comment_field_size(const u8 *start)
-{
-       return strlen(start) + 1;
-}
-
-static u32 __get_gzip_head_size(const u8 *src)
-{
-       u8 head_flg = *(src + GZIP_HEAD_FLG_SHIFT);
-       u32 size = GZIP_HEAD_FEXTRA_SHIFT;
-
-       if (head_flg & GZIP_HEAD_FEXTRA_BIT)
-               size += get_extra_field_size(src + size);
-       if (head_flg & GZIP_HEAD_FNAME_BIT)
-               size += get_name_field_size(src + size);
-       if (head_flg & GZIP_HEAD_FCOMMENT_BIT)
-               size += get_comment_field_size(src + size);
-       if (head_flg & GZIP_HEAD_FHCRC_BIT)
-               size += GZIP_HEAD_FHCRC_SIZE;
-
-       return size;
-}
-
-static u32 __maybe_unused get_gzip_head_size(struct scatterlist *sgl)
-{
-       char buf[HZIP_GZIP_HEAD_BUF];
-
-       sg_copy_to_buffer(sgl, sg_nents(sgl), buf, sizeof(buf));
-
-       return __get_gzip_head_size(buf);
-}
-
-static int add_comp_head(struct scatterlist *dst, u8 req_type)
-{
-       int head_size = TO_HEAD_SIZE(req_type);
-       const u8 *head = TO_HEAD(req_type);
-       int ret;
-
-       ret = sg_copy_from_buffer(dst, sg_nents(dst), head, head_size);
-       if (unlikely(ret != head_size)) {
-               pr_err("the head size of buffer is wrong (%d)!\n", ret);
-               return -ENOMEM;
-       }
-
-       return head_size;
-}
-
-static int get_comp_head_size(struct acomp_req *acomp_req, u8 req_type)
-{
-       if (unlikely(!acomp_req->src || !acomp_req->slen))
-               return -EINVAL;
-
-       if (unlikely(req_type == HZIP_ALG_TYPE_GZIP &&
-                    acomp_req->slen < GZIP_HEAD_FEXTRA_SHIFT))
-               return -EINVAL;
-
-       switch (req_type) {
-       case HZIP_ALG_TYPE_ZLIB:
-               return TO_HEAD_SIZE(HZIP_ALG_TYPE_ZLIB);
-       case HZIP_ALG_TYPE_GZIP:
-               return TO_HEAD_SIZE(HZIP_ALG_TYPE_GZIP);
-       default:
-               pr_err("request type does not support!\n");
-               return -EINVAL;
-       }
-}
-
-static struct hisi_zip_req *hisi_zip_create_req(struct acomp_req *req,
-                                               struct hisi_zip_qp_ctx *qp_ctx,
-                                               size_t head_size, bool is_comp)
+static struct hisi_zip_req *hisi_zip_create_req(struct hisi_zip_qp_ctx *qp_ctx,
+                                               struct acomp_req *req)
 {
        struct hisi_zip_req_q *req_q = &qp_ctx->req_q;
        struct hisi_zip_req *q = req_q->q;
@@ -239,14 +132,6 @@ static struct hisi_zip_req *hisi_zip_create_req(struct acomp_req *req,
        req_cache->req_id = req_id;
        req_cache->req = req;
 
-       if (is_comp) {
-               req_cache->sskip = 0;
-               req_cache->dskip = head_size;
-       } else {
-               req_cache->sskip = head_size;
-               req_cache->dskip = 0;
-       }
-
        return req_cache;
 }
 
@@ -272,10 +157,8 @@ static void hisi_zip_fill_buf_size(struct hisi_zip_sqe *sqe, struct hisi_zip_req
 {
        struct acomp_req *a_req = req->req;
 
-       sqe->input_data_length = a_req->slen - req->sskip;
-       sqe->dest_avail_out = a_req->dlen - req->dskip;
-       sqe->dw7 = FIELD_PREP(HZIP_IN_SGE_DATA_OFFSET_M, req->sskip);
-       sqe->dw8 = FIELD_PREP(HZIP_OUT_SGE_DATA_OFFSET_M, req->dskip);
+       sqe->input_data_length = a_req->slen;
+       sqe->dest_avail_out = a_req->dlen;
 }
 
 static void hisi_zip_fill_buf_type(struct hisi_zip_sqe *sqe, u8 buf_type)
@@ -296,12 +179,7 @@ static void hisi_zip_fill_req_type(struct hisi_zip_sqe *sqe, u8 req_type)
        sqe->dw9 = val;
 }
 
-static void hisi_zip_fill_tag_v1(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req)
-{
-       sqe->dw13 = req->req_id;
-}
-
-static void hisi_zip_fill_tag_v2(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req)
+static void hisi_zip_fill_tag(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req)
 {
        sqe->dw26 = req->req_id;
 }
@@ -330,8 +208,8 @@ static void hisi_zip_fill_sqe(struct hisi_zip_ctx *ctx, struct hisi_zip_sqe *sqe
        ops->fill_sqe_type(sqe, ops->sqe_type);
 }
 
-static int hisi_zip_do_work(struct hisi_zip_req *req,
-                           struct hisi_zip_qp_ctx *qp_ctx)
+static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
+                           struct hisi_zip_req *req)
 {
        struct hisi_acc_sgl_pool *pool = qp_ctx->sgl_pool;
        struct hisi_zip_dfx *dfx = &qp_ctx->zip_dev->dfx;
@@ -383,12 +261,7 @@ err_unmap_input:
        return ret;
 }
 
-static u32 hisi_zip_get_tag_v1(struct hisi_zip_sqe *sqe)
-{
-       return sqe->dw13;
-}
-
-static u32 hisi_zip_get_tag_v2(struct hisi_zip_sqe *sqe)
+static u32 hisi_zip_get_tag(struct hisi_zip_sqe *sqe)
 {
        return sqe->dw26;
 }
@@ -414,8 +287,8 @@ static void hisi_zip_acomp_cb(struct hisi_qp *qp, void *data)
        u32 tag = ops->get_tag(sqe);
        struct hisi_zip_req *req = req_q->q + tag;
        struct acomp_req *acomp_req = req->req;
-       u32 status, dlen, head_size;
        int err = 0;
+       u32 status;
 
        atomic64_inc(&dfx->recv_cnt);
        status = ops->get_status(sqe);
@@ -427,13 +300,10 @@ static void hisi_zip_acomp_cb(struct hisi_qp *qp, void *data)
                err = -EIO;
        }
 
-       dlen = ops->get_dstlen(sqe);
-
        hisi_acc_sg_buf_unmap(dev, acomp_req->src, req->hw_src);
        hisi_acc_sg_buf_unmap(dev, acomp_req->dst, req->hw_dst);
 
-       head_size = (qp->alg_type == 0) ? TO_HEAD_SIZE(qp->req_type) : 0;
-       acomp_req->dlen = dlen + head_size;
+       acomp_req->dlen = ops->get_dstlen(sqe);
 
        if (acomp_req->base.complete)
                acomp_request_complete(acomp_req, err);
@@ -447,22 +317,13 @@ static int hisi_zip_acompress(struct acomp_req *acomp_req)
        struct hisi_zip_qp_ctx *qp_ctx = &ctx->qp_ctx[HZIP_QPC_COMP];
        struct device *dev = &qp_ctx->qp->qm->pdev->dev;
        struct hisi_zip_req *req;
-       int head_size;
        int ret;
 
-       /* let's output compression head now */
-       head_size = add_comp_head(acomp_req->dst, qp_ctx->qp->req_type);
-       if (unlikely(head_size < 0)) {
-               dev_err_ratelimited(dev, "failed to add comp head (%d)!\n",
-                                   head_size);
-               return head_size;
-       }
-
-       req = hisi_zip_create_req(acomp_req, qp_ctx, head_size, true);
+       req = hisi_zip_create_req(qp_ctx, acomp_req);
        if (IS_ERR(req))
                return PTR_ERR(req);
 
-       ret = hisi_zip_do_work(req, qp_ctx);
+       ret = hisi_zip_do_work(qp_ctx, req);
        if (unlikely(ret != -EINPROGRESS)) {
                dev_info_ratelimited(dev, "failed to do compress (%d)!\n", ret);
                hisi_zip_remove_req(qp_ctx, req);
@@ -477,20 +338,13 @@ static int hisi_zip_adecompress(struct acomp_req *acomp_req)
        struct hisi_zip_qp_ctx *qp_ctx = &ctx->qp_ctx[HZIP_QPC_DECOMP];
        struct device *dev = &qp_ctx->qp->qm->pdev->dev;
        struct hisi_zip_req *req;
-       int head_size, ret;
-
-       head_size = get_comp_head_size(acomp_req, qp_ctx->qp->req_type);
-       if (unlikely(head_size < 0)) {
-               dev_err_ratelimited(dev, "failed to get comp head size (%d)!\n",
-                                   head_size);
-               return head_size;
-       }
+       int ret;
 
-       req = hisi_zip_create_req(acomp_req, qp_ctx, head_size, false);
+       req = hisi_zip_create_req(qp_ctx, acomp_req);
        if (IS_ERR(req))
                return PTR_ERR(req);
 
-       ret = hisi_zip_do_work(req, qp_ctx);
+       ret = hisi_zip_do_work(qp_ctx, req);
        if (unlikely(ret != -EINPROGRESS)) {
                dev_info_ratelimited(dev, "failed to do decompress (%d)!\n",
                                     ret);
@@ -527,28 +381,15 @@ static void hisi_zip_release_qp(struct hisi_zip_qp_ctx *qp_ctx)
        hisi_qm_free_qps(&qp_ctx->qp, 1);
 }
 
-static const struct hisi_zip_sqe_ops hisi_zip_ops_v1 = {
-       .sqe_type               = 0,
-       .fill_addr              = hisi_zip_fill_addr,
-       .fill_buf_size          = hisi_zip_fill_buf_size,
-       .fill_buf_type          = hisi_zip_fill_buf_type,
-       .fill_req_type          = hisi_zip_fill_req_type,
-       .fill_tag               = hisi_zip_fill_tag_v1,
-       .fill_sqe_type          = hisi_zip_fill_sqe_type,
-       .get_tag                = hisi_zip_get_tag_v1,
-       .get_status             = hisi_zip_get_status,
-       .get_dstlen             = hisi_zip_get_dstlen,
-};
-
-static const struct hisi_zip_sqe_ops hisi_zip_ops_v2 = {
+static const struct hisi_zip_sqe_ops hisi_zip_ops = {
        .sqe_type               = 0x3,
        .fill_addr              = hisi_zip_fill_addr,
        .fill_buf_size          = hisi_zip_fill_buf_size,
        .fill_buf_type          = hisi_zip_fill_buf_type,
        .fill_req_type          = hisi_zip_fill_req_type,
-       .fill_tag               = hisi_zip_fill_tag_v2,
+       .fill_tag               = hisi_zip_fill_tag,
        .fill_sqe_type          = hisi_zip_fill_sqe_type,
-       .get_tag                = hisi_zip_get_tag_v2,
+       .get_tag                = hisi_zip_get_tag,
        .get_status             = hisi_zip_get_status,
        .get_dstlen             = hisi_zip_get_dstlen,
 };
@@ -584,10 +425,7 @@ static int hisi_zip_ctx_init(struct hisi_zip_ctx *hisi_zip_ctx, u8 req_type, int
                qp_ctx->zip_dev = hisi_zip;
        }
 
-       if (hisi_zip->qm.ver < QM_HW_V3)
-               hisi_zip_ctx->ops = &hisi_zip_ops_v1;
-       else
-               hisi_zip_ctx->ops = &hisi_zip_ops_v2;
+       hisi_zip_ctx->ops = &hisi_zip_ops;
 
        return 0;
 }
@@ -745,95 +583,67 @@ static void hisi_zip_acomp_exit(struct crypto_acomp *tfm)
        hisi_zip_ctx_exit(ctx);
 }
 
-static struct acomp_alg hisi_zip_acomp_zlib = {
-       .init                   = hisi_zip_acomp_init,
-       .exit                   = hisi_zip_acomp_exit,
-       .compress               = hisi_zip_acompress,
-       .decompress             = hisi_zip_adecompress,
-       .base                   = {
-               .cra_name               = "zlib-deflate",
-               .cra_driver_name        = "hisi-zlib-acomp",
-               .cra_module             = THIS_MODULE,
-               .cra_priority           = HZIP_ALG_PRIORITY,
-               .cra_ctxsize            = sizeof(struct hisi_zip_ctx),
-       }
-};
-
-static int hisi_zip_register_zlib(struct hisi_qm *qm)
-{
-       int ret;
-
-       if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB))
-               return 0;
-
-       ret = crypto_register_acomp(&hisi_zip_acomp_zlib);
-       if (ret)
-               dev_err(&qm->pdev->dev, "failed to register to zlib (%d)!\n", ret);
-
-       return ret;
-}
-
-static void hisi_zip_unregister_zlib(struct hisi_qm *qm)
-{
-       if (!hisi_zip_alg_support(qm, HZIP_ALG_ZLIB))
-               return;
-
-       crypto_unregister_acomp(&hisi_zip_acomp_zlib);
-}
-
-static struct acomp_alg hisi_zip_acomp_gzip = {
+static struct acomp_alg hisi_zip_acomp_deflate = {
        .init                   = hisi_zip_acomp_init,
        .exit                   = hisi_zip_acomp_exit,
        .compress               = hisi_zip_acompress,
        .decompress             = hisi_zip_adecompress,
        .base                   = {
-               .cra_name               = "gzip",
-               .cra_driver_name        = "hisi-gzip-acomp",
+               .cra_name               = "deflate",
+               .cra_driver_name        = "hisi-deflate-acomp",
                .cra_module             = THIS_MODULE,
-               .cra_priority           = HZIP_ALG_PRIORITY,
+               .cra_priority           = HZIP_ALG_PRIORITY,
                .cra_ctxsize            = sizeof(struct hisi_zip_ctx),
        }
 };
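/*
 * Editorial sketch (assumed generic acomp API usage, not from this
 * patch): once registered, "hisi-deflate-acomp" is reached through the
 * common "deflate" algorithm name:
 *
 *      struct crypto_acomp *tfm = crypto_alloc_acomp("deflate", 0, 0);
 *      struct acomp_req *req = acomp_request_alloc(tfm);
 *
 *      acomp_request_set_params(req, src_sgl, dst_sgl, slen, dlen);
 *      ret = crypto_acomp_compress(req);  // or crypto_acomp_decompress()
 */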
 
-static int hisi_zip_register_gzip(struct hisi_qm *qm)
+static int hisi_zip_register_deflate(struct hisi_qm *qm)
 {
        int ret;
 
-       if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP))
+       if (!hisi_zip_alg_support(qm, HZIP_ALG_DEFLATE))
                return 0;
 
-       ret = crypto_register_acomp(&hisi_zip_acomp_gzip);
+       ret = crypto_register_acomp(&hisi_zip_acomp_deflate);
        if (ret)
-               dev_err(&qm->pdev->dev, "failed to register to gzip (%d)!\n", ret);
+               dev_err(&qm->pdev->dev, "failed to register to deflate (%d)!\n", ret);
 
        return ret;
 }
 
-static void hisi_zip_unregister_gzip(struct hisi_qm *qm)
+static void hisi_zip_unregister_deflate(struct hisi_qm *qm)
 {
-       if (!hisi_zip_alg_support(qm, HZIP_ALG_GZIP))
+       if (!hisi_zip_alg_support(qm, HZIP_ALG_DEFLATE))
                return;
 
-       crypto_unregister_acomp(&hisi_zip_acomp_gzip);
+       crypto_unregister_acomp(&hisi_zip_acomp_deflate);
 }
 
 int hisi_zip_register_to_crypto(struct hisi_qm *qm)
 {
        int ret = 0;
 
-       ret = hisi_zip_register_zlib(qm);
-       if (ret)
-               return ret;
+       mutex_lock(&zip_algs_lock);
+       if (zip_available_devs++)
+               goto unlock;
 
-       ret = hisi_zip_register_gzip(qm);
+       ret = hisi_zip_register_deflate(qm);
        if (ret)
-               hisi_zip_unregister_zlib(qm);
+               zip_available_devs--;
 
+unlock:
+       mutex_unlock(&zip_algs_lock);
        return ret;
 }
 
 void hisi_zip_unregister_from_crypto(struct hisi_qm *qm)
 {
-       hisi_zip_unregister_zlib(qm);
-       hisi_zip_unregister_gzip(qm);
+       mutex_lock(&zip_algs_lock);
+       if (--zip_available_devs)
+               goto unlock;
+
+       hisi_zip_unregister_deflate(qm);
+
+unlock:
+       mutex_unlock(&zip_algs_lock);
 }
index f3ce34198775d889d30e20f50df660e39edf7fde..db4c964cd64952502fbd03d3e94134b2a9dcf9f3 100644 (file)
@@ -66,6 +66,7 @@
 #define HZIP_SQE_SIZE                  128
 #define HZIP_PF_DEF_Q_NUM              64
 #define HZIP_PF_DEF_Q_BASE             0
+#define HZIP_CTX_Q_NUM_DEF             2
 
 #define HZIP_SOFT_CTRL_CNT_CLR_CE      0x301000
 #define HZIP_SOFT_CTRL_CNT_CLR_CE_BIT  BIT(0)
@@ -236,8 +237,8 @@ static struct hisi_qm_cap_info zip_basic_cap_info[] = {
        {ZIP_CLUSTER_DECOMP_NUM_CAP, 0x313C, 0, GENMASK(7, 0), 0x6, 0x6, 0x3},
        {ZIP_DECOMP_ENABLE_BITMAP, 0x3140, 16, GENMASK(15, 0), 0xFC, 0xFC, 0x1C},
        {ZIP_COMP_ENABLE_BITMAP, 0x3140, 0, GENMASK(15, 0), 0x3, 0x3, 0x3},
-       {ZIP_DRV_ALG_BITMAP, 0x3144, 0, GENMASK(31, 0), 0xF, 0xF, 0xF},
-       {ZIP_DEV_ALG_BITMAP, 0x3148, 0, GENMASK(31, 0), 0xF, 0xF, 0xFF},
+       {ZIP_DRV_ALG_BITMAP, 0x3144, 0, GENMASK(31, 0), 0x0, 0x0, 0x30},
+       {ZIP_DEV_ALG_BITMAP, 0x3148, 0, GENMASK(31, 0), 0xF, 0xF, 0x3F},
        {ZIP_CORE1_ALG_BITMAP, 0x314C, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5},
        {ZIP_CORE2_ALG_BITMAP, 0x3150, 0, GENMASK(31, 0), 0x5, 0x5, 0xD5},
        {ZIP_CORE3_ALG_BITMAP, 0x3154, 0, GENMASK(31, 0), 0xA, 0xA, 0x2A},
@@ -364,8 +365,11 @@ static u32 uacce_mode = UACCE_MODE_NOUACCE;
 module_param_cb(uacce_mode, &zip_uacce_mode_ops, &uacce_mode, 0444);
 MODULE_PARM_DESC(uacce_mode, UACCE_MODE_DESC);
 
+static bool pf_q_num_flag;
 static int pf_q_num_set(const char *val, const struct kernel_param *kp)
 {
+       pf_q_num_flag = true;
+
        return q_num_set(val, kp, PCI_DEVICE_ID_HUAWEI_ZIP_PF);
 }
 
@@ -1139,6 +1143,8 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
                qm->qp_num = pf_q_num;
                qm->debug.curr_qm_qp_num = pf_q_num;
                qm->qm_list = &zip_devices;
+               if (pf_q_num_flag)
+                       set_bit(QM_MODULE_PARAM, &qm->misc_ctl);
        } else if (qm->fun_type == QM_HW_VF && qm->ver == QM_HW_V1) {
                /*
                 * have no way to get qm configure in VM in v1 hardware,
@@ -1226,10 +1232,11 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        if (ret)
                pci_err(pdev, "failed to init debugfs (%d)!\n", ret);
 
-       ret = hisi_qm_alg_register(qm, &zip_devices);
+       hisi_qm_add_list(qm, &zip_devices);
+       ret = hisi_qm_alg_register(qm, &zip_devices, HZIP_CTX_Q_NUM_DEF);
        if (ret < 0) {
                pci_err(pdev, "failed to register driver to crypto!\n");
-               goto err_qm_stop;
+               goto err_qm_del_list;
        }
 
        if (qm->uacce) {
@@ -1251,9 +1258,10 @@ static int hisi_zip_probe(struct pci_dev *pdev, const struct pci_device_id *id)
        return 0;
 
 err_qm_alg_unregister:
-       hisi_qm_alg_unregister(qm, &zip_devices);
+       hisi_qm_alg_unregister(qm, &zip_devices, HZIP_CTX_Q_NUM_DEF);
 
-err_qm_stop:
+err_qm_del_list:
+       hisi_qm_del_list(qm, &zip_devices);
        hisi_zip_debugfs_exit(qm);
        hisi_qm_stop(qm, QM_NORMAL);
 
@@ -1273,7 +1281,8 @@ static void hisi_zip_remove(struct pci_dev *pdev)
 
        hisi_qm_pm_uninit(qm);
        hisi_qm_wait_task_finish(qm, &zip_devices);
-       hisi_qm_alg_unregister(qm, &zip_devices);
+       hisi_qm_alg_unregister(qm, &zip_devices, HZIP_CTX_Q_NUM_DEF);
+       hisi_qm_del_list(qm, &zip_devices);
 
        if (qm->fun_type == QM_HW_PF && qm->vfs_num)
                hisi_qm_sriov_disable(pdev, true);
index 45063693859c014ee9b2794808c126a2bcdea3de..d269036bdaa39f70cd255be2926eceab4bb1fbf0 100644 (file)
@@ -1043,7 +1043,7 @@ res_err:
        return err;
 }
 
-static int img_hash_remove(struct platform_device *pdev)
+static void img_hash_remove(struct platform_device *pdev)
 {
        struct img_hash_dev *hdev;
 
@@ -1061,8 +1061,6 @@ static int img_hash_remove(struct platform_device *pdev)
 
        clk_disable_unprepare(hdev->hash_clk);
        clk_disable_unprepare(hdev->sys_clk);
-
-       return 0;
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -1101,7 +1099,7 @@ static const struct dev_pm_ops img_hash_pm_ops = {
 
 static struct platform_driver img_hash_driver = {
        .probe          = img_hash_probe,
-       .remove         = img_hash_remove,
+       .remove_new     = img_hash_remove,
        .driver         = {
                .name   = "img-hash-accelerator",
                .pm     = &img_hash_pm_ops,
index 9ff02b5abc4aeb1f9867a47e761027cdea6a3e05..76da14af74b592529b137fca045c6ec4fc22c721 100644 (file)
@@ -1801,7 +1801,7 @@ err_core_clk:
        return ret;
 }
 
-static int safexcel_remove(struct platform_device *pdev)
+static void safexcel_remove(struct platform_device *pdev)
 {
        struct safexcel_crypto_priv *priv = platform_get_drvdata(pdev);
        int i;
@@ -1816,8 +1816,6 @@ static int safexcel_remove(struct platform_device *pdev)
                irq_set_affinity_hint(priv->ring[i].irq, NULL);
                destroy_workqueue(priv->ring[i].workqueue);
        }
-
-       return 0;
 }
 
 static const struct safexcel_priv_data eip97ies_mrvl_data = {
@@ -1874,7 +1872,7 @@ MODULE_DEVICE_TABLE(of, safexcel_of_match_table);
 
 static struct platform_driver  crypto_safexcel = {
        .probe          = safexcel_probe,
-       .remove         = safexcel_remove,
+       .remove_new     = safexcel_remove,
        .driver         = {
                .name   = "crypto-safexcel",
                .of_match_table = safexcel_of_match_table,
index 4a18095ae5d8082b90dc87c9338b06bd2ad818b8..f8a77bff88448de0d4767ae13965322eec12206f 100644 (file)
@@ -1563,7 +1563,7 @@ static int ixp_crypto_probe(struct platform_device *_pdev)
        return 0;
 }
 
-static int ixp_crypto_remove(struct platform_device *pdev)
+static void ixp_crypto_remove(struct platform_device *pdev)
 {
        int num = ARRAY_SIZE(ixp4xx_algos);
        int i;
@@ -1578,8 +1578,6 @@ static int ixp_crypto_remove(struct platform_device *pdev)
                        crypto_unregister_skcipher(&ixp4xx_algos[i].crypto);
        }
        release_ixp_crypto(&pdev->dev);
-
-       return 0;
 }
 static const struct of_device_id ixp4xx_crypto_of_match[] = {
        {
@@ -1590,7 +1588,7 @@ static const struct of_device_id ixp4xx_crypto_of_match[] = {
 
 static struct platform_driver ixp_crypto_driver = {
        .probe = ixp_crypto_probe,
-       .remove = ixp_crypto_remove,
+       .remove_new = ixp_crypto_remove,
        .driver = {
                .name = "ixp4xx_crypto",
                .of_match_table = ixp4xx_crypto_of_match,
index 1e2fd9a754ec0040def72bea505bb69df2c5670f..9b2d098e5eb2c4212c9c384f5d5f6e6ede8fc9e5 100644 (file)
@@ -1562,7 +1562,7 @@ static const struct of_device_id kmb_ocs_aes_of_match[] = {
        {}
 };
 
-static int kmb_ocs_aes_remove(struct platform_device *pdev)
+static void kmb_ocs_aes_remove(struct platform_device *pdev)
 {
        struct ocs_aes_dev *aes_dev;
 
@@ -1575,8 +1575,6 @@ static int kmb_ocs_aes_remove(struct platform_device *pdev)
        spin_unlock(&ocs_aes.lock);
 
        crypto_engine_exit(aes_dev->engine);
-
-       return 0;
 }
 
 static int kmb_ocs_aes_probe(struct platform_device *pdev)
@@ -1658,7 +1656,7 @@ list_del:
 /* The OCS driver is a platform device. */
 static struct platform_driver kmb_ocs_aes_driver = {
        .probe = kmb_ocs_aes_probe,
-       .remove = kmb_ocs_aes_remove,
+       .remove_new = kmb_ocs_aes_remove,
        .driver = {
                        .name = DRV_NAME,
                        .of_match_table = kmb_ocs_aes_of_match,
index fb95deed9057a2cf38af55d44452c3ddc761e80e..5e24f2d8affc6319d98c256e52b0cf4468436035 100644 (file)
@@ -964,7 +964,7 @@ list_del:
        return rc;
 }
 
-static int kmb_ocs_ecc_remove(struct platform_device *pdev)
+static void kmb_ocs_ecc_remove(struct platform_device *pdev)
 {
        struct ocs_ecc_dev *ecc_dev;
 
@@ -978,8 +978,6 @@ static int kmb_ocs_ecc_remove(struct platform_device *pdev)
        spin_unlock(&ocs_ecc.lock);
 
        crypto_engine_exit(ecc_dev->engine);
-
-       return 0;
 }
 
 /* Device tree driver match. */
@@ -993,7 +991,7 @@ static const struct of_device_id kmb_ocs_ecc_of_match[] = {
 /* The OCS driver is a platform device. */
 static struct platform_driver kmb_ocs_ecc_driver = {
        .probe = kmb_ocs_ecc_probe,
-       .remove = kmb_ocs_ecc_remove,
+       .remove_new = kmb_ocs_ecc_remove,
        .driver = {
                        .name = DRV_NAME,
                        .of_match_table = kmb_ocs_ecc_of_match,
index daba8ca05dbe42c283fcb79fd1ddb92c7a3e85a9..c2dfca73fe4ea3e3ce35bbf53b79671bc6ee1585 100644 (file)
@@ -1151,24 +1151,17 @@ static const struct of_device_id kmb_ocs_hcu_of_match[] = {
        {}
 };
 
-static int kmb_ocs_hcu_remove(struct platform_device *pdev)
+static void kmb_ocs_hcu_remove(struct platform_device *pdev)
 {
-       struct ocs_hcu_dev *hcu_dev;
-       int rc;
-
-       hcu_dev = platform_get_drvdata(pdev);
-       if (!hcu_dev)
-               return -ENODEV;
+       struct ocs_hcu_dev *hcu_dev = platform_get_drvdata(pdev);
 
        crypto_engine_unregister_ahashes(ocs_hcu_algs, ARRAY_SIZE(ocs_hcu_algs));
 
-       rc = crypto_engine_exit(hcu_dev->engine);
+       crypto_engine_exit(hcu_dev->engine);
 
        spin_lock_bh(&ocs_hcu.lock);
        list_del(&hcu_dev->list);
        spin_unlock_bh(&ocs_hcu.lock);
-
-       return rc;
 }
 
 static int kmb_ocs_hcu_probe(struct platform_device *pdev)
@@ -1249,7 +1242,7 @@ list_del:
 /* The OCS driver is a platform device. */
 static struct platform_driver kmb_ocs_hcu_driver = {
        .probe = kmb_ocs_hcu_probe,
-       .remove = kmb_ocs_hcu_remove,
+       .remove_new = kmb_ocs_hcu_remove,
        .driver = {
                        .name = DRV_NAME,
                        .of_match_table = kmb_ocs_hcu_of_match,
index dd4464b7e00b18763f02b06ea90560a447fb739d..0faedb5b2eb5a867fd446b4c5508662b63f66b57 100644 (file)
@@ -2,17 +2,24 @@
 /* Copyright(c) 2020 - 2021 Intel Corporation */
 #include <linux/iopoll.h>
 #include <adf_accel_devices.h>
+#include <adf_admin.h>
 #include <adf_cfg.h>
+#include <adf_cfg_services.h>
 #include <adf_clock.h>
 #include <adf_common_drv.h>
 #include <adf_gen4_dc.h>
 #include <adf_gen4_hw_data.h>
 #include <adf_gen4_pfvf.h>
 #include <adf_gen4_pm.h>
+#include "adf_gen4_ras.h"
 #include <adf_gen4_timer.h>
 #include "adf_4xxx_hw_data.h"
 #include "icp_qat_hw.h"
 
+#define ADF_AE_GROUP_0         GENMASK(3, 0)
+#define ADF_AE_GROUP_1         GENMASK(7, 4)
+#define ADF_AE_GROUP_2         BIT(8)
+
 enum adf_fw_objs {
        ADF_FW_SYM_OBJ,
        ADF_FW_ASYM_OBJ,
@@ -40,39 +47,45 @@ struct adf_fw_config {
 };
 
 static const struct adf_fw_config adf_fw_cy_config[] = {
-       {0xF0, ADF_FW_SYM_OBJ},
-       {0xF, ADF_FW_ASYM_OBJ},
-       {0x100, ADF_FW_ADMIN_OBJ},
+       {ADF_AE_GROUP_1, ADF_FW_SYM_OBJ},
+       {ADF_AE_GROUP_0, ADF_FW_ASYM_OBJ},
+       {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
 };
 
 static const struct adf_fw_config adf_fw_dc_config[] = {
-       {0xF0, ADF_FW_DC_OBJ},
-       {0xF, ADF_FW_DC_OBJ},
-       {0x100, ADF_FW_ADMIN_OBJ},
+       {ADF_AE_GROUP_1, ADF_FW_DC_OBJ},
+       {ADF_AE_GROUP_0, ADF_FW_DC_OBJ},
+       {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
 };
 
 static const struct adf_fw_config adf_fw_sym_config[] = {
-       {0xF0, ADF_FW_SYM_OBJ},
-       {0xF, ADF_FW_SYM_OBJ},
-       {0x100, ADF_FW_ADMIN_OBJ},
+       {ADF_AE_GROUP_1, ADF_FW_SYM_OBJ},
+       {ADF_AE_GROUP_0, ADF_FW_SYM_OBJ},
+       {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
 };
 
 static const struct adf_fw_config adf_fw_asym_config[] = {
-       {0xF0, ADF_FW_ASYM_OBJ},
-       {0xF, ADF_FW_ASYM_OBJ},
-       {0x100, ADF_FW_ADMIN_OBJ},
+       {ADF_AE_GROUP_1, ADF_FW_ASYM_OBJ},
+       {ADF_AE_GROUP_0, ADF_FW_ASYM_OBJ},
+       {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
 };
 
 static const struct adf_fw_config adf_fw_asym_dc_config[] = {
-       {0xF0, ADF_FW_ASYM_OBJ},
-       {0xF, ADF_FW_DC_OBJ},
-       {0x100, ADF_FW_ADMIN_OBJ},
+       {ADF_AE_GROUP_1, ADF_FW_ASYM_OBJ},
+       {ADF_AE_GROUP_0, ADF_FW_DC_OBJ},
+       {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
 };
 
 static const struct adf_fw_config adf_fw_sym_dc_config[] = {
-       {0xF0, ADF_FW_SYM_OBJ},
-       {0xF, ADF_FW_DC_OBJ},
-       {0x100, ADF_FW_ADMIN_OBJ},
+       {ADF_AE_GROUP_1, ADF_FW_SYM_OBJ},
+       {ADF_AE_GROUP_0, ADF_FW_DC_OBJ},
+       {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
+};
+
+static const struct adf_fw_config adf_fw_dcc_config[] = {
+       {ADF_AE_GROUP_1, ADF_FW_DC_OBJ},
+       {ADF_AE_GROUP_0, ADF_FW_SYM_OBJ},
+       {ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ},
 };
 
 static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_dc_config));
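
The switch from the 0xF/0xF0/0x100 literals to ADF_AE_GROUP_* is purely cosmetic; the named masks select the same accel-engine groups. A small userspace check, with BIT/GENMASK re-created to match the kernel's semantics, confirming the equivalence:

#include <assert.h>
#include <stdio.h>

#define BIT(n)        (1UL << (n))
#define GENMASK(h, l) ((~0UL << (l)) & (~0UL >> (8 * sizeof(unsigned long) - 1 - (h))))

#define ADF_AE_GROUP_0 GENMASK(3, 0) /* was 0xF   */
#define ADF_AE_GROUP_1 GENMASK(7, 4) /* was 0xF0  */
#define ADF_AE_GROUP_2 BIT(8)        /* was 0x100 */

int main(void)
{
        assert(ADF_AE_GROUP_0 == 0xF);
        assert(ADF_AE_GROUP_1 == 0xF0);
        assert(ADF_AE_GROUP_2 == 0x100);
        printf("AE group masks match the old literals\n");
        return 0;
}
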
@@ -80,6 +93,7 @@ static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_sym_config));
 static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_asym_config));
 static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_asym_dc_config));
 static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_sym_dc_config));
+static_assert(ARRAY_SIZE(adf_fw_cy_config) == ARRAY_SIZE(adf_fw_dcc_config));
 
 /* Worker thread to service arbiter mappings */
 static const u32 default_thrd_to_arb_map[ADF_4XXX_MAX_ACCELENGINES] = {
@@ -94,36 +108,18 @@ static const u32 thrd_to_arb_map_dc[ADF_4XXX_MAX_ACCELENGINES] = {
        0x0
 };
 
+static const u32 thrd_to_arb_map_dcc[ADF_4XXX_MAX_ACCELENGINES] = {
+       0x00000000, 0x00000000, 0x00000000, 0x00000000,
+       0x0000FFFF, 0x0000FFFF, 0x0000FFFF, 0x0000FFFF,
+       0x0
+};
+
 static struct adf_hw_device_class adf_4xxx_class = {
        .name = ADF_4XXX_DEVICE_NAME,
        .type = DEV_4XXX,
        .instances = 0,
 };
 
-enum dev_services {
-       SVC_CY = 0,
-       SVC_CY2,
-       SVC_DC,
-       SVC_SYM,
-       SVC_ASYM,
-       SVC_DC_ASYM,
-       SVC_ASYM_DC,
-       SVC_DC_SYM,
-       SVC_SYM_DC,
-};
-
-static const char *const dev_cfg_services[] = {
-       [SVC_CY] = ADF_CFG_CY,
-       [SVC_CY2] = ADF_CFG_ASYM_SYM,
-       [SVC_DC] = ADF_CFG_DC,
-       [SVC_SYM] = ADF_CFG_SYM,
-       [SVC_ASYM] = ADF_CFG_ASYM,
-       [SVC_DC_ASYM] = ADF_CFG_DC_ASYM,
-       [SVC_ASYM_DC] = ADF_CFG_ASYM_DC,
-       [SVC_DC_SYM] = ADF_CFG_DC_SYM,
-       [SVC_SYM_DC] = ADF_CFG_SYM_DC,
-};
-
 static int get_service_enabled(struct adf_accel_dev *accel_dev)
 {
        char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
@@ -137,7 +133,7 @@ static int get_service_enabled(struct adf_accel_dev *accel_dev)
                return ret;
        }
 
-       ret = match_string(dev_cfg_services, ARRAY_SIZE(dev_cfg_services),
+       ret = match_string(adf_cfg_services, ARRAY_SIZE(adf_cfg_services),
                           services);
        if (ret < 0)
                dev_err(&GET_DEV(accel_dev),
@@ -212,6 +208,7 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
 {
        struct pci_dev *pdev = accel_dev->accel_pci_dev.pci_dev;
        u32 capabilities_sym, capabilities_asym, capabilities_dc;
+       u32 capabilities_dcc;
        u32 fusectl1;
 
        /* Read accelerator capabilities mask */
@@ -284,6 +281,14 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
                return capabilities_sym | capabilities_asym;
        case SVC_DC:
                return capabilities_dc;
+       case SVC_DCC:
+               /*
+                * Sym capabilities are available for chaining operations,
+                * but sym crypto instances are not supported
+                */
+               capabilities_dcc = capabilities_dc | capabilities_sym;
+               capabilities_dcc &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC;
+               return capabilities_dcc;
        case SVC_SYM:
                return capabilities_sym;
        case SVC_ASYM:
@@ -309,6 +314,8 @@ static const u32 *adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev)
        switch (get_service_enabled(accel_dev)) {
        case SVC_DC:
                return thrd_to_arb_map_dc;
+       case SVC_DCC:
+               return thrd_to_arb_map_dcc;
        default:
                return default_thrd_to_arb_map;
        }
@@ -336,6 +343,24 @@ static u32 get_heartbeat_clock(struct adf_hw_device_data *self)
        return ADF_4XXX_KPT_COUNTER_FREQ;
 }
 
+static void adf_init_rl_data(struct adf_rl_hw_data *rl_data)
+{
+       rl_data->pciout_tb_offset = ADF_GEN4_RL_TOKEN_PCIEOUT_BUCKET_OFFSET;
+       rl_data->pciin_tb_offset = ADF_GEN4_RL_TOKEN_PCIEIN_BUCKET_OFFSET;
+       rl_data->r2l_offset = ADF_GEN4_RL_R2L_OFFSET;
+       rl_data->l2c_offset = ADF_GEN4_RL_L2C_OFFSET;
+       rl_data->c2s_offset = ADF_GEN4_RL_C2S_OFFSET;
+
+       rl_data->pcie_scale_div = ADF_4XXX_RL_PCIE_SCALE_FACTOR_DIV;
+       rl_data->pcie_scale_mul = ADF_4XXX_RL_PCIE_SCALE_FACTOR_MUL;
+       rl_data->dcpr_correction = ADF_4XXX_RL_DCPR_CORRECTION;
+       rl_data->max_tp[ADF_SVC_ASYM] = ADF_4XXX_RL_MAX_TP_ASYM;
+       rl_data->max_tp[ADF_SVC_SYM] = ADF_4XXX_RL_MAX_TP_SYM;
+       rl_data->max_tp[ADF_SVC_DC] = ADF_4XXX_RL_MAX_TP_DC;
+       rl_data->scan_interval = ADF_4XXX_RL_SCANS_PER_SEC;
+       rl_data->scale_ref = ADF_4XXX_RL_SLICE_REF;
+}
+
 static void adf_enable_error_correction(struct adf_accel_dev *accel_dev)
 {
        struct adf_bar *misc_bar = &GET_BARS(accel_dev)[ADF_4XXX_PMISC_BAR];
@@ -393,38 +418,96 @@ static u32 uof_get_num_objs(void)
        return ARRAY_SIZE(adf_fw_cy_config);
 }
 
-static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
-                               const char * const fw_objs[], int num_objs)
+static const struct adf_fw_config *get_fw_config(struct adf_accel_dev *accel_dev)
 {
-       int id;
-
        switch (get_service_enabled(accel_dev)) {
        case SVC_CY:
        case SVC_CY2:
-               id = adf_fw_cy_config[obj_num].obj;
-               break;
+               return adf_fw_cy_config;
        case SVC_DC:
-               id = adf_fw_dc_config[obj_num].obj;
-               break;
+               return adf_fw_dc_config;
+       case SVC_DCC:
+               return adf_fw_dcc_config;
        case SVC_SYM:
-               id = adf_fw_sym_config[obj_num].obj;
-               break;
+               return adf_fw_sym_config;
        case SVC_ASYM:
-               id =  adf_fw_asym_config[obj_num].obj;
-               break;
+               return adf_fw_asym_config;
        case SVC_ASYM_DC:
        case SVC_DC_ASYM:
-               id = adf_fw_asym_dc_config[obj_num].obj;
-               break;
+               return adf_fw_asym_dc_config;
        case SVC_SYM_DC:
        case SVC_DC_SYM:
-               id = adf_fw_sym_dc_config[obj_num].obj;
-               break;
+               return adf_fw_sym_dc_config;
        default:
-               id = -EINVAL;
-               break;
+               return NULL;
+       }
+}
+
+enum adf_rp_groups {
+       RP_GROUP_0 = 0,
+       RP_GROUP_1,
+       RP_GROUP_COUNT
+};
+
+static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
+{
+       enum adf_cfg_service_type rps[RP_GROUP_COUNT];
+       const struct adf_fw_config *fw_config;
+       u16 ring_to_svc_map;
+       int i, j;
+
+       fw_config = get_fw_config(accel_dev);
+       if (!fw_config)
+               return 0;
+
+       for (i = 0; i < RP_GROUP_COUNT; i++) {
+               switch (fw_config[i].ae_mask) {
+               case ADF_AE_GROUP_0:
+                       j = RP_GROUP_0;
+                       break;
+               case ADF_AE_GROUP_1:
+                       j = RP_GROUP_1;
+                       break;
+               default:
+                       return 0;
+               }
+
+               switch (fw_config[i].obj) {
+               case ADF_FW_SYM_OBJ:
+                       rps[j] = SYM;
+                       break;
+               case ADF_FW_ASYM_OBJ:
+                       rps[j] = ASYM;
+                       break;
+               case ADF_FW_DC_OBJ:
+                       rps[j] = COMP;
+                       break;
+               default:
+                       rps[j] = 0;
+                       break;
+               }
        }
 
+       ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
+                         rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
+                         rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
+                         rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT;
+
+       return ring_to_svc_map;
+}
+
+static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+                               const char * const fw_objs[], int num_objs)
+{
+       const struct adf_fw_config *fw_config;
+       int id;
+
+       fw_config = get_fw_config(accel_dev);
+       if (fw_config)
+               id = fw_config[obj_num].obj;
+       else
+               id = -EINVAL;
+
        if (id < 0 || id > num_objs)
                return NULL;
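
get_ring_to_svc_map() above derives the ring-pair-to-service map from the firmware layout rather than a fixed default: group 0 and group 1 each back two of the four ring pairs, so the two service codes are packed twice. A userspace model of that packing for the cy configuration (the 3-bit field width and the numeric service encodings are assumptions for the demo, not values confirmed by this diff):

#include <stdio.h>
#include <stdint.h>

/* Assumed: 3 bits per ring pair, shifts 0/3/6/9, SYM = 3, ASYM = 4. */
#define RP_SHIFT(i) (3 * (i))

int main(void)
{
        uint16_t rp_group0 = 4 /* ASYM */, rp_group1 = 3 /* SYM */;
        uint16_t map = rp_group0 << RP_SHIFT(0) | rp_group1 << RP_SHIFT(1) |
                       rp_group0 << RP_SHIFT(2) | rp_group1 << RP_SHIFT(3);

        for (int i = 0; i < 4; i++) /* decode each 3-bit field */
                printf("ring pair %d -> service code %u\n",
                       i, (map >> RP_SHIFT(i)) & 0x7);
        return 0;
}
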
 
@@ -447,26 +530,23 @@ static const char *uof_get_name_402xx(struct adf_accel_dev *accel_dev, u32 obj_n
 
 static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
 {
-       switch (get_service_enabled(accel_dev)) {
-       case SVC_CY:
-               return adf_fw_cy_config[obj_num].ae_mask;
-       case SVC_DC:
-               return adf_fw_dc_config[obj_num].ae_mask;
-       case SVC_CY2:
-               return adf_fw_cy_config[obj_num].ae_mask;
-       case SVC_SYM:
-               return adf_fw_sym_config[obj_num].ae_mask;
-       case SVC_ASYM:
-               return adf_fw_asym_config[obj_num].ae_mask;
-       case SVC_ASYM_DC:
-       case SVC_DC_ASYM:
-               return adf_fw_asym_dc_config[obj_num].ae_mask;
-       case SVC_SYM_DC:
-       case SVC_DC_SYM:
-               return adf_fw_sym_dc_config[obj_num].ae_mask;
-       default:
+       const struct adf_fw_config *fw_config;
+
+       fw_config = get_fw_config(accel_dev);
+       if (!fw_config)
                return 0;
-       }
+
+       return fw_config[obj_num].ae_mask;
+}
+
+static void adf_gen4_set_err_mask(struct adf_dev_err_mask *dev_err_mask)
+{
+       dev_err_mask->cppagentcmdpar_mask = ADF_4XXX_HICPPAGENTCMDPARERRLOG_MASK;
+       dev_err_mask->parerr_ath_cph_mask = ADF_4XXX_PARITYERRORMASK_ATH_CPH_MASK;
+       dev_err_mask->parerr_cpr_xlt_mask = ADF_4XXX_PARITYERRORMASK_CPR_XLT_MASK;
+       dev_err_mask->parerr_dcpr_ucs_mask = ADF_4XXX_PARITYERRORMASK_DCPR_UCS_MASK;
+       dev_err_mask->parerr_pke_mask = ADF_4XXX_PARITYERRORMASK_PKE_MASK;
+       dev_err_mask->ssmfeatren_mask = ADF_4XXX_SSMFEATREN_MASK;
 }
 
 void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
@@ -522,6 +602,7 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
        hw_data->uof_get_ae_mask = uof_get_ae_mask;
        hw_data->set_msix_rttable = set_msix_default_rttable;
        hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
+       hw_data->get_ring_to_svc_map = get_ring_to_svc_map;
        hw_data->disable_iov = adf_disable_sriov;
        hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
        hw_data->enable_pm = adf_gen4_enable_pm;
@@ -531,10 +612,14 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
        hw_data->stop_timer = adf_gen4_timer_stop;
        hw_data->get_hb_clock = get_heartbeat_clock;
        hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE;
+       hw_data->clock_frequency = ADF_4XXX_AE_FREQ;
 
+       adf_gen4_set_err_mask(&hw_data->dev_err_mask);
        adf_gen4_init_hw_csr_ops(&hw_data->csr_ops);
        adf_gen4_init_pf_pfvf_ops(&hw_data->pfvf_ops);
        adf_gen4_init_dc_ops(&hw_data->dc_ops);
+       adf_gen4_init_ras_ops(&hw_data->ras_ops);
+       adf_init_rl_data(&hw_data->rl_data);
 }
 
 void adf_clean_hw_data_4xxx(struct adf_hw_device_data *hw_data)
index bb3d95a8fb2129db35b07964825f90a562c6e0d1..33423295e90fbfaca4314bc2614c11887500a1a8 100644 (file)
 #define ADF_4XXX_ACCELENGINES_MASK     (0x1FF)
 #define ADF_4XXX_ADMIN_AE_MASK         (0x100)
 
+#define ADF_4XXX_HICPPAGENTCMDPARERRLOG_MASK   0x1F
+#define ADF_4XXX_PARITYERRORMASK_ATH_CPH_MASK  0xF000F
+#define ADF_4XXX_PARITYERRORMASK_CPR_XLT_MASK  0x10001
+#define ADF_4XXX_PARITYERRORMASK_DCPR_UCS_MASK 0x30007
+#define ADF_4XXX_PARITYERRORMASK_PKE_MASK      0x3F
+
+/*
+ * SSMFEATREN bit mask
+ * BIT(4) - enables parity detection on CPP
+ * BIT(12) - enables the logging of push/pull data errors
+ *          in the pperr register
+ * BIT(16) to BIT(23) - enable parity detection on SPPs
+ */
+#define ADF_4XXX_SSMFEATREN_MASK \
+       (BIT(4) | BIT(12) | BIT(16) | BIT(17) | BIT(18) | \
+        BIT(19) | BIT(20) | BIT(21) | BIT(22) | BIT(23))
+
 #define ADF_4XXX_ETR_MAX_BANKS         64
 
 /* MSIX interrupt */
 #define ADF_402XX_ASYM_OBJ     "qat_402xx_asym.bin"
 #define ADF_402XX_ADMIN_OBJ    "qat_402xx_admin.bin"
 
+/* RL constants */
+#define ADF_4XXX_RL_PCIE_SCALE_FACTOR_DIV      100
+#define ADF_4XXX_RL_PCIE_SCALE_FACTOR_MUL      102
+#define ADF_4XXX_RL_DCPR_CORRECTION            1
+#define ADF_4XXX_RL_SCANS_PER_SEC              954
+#define ADF_4XXX_RL_MAX_TP_ASYM                        173750UL
+#define ADF_4XXX_RL_MAX_TP_SYM                 95000UL
+#define ADF_4XXX_RL_MAX_TP_DC                  45000UL
+#define ADF_4XXX_RL_SLICE_REF                  1000UL
+
 /* Clocks frequency */
-#define ADF_4XXX_KPT_COUNTER_FREQ (100 * HZ_PER_MHZ)
+#define ADF_4XXX_KPT_COUNTER_FREQ      (100 * HZ_PER_MHZ)
+#define ADF_4XXX_AE_FREQ               (1000 * HZ_PER_MHZ)
 
 /* qat_4xxx fuse bits are different from old GENs, redefine them */
 enum icp_qat_4xxx_slice_mask {
index 6d4e2e139ffa24575198ec564fe6671aa8ab907d..8f483d1197dda290528c67a779e9fe17282cd8b4 100644 (file)
@@ -11,6 +11,7 @@
 #include <adf_heartbeat.h>
 
 #include "adf_4xxx_hw_data.h"
+#include "adf_cfg_services.h"
 #include "qat_compression.h"
 #include "qat_crypto.h"
 #include "adf_transport_access_macros.h"
@@ -23,30 +24,6 @@ static const struct pci_device_id adf_pci_tbl[] = {
 };
 MODULE_DEVICE_TABLE(pci, adf_pci_tbl);
 
-enum configs {
-       DEV_CFG_CY = 0,
-       DEV_CFG_DC,
-       DEV_CFG_SYM,
-       DEV_CFG_ASYM,
-       DEV_CFG_ASYM_SYM,
-       DEV_CFG_ASYM_DC,
-       DEV_CFG_DC_ASYM,
-       DEV_CFG_SYM_DC,
-       DEV_CFG_DC_SYM,
-};
-
-static const char * const services_operations[] = {
-       ADF_CFG_CY,
-       ADF_CFG_DC,
-       ADF_CFG_SYM,
-       ADF_CFG_ASYM,
-       ADF_CFG_ASYM_SYM,
-       ADF_CFG_ASYM_DC,
-       ADF_CFG_DC_ASYM,
-       ADF_CFG_SYM_DC,
-       ADF_CFG_DC_SYM,
-};
-
 static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
 {
        if (accel_dev->hw_device) {
@@ -292,16 +269,17 @@ int adf_gen4_dev_config(struct adf_accel_dev *accel_dev)
        if (ret)
                goto err;
 
-       ret = sysfs_match_string(services_operations, services);
+       ret = sysfs_match_string(adf_cfg_services, services);
        if (ret < 0)
                goto err;
 
        switch (ret) {
-       case DEV_CFG_CY:
-       case DEV_CFG_ASYM_SYM:
+       case SVC_CY:
+       case SVC_CY2:
                ret = adf_crypto_dev_config(accel_dev);
                break;
-       case DEV_CFG_DC:
+       case SVC_DC:
+       case SVC_DCC:
                ret = adf_comp_dev_config(accel_dev);
                break;
        default:
@@ -440,6 +418,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
                goto out_err;
        }
 
+       accel_dev->ras_errors.enabled = true;
        adf_dbgfs_init(accel_dev);
 
        ret = adf_dev_up(accel_dev, true);
@@ -489,3 +468,4 @@ MODULE_FIRMWARE(ADF_4XXX_MMP);
 MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
 MODULE_VERSION(ADF_DRV_VERSION);
 MODULE_SOFTDEP("pre: crypto-intel_qat");
+MODULE_IMPORT_NS(CRYPTO_QAT);
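
MODULE_IMPORT_NS(CRYPTO_QAT) pairs with the ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CRYPTO_QAT addition to the qat_common Makefile further down: every EXPORT_SYMBOL*() in qat_common now lands in the CRYPTO_QAT namespace, and each dependent module must import it explicitly or modpost warns at build time and the symbols fail to resolve at load time. A sketch of the two sides (the helper name is hypothetical):

#include <linux/export.h>
#include <linux/module.h>

/* In qat_common (built with -DDEFAULT_SYMBOL_NAMESPACE=CRYPTO_QAT): */
int qat_example_helper(void)
{
        return 0;
}
EXPORT_SYMBOL_GPL(qat_example_helper); /* exported into CRYPTO_QAT */

/* In a driver module that calls qat_example_helper(): */
MODULE_IMPORT_NS(CRYPTO_QAT); /* required to link against the namespace */
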
index 9c00c441b602d2d2a5b22e73be20dac1eca552fc..a882e0ea2279629dc19d55d444340202afd3aa17 100644 (file)
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
 /* Copyright(c) 2014 - 2021 Intel Corporation */
 #include <adf_accel_devices.h>
+#include <adf_admin.h>
 #include <adf_clock.h>
 #include <adf_common_drv.h>
 #include <adf_gen2_config.h>
index 468c9102093fce93303fe2cfeb3ed27df10a8598..956a4c85609a9504b8e73f23eed3c3f7add81b7a 100644 (file)
@@ -252,3 +252,4 @@ MODULE_FIRMWARE(ADF_C3XXX_FW);
 MODULE_FIRMWARE(ADF_C3XXX_MMP);
 MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
 MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
index d5a0ecca9d0bba4929863448d479d52258d00a8d..a8de9cd09c05a2608ce2c1d918ba3e9edbbcfb12 100644 (file)
@@ -226,3 +226,4 @@ MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("Intel");
 MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
 MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
index 355a781693eb3fbd7af8ef01ae432168a0459d64..48cf3eb7c73499f01dd56de7192b9586a199d67b 100644 (file)
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
 /* Copyright(c) 2014 - 2021 Intel Corporation */
 #include <adf_accel_devices.h>
+#include <adf_admin.h>
 #include <adf_clock.h>
 #include <adf_common_drv.h>
 #include <adf_gen2_config.h>
index 0186921be93689d041b0e0b7a88ef4437ae49907..ad0ca4384998524db6a4b1a89f3a3c94fb8b522b 100644 (file)
@@ -252,3 +252,4 @@ MODULE_FIRMWARE(ADF_C62X_FW);
 MODULE_FIRMWARE(ADF_C62X_MMP);
 MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
 MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
index c9ae6c0d0dca2ec39b872de9d0e8e443c8605b3a..53b8ddb63364197278c945e257b473a81ff4913f 100644 (file)
@@ -226,3 +226,4 @@ MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("Intel");
 MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
 MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
index 43622c7fca712c0266a845b75a1d1e9471b02d30..779a8aa0b8d2035f980ee92848dcef3ad0a648e8 100644 (file)
@@ -1,8 +1,10 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_CRYPTO_DEV_QAT) += intel_qat.o
+ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CRYPTO_QAT
 intel_qat-objs := adf_cfg.o \
        adf_isr.o \
        adf_ctl_drv.o \
+       adf_cfg_services.o \
        adf_dev_mgr.o \
        adf_init.o \
        adf_accel_engine.o \
@@ -11,12 +13,14 @@ intel_qat-objs := adf_cfg.o \
        adf_admin.o \
        adf_hw_arbiter.o \
        adf_sysfs.o \
+       adf_sysfs_ras_counters.o \
        adf_gen2_hw_data.o \
        adf_gen2_config.o \
        adf_gen4_hw_data.o \
        adf_gen4_pm.o \
        adf_gen2_dc.o \
        adf_gen4_dc.o \
+       adf_gen4_ras.o \
        adf_gen4_timer.o \
        adf_clock.o \
        qat_crypto.o \
@@ -25,14 +29,20 @@ intel_qat-objs := adf_cfg.o \
        qat_algs.o \
        qat_asym_algs.o \
        qat_algs_send.o \
+       adf_rl.o \
+       adf_rl_admin.o \
+       adf_sysfs_rl.o \
        qat_uclo.o \
        qat_hal.o \
        qat_bl.o
 
 intel_qat-$(CONFIG_DEBUG_FS) += adf_transport_debug.o \
                                adf_fw_counters.o \
+                               adf_cnv_dbgfs.o \
+                               adf_gen4_pm_debugfs.o \
                                adf_heartbeat.o \
                                adf_heartbeat_dbgfs.o \
+                               adf_pm_dbgfs.o \
                                adf_dbgfs.o
 
 intel_qat-$(CONFIG_PCI_IOV) += adf_sriov.o adf_vf_isr.o adf_pfvf_utils.o \
index e57abde66f4fb34568b3012e145489cd9d6e2ccc..4ff5729a34969bd5e5e7aaed6cdb23782f87d0cc 100644 (file)
@@ -7,7 +7,9 @@
 #include <linux/list.h>
 #include <linux/io.h>
 #include <linux/ratelimit.h>
+#include <linux/types.h>
 #include "adf_cfg_common.h"
+#include "adf_rl.h"
 #include "adf_pfvf_msg.h"
 
 #define ADF_DH895XCC_DEVICE_NAME "dh895xcc"
@@ -29,7 +31,7 @@
 #define ADF_PCI_MAX_BARS 3
 #define ADF_DEVICE_NAME_LENGTH 32
 #define ADF_ETR_MAX_RINGS_PER_BANK 16
-#define ADF_MAX_MSIX_VECTOR_NAME 16
+#define ADF_MAX_MSIX_VECTOR_NAME 48
 #define ADF_DEVICE_NAME_PREFIX "qat_"
 
 enum adf_accel_capabilities {
@@ -81,6 +83,18 @@ enum dev_sku_info {
        DEV_SKU_UNKNOWN,
 };
 
+enum ras_errors {
+       ADF_RAS_CORR,
+       ADF_RAS_UNCORR,
+       ADF_RAS_FATAL,
+       ADF_RAS_ERRORS,
+};
+
+struct adf_error_counters {
+       atomic_t counter[ADF_RAS_ERRORS];
+       bool enabled;
+};
+
 static inline const char *get_sku_info(enum dev_sku_info info)
 {
        switch (info) {
@@ -152,6 +166,13 @@ struct adf_accel_dev;
 struct adf_etr_data;
 struct adf_etr_ring_data;
 
+struct adf_ras_ops {
+       void (*enable_ras_errors)(struct adf_accel_dev *accel_dev);
+       void (*disable_ras_errors)(struct adf_accel_dev *accel_dev);
+       bool (*handle_interrupt)(struct adf_accel_dev *accel_dev,
+                                bool *reset_required);
+};
+
 struct adf_pfvf_ops {
        int (*enable_comms)(struct adf_accel_dev *accel_dev);
        u32 (*get_pf2vf_offset)(u32 i);
@@ -169,6 +190,16 @@ struct adf_dc_ops {
        void (*build_deflate_ctx)(void *ctx);
 };
 
+struct adf_dev_err_mask {
+       u32 cppagentcmdpar_mask;
+       u32 parerr_ath_cph_mask;
+       u32 parerr_cpr_xlt_mask;
+       u32 parerr_dcpr_ucs_mask;
+       u32 parerr_pke_mask;
+       u32 parerr_wat_wcp_mask;
+       u32 ssmfeatren_mask;
+};
+
 struct adf_hw_device_data {
        struct adf_hw_device_class *dev_class;
        u32 (*get_accel_mask)(struct adf_hw_device_data *self);
@@ -182,6 +213,7 @@ struct adf_hw_device_data {
        void (*get_arb_info)(struct arb_info *arb_csrs_info);
        void (*get_admin_info)(struct admin_info *admin_csrs_info);
        enum dev_sku_info (*get_sku)(struct adf_hw_device_data *self);
+       u16 (*get_ring_to_svc_map)(struct adf_accel_dev *accel_dev);
        int (*alloc_irq)(struct adf_accel_dev *accel_dev);
        void (*free_irq)(struct adf_accel_dev *accel_dev);
        void (*enable_error_correction)(struct adf_accel_dev *accel_dev);
@@ -214,12 +246,16 @@ struct adf_hw_device_data {
        struct adf_pfvf_ops pfvf_ops;
        struct adf_hw_csr_ops csr_ops;
        struct adf_dc_ops dc_ops;
+       struct adf_ras_ops ras_ops;
+       struct adf_dev_err_mask dev_err_mask;
+       struct adf_rl_hw_data rl_data;
        const char *fw_name;
        const char *fw_mmp_name;
        u32 fuses;
        u32 straps;
        u32 accel_capabilities_mask;
        u32 extended_dc_capabilities;
+       u16 fw_capabilities;
        u32 clock_frequency;
        u32 instance_id;
        u16 accel_mask;
@@ -262,6 +298,7 @@ struct adf_hw_device_data {
 #define GET_SRV_TYPE(accel_dev, idx) \
        (((GET_HW_DATA(accel_dev)->ring_to_svc_map) >> (ADF_SRV_TYPE_BIT_LEN * (idx))) \
        & ADF_SRV_TYPE_MASK)
+#define GET_ERR_MASK(accel_dev) (&GET_HW_DATA(accel_dev)->dev_err_mask)
 #define GET_MAX_ACCELENGINES(accel_dev) (GET_HW_DATA(accel_dev)->num_engines)
 #define GET_CSR_OPS(accel_dev) (&(accel_dev)->hw_device->csr_ops)
 #define GET_PFVF_OPS(accel_dev) (&(accel_dev)->hw_device->pfvf_ops)
@@ -291,6 +328,23 @@ struct adf_dc_data {
        dma_addr_t ovf_buff_p;
 };
 
+struct adf_pm {
+       struct dentry *debugfs_pm_status;
+       bool present;
+       int idle_irq_counters;
+       int throttle_irq_counters;
+       int fw_irq_counters;
+       int host_ack_counter;
+       int host_nack_counter;
+       ssize_t (*print_pm_status)(struct adf_accel_dev *accel_dev,
+                                  char __user *buf, size_t count, loff_t *pos);
+};
+
+struct adf_sysfs {
+       int ring_num;
+       struct rw_semaphore lock; /* protects access to the fields in this struct */
+};
+
 struct adf_accel_dev {
        struct adf_etr_data *transport;
        struct adf_hw_device_data *hw_device;
@@ -298,17 +352,21 @@ struct adf_accel_dev {
        struct adf_fw_loader_data *fw_loader;
        struct adf_admin_comms *admin;
        struct adf_dc_data *dc_data;
+       struct adf_pm power_management;
        struct list_head crypto_list;
        struct list_head compression_list;
        unsigned long status;
        atomic_t ref_count;
        struct dentry *debugfs_dir;
        struct dentry *fw_cntr_dbgfile;
+       struct dentry *cnv_dbgfile;
        struct list_head list;
        struct module *owner;
        struct adf_accel_pci accel_pci_dev;
        struct adf_timer *timer;
        struct adf_heartbeat *heartbeat;
+       struct adf_rl *rate_limiting;
+       struct adf_sysfs sysfs;
        union {
                struct {
                        /* protects VF2PF interrupts access */
@@ -326,6 +384,7 @@ struct adf_accel_dev {
                        u8 pf_compat_ver;
                } vf;
        };
+       struct adf_error_counters ras_errors;
        struct mutex state_lock; /* protect state of the device */
        bool is_vf;
        u32 accel_id;
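
struct adf_error_counters keeps one atomic counter per RAS severity, so interrupt context can bump them without taking a lock while the sysfs side (adf_sysfs_ras_counters.o, added to the qat_common Makefile earlier in this diff) simply reads them. A userspace model of the layout, using C11 atomics in place of the kernel's atomic_t:

#include <stdatomic.h>
#include <stdio.h>

enum ras_errors { ADF_RAS_CORR, ADF_RAS_UNCORR, ADF_RAS_FATAL, ADF_RAS_ERRORS };

struct adf_error_counters {
        atomic_int counter[ADF_RAS_ERRORS];
        _Bool enabled;
};

int main(void)
{
        struct adf_error_counters c = { .enabled = 1 };

        atomic_fetch_add(&c.counter[ADF_RAS_CORR], 1); /* one correctable error */
        printf("corr=%d uncorr=%d fatal=%d\n",
               atomic_load(&c.counter[ADF_RAS_CORR]),
               atomic_load(&c.counter[ADF_RAS_UNCORR]),
               atomic_load(&c.counter[ADF_RAS_FATAL]));
        return 0;
}
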
index ff790823b86861c5102ac7b011d32686d04ccd36..54b673ec23622359617a78b4c60fcbe8d494bbde 100644 (file)
@@ -7,7 +7,9 @@
 #include <linux/pci.h>
 #include <linux/dma-mapping.h>
 #include "adf_accel_devices.h"
+#include "adf_admin.h"
 #include "adf_common_drv.h"
+#include "adf_cfg.h"
 #include "adf_heartbeat.h"
 #include "icp_qat_fw_init_admin.h"
 
@@ -212,6 +214,17 @@ int adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp)
        return 0;
 }
 
+static int adf_set_chaining(struct adf_accel_dev *accel_dev)
+{
+       u32 ae_mask = GET_HW_DATA(accel_dev)->ae_mask;
+       struct icp_qat_fw_init_admin_resp resp = { };
+       struct icp_qat_fw_init_admin_req req = { };
+
+       req.cmd_id = ICP_QAT_FW_DC_CHAIN_INIT;
+
+       return adf_send_admin(accel_dev, &req, &resp, ae_mask);
+}
+
 static int adf_get_dc_capabilities(struct adf_accel_dev *accel_dev,
                                   u32 *capabilities)
 {
@@ -284,6 +297,86 @@ int adf_send_admin_hb_timer(struct adf_accel_dev *accel_dev, uint32_t ticks)
        return adf_send_admin(accel_dev, &req, &resp, ae_mask);
 }
 
+static bool is_dcc_enabled(struct adf_accel_dev *accel_dev)
+{
+       char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
+       int ret;
+
+       ret = adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
+                                     ADF_SERVICES_ENABLED, services);
+       if (ret)
+               return false;
+
+       return !strcmp(services, "dcc");
+}
+
+static int adf_get_fw_capabilities(struct adf_accel_dev *accel_dev, u16 *caps)
+{
+       u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+       struct icp_qat_fw_init_admin_resp resp = { };
+       struct icp_qat_fw_init_admin_req req = { };
+       int ret;
+
+       if (!ae_mask)
+               return 0;
+
+       req.cmd_id = ICP_QAT_FW_CAPABILITIES_GET;
+       ret = adf_send_admin(accel_dev, &req, &resp, ae_mask);
+       if (ret)
+               return ret;
+
+       *caps = resp.fw_capabilities;
+
+       return 0;
+}
+
+int adf_send_admin_rl_init(struct adf_accel_dev *accel_dev,
+                          struct icp_qat_fw_init_admin_slice_cnt *slices)
+{
+       u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+       struct icp_qat_fw_init_admin_resp resp = { };
+       struct icp_qat_fw_init_admin_req req = { };
+       int ret;
+
+       req.cmd_id = ICP_QAT_FW_RL_INIT;
+
+       ret = adf_send_admin(accel_dev, &req, &resp, ae_mask);
+       if (ret)
+               return ret;
+
+       memcpy(slices, &resp.slices, sizeof(*slices));
+
+       return 0;
+}
+
+int adf_send_admin_rl_add_update(struct adf_accel_dev *accel_dev,
+                                struct icp_qat_fw_init_admin_req *req)
+{
+       u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+       struct icp_qat_fw_init_admin_resp resp = { };
+
+       /*
+        * The req struct is filled by the rate limiting (rl) implementation.
+        * Commands used:
+        * ICP_QAT_FW_RL_ADD to add a new SLA
+        * ICP_QAT_FW_RL_UPDATE to update an existing SLA
+        */
+       return adf_send_admin(accel_dev, req, &resp, ae_mask);
+}
+
+int adf_send_admin_rl_delete(struct adf_accel_dev *accel_dev, u16 node_id,
+                            u8 node_type)
+{
+       u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
+       struct icp_qat_fw_init_admin_resp resp = { };
+       struct icp_qat_fw_init_admin_req req = { };
+
+       req.cmd_id = ICP_QAT_FW_RL_REMOVE;
+       req.node_id = node_id;
+       req.node_type = node_type;
+
+       return adf_send_admin(accel_dev, &req, &resp, ae_mask);
+}
+
 /**
  * adf_send_admin_init() - Function sends init message to FW
  * @accel_dev: Pointer to acceleration device.
@@ -294,9 +387,20 @@ int adf_send_admin_hb_timer(struct adf_accel_dev *accel_dev, uint32_t ticks)
  */
 int adf_send_admin_init(struct adf_accel_dev *accel_dev)
 {
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
        u32 dc_capabilities = 0;
        int ret;
 
+       ret = adf_set_fw_constants(accel_dev);
+       if (ret)
+               return ret;
+
+       if (is_dcc_enabled(accel_dev)) {
+               ret = adf_set_chaining(accel_dev);
+               if (ret)
+                       return ret;
+       }
+
        ret = adf_get_dc_capabilities(accel_dev, &dc_capabilities);
        if (ret) {
                dev_err(&GET_DEV(accel_dev), "Cannot get dc capabilities\n");
@@ -304,9 +408,7 @@ int adf_send_admin_init(struct adf_accel_dev *accel_dev)
        }
        accel_dev->hw_device->extended_dc_capabilities = dc_capabilities;
 
-       ret = adf_set_fw_constants(accel_dev);
-       if (ret)
-               return ret;
+       adf_get_fw_capabilities(accel_dev, &hw_data->fw_capabilities);
 
        return adf_init_ae(accel_dev);
 }
@@ -348,6 +450,54 @@ int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay)
        return adf_send_admin(accel_dev, &req, &resp, ae_mask);
 }
 
+int adf_get_pm_info(struct adf_accel_dev *accel_dev, dma_addr_t p_state_addr,
+                   size_t buff_size)
+{
+       struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+       struct icp_qat_fw_init_admin_req req = { };
+       struct icp_qat_fw_init_admin_resp resp;
+       u32 ae_mask = hw_data->admin_ae_mask;
+       int ret;
+
+       /* Query pm info via init/admin cmd */
+       if (!accel_dev->admin) {
+               dev_err(&GET_DEV(accel_dev), "adf_admin is not available\n");
+               return -EFAULT;
+       }
+
+       req.cmd_id = ICP_QAT_FW_PM_INFO;
+       req.init_cfg_sz = buff_size;
+       req.init_cfg_ptr = p_state_addr;
+
+       ret = adf_send_admin(accel_dev, &req, &resp, ae_mask);
+       if (ret)
+               dev_err(&GET_DEV(accel_dev),
+                       "Failed to query power-management info\n");
+
+       return ret;
+}
+
+int adf_get_cnv_stats(struct adf_accel_dev *accel_dev, u16 ae, u16 *err_cnt,
+                     u16 *latest_err)
+{
+       struct icp_qat_fw_init_admin_req req = { };
+       struct icp_qat_fw_init_admin_resp resp;
+       int ret;
+
+       req.cmd_id = ICP_QAT_FW_CNV_STATS_GET;
+
+       ret = adf_put_admin_msg_sync(accel_dev, ae, &req, &resp);
+       if (ret)
+               return ret;
+       if (resp.status)
+               return -EPROTONOSUPPORT;
+
+       *err_cnt = resp.error_count;
+       *latest_err = resp.latest_error;
+
+       return ret;
+}
+
 int adf_init_admin_comms(struct adf_accel_dev *accel_dev)
 {
        struct adf_admin_comms *admin;
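
All of the new admin helpers above share one shape: zero-initialize a request and a response, set cmd_id (plus any payload fields), and hand both to adf_send_admin() with the admin AE mask, which fails if any targeted AE reports an error. Distilled, with a hypothetical command id:

static int adf_send_admin_example(struct adf_accel_dev *accel_dev)
{
        u32 ae_mask = accel_dev->hw_device->admin_ae_mask;
        struct icp_qat_fw_init_admin_resp resp = { };
        struct icp_qat_fw_init_admin_req req = { };

        req.cmd_id = ICP_QAT_FW_EXAMPLE_CMD; /* hypothetical command id */

        return adf_send_admin(accel_dev, &req, &resp, ae_mask);
}
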
diff --git a/drivers/crypto/intel/qat/qat_common/adf_admin.h b/drivers/crypto/intel/qat/qat_common/adf_admin.h
new file mode 100644 (file)
index 0000000..55cbcbc
--- /dev/null
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_ADMIN
+#define ADF_ADMIN
+
+#include "icp_qat_fw_init_admin.h"
+
+struct adf_accel_dev;
+
+int adf_init_admin_comms(struct adf_accel_dev *accel_dev);
+void adf_exit_admin_comms(struct adf_accel_dev *accel_dev);
+int adf_send_admin_init(struct adf_accel_dev *accel_dev);
+int adf_get_ae_fw_counters(struct adf_accel_dev *accel_dev, u16 ae, u64 *reqs, u64 *resps);
+int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay);
+int adf_send_admin_tim_sync(struct adf_accel_dev *accel_dev, u32 cnt);
+int adf_send_admin_hb_timer(struct adf_accel_dev *accel_dev, uint32_t ticks);
+int adf_send_admin_rl_init(struct adf_accel_dev *accel_dev,
+                          struct icp_qat_fw_init_admin_slice_cnt *slices);
+int adf_send_admin_rl_add_update(struct adf_accel_dev *accel_dev,
+                                struct icp_qat_fw_init_admin_req *req);
+int adf_send_admin_rl_delete(struct adf_accel_dev *accel_dev, u16 node_id,
+                            u8 node_type);
+int adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp);
+int adf_get_pm_info(struct adf_accel_dev *accel_dev, dma_addr_t p_state_addr, size_t buff_size);
+int adf_get_cnv_stats(struct adf_accel_dev *accel_dev, u16 ae, u16 *err_cnt, u16 *latest_err);
+
+#endif
index 04af32a2811c8f77b438a375fac184f63de48ed1..a39e70bd4b21bbc4ecd9180e194f8cf335ab167b 100644 (file)
@@ -92,7 +92,8 @@ static void adf_device_reset_worker(struct work_struct *work)
        if (adf_dev_restart(accel_dev)) {
                /* The device hung and we can't restart it, so stop here */
                dev_err(&GET_DEV(accel_dev), "Restart device failed\n");
-               kfree(reset_data);
+               if (reset_data->mode == ADF_DEV_RESET_ASYNC)
+                       kfree(reset_data);
                WARN(1, "QAT: device restart failed. Device is unusable\n");
                return;
        }
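
The kfree() guard above fixes a double free: in a synchronous reset the caller sleeps on a completion and frees reset_data itself once woken, so the worker may free it only in the asynchronous case, where nobody is waiting. The ownership rule, sketched (the completion field name is an assumption here; only `mode` appears in this diff):

/* End of the reset worker, sketched: */
if (reset_data->mode == ADF_DEV_RESET_ASYNC)
        kfree(reset_data);            /* async: the worker owns the memory */
else
        complete(&reset_data->compl); /* sync: wake the waiter, which frees it */
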
diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.c
new file mode 100644 (file)
index 0000000..8e13fe9
--- /dev/null
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/export.h>
+#include "adf_cfg_services.h"
+#include "adf_cfg_strings.h"
+
+const char *const adf_cfg_services[] = {
+       [SVC_CY] = ADF_CFG_CY,
+       [SVC_CY2] = ADF_CFG_ASYM_SYM,
+       [SVC_DC] = ADF_CFG_DC,
+       [SVC_DCC] = ADF_CFG_DCC,
+       [SVC_SYM] = ADF_CFG_SYM,
+       [SVC_ASYM] = ADF_CFG_ASYM,
+       [SVC_DC_ASYM] = ADF_CFG_DC_ASYM,
+       [SVC_ASYM_DC] = ADF_CFG_ASYM_DC,
+       [SVC_DC_SYM] = ADF_CFG_DC_SYM,
+       [SVC_SYM_DC] = ADF_CFG_SYM_DC,
+};
+EXPORT_SYMBOL_GPL(adf_cfg_services);
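
Centralizing this table lets both the PCI driver (via sysfs_match_string()) and the hw-data code (via match_string()) resolve the ServicesEnabled string to one shared enum index. A userspace re-creation of the lookup; only the "dcc" literal is confirmed by this diff, the other strings are assumptions:

#include <stdio.h>
#include <string.h>

static const char *const adf_cfg_services[] = {
        "sym;asym", "asym;sym", "dc", "dcc", "sym",
        "asym", "dc;asym", "asym;dc", "dc;sym", "sym;dc",
};

static int match_string(const char *const *arr, size_t n, const char *s)
{
        for (size_t i = 0; i < n; i++)
                if (!strcmp(arr[i], s))
                        return (int)i;
        return -22; /* -EINVAL, as the kernel helper returns */
}

int main(void)
{
        size_t n = sizeof(adf_cfg_services) / sizeof(adf_cfg_services[0]);

        printf("\"dcc\" -> index %d (SVC_DCC)\n",
               match_string(adf_cfg_services, n, "dcc"));
        return 0;
}
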
diff --git a/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h b/drivers/crypto/intel/qat/qat_common/adf_cfg_services.h
new file mode 100644 (file)
index 0000000..f78fd69
--- /dev/null
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef _ADF_CFG_SERVICES_H_
+#define _ADF_CFG_SERVICES_H_
+
+#include "adf_cfg_strings.h"
+
+enum adf_services {
+       SVC_CY = 0,
+       SVC_CY2,
+       SVC_DC,
+       SVC_DCC,
+       SVC_SYM,
+       SVC_ASYM,
+       SVC_DC_ASYM,
+       SVC_ASYM_DC,
+       SVC_DC_SYM,
+       SVC_SYM_DC,
+       SVC_COUNT
+};
+
+extern const char *const adf_cfg_services[SVC_COUNT];
+
+#endif
index 6066dc637352cae50e943fe0583e4340d0d83f45..322b76903a737d4e0fce0371a355f6a1f17fb0b0 100644 (file)
@@ -32,6 +32,7 @@
 #define ADF_CFG_DC_ASYM "dc;asym"
 #define ADF_CFG_SYM_DC "sym;dc"
 #define ADF_CFG_DC_SYM "dc;sym"
+#define ADF_CFG_DCC "dcc"
 #define ADF_SERVICES_ENABLED "ServicesEnabled"
 #define ADF_PM_IDLE_SUPPORT "PmIdleSupport"
 #define ADF_ETRMGR_COALESCING_ENABLED "InterruptCoalescingEnabled"
index dc0778691eb0ba9b7e4767c0da18c78ea921217a..01e0a389e462b027cefb1d5d658029297b5104b4 100644 (file)
@@ -10,6 +10,7 @@
 #include <linux/types.h>
 #include <linux/units.h>
 #include <asm/errno.h>
+#include "adf_admin.h"
 #include "adf_accel_devices.h"
 #include "adf_clock.h"
 #include "adf_common_drv.h"
diff --git a/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.c
new file mode 100644 (file)
index 0000000..07119c4
--- /dev/null
@@ -0,0 +1,300 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/bitfield.h>
+#include <linux/debugfs.h>
+#include <linux/kernel.h>
+
+#include "adf_accel_devices.h"
+#include "adf_admin.h"
+#include "adf_common_drv.h"
+#include "adf_cnv_dbgfs.h"
+#include "qat_compression.h"
+
+#define CNV_DEBUGFS_FILENAME           "cnv_errors"
+#define CNV_MIN_PADDING                        16
+
+#define CNV_ERR_INFO_MASK              GENMASK(11, 0)
+#define CNV_ERR_TYPE_MASK              GENMASK(15, 12)
+#define CNV_SLICE_ERR_MASK             GENMASK(7, 0)
+#define CNV_SLICE_ERR_SIGN_BIT_INDEX   7
+#define CNV_DELTA_ERR_SIGN_BIT_INDEX   11
+
+enum cnv_error_type {
+       CNV_ERR_TYPE_NONE,
+       CNV_ERR_TYPE_CHECKSUM,
+       CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH,
+       CNV_ERR_TYPE_DECOMPRESSION,
+       CNV_ERR_TYPE_TRANSLATION,
+       CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH,
+       CNV_ERR_TYPE_UNKNOWN,
+       CNV_ERR_TYPES_COUNT
+};
+
+#define CNV_ERROR_TYPE_GET(latest_err) \
+       min_t(u16, u16_get_bits(latest_err, CNV_ERR_TYPE_MASK), CNV_ERR_TYPE_UNKNOWN)
+
+#define CNV_GET_DELTA_ERR_INFO(latest_error)   \
+       sign_extend32(latest_error, CNV_DELTA_ERR_SIGN_BIT_INDEX)
+
+#define CNV_GET_SLICE_ERR_INFO(latest_error)   \
+       sign_extend32(latest_error, CNV_SLICE_ERR_SIGN_BIT_INDEX)
+
+#define CNV_GET_DEFAULT_ERR_INFO(latest_error) \
+       u16_get_bits(latest_error, CNV_ERR_INFO_MASK)
+
+enum cnv_fields {
+       CNV_ERR_COUNT,
+       CNV_LATEST_ERR,
+       CNV_FIELDS_COUNT
+};
+
+static const char * const cnv_field_names[CNV_FIELDS_COUNT] = {
+       [CNV_ERR_COUNT] = "Total Errors",
+       [CNV_LATEST_ERR] = "Last Error",
+};
+
+static const char * const cnv_error_names[CNV_ERR_TYPES_COUNT] = {
+       [CNV_ERR_TYPE_NONE] = "No Error",
+       [CNV_ERR_TYPE_CHECKSUM] = "Checksum Error",
+       [CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH] = "Length Error-P",
+       [CNV_ERR_TYPE_DECOMPRESSION] = "Decomp Error",
+       [CNV_ERR_TYPE_TRANSLATION] = "Xlat Error",
+       [CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH] = "Length Error-C",
+       [CNV_ERR_TYPE_UNKNOWN] = "Unknown Error",
+};
+
+struct ae_cnv_errors {
+       u16 ae;
+       u16 err_cnt;
+       u16 latest_err;
+       bool is_comp_ae;
+};
+
+struct cnv_err_stats {
+       u16 ae_count;
+       struct ae_cnv_errors ae_cnv_errors[];
+};
+
+static s16 get_err_info(u8 error_type, u16 latest)
+{
+       switch (error_type) {
+       case CNV_ERR_TYPE_DECOMP_PRODUCED_LENGTH:
+       case CNV_ERR_TYPE_DECOMP_CONSUMED_LENGTH:
+               return CNV_GET_DELTA_ERR_INFO(latest);
+       case CNV_ERR_TYPE_DECOMPRESSION:
+       case CNV_ERR_TYPE_TRANSLATION:
+               return CNV_GET_SLICE_ERR_INFO(latest);
+       default:
+               return CNV_GET_DEFAULT_ERR_INFO(latest);
+       }
+}
+
+static void *qat_cnv_errors_seq_start(struct seq_file *sfile, loff_t *pos)
+{
+       struct cnv_err_stats *err_stats = sfile->private;
+
+       if (*pos == 0)
+               return SEQ_START_TOKEN;
+
+       if (*pos > err_stats->ae_count)
+               return NULL;
+
+       return &err_stats->ae_cnv_errors[*pos - 1];
+}
+
+static void *qat_cnv_errors_seq_next(struct seq_file *sfile, void *v,
+                                    loff_t *pos)
+{
+       struct cnv_err_stats *err_stats = sfile->private;
+
+       (*pos)++;
+
+       if (*pos > err_stats->ae_count)
+               return NULL;
+
+       return &err_stats->ae_cnv_errors[*pos - 1];
+}
+
+static void qat_cnv_errors_seq_stop(struct seq_file *sfile, void *v)
+{
+}
+
+static int qat_cnv_errors_seq_show(struct seq_file *sfile, void *v)
+{
+       struct ae_cnv_errors *ae_errors;
+       unsigned int i;
+       s16 err_info;
+       u8 err_type;
+
+       if (v == SEQ_START_TOKEN) {
+               seq_puts(sfile, "AE ");
+               for (i = 0; i < CNV_FIELDS_COUNT; ++i)
+                       seq_printf(sfile, " %*s", CNV_MIN_PADDING,
+                                  cnv_field_names[i]);
+       } else {
+               ae_errors = v;
+
+               if (!ae_errors->is_comp_ae)
+                       return 0;
+
+               err_type = CNV_ERROR_TYPE_GET(ae_errors->latest_err);
+               err_info = get_err_info(err_type, ae_errors->latest_err);
+
+               seq_printf(sfile, "%d:", ae_errors->ae);
+               seq_printf(sfile, " %*d", CNV_MIN_PADDING, ae_errors->err_cnt);
+               seq_printf(sfile, "%*s [%d]", CNV_MIN_PADDING,
+                          cnv_error_names[err_type], err_info);
+       }
+       seq_putc(sfile, '\n');
+
+       return 0;
+}
+
+static const struct seq_operations qat_cnv_errors_sops = {
+       .start = qat_cnv_errors_seq_start,
+       .next = qat_cnv_errors_seq_next,
+       .stop = qat_cnv_errors_seq_stop,
+       .show = qat_cnv_errors_seq_show,
+};
+
+/**
+ * cnv_err_stats_alloc() - Get CNV stats for the provided device.
+ * @accel_dev: Pointer to a QAT acceleration device
+ *
+ * Allocates and populates a table of CNV error statistics for each non-admin
+ * AE available through the supplied acceleration device. The caller becomes
+ * the owner of this memory and is responsible for freeing it with kfree().
+ *
+ * Returns: a pointer to a dynamically allocated struct cnv_err_stats on
+ * success, or an ERR_PTR-encoded negative value on error.
+ */
+static struct cnv_err_stats *cnv_err_stats_alloc(struct adf_accel_dev *accel_dev)
+{
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       struct cnv_err_stats *err_stats;
+       unsigned long ae_count;
+       unsigned long ae_mask;
+       size_t err_stats_size;
+       unsigned long ae;
+       unsigned int i;
+       u16 latest_err;
+       u16 err_cnt;
+       int ret;
+
+       if (!adf_dev_started(accel_dev)) {
+               dev_err(&GET_DEV(accel_dev), "QAT Device not started\n");
+               return ERR_PTR(-EBUSY);
+       }
+
+       /* Ignore the admin AEs */
+       ae_mask = hw_data->ae_mask & ~hw_data->admin_ae_mask;
+       ae_count = hweight_long(ae_mask);
+       if (unlikely(!ae_count))
+               return ERR_PTR(-EINVAL);
+
+       err_stats_size = struct_size(err_stats, ae_cnv_errors, ae_count);
+       err_stats = kmalloc(err_stats_size, GFP_KERNEL);
+       if (!err_stats)
+               return ERR_PTR(-ENOMEM);
+
+       err_stats->ae_count = ae_count;
+
+       i = 0;
+       for_each_set_bit(ae, &ae_mask, GET_MAX_ACCELENGINES(accel_dev)) {
+               ret = adf_get_cnv_stats(accel_dev, ae, &err_cnt, &latest_err);
+               if (ret) {
+                       dev_dbg(&GET_DEV(accel_dev),
+                               "Failed to get CNV stats for ae %ld, [%d].\n",
+                               ae, ret);
+                       err_stats->ae_cnv_errors[i++].is_comp_ae = false;
+                       continue;
+               }
+               err_stats->ae_cnv_errors[i].is_comp_ae = true;
+               err_stats->ae_cnv_errors[i].latest_err = latest_err;
+               err_stats->ae_cnv_errors[i].err_cnt = err_cnt;
+               err_stats->ae_cnv_errors[i].ae = ae;
+               i++;
+       }
+
+       return err_stats;
+}
+
+static int qat_cnv_errors_file_open(struct inode *inode, struct file *file)
+{
+       struct adf_accel_dev *accel_dev = inode->i_private;
+       struct seq_file *cnv_errors_seq_file;
+       struct cnv_err_stats *cnv_err_stats;
+       int ret;
+
+       cnv_err_stats = cnv_err_stats_alloc(accel_dev);
+       if (IS_ERR(cnv_err_stats))
+               return PTR_ERR(cnv_err_stats);
+
+       ret = seq_open(file, &qat_cnv_errors_sops);
+       if (unlikely(ret)) {
+               kfree(cnv_err_stats);
+               return ret;
+       }
+
+       cnv_errors_seq_file = file->private_data;
+       cnv_errors_seq_file->private = cnv_err_stats;
+       return ret;
+}
+
+static int qat_cnv_errors_file_release(struct inode *inode, struct file *file)
+{
+       struct seq_file *cnv_errors_seq_file = file->private_data;
+
+       kfree(cnv_errors_seq_file->private);
+       cnv_errors_seq_file->private = NULL;
+
+       return seq_release(inode, file);
+}
+
+static const struct file_operations qat_cnv_fops = {
+       .owner = THIS_MODULE,
+       .open = qat_cnv_errors_file_open,
+       .read = seq_read,
+       .llseek = seq_lseek,
+       .release = qat_cnv_errors_file_release,
+};
+
+static ssize_t no_comp_file_read(struct file *f, char __user *buf, size_t count,
+                                loff_t *pos)
+{
+       char *file_msg = "No engine configured for comp\n";
+
+       return simple_read_from_buffer(buf, count, pos, file_msg,
+                                      strlen(file_msg));
+}
+
+static const struct file_operations qat_cnv_no_comp_fops = {
+       .owner = THIS_MODULE,
+       .read = no_comp_file_read,
+};
+
+void adf_cnv_dbgfs_add(struct adf_accel_dev *accel_dev)
+{
+       const struct file_operations *fops;
+       void *data;
+
+       if (adf_hw_dev_has_compression(accel_dev)) {
+               fops = &qat_cnv_fops;
+               data = accel_dev;
+       } else {
+               fops = &qat_cnv_no_comp_fops;
+               data = NULL;
+       }
+
+       accel_dev->cnv_dbgfile = debugfs_create_file(CNV_DEBUGFS_FILENAME, 0400,
+                                                    accel_dev->debugfs_dir,
+                                                    data, fops);
+}
+
+void adf_cnv_dbgfs_rm(struct adf_accel_dev *accel_dev)
+{
+       debugfs_remove(accel_dev->cnv_dbgfile);
+       accel_dev->cnv_dbgfile = NULL;
+}
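
get_err_info() above treats the delta-length errors as a signed 12-bit quantity and the slice errors as a signed 8-bit quantity, hence the two sign-bit indices (11 and 7). A userspace check of that decoding, with sign_extend32() re-created to match the kernel helper:

#include <stdint.h>
#include <stdio.h>

static int32_t sign_extend32(uint32_t value, int index)
{
        uint8_t shift = 31 - index;

        return (int32_t)(value << shift) >> shift;
}

int main(void)
{
        /* 0xFFF is -1 in a 12-bit field, 0x80 is -128 in an 8-bit field */
        printf("delta 0xFFF -> %d\n", sign_extend32(0xFFF, 11)); /* -1   */
        printf("slice 0x80  -> %d\n", sign_extend32(0x80, 7));   /* -128 */
        return 0;
}
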
diff --git a/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.h b/drivers/crypto/intel/qat/qat_common/adf_cnv_dbgfs.h
new file mode 100644 (file)
index 0000000..b02b096
--- /dev/null
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_CNV_DBG_H
+#define ADF_CNV_DBG_H
+
+struct adf_accel_dev;
+
+void adf_cnv_dbgfs_add(struct adf_accel_dev *accel_dev);
+void adf_cnv_dbgfs_rm(struct adf_accel_dev *accel_dev);
+
+#endif
index 673b5044c62a508c185712d7eab539d0d9958549..f06188033a93fb2d5beef42af7c34345029fe72d 100644 (file)
@@ -25,6 +25,8 @@
 #define ADF_STATUS_AE_STARTED 6
 #define ADF_STATUS_PF_RUNNING 7
 #define ADF_STATUS_IRQ_ALLOCATED 8
+#define ADF_STATUS_CRYPTO_ALGS_REGISTERED 9
+#define ADF_STATUS_COMP_ALGS_REGISTERED 10
 
 enum adf_dev_reset_mode {
        ADF_DEV_RESET_ASYNC = 0,
@@ -85,14 +87,6 @@ void adf_reset_flr(struct adf_accel_dev *accel_dev);
 void adf_dev_restore(struct adf_accel_dev *accel_dev);
 int adf_init_aer(void);
 void adf_exit_aer(void);
-int adf_init_admin_comms(struct adf_accel_dev *accel_dev);
-void adf_exit_admin_comms(struct adf_accel_dev *accel_dev);
-int adf_send_admin_init(struct adf_accel_dev *accel_dev);
-int adf_get_ae_fw_counters(struct adf_accel_dev *accel_dev, u16 ae, u64 *reqs, u64 *resps);
-int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay);
-int adf_send_admin_tim_sync(struct adf_accel_dev *accel_dev, u32 cnt);
-int adf_send_admin_hb_timer(struct adf_accel_dev *accel_dev, uint32_t ticks);
-int adf_get_fw_timestamp(struct adf_accel_dev *accel_dev, u64 *timestamp);
 int adf_init_arb(struct adf_accel_dev *accel_dev);
 void adf_exit_arb(struct adf_accel_dev *accel_dev);
 void adf_update_ring_arb(struct adf_etr_ring_data *ring);
@@ -244,4 +238,14 @@ static inline void __iomem *adf_get_pmisc_base(struct adf_accel_dev *accel_dev)
        return pmisc->virt_addr;
 }
 
+static inline void __iomem *adf_get_aram_base(struct adf_accel_dev *accel_dev)
+{
+       struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+       struct adf_bar *param;
+
+       param = &GET_BARS(accel_dev)[hw_data->get_sram_bar_id(hw_data)];
+
+       return param->virt_addr;
+}
+
 #endif
index 04845f8d72be6fee817e73232e1e69b73af4525b..477efcc81a163745c157dffdc348aa261709290f 100644 (file)
@@ -5,9 +5,11 @@
 #include "adf_accel_devices.h"
 #include "adf_cfg.h"
 #include "adf_common_drv.h"
+#include "adf_cnv_dbgfs.h"
 #include "adf_dbgfs.h"
 #include "adf_fw_counters.h"
 #include "adf_heartbeat_dbgfs.h"
+#include "adf_pm_dbgfs.h"
 
 /**
  * adf_dbgfs_init() - add persistent debugfs entries
@@ -62,6 +64,8 @@ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
        if (!accel_dev->is_vf) {
                adf_fw_counters_dbgfs_add(accel_dev);
                adf_heartbeat_dbgfs_add(accel_dev);
+               adf_pm_dbgfs_add(accel_dev);
+               adf_cnv_dbgfs_add(accel_dev);
        }
 }
 
@@ -75,6 +79,8 @@ void adf_dbgfs_rm(struct adf_accel_dev *accel_dev)
                return;
 
        if (!accel_dev->is_vf) {
+               adf_cnv_dbgfs_rm(accel_dev);
+               adf_pm_dbgfs_rm(accel_dev);
                adf_heartbeat_dbgfs_rm(accel_dev);
                adf_fw_counters_dbgfs_rm(accel_dev);
        }
index cb6e09ef5c9ff92241f426586d6e6ea0ac900bd5..98fb7ccfed9fc30ab3dbbef17838eacaaf78cce3 100644 (file)
@@ -9,6 +9,7 @@
 #include <linux/types.h>
 
 #include "adf_accel_devices.h"
+#include "adf_admin.h"
 #include "adf_common_drv.h"
 #include "adf_fw_counters.h"
 
@@ -34,7 +35,7 @@ struct adf_ae_counters {
 
 struct adf_fw_counters {
        u16 ae_count;
-       struct adf_ae_counters ae_counters[];
+       struct adf_ae_counters ae_counters[] __counted_by(ae_count);
 };
 
 static void adf_fw_counters_parse_ae_values(struct adf_ae_counters *ae_counters, u32 ae,
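
__counted_by(ae_count) tells the compiler (and the FORTIFY_SOURCE/UBSAN bounds checks) that ae_counters[] holds exactly ae_count elements, turning out-of-bounds indexing into a detectable error. It also imposes an ordering rule: the count must be assigned before the array is indexed. Sketched against the allocation pattern this driver already uses:

struct adf_fw_counters *fw_counters;

fw_counters = kmalloc(struct_size(fw_counters, ae_counters, ae_count),
                      GFP_KERNEL);
if (!fw_counters)
        return ERR_PTR(-ENOMEM); /* error convention assumed for the sketch */

fw_counters->ae_count = ae_count;   /* set the count first ...  */
fw_counters->ae_counters[0].ae = 0; /* ... then index the array */
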
index 02d7a019ebf8aa1c530708192687fcc73b8c8dc2..1813fe1d5a06cc2f57faea11633067794e74fcb1 100644 (file)
@@ -139,6 +139,13 @@ do { \
 /* Number of heartbeat counter pairs */
 #define ADF_NUM_HB_CNT_PER_AE ADF_NUM_THREADS_PER_AE
 
+/* Rate Limiting */
+#define ADF_GEN4_RL_R2L_OFFSET                 0x508000
+#define ADF_GEN4_RL_L2C_OFFSET                 0x509000
+#define ADF_GEN4_RL_C2S_OFFSET                 0x508818
+#define ADF_GEN4_RL_TOKEN_PCIEIN_BUCKET_OFFSET 0x508800
+#define ADF_GEN4_RL_TOKEN_PCIEOUT_BUCKET_OFFSET        0x508804
+
 void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev);
 void adf_gen4_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
 int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number);
index 34c6cd8e27c0b58d16db092e0a1cbc875c0dd1e6..5dafd9a270dbd87f261a6d6ea228326dfcb6e78b 100644 (file)
@@ -2,7 +2,10 @@
 /* Copyright(c) 2022 Intel Corporation */
 #include <linux/bitfield.h>
 #include <linux/iopoll.h>
+#include <linux/kernel.h>
+
 #include "adf_accel_devices.h"
+#include "adf_admin.h"
 #include "adf_common_drv.h"
 #include "adf_gen4_pm.h"
 #include "adf_cfg_strings.h"
 #include "adf_gen4_hw_data.h"
 #include "adf_cfg.h"
 
-enum qat_pm_host_msg {
-       PM_NO_CHANGE = 0,
-       PM_SET_MIN,
-};
-
 struct adf_gen4_pm_data {
        struct work_struct pm_irq_work;
        struct adf_accel_dev *accel_dev;
@@ -25,6 +23,7 @@ static int send_host_msg(struct adf_accel_dev *accel_dev)
 {
        char pm_idle_support_cfg[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {};
        void __iomem *pmisc = adf_get_pmisc_base(accel_dev);
+       struct adf_pm *pm = &accel_dev->power_management;
        bool pm_idle_support;
        u32 msg;
        int ret;
@@ -39,6 +38,11 @@ static int send_host_msg(struct adf_accel_dev *accel_dev)
        if (ret)
                pm_idle_support = true;
 
+       if (pm_idle_support)
+               pm->host_ack_counter++;
+       else
+               pm->host_nack_counter++;
+
        /* Send HOST_MSG */
        msg = FIELD_PREP(ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK,
                         pm_idle_support ? PM_SET_MIN : PM_NO_CHANGE);
@@ -59,17 +63,27 @@ static void pm_bh_handler(struct work_struct *work)
                container_of(work, struct adf_gen4_pm_data, pm_irq_work);
        struct adf_accel_dev *accel_dev = pm_data->accel_dev;
        void __iomem *pmisc = adf_get_pmisc_base(accel_dev);
+       struct adf_pm *pm = &accel_dev->power_management;
        u32 pm_int_sts = pm_data->pm_int_sts;
        u32 val;
 
        /* PM Idle interrupt */
        if (pm_int_sts & ADF_GEN4_PM_IDLE_STS) {
+               pm->idle_irq_counters++;
                /* Issue host message to FW */
                if (send_host_msg(accel_dev))
                        dev_warn_ratelimited(&GET_DEV(accel_dev),
                                             "Failed to send host msg to FW\n");
        }
 
+       /* PM throttle interrupt */
+       if (pm_int_sts & ADF_GEN4_PM_THR_STS)
+               pm->throttle_irq_counters++;
+
+       /* PM fw interrupt */
+       if (pm_int_sts & ADF_GEN4_PM_FW_INT_STS)
+               pm->fw_irq_counters++;
+
        /* Clear interrupt status */
        ADF_CSR_WR(pmisc, ADF_GEN4_PM_INTERRUPT, pm_int_sts);
 
@@ -129,6 +143,9 @@ int adf_gen4_enable_pm(struct adf_accel_dev *accel_dev)
        if (ret)
                return ret;
 
+       /* Initialize PM internal data */
+       adf_gen4_init_dev_pm_data(accel_dev);
+
        /* Enable default PM interrupts: IDLE, THROTTLE */
        val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_INTERRUPT);
        val |= ADF_GEN4_PM_INT_EN_DEFAULT;
index c2768762cca3b683513936bdd1be1d40586d4bdc..a49352b79a7adff1b14eea0880f463d2010df7ff 100644 (file)
@@ -3,7 +3,14 @@
 #ifndef ADF_GEN4_PM_H
 #define ADF_GEN4_PM_H
 
-#include "adf_accel_devices.h"
+#include <linux/bits.h>
+
+struct adf_accel_dev;
+
+enum qat_pm_host_msg {
+       PM_NO_CHANGE = 0,
+       PM_SET_MIN,
+};
 
 /* Power management registers */
 #define ADF_GEN4_PM_HOST_MSG (0x50A01C)
 #define ADF_GEN4_PM_MAX_IDLE_FILTER            (0x7)
 #define ADF_GEN4_PM_DEFAULT_IDLE_SUPPORT       (0x1)
 
+/* PM CSRs fields masks */
+#define ADF_GEN4_PM_DOMAIN_POWER_GATED_MASK    GENMASK(15, 0)
+#define ADF_GEN4_PM_SSM_PM_ENABLE_MASK         GENMASK(15, 0)
+#define ADF_GEN4_PM_IDLE_FILTER_MASK           GENMASK(5, 3)
+#define ADF_GEN4_PM_IDLE_ENABLE_MASK           BIT(2)
+#define ADF_GEN4_PM_ENABLE_PM_MASK             BIT(21)
+#define ADF_GEN4_PM_ENABLE_PM_IDLE_MASK                BIT(22)
+#define ADF_GEN4_PM_ENABLE_DEEP_PM_IDLE_MASK   BIT(23)
+#define ADF_GEN4_PM_CURRENT_WP_MASK            GENMASK(19, 11)
+#define ADF_GEN4_PM_CPM_PM_STATE_MASK          GENMASK(22, 20)
+#define ADF_GEN4_PM_PENDING_WP_MASK            GENMASK(31, 23)
+#define ADF_GEN4_PM_THR_VALUE_MASK             GENMASK(6, 4)
+#define ADF_GEN4_PM_MIN_PWR_ACK_MASK           BIT(7)
+#define ADF_GEN4_PM_MIN_PWR_ACK_PENDING_MASK   BIT(17)
+#define ADF_GEN4_PM_CPR_ACTIVE_COUNT_MASK      BIT(0)
+#define ADF_GEN4_PM_CPR_MANAGED_COUNT_MASK     BIT(0)
+#define ADF_GEN4_PM_XLT_ACTIVE_COUNT_MASK      BIT(1)
+#define ADF_GEN4_PM_XLT_MANAGED_COUNT_MASK     BIT(1)
+#define ADF_GEN4_PM_DCPR_ACTIVE_COUNT_MASK     GENMASK(3, 2)
+#define ADF_GEN4_PM_DCPR_MANAGED_COUNT_MASK    GENMASK(3, 2)
+#define ADF_GEN4_PM_PKE_ACTIVE_COUNT_MASK      GENMASK(8, 4)
+#define ADF_GEN4_PM_PKE_MANAGED_COUNT_MASK     GENMASK(8, 4)
+#define ADF_GEN4_PM_WAT_ACTIVE_COUNT_MASK      GENMASK(13, 9)
+#define ADF_GEN4_PM_WAT_MANAGED_COUNT_MASK     GENMASK(13, 9)
+#define ADF_GEN4_PM_WCP_ACTIVE_COUNT_MASK      GENMASK(18, 14)
+#define ADF_GEN4_PM_WCP_MANAGED_COUNT_MASK     GENMASK(18, 14)
+#define ADF_GEN4_PM_UCS_ACTIVE_COUNT_MASK      GENMASK(20, 19)
+#define ADF_GEN4_PM_UCS_MANAGED_COUNT_MASK     GENMASK(20, 19)
+#define ADF_GEN4_PM_CPH_ACTIVE_COUNT_MASK      GENMASK(24, 21)
+#define ADF_GEN4_PM_CPH_MANAGED_COUNT_MASK     GENMASK(24, 21)
+#define ADF_GEN4_PM_ATH_ACTIVE_COUNT_MASK      GENMASK(28, 25)
+#define ADF_GEN4_PM_ATH_MANAGED_COUNT_MASK     GENMASK(28, 25)
+
 int adf_gen4_enable_pm(struct adf_accel_dev *accel_dev);
 bool adf_gen4_handle_pm_interrupt(struct adf_accel_dev *accel_dev);
 
+#ifdef CONFIG_DEBUG_FS
+void adf_gen4_init_dev_pm_data(struct adf_accel_dev *accel_dev);
+#else
+static inline void adf_gen4_init_dev_pm_data(struct adf_accel_dev *accel_dev)
+{
+}
+#endif /* CONFIG_DEBUG_FS */
+
 #endif
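
The empty static inline stub under #else is the usual pattern for compiling a feature out while keeping call sites free of #ifdef. Its generic form, with placeholder names:

/* CONFIG_FOO, foo_init() and struct foo_dev are placeholders */
#ifdef CONFIG_FOO
void foo_init(struct foo_dev *dev);
#else
static inline void foo_init(struct foo_dev *dev)
{
}
#endif /* CONFIG_FOO */

Callers invoke foo_init() unconditionally; when the option is off, the empty inline compiles away to nothing.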
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_pm_debugfs.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_pm_debugfs.c
new file mode 100644 (file)
index 0000000..ee0b507
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_pm_debugfs.c
@@ -0,0 +1,266 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/string_helpers.h>
+#include <linux/stringify.h>
+
+#include "adf_accel_devices.h"
+#include "adf_admin.h"
+#include "adf_common_drv.h"
+#include "adf_gen4_pm.h"
+#include "icp_qat_fw_init_admin.h"
+
+/*
+ * This is needed because a variable is used to index the mask at
+ * pm_scnprint_table(), so the mask is not a compile-time constant and
+ * the compile-time asserts in FIELD_GET() and u32_get_bits() would fail.
+ */
+#define field_get(_mask, _reg) (((_reg) & (_mask)) >> (ffs(_mask) - 1))
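
With a compile-time-constant mask this behaves exactly like FIELD_GET(); the difference only matters when the mask arrives through a variable such as table[i].field_mask below. For example, with ADF_GEN4_PM_CPM_PM_STATE_MASK == GENMASK(22, 20), ffs() yields 21 and the field is shifted down by 20 bits; a small stand-alone check:

/* field_get(GENMASK(22, 20), 0x00500000) == 0x5 */
static u32 cpm_pm_state(u32 status)
{
        /* the same expression also accepts a mask held in a variable,
         * which FIELD_GET() rejects at build time */
        return field_get(ADF_GEN4_PM_CPM_PM_STATE_MASK, status);
}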
+
+#define PM_INFO_MEMBER_OFF(member)     \
+       (offsetof(struct icp_qat_fw_init_admin_pm_info, member) / sizeof(u32))
+
+#define PM_INFO_REGSET_ENTRY_MASK(_reg_, _field_, _mask_)      \
+{                                                              \
+       .reg_offset = PM_INFO_MEMBER_OFF(_reg_),                \
+       .key = __stringify(_field_),                            \
+       .field_mask = _mask_,                                   \
+}
+
+#define PM_INFO_REGSET_ENTRY32(_reg_, _field_) \
+       PM_INFO_REGSET_ENTRY_MASK(_reg_, _field_, GENMASK(31, 0))
+
+#define PM_INFO_REGSET_ENTRY(_reg_, _field_)   \
+       PM_INFO_REGSET_ENTRY_MASK(_reg_, _field_, ADF_GEN4_PM_##_field_##_MASK)
+
+#define PM_INFO_MAX_KEY_LEN    21
+
+struct pm_status_row {
+       int reg_offset;
+       u32 field_mask;
+       const char *key;
+};
+
+static struct pm_status_row pm_fuse_rows[] = {
+       PM_INFO_REGSET_ENTRY(fusectl0, ENABLE_PM),
+       PM_INFO_REGSET_ENTRY(fusectl0, ENABLE_PM_IDLE),
+       PM_INFO_REGSET_ENTRY(fusectl0, ENABLE_DEEP_PM_IDLE),
+};
+
+static struct pm_status_row pm_info_rows[] = {
+       PM_INFO_REGSET_ENTRY(pm.status, CPM_PM_STATE),
+       PM_INFO_REGSET_ENTRY(pm.status, PENDING_WP),
+       PM_INFO_REGSET_ENTRY(pm.status, CURRENT_WP),
+       PM_INFO_REGSET_ENTRY(pm.fw_init, IDLE_ENABLE),
+       PM_INFO_REGSET_ENTRY(pm.fw_init, IDLE_FILTER),
+       PM_INFO_REGSET_ENTRY(pm.main, MIN_PWR_ACK),
+       PM_INFO_REGSET_ENTRY(pm.thread, MIN_PWR_ACK_PENDING),
+       PM_INFO_REGSET_ENTRY(pm.main, THR_VALUE),
+};
+
+static struct pm_status_row pm_ssm_rows[] = {
+       PM_INFO_REGSET_ENTRY(ssm.pm_enable, SSM_PM_ENABLE),
+       PM_INFO_REGSET_ENTRY32(ssm.active_constraint, ACTIVE_CONSTRAINT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_domain_status, DOMAIN_POWER_GATED),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, ATH_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, CPH_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, PKE_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, CPR_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, DCPR_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, UCS_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, XLT_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, WAT_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_active_status, WCP_ACTIVE_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, ATH_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, CPH_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, PKE_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, CPR_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, DCPR_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, UCS_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, XLT_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, WAT_MANAGED_COUNT),
+       PM_INFO_REGSET_ENTRY(ssm.pm_managed_status, WCP_MANAGED_COUNT),
+};
+
+static struct pm_status_row pm_log_rows[] = {
+       PM_INFO_REGSET_ENTRY32(event_counters.host_msg, HOST_MSG_EVENT_COUNT),
+       PM_INFO_REGSET_ENTRY32(event_counters.sys_pm, SYS_PM_EVENT_COUNT),
+       PM_INFO_REGSET_ENTRY32(event_counters.local_ssm, SSM_EVENT_COUNT),
+       PM_INFO_REGSET_ENTRY32(event_counters.timer, TIMER_EVENT_COUNT),
+       PM_INFO_REGSET_ENTRY32(event_counters.unknown, UNKNOWN_EVENT_COUNT),
+};
+
+static struct pm_status_row pm_event_rows[ICP_QAT_NUMBER_OF_PM_EVENTS] = {
+       PM_INFO_REGSET_ENTRY32(event_log[0], EVENT0),
+       PM_INFO_REGSET_ENTRY32(event_log[1], EVENT1),
+       PM_INFO_REGSET_ENTRY32(event_log[2], EVENT2),
+       PM_INFO_REGSET_ENTRY32(event_log[3], EVENT3),
+       PM_INFO_REGSET_ENTRY32(event_log[4], EVENT4),
+       PM_INFO_REGSET_ENTRY32(event_log[5], EVENT5),
+       PM_INFO_REGSET_ENTRY32(event_log[6], EVENT6),
+       PM_INFO_REGSET_ENTRY32(event_log[7], EVENT7),
+};
+
+static struct pm_status_row pm_csrs_rows[] = {
+       PM_INFO_REGSET_ENTRY32(pm.fw_init, CPM_PM_FW_INIT),
+       PM_INFO_REGSET_ENTRY32(pm.status, CPM_PM_STATUS),
+       PM_INFO_REGSET_ENTRY32(pm.main, CPM_PM_MASTER_FW),
+       PM_INFO_REGSET_ENTRY32(pm.pwrreq, CPM_PM_PWRREQ),
+};
+
+static int pm_scnprint_table(char *buff, struct pm_status_row *table,
+                            u32 *pm_info_regs, size_t buff_size, int table_len,
+                            bool lowercase)
+{
+       char key[PM_INFO_MAX_KEY_LEN];
+       int wr = 0;
+       int i;
+
+       for (i = 0; i < table_len; i++) {
+               if (lowercase)
+                       string_lower(key, table[i].key);
+               else
+                       string_upper(key, table[i].key);
+
+               wr += scnprintf(&buff[wr], buff_size - wr, "%s: %#x\n", key,
+                               field_get(table[i].field_mask,
+                                         pm_info_regs[table[i].reg_offset]));
+       }
+
+       return wr;
+}
+
+static int pm_scnprint_table_upper_keys(char *buff, struct pm_status_row *table,
+                                       u32 *pm_info_regs, size_t buff_size,
+                                       int table_len)
+{
+       return pm_scnprint_table(buff, table, pm_info_regs, buff_size,
+                                table_len, false);
+}
+
+static int pm_scnprint_table_lower_keys(char *buff, struct pm_status_row *table,
+                                       u32 *pm_info_regs, size_t buff_size,
+                                       int table_len)
+{
+       return pm_scnprint_table(buff, table, pm_info_regs, buff_size,
+                                table_len, true);
+}
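
The wr accumulation used by these helpers is safe because scnprintf() returns the number of bytes actually written, never more than the size passed in, so the remaining space cannot underflow; snprintf(), whose return value is the would-be length, gives no such guarantee. In sketch form (names illustrative):

static int dump_two_keys(char *buf, size_t size, u32 a, u32 b)
{
        int wr = 0;

        /* wr can reach but never exceed size, so size - wr stays >= 0 */
        wr += scnprintf(buf + wr, size - wr, "key_a: %#x\n", a);
        wr += scnprintf(buf + wr, size - wr, "key_b: %#x\n", b);

        return wr;
}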
+
+static_assert(sizeof(struct icp_qat_fw_init_admin_pm_info) < PAGE_SIZE);
+
+static ssize_t adf_gen4_print_pm_status(struct adf_accel_dev *accel_dev,
+                                       char __user *buf, size_t count,
+                                       loff_t *pos)
+{
+       void __iomem *pmisc = adf_get_pmisc_base(accel_dev);
+       struct adf_pm *pm = &accel_dev->power_management;
+       struct icp_qat_fw_init_admin_pm_info *pm_info;
+       dma_addr_t p_state_addr;
+       u32 *pm_info_regs;
+       char *pm_kv;
+       int len = 0;
+       u32 val;
+       int ret;
+
+       pm_info = kmalloc(PAGE_SIZE, GFP_KERNEL);
+       if (!pm_info)
+               return -ENOMEM;
+
+       pm_kv = kmalloc(PAGE_SIZE, GFP_KERNEL);
+       if (!pm_kv) {
+               ret = -ENOMEM;
+               goto out_free;
+       }
+
+       p_state_addr = dma_map_single(&GET_DEV(accel_dev), pm_info, PAGE_SIZE,
+                                     DMA_FROM_DEVICE);
+       ret = dma_mapping_error(&GET_DEV(accel_dev), p_state_addr);
+       if (ret)
+               goto out_free;
+
+       /* Query PM info from QAT FW */
+       ret = adf_get_pm_info(accel_dev, p_state_addr, PAGE_SIZE);
+       dma_unmap_single(&GET_DEV(accel_dev), p_state_addr, PAGE_SIZE,
+                        DMA_FROM_DEVICE);
+       if (ret)
+               goto out_free;
+
+       pm_info_regs = (u32 *)pm_info;
+
+       /* Fusectl related */
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "----------- PM Fuse info ---------\n");
+       len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_fuse_rows,
+                                           pm_info_regs, PAGE_SIZE - len,
+                                           ARRAY_SIZE(pm_fuse_rows));
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "max_pwrreq: %#x\n",
+                        pm_info->max_pwrreq);
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "min_pwrreq: %#x\n",
+                        pm_info->min_pwrreq);
+
+       /* PM related */
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "------------  PM Info ------------\n");
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "power_level: %s\n",
+                        pm_info->pwr_state == PM_SET_MIN ? "min" : "max");
+       len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_info_rows,
+                                           pm_info_regs, PAGE_SIZE - len,
+                                           ARRAY_SIZE(pm_info_rows));
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "pm_mode: STATIC\n");
+
+       /* SSM related */
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "----------- SSM_PM Info ----------\n");
+       len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_ssm_rows,
+                                           pm_info_regs, PAGE_SIZE - len,
+                                           ARRAY_SIZE(pm_ssm_rows));
+
+       /* Log related */
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "------------- PM Log -------------\n");
+       len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_log_rows,
+                                           pm_info_regs, PAGE_SIZE - len,
+                                           ARRAY_SIZE(pm_log_rows));
+
+       len += pm_scnprint_table_lower_keys(&pm_kv[len], pm_event_rows,
+                                           pm_info_regs, PAGE_SIZE - len,
+                                           ARRAY_SIZE(pm_event_rows));
+
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "idle_irq_count: %#x\n",
+                        pm->idle_irq_counters);
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "fw_irq_count: %#x\n",
+                        pm->fw_irq_counters);
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "throttle_irq_count: %#x\n", pm->throttle_irq_counters);
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "host_ack_count: %#x\n",
+                        pm->host_ack_counter);
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len, "host_nack_count: %#x\n",
+                        pm->host_nack_counter);
+
+       /* CSRs content */
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "----------- HW PM CSRs -----------\n");
+       len += pm_scnprint_table_upper_keys(&pm_kv[len], pm_csrs_rows,
+                                           pm_info_regs, PAGE_SIZE - len,
+                                           ARRAY_SIZE(pm_csrs_rows));
+
+       val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_HOST_MSG);
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "CPM_PM_HOST_MSG: %#x\n", val);
+       val = ADF_CSR_RD(pmisc, ADF_GEN4_PM_INTERRUPT);
+       len += scnprintf(&pm_kv[len], PAGE_SIZE - len,
+                        "CPM_PM_INTERRUPT: %#x\n", val);
+       ret = simple_read_from_buffer(buf, count, pos, pm_kv, len);
+
+out_free:
+       kfree(pm_info);
+       kfree(pm_kv);
+       return ret;
+}
+
+void adf_gen4_init_dev_pm_data(struct adf_accel_dev *accel_dev)
+{
+       accel_dev->power_management.print_pm_status = adf_gen4_print_pm_status;
+       accel_dev->power_management.present = true;
+}
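
The read hook that eventually consumes this callback is wired up elsewhere in the series (via the adf_pm_dbgfs_add() call earlier in this patch). A minimal sketch of what such a debugfs read op could look like, assuming the device pointer was passed as the data argument of debugfs_create_file() (names illustrative, not from this patch):

#include <linux/debugfs.h>
#include <linux/fs.h>

static ssize_t pm_status_read(struct file *file, char __user *buf,
                              size_t count, loff_t *pos)
{
        /* debugfs_create_file() stored the device pointer in i_private */
        struct adf_accel_dev *accel_dev = file_inode(file)->i_private;
        struct adf_pm *pm = &accel_dev->power_management;

        if (!pm->present)
                return -EOPNOTSUPP;

        return pm->print_pm_status(accel_dev, buf, count, pos);
}

static const struct file_operations pm_status_fops = {
        .owner = THIS_MODULE,
        .read = pm_status_read,
};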
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
new file mode 100644 (file)
index 0000000..048c246
--- /dev/null
+++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
@@ -0,0 +1,1566 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+#include "adf_common_drv.h"
+#include "adf_gen4_hw_data.h"
+#include "adf_gen4_ras.h"
+#include "adf_sysfs_ras_counters.h"
+
+#define BITS_PER_REG(_n_) (sizeof(_n_) * BITS_PER_BYTE)
+
+static void enable_errsou_reporting(void __iomem *csr)
+{
+       /* Enable correctable error reporting in ERRSOU0 */
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK0, 0);
+
+       /* Enable uncorrectable error reporting in ERRSOU1 */
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK1, 0);
+
+       /*
+        * Enable uncorrectable error reporting in ERRSOU2
+        * but disable PM interrupt and CFC attention interrupt by default
+        */
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK2,
+                  ADF_GEN4_ERRSOU2_PM_INT_BIT |
+                  ADF_GEN4_ERRSOU2_CPP_CFC_ATT_INT_BITMASK);
+
+       /*
+        * Enable uncorrectable error reporting in ERRSOU3
+        * but disable RLT error interrupt and VFLR notify interrupt by default
+        */
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK3,
+                  ADF_GEN4_ERRSOU3_RLTERROR_BIT |
+                  ADF_GEN4_ERRSOU3_VFLRNOTIFY_BIT);
+}
+
+static void disable_errsou_reporting(void __iomem *csr)
+{
+       u32 val = 0;
+
+       /* Disable correctable error reporting in ERRSOU0 */
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK0, ADF_GEN4_ERRSOU0_BIT);
+
+       /* Disable uncorrectable error reporting in ERRSOU1 */
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK1, ADF_GEN4_ERRSOU1_BITMASK);
+
+       /* Disable uncorrectable error reporting in ERRSOU2 */
+       val = ADF_CSR_RD(csr, ADF_GEN4_ERRMSK2);
+       val |= ADF_GEN4_ERRSOU2_DIS_BITMASK;
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK2, val);
+
+       /* Disable uncorrectable error reporting in ERRSOU3 */
+       ADF_CSR_WR(csr, ADF_GEN4_ERRMSK3, ADF_GEN4_ERRSOU3_BITMASK);
+}
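
In the ERRMSKn registers a set bit masks (silences) the matching ERRSOUn source, so writing 0 enables full reporting and "enable with exceptions" writes only the bits to keep masked, as above. The convention in sketch form, reusing the driver's readl/writel-style CSR accessors (function names are placeholders):

/* a set bit in ERRMSKn masks (silences) the matching ERRSOUn source */
static void mask_one_source(void __iomem *csr, u32 errmsk_offset, u32 bit)
{
        u32 val = ADF_CSR_RD(csr, errmsk_offset);

        ADF_CSR_WR(csr, errmsk_offset, val | bit);
}

static void unmask_all_sources(void __iomem *csr, u32 errmsk_offset)
{
        ADF_CSR_WR(csr, errmsk_offset, 0);
}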
+
+static void enable_ae_error_reporting(struct adf_accel_dev *accel_dev,
+                                     void __iomem *csr)
+{
+       u32 ae_mask = GET_HW_DATA(accel_dev)->ae_mask;
+
+       /* Enable Acceleration Engine correctable error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_HIAECORERRLOGENABLE_CPP0, ae_mask);
+
+       /* Enable Acceleration Engine uncorrectable error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_HIAEUNCERRLOGENABLE_CPP0, ae_mask);
+}
+
+static void disable_ae_error_reporting(void __iomem *csr)
+{
+       /* Disable Acceleration Engine correctable error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_HIAECORERRLOGENABLE_CPP0, 0);
+
+       /* Disable Acceleration Engine uncorrectable error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_HIAEUNCERRLOGENABLE_CPP0, 0);
+}
+
+static void enable_cpp_error_reporting(struct adf_accel_dev *accel_dev,
+                                      void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+       /* Enable HI CPP Agents Command Parity Error Reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOGENABLE,
+                  err_mask->cppagentcmdpar_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_CPP_CFC_ERR_CTRL,
+                  ADF_GEN4_CPP_CFC_ERR_CTRL_BITMASK);
+}
+
+static void disable_cpp_error_reporting(void __iomem *csr)
+{
+       /* Disable HI CPP Agents Command Parity Error Reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOGENABLE, 0);
+
+       ADF_CSR_WR(csr, ADF_GEN4_CPP_CFC_ERR_CTRL,
+                  ADF_GEN4_CPP_CFC_ERR_CTRL_DIS_BITMASK);
+}
+
+static void enable_ti_ri_error_reporting(void __iomem *csr)
+{
+       u32 reg;
+
+       /* Enable RI Memory error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_RI_MEM_PAR_ERR_EN0,
+                  ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK |
+                  ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK);
+
+       /* Enable IOSF Primary Command Parity error Reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_RIMISCCTL, ADF_GEN4_RIMISCSTS_BIT);
+
+       /* Enable TI Internal Memory Parity Error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_TI_CI_PAR_ERR_MASK, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_PULL0FUB_PAR_ERR_MASK, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_PUSHFUB_PAR_ERR_MASK, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_CD_PAR_ERR_MASK, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_TRNSB_PAR_ERR_MASK, 0);
+
+       /* Enable error handling in RI, TI CPP interface control registers */
+       ADF_CSR_WR(csr, ADF_GEN4_RICPPINTCTL, ADF_GEN4_RICPPINTCTL_BITMASK);
+
+       ADF_CSR_WR(csr, ADF_GEN4_TICPPINTCTL, ADF_GEN4_TICPPINTCTL_BITMASK);
+
+       /*
+        * Enable error detection and reporting in TIMISCSTS
+        * while preserving the value of bits 1, 2 and 30
+        */
+       reg = ADF_CSR_RD(csr, ADF_GEN4_TIMISCCTL);
+       reg &= ADF_GEN4_TIMSCCTL_RELAY_BITMASK;
+       reg |= ADF_GEN4_TIMISCCTL_BIT;
+       ADF_CSR_WR(csr, ADF_GEN4_TIMISCCTL, reg);
+}
+
+static void disable_ti_ri_error_reporting(void __iomem *csr)
+{
+       u32 reg;
+
+       /* Disable RI Memory error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_RI_MEM_PAR_ERR_EN0, 0);
+
+       /* Disable IOSF Primary Command Parity error Reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_RIMISCCTL, 0);
+
+       /* Disable TI Internal Memory Parity Error reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_TI_CI_PAR_ERR_MASK,
+                  ADF_GEN4_TI_CI_PAR_STS_BITMASK);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_PULL0FUB_PAR_ERR_MASK,
+                  ADF_GEN4_TI_PULL0FUB_PAR_STS_BITMASK);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_PUSHFUB_PAR_ERR_MASK,
+                  ADF_GEN4_TI_PUSHFUB_PAR_STS_BITMASK);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_CD_PAR_ERR_MASK,
+                  ADF_GEN4_TI_CD_PAR_STS_BITMASK);
+       ADF_CSR_WR(csr, ADF_GEN4_TI_TRNSB_PAR_ERR_MASK,
+                  ADF_GEN4_TI_TRNSB_PAR_STS_BITMASK);
+
+       /* Disable error handling in RI, TI CPP interface control registers */
+       ADF_CSR_WR(csr, ADF_GEN4_RICPPINTCTL, 0);
+
+       ADF_CSR_WR(csr, ADF_GEN4_TICPPINTCTL, 0);
+
+       /*
+        * Disable error detection and reporting in TIMISCSTS
+        * while preserving the value of bits 1, 2 and 30
+        */
+       reg = ADF_CSR_RD(csr, ADF_GEN4_TIMISCCTL);
+       reg &= ADF_GEN4_TIMSCCTL_RELAY_BITMASK;
+       ADF_CSR_WR(csr, ADF_GEN4_TIMISCCTL, reg);
+}
+
+static void enable_rf_error_reporting(struct adf_accel_dev *accel_dev,
+                                     void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+       /* Enable RF parity error in Shared RAM */
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_SRC, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_ATH_CPH, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_CPR_XLT, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_DCPR_UCS, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_PKE, 0);
+
+       if (err_mask->parerr_wat_wcp_mask)
+               ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_WAT_WCP, 0);
+}
+
+static void disable_rf_error_reporting(struct adf_accel_dev *accel_dev,
+                                      void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+       /* Disable RF Parity Error reporting in Shared RAM */
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_SRC,
+                  ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_ATH_CPH,
+                  err_mask->parerr_ath_cph_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_CPR_XLT,
+                  err_mask->parerr_cpr_xlt_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_DCPR_UCS,
+                  err_mask->parerr_dcpr_ucs_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_PKE,
+                  err_mask->parerr_pke_mask);
+
+       if (err_mask->parerr_wat_wcp_mask)
+               ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITYMASK_WAT_WCP,
+                          err_mask->parerr_wat_wcp_mask);
+}
+
+static void enable_ssm_error_reporting(struct adf_accel_dev *accel_dev,
+                                      void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       u32 val = 0;
+
+       /* Enable SSM interrupts */
+       ADF_CSR_WR(csr, ADF_GEN4_INTMASKSSM, 0);
+
+       /* Enable shared memory error detection & correction */
+       val = ADF_CSR_RD(csr, ADF_GEN4_SSMFEATREN);
+       val |= err_mask->ssmfeatren_mask;
+       ADF_CSR_WR(csr, ADF_GEN4_SSMFEATREN, val);
+
+       /* Enable SER detection in SER_err_ssmsh register */
+       ADF_CSR_WR(csr, ADF_GEN4_SER_EN_SSMSH,
+                  ADF_GEN4_SER_EN_SSMSH_BITMASK);
+
+       /* Enable SSM soft parity error */
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_ATH_CPH, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_CPR_XLT, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_DCPR_UCS, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_PKE, 0);
+
+       if (err_mask->parerr_wat_wcp_mask)
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_WAT_WCP, 0);
+
+       /* Enable slice hang interrupt reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_ATH_CPH, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_CPR_XLT, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_DCPR_UCS, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_PKE, 0);
+
+       if (err_mask->parerr_wat_wcp_mask)
+               ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_WAT_WCP, 0);
+}
+
+static void disable_ssm_error_reporting(struct adf_accel_dev *accel_dev,
+                                       void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       u32 val = 0;
+
+       /* Disable SSM interrupts */
+       ADF_CSR_WR(csr, ADF_GEN4_INTMASKSSM,
+                  ADF_GEN4_INTMASKSSM_BITMASK);
+
+       /* Disable shared memory error detection & correction */
+       val = ADF_CSR_RD(csr, ADF_GEN4_SSMFEATREN);
+       val &= ADF_GEN4_SSMFEATREN_DIS_BITMASK;
+       ADF_CSR_WR(csr, ADF_GEN4_SSMFEATREN, val);
+
+       /* Disable SER detection in SER_err_ssmsh register */
+       ADF_CSR_WR(csr, ADF_GEN4_SER_EN_SSMSH, 0);
+
+       /* Disable SSM soft parity error */
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_ATH_CPH,
+                  err_mask->parerr_ath_cph_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_CPR_XLT,
+                  err_mask->parerr_cpr_xlt_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_DCPR_UCS,
+                  err_mask->parerr_dcpr_ucs_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_PKE,
+                  err_mask->parerr_pke_mask);
+
+       if (err_mask->parerr_wat_wcp_mask)
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPARERRMSK_WAT_WCP,
+                          err_mask->parerr_wat_wcp_mask);
+
+       /* Disable slice hang interrupt reporting */
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_ATH_CPH,
+                  err_mask->parerr_ath_cph_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_CPR_XLT,
+                  err_mask->parerr_cpr_xlt_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_DCPR_UCS,
+                  err_mask->parerr_dcpr_ucs_mask);
+
+       ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_PKE,
+                  err_mask->parerr_pke_mask);
+
+       if (err_mask->parerr_wat_wcp_mask)
+               ADF_CSR_WR(csr, ADF_GEN4_SHINTMASKSSM_WAT_WCP,
+                          err_mask->parerr_wat_wcp_mask);
+}
+
+static void enable_aram_error_reporting(void __iomem *csr)
+{
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERRUERR_EN,
+                  ADF_GEN4_REG_ARAMCERRUERR_EN_BITMASK);
+
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERR,
+                  ADF_GEN4_REG_ARAMCERR_EN_BITMASK);
+
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMUERR,
+                  ADF_GEN4_REG_ARAMUERR_EN_BITMASK);
+
+       ADF_CSR_WR(csr, ADF_GEN4_REG_CPPMEMTGTERR,
+                  ADF_GEN4_REG_CPPMEMTGTERR_EN_BITMASK);
+}
+
+static void disable_aram_error_reporting(void __iomem *csr)
+{
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERRUERR_EN, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERR, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMUERR, 0);
+       ADF_CSR_WR(csr, ADF_GEN4_REG_CPPMEMTGTERR, 0);
+}
+
+static void adf_gen4_enable_ras(struct adf_accel_dev *accel_dev)
+{
+       void __iomem *aram_csr = adf_get_aram_base(accel_dev);
+       void __iomem *csr = adf_get_pmisc_base(accel_dev);
+
+       enable_errsou_reporting(csr);
+       enable_ae_error_reporting(accel_dev, csr);
+       enable_cpp_error_reporting(accel_dev, csr);
+       enable_ti_ri_error_reporting(csr);
+       enable_rf_error_reporting(accel_dev, csr);
+       enable_ssm_error_reporting(accel_dev, csr);
+       enable_aram_error_reporting(aram_csr);
+}
+
+static void adf_gen4_disable_ras(struct adf_accel_dev *accel_dev)
+{
+       void __iomem *aram_csr = adf_get_aram_base(accel_dev);
+       void __iomem *csr = adf_get_pmisc_base(accel_dev);
+
+       disable_errsou_reporting(csr);
+       disable_ae_error_reporting(csr);
+       disable_cpp_error_reporting(csr);
+       disable_ti_ri_error_reporting(csr);
+       disable_rf_error_reporting(accel_dev, csr);
+       disable_ssm_error_reporting(accel_dev, csr);
+       disable_aram_error_reporting(aram_csr);
+}
+
+static void adf_gen4_process_errsou0(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr)
+{
+       u32 aecorrerr = ADF_CSR_RD(csr, ADF_GEN4_HIAECORERRLOG_CPP0);
+
+       aecorrerr &= GET_HW_DATA(accel_dev)->ae_mask;
+
+       dev_warn(&GET_DEV(accel_dev),
+                "Correctable error detected in AE: 0x%x\n",
+                aecorrerr);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+
+       /* Clear interrupt from ERRSOU0 */
+       ADF_CSR_WR(csr, ADF_GEN4_HIAECORERRLOG_CPP0, aecorrerr);
+}
+
+static bool adf_handle_cpp_aeunc(struct adf_accel_dev *accel_dev,
+                                void __iomem *csr, u32 errsou)
+{
+       u32 aeuncorerr;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_HIAEUNCERRLOG_CPP0_BIT))
+               return false;
+
+       aeuncorerr = ADF_CSR_RD(csr, ADF_GEN4_HIAEUNCERRLOG_CPP0);
+       aeuncorerr &= GET_HW_DATA(accel_dev)->ae_mask;
+
+       dev_err(&GET_DEV(accel_dev),
+               "Uncorrectable error detected in AE: 0x%x\n",
+               aeuncorerr);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+       ADF_CSR_WR(csr, ADF_GEN4_HIAEUNCERRLOG_CPP0, aeuncorerr);
+
+       return false;
+}
+
+static bool adf_handle_cppcmdparerr(struct adf_accel_dev *accel_dev,
+                                   void __iomem *csr, u32 errsou)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       u32 cmdparerr;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_HICPPAGENTCMDPARERRLOG_BIT))
+               return false;
+
+       cmdparerr = ADF_CSR_RD(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOG);
+       cmdparerr &= err_mask->cppagentcmdpar_mask;
+
+       dev_err(&GET_DEV(accel_dev),
+               "HI CPP agent command parity error: 0x%x\n",
+               cmdparerr);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+       ADF_CSR_WR(csr, ADF_GEN4_HICPPAGENTCMDPARERRLOG, cmdparerr);
+
+       return true;
+}
+
+static bool adf_handle_ri_mem_par_err(struct adf_accel_dev *accel_dev,
+                                     void __iomem *csr, u32 errsou)
+{
+       bool reset_required = false;
+       u32 rimem_parerr_sts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_RIMEM_PARERR_STS_BIT))
+               return false;
+
+       rimem_parerr_sts = ADF_CSR_RD(csr, ADF_GEN4_RIMEM_PARERR_STS);
+       rimem_parerr_sts &= ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK |
+                           ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK;
+
+       if (rimem_parerr_sts & ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK) {
+               dev_err(&GET_DEV(accel_dev),
+                       "RI Memory Parity uncorrectable error: 0x%x\n",
+                       rimem_parerr_sts);
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       }
+
+       if (rimem_parerr_sts & ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK) {
+               dev_err(&GET_DEV(accel_dev),
+                       "RI Memory Parity fatal error: 0x%x\n",
+                       rimem_parerr_sts);
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+               reset_required = true;
+       }
+
+       ADF_CSR_WR(csr, ADF_GEN4_RIMEM_PARERR_STS, rimem_parerr_sts);
+
+       return reset_required;
+}
+
+static bool adf_handle_ti_ci_par_sts(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr, u32 errsou)
+{
+       u32 ti_ci_par_sts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+               return false;
+
+       ti_ci_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_CI_PAR_STS);
+       ti_ci_par_sts &= ADF_GEN4_TI_CI_PAR_STS_BITMASK;
+
+       if (ti_ci_par_sts) {
+               dev_err(&GET_DEV(accel_dev),
+                       "TI Memory Parity Error: 0x%x\n", ti_ci_par_sts);
+               ADF_CSR_WR(csr, ADF_GEN4_TI_CI_PAR_STS, ti_ci_par_sts);
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       }
+
+       return false;
+}
+
+static bool adf_handle_ti_pullfub_par_sts(struct adf_accel_dev *accel_dev,
+                                         void __iomem *csr, u32 errsou)
+{
+       u32 ti_pullfub_par_sts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+               return false;
+
+       ti_pullfub_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_PULL0FUB_PAR_STS);
+       ti_pullfub_par_sts &= ADF_GEN4_TI_PULL0FUB_PAR_STS_BITMASK;
+
+       if (ti_pullfub_par_sts) {
+               dev_err(&GET_DEV(accel_dev),
+                       "TI Pull Parity Error: 0x%x\n", ti_pullfub_par_sts);
+
+               ADF_CSR_WR(csr, ADF_GEN4_TI_PULL0FUB_PAR_STS,
+                          ti_pullfub_par_sts);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       }
+
+       return false;
+}
+
+static bool adf_handle_ti_pushfub_par_sts(struct adf_accel_dev *accel_dev,
+                                         void __iomem *csr, u32 errsou)
+{
+       u32 ti_pushfub_par_sts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+               return false;
+
+       ti_pushfub_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_PUSHFUB_PAR_STS);
+       ti_pushfub_par_sts &= ADF_GEN4_TI_PUSHFUB_PAR_STS_BITMASK;
+
+       if (ti_pushfub_par_sts) {
+               dev_err(&GET_DEV(accel_dev),
+                       "TI Push Parity Error: 0x%x\n", ti_pushfub_par_sts);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_TI_PUSHFUB_PAR_STS,
+                          ti_pushfub_par_sts);
+       }
+
+       return false;
+}
+
+static bool adf_handle_ti_cd_par_sts(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr, u32 errsou)
+{
+       u32 ti_cd_par_sts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+               return false;
+
+       ti_cd_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_CD_PAR_STS);
+       ti_cd_par_sts &= ADF_GEN4_TI_CD_PAR_STS_BITMASK;
+
+       if (ti_cd_par_sts) {
+               dev_err(&GET_DEV(accel_dev),
+                       "TI CD Parity Error: 0x%x\n", ti_cd_par_sts);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_TI_CD_PAR_STS, ti_cd_par_sts);
+       }
+
+       return false;
+}
+
+static bool adf_handle_ti_trnsb_par_sts(struct adf_accel_dev *accel_dev,
+                                       void __iomem *csr, u32 errsou)
+{
+       u32 ti_trnsb_par_sts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+               return false;
+
+       ti_trnsb_par_sts = ADF_CSR_RD(csr, ADF_GEN4_TI_TRNSB_PAR_STS);
+       ti_trnsb_par_sts &= ADF_GEN4_TI_TRNSB_PAR_STS_BITMASK;
+
+       if (ti_trnsb_par_sts) {
+               dev_err(&GET_DEV(accel_dev),
+                       "TI TRNSB Parity Error: 0x%x\n", ti_trnsb_par_sts);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_TI_TRNSB_PAR_STS, ti_trnsb_par_sts);
+       }
+
+       return false;
+}
+
+static bool adf_handle_iosfp_cmd_parerr(struct adf_accel_dev *accel_dev,
+                                       void __iomem *csr, u32 errsou)
+{
+       u32 rimiscsts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT))
+               return false;
+
+       rimiscsts = ADF_CSR_RD(csr, ADF_GEN4_RIMISCSTS);
+       rimiscsts &= ADF_GEN4_RIMISCSTS_BIT;
+
+       dev_err(&GET_DEV(accel_dev),
+               "Command Parity error detected on IOSFP: 0x%x\n",
+               rimiscsts);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+       ADF_CSR_WR(csr, ADF_GEN4_RIMISCSTS, rimiscsts);
+
+       return true;
+}
+
+static void adf_gen4_process_errsou1(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr, u32 errsou,
+                                    bool *reset_required)
+{
+       *reset_required |= adf_handle_cpp_aeunc(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_cppcmdparerr(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ri_mem_par_err(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ti_ci_par_sts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ti_pullfub_par_sts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ti_pushfub_par_sts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ti_cd_par_sts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ti_trnsb_par_sts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_iosfp_cmd_parerr(accel_dev, csr, errsou);
+}
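
Accumulating with |= rather than || matters here: bitwise OR never short-circuits, so every handler runs and clears its own status CSRs even after an earlier one has already requested a reset. A sketch with placeholder handlers:

bool handle_a(struct adf_accel_dev *dev);       /* placeholder */
bool handle_b(struct adf_accel_dev *dev);       /* placeholder */

static void process_sketch(struct adf_accel_dev *dev, bool *reset_required)
{
        /* every handler is evaluated; |= never short-circuits */
        *reset_required |= handle_a(dev);
        *reset_required |= handle_b(dev);

        /*
         * Logical OR would skip handle_b() once handle_a() returned
         * true, leaving handle_b()'s status CSRs unserviced:
         *
         *      *reset_required = handle_a(dev) || handle_b(dev);
         */
}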
+
+static bool adf_handle_uerrssmsh(struct adf_accel_dev *accel_dev,
+                                void __iomem *csr, u32 iastatssm)
+{
+       u32 reg;
+
+       if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_UERRSSMSH_BIT))
+               return false;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_UERRSSMSH);
+       reg &= ADF_GEN4_UERRSSMSH_BITMASK;
+
+       dev_err(&GET_DEV(accel_dev),
+               "Uncorrectable error on ssm shared memory: 0x%x\n",
+               reg);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+       ADF_CSR_WR(csr, ADF_GEN4_UERRSSMSH, reg);
+
+       return false;
+}
+
+static bool adf_handle_cerrssmsh(struct adf_accel_dev *accel_dev,
+                                void __iomem *csr, u32 iastatssm)
+{
+       u32 reg;
+
+       if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_CERRSSMSH_BIT))
+               return false;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_CERRSSMSH);
+       reg &= ADF_GEN4_CERRSSMSH_ERROR_BIT;
+
+       dev_warn(&GET_DEV(accel_dev),
+                "Correctable error on ssm shared memory: 0x%x\n",
+                reg);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+
+       ADF_CSR_WR(csr, ADF_GEN4_CERRSSMSH, reg);
+
+       return false;
+}
+
+static bool adf_handle_pperr_err(struct adf_accel_dev *accel_dev,
+                                void __iomem *csr, u32 iastatssm)
+{
+       u32 reg;
+
+       if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_PPERR_BIT))
+               return false;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_PPERR);
+       reg &= ADF_GEN4_PPERR_BITMASK;
+
+       dev_err(&GET_DEV(accel_dev),
+               "Uncorrectable CPP transaction error on memory target: 0x%x\n",
+               reg);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+       ADF_CSR_WR(csr, ADF_GEN4_PPERR, reg);
+
+       return false;
+}
+
+static void adf_poll_slicehang_csr(struct adf_accel_dev *accel_dev,
+                                  void __iomem *csr, u32 slice_hang_offset,
+                                  char *slice_name)
+{
+       u32 slice_hang_reg = ADF_CSR_RD(csr, slice_hang_offset);
+
+       if (!slice_hang_reg)
+               return;
+
+       dev_err(&GET_DEV(accel_dev),
+               "Slice %s hang error encountered\n", slice_name);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+}
+
+static bool adf_handle_slice_hang_error(struct adf_accel_dev *accel_dev,
+                                       void __iomem *csr, u32 iastatssm)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+
+       if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SLICEHANG_ERR_BIT))
+               return false;
+
+       adf_poll_slicehang_csr(accel_dev, csr,
+                              ADF_GEN4_SLICEHANGSTATUS_ATH_CPH, "ath_cph");
+       adf_poll_slicehang_csr(accel_dev, csr,
+                              ADF_GEN4_SLICEHANGSTATUS_CPR_XLT, "cpr_xlt");
+       adf_poll_slicehang_csr(accel_dev, csr,
+                              ADF_GEN4_SLICEHANGSTATUS_DCPR_UCS, "dcpr_ucs");
+       adf_poll_slicehang_csr(accel_dev, csr,
+                              ADF_GEN4_SLICEHANGSTATUS_PKE, "pke");
+
+       if (err_mask->parerr_wat_wcp_mask)
+               adf_poll_slicehang_csr(accel_dev, csr,
+                                      ADF_GEN4_SLICEHANGSTATUS_WAT_WCP,
+                                      "wat_wcp");
+
+       return false;
+}
+
+static bool adf_handle_spp_pullcmd_err(struct adf_accel_dev *accel_dev,
+                                      void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       bool reset_required = false;
+       u32 reg;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_ATH_CPH);
+       reg &= err_mask->parerr_ath_cph_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull command fatal error ATH_CPH: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_ATH_CPH, reg);
+
+               reset_required = true;
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_CPR_XLT);
+       reg &= err_mask->parerr_cpr_xlt_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull command fatal error CPR_XLT: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_CPR_XLT, reg);
+
+               reset_required = true;
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_DCPR_UCS);
+       reg &= err_mask->parerr_dcpr_ucs_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull command fatal error DCPR_UCS: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_DCPR_UCS, reg);
+
+               reset_required = true;
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_PKE);
+       reg &= err_mask->parerr_pke_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull command fatal error PKE: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_PKE, reg);
+
+               reset_required = true;
+       }
+
+       if (err_mask->parerr_wat_wcp_mask) {
+               reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLCMDPARERR_WAT_WCP);
+               reg &= err_mask->parerr_wat_wcp_mask;
+               if (reg) {
+                       dev_err(&GET_DEV(accel_dev),
+                               "SPP pull command fatal error WAT_WCP: 0x%x\n", reg);
+
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+                       ADF_CSR_WR(csr, ADF_GEN4_SPPPULLCMDPARERR_WAT_WCP, reg);
+
+                       reset_required = true;
+               }
+       }
+
+       return reset_required;
+}
+
+static bool adf_handle_spp_pulldata_err(struct adf_accel_dev *accel_dev,
+                                       void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       u32 reg;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_ATH_CPH);
+       reg &= err_mask->parerr_ath_cph_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull data err ATH_CPH: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_ATH_CPH, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_CPR_XLT);
+       reg &= err_mask->parerr_cpr_xlt_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull data err CPR_XLT: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_CPR_XLT, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_DCPR_UCS);
+       reg &= err_mask->parerr_dcpr_ucs_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull data err DCPR_UCS: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_DCPR_UCS, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_PKE);
+       reg &= err_mask->parerr_pke_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP pull data err PKE: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_PKE, reg);
+       }
+
+       if (err_mask->parerr_wat_wcp_mask) {
+               reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPULLDATAPARERR_WAT_WCP);
+               reg &= err_mask->parerr_wat_wcp_mask;
+               if (reg) {
+                       dev_err(&GET_DEV(accel_dev),
+                               "SPP pull data err WAT_WCP: 0x%x\n", reg);
+
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+                       ADF_CSR_WR(csr, ADF_GEN4_SPPPULLDATAPARERR_WAT_WCP, reg);
+               }
+       }
+
+       return false;
+}
+
+static bool adf_handle_spp_pushcmd_err(struct adf_accel_dev *accel_dev,
+                                      void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       bool reset_required = false;
+       u32 reg;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_ATH_CPH);
+       reg &= err_mask->parerr_ath_cph_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push command fatal error ATH_CPH: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_ATH_CPH, reg);
+
+               reset_required = true;
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_CPR_XLT);
+       reg &= err_mask->parerr_cpr_xlt_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push command fatal error CPR_XLT: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_CPR_XLT, reg);
+
+               reset_required = true;
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_DCPR_UCS);
+       reg &= err_mask->parerr_dcpr_ucs_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push command fatal error DCPR_UCS: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_DCPR_UCS, reg);
+
+               reset_required = true;
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_PKE);
+       reg &= err_mask->parerr_pke_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push command fatal error PKE: 0x%x\n",
+                       reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_PKE, reg);
+
+               reset_required = true;
+       }
+
+       if (err_mask->parerr_wat_wcp_mask) {
+               reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHCMDPARERR_WAT_WCP);
+               reg &= err_mask->parerr_wat_wcp_mask;
+               if (reg) {
+                       dev_err(&GET_DEV(accel_dev),
+                               "SPP push command fatal error WAT_WCP: 0x%x\n", reg);
+
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+                       ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHCMDPARERR_WAT_WCP, reg);
+
+                       reset_required = true;
+               }
+       }
+
+       return reset_required;
+}
+
+static bool adf_handle_spp_pushdata_err(struct adf_accel_dev *accel_dev,
+                                       void __iomem *csr)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       u32 reg;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_ATH_CPH);
+       reg &= err_mask->parerr_ath_cph_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push data err ATH_CPH: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_ATH_CPH, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_CPR_XLT);
+       reg &= err_mask->parerr_cpr_xlt_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push data err CPR_XLT: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_CPR_XLT, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_DCPR_UCS);
+       reg &= err_mask->parerr_dcpr_ucs_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push data err DCPR_UCS: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_DCPR_UCS, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_PKE);
+       reg &= err_mask->parerr_pke_mask;
+       if (reg) {
+               dev_err(&GET_DEV(accel_dev),
+                       "SPP push data err PKE: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+               ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_PKE, reg);
+       }
+
+       if (err_mask->parerr_wat_wcp_mask) {
+               reg = ADF_CSR_RD(csr, ADF_GEN4_SPPPUSHDATAPARERR_WAT_WCP);
+               reg &= err_mask->parerr_wat_wcp_mask;
+               if (reg) {
+                       dev_err(&GET_DEV(accel_dev),
+                               "SPP push data err WAT_WCP: 0x%x\n", reg);
+
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+                       ADF_CSR_WR(csr, ADF_GEN4_SPPPUSHDATAPARERR_WAT_WCP,
+                                  reg);
+               }
+       }
+
+       return false;
+}
+
+static bool adf_handle_spppar_err(struct adf_accel_dev *accel_dev,
+                                 void __iomem *csr, u32 iastatssm)
+{
+       bool reset_required;
+
+       if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SPPPARERR_BIT))
+               return false;
+
+       reset_required = adf_handle_spp_pullcmd_err(accel_dev, csr);
+       reset_required |= adf_handle_spp_pulldata_err(accel_dev, csr);
+       reset_required |= adf_handle_spp_pushcmd_err(accel_dev, csr);
+       reset_required |= adf_handle_spp_pushdata_err(accel_dev, csr);
+
+       return reset_required;
+}
+
+static bool adf_handle_ssmcpppar_err(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr, u32 iastatssm)
+{
+       u32 reg;
+       u32 bits_num = BITS_PER_REG(reg);
+       bool reset_required = false;
+       unsigned long errs_bits;
+       u32 bit_iterator;
+
+       if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SSMCPPERR_BIT))
+               return false;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SSMCPPERR);
+       reg &= ADF_GEN4_SSMCPPERR_FATAL_BITMASK | ADF_GEN4_SSMCPPERR_UNCERR_BITMASK;
+       if (reg & ADF_GEN4_SSMCPPERR_FATAL_BITMASK) {
+               dev_err(&GET_DEV(accel_dev),
+                       "Fatal SSM CPP parity error: 0x%x\n", reg);
+
+               errs_bits = reg & ADF_GEN4_SSMCPPERR_FATAL_BITMASK;
+               for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+               }
+               reset_required = true;
+       }
+
+       if (reg & ADF_GEN4_SSMCPPERR_UNCERR_BITMASK) {
+               dev_err(&GET_DEV(accel_dev),
+                       "non-fatal SSM CPP parity error: 0x%x\n", reg);
+               errs_bits = reg & ADF_GEN4_SSMCPPERR_UNCERR_BITMASK;
+
+               for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               }
+       }
+
+       ADF_CSR_WR(csr, ADF_GEN4_SSMCPPERR, reg);
+
+       return reset_required;
+}
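
Bumping the RAS counter once per set error bit, as above, is a straightforward use of for_each_set_bit(); a stand-alone sketch with a placeholder mask and counter:

#include <linux/atomic.h>
#include <linux/bitops.h>

#define FATAL_MASK      GENMASK(12, 0)  /* placeholder severity mask */

static void count_fatal_bits(u32 reg, atomic_t *fatal_counter)
{
        unsigned long errs = reg & FATAL_MASK;
        unsigned int bit;

        /* one RAS event per error bit set in reg */
        for_each_set_bit(bit, &errs, BITS_PER_TYPE(u32))
                atomic_inc(fatal_counter);
}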
+
+static bool adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
+                                  void __iomem *csr, u32 iastatssm)
+{
+       struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+       u32 reg;
+
+       if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT))
+               return false;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC);
+       reg &= ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT;
+       if (reg) {
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH);
+       reg &= err_mask->parerr_ath_cph_mask;
+       if (reg) {
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT);
+       reg &= err_mask->parerr_cpr_xlt_mask;
+       if (reg) {
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS);
+       reg &= err_mask->parerr_dcpr_ucs_mask;
+       if (reg) {
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS, reg);
+       }
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE);
+       reg &= err_mask->parerr_pke_mask;
+       if (reg) {
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE, reg);
+       }
+
+       if (err_mask->parerr_wat_wcp_mask) {
+               reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP);
+               reg &= err_mask->parerr_wat_wcp_mask;
+               if (reg) {
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+                       ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP,
+                                  reg);
+               }
+       }
+
+       dev_err(&GET_DEV(accel_dev), "Slice ssm soft parity error reported\n");
+
+       return false;
+}
+
+static bool adf_handle_ser_err_ssmsh(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr, u32 iastatssm)
+{
+       u32 reg;
+       u32 bits_num = BITS_PER_REG(reg);
+       bool reset_required = false;
+       unsigned long errs_bits;
+       u32 bit_iterator;
+
+       if (!(iastatssm & (ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_CERR_BIT |
+                        ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_UNCERR_BIT)))
+               return false;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_SER_ERR_SSMSH);
+       reg &= ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK |
+              ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK |
+              ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK;
+       if (reg & ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK) {
+               dev_err(&GET_DEV(accel_dev),
+                       "Fatal SER_SSMSH_ERR: 0x%x\n", reg);
+
+               errs_bits = reg & ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK;
+               for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+               }
+
+               reset_required = true;
+       }
+
+       if (reg & ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK) {
+               dev_err(&GET_DEV(accel_dev),
+                       "non-fatal SER_SSMSH_ERR: 0x%x\n", reg);
+
+               errs_bits = reg & ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK;
+               for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               }
+       }
+
+       if (reg & ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK) {
+               dev_warn(&GET_DEV(accel_dev),
+                        "Correctable SER_SSMSH_ERR: 0x%x\n", reg);
+
+               errs_bits = reg & ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK;
+               for_each_set_bit(bit_iterator, &errs_bits, bits_num) {
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+               }
+       }
+
+       ADF_CSR_WR(csr, ADF_GEN4_SER_ERR_SSMSH, reg);
+
+       return reset_required;
+}
+
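+/*
+ * Fan out IAINTSTATSSM to the per-source handlers above. All handlers
+ * run even once a reset has been requested, so every RAS counter is
+ * updated before the accumulated status is written back.
+ */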
+static bool adf_handle_iaintstatssm(struct adf_accel_dev *accel_dev,
+                                   void __iomem *csr)
+{
+       u32 iastatssm = ADF_CSR_RD(csr, ADF_GEN4_IAINTSTATSSM);
+       bool reset_required;
+
+       iastatssm &= ADF_GEN4_IAINTSTATSSM_BITMASK;
+       if (!iastatssm)
+               return false;
+
+       reset_required = adf_handle_uerrssmsh(accel_dev, csr, iastatssm);
+       reset_required |= adf_handle_cerrssmsh(accel_dev, csr, iastatssm);
+       reset_required |= adf_handle_pperr_err(accel_dev, csr, iastatssm);
+       reset_required |= adf_handle_slice_hang_error(accel_dev, csr, iastatssm);
+       reset_required |= adf_handle_spppar_err(accel_dev, csr, iastatssm);
+       reset_required |= adf_handle_ssmcpppar_err(accel_dev, csr, iastatssm);
+       reset_required |= adf_handle_rf_parr_err(accel_dev, csr, iastatssm);
+       reset_required |= adf_handle_ser_err_ssmsh(accel_dev, csr, iastatssm);
+
+       ADF_CSR_WR(csr, ADF_GEN4_IAINTSTATSSM, iastatssm);
+
+       return reset_required;
+}
+
+static bool adf_handle_exprpssmcmpr(struct adf_accel_dev *accel_dev,
+                                   void __iomem *csr)
+{
+       u32 reg = ADF_CSR_RD(csr, ADF_GEN4_EXPRPSSMCPR);
+
+       reg &= ADF_GEN4_EXPRPSSMCPR_UNCERR_BITMASK;
+       if (!reg)
+               return false;
+
+       dev_err(&GET_DEV(accel_dev),
+               "Uncorrectable error exception in SSM CMP: 0x%x\n", reg);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+       ADF_CSR_WR(csr, ADF_GEN4_EXPRPSSMCPR, reg);
+
+       return false;
+}
+
+static bool adf_handle_exprpssmxlt(struct adf_accel_dev *accel_dev,
+                                  void __iomem *csr)
+{
+       u32 reg = ADF_CSR_RD(csr, ADF_GEN4_EXPRPSSMXLT);
+
+       reg &= ADF_GEN4_EXPRPSSMXLT_UNCERR_BITMASK |
+              ADF_GEN4_EXPRPSSMXLT_CERR_BIT;
+       if (!reg)
+               return false;
+
+       if (reg & ADF_GEN4_EXPRPSSMXLT_UNCERR_BITMASK) {
+               dev_err(&GET_DEV(accel_dev),
+                       "Uncorrectable error exception in SSM XLT: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       }
+
+       if (reg & ADF_GEN4_EXPRPSSMXLT_CERR_BIT) {
+               dev_warn(&GET_DEV(accel_dev),
+                        "Correctable error exception in SSM XLT: 0x%x\n", reg);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+       }
+
+       ADF_CSR_WR(csr, ADF_GEN4_EXPRPSSMXLT, reg);
+
+       return false;
+}
+
+static bool adf_handle_exprpssmdcpr(struct adf_accel_dev *accel_dev,
+                                   void __iomem *csr)
+{
+       u32 reg;
+       int i;
+
+       for (i = 0; i < ADF_GEN4_DCPR_SLICES_NUM; i++) {
+               reg = ADF_CSR_RD(csr, ADF_GEN4_EXPRPSSMDCPR(i));
+               reg &= ADF_GEN4_EXPRPSSMDCPR_UNCERR_BITMASK |
+                      ADF_GEN4_EXPRPSSMDCPR_CERR_BITMASK;
+               if (!reg)
+                       continue;
+
+               if (reg & ADF_GEN4_EXPRPSSMDCPR_UNCERR_BITMASK) {
+                       dev_err(&GET_DEV(accel_dev),
+                               "Uncorrectable error exception in SSM DCMP: 0x%x\n", reg);
+
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+               }
+
+               if (reg & ADF_GEN4_EXPRPSSMDCPR_CERR_BITMASK) {
+                       dev_warn(&GET_DEV(accel_dev),
+                                "Correctable error exception in SSM DCMP: 0x%x\n", reg);
+
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+               }
+
+               ADF_CSR_WR(csr, ADF_GEN4_EXPRPSSMDCPR(i), reg);
+       }
+
+       return false;
+}
+
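+/*
+ * The EXPRPSSM* exception handlers above only bump RAS counters and
+ * always return false; whether a reset is required is decided solely
+ * by adf_handle_iaintstatssm().
+ */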
+static bool adf_handle_ssm(struct adf_accel_dev *accel_dev, void __iomem *csr,
+                          u32 errsou)
+{
+       bool reset_required;
+
+       if (!(errsou & ADF_GEN4_ERRSOU2_SSM_ERR_BIT))
+               return false;
+
+       reset_required = adf_handle_iaintstatssm(accel_dev, csr);
+       reset_required |= adf_handle_exprpssmcmpr(accel_dev, csr);
+       reset_required |= adf_handle_exprpssmxlt(accel_dev, csr);
+       reset_required |= adf_handle_exprpssmdcpr(accel_dev, csr);
+
+       return reset_required;
+}
+
+static bool adf_handle_cpp_cfc_err(struct adf_accel_dev *accel_dev,
+                                  void __iomem *csr, u32 errsou)
+{
+       bool reset_required = false;
+       u32 reg;
+
+       if (!(errsou & ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT))
+               return false;
+
+       reg = ADF_CSR_RD(csr, ADF_GEN4_CPP_CFC_ERR_STATUS);
+       if (reg & ADF_GEN4_CPP_CFC_ERR_STATUS_DATAPAR_BIT) {
+               dev_err(&GET_DEV(accel_dev),
+                       "CPP_CFC_ERR: data parity: 0x%x\n", reg);
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       }
+
+       if (reg & ADF_GEN4_CPP_CFC_ERR_STATUS_CMDPAR_BIT) {
+               dev_err(&GET_DEV(accel_dev),
+                       "CPP_CFC_ERR: command parity: 0x%x\n", reg);
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               reset_required = true;
+       }
+
+       if (reg & ADF_GEN4_CPP_CFC_ERR_STATUS_MERR_BIT) {
+               dev_err(&GET_DEV(accel_dev),
+                       "CPP_CFC_ERR: multiple errors: 0x%x\n", reg);
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               reset_required = true;
+       }
+
+       ADF_CSR_WR(csr, ADF_GEN4_CPP_CFC_ERR_STATUS_CLR,
+                  ADF_GEN4_CPP_CFC_ERR_STATUS_CLR_BITMASK);
+
+       return reset_required;
+}
+
+static void adf_gen4_process_errsou2(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr, u32 errsou,
+                                    bool *reset_required)
+{
+       *reset_required |= adf_handle_ssm(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_cpp_cfc_err(accel_dev, csr, errsou);
+}
+
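+/*
+ * A TI misc error is treated as fatal: the status register is logged
+ * but not cleared here, on the assumption that the device reset
+ * requested by returning true clears the condition.
+ */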
+static bool adf_handle_timiscsts(struct adf_accel_dev *accel_dev,
+                                void __iomem *csr, u32 errsou)
+{
+       u32 timiscsts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU3_TIMISCSTS_BIT))
+               return false;
+
+       timiscsts = ADF_CSR_RD(csr, ADF_GEN4_TIMISCSTS);
+
+       dev_err(&GET_DEV(accel_dev),
+               "Fatal error in Transmit Interface: 0x%x\n", timiscsts);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+       return true;
+}
+
+static bool adf_handle_ricppintsts(struct adf_accel_dev *accel_dev,
+                                  void __iomem *csr, u32 errsou)
+{
+       u32 ricppintsts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU3_RICPPINTSTS_BITMASK))
+               return false;
+
+       ricppintsts = ADF_CSR_RD(csr, ADF_GEN4_RICPPINTSTS);
+       ricppintsts &= ADF_GEN4_RICPPINTSTS_BITMASK;
+
+       dev_err(&GET_DEV(accel_dev),
+               "RI CPP Uncorrectable Error: 0x%x\n", ricppintsts);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+       ADF_CSR_WR(csr, ADF_GEN4_RICPPINTSTS, ricppintsts);
+
+       return false;
+}
+
+static bool adf_handle_ticppintsts(struct adf_accel_dev *accel_dev,
+                                  void __iomem *csr, u32 errsou)
+{
+       u32 ticppintsts;
+
+       if (!(errsou & ADF_GEN4_ERRSOU3_TICPPINTSTS_BITMASK))
+               return false;
+
+       ticppintsts = ADF_CSR_RD(csr, ADF_GEN4_TICPPINTSTS);
+       ticppintsts &= ADF_GEN4_TICPPINTSTS_BITMASK;
+
+       dev_err(&GET_DEV(accel_dev),
+               "TI CPP Uncorrectable Error: 0x%x\n", ticppintsts);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+       ADF_CSR_WR(csr, ADF_GEN4_TICPPINTSTS, ticppintsts);
+
+       return false;
+}
+
+static bool adf_handle_aramcerr(struct adf_accel_dev *accel_dev,
+                               void __iomem *csr, u32 errsou)
+{
+       u32 aram_cerr;
+
+       if (!(errsou & ADF_GEN4_ERRSOU3_REG_ARAMCERR_BIT))
+               return false;
+
+       aram_cerr = ADF_CSR_RD(csr, ADF_GEN4_REG_ARAMCERR);
+       aram_cerr &= ADF_GEN4_REG_ARAMCERR_BIT;
+
+       dev_warn(&GET_DEV(accel_dev),
+                "ARAM correctable error: 0x%x\n", aram_cerr);
+
+       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_CORR);
+
+       aram_cerr |= ADF_GEN4_REG_ARAMCERR_EN_BITMASK;
+
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMCERR, aram_cerr);
+
+       return false;
+}
+
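+/*
+ * The ARAM multi-error bit escalates an uncorrectable error to fatal,
+ * requesting a device reset. The enable bits are ORed back into the
+ * value written out so that clearing the status does not also clear
+ * the reporting enables.
+ */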
+static bool adf_handle_aramuerr(struct adf_accel_dev *accel_dev,
+                               void __iomem *csr, u32 errsou)
+{
+       bool reset_required = false;
+       u32 aramuerr;
+
+       if (!(errsou & ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT))
+               return false;
+
+       aramuerr = ADF_CSR_RD(csr, ADF_GEN4_REG_ARAMUERR);
+       aramuerr &= ADF_GEN4_REG_ARAMUERR_ERROR_BIT |
+                   ADF_GEN4_REG_ARAMUERR_MULTI_ERRORS_BIT;
+
+       if (!aramuerr)
+               return false;
+
+       if (aramuerr & ADF_GEN4_REG_ARAMUERR_MULTI_ERRORS_BIT) {
+               dev_err(&GET_DEV(accel_dev),
+                       "ARAM multiple uncorrectable errors: 0x%x\n", aramuerr);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               reset_required = true;
+       } else {
+               dev_err(&GET_DEV(accel_dev),
+                       "ARAM uncorrectable error: 0x%x\n", aramuerr);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       }
+
+       aramuerr |= ADF_GEN4_REG_ARAMUERR_EN_BITMASK;
+
+       ADF_CSR_WR(csr, ADF_GEN4_REG_ARAMUERR, aramuerr);
+
+       return reset_required;
+}
+
+static bool adf_handle_reg_cppmemtgterr(struct adf_accel_dev *accel_dev,
+                                       void __iomem *csr, u32 errsou)
+{
+       bool reset_required = false;
+       u32 cppmemtgterr;
+
+       if (!(errsou & ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT))
+               return false;
+
+       cppmemtgterr = ADF_CSR_RD(csr, ADF_GEN4_REG_CPPMEMTGTERR);
+       cppmemtgterr &= ADF_GEN4_REG_CPPMEMTGTERR_BITMASK |
+                       ADF_GEN4_REG_CPPMEMTGTERR_MULTI_ERRORS_BIT;
+       if (!cppmemtgterr)
+               return false;
+
+       if (cppmemtgterr & ADF_GEN4_REG_CPPMEMTGTERR_MULTI_ERRORS_BIT) {
+               dev_err(&GET_DEV(accel_dev),
+                       "Misc memory target multiple uncorrectable errors: 0x%x\n",
+                       cppmemtgterr);
+
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_FATAL);
+
+               reset_required = true;
+       } else {
+               dev_err(&GET_DEV(accel_dev),
+                       "Misc memory target uncorrectable error: 0x%x\n", cppmemtgterr);
+               ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       }
+
+       cppmemtgterr |= ADF_GEN4_REG_CPPMEMTGTERR_EN_BITMASK;
+
+       ADF_CSR_WR(csr, ADF_GEN4_REG_CPPMEMTGTERR, cppmemtgterr);
+
+       return reset_required;
+}
+
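+/*
+ * ATU faults are reported per ring pair: each bank's fault status is
+ * read and cleared individually, and every fault is counted as
+ * uncorrectable without requesting a reset.
+ */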
+static bool adf_handle_atufaultstatus(struct adf_accel_dev *accel_dev,
+                                     void __iomem *csr, u32 errsou)
+{
+       u32 i;
+       u32 max_rp_num = GET_HW_DATA(accel_dev)->num_banks;
+
+       if (!(errsou & ADF_GEN4_ERRSOU3_ATUFAULTSTATUS_BIT))
+               return false;
+
+       for (i = 0; i < max_rp_num; i++) {
+               u32 atufaultstatus = ADF_CSR_RD(csr, ADF_GEN4_ATUFAULTSTATUS(i));
+
+               atufaultstatus &= ADF_GEN4_ATUFAULTSTATUS_BIT;
+
+               if (atufaultstatus) {
+                       dev_err(&GET_DEV(accel_dev),
+                               "Ring Pair (%u) ATU detected fault: 0x%x\n", i,
+                               atufaultstatus);
+
+                       ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+
+                       ADF_CSR_WR(csr, ADF_GEN4_ATUFAULTSTATUS(i), atufaultstatus);
+               }
+       }
+
+       return false;
+}
+
+static void adf_gen4_process_errsou3(struct adf_accel_dev *accel_dev,
+                                    void __iomem *csr, void __iomem *aram_csr,
+                                    u32 errsou, bool *reset_required)
+{
+       *reset_required |= adf_handle_timiscsts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ricppintsts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_ticppintsts(accel_dev, csr, errsou);
+       *reset_required |= adf_handle_aramcerr(accel_dev, aram_csr, errsou);
+       *reset_required |= adf_handle_aramuerr(accel_dev, aram_csr, errsou);
+       *reset_required |= adf_handle_reg_cppmemtgterr(accel_dev, aram_csr, errsou);
+       *reset_required |= adf_handle_atufaultstatus(accel_dev, csr, errsou);
+}
+
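+/*
+ * Top-level RAS interrupt handler. ERRSOU0 reports correctable errors;
+ * ERRSOU1-3 report the uncorrectable and fatal sources dispatched by
+ * the adf_gen4_process_errsou*() helpers. Returns true if any known
+ * source was handled; *reset_required tells the caller whether a
+ * device reset is needed.
+ */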
+static bool adf_gen4_handle_interrupt(struct adf_accel_dev *accel_dev,
+                                     bool *reset_required)
+{
+       void __iomem *aram_csr = adf_get_aram_base(accel_dev);
+       void __iomem *csr = adf_get_pmisc_base(accel_dev);
+       u32 errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU0);
+       bool handled = false;
+
+       *reset_required = false;
+
+       if (errsou & ADF_GEN4_ERRSOU0_BIT) {
+               adf_gen4_process_errsou0(accel_dev, csr);
+               handled = true;
+       }
+
+       errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU1);
+       if (errsou & ADF_GEN4_ERRSOU1_BITMASK) {
+               adf_gen4_process_errsou1(accel_dev, csr, errsou, reset_required);
+               handled = true;
+       }
+
+       errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU2);
+       if (errsou & ADF_GEN4_ERRSOU2_BITMASK) {
+               adf_gen4_process_errsou2(accel_dev, csr, errsou, reset_required);
+               handled = true;
+       }
+
+       errsou = ADF_CSR_RD(csr, ADF_GEN4_ERRSOU3);
+       if (errsou & ADF_GEN4_ERRSOU3_BITMASK) {
+               adf_gen4_process_errsou3(accel_dev, csr, aram_csr, errsou, reset_required);
+               handled = true;
+       }
+
+       return handled;
+}
+
+void adf_gen4_init_ras_ops(struct adf_ras_ops *ras_ops)
+{
+       ras_ops->enable_ras_errors = adf_gen4_enable_ras;
+       ras_ops->disable_ras_errors = adf_gen4_disable_ras;
+       ras_ops->handle_interrupt = adf_gen4_handle_interrupt;
+}
+EXPORT_SYMBOL_GPL(adf_gen4_init_ras_ops);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.h b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.h
new file mode 100644 (file)
index 0000000..5335208
--- /dev/null
@@ -0,0 +1,825 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_GEN4_RAS_H_
+#define ADF_GEN4_RAS_H_
+
+#include <linux/bits.h>
+
+struct adf_ras_ops;
+
+/* ERRSOU0 Correctable error mask */
+#define ADF_GEN4_ERRSOU0_BIT                           BIT(0)
+
+/* HI AE Correctable error log */
+#define ADF_GEN4_HIAECORERRLOG_CPP0                    0x41A308
+
+/* HI AE Correctable error log enable */
+#define ADF_GEN4_HIAECORERRLOGENABLE_CPP0              0x41A318
+
+#define ADF_GEN4_ERRSOU1_HIAEUNCERRLOG_CPP0_BIT                BIT(0)
+#define ADF_GEN4_ERRSOU1_HICPPAGENTCMDPARERRLOG_BIT    BIT(1)
+#define ADF_GEN4_ERRSOU1_RIMEM_PARERR_STS_BIT          BIT(2)
+#define ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT          BIT(3)
+#define ADF_GEN4_ERRSOU1_RIMISCSTS_BIT                 BIT(4)
+
+#define ADF_GEN4_ERRSOU1_BITMASK ( \
+       (ADF_GEN4_ERRSOU1_HIAEUNCERRLOG_CPP0_BIT)       | \
+       (ADF_GEN4_ERRSOU1_HICPPAGENTCMDPARERRLOG_BIT)   | \
+       (ADF_GEN4_ERRSOU1_RIMEM_PARERR_STS_BIT) | \
+       (ADF_GEN4_ERRSOU1_TIMEM_PARERR_STS_BIT) | \
+       (ADF_GEN4_ERRSOU1_RIMISCSTS_BIT))
+
+/* HI AE Uncorrectable error log */
+#define ADF_GEN4_HIAEUNCERRLOG_CPP0                    0x41A300
+
+/* HI AE Uncorrectable error log enable */
+#define ADF_GEN4_HIAEUNCERRLOGENABLE_CPP0              0x41A320
+
+/* HI CPP Agent Command parity error log */
+#define ADF_GEN4_HICPPAGENTCMDPARERRLOG                        0x41A310
+
+/* HI CPP Agent Command parity error logging enable */
+#define ADF_GEN4_HICPPAGENTCMDPARERRLOGENABLE          0x41A314
+
+/* RI Memory parity error status register */
+#define ADF_GEN4_RIMEM_PARERR_STS                      0x41B128
+
+/* RI Memory parity error reporting enable */
+#define ADF_GEN4_RI_MEM_PAR_ERR_EN0                    0x41B12C
+
+/*
+ * RI Memory parity error mask
+ * BIT(0) - BIT(3) - ri_iosf_pdata_rxq[0:3] parity error
+ * BIT(4) - ri_tlq_phdr parity error
+ * BIT(5) - ri_tlq_pdata parity error
+ * BIT(6) - ri_tlq_nphdr parity error
+ * BIT(7) - ri_tlq_npdata parity error
+ * BIT(8) - BIT(9) - ri_tlq_cplhdr[0:1] parity error
+ * BIT(10) - BIT(17) - ri_tlq_cpldata[0:7] parity error
+ * BIT(18) - set this bit to 1 to enable logging status to ri_mem_par_err_sts0
+ * BIT(19) - ri_cds_cmd_fifo parity error
+ * BIT(20) - ri_obc_ricpl_fifo parity error
+ * BIT(21) - ri_obc_tiricpl_fifo parity error
+ * BIT(22) - ri_obc_cppcpl_fifo parity error
+ * BIT(23) - ri_obc_pendcpl_fifo parity error
+ * BIT(24) - ri_cpp_cmd_fifo parity error
+ * BIT(25) - ri_cds_ticmd_fifo parity error
+ * BIT(26) - riti_cmd_fifo parity error
+ * BIT(27) - ri_int_msixtbl parity error
+ * BIT(28) - ri_int_imstbl parity error
+ * BIT(30) - ri_kpt_fuses parity error
+ */
+#define ADF_GEN4_RIMEM_PARERR_STS_UNCERR_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(5) | \
+        BIT(7) | BIT(10) | BIT(11) | BIT(12) | BIT(13) | \
+        BIT(14) | BIT(15) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | \
+        BIT(20) | BIT(21) | BIT(22) | BIT(23) | BIT(24) | BIT(25) | \
+        BIT(26) | BIT(27) | BIT(28) | BIT(30))
+
+#define ADF_GEN4_RIMEM_PARERR_STS_FATAL_BITMASK \
+       (BIT(4) | BIT(6) | BIT(8) | BIT(9))
+
+/* TI CI parity status */
+#define ADF_GEN4_TI_CI_PAR_STS                         0x50060C
+
+/* TI CI parity reporting mask */
+#define ADF_GEN4_TI_CI_PAR_ERR_MASK                    0x500608
+
+/*
+ * TI CI parity status mask
+ * BIT(0) - CdCmdQ_sts parity error status
+ * BIT(1) - CdDataQ_sts parity error status
+ * BIT(3) - CPP_SkidQ_sts parity error status
+ * BIT(7) - CPP_SkidQ_sc_sts parity error status
+ */
+#define ADF_GEN4_TI_CI_PAR_STS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(3) | BIT(7))
+
+/* TI PULLFUB parity status */
+#define ADF_GEN4_TI_PULL0FUB_PAR_STS                   0x500618
+
+/* TI PULLFUB parity error reporting mask */
+#define ADF_GEN4_TI_PULL0FUB_PAR_ERR_MASK              0x500614
+
+/*
+ * TI PULLFUB parity status mask
+ * BIT(0) - TrnPullReqQ_sts parity status
+ * BIT(1) - TrnSharedDataQ_sts parity status
+ * BIT(2) - TrnPullReqDataQ_sts parity status
+ * BIT(4) - CPP_CiPullReqQ_sts parity status
+ * BIT(5) - CPP_TrnPullReqQ_sts parity status
+ * BIT(6) - CPP_PullidQ_sts parity status
+ * BIT(7) - CPP_WaitDataQ_sts parity status
+ * BIT(8) - CPP_CdDataQ_sts parity status
+ * BIT(9) - CPP_TrnDataQP0_sts parity status
+ * BIT(10) - BIT(11) - CPP_TrnDataQRF[00:01]_sts parity status
+ * BIT(12) - CPP_TrnDataQP1_sts parity status
+ * BIT(13) - BIT(14) - CPP_TrnDataQRF[10:11]_sts parity status
+ */
+#define ADF_GEN4_TI_PULL0FUB_PAR_STS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(4) | BIT(5) | BIT(6) | BIT(7) | \
+        BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(12) | BIT(13) | BIT(14))
+
+/* TI PUSHFUB parity status */
+#define ADF_GEN4_TI_PUSHFUB_PAR_STS                    0x500630
+
+/* TI PUSHFUB parity error reporting mask */
+#define ADF_GEN4_TI_PUSHFUB_PAR_ERR_MASK               0x50062C
+
+/*
+ * TI PUSHFUB parity status mask
+ * BIT(0) - SbPushReqQ_sts parity status
+ * BIT(1) - BIT(2) - SbPushDataQ[0:1]_sts parity status
+ * BIT(4) - CPP_CdPushReqQ_sts parity status
+ * BIT(5) - BIT(6) - CPP_CdPushDataQ[0:1]_sts parity status
+ * BIT(7) - CPP_SbPushReqQ_sts parity status
+ * BIT(8) - CPP_SbPushDataQP_sts parity status
+ * BIT(9) - BIT(10) - CPP_SbPushDataQRF[0:1]_sts parity status
+ */
+#define ADF_GEN4_TI_PUSHFUB_PAR_STS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(4) | BIT(5) | \
+        BIT(6) | BIT(7) | BIT(8) | BIT(9) | BIT(10))
+
+/* TI CD parity status */
+#define ADF_GEN4_TI_CD_PAR_STS                         0x50063C
+
+/* TI CD parity error mask */
+#define ADF_GEN4_TI_CD_PAR_ERR_MASK                    0x500638
+
+/*
+ * TI CD parity status mask
+ * BIT(0) - BIT(15) - CtxMdRam[0:15]_sts parity status
+ * BIT(16) - Leaf2ClusterRam_sts parity status
+ * BIT(17) - BIT(18) - Ring2LeafRam[0:1]_sts parity status
+ * BIT(19) - VirtualQ_sts parity status
+ * BIT(20) - DtRdQ_sts parity status
+ * BIT(21) - DtWrQ_sts parity status
+ * BIT(22) - RiCmdQ_sts parity status
+ * BIT(23) - BypassQ_sts parity status
+ * BIT(24) - DtRdQ_sc_sts parity status
+ * BIT(25) - DtWrQ_sc_sts parity status
+ */
+#define ADF_GEN4_TI_CD_PAR_STS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(6) | \
+        BIT(7) | BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(12) | BIT(13) | \
+        BIT(14) | BIT(15) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | BIT(20) | \
+        BIT(21) | BIT(22) | BIT(23) | BIT(24) | BIT(25))
+
+/* TI TRNSB parity status */
+#define ADF_GEN4_TI_TRNSB_PAR_STS                      0x500648
+
+/* TI TRNSB Parity error reporting mask */
+#define ADF_GEN4_TI_TRNSB_PAR_ERR_MASK                 0x500644
+
+/*
+ * TI TRNSB parity status mask
+ * BIT(0) - TrnPHdrQP_sts parity status
+ * BIT(1) - TrnPHdrQRF_sts parity status
+ * BIT(2) - TrnPDataQP_sts parity status
+ * BIT(3) - BIT(6) - TrnPDataQRF[0:3]_sts parity status
+ * BIT(7) - TrnNpHdrQP_sts parity status
+ * BIT(8) - BIT(9) - TrnNpHdrQRF[0:1]_sts parity status
+ * BIT(10) - TrnCplHdrQ_sts parity status
+ * BIT(11) - TrnPutObsReqQ_sts parity status
+ * BIT(12) - TrnPushReqQ_sts parity status
+ * BIT(13) - SbSplitIdRam_sts parity status
+ * BIT(14) - SbReqCountQ_sts parity status
+ * BIT(15) - SbCplTrkRam_sts parity status
+ * BIT(16) - SbGetObsReqQ_sts parity status
+ * BIT(17) - SbEpochIdQ_sts parity status
+ * BIT(18) - SbAtCplHdrQ_sts parity status
+ * BIT(19) - SbAtCplDataQ_sts parity status
+ * BIT(20) - SbReqCountRam_sts parity status
+ * BIT(21) - SbAtCplHdrQ_sc_sts parity status
+ */
+#define ADF_GEN4_TI_TRNSB_PAR_STS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(6) | \
+        BIT(7) | BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(12) | \
+        BIT(13) | BIT(14) | BIT(15) | BIT(16) | BIT(17) | BIT(18) | \
+        BIT(19) | BIT(20) | BIT(21))
+
+/* Status register to log misc error on RI */
+#define ADF_GEN4_RIMISCSTS                             0x41B1B8
+
+/* Status control register to log misc RI error */
+#define ADF_GEN4_RIMISCCTL                             0x41B1BC
+
+/*
+ * ERRSOU2 bit mask
+ * BIT(0) - SSM Interrupt Mask
+ * BIT(1) - CFC on CPP. ORed of CFC Push error and Pull error
+ * BIT(2) - BIT(4) - CPP attention interrupts, deprecated on gen4 devices
+ * BIT(18) - PM interrupt
+ */
+#define ADF_GEN4_ERRSOU2_SSM_ERR_BIT                   BIT(0)
+#define ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT        BIT(1)
+#define ADF_GEN4_ERRSOU2_CPP_CFC_ATT_INT_BITMASK \
+       (BIT(2) | BIT(3) | BIT(4))
+
+#define ADF_GEN4_ERRSOU2_PM_INT_BIT                    BIT(18)
+
+#define ADF_GEN4_ERRSOU2_BITMASK \
+       (ADF_GEN4_ERRSOU2_SSM_ERR_BIT | \
+        ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT)
+
+#define ADF_GEN4_ERRSOU2_DIS_BITMASK \
+       (ADF_GEN4_ERRSOU2_SSM_ERR_BIT | \
+        ADF_GEN4_ERRSOU2_CPP_CFC_ERR_STATUS_BIT | \
+        ADF_GEN4_ERRSOU2_CPP_CFC_ATT_INT_BITMASK)
+
+#define ADF_GEN4_IAINTSTATSSM                          0x28
+
+/* IAINTSTATSSM error bit mask definitions */
+#define ADF_GEN4_IAINTSTATSSM_UERRSSMSH_BIT            BIT(0)
+#define ADF_GEN4_IAINTSTATSSM_CERRSSMSH_BIT            BIT(1)
+#define ADF_GEN4_IAINTSTATSSM_PPERR_BIT                        BIT(2)
+#define ADF_GEN4_IAINTSTATSSM_SLICEHANG_ERR_BIT                BIT(3)
+#define ADF_GEN4_IAINTSTATSSM_SPPPARERR_BIT            BIT(4)
+#define ADF_GEN4_IAINTSTATSSM_SSMCPPERR_BIT            BIT(5)
+#define ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT   BIT(6)
+#define ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_CERR_BIT   BIT(7)
+#define ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_UNCERR_BIT BIT(8)
+
+#define ADF_GEN4_IAINTSTATSSM_BITMASK \
+       (ADF_GEN4_IAINTSTATSSM_UERRSSMSH_BIT | \
+        ADF_GEN4_IAINTSTATSSM_CERRSSMSH_BIT | \
+        ADF_GEN4_IAINTSTATSSM_PPERR_BIT | \
+        ADF_GEN4_IAINTSTATSSM_SLICEHANG_ERR_BIT | \
+        ADF_GEN4_IAINTSTATSSM_SPPPARERR_BIT | \
+        ADF_GEN4_IAINTSTATSSM_SSMCPPERR_BIT | \
+        ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT | \
+        ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_CERR_BIT | \
+        ADF_GEN4_IAINTSTATSSM_SER_ERR_SSMSH_UNCERR_BIT)
+
+#define ADF_GEN4_UERRSSMSH                             0x18
+
+/*
+ * UERRSSMSH error bit masks definitions
+ *
+ * BIT(0) - Indicates one uncorrectable error
+ * BIT(15) - Indicates multiple uncorrectable errors
+ *          in device shared memory
+ */
+#define ADF_GEN4_UERRSSMSH_BITMASK                     (BIT(0) | BIT(15))
+
+#define ADF_GEN4_UERRSSMSHAD                           0x1C
+
+#define ADF_GEN4_CERRSSMSH                             0x10
+
+/*
+ * CERRSSMSH error bit
+ * BIT(0) - Indicates one correctable error
+ */
+#define ADF_GEN4_CERRSSMSH_ERROR_BIT                   BIT(0)
+
+#define ADF_GEN4_CERRSSMSHAD                           0x14
+
+/* SSM error handling features enable register */
+#define ADF_GEN4_SSMFEATREN                            0x198
+
+/*
+ * Disable SSM error detection and reporting features
+ * enabled by device driver on RAS initialization
+ *
+ * The following bits should be cleared:
+ * BIT(4)  - Disable parity detection for CPP
+ * BIT(12) - Disable logging push/pull data error in pperr register.
+ * BIT(16) - BIT(23) - Disable parity for SPPs
+ * BIT(24) - BIT(27) - Disable parity for SPPs, if it's supported on the device.
+ */
+#define ADF_GEN4_SSMFEATREN_DIS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(5) | BIT(6) | BIT(7) | \
+        BIT(8) | BIT(9) | BIT(10) | BIT(11) | BIT(13) | BIT(14) | BIT(15))
+
+#define ADF_GEN4_INTMASKSSM                            0x0
+
+/*
+ * Error reporting mask in INTMASKSSM
+ * BIT(0) - Shared memory uncorrectable interrupt mask
+ * BIT(1) - Shared memory correctable interrupt mask
+ * BIT(2) - PPERR interrupt mask
+ * BIT(3) - CPP parity error Interrupt mask
+ * BIT(4) - SSM interrupt generated by SER correctable error mask
+ * BIT(5) - SSM interrupt generated by SER uncorrectable error
+ *         - not stop and scream - mask
+ */
+#define ADF_GEN4_INTMASKSSM_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5))
+
+/* CPP push or pull error */
+#define ADF_GEN4_PPERR                                 0x8
+
+#define ADF_GEN4_PPERR_BITMASK                         (BIT(0) | BIT(1))
+
+#define ADF_GEN4_PPERRID                               0xC
+
+/* Slice hang handling related registers */
+#define ADF_GEN4_SLICEHANGSTATUS_ATH_CPH               0x84
+#define ADF_GEN4_SLICEHANGSTATUS_CPR_XLT               0x88
+#define ADF_GEN4_SLICEHANGSTATUS_DCPR_UCS              0x90
+#define ADF_GEN4_SLICEHANGSTATUS_WAT_WCP               0x8C
+#define ADF_GEN4_SLICEHANGSTATUS_PKE                   0x94
+
+#define ADF_GEN4_SHINTMASKSSM_ATH_CPH                  0xF0
+#define ADF_GEN4_SHINTMASKSSM_CPR_XLT                  0xF4
+#define ADF_GEN4_SHINTMASKSSM_DCPR_UCS                 0xFC
+#define ADF_GEN4_SHINTMASKSSM_WAT_WCP                  0xF8
+#define ADF_GEN4_SHINTMASKSSM_PKE                      0x100
+
+/* SPP pull cmd parity err_*slice* CSR */
+#define ADF_GEN4_SPPPULLCMDPARERR_ATH_CPH              0x1A4
+#define ADF_GEN4_SPPPULLCMDPARERR_CPR_XLT              0x1A8
+#define ADF_GEN4_SPPPULLCMDPARERR_DCPR_UCS             0x1B0
+#define ADF_GEN4_SPPPULLCMDPARERR_PKE                  0x1B4
+#define ADF_GEN4_SPPPULLCMDPARERR_WAT_WCP              0x1AC
+
+/* SPP pull data parity err_*slice* CSR */
+#define ADF_GEN4_SPPPULLDATAPARERR_ATH_CPH             0x1BC
+#define ADF_GEN4_SPPPULLDATAPARERR_CPR_XLT             0x1C0
+#define ADF_GEN4_SPPPULLDATAPARERR_DCPR_UCS            0x1C8
+#define ADF_GEN4_SPPPULLDATAPARERR_PKE                 0x1CC
+#define ADF_GEN4_SPPPULLDATAPARERR_WAT_WCP             0x1C4
+
+/* SPP push cmd parity err_*slice* CSR */
+#define ADF_GEN4_SPPPUSHCMDPARERR_ATH_CPH              0x1D4
+#define ADF_GEN4_SPPPUSHCMDPARERR_CPR_XLT              0x1D8
+#define ADF_GEN4_SPPPUSHCMDPARERR_DCPR_UCS             0x1E0
+#define ADF_GEN4_SPPPUSHCMDPARERR_PKE                  0x1E4
+#define ADF_GEN4_SPPPUSHCMDPARERR_WAT_WCP              0x1DC
+
+/* SPP push data parity err_*slice* CSR */
+#define ADF_GEN4_SPPPUSHDATAPARERR_ATH_CPH             0x1EC
+#define ADF_GEN4_SPPPUSHDATAPARERR_CPR_XLT             0x1F0
+#define ADF_GEN4_SPPPUSHDATAPARERR_DCPR_UCS            0x1F8
+#define ADF_GEN4_SPPPUSHDATAPARERR_PKE                 0x1FC
+#define ADF_GEN4_SPPPUSHDATAPARERR_WAT_WCP             0x1F4
+
+/* Accelerator SPP parity error mask registers */
+#define ADF_GEN4_SPPPARERRMSK_ATH_CPH                  0x204
+#define ADF_GEN4_SPPPARERRMSK_CPR_XLT                  0x208
+#define ADF_GEN4_SPPPARERRMSK_DCPR_UCS                 0x210
+#define ADF_GEN4_SPPPARERRMSK_PKE                      0x214
+#define ADF_GEN4_SPPPARERRMSK_WAT_WCP                  0x20C
+
+#define ADF_GEN4_SSMCPPERR                             0x224
+
+/*
+ * Uncorrectable error mask in SSMCPPERR
+ * BIT(0) - indicates CPP command parity error
+ * BIT(1) - indicates CPP Main Push PPID parity error
+ * BIT(2) - indicates CPP Main ePPID parity error
+ * BIT(3) - indicates CPP Main push data parity error
+ * BIT(4) - indicates CPP Main Pull PPID parity error
+ * BIT(5) - indicates CPP target pull data parity error
+ */
+#define ADF_GEN4_SSMCPPERR_FATAL_BITMASK \
+       (BIT(0) | BIT(1) | BIT(4))
+
+#define ADF_GEN4_SSMCPPERR_UNCERR_BITMASK \
+       (BIT(2) | BIT(3) | BIT(5))
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_SRC                        0x9C
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_SRC            0xB8
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH            0xA0
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_ATH_CPH                0xBC
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT            0xA4
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_CPR_XLT                0xC0
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS           0xAC
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_DCPR_UCS       0xC8
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_PKE                        0xB0
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_PKE            0xCC
+
+#define ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP            0xA8
+#define ADF_GEN4_SSMSOFTERRORPARITYMASK_WAT_WCP                0xC4
+
+/* RF parity error detected in SharedRAM */
+#define ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT            BIT(0)
+
+#define ADF_GEN4_SER_ERR_SSMSH                         0x44C
+
+/*
+ * Fatal error mask in SER_ERR_SSMSH
+ * BIT(0) - Indicates an uncorrectable error has occurred in the
+ *          accelerator controller command RFs
+ * BIT(2) - Parity error occurred in the bank SPP fifos
+ * BIT(3) - Indicates Parity error occurred in following fifos in
+ *          the design
+ * BIT(4) - Parity error occurred in flops in the design
+ * BIT(5) - Uncorrectable error has occurred in the
+ *         target push and pull data register flop
+ * BIT(7) - Indicates Parity error occurred in the Resource Manager
+ *         pending lock request fifos
+ * BIT(8) - Indicates Parity error occurred in the Resource Manager
+ *         MECTX command queues logic
+ * BIT(9) - Indicates Parity error occurred in the Resource Manager
+ *         MECTX sigdone fifo flops
+ * BIT(10) - Indicates an uncorrectable error has occurred in the
+ *          Resource Manager MECTX command RFs
+ * BIT(14) - Parity error occurred in Buffer Manager sigdone FIFO
+ */
+#define ADF_GEN4_SER_ERR_SSMSH_FATAL_BITMASK \
+       (BIT(0) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(7) | \
+        BIT(8) | BIT(9) | BIT(10) | BIT(14))
+
+/*
+ * Uncorrectable error mask in SER_ERR_SSMSH
+ * BIT(12) - Parity error occurred in Buffer Manager pool 0
+ * BIT(13) - Parity error occurred in Buffer Manager pool 1
+ */
+#define ADF_GEN4_SER_ERR_SSMSH_UNCERR_BITMASK \
+       (BIT(12) | BIT(13))
+
+/*
+ * Correctable error mask in SER_ERR_SSMSH
+ * BIT(1) - Indicates a correctable Error has occurred
+ *         in the slice controller command RFs
+ * BIT(6) - Indicates a correctable Error has occurred in
+ *         the target push and pull data RFs
+ * BIT(11) - Indicates a correctable Error has occurred in
+ *          the Resource Manager MECTX command RFs
+ */
+#define ADF_GEN4_SER_ERR_SSMSH_CERR_BITMASK \
+       (BIT(1) | BIT(6) | BIT(11))
+
+/* SSM shared memory SER error reporting mask */
+#define ADF_GEN4_SER_EN_SSMSH                          0x450
+
+/*
+ * SSM SER error reporting mask in SER_en_err_ssmsh
+ * BIT(0) - Enables uncorrectable Error detection in:
+ *         1) slice controller command RFs.
+ *         2) target push/pull data registers
+ * BIT(1) - Enables correctable Error detection in:
+ *         1) slice controller command RFs
+ *         2) target push/pull data registers
+ * BIT(2) - Enables Parity error detection in
+ *         1) bank SPP fifos
+ *         2) gen4_pull_id_queue
+ *         3) gen4_push_id_queue
+ *         4) AE_pull_sigdn_fifo
+ *         5) DT_push_sigdn_fifo
+ *         6) slx_push_sigdn_fifo
+ *         7) secure_push_cmd_fifo
+ *         8) secure_pull_cmd_fifo
+ *         9) Head register in FIFO wrapper
+ *         10) current_cmd in individual push queue
+ *         11) current_cmd in individual pull queue
+ *         12) push_command_rxp arbitrated in ssm_push_cmd_queues
+ *         13) pull_command_rxp arbitrated in ssm_pull_cmd_queues
+ * BIT(3) - Enables uncorrectable Error detection in
+ *         the resource manager mectx cmd RFs.
+ * BIT(4) - Enables correctable error detection in the Resource Manager
+ *         mectx command RFs
+ * BIT(5) - Enables Parity error detection in
+ *         1) resource manager lock request fifo
+ *         2) mectx cmdqueues logic
+ *         3) mectx sigdone fifo
+ * BIT(6) - Enables Parity error detection in Buffer Manager pools
+ *         and sigdone fifo
+ */
+#define ADF_GEN4_SER_EN_SSMSH_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | BIT(5) | BIT(6))
+
+#define ADF_GEN4_CPP_CFC_ERR_STATUS                    0x640C04
+
+/*
+ * BIT(1) - Indicates multiple CPP CFC errors
+ * BIT(7) - Indicates CPP CFC command parity error type
+ * BIT(8) - Indicates CPP CFC data parity error type
+ */
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_MERR_BIT           BIT(1)
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_CMDPAR_BIT         BIT(7)
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_DATAPAR_BIT                BIT(8)
+
+/*
+ * BIT(0) - Enables CFC to detect and log push/pull data error
+ * BIT(1) - Enables CFC to generate interrupt to PCIEP for CPP error
+ * BIT(4) - When 1, parity detection is disabled
+ * BIT(5) - When 1, parity detection is disabled on the CPP command bus
+ * BIT(6) - When 1, parity detection is disabled on the CPP push/pull bus
+ * BIT(9) - When 1, RF parity error detection is disabled
+ */
+#define ADF_GEN4_CPP_CFC_ERR_CTRL_BITMASK              (BIT(0) | BIT(1))
+
+#define ADF_GEN4_CPP_CFC_ERR_CTRL_DIS_BITMASK \
+       (BIT(4) | BIT(5) | BIT(6) | BIT(9) | BIT(10))
+
+#define ADF_GEN4_CPP_CFC_ERR_CTRL                      0x640C00
+
+/*
+ * BIT(0) - Clears bit(0) of ADF_GEN4_CPP_CFC_ERR_STATUS
+ *         when an error is reported on CPP
+ * BIT(1) - Clears bit(1) of ADF_GEN4_CPP_CFC_ERR_STATUS
+ *         when multiple errors are reported on CPP
+ * BIT(2) - Clears bit(2) of ADF_GEN4_CPP_CFC_ERR_STATUS
+ *         when attention interrupt is reported
+ */
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_CLR_BITMASK (BIT(0) | BIT(1) | BIT(2))
+#define ADF_GEN4_CPP_CFC_ERR_STATUS_CLR                        0x640C08
+
+#define ADF_GEN4_CPP_CFC_ERR_PPID_LO                   0x640C0C
+#define ADF_GEN4_CPP_CFC_ERR_PPID_HI                   0x640C10
+
+/* Exception reporting in QAT SSM CMP */
+#define ADF_GEN4_EXPRPSSMCPR                           0x2000
+
+/*
+ * Uncorrectable error mask in EXPRPSSMCPR
+ * BIT(2) - Hard fatal error
+ * BIT(16) - Parity error detected in CPR Push FIFO
+ * BIT(17) - Parity error detected in CPR Pull FIFO
+ * BIT(18) - Parity error detected in CPR Hash Table
+ * BIT(19) - Parity error detected in CPR History Buffer Copy 0
+ * BIT(20) - Parity error detected in CPR History Buffer Copy 1
+ * BIT(21) - Parity error detected in CPR History Buffer Copy 2
+ * BIT(22) - Parity error detected in CPR History Buffer Copy 3
+ * BIT(23) - Parity error detected in CPR History Buffer Copy 4
+ * BIT(24) - Parity error detected in CPR History Buffer Copy 5
+ * BIT(25) - Parity error detected in CPR History Buffer Copy 6
+ * BIT(26) - Parity error detected in CPR History Buffer Copy 7
+ */
+#define ADF_GEN4_EXPRPSSMCPR_UNCERR_BITMASK \
+       (BIT(2) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | BIT(20) | \
+        BIT(21) | BIT(22) | BIT(23) | BIT(24) | BIT(25) | BIT(26))
+
+/* Exception reporting in QAT SSM XLT */
+#define ADF_GEN4_EXPRPSSMXLT                           0xA000
+
+/*
+ * Uncorrectable error mask in EXPRPSSMXLT
+ * BIT(2) - If set, an Uncorrectable Error event occurred
+ * BIT(16) - Parity error detected in XLT Push FIFO
+ * BIT(17) - Parity error detected in XLT Pull FIFO
+ * BIT(18) - Parity error detected in XLT HCTB0
+ * BIT(19) - Parity error detected in XLT HCTB1
+ * BIT(20) - Parity error detected in XLT HCTB2
+ * BIT(21) - Parity error detected in XLT HCTB3
+ * BIT(22) - Parity error detected in XLT CBCL
+ * BIT(23) - Parity error detected in XLT LITPTR
+ */
+#define ADF_GEN4_EXPRPSSMXLT_UNCERR_BITMASK \
+       (BIT(2) | BIT(16) | BIT(17) | BIT(18) | BIT(19) | BIT(20) | BIT(21) | \
+        BIT(22) | BIT(23))
+
+/*
+ * Correctable error mask in EXPRPSSMXLT
+ * BIT(3) - Correctable error event occurred.
+ */
+#define ADF_GEN4_EXPRPSSMXLT_CERR_BIT                  BIT(3)
+
+/* Exception reporting in QAT SSM DCMP */
+#define ADF_GEN4_EXPRPSSMDCPR(_n_) (0x12000 + (_n_) * 0x80)
+
+/*
+ * Uncorrectable error mask in EXPRPSSMDCPR
+ * BIT(2) - Even hard fatal error
+ * BIT(4) - Odd hard fatal error
+ * BIT(6) - Decode soft error
+ * BIT(16) - Parity error detected in CPR Push FIFO
+ * BIT(17) - Parity error detected in CPR Pull FIFO
+ * BIT(18) - Parity error detected in the Input Buffer
+ * BIT(19) - symbuf0parerr
+ *          Parity error detected in symbol buffer 0
+ * BIT(20) - symbuf1parerr
+ *          Parity error detected in symbol buffer 1
+ */
+#define ADF_GEN4_EXPRPSSMDCPR_UNCERR_BITMASK \
+       (BIT(2) | BIT(4) | BIT(6) | BIT(16) | BIT(17) | \
+        BIT(18) | BIT(19) | BIT(20))
+
+/*
+ * Correctable error mask in EXPRPSSMDCPR
+ * BIT(3) - Even ecc correctable error
+ * BIT(5) - Odd ecc correctable error
+ */
+#define ADF_GEN4_EXPRPSSMDCPR_CERR_BITMASK             (BIT(3) | BIT(5))
+
+#define ADF_GEN4_DCPR_SLICES_NUM                       3
+
+/*
+ * ERRSOU3 bit masks
+ * BIT(0) - indicates a Response Order Overflow and/or BME error
+ * BIT(1) - indicates RI push/pull error
+ * BIT(2) - indicates TI push/pull error
+ * BIT(3) - indicates ARAM correctable error
+ * BIT(4) - indicates ARAM uncorrectable error
+ * BIT(5) - indicates TI pull parity error
+ * BIT(6) - indicates RI push parity error
+ * BIT(7) - indicates VFLR interrupt
+ * BIT(8) - indicates ring pair interrupts for ATU detected fault
+ * BIT(9) - indicates error when accessing RLT block
+ */
+#define ADF_GEN4_ERRSOU3_TIMISCSTS_BIT                 BIT(0)
+#define ADF_GEN4_ERRSOU3_RICPPINTSTS_BITMASK           (BIT(1) | BIT(6))
+#define ADF_GEN4_ERRSOU3_TICPPINTSTS_BITMASK           (BIT(2) | BIT(5))
+#define ADF_GEN4_ERRSOU3_REG_ARAMCERR_BIT              BIT(3)
+#define ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT              BIT(4)
+#define ADF_GEN4_ERRSOU3_VFLRNOTIFY_BIT                        BIT(7)
+#define ADF_GEN4_ERRSOU3_ATUFAULTSTATUS_BIT            BIT(8)
+#define ADF_GEN4_ERRSOU3_RLTERROR_BIT                  BIT(9)
+
+#define ADF_GEN4_ERRSOU3_BITMASK ( \
+       (ADF_GEN4_ERRSOU3_TIMISCSTS_BIT) | \
+       (ADF_GEN4_ERRSOU3_RICPPINTSTS_BITMASK) | \
+       (ADF_GEN4_ERRSOU3_TICPPINTSTS_BITMASK) | \
+       (ADF_GEN4_ERRSOU3_REG_ARAMCERR_BIT) | \
+       (ADF_GEN4_ERRSOU3_REG_ARAMUERR_BIT) | \
+       (ADF_GEN4_ERRSOU3_VFLRNOTIFY_BIT) | \
+       (ADF_GEN4_ERRSOU3_ATUFAULTSTATUS_BIT) | \
+       (ADF_GEN4_ERRSOU3_RLTERROR_BIT))
+
+/* TI Misc status register */
+#define ADF_GEN4_TIMISCSTS                             0x50054C
+
+/* TI Misc error reporting mask */
+#define ADF_GEN4_TIMISCCTL                             0x500548
+
+/*
+ * TI Misc error reporting control mask
+ * BIT(0) - Enables error detection and logging in TIMISCSTS register
+ * BIT(1) - Takes effect only when SRIOV is enabled; this bit is 0 by default
+ * BIT(2) - Enables the D-F-x counter within the dispatch arbiter
+ *         to start based on the command triggered from
+ * BIT(30) - Disables VFLR functionality
+ *          Setting this bit reverts the device to CPM1.x functionality
+ * The values of bits 1, 2 and 30 should be preserved and are not meant
+ * to be changed within RAS.
+ */
+#define ADF_GEN4_TIMISCCTL_BIT                         BIT(0)
+#define ADF_GEN4_TIMSCCTL_RELAY_BITMASK (BIT(1) | BIT(2) | BIT(30))
+
+/* RI CPP interface status register */
+#define ADF_GEN4_RICPPINTSTS                           0x41A330
+
+/*
+ * Uncorrectable error mask in RICPPINTSTS register
+ * BIT(0) - RI asserted the CPP error signal during a push
+ * BIT(1) - RI detected the CPP error signal asserted during a pull
+ * BIT(2) - RI detected a push data parity error
+ * BIT(3) - RI detected a push valid parity error
+ */
+#define ADF_GEN4_RICPPINTSTS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3))
+
+/* RI CPP interface status register control */
+#define ADF_GEN4_RICPPINTCTL                           0x41A32C
+
+/*
+ * Control bit mask for RICPPINTCTL register
+ * BIT(0) - value of 1 enables error detection and reporting
+ *         on the RI CPP Push interface
+ * BIT(1) - value of 1 enables error detection and reporting
+ *         on the RI CPP Pull interface
+ * BIT(2) - value of 1 enables error detection and reporting
+ *         on the RI Parity
+ * BIT(3) - value of 1 enable checking parity on CPP
+ * BIT(4) - value of 1 enables the stop feature of the stop and stream
+ *         for all RI CPP Command RFs
+ */
+#define ADF_GEN4_RICPPINTCTL_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4))
+
+/* Push ID of the command which triggered the transaction error on RI */
+#define ADF_GEN4_RIERRPUSHID                           0x41A334
+
+/* Pull ID of the command which triggered the transaction error on RI */
+#define ADF_GEN4_RIERRPULLID                           0x41A338
+
+/* TI CPP interface status register */
+#define ADF_GEN4_TICPPINTSTS                           0x50053C
+
+/*
+ * Uncorrectable error mask in TICPPINTSTS register
+ * BIT(0) - value of 1 indicates that the TI asserted
+ *         the CPP error signal during a push
+ * BIT(1) - value of 1 indicates that the TI detected
+ *         the CPP error signal asserted during a pull
+ * BIT(2) - value of 1 indicates that the TI detected
+ *         a pull data parity error
+ */
+#define ADF_GEN4_TICPPINTSTS_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2))
+
+/* TI CPP interface status register control */
+#define ADF_GEN4_TICPPINTCTL                           0x500538
+
+/*
+ * Control bit mask for TICPPINTCTL register
+ * BIT(0) - value of 1 enables error detection and reporting on
+ *         the TI CPP Push interface
+ * BIT(1) - value of 1 enables error detection and reporting on
+ *         the TI CPP Pull interface
+ * BIT(2) - value of 1 enables parity error detection and logging on
+ *         the TI CPP Pull interface
+ * BIT(3) - value of 1 enables CPP CMD and Pull Data parity checking
+ * BIT(4) - value of 1 enables TI stop part of stop and scream mode on
+ *         CPP/RF Parity error
+ */
+#define ADF_GEN4_TICPPINTCTL_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4))
+
+/* Push ID of the command which triggered the transaction error on TI */
+#define ADF_GEN4_TIERRPUSHID                           0x500540
+
+/* Pull ID of the command which triggered the transaction error on TI */
+#define ADF_GEN4_TIERRPULLID                           0x500544
+
+/* Correctable error in ARAM agent register */
+#define ADF_GEN4_REG_ARAMCERR                          0x1700
+
+#define ADF_GEN4_REG_ARAMCERR_BIT                      BIT(0)
+
+/*
+ * Correctable error enablement in ARAM bit mask
+ * BIT(3) - enable ARAM RAM to fix and log correctable error
+ * BIT(26) - enables ARAM agent to generate interrupt for correctable error
+ */
+#define ADF_GEN4_REG_ARAMCERR_EN_BITMASK               (BIT(3) | BIT(26))
+
+/* Correctable error address in ARAM agent register */
+#define ADF_GEN4_REG_ARAMCERRAD                                0x1708
+
+/* Uncorrectable error in ARAM agent register */
+#define ADF_GEN4_REG_ARAMUERR                          0x1704
+
+/*
+ * ARAM error bit mask
+ * BIT(0) - indicates error logged in ARAMCERR or ARAMUCERR
+ * BIT(18) - indicates uncorrectable multiple errors in ARAM agent
+ */
+#define ADF_GEN4_REG_ARAMUERR_ERROR_BIT                        BIT(0)
+#define ADF_GEN4_REG_ARAMUERR_MULTI_ERRORS_BIT         BIT(18)
+
+/*
+ * Uncorrectable error enablement in ARAM bit mask
+ * BIT(3) - enable ARAM RAM to fix and log uncorrectable error
+ * BIT(19) - enables ARAM agent to generate interrupt for uncorrectable error
+ */
+#define ADF_GEN4_REG_ARAMUERR_EN_BITMASK               (BIT(3) | BIT(19))
+
+/* Uncorrectable error address in ARAM agent register */
+#define ADF_GEN4_REG_ARAMUERRAD                                0x170C
+
+/* Uncorrectable error transaction push/pull ID registers */
+#define ADF_GEN4_REG_ERRPPID_LO                                0x1714
+#define ADF_GEN4_REG_ERRPPID_HI                                0x1718
+
+/* ARAM ECC block error enablement */
+#define ADF_GEN4_REG_ARAMCERRUERR_EN                   0x1808
+
+/*
+ * ARAM ECC block error control bit masks
+ * BIT(0) - enable ARAM CD ECC block error detecting
+ * BIT(1) - enable ARAM pull request ECC error detecting
+ * BIT(2) - enable ARAM command dispatch ECC error detecting
+ * BIT(3) - enable ARAM read datapath push ECC error detecting
+ * BIT(4) - enable ARAM read datapath pull ECC error detecting
+ * BIT(5) - enable ARAM RMW ECC error detecting
+ * BIT(6) - enable ARAM write datapath RMW ECC error detecting
+ * BIT(7) - enable ARAM write datapath ECC error detecting
+ */
+#define ADF_GEN4_REG_ARAMCERRUERR_EN_BITMASK \
+       (BIT(0) | BIT(1) | BIT(2) | BIT(3) | BIT(4) | \
+        BIT(5) | BIT(6) | BIT(7))
+
+/* ARAM misc memory target error registers */
+#define ADF_GEN4_REG_CPPMEMTGTERR                      0x1710
+
+/*
+ * ARAM misc memory target error bit masks
+ * BIT(0) - indicates an error in ARAM target memory
+ * BIT(1) - indicates multiple errors in ARAM target memory
+ * BIT(4) - indicates pull error in ARAM target memory
+ * BIT(5) - indicates parity pull error in ARAM target memory
+ * BIT(6) - indicates push error in ARAM target memory
+ */
+#define ADF_GEN4_REG_CPPMEMTGTERR_BITMASK \
+       (BIT(0) | BIT(4) | BIT(5) | BIT(6))
+
+#define ADF_GEN4_REG_CPPMEMTGTERR_MULTI_ERRORS_BIT     BIT(1)
+
+/*
+ * ARAM misc memory target error enablement mask
+ * BIT(2) - enables CPP memory to detect and log push/pull data error
+ * BIT(7) - enables push/pull error to generate interrupts to RI
+ * BIT(8) - enables ARAM to check parity on pull data and CPP command buses
+ * BIT(9) - enables ARAM to autopush to AE when push/parity error is detected
+ *         on lookaside DT
+ */
+#define ADF_GEN4_REG_CPPMEMTGTERR_EN_BITMASK \
+       (BIT(2) | BIT(7) | BIT(8) | BIT(9))
+
+/* ATU fault status register */
+#define ADF_GEN4_ATUFAULTSTATUS(i)                     (0x506000 + ((i) * 0x4))
+
+#define ADF_GEN4_ATUFAULTSTATUS_BIT                    BIT(0)
+
+/* Command Parity error detected on IOSFP Command to QAT */
+#define ADF_GEN4_RIMISCSTS_BIT                         BIT(0)
+
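+/*
+ * Populate the GEN4 RAS callbacks. A minimal wiring sketch, assuming a
+ * device-specific hw_data init function:
+ *
+ *     adf_gen4_init_ras_ops(&hw_data->ras_ops);
+ *
+ * after which adf_dev_init()/adf_dev_shutdown() invoke the enable and
+ * disable callbacks and the AE ISR calls handle_interrupt.
+ */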
+void adf_gen4_init_ras_ops(struct adf_ras_ops *ras_ops);
+
+#endif /* ADF_GEN4_RAS_H_ */
index 646c57922fcda5c1db50d4c0cb416936bd2ec7dd..35ccb91d6ec1b9060d368bc71a93e68bed77217c 100644 (file)
@@ -9,6 +9,7 @@
 #include <linux/slab.h>
 #include <linux/workqueue.h>
 
+#include "adf_admin.h"
 #include "adf_accel_devices.h"
 #include "adf_common_drv.h"
 #include "adf_gen4_timer.h"
index beef9a5f6c75c0868d9b4be0f69c572a199e068a..13f48d2f6da88e09034c0a33f4bb76f2fd29ed22 100644 (file)
@@ -12,6 +12,7 @@
 #include <linux/types.h>
 #include <asm/errno.h>
 #include "adf_accel_devices.h"
+#include "adf_admin.h"
 #include "adf_cfg.h"
 #include "adf_cfg_strings.h"
 #include "adf_clock.h"
index 803cbfd838f0a1333e639d642653b034f81b0034..2661af6a2ef697c7e7d4fe52fe7f96dd54195d21 100644 (file)
@@ -8,6 +8,7 @@
 #include <linux/kernel.h>
 #include <linux/kstrtox.h>
 #include <linux/types.h>
+#include "adf_admin.h"
 #include "adf_cfg.h"
 #include "adf_common_drv.h"
 #include "adf_heartbeat.h"
index 89001fe92e7629b95a92064abb1e467203bdcc63..81c39f3d07e1c4f58cf80e4aef89661a9b0698dc 100644 (file)
@@ -9,6 +9,8 @@
 #include "adf_common_drv.h"
 #include "adf_dbgfs.h"
 #include "adf_heartbeat.h"
+#include "adf_rl.h"
+#include "adf_sysfs_ras_counters.h"
 
 static LIST_HEAD(service_table);
 static DEFINE_MUTEX(service_lock);
@@ -61,7 +63,6 @@ int adf_service_unregister(struct service_hndl *service)
 static int adf_dev_init(struct adf_accel_dev *accel_dev)
 {
        struct service_hndl *service;
-       struct list_head *list_itr;
        struct adf_hw_device_data *hw_data = accel_dev->hw_device;
        int ret;
 
@@ -97,6 +98,9 @@ static int adf_dev_init(struct adf_accel_dev *accel_dev)
                return -EFAULT;
        }
 
+       if (hw_data->get_ring_to_svc_map)
+               hw_data->ring_to_svc_map = hw_data->get_ring_to_svc_map(accel_dev);
+
        if (adf_ae_init(accel_dev)) {
                dev_err(&GET_DEV(accel_dev),
                        "Failed to initialise Acceleration Engine\n");
@@ -117,6 +121,9 @@ static int adf_dev_init(struct adf_accel_dev *accel_dev)
        }
        set_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status);
 
+       if (hw_data->ras_ops.enable_ras_errors)
+               hw_data->ras_ops.enable_ras_errors(accel_dev);
+
        hw_data->enable_ints(accel_dev);
        hw_data->enable_error_correction(accel_dev);
 
@@ -131,14 +138,16 @@ static int adf_dev_init(struct adf_accel_dev *accel_dev)
        }
 
        adf_heartbeat_init(accel_dev);
+       ret = adf_rl_init(accel_dev);
+       if (ret && ret != -EOPNOTSUPP)
+               return ret;
 
        /*
         * Subservice initialisation is divided into two stages: init and start.
         * This is to facilitate any ordering dependencies between services
         * prior to starting any of the accelerators.
         */
-       list_for_each(list_itr, &service_table) {
-               service = list_entry(list_itr, struct service_hndl, list);
+       list_for_each_entry(service, &service_table, list) {
                if (service->event_hld(accel_dev, ADF_EVENT_INIT)) {
                        dev_err(&GET_DEV(accel_dev),
                                "Failed to initialise service %s\n",
@@ -165,7 +174,6 @@ static int adf_dev_start(struct adf_accel_dev *accel_dev)
 {
        struct adf_hw_device_data *hw_data = accel_dev->hw_device;
        struct service_hndl *service;
-       struct list_head *list_itr;
        int ret;
 
        set_bit(ADF_STATUS_STARTING, &accel_dev->status);
@@ -208,9 +216,11 @@ static int adf_dev_start(struct adf_accel_dev *accel_dev)
        }
 
        adf_heartbeat_start(accel_dev);
+       ret = adf_rl_start(accel_dev);
+       if (ret && ret != -EOPNOTSUPP)
+               return ret;
 
-       list_for_each(list_itr, &service_table) {
-               service = list_entry(list_itr, struct service_hndl, list);
+       list_for_each_entry(service, &service_table, list) {
                if (service->event_hld(accel_dev, ADF_EVENT_START)) {
                        dev_err(&GET_DEV(accel_dev),
                                "Failed to start service %s\n",
@@ -231,6 +241,7 @@ static int adf_dev_start(struct adf_accel_dev *accel_dev)
                clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
                return -EFAULT;
        }
+       set_bit(ADF_STATUS_CRYPTO_ALGS_REGISTERED, &accel_dev->status);
 
        if (!list_empty(&accel_dev->compression_list) && qat_comp_algs_register()) {
                dev_err(&GET_DEV(accel_dev),
@@ -239,8 +250,10 @@ static int adf_dev_start(struct adf_accel_dev *accel_dev)
                clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
                return -EFAULT;
        }
+       set_bit(ADF_STATUS_COMP_ALGS_REGISTERED, &accel_dev->status);
 
        adf_dbgfs_add(accel_dev);
+       adf_sysfs_start_ras(accel_dev);
 
        return 0;
 }
@@ -259,7 +272,6 @@ static void adf_dev_stop(struct adf_accel_dev *accel_dev)
 {
        struct adf_hw_device_data *hw_data = accel_dev->hw_device;
        struct service_hndl *service;
-       struct list_head *list_itr;
        bool wait = false;
        int ret;
 
@@ -267,21 +279,26 @@ static void adf_dev_stop(struct adf_accel_dev *accel_dev)
            !test_bit(ADF_STATUS_STARTING, &accel_dev->status))
                return;
 
+       adf_rl_stop(accel_dev);
        adf_dbgfs_rm(accel_dev);
+       adf_sysfs_stop_ras(accel_dev);
 
        clear_bit(ADF_STATUS_STARTING, &accel_dev->status);
        clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
 
-       if (!list_empty(&accel_dev->crypto_list)) {
+       if (!list_empty(&accel_dev->crypto_list) &&
+           test_bit(ADF_STATUS_CRYPTO_ALGS_REGISTERED, &accel_dev->status)) {
                qat_algs_unregister();
                qat_asym_algs_unregister();
        }
+       clear_bit(ADF_STATUS_CRYPTO_ALGS_REGISTERED, &accel_dev->status);
 
-       if (!list_empty(&accel_dev->compression_list))
+       if (!list_empty(&accel_dev->compression_list) &&
+           test_bit(ADF_STATUS_COMP_ALGS_REGISTERED, &accel_dev->status))
                qat_comp_algs_unregister();
+       clear_bit(ADF_STATUS_COMP_ALGS_REGISTERED, &accel_dev->status);
 
-       list_for_each(list_itr, &service_table) {
-               service = list_entry(list_itr, struct service_hndl, list);
+       list_for_each_entry(service, &service_table, list) {
                if (!test_bit(accel_dev->accel_id, service->start_status))
                        continue;
                ret = service->event_hld(accel_dev, ADF_EVENT_STOP);
@@ -318,7 +335,6 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
 {
        struct adf_hw_device_data *hw_data = accel_dev->hw_device;
        struct service_hndl *service;
-       struct list_head *list_itr;
 
        if (!hw_data) {
                dev_err(&GET_DEV(accel_dev),
@@ -340,8 +356,7 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
                                  &accel_dev->status);
        }
 
-       list_for_each(list_itr, &service_table) {
-               service = list_entry(list_itr, struct service_hndl, list);
+       list_for_each_entry(service, &service_table, list) {
                if (!test_bit(accel_dev->accel_id, service->init_status))
                        continue;
                if (service->event_hld(accel_dev, ADF_EVENT_SHUTDOWN))
@@ -352,6 +367,11 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
                        clear_bit(accel_dev->accel_id, service->init_status);
        }
 
+       adf_rl_exit(accel_dev);
+
+       if (hw_data->ras_ops.disable_ras_errors)
+               hw_data->ras_ops.disable_ras_errors(accel_dev);
+
        adf_heartbeat_shutdown(accel_dev);
 
        hw_data->disable_iov(accel_dev);
@@ -378,10 +398,8 @@ static void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
 int adf_dev_restarting_notify(struct adf_accel_dev *accel_dev)
 {
        struct service_hndl *service;
-       struct list_head *list_itr;
 
-       list_for_each(list_itr, &service_table) {
-               service = list_entry(list_itr, struct service_hndl, list);
+       list_for_each_entry(service, &service_table, list) {
                if (service->event_hld(accel_dev, ADF_EVENT_RESTARTING))
                        dev_err(&GET_DEV(accel_dev),
                                "Failed to restart service %s.\n",
@@ -393,10 +411,8 @@ int adf_dev_restarting_notify(struct adf_accel_dev *accel_dev)
 int adf_dev_restarted_notify(struct adf_accel_dev *accel_dev)
 {
        struct service_hndl *service;
-       struct list_head *list_itr;
 
-       list_for_each(list_itr, &service_table) {
-               service = list_entry(list_itr, struct service_hndl, list);
+       list_for_each_entry(service, &service_table, list) {
                if (service->event_hld(accel_dev, ADF_EVENT_RESTARTED))
                        dev_err(&GET_DEV(accel_dev),
                                "Failed to restart service %s.\n",
@@ -440,13 +456,6 @@ int adf_dev_down(struct adf_accel_dev *accel_dev, bool reconfig)
 
        mutex_lock(&accel_dev->state_lock);
 
-       if (!adf_dev_started(accel_dev)) {
-               dev_info(&GET_DEV(accel_dev), "Device qat_dev%d already down\n",
-                        accel_dev->accel_id);
-               ret = -EINVAL;
-               goto out;
-       }
-
        if (reconfig) {
                ret = adf_dev_shutdown_cache_cfg(accel_dev);
                goto out;
index 2aba194a7c292244b1e34503748851f53c3e16da..3557a0d6dea289fa6a2ef1a2b1369c48dbcdb599 100644 (file)
@@ -132,6 +132,21 @@ static bool adf_handle_pm_int(struct adf_accel_dev *accel_dev)
        return false;
 }
 
+static bool adf_handle_ras_int(struct adf_accel_dev *accel_dev)
+{
+       struct adf_ras_ops *ras_ops = &accel_dev->hw_device->ras_ops;
+       bool reset_required;
+
+       if (ras_ops->handle_interrupt &&
+           ras_ops->handle_interrupt(accel_dev, &reset_required)) {
+               if (reset_required)
+                       dev_err(&GET_DEV(accel_dev), "Fatal error, reset required\n");
+               return true;
+       }
+
+       return false;
+}
+
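
For context, a per-generation handle_interrupt implementation plugged into adf_ras_ops might take the following shape. This is only a sketch: MY_ERRSOU_OFFSET and MY_FATAL_MASK are hypothetical placeholders, not the CSR names used by the real implementations added elsewhere in this series.

    /* Sketch of a RAS interrupt callback; register names are placeholders. */
    static bool my_handle_ras_interrupt(struct adf_accel_dev *accel_dev,
                                        bool *reset_required)
    {
            void __iomem *csr = adf_get_pmisc_base(accel_dev);
            u32 errsou = ADF_CSR_RD(csr, MY_ERRSOU_OFFSET);

            if (!errsou)
                    return false;   /* not a RAS interrupt */

            /* Fatal, uncorrectable errors request a device reset */
            *reset_required = !!(errsou & MY_FATAL_MASK);

            return true;
    }
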
 static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
 {
        struct adf_accel_dev *accel_dev = dev_ptr;
@@ -145,6 +160,9 @@ static irqreturn_t adf_msix_isr_ae(int irq, void *dev_ptr)
        if (adf_handle_pm_int(accel_dev))
                return IRQ_HANDLED;
 
+       if (adf_handle_ras_int(accel_dev))
+               return IRQ_HANDLED;
+
        dev_dbg(&GET_DEV(accel_dev), "qat_dev%d spurious AE interrupt\n",
                accel_dev->accel_id);
 
diff --git a/drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs.c
new file mode 100644 (file)
index 0000000..f0a13c1
--- /dev/null
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+#include <linux/debugfs.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+
+#include "adf_accel_devices.h"
+#include "adf_pm_dbgfs.h"
+
+static ssize_t pm_status_read(struct file *f, char __user *buf, size_t count,
+                             loff_t *pos)
+{
+       struct adf_accel_dev *accel_dev = file_inode(f)->i_private;
+       struct adf_pm pm = accel_dev->power_management;
+
+       if (pm.print_pm_status)
+               return pm.print_pm_status(accel_dev, buf, count, pos);
+
+       return count;
+}
+
+static const struct file_operations pm_status_fops = {
+       .owner = THIS_MODULE,
+       .read = pm_status_read,
+};
+
+void adf_pm_dbgfs_add(struct adf_accel_dev *accel_dev)
+{
+       struct adf_pm *pm = &accel_dev->power_management;
+
+       if (!pm->present || !pm->print_pm_status)
+               return;
+
+       pm->debugfs_pm_status = debugfs_create_file("pm_status", 0400,
+                                                   accel_dev->debugfs_dir,
+                                                   accel_dev, &pm_status_fops);
+}
+
+void adf_pm_dbgfs_rm(struct adf_accel_dev *accel_dev)
+{
+       struct adf_pm *pm = &accel_dev->power_management;
+
+       if (!pm->present)
+               return;
+
+       debugfs_remove(pm->debugfs_pm_status);
+       pm->debugfs_pm_status = NULL;
+}
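
pm_status_read() above only forwards to the device-supplied print_pm_status callback. A minimal sketch of such a callback, inferred from the call site's signature (the payload below is illustrative, not the real per-device output):

    static ssize_t my_print_pm_status(struct adf_accel_dev *accel_dev,
                                      char __user *buf, size_t count,
                                      loff_t *pos)
    {
            char str[32];
            int len;

            /* A real callback would format the device's PM counters here */
            len = scnprintf(str, sizeof(str), "power management: enabled\n");

            return simple_read_from_buffer(buf, count, pos, str, len);
    }
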
diff --git a/drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs.h b/drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs.h
new file mode 100644 (file)
index 0000000..83632e5
--- /dev/null
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_PM_DBGFS_H_
+#define ADF_PM_DBGFS_H_
+
+struct adf_accel_dev;
+
+void adf_pm_dbgfs_rm(struct adf_accel_dev *accel_dev);
+void adf_pm_dbgfs_add(struct adf_accel_dev *accel_dev);
+
+#endif /* ADF_PM_DBGFS_H_ */
diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.c b/drivers/crypto/intel/qat/qat_common/adf_rl.c
new file mode 100644 (file)
index 0000000..86e3e21
--- /dev/null
@@ -0,0 +1,1169 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#define dev_fmt(fmt) "RateLimiting: " fmt
+
+#include <asm/errno.h>
+#include <asm/div64.h>
+
+#include <linux/dev_printk.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/units.h>
+
+#include "adf_accel_devices.h"
+#include "adf_common_drv.h"
+#include "adf_rl_admin.h"
+#include "adf_rl.h"
+#include "adf_sysfs_rl.h"
+
+#define RL_TOKEN_GRANULARITY_PCIEIN_BUCKET     0U
+#define RL_TOKEN_GRANULARITY_PCIEOUT_BUCKET    0U
+#define RL_TOKEN_PCIE_SIZE                     64
+#define RL_TOKEN_ASYM_SIZE                     1024
+#define RL_CSR_SIZE                            4U
+#define RL_CAPABILITY_MASK                     GENMASK(6, 4)
+#define RL_CAPABILITY_VALUE                    0x70
+#define RL_VALIDATE_NON_ZERO(input)            ((input) == 0)
+#define ROOT_MASK                              GENMASK(1, 0)
+#define CLUSTER_MASK                           GENMASK(3, 0)
+#define LEAF_MASK                              GENMASK(5, 0)
+
+static int validate_user_input(struct adf_accel_dev *accel_dev,
+                              struct adf_rl_sla_input_data *sla_in,
+                              bool is_update)
+{
+       const unsigned long rp_mask = sla_in->rp_mask;
+       size_t rp_mask_size;
+       int i, cnt;
+
+       if (sla_in->pir < sla_in->cir) {
+               dev_notice(&GET_DEV(accel_dev),
+                          "PIR must be >= CIR, setting PIR to CIR\n");
+               sla_in->pir = sla_in->cir;
+       }
+
+       if (!is_update) {
+               cnt = 0;
+               rp_mask_size = sizeof(sla_in->rp_mask) * BITS_PER_BYTE;
+               for_each_set_bit(i, &rp_mask, rp_mask_size) {
+                       if (++cnt > RL_RP_CNT_PER_LEAF_MAX) {
+                               dev_notice(&GET_DEV(accel_dev),
+                                          "Too many ring pairs selected for this SLA\n");
+                               return -EINVAL;
+                       }
+               }
+
+               if (sla_in->srv >= ADF_SVC_NONE) {
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "Wrong service type\n");
+                       return -EINVAL;
+               }
+
+               if (sla_in->type > RL_LEAF) {
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "Wrong node type\n");
+                       return -EINVAL;
+               }
+
+               if (sla_in->parent_id < RL_PARENT_DEFAULT_ID ||
+                   sla_in->parent_id >= RL_NODES_CNT_MAX) {
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "Wrong parent ID\n");
+                       return -EINVAL;
+               }
+       }
+
+       return 0;
+}
+
+static int validate_sla_id(struct adf_accel_dev *accel_dev, int sla_id)
+{
+       struct rl_sla *sla;
+
+       if (sla_id <= RL_SLA_EMPTY_ID || sla_id >= RL_NODES_CNT_MAX) {
+               dev_notice(&GET_DEV(accel_dev), "Provided ID is out of bounds\n");
+               return -EINVAL;
+       }
+
+       sla = accel_dev->rate_limiting->sla[sla_id];
+
+       if (!sla) {
+               dev_notice(&GET_DEV(accel_dev), "SLA with provided ID does not exist\n");
+               return -EINVAL;
+       }
+
+       if (sla->type != RL_LEAF) {
+               dev_notice(&GET_DEV(accel_dev), "This ID is reserved for internal use\n");
+               return -EINVAL;
+       }
+
+       return 0;
+}
+
+/**
+ * find_parent() - Find the parent for a new SLA
+ * @rl_data: pointer to ratelimiting data
+ * @sla_in: pointer to user input data for a new SLA
+ *
+ * Function returns a pointer to the parent SLA. If a parent ID is provided
+ * in the user input data, that ID is validated and the corresponding SLA
+ * is returned.
+ * Otherwise, the default parent SLA (root or cluster) for the new object
+ * is returned.
+ *
+ * Return:
+ * * Pointer to the parent SLA object
+ * * NULL - when parent cannot be found
+ */
+static struct rl_sla *find_parent(struct adf_rl *rl_data,
+                                 struct adf_rl_sla_input_data *sla_in)
+{
+       int input_parent_id = sla_in->parent_id;
+       struct rl_sla *root = NULL;
+       struct rl_sla *parent_sla;
+       int i;
+
+       if (sla_in->type == RL_ROOT)
+               return NULL;
+
+       if (input_parent_id > RL_PARENT_DEFAULT_ID) {
+               parent_sla = rl_data->sla[input_parent_id];
+               /*
+                * SLA can be a parent if it has the same service as the child
+                * and its type is higher in the hierarchy,
+                * for example the parent type of a LEAF must be a CLUSTER.
+                */
+               if (parent_sla && parent_sla->srv == sla_in->srv &&
+                   parent_sla->type == sla_in->type - 1)
+                       return parent_sla;
+
+               return NULL;
+       }
+
+       /* If input_parent_id is not valid, get root for this service type. */
+       for (i = 0; i < RL_ROOT_MAX; i++) {
+               if (rl_data->root[i] && rl_data->root[i]->srv == sla_in->srv) {
+                       root = rl_data->root[i];
+                       break;
+               }
+       }
+
+       if (!root)
+               return NULL;
+
+       /*
+        * If the type of this SLA is cluster, then return the root.
+        * Otherwise, find the default (i.e. first) cluster for this service.
+        */
+       if (sla_in->type == RL_CLUSTER)
+               return root;
+
+       for (i = 0; i < RL_CLUSTER_MAX; i++) {
+               if (rl_data->cluster[i] && rl_data->cluster[i]->parent == root)
+                       return rl_data->cluster[i];
+       }
+
+       return NULL;
+}
+
+static enum adf_cfg_service_type srv_to_cfg_svc_type(enum adf_base_services rl_srv)
+{
+       switch (rl_srv) {
+       case ADF_SVC_ASYM:
+               return ASYM;
+       case ADF_SVC_SYM:
+               return SYM;
+       case ADF_SVC_DC:
+               return COMP;
+       default:
+               return UNUSED;
+       }
+}
+
+/**
+ * get_sla_arr_of_type() - Returns a pointer to SLA type specific array
+ * @rl_data: pointer to ratelimiting data
+ * @type: SLA type
+ * @sla_arr: pointer to variable where requested pointer will be stored
+ *
+ * Return: Max number of elements allowed for the returned array
+ */
+static u32 get_sla_arr_of_type(struct adf_rl *rl_data, enum rl_node_type type,
+                              struct rl_sla ***sla_arr)
+{
+       switch (type) {
+       case RL_LEAF:
+               *sla_arr = rl_data->leaf;
+               return RL_LEAF_MAX;
+       case RL_CLUSTER:
+               *sla_arr = rl_data->cluster;
+               return RL_CLUSTER_MAX;
+       case RL_ROOT:
+               *sla_arr = rl_data->root;
+               return RL_ROOT_MAX;
+       default:
+               *sla_arr = NULL;
+               return 0;
+       }
+}
+
+static bool is_service_enabled(struct adf_accel_dev *accel_dev,
+                              enum adf_base_services rl_srv)
+{
+       enum adf_cfg_service_type arb_srv = srv_to_cfg_svc_type(rl_srv);
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       u8 rps_per_bundle = hw_data->num_banks_per_vf;
+       int i;
+
+       for (i = 0; i < rps_per_bundle; i++) {
+               if (GET_SRV_TYPE(accel_dev, i) == arb_srv)
+                       return true;
+       }
+
+       return false;
+}
+
+/**
+ * prepare_rp_ids() - Creates an array of ring pair IDs from a bitmask
+ * @accel_dev: pointer to acceleration device structure
+ * @sla: SLA object data where result will be written
+ * @rp_mask: bitmask of ring pair IDs
+ *
+ * Function converts the provided bitmask to an array of IDs. It checks that
+ * the RPs are not already in use, that they match the SLA's service, and
+ * that no more IDs are provided than supported. On success, the IDs are
+ * written to sla->ring_pairs_ids and their count to sla->ring_pairs_cnt.
+ *
+ * Return:
+ * * 0         - ok
+ * * -EINVAL   - ring pairs array cannot be created from provided mask
+ */
+static int prepare_rp_ids(struct adf_accel_dev *accel_dev, struct rl_sla *sla,
+                         const unsigned long rp_mask)
+{
+       enum adf_cfg_service_type arb_srv = srv_to_cfg_svc_type(sla->srv);
+       u16 rps_per_bundle = GET_HW_DATA(accel_dev)->num_banks_per_vf;
+       bool *rp_in_use = accel_dev->rate_limiting->rp_in_use;
+       size_t rp_cnt_max = ARRAY_SIZE(sla->ring_pairs_ids);
+       u16 rp_id_max = GET_HW_DATA(accel_dev)->num_banks;
+       u16 cnt = 0;
+       u16 rp_id;
+
+       for_each_set_bit(rp_id, &rp_mask, rp_id_max) {
+               if (cnt >= rp_cnt_max) {
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "Assigned more ring pairs than supported");
+                       return -EINVAL;
+               }
+
+               if (rp_in_use[rp_id]) {
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "RP %u already assigned to other SLA", rp_id);
+                       return -EINVAL;
+               }
+
+               if (GET_SRV_TYPE(accel_dev, rp_id % rps_per_bundle) != arb_srv) {
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "RP %u does not support SLA service", rp_id);
+                       return -EINVAL;
+               }
+
+               sla->ring_pairs_ids[cnt++] = rp_id;
+       }
+
+       sla->ring_pairs_cnt = cnt;
+
+       return 0;
+}
+
+static void mark_rps_usage(struct rl_sla *sla, bool *rp_in_use, bool used)
+{
+       u16 rp_id;
+       int i;
+
+       for (i = 0; i < sla->ring_pairs_cnt; i++) {
+               rp_id = sla->ring_pairs_ids[i];
+               rp_in_use[rp_id] = used;
+       }
+}
+
+static void assign_rps_to_leaf(struct adf_accel_dev *accel_dev,
+                              struct rl_sla *sla, bool clear)
+{
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+       u32 base_offset = hw_data->rl_data.r2l_offset;
+       u32 node_id = clear ? 0U : (sla->node_id & LEAF_MASK);
+       u32 offset;
+       int i;
+
+       for (i = 0; i < sla->ring_pairs_cnt; i++) {
+               offset = base_offset + (RL_CSR_SIZE * sla->ring_pairs_ids[i]);
+               ADF_CSR_WR(pmisc_addr, offset, node_id);
+       }
+}
+
+static void assign_leaf_to_cluster(struct adf_accel_dev *accel_dev,
+                                  struct rl_sla *sla, bool clear)
+{
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+       u32 base_offset = hw_data->rl_data.l2c_offset;
+       u32 node_id = sla->node_id & LEAF_MASK;
+       u32 parent_id = clear ? 0U : (sla->parent->node_id & CLUSTER_MASK);
+       u32 offset;
+
+       offset = base_offset + (RL_CSR_SIZE * node_id);
+       ADF_CSR_WR(pmisc_addr, offset, parent_id);
+}
+
+static void assign_cluster_to_root(struct adf_accel_dev *accel_dev,
+                                  struct rl_sla *sla, bool clear)
+{
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+       u32 base_offset = hw_data->rl_data.c2s_offset;
+       u32 node_id = sla->node_id & CLUSTER_MASK;
+       u32 parent_id = clear ? 0U : (sla->parent->node_id & ROOT_MASK);
+       u32 offset;
+
+       offset = base_offset + (RL_CSR_SIZE * node_id);
+       ADF_CSR_WR(pmisc_addr, offset, parent_id);
+}
+
+static void assign_node_to_parent(struct adf_accel_dev *accel_dev,
+                                 struct rl_sla *sla, bool clear_assignment)
+{
+       switch (sla->type) {
+       case RL_LEAF:
+               assign_rps_to_leaf(accel_dev, sla, clear_assignment);
+               assign_leaf_to_cluster(accel_dev, sla, clear_assignment);
+               break;
+       case RL_CLUSTER:
+               assign_cluster_to_root(accel_dev, sla, clear_assignment);
+               break;
+       default:
+               break;
+       }
+}
+
+/**
+ * can_parent_afford_sla() - Verifies if the parent can accommodate a new SLA
+ * @sla_in: pointer to user input data for a new SLA
+ * @sla_parent: pointer to parent SLA object
+ * @sla_cir: current child CIR value (only for update)
+ * @is_update: true if the request is an update
+ *
+ * The algorithm verifies that the parent has enough remaining budget to take
+ * on a child with the provided parameters. In the update case, the child's
+ * current CIR value is returned to the budget first.
+ * The PIR value cannot exceed the PIR assigned to the parent.
+ *
+ * Return:
+ * * true      - SLA can be created
+ * * false     - SLA cannot be created
+ */
+static bool can_parent_afford_sla(struct adf_rl_sla_input_data *sla_in,
+                                 struct rl_sla *sla_parent, u32 sla_cir,
+                                 bool is_update)
+{
+       u32 rem_cir = sla_parent->rem_cir;
+
+       if (is_update)
+               rem_cir += sla_cir;
+
+       if (sla_in->cir > rem_cir || sla_in->pir > sla_parent->pir)
+               return false;
+
+       return true;
+}
+
+/**
+ * can_node_afford_update() - Verifies if SLA can be updated with input data
+ * @sla_in: pointer to user input data for a new SLA
+ * @sla: pointer to SLA object selected for update
+ *
+ * The algorithm verifies that the new CIR value is large enough to satisfy
+ * the currently assigned child SLAs and that the PIR can be updated.
+ *
+ * Return:
+ * * true      - SLA can be updated
+ * * false     - SLA cannot be updated
+ */
+static bool can_node_afford_update(struct adf_rl_sla_input_data *sla_in,
+                                  struct rl_sla *sla)
+{
+       u32 cir_in_use = sla->cir - sla->rem_cir;
+
+       /* new CIR cannot be smaller than the currently consumed value */
+       if (cir_in_use > sla_in->cir)
+               return false;
+
+       /* PIR of root/cluster cannot be reduced in node with assigned children */
+       if (sla_in->pir < sla->pir && sla->type != RL_LEAF && cir_in_use > 0)
+               return false;
+
+       return true;
+}
+
+static bool is_enough_budget(struct adf_rl *rl_data, struct rl_sla *sla,
+                            struct adf_rl_sla_input_data *sla_in,
+                            bool is_update)
+{
+       u32 max_val = rl_data->device_data->scale_ref;
+       struct rl_sla *parent = sla->parent;
+       bool ret = true;
+
+       if (sla_in->cir > max_val || sla_in->pir > max_val)
+               ret = false;
+
+       switch (sla->type) {
+       case RL_LEAF:
+               ret &= can_parent_afford_sla(sla_in, parent, sla->cir,
+                                            is_update);
+               break;
+       case RL_CLUSTER:
+               ret &= can_parent_afford_sla(sla_in, parent, sla->cir,
+                                            is_update);
+
+               if (is_update)
+                       ret &= can_node_afford_update(sla_in, sla);
+
+               break;
+       case RL_ROOT:
+               if (is_update)
+                       ret &= can_node_afford_update(sla_in, sla);
+
+               break;
+       default:
+               ret = false;
+               break;
+       }
+
+       return ret;
+}
+
+static void update_budget(struct rl_sla *sla, u32 old_cir, bool is_update)
+{
+       switch (sla->type) {
+       case RL_LEAF:
+               if (is_update)
+                       sla->parent->rem_cir += old_cir;
+
+               sla->parent->rem_cir -= sla->cir;
+               sla->rem_cir = 0;
+               break;
+       case RL_CLUSTER:
+               if (is_update) {
+                       sla->parent->rem_cir += old_cir;
+                       sla->rem_cir = sla->cir - (old_cir - sla->rem_cir);
+               } else {
+                       sla->rem_cir = sla->cir;
+               }
+
+               sla->parent->rem_cir -= sla->cir;
+               break;
+       case RL_ROOT:
+               if (is_update)
+                       sla->rem_cir = sla->cir - (old_cir - sla->rem_cir);
+               else
+                       sla->rem_cir = sla->cir;
+               break;
+       default:
+               break;
+       }
+}
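
A worked example of this give-back-then-take accounting: a cluster with cir = 1000 that gains a leaf of cir = 300 drops to rem_cir = 700. Updating that leaf to cir = 500 first returns the old 300 (rem_cir back to 1000) and then subtracts 500, leaving rem_cir = 500.
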
+
+/**
+ * get_next_free_sla_id() - finds next free ID in the SLA array
+ * @rl_data: Pointer to ratelimiting data structure
+ *
+ * Return:
+ * * 0 .. RL_NODES_CNT_MAX - 1 - valid ID
+ * * -ENOSPC                   - all SLA slots are in use
+ */
+static int get_next_free_sla_id(struct adf_rl *rl_data)
+{
+       int i = 0;
+
+       while (i < RL_NODES_CNT_MAX && rl_data->sla[i++])
+               ;
+
+       if (i == RL_NODES_CNT_MAX)
+               return -ENOSPC;
+
+       return i - 1;
+}
+
+/**
+ * get_next_free_node_id() - finds next free ID in the array of that node type
+ * @rl_data: Pointer to ratelimiting data structure
+ * @sla: Pointer to SLA object for which the ID is searched
+ *
+ * Return:
+ * * 0 .. RL_[NODE_TYPE]_MAX - 1      - valid ID
+ * * -ENOSPC                   - all slots of that type are in use
+ */
+static int get_next_free_node_id(struct adf_rl *rl_data, struct rl_sla *sla)
+{
+       struct adf_hw_device_data *hw_device = GET_HW_DATA(rl_data->accel_dev);
+       int max_id, i, step, rp_per_leaf;
+       struct rl_sla **sla_list;
+
+       rp_per_leaf = hw_device->num_banks / hw_device->num_banks_per_vf;
+
+       /*
+        * Static nodes mapping:
+        * root0 - cluster[0,4,8,12] - leaf[0-15]
+        * root1 - cluster[1,5,9,13] - leaf[16-31]
+        * root2 - cluster[2,6,10,14] - leaf[32-47]
+        */
+       switch (sla->type) {
+       case RL_LEAF:
+               i = sla->srv * rp_per_leaf;
+               step = 1;
+               max_id = i + rp_per_leaf;
+               sla_list = rl_data->leaf;
+               break;
+       case RL_CLUSTER:
+               i = sla->srv;
+               step = 4;
+               max_id = RL_CLUSTER_MAX;
+               sla_list = rl_data->cluster;
+               break;
+       case RL_ROOT:
+               return sla->srv;
+       default:
+               return -EINVAL;
+       }
+
+       while (i < max_id && sla_list[i])
+               i += step;
+
+       if (i >= max_id)
+               return -ENOSPC;
+
+       return i;
+}
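
Applying the static mapping above: a CLUSTER for service 1 (ADF_SVC_SYM) starts at node ID 1 and steps by 4, so its candidate IDs are 1, 5, 9 and 13, while a LEAF for the same service scans the contiguous block from srv * rp_per_leaf up to srv * rp_per_leaf + rp_per_leaf - 1 (IDs 16-31 when rp_per_leaf is 16).
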
+
+u32 adf_rl_calculate_slice_tokens(struct adf_accel_dev *accel_dev, u32 sla_val,
+                                 enum adf_base_services svc_type)
+{
+       struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       u64 avail_slice_cycles, allocated_tokens;
+
+       if (!sla_val)
+               return 0;
+
+       avail_slice_cycles = hw_data->clock_frequency;
+
+       switch (svc_type) {
+       case ADF_SVC_ASYM:
+               avail_slice_cycles *= device_data->slices.pke_cnt;
+               break;
+       case ADF_SVC_SYM:
+               avail_slice_cycles *= device_data->slices.cph_cnt;
+               break;
+       case ADF_SVC_DC:
+               avail_slice_cycles *= device_data->slices.dcpr_cnt;
+               break;
+       default:
+               break;
+       }
+
+       do_div(avail_slice_cycles, device_data->scan_interval);
+       allocated_tokens = avail_slice_cycles * sla_val;
+       do_div(allocated_tokens, device_data->scale_ref);
+
+       return allocated_tokens;
+}
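
The computation reduces to allocated_tokens = (clock_frequency * slice_cnt / scan_interval) * sla_val / scale_ref: the slice cycles available in one scan interval, scaled by the SLA's permille share. An SLA whose sla_val equals scale_ref is therefore granted every available slice cycle for its service.
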
+
+u32 adf_rl_calculate_ae_cycles(struct adf_accel_dev *accel_dev, u32 sla_val,
+                              enum adf_base_services svc_type)
+{
+       struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       u64 allocated_ae_cycles, avail_ae_cycles;
+
+       if (!sla_val)
+               return 0;
+
+       avail_ae_cycles = hw_data->clock_frequency;
+       avail_ae_cycles *= hw_data->get_num_aes(hw_data) - 1;
+       do_div(avail_ae_cycles, device_data->scan_interval);
+
+       sla_val *= device_data->max_tp[svc_type];
+       sla_val /= device_data->scale_ref;
+
+       allocated_ae_cycles = (sla_val * avail_ae_cycles);
+       do_div(allocated_ae_cycles, device_data->max_tp[svc_type]);
+
+       return allocated_ae_cycles;
+}
+
+u32 adf_rl_calculate_pci_bw(struct adf_accel_dev *accel_dev, u32 sla_val,
+                           enum adf_base_services svc_type, bool is_bw_out)
+{
+       struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
+       u64 sla_to_bytes, allocated_bw, sla_scaled;
+
+       if (!sla_val)
+               return 0;
+
+       sla_to_bytes = sla_val;
+       sla_to_bytes *= device_data->max_tp[svc_type];
+       do_div(sla_to_bytes, device_data->scale_ref);
+
+       sla_to_bytes *= (svc_type == ADF_SVC_ASYM) ? RL_TOKEN_ASYM_SIZE :
+                                                    BYTES_PER_MBIT;
+       if (svc_type == ADF_SVC_DC && is_bw_out)
+               sla_to_bytes *= device_data->slices.dcpr_cnt -
+                               device_data->dcpr_correction;
+
+       sla_scaled = sla_to_bytes * device_data->pcie_scale_mul;
+       do_div(sla_scaled, device_data->pcie_scale_div);
+       allocated_bw = sla_scaled;
+       do_div(allocated_bw, RL_TOKEN_PCIE_SIZE);
+       do_div(allocated_bw, device_data->scan_interval);
+
+       return allocated_bw;
+}
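
The PCIe budget ends up expressed in RL_TOKEN_PCIE_SIZE (64-byte) tokens per scan interval. For the asym service the throughput input is first converted using RL_TOKEN_ASYM_SIZE (1024-byte) units instead of BYTES_PER_MBIT, and compression output bandwidth is additionally multiplied by the corrected slice count (dcpr_cnt - dcpr_correction).
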
+
+/**
+ * add_new_sla_entry() - creates a new SLA object and fills it with user data
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data for a new SLA
+ * @sla_out: Pointer to variable that will contain the address of a new
+ *          SLA object if the operation succeeds
+ *
+ * Return:
+ * * 0         - ok
+ * * -ENOMEM   - memory allocation failed
+ * * -EINVAL   - invalid user input
+ * * -ENOSPC   - all available SLAs are in use
+ */
+static int add_new_sla_entry(struct adf_accel_dev *accel_dev,
+                            struct adf_rl_sla_input_data *sla_in,
+                            struct rl_sla **sla_out)
+{
+       struct adf_rl *rl_data = accel_dev->rate_limiting;
+       struct rl_sla *sla;
+       int ret = 0;
+
+       sla = kzalloc(sizeof(*sla), GFP_KERNEL);
+       if (!sla) {
+               ret = -ENOMEM;
+               goto ret_err;
+       }
+       *sla_out = sla;
+
+       if (!is_service_enabled(accel_dev, sla_in->srv)) {
+               dev_notice(&GET_DEV(accel_dev),
+                          "Provided service is not enabled\n");
+               ret = -EINVAL;
+               goto ret_err;
+       }
+
+       sla->srv = sla_in->srv;
+       sla->type = sla_in->type;
+       ret = get_next_free_node_id(rl_data, sla);
+       if (ret < 0) {
+               dev_notice(&GET_DEV(accel_dev),
+                          "Exceeded number of available nodes for that service\n");
+               goto ret_err;
+       }
+       sla->node_id = ret;
+
+       ret = get_next_free_sla_id(rl_data);
+       if (ret < 0) {
+               dev_notice(&GET_DEV(accel_dev),
+                          "Allocated maximum SLAs number\n");
+               goto ret_err;
+       }
+       sla->sla_id = ret;
+
+       sla->parent = find_parent(rl_data, sla_in);
+       if (!sla->parent && sla->type != RL_ROOT) {
+               if (sla_in->parent_id != RL_PARENT_DEFAULT_ID)
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "Provided parent ID does not exist or cannot be parent for this SLA.");
+               else
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "Unable to find parent node for this service. Is service enabled?");
+               ret = -EINVAL;
+               goto ret_err;
+       }
+
+       if (sla->type == RL_LEAF) {
+               ret = prepare_rp_ids(accel_dev, sla, sla_in->rp_mask);
+               if (!sla->ring_pairs_cnt || ret) {
+                       dev_notice(&GET_DEV(accel_dev),
+                                  "Unable to find ring pairs to assign to the leaf");
+                       if (!ret)
+                               ret = -EINVAL;
+
+                       goto ret_err;
+               }
+       }
+
+       return 0;
+
+ret_err:
+       kfree(sla);
+       *sla_out = NULL;
+
+       return ret;
+}
+
+static int initialize_default_nodes(struct adf_accel_dev *accel_dev)
+{
+       struct adf_rl *rl_data = accel_dev->rate_limiting;
+       struct adf_rl_hw_data *device_data = rl_data->device_data;
+       struct adf_rl_sla_input_data sla_in = { };
+       int ret = 0;
+       int i;
+
+       /* Init root for each enabled service */
+       sla_in.type = RL_ROOT;
+       sla_in.parent_id = RL_PARENT_DEFAULT_ID;
+
+       for (i = 0; i < ADF_SVC_NONE; i++) {
+               if (!is_service_enabled(accel_dev, i))
+                       continue;
+
+               sla_in.cir = device_data->scale_ref;
+               sla_in.pir = sla_in.cir;
+               sla_in.srv = i;
+
+               ret = adf_rl_add_sla(accel_dev, &sla_in);
+               if (ret)
+                       return ret;
+       }
+
+       /* Init default cluster for each root */
+       sla_in.type = RL_CLUSTER;
+       for (i = 0; i < ADF_SVC_NONE; i++) {
+               if (!rl_data->root[i])
+                       continue;
+
+               sla_in.cir = rl_data->root[i]->cir;
+               sla_in.pir = sla_in.cir;
+               sla_in.srv = rl_data->root[i]->srv;
+
+               ret = adf_rl_add_sla(accel_dev, &sla_in);
+               if (ret)
+                       return ret;
+       }
+
+       return 0;
+}
+
+static void clear_sla(struct adf_rl *rl_data, struct rl_sla *sla)
+{
+       bool *rp_in_use = rl_data->rp_in_use;
+       struct rl_sla **sla_type_arr = NULL;
+       int i, sla_id, node_id;
+       u32 old_cir;
+
+       sla_id = sla->sla_id;
+       node_id = sla->node_id;
+       old_cir = sla->cir;
+       sla->cir = 0;
+       sla->pir = 0;
+
+       for (i = 0; i < sla->ring_pairs_cnt; i++)
+               rp_in_use[sla->ring_pairs_ids[i]] = false;
+
+       update_budget(sla, old_cir, true);
+       get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
+       assign_node_to_parent(rl_data->accel_dev, sla, true);
+       adf_rl_send_admin_delete_msg(rl_data->accel_dev, node_id, sla->type);
+       mark_rps_usage(sla, rl_data->rp_in_use, false);
+
+       kfree(sla);
+       rl_data->sla[sla_id] = NULL;
+       sla_type_arr[node_id] = NULL;
+}
+
+/**
+ * add_update_sla() - handles the creation and the update of an SLA
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data for a new/updated SLA
+ * @is_update: flag to indicate if this is an update or an add operation
+ *
+ * Return:
+ * * 0         - ok
+ * * -ENOMEM   - memory allocation failed
+ * * -EINVAL   - user input data cannot be used to create SLA
+ * * -ENOSPC   - all available SLAs are in use
+ */
+static int add_update_sla(struct adf_accel_dev *accel_dev,
+                         struct adf_rl_sla_input_data *sla_in, bool is_update)
+{
+       struct adf_rl *rl_data = accel_dev->rate_limiting;
+       struct rl_sla **sla_type_arr = NULL;
+       struct rl_sla *sla = NULL;
+       u32 old_cir = 0;
+       int ret;
+
+       if (!sla_in) {
+               dev_warn(&GET_DEV(accel_dev),
+                        "SLA input data pointer is missing\n");
+               ret = -EFAULT;
+               goto ret_err;
+       }
+
+       /* Input validation */
+       ret = validate_user_input(accel_dev, sla_in, is_update);
+       if (ret)
+               goto ret_err;
+
+       mutex_lock(&rl_data->rl_lock);
+
+       if (is_update) {
+               ret = validate_sla_id(accel_dev, sla_in->sla_id);
+               if (ret)
+                       goto ret_err;
+
+               sla = rl_data->sla[sla_in->sla_id];
+               old_cir = sla->cir;
+       } else {
+               ret = add_new_sla_entry(accel_dev, sla_in, &sla);
+               if (ret)
+                       goto ret_err;
+       }
+
+       if (!is_enough_budget(rl_data, sla, sla_in, is_update)) {
+               dev_notice(&GET_DEV(accel_dev),
+                          "Input value exceeds the remaining budget%s\n",
+                          is_update ? " or more budget is already in use" : "");
+               ret = -EINVAL;
+               goto ret_err;
+       }
+       sla->cir = sla_in->cir;
+       sla->pir = sla_in->pir;
+
+       /* Apply SLA */
+       assign_node_to_parent(accel_dev, sla, false);
+       ret = adf_rl_send_admin_add_update_msg(accel_dev, sla, is_update);
+       if (ret) {
+               dev_notice(&GET_DEV(accel_dev),
+                          "Failed to apply an SLA\n");
+               goto ret_err;
+       }
+       update_budget(sla, old_cir, is_update);
+
+       if (!is_update) {
+               mark_rps_usage(sla, rl_data->rp_in_use, true);
+               get_sla_arr_of_type(rl_data, sla->type, &sla_type_arr);
+               sla_type_arr[sla->node_id] = sla;
+               rl_data->sla[sla->sla_id] = sla;
+       }
+
+       sla_in->sla_id = sla->sla_id;
+       goto ret_ok;
+
+ret_err:
+       if (!is_update) {
+               sla_in->sla_id = -1;
+               kfree(sla);
+       }
+ret_ok:
+       mutex_unlock(&rl_data->rl_lock);
+       return ret;
+}
+
+/**
+ * adf_rl_add_sla() - handles the creation of an SLA
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data required to add an SLA
+ *
+ * Return:
+ * * 0         - ok
+ * * -ENOMEM   - memory allocation failed
+ * * -EINVAL   - invalid user input
+ * * -ENOSPC   - all available SLAs are in use
+ */
+int adf_rl_add_sla(struct adf_accel_dev *accel_dev,
+                  struct adf_rl_sla_input_data *sla_in)
+{
+       return add_update_sla(accel_dev, sla_in, false);
+}
+
+/**
+ * adf_rl_update_sla() - handles the update of an SLA
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user input data required to update an SLA
+ *
+ * Return:
+ * * 0         - ok
+ * * -EINVAL   - user input data cannot be used to update SLA
+ */
+int adf_rl_update_sla(struct adf_accel_dev *accel_dev,
+                     struct adf_rl_sla_input_data *sla_in)
+{
+       return add_update_sla(accel_dev, sla_in, true);
+}
+
+/**
+ * adf_rl_get_sla() - returns an existing SLA data
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_in: pointer to user data where SLA info will be stored
+ *
+ * The sla_id for which data is requested should be set in the sla_in structure
+ *
+ * Return:
+ * * 0         - ok
+ * * -EINVAL   - provided sla_id does not exist
+ */
+int adf_rl_get_sla(struct adf_accel_dev *accel_dev,
+                  struct adf_rl_sla_input_data *sla_in)
+{
+       struct rl_sla *sla;
+       int ret, i;
+
+       ret = validate_sla_id(accel_dev, sla_in->sla_id);
+       if (ret)
+               return ret;
+
+       sla = accel_dev->rate_limiting->sla[sla_in->sla_id];
+       sla_in->type = sla->type;
+       sla_in->srv = sla->srv;
+       sla_in->cir = sla->cir;
+       sla_in->pir = sla->pir;
+       sla_in->rp_mask = 0U;
+       if (sla->parent)
+               sla_in->parent_id = sla->parent->sla_id;
+       else
+               sla_in->parent_id = RL_PARENT_DEFAULT_ID;
+
+       for (i = 0; i < sla->ring_pairs_cnt; i++)
+               sla_in->rp_mask |= BIT(sla->ring_pairs_ids[i]);
+
+       return 0;
+}
+
+/**
+ * adf_rl_get_capability_remaining() - returns the remaining SLA value (CIR) for
+ *                                    selected service or provided sla_id
+ * @accel_dev: pointer to acceleration device structure
+ * @srv: service ID for which capability is requested
+ * @sla_id: ID of the cluster or root to which we want to assign a new SLA
+ *
+ * Check if the provided SLA id is valid. If it is and the service matches
+ * the requested service and the type is cluster or root, return the remaining
+ * capability.
+ * If the provided ID does not match the service or type, return the remaining
+ * capacity of the default cluster for that service.
+ *
+ * Return:
+ * * Positive value    - correct remaining value
+ * * -EINVAL           - algorithm cannot find a remaining value for provided data
+ */
+int adf_rl_get_capability_remaining(struct adf_accel_dev *accel_dev,
+                                   enum adf_base_services srv, int sla_id)
+{
+       struct adf_rl *rl_data = accel_dev->rate_limiting;
+       struct rl_sla *sla = NULL;
+       int i;
+
+       if (srv >= ADF_SVC_NONE)
+               return -EINVAL;
+
+       if (sla_id > RL_SLA_EMPTY_ID && !validate_sla_id(accel_dev, sla_id)) {
+               sla = rl_data->sla[sla_id];
+
+               if (sla->srv == srv && sla->type <= RL_CLUSTER)
+                       goto ret_ok;
+       }
+
+       for (i = 0; i < RL_CLUSTER_MAX; i++) {
+               if (!rl_data->cluster[i])
+                       continue;
+
+               if (rl_data->cluster[i]->srv == srv) {
+                       sla = rl_data->cluster[i];
+                       goto ret_ok;
+               }
+       }
+
+       return -EINVAL;
+ret_ok:
+       return sla->rem_cir;
+}
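
Passing RL_SLA_EMPTY_ID as sla_id skips the lookup entirely and returns the remaining CIR of the default (first) cluster for the service, which is exactly the budget a new SLA with a default parent would be checked against.
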
+
+/**
+ * adf_rl_remove_sla() - removes provided sla_id
+ * @accel_dev: pointer to acceleration device structure
+ * @sla_id: ID of the SLA to be removed
+ *
+ * Return:
+ * * 0         - ok
+ * * -EINVAL   - wrong sla_id or the SLA still has assigned children
+ */
+int adf_rl_remove_sla(struct adf_accel_dev *accel_dev, u32 sla_id)
+{
+       struct adf_rl *rl_data = accel_dev->rate_limiting;
+       struct rl_sla *sla;
+       int ret = 0;
+
+       mutex_lock(&rl_data->rl_lock);
+       ret = validate_sla_id(accel_dev, sla_id);
+       if (ret)
+               goto err_ret;
+
+       sla = rl_data->sla[sla_id];
+
+       if (sla->type < RL_LEAF && sla->rem_cir != sla->cir) {
+               dev_notice(&GET_DEV(accel_dev),
+                          "To remove parent SLA all its children must be removed first");
+               ret = -EINVAL;
+               goto err_ret;
+       }
+
+       clear_sla(rl_data, sla);
+
+err_ret:
+       mutex_unlock(&rl_data->rl_lock);
+       return ret;
+}
+
+/**
+ * adf_rl_remove_sla_all() - removes all SLAs from device
+ * @accel_dev: pointer to acceleration device structure
+ * @incl_default: set to true if default SLAs also should be removed
+ */
+void adf_rl_remove_sla_all(struct adf_accel_dev *accel_dev, bool incl_default)
+{
+       struct adf_rl *rl_data = accel_dev->rate_limiting;
+       int end_type = incl_default ? RL_ROOT : RL_LEAF;
+       struct rl_sla **sla_type_arr = NULL;
+       u32 max_id;
+       int i, j;
+
+       mutex_lock(&rl_data->rl_lock);
+
+       /* Unregister and remove all SLAs */
+       for (j = RL_LEAF; j >= end_type; j--) {
+               max_id = get_sla_arr_of_type(rl_data, j, &sla_type_arr);
+
+               for (i = 0; i < max_id; i++) {
+                       if (!sla_type_arr[i])
+                               continue;
+
+                       clear_sla(rl_data, sla_type_arr[i]);
+               }
+       }
+
+       mutex_unlock(&rl_data->rl_lock);
+}
+
+int adf_rl_init(struct adf_accel_dev *accel_dev)
+{
+       struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
+       struct adf_rl_hw_data *rl_hw_data = &hw_data->rl_data;
+       struct adf_rl *rl;
+       int ret = 0;
+
+       /* Validate device parameters */
+       if (RL_VALIDATE_NON_ZERO(rl_hw_data->max_tp[ADF_SVC_ASYM]) ||
+           RL_VALIDATE_NON_ZERO(rl_hw_data->max_tp[ADF_SVC_SYM]) ||
+           RL_VALIDATE_NON_ZERO(rl_hw_data->max_tp[ADF_SVC_DC]) ||
+           RL_VALIDATE_NON_ZERO(rl_hw_data->scan_interval) ||
+           RL_VALIDATE_NON_ZERO(rl_hw_data->pcie_scale_div) ||
+           RL_VALIDATE_NON_ZERO(rl_hw_data->pcie_scale_mul) ||
+           RL_VALIDATE_NON_ZERO(rl_hw_data->scale_ref)) {
+               ret = -EOPNOTSUPP;
+               goto err_ret;
+       }
+
+       rl = kzalloc(sizeof(*rl), GFP_KERNEL);
+       if (!rl) {
+               ret = -ENOMEM;
+               goto err_ret;
+       }
+
+       mutex_init(&rl->rl_lock);
+       rl->device_data = &accel_dev->hw_device->rl_data;
+       rl->accel_dev = accel_dev;
+       accel_dev->rate_limiting = rl;
+
+err_ret:
+       return ret;
+}
+
+int adf_rl_start(struct adf_accel_dev *accel_dev)
+{
+       struct adf_rl_hw_data *rl_hw_data = &GET_HW_DATA(accel_dev)->rl_data;
+       void __iomem *pmisc_addr = adf_get_pmisc_base(accel_dev);
+       u16 fw_caps =  GET_HW_DATA(accel_dev)->fw_capabilities;
+       int ret;
+
+       if (!accel_dev->rate_limiting) {
+               ret = -EOPNOTSUPP;
+               goto ret_err;
+       }
+
+       if ((fw_caps & RL_CAPABILITY_MASK) != RL_CAPABILITY_VALUE) {
+               dev_info(&GET_DEV(accel_dev), "not supported\n");
+               ret = -EOPNOTSUPP;
+               goto ret_free;
+       }
+
+       ADF_CSR_WR(pmisc_addr, rl_hw_data->pciin_tb_offset,
+                  RL_TOKEN_GRANULARITY_PCIEIN_BUCKET);
+       ADF_CSR_WR(pmisc_addr, rl_hw_data->pciout_tb_offset,
+                  RL_TOKEN_GRANULARITY_PCIEOUT_BUCKET);
+
+       ret = adf_rl_send_admin_init_msg(accel_dev, &rl_hw_data->slices);
+       if (ret) {
+               dev_err(&GET_DEV(accel_dev), "initialization failed\n");
+               goto ret_free;
+       }
+
+       ret = initialize_default_nodes(accel_dev);
+       if (ret) {
+               dev_err(&GET_DEV(accel_dev),
+                       "failed to initialize default SLAs\n");
+               goto ret_sla_rm;
+       }
+
+       ret = adf_sysfs_rl_add(accel_dev);
+       if (ret) {
+               dev_err(&GET_DEV(accel_dev), "failed to add sysfs interface\n");
+               goto ret_sysfs_rm;
+       }
+
+       return 0;
+
+ret_sysfs_rm:
+       adf_sysfs_rl_rm(accel_dev);
+ret_sla_rm:
+       adf_rl_remove_sla_all(accel_dev, true);
+ret_free:
+       kfree(accel_dev->rate_limiting);
+       accel_dev->rate_limiting = NULL;
+ret_err:
+       return ret;
+}
+
+void adf_rl_stop(struct adf_accel_dev *accel_dev)
+{
+       if (!accel_dev->rate_limiting)
+               return;
+
+       adf_sysfs_rl_rm(accel_dev);
+       adf_rl_remove_sla_all(accel_dev, true);
+}
+
+void adf_rl_exit(struct adf_accel_dev *accel_dev)
+{
+       if (!accel_dev->rate_limiting)
+               return;
+
+       kfree(accel_dev->rate_limiting);
+       accel_dev->rate_limiting = NULL;
+}
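
Together these four entry points form the module's lifecycle: adf_rl_init() allocates and validates the per-device data, adf_rl_start() programs the PCIe token buckets and creates the default root/cluster SLAs, and adf_rl_stop()/adf_rl_exit() are the teardown hooks wired into adf_dev_stop() and adf_dev_shutdown() in the hunks at the top of this patch.
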
diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl.h b/drivers/crypto/intel/qat/qat_common/adf_rl.h
new file mode 100644 (file)
index 0000000..eb5a330
--- /dev/null
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_RL_H_
+#define ADF_RL_H_
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+struct adf_accel_dev;
+
+#define RL_ROOT_MAX            4
+#define RL_CLUSTER_MAX         16
+#define RL_LEAF_MAX            64
+#define RL_NODES_CNT_MAX       (RL_ROOT_MAX + RL_CLUSTER_MAX + RL_LEAF_MAX)
+#define RL_RP_CNT_PER_LEAF_MAX 4U
+#define RL_RP_CNT_MAX          64
+#define RL_SLA_EMPTY_ID                -1
+#define RL_PARENT_DEFAULT_ID   -1
+
+enum rl_node_type {
+       RL_ROOT,
+       RL_CLUSTER,
+       RL_LEAF,
+};
+
+enum adf_base_services {
+       ADF_SVC_ASYM = 0,
+       ADF_SVC_SYM,
+       ADF_SVC_DC,
+       ADF_SVC_NONE,
+};
+
+/**
+ * struct adf_rl_sla_input_data - ratelimiting user input data structure
+ * @rp_mask: 64-bit bitmask of ring pair IDs which will be assigned to the SLA.
+ *          E.g. 0x5 -> RP0 and RP2 assigned; 0xA005 -> RP0,2,13,15 assigned.
+ * @sla_id: ID of current SLA for operations update, rm, get. For the add
+ *         operation, this field will be updated with the ID of the newly
+ *         added SLA
+ * @parent_id: ID of the SLA to which the current one should be assigned.
+ *            Set to -1 to refer to the default parent.
+ * @cir: Committed information rate. Rate guaranteed to be achieved. Input value
+ *      is expressed in permille scale, i.e. 1000 refers to the maximum
+ *      device throughput for a selected service.
+ * @pir: Peak information rate. Maximum rate available that the SLA can achieve.
+ *      Input value is expressed in permille scale, i.e. 1000 refers to
+ *      the maximum device throughput for a selected service.
+ * @type: SLA type: root, cluster, leaf
+ * @srv: Service associated with the SLA: asym, sym, dc.
+ *
+ * This structure is used to perform operations on an SLA.
+ * Depending on the operation, some of the parameters are ignored.
+ * The following list reports which parameters should be set for each operation.
+ *     - add: all except sla_id
+ *     - update: cir, pir, sla_id
+ *     - rm: sla_id
+ *     - rm_all: -
+ *     - get: sla_id
+ *     - get_capability_rem: srv, sla_id
+ */
+struct adf_rl_sla_input_data {
+       u64 rp_mask;
+       int sla_id;
+       int parent_id;
+       unsigned int cir;
+       unsigned int pir;
+       enum rl_node_type type;
+       enum adf_base_services srv;
+};
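
As a usage sketch for this structure (illustrative values, error handling trimmed; accel_dev is assumed to be an already started device), adding a leaf SLA for the symmetric service could look like:

    struct adf_rl_sla_input_data sla_in = { };
    int ret;

    sla_in.type = RL_LEAF;
    sla_in.srv = ADF_SVC_SYM;
    sla_in.parent_id = RL_PARENT_DEFAULT_ID;  /* default cluster */
    sla_in.rp_mask = BIT(0) | BIT(2);         /* RP0 and RP2 */
    sla_in.cir = 500;                         /* half of max throughput */
    sla_in.pir = 800;                         /* bursts up to 80% */

    ret = adf_rl_add_sla(accel_dev, &sla_in);
    if (!ret)
            pr_debug("new SLA id: %d\n", sla_in.sla_id);
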
+
+struct rl_slice_cnt {
+       u8 dcpr_cnt;
+       u8 pke_cnt;
+       u8 cph_cnt;
+};
+
+struct adf_rl_interface_data {
+       struct adf_rl_sla_input_data input;
+       enum adf_base_services cap_rem_srv;
+       struct rw_semaphore lock;
+};
+
+struct adf_rl_hw_data {
+       u32 scale_ref;
+       u32 scan_interval;
+       u32 r2l_offset;
+       u32 l2c_offset;
+       u32 c2s_offset;
+       u32 pciin_tb_offset;
+       u32 pciout_tb_offset;
+       u32 pcie_scale_mul;
+       u32 pcie_scale_div;
+       u32 dcpr_correction;
+       u32 max_tp[RL_ROOT_MAX];
+       struct rl_slice_cnt slices;
+};
+
+/**
+ * struct adf_rl - ratelimiting data structure
+ * @accel_dev: pointer to acceleration device data
+ * @device_data: pointer to rate limiting data specific to a device type (or revision)
+ * @sla: array of pointers to SLA objects
+ * @root: array of pointers to root type SLAs, element number reflects node_id
+ * @cluster: array of pointers to cluster type SLAs, element number reflects node_id
+ * @leaf: array of pointers to leaf type SLAs, element number reflects node_id
+ * @rp_in_use: array of ring pair IDs already used in one of SLAs
+ * @rl_lock: mutex protecting the data in this structure
+ * @user_input: structure holding the data received from the user
+ */
+struct adf_rl {
+       struct adf_accel_dev *accel_dev;
+       struct adf_rl_hw_data *device_data;
+       /* mapping sla_id to SLA objects */
+       struct rl_sla *sla[RL_NODES_CNT_MAX];
+       struct rl_sla *root[RL_ROOT_MAX];
+       struct rl_sla *cluster[RL_CLUSTER_MAX];
+       struct rl_sla *leaf[RL_LEAF_MAX];
+       bool rp_in_use[RL_RP_CNT_MAX];
+       /* Mutex protecting writing to SLAs lists */
+       struct mutex rl_lock;
+       struct adf_rl_interface_data user_input;
+};
+
+/**
+ * struct rl_sla - SLA object data structure
+ * @parent: pointer to the parent SLA (root/cluster)
+ * @type: SLA type
+ * @srv: service associated with this SLA
+ * @sla_id: ID of the SLA, used as element number in SLA array and as identifier
+ *         shared with the user
+ * @node_id: ID of the node; each SLA type has a separate ID list
+ * @cir: committed information rate
+ * @pir: peak information rate (PIR >= CIR)
+ * @rem_cir: if this SLA is a parent, this field holds the remaining CIR
+ *          value available to its child SLAs
+ * @ring_pairs_ids: array with numeric ring pairs IDs assigned to this SLA
+ * @ring_pairs_cnt: number of assigned ring pairs listed in the array above
+ */
+struct rl_sla {
+       struct rl_sla *parent;
+       enum rl_node_type type;
+       enum adf_base_services srv;
+       u32 sla_id;
+       u32 node_id;
+       u32 cir;
+       u32 pir;
+       u32 rem_cir;
+       u16 ring_pairs_ids[RL_RP_CNT_PER_LEAF_MAX];
+       u16 ring_pairs_cnt;
+};
+
+int adf_rl_add_sla(struct adf_accel_dev *accel_dev,
+                  struct adf_rl_sla_input_data *sla_in);
+int adf_rl_update_sla(struct adf_accel_dev *accel_dev,
+                     struct adf_rl_sla_input_data *sla_in);
+int adf_rl_get_sla(struct adf_accel_dev *accel_dev,
+                  struct adf_rl_sla_input_data *sla_in);
+int adf_rl_get_capability_remaining(struct adf_accel_dev *accel_dev,
+                                   enum adf_base_services srv, int sla_id);
+int adf_rl_remove_sla(struct adf_accel_dev *accel_dev, u32 sla_id);
+void adf_rl_remove_sla_all(struct adf_accel_dev *accel_dev, bool incl_default);
+
+int adf_rl_init(struct adf_accel_dev *accel_dev);
+int adf_rl_start(struct adf_accel_dev *accel_dev);
+void adf_rl_stop(struct adf_accel_dev *accel_dev);
+void adf_rl_exit(struct adf_accel_dev *accel_dev);
+
+u32 adf_rl_calculate_pci_bw(struct adf_accel_dev *accel_dev, u32 sla_val,
+                           enum adf_base_services svc_type, bool is_bw_out);
+u32 adf_rl_calculate_ae_cycles(struct adf_accel_dev *accel_dev, u32 sla_val,
+                              enum adf_base_services svc_type);
+u32 adf_rl_calculate_slice_tokens(struct adf_accel_dev *accel_dev, u32 sla_val,
+                                 enum adf_base_services svc_type);
+
+#endif /* ADF_RL_H_ */
diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl_admin.c b/drivers/crypto/intel/qat/qat_common/adf_rl_admin.c
new file mode 100644 (file)
index 0000000..698a14f
--- /dev/null
@@ -0,0 +1,97 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/dma-mapping.h>
+#include <linux/pci.h>
+
+#include "adf_admin.h"
+#include "adf_accel_devices.h"
+#include "adf_rl_admin.h"
+
+static void
+prep_admin_req_msg(struct rl_sla *sla, dma_addr_t dma_addr,
+                  struct icp_qat_fw_init_admin_sla_config_params *fw_params,
+                  struct icp_qat_fw_init_admin_req *req, bool is_update)
+{
+       req->cmd_id = is_update ? ICP_QAT_FW_RL_UPDATE : ICP_QAT_FW_RL_ADD;
+       req->init_cfg_ptr = dma_addr;
+       req->init_cfg_sz = sizeof(*fw_params);
+       req->node_id = sla->node_id;
+       req->node_type = sla->type;
+       req->rp_count = sla->ring_pairs_cnt;
+       req->svc_type = sla->srv;
+}
+
+static void
+prep_admin_req_params(struct adf_accel_dev *accel_dev, struct rl_sla *sla,
+                     struct icp_qat_fw_init_admin_sla_config_params *fw_params)
+{
+       fw_params->pcie_in_cir =
+               adf_rl_calculate_pci_bw(accel_dev, sla->cir, sla->srv, false);
+       fw_params->pcie_in_pir =
+               adf_rl_calculate_pci_bw(accel_dev, sla->pir, sla->srv, false);
+       fw_params->pcie_out_cir =
+               adf_rl_calculate_pci_bw(accel_dev, sla->cir, sla->srv, true);
+       fw_params->pcie_out_pir =
+               adf_rl_calculate_pci_bw(accel_dev, sla->pir, sla->srv, true);
+
+       fw_params->slice_util_cir =
+               adf_rl_calculate_slice_tokens(accel_dev, sla->cir, sla->srv);
+       fw_params->slice_util_pir =
+               adf_rl_calculate_slice_tokens(accel_dev, sla->pir, sla->srv);
+
+       fw_params->ae_util_cir =
+               adf_rl_calculate_ae_cycles(accel_dev, sla->cir, sla->srv);
+       fw_params->ae_util_pir =
+               adf_rl_calculate_ae_cycles(accel_dev, sla->pir, sla->srv);
+
+       memcpy(fw_params->rp_ids, sla->ring_pairs_ids,
+              sizeof(sla->ring_pairs_ids));
+}
+
+int adf_rl_send_admin_init_msg(struct adf_accel_dev *accel_dev,
+                              struct rl_slice_cnt *slices_int)
+{
+       struct icp_qat_fw_init_admin_slice_cnt slices_resp = { };
+       int ret;
+
+       ret = adf_send_admin_rl_init(accel_dev, &slices_resp);
+       if (ret)
+               return ret;
+
+       slices_int->dcpr_cnt = slices_resp.dcpr_cnt;
+       slices_int->pke_cnt = slices_resp.pke_cnt;
+       /* For symmetric crypto, slice tokens are relative to the UCS slice */
+       slices_int->cph_cnt = slices_resp.ucs_cnt;
+
+       return 0;
+}
+
+int adf_rl_send_admin_add_update_msg(struct adf_accel_dev *accel_dev,
+                                    struct rl_sla *sla, bool is_update)
+{
+       struct icp_qat_fw_init_admin_sla_config_params *fw_params;
+       struct icp_qat_fw_init_admin_req req = { };
+       dma_addr_t dma_addr;
+       int ret;
+
+       fw_params = dma_alloc_coherent(&GET_DEV(accel_dev), sizeof(*fw_params),
+                                      &dma_addr, GFP_KERNEL);
+       if (!fw_params)
+               return -ENOMEM;
+
+       prep_admin_req_params(accel_dev, sla, fw_params);
+       prep_admin_req_msg(sla, dma_addr, fw_params, &req, is_update);
+       ret = adf_send_admin_rl_add_update(accel_dev, &req);
+
+       dma_free_coherent(&GET_DEV(accel_dev), sizeof(*fw_params), fw_params,
+                         dma_addr);
+
+       return ret;
+}
+
+int adf_rl_send_admin_delete_msg(struct adf_accel_dev *accel_dev, u16 node_id,
+                                u8 node_type)
+{
+       return adf_send_admin_rl_delete(accel_dev, node_id, node_type);
+}
diff --git a/drivers/crypto/intel/qat/qat_common/adf_rl_admin.h b/drivers/crypto/intel/qat/qat_common/adf_rl_admin.h
new file mode 100644 (file)
index 0000000..dd5419b
--- /dev/null
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_RL_ADMIN_H_
+#define ADF_RL_ADMIN_H_
+
+#include <linux/types.h>
+
+#include "adf_rl.h"
+
+int adf_rl_send_admin_init_msg(struct adf_accel_dev *accel_dev,
+                              struct rl_slice_cnt *slices_int);
+int adf_rl_send_admin_add_update_msg(struct adf_accel_dev *accel_dev,
+                                    struct rl_sla *sla, bool is_update);
+int adf_rl_send_admin_delete_msg(struct adf_accel_dev *accel_dev, u16 node_id,
+                                u8 node_type);
+
+#endif /* ADF_RL_ADMIN_H_ */
index a74d2f93036709375c7939f553543ea34223bf74..ddffc98119c6b8d5ab9e1582ee69cf37c35200ce 100644 (file)
@@ -5,8 +5,11 @@
 #include <linux/pci.h>
 #include "adf_accel_devices.h"
 #include "adf_cfg.h"
+#include "adf_cfg_services.h"
 #include "adf_common_drv.h"
 
+#define UNSET_RING_NUM -1
+
 static const char * const state_operations[] = {
        [DEV_DOWN] = "down",
        [DEV_UP] = "up",
@@ -52,16 +55,25 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
        case DEV_DOWN:
                dev_info(dev, "Stopping device qat_dev%d\n", accel_id);
 
+               if (!adf_dev_started(accel_dev)) {
+                       dev_info(&GET_DEV(accel_dev), "Device qat_dev%d already down\n",
+                                accel_id);
+
+                       break;
+               }
+
                ret = adf_dev_down(accel_dev, true);
-               if (ret < 0)
-                       return -EINVAL;
+               if (ret)
+                       return ret;
 
                break;
        case DEV_UP:
                dev_info(dev, "Starting device qat_dev%d\n", accel_id);
 
                ret = adf_dev_up(accel_dev, true);
-               if (ret < 0) {
+               if (ret == -EALREADY) {
+                       break;
+               } else if (ret) {
                        dev_err(dev, "Failed to start device qat_dev%d\n",
                                accel_id);
                        adf_dev_down(accel_dev, true);
@@ -75,18 +87,6 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
        return count;
 }
 
-static const char * const services_operations[] = {
-       ADF_CFG_CY,
-       ADF_CFG_DC,
-       ADF_CFG_SYM,
-       ADF_CFG_ASYM,
-       ADF_CFG_ASYM_SYM,
-       ADF_CFG_ASYM_DC,
-       ADF_CFG_DC_ASYM,
-       ADF_CFG_SYM_DC,
-       ADF_CFG_DC_SYM,
-};
-
 static ssize_t cfg_services_show(struct device *dev, struct device_attribute *attr,
                                 char *buf)
 {
@@ -121,7 +121,7 @@ static ssize_t cfg_services_store(struct device *dev, struct device_attribute *a
        struct adf_accel_dev *accel_dev;
        int ret;
 
-       ret = sysfs_match_string(services_operations, buf);
+       ret = sysfs_match_string(adf_cfg_services, buf);
        if (ret < 0)
                return ret;
 
@@ -135,7 +135,7 @@ static ssize_t cfg_services_store(struct device *dev, struct device_attribute *a
                return -EINVAL;
        }
 
-       ret = adf_sysfs_update_dev_config(accel_dev, services_operations[ret]);
+       ret = adf_sysfs_update_dev_config(accel_dev, adf_cfg_services[ret]);
        if (ret < 0)
                return ret;
 
@@ -207,10 +207,86 @@ static DEVICE_ATTR_RW(pm_idle_enabled);
 static DEVICE_ATTR_RW(state);
 static DEVICE_ATTR_RW(cfg_services);
 
+static ssize_t rp2srv_show(struct device *dev, struct device_attribute *attr,
+                          char *buf)
+{
+       struct adf_hw_device_data *hw_data;
+       struct adf_accel_dev *accel_dev;
+       enum adf_cfg_service_type svc;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       hw_data = GET_HW_DATA(accel_dev);
+
+       if (accel_dev->sysfs.ring_num == UNSET_RING_NUM)
+               return -EINVAL;
+
+       down_read(&accel_dev->sysfs.lock);
+       svc = GET_SRV_TYPE(accel_dev, accel_dev->sysfs.ring_num %
+                                             hw_data->num_banks_per_vf);
+       up_read(&accel_dev->sysfs.lock);
+
+       switch (svc) {
+       case COMP:
+               return sysfs_emit(buf, "%s\n", ADF_CFG_DC);
+       case SYM:
+               return sysfs_emit(buf, "%s\n", ADF_CFG_SYM);
+       case ASYM:
+               return sysfs_emit(buf, "%s\n", ADF_CFG_ASYM);
+       default:
+               break;
+       }
+       return -EINVAL;
+}
+
+static ssize_t rp2srv_store(struct device *dev, struct device_attribute *attr,
+                           const char *buf, size_t count)
+{
+       struct adf_accel_dev *accel_dev;
+       unsigned int ring;
+       int num_rings, ret;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       ret = kstrtouint(buf, 10, &ring);
+       if (ret)
+               return ret;
+
+       num_rings = GET_MAX_BANKS(accel_dev);
+       if (ring >= num_rings) {
+               dev_err(&GET_DEV(accel_dev),
+                       "Device does not support more than %u ring pairs\n",
+                       num_rings);
+               return -EINVAL;
+       }
+
+       down_write(&accel_dev->sysfs.lock);
+       accel_dev->sysfs.ring_num = ring;
+       up_write(&accel_dev->sysfs.lock);
+
+       return count;
+}
+static DEVICE_ATTR_RW(rp2srv);
+
+static ssize_t num_rps_show(struct device *dev, struct device_attribute *attr,
+                           char *buf)
+{
+       struct adf_accel_dev *accel_dev;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       return sysfs_emit(buf, "%u\n", GET_MAX_BANKS(accel_dev));
+}
+static DEVICE_ATTR_RO(num_rps);
+
 static struct attribute *qat_attrs[] = {
        &dev_attr_state.attr,
        &dev_attr_cfg_services.attr,
        &dev_attr_pm_idle_enabled.attr,
+       &dev_attr_rp2srv.attr,
+       &dev_attr_num_rps.attr,
        NULL,
 };
 
@@ -229,6 +305,8 @@ int adf_sysfs_init(struct adf_accel_dev *accel_dev)
                        "Failed to create qat attribute group: %d\n", ret);
        }
 
+       accel_dev->sysfs.ring_num = UNSET_RING_NUM;
+
        return ret;
 }
 EXPORT_SYMBOL_GPL(adf_sysfs_init);
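
Taken together, num_rps and rp2srv implement a small query protocol: write a ring-pair index to rp2srv, then read it back to learn which service (sym, asym or dc) that ring pair is mapped to. A minimal user-space sketch, assuming the attributes sit under the device's qat group and using a placeholder PCI address:

    #include <stdio.h>

    #define QAT_SYSFS "/sys/bus/pci/devices/0000:6b:00.0/qat"

    int main(void)
    {
            unsigned int num_rps, rp;
            char svc[16];
            FILE *f;

            f = fopen(QAT_SYSFS "/num_rps", "r");
            if (!f || fscanf(f, "%u", &num_rps) != 1)
                    return 1;
            fclose(f);

            for (rp = 0; rp < num_rps; rp++) {
                    f = fopen(QAT_SYSFS "/rp2srv", "w");
                    if (!f)
                            return 1;
                    fprintf(f, "%u", rp);
                    if (fclose(f))
                            return 1;

                    f = fopen(QAT_SYSFS "/rp2srv", "r");
                    if (!f || fscanf(f, "%15s", svc) != 1)
                            return 1;
                    fclose(f);
                    printf("ring pair %u -> %s\n", rp, svc);
            }
            return 0;
    }
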
diff --git a/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.c b/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.c
new file mode 100644 (file)
index 0000000..cffe2d7
--- /dev/null
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#include <linux/sysfs.h>
+#include <linux/pci.h>
+#include <linux/string.h>
+
+#include "adf_common_drv.h"
+#include "adf_sysfs_ras_counters.h"
+
+static ssize_t errors_correctable_show(struct device *dev,
+                                      struct device_attribute *dev_attr,
+                                      char *buf)
+{
+       struct adf_accel_dev *accel_dev;
+       unsigned long counter;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       counter = ADF_RAS_ERR_CTR_READ(accel_dev->ras_errors, ADF_RAS_CORR);
+       return scnprintf(buf, PAGE_SIZE, "%ld\n", counter);
+}
+
+static ssize_t errors_nonfatal_show(struct device *dev,
+                                   struct device_attribute *dev_attr,
+                                   char *buf)
+{
+       struct adf_accel_dev *accel_dev;
+       unsigned long counter;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       counter = ADF_RAS_ERR_CTR_READ(accel_dev->ras_errors, ADF_RAS_UNCORR);
+       return scnprintf(buf, PAGE_SIZE, "%ld\n", counter);
+}
+
+static ssize_t errors_fatal_show(struct device *dev,
+                                struct device_attribute *dev_attr,
+                                char *buf)
+{
+       struct adf_accel_dev *accel_dev;
+       unsigned long counter;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       counter = ADF_RAS_ERR_CTR_READ(accel_dev->ras_errors, ADF_RAS_FATAL);
+       return scnprintf(buf, PAGE_SIZE, "%ld\n", counter);
+}
+
+static ssize_t reset_error_counters_store(struct device *dev,
+                                         struct device_attribute *dev_attr,
+                                         const char *buf, size_t count)
+{
+       struct adf_accel_dev *accel_dev;
+
+       if (buf[0] != '1' || count != 2)
+               return -EINVAL;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       ADF_RAS_ERR_CTR_CLEAR(accel_dev->ras_errors);
+
+       return count;
+}
+
+static DEVICE_ATTR_RO(errors_correctable);
+static DEVICE_ATTR_RO(errors_nonfatal);
+static DEVICE_ATTR_RO(errors_fatal);
+static DEVICE_ATTR_WO(reset_error_counters);
+
+static struct attribute *qat_ras_attrs[] = {
+       &dev_attr_errors_correctable.attr,
+       &dev_attr_errors_nonfatal.attr,
+       &dev_attr_errors_fatal.attr,
+       &dev_attr_reset_error_counters.attr,
+       NULL,
+};
+
+static struct attribute_group qat_ras_group = {
+       .attrs = qat_ras_attrs,
+       .name = "qat_ras",
+};
+
+void adf_sysfs_start_ras(struct adf_accel_dev *accel_dev)
+{
+       if (!accel_dev->ras_errors.enabled)
+               return;
+
+       ADF_RAS_ERR_CTR_CLEAR(accel_dev->ras_errors);
+
+       if (device_add_group(&GET_DEV(accel_dev), &qat_ras_group))
+               dev_err(&GET_DEV(accel_dev),
+                       "Failed to create qat_ras attribute group.\n");
+}
+
+void adf_sysfs_stop_ras(struct adf_accel_dev *accel_dev)
+{
+       if (!accel_dev->ras_errors.enabled)
+               return;
+
+       device_remove_group(&GET_DEV(accel_dev), &qat_ras_group);
+
+       ADF_RAS_ERR_CTR_CLEAR(accel_dev->ras_errors);
+}
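
The qat_ras group exposes three running error counters plus a write-only reset. A small user-space sketch that dumps and clears them, with the device path again a placeholder; note the store handler accepts exactly the two-byte string "1\n":

    #include <stdio.h>

    #define RAS_DIR "/sys/bus/pci/devices/0000:6b:00.0/qat_ras"

    static long read_counter(const char *name)
    {
            char path[128];
            long val = -1;
            FILE *f;

            snprintf(path, sizeof(path), RAS_DIR "/%s", name);
            f = fopen(path, "r");
            if (!f)
                    return -1;
            if (fscanf(f, "%ld", &val) != 1)
                    val = -1;
            fclose(f);
            return val;
    }

    int main(void)
    {
            FILE *f;

            printf("correctable: %ld\n", read_counter("errors_correctable"));
            printf("nonfatal:    %ld\n", read_counter("errors_nonfatal"));
            printf("fatal:       %ld\n", read_counter("errors_fatal"));

            /* The store handler accepts exactly "1\n". */
            f = fopen(RAS_DIR "/reset_error_counters", "w");
            if (!f)
                    return 1;
            fprintf(f, "1\n");
            return fclose(f) ? 1 : 0;
    }
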
diff --git a/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.h b/drivers/crypto/intel/qat/qat_common/adf_sysfs_ras_counters.h
new file mode 100644 (file)
index 0000000..99e9d9c
--- /dev/null
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+
+#ifndef ADF_RAS_H
+#define ADF_RAS_H
+
+#include <linux/bitops.h>
+#include <linux/atomic.h>
+
+struct adf_accel_dev;
+
+void adf_sysfs_start_ras(struct adf_accel_dev *accel_dev);
+void adf_sysfs_stop_ras(struct adf_accel_dev *accel_dev);
+
+#define ADF_RAS_ERR_CTR_READ(ras_errors, ERR) \
+       atomic_read(&(ras_errors).counter[ERR])
+
+#define ADF_RAS_ERR_CTR_CLEAR(ras_errors) \
+       do { \
+               for (int err = 0; err < ADF_RAS_ERRORS; ++err) \
+                       atomic_set(&(ras_errors).counter[err], 0); \
+       } while (0)
+
+#define ADF_RAS_ERR_CTR_INC(ras_errors, ERR) \
+       atomic_inc(&(ras_errors).counter[ERR])
+
+#endif /* ADF_RAS_H */
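
One subtlety in the macros above: ADF_RAS_ERR_CTR_CLEAR clears each counter atomically but loops over the array, so the reset is not a snapshot of all counters at once; an increment can land between iterations and survive. The same semantics in portable C11, with atomic_int standing in for the kernel's atomic_t and ADF_RAS_ERRORS assumed to be three:

    #include <stdatomic.h>
    #include <stdio.h>

    enum { RAS_CORR, RAS_UNCORR, RAS_FATAL, RAS_ERRORS };

    static atomic_int counter[RAS_ERRORS];

    static void ctr_clear(void)
    {
            /* Each slot is cleared atomically, but the loop as a whole
             * is not: an increment can land between two iterations. */
            for (int err = 0; err < RAS_ERRORS; ++err)
                    atomic_store(&counter[err], 0);
    }

    int main(void)
    {
            atomic_fetch_add(&counter[RAS_CORR], 1);
            printf("corr=%d\n", atomic_load(&counter[RAS_CORR]));
            ctr_clear();
            printf("corr=%d\n", atomic_load(&counter[RAS_CORR]));
            return 0;
    }
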
diff --git a/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.c b/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.c
new file mode 100644 (file)
index 0000000..abf9c52
--- /dev/null
@@ -0,0 +1,451 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+
+#define dev_fmt(fmt) "RateLimiting: " fmt
+
+#include <linux/dev_printk.h>
+#include <linux/pci.h>
+#include <linux/sysfs.h>
+#include <linux/types.h>
+
+#include "adf_common_drv.h"
+#include "adf_rl.h"
+#include "adf_sysfs_rl.h"
+
+#define GET_RL_STRUCT(accel_dev) ((accel_dev)->rate_limiting->user_input)
+
+enum rl_ops {
+       ADD,
+       UPDATE,
+       RM,
+       RM_ALL,
+       GET,
+};
+
+enum rl_params {
+       RP_MASK,
+       ID,
+       CIR,
+       PIR,
+       SRV,
+       CAP_REM_SRV,
+};
+
+static const char *const rl_services[] = {
+       [ADF_SVC_ASYM] = "asym",
+       [ADF_SVC_SYM] = "sym",
+       [ADF_SVC_DC] = "dc",
+};
+
+static const char *const rl_operations[] = {
+       [ADD] = "add",
+       [UPDATE] = "update",
+       [RM] = "rm",
+       [RM_ALL] = "rm_all",
+       [GET] = "get",
+};
+
+static int set_param_u(struct device *dev, enum rl_params param, u64 set)
+{
+       struct adf_rl_interface_data *data;
+       struct adf_accel_dev *accel_dev;
+       int ret = 0;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       data = &GET_RL_STRUCT(accel_dev);
+
+       down_write(&data->lock);
+       switch (param) {
+       case RP_MASK:
+               data->input.rp_mask = set;
+               break;
+       case CIR:
+               data->input.cir = set;
+               break;
+       case PIR:
+               data->input.pir = set;
+               break;
+       case SRV:
+               data->input.srv = set;
+               break;
+       case CAP_REM_SRV:
+               data->cap_rem_srv = set;
+               break;
+       default:
+               ret = -EINVAL;
+               break;
+       }
+       up_write(&data->lock);
+
+       return ret;
+}
+
+static int set_param_s(struct device *dev, enum rl_params param, int set)
+{
+       struct adf_rl_interface_data *data;
+       struct adf_accel_dev *accel_dev;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev || param != ID)
+               return -EINVAL;
+
+       data = &GET_RL_STRUCT(accel_dev);
+
+       down_write(&data->lock);
+       data->input.sla_id = set;
+       up_write(&data->lock);
+
+       return 0;
+}
+
+static int get_param_u(struct device *dev, enum rl_params param, u64 *get)
+{
+       struct adf_rl_interface_data *data;
+       struct adf_accel_dev *accel_dev;
+       int ret = 0;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       data = &GET_RL_STRUCT(accel_dev);
+
+       down_read(&data->lock);
+       switch (param) {
+       case RP_MASK:
+               *get = data->input.rp_mask;
+               break;
+       case CIR:
+               *get = data->input.cir;
+               break;
+       case PIR:
+               *get = data->input.pir;
+               break;
+       case SRV:
+               *get = data->input.srv;
+               break;
+       default:
+               ret = -EINVAL;
+       }
+       up_read(&data->lock);
+
+       return ret;
+}
+
+static int get_param_s(struct device *dev, enum rl_params param)
+{
+       struct adf_rl_interface_data *data;
+       struct adf_accel_dev *accel_dev;
+       int ret = 0;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       data = &GET_RL_STRUCT(accel_dev);
+
+       down_read(&data->lock);
+       if (param == ID)
+               ret = data->input.sla_id;
+       up_read(&data->lock);
+
+       return ret;
+}
+
+static ssize_t rp_show(struct device *dev, struct device_attribute *attr,
+                      char *buf)
+{
+       int ret;
+       u64 get;
+
+       ret = get_param_u(dev, RP_MASK, &get);
+       if (ret)
+               return ret;
+
+       return sysfs_emit(buf, "%#llx\n", get);
+}
+
+static ssize_t rp_store(struct device *dev, struct device_attribute *attr,
+                       const char *buf, size_t count)
+{
+       int err;
+       u64 val;
+
+       err = kstrtou64(buf, 16, &val);
+       if (err)
+               return err;
+
+       err = set_param_u(dev, RP_MASK, val);
+       if (err)
+               return err;
+
+       return count;
+}
+static DEVICE_ATTR_RW(rp);
+
+static ssize_t id_show(struct device *dev, struct device_attribute *attr,
+                      char *buf)
+{
+       return sysfs_emit(buf, "%d\n", get_param_s(dev, ID));
+}
+
+static ssize_t id_store(struct device *dev, struct device_attribute *attr,
+                       const char *buf, size_t count)
+{
+       int err;
+       int val;
+
+       err = kstrtoint(buf, 10, &val);
+       if (err)
+               return err;
+
+       err = set_param_s(dev, ID, val);
+       if (err)
+               return err;
+
+       return count;
+}
+static DEVICE_ATTR_RW(id);
+
+static ssize_t cir_show(struct device *dev, struct device_attribute *attr,
+                       char *buf)
+{
+       int ret;
+       u64 get;
+
+       ret = get_param_u(dev, CIR, &get);
+       if (ret)
+               return ret;
+
+       return sysfs_emit(buf, "%llu\n", get);
+}
+
+static ssize_t cir_store(struct device *dev, struct device_attribute *attr,
+                        const char *buf, size_t count)
+{
+       unsigned int val;
+       int err;
+
+       err = kstrtouint(buf, 10, &val);
+       if (err)
+               return err;
+
+       err = set_param_u(dev, CIR, val);
+       if (err)
+               return err;
+
+       return count;
+}
+static DEVICE_ATTR_RW(cir);
+
+static ssize_t pir_show(struct device *dev, struct device_attribute *attr,
+                       char *buf)
+{
+       int ret;
+       u64 get;
+
+       ret = get_param_u(dev, PIR, &get);
+       if (ret)
+               return ret;
+
+       return sysfs_emit(buf, "%llu\n", get);
+}
+
+static ssize_t pir_store(struct device *dev, struct device_attribute *attr,
+                        const char *buf, size_t count)
+{
+       unsigned int val;
+       int err;
+
+       err = kstrtouint(buf, 10, &val);
+       if (err)
+               return err;
+
+       err = set_param_u(dev, PIR, val);
+       if (err)
+               return err;
+
+       return count;
+}
+static DEVICE_ATTR_RW(pir);
+
+static ssize_t srv_show(struct device *dev, struct device_attribute *attr,
+                       char *buf)
+{
+       int ret;
+       u64 get;
+
+       ret = get_param_u(dev, SRV, &get);
+       if (ret)
+               return ret;
+
+       if (get == ADF_SVC_NONE)
+               return -EINVAL;
+
+       return sysfs_emit(buf, "%s\n", rl_services[get]);
+}
+
+static ssize_t srv_store(struct device *dev, struct device_attribute *attr,
+                        const char *buf, size_t count)
+{
+       unsigned int val;
+       int ret;
+
+       ret = sysfs_match_string(rl_services, buf);
+       if (ret < 0)
+               return ret;
+
+       val = ret;
+       ret = set_param_u(dev, SRV, val);
+       if (ret)
+               return ret;
+
+       return count;
+}
+static DEVICE_ATTR_RW(srv);
+
+static ssize_t cap_rem_show(struct device *dev, struct device_attribute *attr,
+                           char *buf)
+{
+       struct adf_rl_interface_data *data;
+       struct adf_accel_dev *accel_dev;
+       int ret, rem_cap;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       data = &GET_RL_STRUCT(accel_dev);
+
+       down_read(&data->lock);
+       rem_cap = adf_rl_get_capability_remaining(accel_dev, data->cap_rem_srv,
+                                                 RL_SLA_EMPTY_ID);
+       up_read(&data->lock);
+       if (rem_cap < 0)
+               return rem_cap;
+
+       ret = sysfs_emit(buf, "%u\n", rem_cap);
+
+       return ret;
+}
+
+static ssize_t cap_rem_store(struct device *dev, struct device_attribute *attr,
+                            const char *buf, size_t count)
+{
+       unsigned int val;
+       int ret;
+
+       ret = sysfs_match_string(rl_services, buf);
+       if (ret < 0)
+               return ret;
+
+       val = ret;
+       ret = set_param_u(dev, CAP_REM_SRV, val);
+       if (ret)
+               return ret;
+
+       return count;
+}
+static DEVICE_ATTR_RW(cap_rem);
+
+static ssize_t sla_op_store(struct device *dev, struct device_attribute *attr,
+                           const char *buf, size_t count)
+{
+       struct adf_rl_interface_data *data;
+       struct adf_accel_dev *accel_dev;
+       int ret;
+
+       accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+       if (!accel_dev)
+               return -EINVAL;
+
+       data = &GET_RL_STRUCT(accel_dev);
+
+       ret = sysfs_match_string(rl_operations, buf);
+       if (ret < 0)
+               return ret;
+
+       down_write(&data->lock);
+       switch (ret) {
+       case ADD:
+               data->input.parent_id = RL_PARENT_DEFAULT_ID;
+               data->input.type = RL_LEAF;
+               data->input.sla_id = 0;
+               ret = adf_rl_add_sla(accel_dev, &data->input);
+               if (ret)
+                       goto err_free_lock;
+               break;
+       case UPDATE:
+               ret = adf_rl_update_sla(accel_dev, &data->input);
+               if (ret)
+                       goto err_free_lock;
+               break;
+       case RM:
+               ret = adf_rl_remove_sla(accel_dev, data->input.sla_id);
+               if (ret)
+                       goto err_free_lock;
+               break;
+       case RM_ALL:
+               adf_rl_remove_sla_all(accel_dev, false);
+               break;
+       case GET:
+               ret = adf_rl_get_sla(accel_dev, &data->input);
+               if (ret)
+                       goto err_free_lock;
+               break;
+       default:
+               ret = -EINVAL;
+               goto err_free_lock;
+       }
+       up_write(&data->lock);
+
+       return count;
+
+err_free_lock:
+       up_write(&data->lock);
+
+       return ret;
+}
+static DEVICE_ATTR_WO(sla_op);
+
+static struct attribute *qat_rl_attrs[] = {
+       &dev_attr_rp.attr,
+       &dev_attr_id.attr,
+       &dev_attr_cir.attr,
+       &dev_attr_pir.attr,
+       &dev_attr_srv.attr,
+       &dev_attr_cap_rem.attr,
+       &dev_attr_sla_op.attr,
+       NULL,
+};
+
+static struct attribute_group qat_rl_group = {
+       .attrs = qat_rl_attrs,
+       .name = "qat_rl",
+};
+
+int adf_sysfs_rl_add(struct adf_accel_dev *accel_dev)
+{
+       struct adf_rl_interface_data *data;
+       int ret;
+
+       data = &GET_RL_STRUCT(accel_dev);
+
+       ret = device_add_group(&GET_DEV(accel_dev), &qat_rl_group);
+       if (ret)
+               dev_err(&GET_DEV(accel_dev),
+                       "Failed to create qat_rl attribute group\n");
+
+       data->cap_rem_srv = ADF_SVC_NONE;
+       data->input.srv = ADF_SVC_NONE;
+
+       return ret;
+}
+
+void adf_sysfs_rl_rm(struct adf_accel_dev *accel_dev)
+{
+       device_remove_group(&GET_DEV(accel_dev), &qat_rl_group);
+}
diff --git a/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.h b/drivers/crypto/intel/qat/qat_common/adf_sysfs_rl.h
new file mode 100644 (file)
index 0000000..22d36aa
--- /dev/null
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2023 Intel Corporation */
+#ifndef ADF_SYSFS_RL_H_
+#define ADF_SYSFS_RL_H_
+
+struct adf_accel_dev;
+
+int adf_sysfs_rl_add(struct adf_accel_dev *accel_dev);
+void adf_sysfs_rl_rm(struct adf_accel_dev *accel_dev);
+
+#endif /* ADF_SYSFS_RL_H_ */
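
The qat_rl group behaves like a small register file: user space loads the input attributes (srv, rp, cir, pir, id), then writes an operation to sla_op to commit them. A user-space sketch of adding a symmetric-crypto SLA, assuming a placeholder device path and assuming adf_rl_add_sla() writes the allocated id back so the id attribute reports it afterwards:

    #include <stdio.h>

    #define RL_DIR "/sys/bus/pci/devices/0000:6b:00.0/qat_rl"

    static int wr(const char *name, const char *val)
    {
            char path[128];
            FILE *f;

            snprintf(path, sizeof(path), RL_DIR "/%s", name);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%s", val);
            return fclose(f);
    }

    int main(void)
    {
            int id = -1;
            FILE *f;

            if (wr("srv", "sym") || wr("rp", "0x3") ||
                wr("cir", "1000") || wr("pir", "2000") ||
                wr("sla_op", "add"))
                    return 1;

            f = fopen(RL_DIR "/id", "r");
            if (!f || fscanf(f, "%d", &id) != 1)
                    return 1;
            fclose(f);
            printf("new SLA id: %d\n", id);
            return 0;
    }
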
index 08bca1c506c0efb44a995fb95f74b843afc99791..e2dd568b87b519a00efeeeb03e551eaf28b3f206 100644 (file)
@@ -90,7 +90,7 @@ DEFINE_SEQ_ATTRIBUTE(adf_ring_debug);
 int adf_ring_debugfs_add(struct adf_etr_ring_data *ring, const char *name)
 {
        struct adf_etr_ring_debug_entry *ring_debug;
-       char entry_name[8];
+       char entry_name[16];
 
        ring_debug = kzalloc(sizeof(*ring_debug), GFP_KERNEL);
        if (!ring_debug)
@@ -192,7 +192,7 @@ int adf_bank_debugfs_add(struct adf_etr_bank_data *bank)
 {
        struct adf_accel_dev *accel_dev = bank->accel_dev;
        struct dentry *parent = accel_dev->transport->debug;
-       char name[8];
+       char name[16];
 
        snprintf(name, sizeof(name), "bank_%02d", bank->bank_number);
        bank->bank_debug_dir = debugfs_create_dir(name, parent);
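
The two buffers grow from 8 to 16 bytes because snprintf() truncates silently once bank or ring numbers need three digits. A standalone demonstration of the failure the hunks above fix:

    #include <stdio.h>

    int main(void)
    {
            char name[8];
            int n = snprintf(name, sizeof(name), "bank_%02d", 123);

            /* n = 8 is the length snprintf *wanted*; only "bank_12"
             * plus the terminator fits in the 8-byte buffer. */
            printf("needed %d, got \"%s\"\n", n, name);
            return 0;
    }
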
index 3e968a4bcc9cd51e8c52c77220694e517c565fa6..cd418b51d9f351534c60ee0758810240c6ef7b99 100644 (file)
@@ -5,6 +5,8 @@
 
 #include "icp_qat_fw.h"
 
+#define RL_MAX_RP_IDS 16
+
 enum icp_qat_fw_init_admin_cmd_id {
        ICP_QAT_FW_INIT_AE = 0,
        ICP_QAT_FW_TRNG_ENABLE = 1,
@@ -16,9 +18,17 @@ enum icp_qat_fw_init_admin_cmd_id {
        ICP_QAT_FW_HEARTBEAT_SYNC = 7,
        ICP_QAT_FW_HEARTBEAT_GET = 8,
        ICP_QAT_FW_COMP_CAPABILITY_GET = 9,
+       ICP_QAT_FW_CRYPTO_CAPABILITY_GET = 10,
+       ICP_QAT_FW_DC_CHAIN_INIT = 11,
        ICP_QAT_FW_HEARTBEAT_TIMER_SET = 13,
+       ICP_QAT_FW_RL_INIT = 15,
        ICP_QAT_FW_TIMER_GET = 19,
+       ICP_QAT_FW_CNV_STATS_GET = 20,
        ICP_QAT_FW_PM_STATE_CONFIG = 128,
+       ICP_QAT_FW_PM_INFO = 129,
+       ICP_QAT_FW_RL_ADD = 134,
+       ICP_QAT_FW_RL_UPDATE = 135,
+       ICP_QAT_FW_RL_REMOVE = 136,
 };
 
 enum icp_qat_fw_init_admin_resp_status {
@@ -26,6 +36,30 @@ enum icp_qat_fw_init_admin_resp_status {
        ICP_QAT_FW_INIT_RESP_STATUS_FAIL
 };
 
+struct icp_qat_fw_init_admin_slice_cnt {
+       __u8 cpr_cnt;
+       __u8 xlt_cnt;
+       __u8 dcpr_cnt;
+       __u8 pke_cnt;
+       __u8 wat_cnt;
+       __u8 wcp_cnt;
+       __u8 ucs_cnt;
+       __u8 cph_cnt;
+       __u8 ath_cnt;
+};
+
+struct icp_qat_fw_init_admin_sla_config_params {
+       __u32 pcie_in_cir;
+       __u32 pcie_in_pir;
+       __u32 pcie_out_cir;
+       __u32 pcie_out_pir;
+       __u32 slice_util_cir;
+       __u32 slice_util_pir;
+       __u32 ae_util_cir;
+       __u32 ae_util_pir;
+       __u16 rp_ids[RL_MAX_RP_IDS];
+};
+
 struct icp_qat_fw_init_admin_req {
        __u16 init_cfg_sz;
        __u8 resrvd1;
@@ -45,6 +79,13 @@ struct icp_qat_fw_init_admin_req {
                struct {
                        __u32 heartbeat_ticks;
                };
+               struct {
+                       __u16 node_id;
+                       __u8 node_type;
+                       __u8 svc_type;
+                       __u8 resrvd5[3];
+                       __u8 rp_count;
+               };
                __u32 idle_filter;
        };
 
@@ -63,6 +104,10 @@ struct icp_qat_fw_init_admin_resp {
                        __u16 version_major_num;
                };
                __u32 extended_features;
+               struct {
+                       __u16 error_count;
+                       __u16 latest_error;
+               };
        };
        __u64 opaque_data;
        union {
@@ -102,9 +147,46 @@ struct icp_qat_fw_init_admin_resp {
                        __u32 unsuccessful_count;
                        __u64 resrvd8;
                };
+               struct icp_qat_fw_init_admin_slice_cnt slices;
+               __u16 fw_capabilities;
        };
 } __packed;
 
 #define ICP_QAT_FW_SYNC ICP_QAT_FW_HEARTBEAT_SYNC
+#define ICP_QAT_FW_CAPABILITIES_GET ICP_QAT_FW_CRYPTO_CAPABILITY_GET
+
+#define ICP_QAT_NUMBER_OF_PM_EVENTS 8
+
+struct icp_qat_fw_init_admin_pm_info {
+       __u16 max_pwrreq;
+       __u16 min_pwrreq;
+       __u16 resvrd1;
+       __u8 pwr_state;
+       __u8 resvrd2;
+       __u32 fusectl0;
+       struct_group(event_counters,
+               __u32 sys_pm;
+               __u32 host_msg;
+               __u32 unknown;
+               __u32 local_ssm;
+               __u32 timer;
+       );
+       __u32 event_log[ICP_QAT_NUMBER_OF_PM_EVENTS];
+       struct_group(pm,
+               __u32 fw_init;
+               __u32 pwrreq;
+               __u32 status;
+               __u32 main;
+               __u32 thread;
+       );
+       struct_group(ssm,
+               __u32 pm_enable;
+               __u32 pm_active_status;
+               __u32 pm_managed_status;
+               __u32 pm_domain_status;
+               __u32 active_constraint;
+       );
+       __u32 resvrd3[6];
+};
 
 #endif
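
icp_qat_fw_init_admin_sla_config_params above is a fixed wire format: eight 32-bit rate/utilisation fields followed by sixteen 16-bit ring-pair ids, 64 bytes with no padding. A host-side mirror using <stdint.h> types that pins the size at compile time:

    #include <stdint.h>
    #include <stdio.h>

    #define RL_MAX_RP_IDS 16

    /* Mirror of the kernel struct with userspace fixed-width types. */
    struct sla_config_params {
            uint32_t pcie_in_cir;
            uint32_t pcie_in_pir;
            uint32_t pcie_out_cir;
            uint32_t pcie_out_pir;
            uint32_t slice_util_cir;
            uint32_t slice_util_pir;
            uint32_t ae_util_cir;
            uint32_t ae_util_pir;
            uint16_t rp_ids[RL_MAX_RP_IDS];
    };

    _Static_assert(sizeof(struct sla_config_params) == 64,
                   "SLA config params must be 64 bytes on the wire");

    int main(void)
    {
            printf("%zu\n", sizeof(struct sla_config_params));
            return 0;
    }
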
index 0c8883e2ccc6dc1979ac32811b2c72de38a27a3e..eb2ef225bcee16cdb8d62878aebec84001865c23 100644 (file)
@@ -3,6 +3,8 @@
 #ifndef _ICP_QAT_HW_H_
 #define _ICP_QAT_HW_H_
 
+#include <linux/bits.h>
+
 enum icp_qat_hw_ae_id {
        ICP_QAT_HW_AE_0 = 0,
        ICP_QAT_HW_AE_1 = 1,
index bb80455b3e81e2e83dd5b384f14111ded7386a8d..b97b678823a9756c2f49e79f8500bfa1acf41d1b 100644 (file)
@@ -40,40 +40,44 @@ void qat_alg_send_backlog(struct qat_instance_backlog *backlog)
        spin_unlock_bh(&backlog->lock);
 }
 
-static void qat_alg_backlog_req(struct qat_alg_req *req,
-                               struct qat_instance_backlog *backlog)
-{
-       INIT_LIST_HEAD(&req->list);
-
-       spin_lock_bh(&backlog->lock);
-       list_add_tail(&req->list, &backlog->list);
-       spin_unlock_bh(&backlog->lock);
-}
-
-static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
+static bool qat_alg_try_enqueue(struct qat_alg_req *req)
 {
        struct qat_instance_backlog *backlog = req->backlog;
        struct adf_etr_ring_data *tx_ring = req->tx_ring;
        u32 *fw_req = req->fw_req;
 
-       /* If any request is already backlogged, then add to backlog list */
+       /* Check if any request is already backlogged */
        if (!list_empty(&backlog->list))
-               goto enqueue;
+               return false;
 
-       /* If ring is nearly full, then add to backlog list */
+       /* Check if ring is nearly full */
        if (adf_ring_nearly_full(tx_ring))
-               goto enqueue;
+               return false;
 
-       /* If adding request to HW ring fails, then add to backlog list */
+       /* Try to enqueue to HW ring */
        if (adf_send_message(tx_ring, fw_req))
-               goto enqueue;
+               return false;
 
-       return -EINPROGRESS;
+       return true;
+}
 
-enqueue:
-       qat_alg_backlog_req(req, backlog);
 
-       return -EBUSY;
+static int qat_alg_send_message_maybacklog(struct qat_alg_req *req)
+{
+       struct qat_instance_backlog *backlog = req->backlog;
+       int ret = -EINPROGRESS;
+
+       if (qat_alg_try_enqueue(req))
+               return ret;
+
+       spin_lock_bh(&backlog->lock);
+       if (!qat_alg_try_enqueue(req)) {
+               list_add_tail(&req->list, &backlog->list);
+               ret = -EBUSY;
+       }
+       spin_unlock_bh(&backlog->lock);
+
+       return ret;
 }
 
 int qat_alg_send_message(struct qat_alg_req *req)
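
The rework above closes a race in the old goto-based flow: emptiness of the backlog and fullness of the ring were tested without the backlog lock, so a request could be parked on the list just after the ring drained and never be resubmitted. The new code retries the lockless fast path under the lock before queueing, a double-checked pattern sketched here in portable C, with a pthread mutex standing in for the spinlock and try_submit() standing in for qat_alg_try_enqueue():

    #include <pthread.h>
    #include <stdbool.h>

    struct backlog {
            pthread_mutex_t lock;
            int queued;
    };

    static bool try_submit(struct backlog *bl)
    {
            /* Placeholder for "backlog empty, ring not full, HW
             * accepted the request". */
            return bl->queued == 0;
    }

    static int send_maybacklog(struct backlog *bl)
    {
            int ret = 0;

            if (try_submit(bl))
                    return ret;             /* fast path, lock not taken */

            pthread_mutex_lock(&bl->lock);
            if (!try_submit(bl)) {          /* re-check under the lock */
                    bl->queued++;           /* really full: park it */
                    ret = -1;
            }
            pthread_mutex_unlock(&bl->lock);
            return ret;
    }

    int main(void)
    {
            struct backlog bl = { PTHREAD_MUTEX_INITIALIZER, 0 };

            return send_maybacklog(&bl);
    }
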
index b533984906ece67a5a6a27a5e8b5f331403861ff..bf8c0ee629175ec55c0aa064d91ca5bd7ce85d1d 100644 (file)
@@ -109,69 +109,6 @@ err:
        acomp_request_complete(areq, ret);
 }
 
-static int parse_zlib_header(u16 zlib_h)
-{
-       int ret = -EINVAL;
-       __be16 header;
-       u8 *header_p;
-       u8 cmf, flg;
-
-       header = cpu_to_be16(zlib_h);
-       header_p = (u8 *)&header;
-
-       flg = header_p[0];
-       cmf = header_p[1];
-
-       if (cmf >> QAT_RFC_1950_CM_OFFSET > QAT_RFC_1950_CM_DEFLATE_CINFO_32K)
-               return ret;
-
-       if ((cmf & QAT_RFC_1950_CM_MASK) != QAT_RFC_1950_CM_DEFLATE)
-               return ret;
-
-       if (flg & QAT_RFC_1950_DICT_MASK)
-               return ret;
-
-       return 0;
-}
-
-static int qat_comp_rfc1950_callback(struct qat_compression_req *qat_req,
-                                    void *resp)
-{
-       struct acomp_req *areq = qat_req->acompress_req;
-       enum direction dir = qat_req->dir;
-       __be32 qat_produced_adler;
-
-       qat_produced_adler = cpu_to_be32(qat_comp_get_produced_adler32(resp));
-
-       if (dir == COMPRESSION) {
-               __be16 zlib_header;
-
-               zlib_header = cpu_to_be16(QAT_RFC_1950_COMP_HDR);
-               scatterwalk_map_and_copy(&zlib_header, areq->dst, 0, QAT_RFC_1950_HDR_SIZE, 1);
-               areq->dlen += QAT_RFC_1950_HDR_SIZE;
-
-               scatterwalk_map_and_copy(&qat_produced_adler, areq->dst, areq->dlen,
-                                        QAT_RFC_1950_FOOTER_SIZE, 1);
-               areq->dlen += QAT_RFC_1950_FOOTER_SIZE;
-       } else {
-               __be32 decomp_adler;
-               int footer_offset;
-               int consumed;
-
-               consumed = qat_comp_get_consumed_ctr(resp);
-               footer_offset = consumed + QAT_RFC_1950_HDR_SIZE;
-               if (footer_offset + QAT_RFC_1950_FOOTER_SIZE > areq->slen)
-                       return -EBADMSG;
-
-               scatterwalk_map_and_copy(&decomp_adler, areq->src, footer_offset,
-                                        QAT_RFC_1950_FOOTER_SIZE, 0);
-
-               if (qat_produced_adler != decomp_adler)
-                       return -EBADMSG;
-       }
-       return 0;
-}
-
 static void qat_comp_generic_callback(struct qat_compression_req *qat_req,
                                      void *resp)
 {
@@ -293,18 +230,6 @@ static void qat_comp_alg_exit_tfm(struct crypto_acomp *acomp_tfm)
        memset(ctx, 0, sizeof(*ctx));
 }
 
-static int qat_comp_alg_rfc1950_init_tfm(struct crypto_acomp *acomp_tfm)
-{
-       struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
-       struct qat_compression_ctx *ctx = crypto_tfm_ctx(tfm);
-       int ret;
-
-       ret = qat_comp_alg_init_tfm(acomp_tfm);
-       ctx->qat_comp_callback = &qat_comp_rfc1950_callback;
-
-       return ret;
-}
-
 static int qat_comp_alg_compress_decompress(struct acomp_req *areq, enum direction dir,
                                            unsigned int shdr, unsigned int sftr,
                                            unsigned int dhdr, unsigned int dftr)
@@ -400,43 +325,6 @@ static int qat_comp_alg_decompress(struct acomp_req *req)
        return qat_comp_alg_compress_decompress(req, DECOMPRESSION, 0, 0, 0, 0);
 }
 
-static int qat_comp_alg_rfc1950_compress(struct acomp_req *req)
-{
-       if (!req->dst && req->dlen != 0)
-               return -EINVAL;
-
-       if (req->dst && req->dlen <= QAT_RFC_1950_HDR_SIZE + QAT_RFC_1950_FOOTER_SIZE)
-               return -EINVAL;
-
-       return qat_comp_alg_compress_decompress(req, COMPRESSION, 0, 0,
-                                               QAT_RFC_1950_HDR_SIZE,
-                                               QAT_RFC_1950_FOOTER_SIZE);
-}
-
-static int qat_comp_alg_rfc1950_decompress(struct acomp_req *req)
-{
-       struct crypto_acomp *acomp_tfm = crypto_acomp_reqtfm(req);
-       struct crypto_tfm *tfm = crypto_acomp_tfm(acomp_tfm);
-       struct qat_compression_ctx *ctx = crypto_tfm_ctx(tfm);
-       struct adf_accel_dev *accel_dev = ctx->inst->accel_dev;
-       u16 zlib_header;
-       int ret;
-
-       if (req->slen <= QAT_RFC_1950_HDR_SIZE + QAT_RFC_1950_FOOTER_SIZE)
-               return -EBADMSG;
-
-       scatterwalk_map_and_copy(&zlib_header, req->src, 0, QAT_RFC_1950_HDR_SIZE, 0);
-
-       ret = parse_zlib_header(zlib_header);
-       if (ret) {
-               dev_dbg(&GET_DEV(accel_dev), "Error parsing zlib header\n");
-               return ret;
-       }
-
-       return qat_comp_alg_compress_decompress(req, DECOMPRESSION, QAT_RFC_1950_HDR_SIZE,
-                                               QAT_RFC_1950_FOOTER_SIZE, 0, 0);
-}
-
 static struct acomp_alg qat_acomp[] = { {
        .base = {
                .cra_name = "deflate",
@@ -452,22 +340,7 @@ static struct acomp_alg qat_acomp[] = { {
        .decompress = qat_comp_alg_decompress,
        .dst_free = sgl_free,
        .reqsize = sizeof(struct qat_compression_req),
-}, {
-       .base = {
-               .cra_name = "zlib-deflate",
-               .cra_driver_name = "qat_zlib_deflate",
-               .cra_priority = 4001,
-               .cra_flags = CRYPTO_ALG_ASYNC,
-               .cra_ctxsize = sizeof(struct qat_compression_ctx),
-               .cra_module = THIS_MODULE,
-       },
-       .init = qat_comp_alg_rfc1950_init_tfm,
-       .exit = qat_comp_alg_exit_tfm,
-       .compress = qat_comp_alg_rfc1950_compress,
-       .decompress = qat_comp_alg_rfc1950_decompress,
-       .dst_free = sgl_free,
-       .reqsize = sizeof(struct qat_compression_req),
-} };
+}};
 
 int qat_comp_algs_register(void)
 {
index 4bd150d1441a02aecb9b4d81e97327fd3af8f538..e27ea7e28c51b07b586f37480dbea4b016c7f65a 100644 (file)
@@ -200,7 +200,7 @@ static int qat_uclo_parse_num(char *str, unsigned int *num)
        unsigned long ae = 0;
        int i;
 
-       strncpy(buf, str, 15);
+       strscpy(buf, str, sizeof(buf));
        for (i = 0; i < 16; i++) {
                if (!isdigit(buf[i])) {
                        buf[i] = '\0';
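
strscpy() is the safer replacement here because, unlike strncpy(), it guarantees a NUL-terminated destination and reports truncation. A portable sketch of that termination guarantee, assuming a non-zero destination size (the kernel helper additionally returns -E2BIG on truncation, which this sketch does not model):

    #include <stdio.h>
    #include <string.h>

    /* Always NUL-terminates, copying at most size - 1 bytes. */
    static size_t safe_copy(char *dst, const char *src, size_t size)
    {
            size_t len = strnlen(src, size - 1);

            memcpy(dst, src, len);
            dst[len] = '\0';
            return len;
    }

    int main(void)
    {
            char buf[16];

            safe_copy(buf, "1234567890123456789", sizeof(buf));
            printf("%s\n", buf);    /* prints the first 15 digits */
            return 0;
    }
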
index 09551f949126530807db7f785b4a49cd189a3202..af14090cc4be311a3d7fbe5ad6fb9b1074687e06 100644 (file)
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
 /* Copyright(c) 2014 - 2021 Intel Corporation */
 #include <adf_accel_devices.h>
+#include <adf_admin.h>
 #include <adf_common_drv.h>
 #include <adf_gen2_config.h>
 #include <adf_gen2_dc.h>
index 1e748e8ce12d5df17d92d7f921c1ffb16f8598cf..40b456b8035b5a242efd103dbf04359f49495e3d 100644 (file)
@@ -252,3 +252,4 @@ MODULE_FIRMWARE(ADF_DH895XCC_FW);
 MODULE_FIRMWARE(ADF_DH895XCC_MMP);
 MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
 MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
index fefb85ceaeb9a2b261a06a1b8a1702e019c26d4a..d59cb1ba2ad5994b8f3b5b1c48ea21014953ec2b 100644 (file)
@@ -226,3 +226,4 @@ MODULE_LICENSE("Dual BSD/GPL");
 MODULE_AUTHOR("Intel");
 MODULE_DESCRIPTION("Intel(R) QuickAssist Technology");
 MODULE_VERSION(ADF_DRV_VERSION);
+MODULE_IMPORT_NS(CRYPTO_QAT);
index b61e35b932e590d6f62f1631464bf86b553417af..5744df30c83830ecb3f4d60940ea2717885d3686 100644 (file)
@@ -581,7 +581,7 @@ err_cleanup:
        return ret;
 }
 
-static int mv_cesa_remove(struct platform_device *pdev)
+static void mv_cesa_remove(struct platform_device *pdev)
 {
        struct mv_cesa_dev *cesa = platform_get_drvdata(pdev);
        int i;
@@ -594,8 +594,6 @@ static int mv_cesa_remove(struct platform_device *pdev)
                mv_cesa_put_sram(pdev, i);
                irq_set_affinity_hint(cesa->engines[i].irq, NULL);
        }
-
-       return 0;
 }
 
 static const struct platform_device_id mv_cesa_plat_id_table[] = {
@@ -606,7 +604,7 @@ MODULE_DEVICE_TABLE(platform, mv_cesa_plat_id_table);
 
 static struct platform_driver marvell_cesa = {
        .probe          = mv_cesa_probe,
-       .remove         = mv_cesa_remove,
+       .remove_new     = mv_cesa_remove,
        .id_table       = mv_cesa_plat_id_table,
        .driver         = {
                .name   = "marvell-cesa",
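
This is the first of several conversions below (mxs-dcp, n2, omap-aes/des/sham, qce, rk3288, s5p, sa2ul) to the .remove_new callback, whose void return codifies that the driver core ignores errors from remove. A minimal skeleton of the pattern, with placeholder names:

    // SPDX-License-Identifier: GPL-2.0-only
    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int demo_probe(struct platform_device *pdev)
    {
            return 0;
    }

    static void demo_remove(struct platform_device *pdev)
    {
            /* Cleanup only; errors can no longer be returned and must
             * be handled (or logged) here. */
    }

    static struct platform_driver demo_driver = {
            .probe          = demo_probe,
            .remove_new     = demo_remove,
            .driver = {
                    .name   = "demo",
            },
    };
    module_platform_driver(demo_driver);
    MODULE_LICENSE("GPL");
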
index f6b7bce0e65686e17356b6753c9a28095724f1cd..2b3ebe0db3a6d9204e30eb58056404aaa2de2e5f 100644 (file)
@@ -908,7 +908,6 @@ static struct ahash_alg dcp_sha1_alg = {
                        .cra_name               = "sha1",
                        .cra_driver_name        = "sha1-dcp",
                        .cra_priority           = 400,
-                       .cra_alignmask          = 63,
                        .cra_flags              = CRYPTO_ALG_ASYNC,
                        .cra_blocksize          = SHA1_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct dcp_async_ctx),
@@ -935,7 +934,6 @@ static struct ahash_alg dcp_sha256_alg = {
                        .cra_name               = "sha256",
                        .cra_driver_name        = "sha256-dcp",
                        .cra_priority           = 400,
-                       .cra_alignmask          = 63,
                        .cra_flags              = CRYPTO_ALG_ASYNC,
                        .cra_blocksize          = SHA256_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct dcp_async_ctx),
@@ -1131,7 +1129,7 @@ err_destroy_sha_thread:
        return ret;
 }
 
-static int mxs_dcp_remove(struct platform_device *pdev)
+static void mxs_dcp_remove(struct platform_device *pdev)
 {
        struct dcp *sdcp = platform_get_drvdata(pdev);
 
@@ -1150,8 +1148,6 @@ static int mxs_dcp_remove(struct platform_device *pdev)
        platform_set_drvdata(pdev, NULL);
 
        global_sdcp = NULL;
-
-       return 0;
 }
 
 static const struct of_device_id mxs_dcp_dt_ids[] = {
@@ -1164,7 +1160,7 @@ MODULE_DEVICE_TABLE(of, mxs_dcp_dt_ids);
 
 static struct platform_driver mxs_dcp_driver = {
        .probe  = mxs_dcp_probe,
-       .remove = mxs_dcp_remove,
+       .remove_new = mxs_dcp_remove,
        .driver = {
                .name           = "mxs-dcp",
                .of_match_table = mxs_dcp_dt_ids,
index d5a32d71a3e970b7943d5c4985440e4ce0a79dc1..caea98622c33628915aecd9db275f9d16f4003fd 100644 (file)
@@ -2011,7 +2011,7 @@ out_free_n2cp:
        return err;
 }
 
-static int n2_crypto_remove(struct platform_device *dev)
+static void n2_crypto_remove(struct platform_device *dev)
 {
        struct n2_crypto *np = dev_get_drvdata(&dev->dev);
 
@@ -2022,8 +2022,6 @@ static int n2_crypto_remove(struct platform_device *dev)
        release_global_resources();
 
        free_n2cp(np);
-
-       return 0;
 }
 
 static struct n2_mau *alloc_ncp(void)
@@ -2109,7 +2107,7 @@ out_free_ncp:
        return err;
 }
 
-static int n2_mau_remove(struct platform_device *dev)
+static void n2_mau_remove(struct platform_device *dev)
 {
        struct n2_mau *mp = dev_get_drvdata(&dev->dev);
 
@@ -2118,8 +2116,6 @@ static int n2_mau_remove(struct platform_device *dev)
        release_global_resources();
 
        free_ncp(mp);
-
-       return 0;
 }
 
 static const struct of_device_id n2_crypto_match[] = {
@@ -2146,7 +2142,7 @@ static struct platform_driver n2_crypto_driver = {
                .of_match_table =       n2_crypto_match,
        },
        .probe          =       n2_crypto_probe,
-       .remove         =       n2_crypto_remove,
+       .remove_new     =       n2_crypto_remove,
 };
 
 static const struct of_device_id n2_mau_match[] = {
@@ -2173,7 +2169,7 @@ static struct platform_driver n2_mau_driver = {
                .of_match_table =       n2_mau_match,
        },
        .probe          =       n2_mau_probe,
-       .remove         =       n2_mau_remove,
+       .remove_new     =       n2_mau_remove,
 };
 
 static struct platform_driver * const drivers[] = {
index ed83023dd77a8d3c9ed18e874fa36a2f83351332..bad1adacbc84c4cd1715555ddddbf17ad438a93c 100644 (file)
@@ -1255,7 +1255,7 @@ err_data:
        return err;
 }
 
-static int omap_aes_remove(struct platform_device *pdev)
+static void omap_aes_remove(struct platform_device *pdev)
 {
        struct omap_aes_dev *dd = platform_get_drvdata(pdev);
        struct aead_engine_alg *aalg;
@@ -1285,8 +1285,6 @@ static int omap_aes_remove(struct platform_device *pdev)
        pm_runtime_disable(dd->dev);
 
        sysfs_remove_group(&dd->dev->kobj, &omap_aes_attr_group);
-
-       return 0;
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -1307,7 +1305,7 @@ static SIMPLE_DEV_PM_OPS(omap_aes_pm_ops, omap_aes_suspend, omap_aes_resume);
 
 static struct platform_driver omap_aes_driver = {
        .probe  = omap_aes_probe,
-       .remove = omap_aes_remove,
+       .remove_new = omap_aes_remove,
        .driver = {
                .name   = "omap-aes",
                .pm     = &omap_aes_pm_ops,
index 089dd45eaedd70adf6b26e5f035cc4cfe566fd1e..209d3dc03a9bcaee657b1cdfc3303ad495410abe 100644 (file)
@@ -1072,7 +1072,7 @@ err_data:
        return err;
 }
 
-static int omap_des_remove(struct platform_device *pdev)
+static void omap_des_remove(struct platform_device *pdev)
 {
        struct omap_des_dev *dd = platform_get_drvdata(pdev);
        int i, j;
@@ -1089,8 +1089,6 @@ static int omap_des_remove(struct platform_device *pdev)
        tasklet_kill(&dd->done_task);
        omap_des_dma_cleanup(dd);
        pm_runtime_disable(dd->dev);
-
-       return 0;
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -1117,7 +1115,7 @@ static SIMPLE_DEV_PM_OPS(omap_des_pm_ops, omap_des_suspend, omap_des_resume);
 
 static struct platform_driver omap_des_driver = {
        .probe  = omap_des_probe,
-       .remove = omap_des_remove,
+       .remove_new = omap_des_remove,
        .driver = {
                .name   = "omap-des",
                .pm     = &omap_des_pm_ops,
index a6b4a0b3ace30dbde0e4fd164793c45f68223e06..5bcd9ab0f72ad5d06fa845d2bcb4567a59d4985d 100644 (file)
@@ -356,10 +356,10 @@ static void omap_sham_copy_ready_hash(struct ahash_request *req)
 
        if (big_endian)
                for (i = 0; i < d; i++)
-                       hash[i] = be32_to_cpup((__be32 *)in + i);
+                       put_unaligned(be32_to_cpup((__be32 *)in + i), &hash[i]);
        else
                for (i = 0; i < d; i++)
-                       hash[i] = le32_to_cpup((__le32 *)in + i);
+                       put_unaligned(le32_to_cpup((__le32 *)in + i), &hash[i]);
 }
 
 static void omap_sham_write_ctrl_omap2(struct omap_sham_dev *dd, size_t length,
@@ -1435,7 +1435,6 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
                                                CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = SHA1_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1458,7 +1457,6 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
                                                CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = SHA1_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1483,7 +1481,6 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
                .cra_blocksize          = SHA1_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx) +
                                        sizeof(struct omap_sham_hmac_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_sha1_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1508,7 +1505,6 @@ static struct ahash_engine_alg algs_sha1_md5[] = {
                .cra_blocksize          = SHA1_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx) +
                                        sizeof(struct omap_sham_hmac_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_md5_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1535,7 +1531,6 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
                                                CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = SHA224_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1558,7 +1553,6 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
                                                CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = SHA256_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1583,7 +1577,6 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
                .cra_blocksize          = SHA224_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx) +
                                        sizeof(struct omap_sham_hmac_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_sha224_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1608,7 +1601,6 @@ static struct ahash_engine_alg algs_sha224_sha256[] = {
                .cra_blocksize          = SHA256_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx) +
                                        sizeof(struct omap_sham_hmac_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_sha256_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1634,7 +1626,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                                                CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = SHA384_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1657,7 +1648,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                                                CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = SHA512_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1682,7 +1672,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                .cra_blocksize          = SHA384_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx) +
                                        sizeof(struct omap_sham_hmac_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_sha384_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -1707,7 +1696,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                .cra_blocksize          = SHA512_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct omap_sham_ctx) +
                                        sizeof(struct omap_sham_hmac_ctx),
-               .cra_alignmask          = OMAP_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = omap_sham_cra_sha512_init,
                .cra_exit               = omap_sham_cra_exit,
@@ -2200,7 +2188,7 @@ data_err:
        return err;
 }
 
-static int omap_sham_remove(struct platform_device *pdev)
+static void omap_sham_remove(struct platform_device *pdev)
 {
        struct omap_sham_dev *dd;
        int i, j;
@@ -2224,13 +2212,11 @@ static int omap_sham_remove(struct platform_device *pdev)
                dma_release_channel(dd->dma_lch);
 
        sysfs_remove_group(&dd->dev->kobj, &omap_sham_attr_group);
-
-       return 0;
 }
 
 static struct platform_driver omap_sham_driver = {
        .probe  = omap_sham_probe,
-       .remove = omap_sham_remove,
+       .remove_new = omap_sham_remove,
        .driver = {
                .name   = "omap-sham",
                .of_match_table = omap_sham_of_match,
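
With cra_alignmask removed throughout this series, drivers can no longer assume the result buffer is naturally aligned, which is why omap_sham_copy_ready_hash() above switches to put_unaligned(). In portable C the same byte-safe store is spelled memcpy():

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Portable analogue of put_unaligned(): memcpy is the standard way
     * to write a 32-bit value to a possibly misaligned destination
     * without undefined behaviour. */
    static void put_unaligned_u32(uint32_t val, void *dst)
    {
            memcpy(dst, &val, sizeof(val));
    }

    int main(void)
    {
            unsigned char buf[8] = { 0 };

            /* &buf[1] is deliberately misaligned for a uint32_t. */
            put_unaligned_u32(0x01020304u, &buf[1]);
            printf("%02x %02x %02x %02x\n",
                   buf[1], buf[2], buf[3], buf[4]);
            return 0;
    }
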
index fce49c0dee3e2dd73e6b359b506c2bedfb794195..28b5fd82382775dd5cfffbda301058cfbd538ddd 100644 (file)
@@ -277,7 +277,7 @@ err_mem_path_disable:
        return ret;
 }
 
-static int qce_crypto_remove(struct platform_device *pdev)
+static void qce_crypto_remove(struct platform_device *pdev)
 {
        struct qce_device *qce = platform_get_drvdata(pdev);
 
@@ -287,7 +287,6 @@ static int qce_crypto_remove(struct platform_device *pdev)
        clk_disable_unprepare(qce->bus);
        clk_disable_unprepare(qce->iface);
        clk_disable_unprepare(qce->core);
-       return 0;
 }
 
 static const struct of_device_id qce_crypto_of_match[] = {
@@ -300,7 +299,7 @@ MODULE_DEVICE_TABLE(of, qce_crypto_of_match);
 
 static struct platform_driver qce_crypto_driver = {
        .probe = qce_crypto_probe,
-       .remove = qce_crypto_remove,
+       .remove_new = qce_crypto_remove,
        .driver = {
                .name = KBUILD_MODNAME,
                .of_match_table = qce_crypto_of_match,
index 825a729f205e5fdc0d71d78aa869f2ec1e143ae0..c670d7d0c11ea8969551298482b13c0c3680b165 100644 (file)
@@ -7,6 +7,7 @@
 #include <linux/acpi.h>
 #include <linux/clk.h>
 #include <linux/crypto.h>
+#include <linux/hw_random.h>
 #include <linux/io.h>
 #include <linux/iopoll.h>
 #include <linux/kernel.h>
 
 #define WORD_SZ                        4
 
+#define QCOM_TRNG_QUALITY      1024
+
 struct qcom_rng {
        struct mutex lock;
        void __iomem *base;
        struct clk *clk;
-       unsigned int skip_init;
+       struct hwrng hwrng;
+       struct qcom_rng_of_data *of_data;
 };
 
 struct qcom_rng_ctx {
        struct qcom_rng *rng;
 };
 
+struct qcom_rng_of_data {
+       bool skip_init;
+       bool hwrng_support;
+};
+
 static struct qcom_rng *qcom_rng_dev;
 
 static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
@@ -66,11 +75,11 @@ static int qcom_rng_read(struct qcom_rng *rng, u8 *data, unsigned int max)
                } else {
                        /* copy only remaining bytes */
                        memcpy(data, &val, max - currsize);
-                       break;
+                       currsize = max;
                }
        } while (currsize < max);
 
-       return 0;
+       return currsize;
 }
 
 static int qcom_rng_generate(struct crypto_rng *tfm,
@@ -92,6 +101,9 @@ static int qcom_rng_generate(struct crypto_rng *tfm,
        mutex_unlock(&rng->lock);
        clk_disable_unprepare(rng->clk);
 
+       if (ret >= 0)
+               ret = 0;
+
        return ret;
 }
 
@@ -101,6 +113,13 @@ static int qcom_rng_seed(struct crypto_rng *tfm, const u8 *seed,
        return 0;
 }
 
+static int qcom_hwrng_read(struct hwrng *hwrng, void *data, size_t max, bool wait)
+{
+       struct qcom_rng *qrng = container_of(hwrng, struct qcom_rng, hwrng);
+
+       return qcom_rng_read(qrng, data, max);
+}
+
 static int qcom_rng_enable(struct qcom_rng *rng)
 {
        u32 val;
@@ -136,7 +155,7 @@ static int qcom_rng_init(struct crypto_tfm *tfm)
 
        ctx->rng = qcom_rng_dev;
 
-       if (!ctx->rng->skip_init)
+       if (!ctx->rng->of_data->skip_init)
                return qcom_rng_enable(ctx->rng);
 
        return 0;
@@ -177,27 +196,56 @@ static int qcom_rng_probe(struct platform_device *pdev)
        if (IS_ERR(rng->clk))
                return PTR_ERR(rng->clk);
 
-       rng->skip_init = (unsigned long)device_get_match_data(&pdev->dev);
+       rng->of_data = (struct qcom_rng_of_data *)of_device_get_match_data(&pdev->dev);
 
        qcom_rng_dev = rng;
        ret = crypto_register_rng(&qcom_rng_alg);
        if (ret) {
                dev_err(&pdev->dev, "Register crypto rng failed: %d\n", ret);
                qcom_rng_dev = NULL;
+               return ret;
        }
 
+       if (rng->of_data->hwrng_support) {
+               rng->hwrng.name = "qcom_hwrng";
+               rng->hwrng.read = qcom_hwrng_read;
+               rng->hwrng.quality = QCOM_TRNG_QUALITY;
+               ret = devm_hwrng_register(&pdev->dev, &rng->hwrng);
+               if (ret) {
+                       dev_err(&pdev->dev, "Register hwrng failed: %d\n", ret);
+                       qcom_rng_dev = NULL;
+                       goto fail;
+               }
+       }
+
+       return ret;
+fail:
+       crypto_unregister_rng(&qcom_rng_alg);
        return ret;
 }
 
-static int qcom_rng_remove(struct platform_device *pdev)
+static void qcom_rng_remove(struct platform_device *pdev)
 {
        crypto_unregister_rng(&qcom_rng_alg);
 
        qcom_rng_dev = NULL;
-
-       return 0;
 }
 
+static struct qcom_rng_of_data qcom_prng_of_data = {
+       .skip_init = false,
+       .hwrng_support = false,
+};
+
+static struct qcom_rng_of_data qcom_prng_ee_of_data = {
+       .skip_init = true,
+       .hwrng_support = false,
+};
+
+static struct qcom_rng_of_data qcom_trng_of_data = {
+       .skip_init = true,
+       .hwrng_support = true,
+};
+
 static const struct acpi_device_id __maybe_unused qcom_rng_acpi_match[] = {
        { .id = "QCOM8160", .driver_data = 1 },
        {}
@@ -205,15 +253,16 @@ static const struct acpi_device_id __maybe_unused qcom_rng_acpi_match[] = {
 MODULE_DEVICE_TABLE(acpi, qcom_rng_acpi_match);
 
 static const struct of_device_id __maybe_unused qcom_rng_of_match[] = {
-       { .compatible = "qcom,prng", .data = (void *)0},
-       { .compatible = "qcom,prng-ee", .data = (void *)1},
+       { .compatible = "qcom,prng", .data = &qcom_prng_of_data },
+       { .compatible = "qcom,prng-ee", .data = &qcom_prng_ee_of_data },
+       { .compatible = "qcom,trng", .data = &qcom_trng_of_data },
        {}
 };
 MODULE_DEVICE_TABLE(of, qcom_rng_of_match);
 
 static struct platform_driver qcom_rng_driver = {
        .probe = qcom_rng_probe,
-       .remove =  qcom_rng_remove,
+       .remove_new =  qcom_rng_remove,
        .driver = {
                .name = KBUILD_MODNAME,
                .of_match_table = of_match_ptr(qcom_rng_of_match),
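
The qcom_rng_read() change above (returning currsize rather than 0) matches the hwrng ->read contract, which expects the number of bytes produced, while qcom_rng_generate() masks the count back to 0 for the crypto_rng API. Once qcom_hwrng is registered and selected as the active hw_random source, its output is consumable through the character device; a minimal reader:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            unsigned char buf[16];
            ssize_t n;
            int fd;

            fd = open("/dev/hwrng", O_RDONLY);
            if (fd < 0)
                    return 1;

            n = read(fd, buf, sizeof(buf));
            close(fd);
            if (n <= 0)
                    return 1;

            for (ssize_t i = 0; i < n; i++)
                    printf("%02x", buf[i]);
            printf("\n");
            return 0;
    }
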
index 77d5705a5d960dc639dd1c4cc96acf93657d9531..70edf40bc523c0932bf8abb0b36682b0549d2ed3 100644 (file)
@@ -405,7 +405,7 @@ err_crypto:
        return err;
 }
 
-static int rk_crypto_remove(struct platform_device *pdev)
+static void rk_crypto_remove(struct platform_device *pdev)
 {
        struct rk_crypto_info *crypto_tmp = platform_get_drvdata(pdev);
        struct rk_crypto_info *first;
@@ -424,12 +424,11 @@ static int rk_crypto_remove(struct platform_device *pdev)
        }
        rk_crypto_pm_exit(crypto_tmp);
        crypto_engine_exit(crypto_tmp->engine);
-       return 0;
 }
 
 static struct platform_driver crypto_driver = {
        .probe          = rk_crypto_probe,
-       .remove         = rk_crypto_remove,
+       .remove_new     = rk_crypto_remove,
        .driver         = {
                .name   = "rk3288-crypto",
                .pm             = &rk_crypto_pm_ops,
index 8c143180645e5bdd61d7a140838db295c9678b9c..1b13b4aa16ecc441a37266996f1b4aca6863a436 100644 (file)
@@ -393,7 +393,6 @@ struct rk_crypto_tmp rk_ahash_sha1 = {
                                               CRYPTO_ALG_NEED_FALLBACK,
                                  .cra_blocksize = SHA1_BLOCK_SIZE,
                                  .cra_ctxsize = sizeof(struct rk_ahash_ctx),
-                                 .cra_alignmask = 3,
                                  .cra_module = THIS_MODULE,
                        }
                }
@@ -426,7 +425,6 @@ struct rk_crypto_tmp rk_ahash_sha256 = {
                                               CRYPTO_ALG_NEED_FALLBACK,
                                  .cra_blocksize = SHA256_BLOCK_SIZE,
                                  .cra_ctxsize = sizeof(struct rk_ahash_ctx),
-                                 .cra_alignmask = 3,
                                  .cra_module = THIS_MODULE,
                        }
                }
@@ -459,7 +457,6 @@ struct rk_crypto_tmp rk_ahash_md5 = {
                                               CRYPTO_ALG_NEED_FALLBACK,
                                  .cra_blocksize = SHA1_BLOCK_SIZE,
                                  .cra_ctxsize = sizeof(struct rk_ahash_ctx),
-                                 .cra_alignmask = 3,
                                  .cra_module = THIS_MODULE,
                        }
                }
index fe8cf9ba8005c3d482b54b6f09b3580a8588ef2d..8b6e3f5c94ded7585ad70bafc2f9eac163282cb7 100644 (file)
 /* HASH HW constants */
 #define BUFLEN                 HASH_BLOCK_SIZE
 
-#define SSS_HASH_DMA_LEN_ALIGN 8
-#define SSS_HASH_DMA_ALIGN_MASK        (SSS_HASH_DMA_LEN_ALIGN - 1)
-
 #define SSS_HASH_QUEUE_LENGTH  10
 
 /**
@@ -1746,7 +1743,6 @@ static struct ahash_alg algs_sha1_md5_sha256[] = {
                                          CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = HASH_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct s5p_hash_ctx),
-               .cra_alignmask          = SSS_HASH_DMA_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = s5p_hash_cra_init,
                .cra_exit               = s5p_hash_cra_exit,
@@ -1771,7 +1767,6 @@ static struct ahash_alg algs_sha1_md5_sha256[] = {
                                          CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = HASH_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct s5p_hash_ctx),
-               .cra_alignmask          = SSS_HASH_DMA_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = s5p_hash_cra_init,
                .cra_exit               = s5p_hash_cra_exit,
@@ -1796,7 +1791,6 @@ static struct ahash_alg algs_sha1_md5_sha256[] = {
                                          CRYPTO_ALG_NEED_FALLBACK,
                .cra_blocksize          = HASH_BLOCK_SIZE,
                .cra_ctxsize            = sizeof(struct s5p_hash_ctx),
-               .cra_alignmask          = SSS_HASH_DMA_ALIGN_MASK,
                .cra_module             = THIS_MODULE,
                .cra_init               = s5p_hash_cra_init,
                .cra_exit               = s5p_hash_cra_exit,
@@ -2315,7 +2309,7 @@ err_clk:
        return err;
 }
 
-static int s5p_aes_remove(struct platform_device *pdev)
+static void s5p_aes_remove(struct platform_device *pdev)
 {
        struct s5p_aes_dev *pdata = platform_get_drvdata(pdev);
        int i;
@@ -2337,13 +2331,11 @@ static int s5p_aes_remove(struct platform_device *pdev)
 
        clk_disable_unprepare(pdata->clk);
        s5p_dev = NULL;
-
-       return 0;
 }
 
 static struct platform_driver s5p_aes_crypto = {
        .probe  = s5p_aes_probe,
-       .remove = s5p_aes_remove,
+       .remove_new = s5p_aes_remove,
        .driver = {
                .name   = "s5p-secss",
                .of_match_table = s5p_sss_dt_match,
index 6238d34f8db2f6dd478d907331726d301a6977e3..6846a84295745e75867a91c31abe0ebe9c7d5981 100644 (file)
@@ -2468,7 +2468,7 @@ destroy_dma_pool:
        return ret;
 }
 
-static int sa_ul_remove(struct platform_device *pdev)
+static void sa_ul_remove(struct platform_device *pdev)
 {
        struct sa_crypto_data *dev_data = platform_get_drvdata(pdev);
 
@@ -2486,13 +2486,11 @@ static int sa_ul_remove(struct platform_device *pdev)
 
        pm_runtime_put_sync(&pdev->dev);
        pm_runtime_disable(&pdev->dev);
-
-       return 0;
 }
 
 static struct platform_driver sa_ul_driver = {
        .probe = sa_ul_probe,
-       .remove = sa_ul_remove,
+       .remove_new = sa_ul_remove,
        .driver = {
                   .name = "saul-crypto",
                   .of_match_table = of_match,
index 62d93526920f8002adf0c28b371d35a0b5a38fb9..02065131c3008c278864751417b9439ff47f2b2d 100644 (file)
@@ -1510,7 +1510,7 @@ clk_ipg_disable:
        return err;
 }
 
-static int sahara_remove(struct platform_device *pdev)
+static void sahara_remove(struct platform_device *pdev)
 {
        struct sahara_dev *dev = platform_get_drvdata(pdev);
 
@@ -1522,13 +1522,11 @@ static int sahara_remove(struct platform_device *pdev)
        clk_disable_unprepare(dev->clk_ahb);
 
        dev_ptr = NULL;
-
-       return 0;
 }
 
 static struct platform_driver sahara_driver = {
        .probe          = sahara_probe,
-       .remove         = sahara_remove,
+       .remove_new     = sahara_remove,
        .driver         = {
                .name   = SAHARA_NAME,
                .of_match_table = sahara_dt_ids,
index cc7650198d70358a5e517617d1458ef4025d8986..b6d1808012ca7e5350ad4132987e8e5cee76185c 100644 (file)
@@ -209,7 +209,8 @@ static int starfive_hash_copy_hash(struct ahash_request *req)
        data = (u32 *)req->result;
 
        for (count = 0; count < mlen; count++)
-               data[count] = readl(ctx->cryp->base + STARFIVE_HASH_SHARDR);
+               put_unaligned(readl(ctx->cryp->base + STARFIVE_HASH_SHARDR),
+                             &data[count]);
 
        return 0;
 }
@@ -628,7 +629,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA224_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -658,7 +658,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA224_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -687,7 +686,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA256_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -717,7 +715,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA256_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -746,7 +743,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA384_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -776,7 +772,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA384_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -805,7 +800,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA512_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -835,7 +829,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SHA512_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -864,7 +857,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SM3_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
@@ -894,7 +886,6 @@ static struct ahash_engine_alg algs_sha2_sm3[] = {
                                                  CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize          = SM3_BLOCK_SIZE,
                        .cra_ctxsize            = sizeof(struct starfive_cryp_ctx),
-                       .cra_alignmask          = 3,
                        .cra_module             = THIS_MODULE,
                }
        },
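
The .cra_alignmask = 3 removals in this driver (and in the rockchip, s5p and stm32 hunks around it) follow the API change called out in the merge summary: the ahash alignmask attribute is gone, so request buffers may now be arbitrarily aligned and drivers must use unaligned accessors where they previously relied on the core realigning data for them. The starfive copy-hash hunk above shows the idiom; a condensed sketch of the same fix, with copy_digest() and the single digest register as stand-ins rather than starfive code:

#include <asm/unaligned.h>
#include <linux/io.h>
#include <linux/types.h>

/* Copy a hardware digest into a possibly unaligned result buffer. */
static void copy_digest(void __iomem *digest_reg, u8 *result,
			unsigned int words)
{
	u32 *out = (u32 *)result;	/* may be unaligned now */
	unsigned int i;

	for (i = 0; i < words; i++)
		put_unaligned(readl(digest_reg), &out[i]);
}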
index 90a920e7f6642ff4eb2bee37d81c28c46548eee0..b2d5c8921ab36c6d6726be3ff4d54789f7839e5e 100644 (file)
@@ -283,7 +283,6 @@ static struct shash_alg algs[] = {
                        .cra_priority           = 200,
                        .cra_flags              = CRYPTO_ALG_OPTIONAL_KEY,
                        .cra_blocksize          = CHKSUM_BLOCK_SIZE,
-                       .cra_alignmask          = 3,
                        .cra_ctxsize            = sizeof(struct stm32_crc_ctx),
                        .cra_module             = THIS_MODULE,
                        .cra_init               = stm32_crc32_cra_init,
@@ -305,7 +304,6 @@ static struct shash_alg algs[] = {
                        .cra_priority           = 200,
                        .cra_flags              = CRYPTO_ALG_OPTIONAL_KEY,
                        .cra_blocksize          = CHKSUM_BLOCK_SIZE,
-                       .cra_alignmask          = 3,
                        .cra_ctxsize            = sizeof(struct stm32_crc_ctx),
                        .cra_module             = THIS_MODULE,
                        .cra_init               = stm32_crc32c_cra_init,
@@ -379,16 +377,11 @@ static int stm32_crc_probe(struct platform_device *pdev)
        return 0;
 }
 
-static int stm32_crc_remove(struct platform_device *pdev)
+static void stm32_crc_remove(struct platform_device *pdev)
 {
        struct stm32_crc *crc = platform_get_drvdata(pdev);
        int ret = pm_runtime_get_sync(crc->dev);
 
-       if (ret < 0) {
-               pm_runtime_put_noidle(crc->dev);
-               return ret;
-       }
-
        spin_lock(&crc_list.lock);
        list_del(&crc->list);
        spin_unlock(&crc_list.lock);
@@ -401,9 +394,9 @@ static int stm32_crc_remove(struct platform_device *pdev)
        pm_runtime_disable(crc->dev);
        pm_runtime_put_noidle(crc->dev);
 
-       clk_disable_unprepare(crc->clk);
-
-       return 0;
+       if (ret >= 0)
+               clk_disable(crc->clk);
+       clk_unprepare(crc->clk);
 }
 
 static int __maybe_unused stm32_crc_suspend(struct device *dev)
@@ -472,7 +465,7 @@ MODULE_DEVICE_TABLE(of, stm32_dt_ids);
 
 static struct platform_driver stm32_crc_driver = {
        .probe  = stm32_crc_probe,
-       .remove = stm32_crc_remove,
+       .remove_new = stm32_crc_remove,
        .driver = {
                .name           = DRIVER_NAME,
                .pm             = &stm32_crc_pm_ops,
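
A void remove callback has no way to report failure, so the stm32-crc rework above stops returning early when pm_runtime_get_sync() fails. Instead it always unregisters and always drops the clock prepare count, and only skips the clk_disable() that would unbalance the enable count when the resume did not actually happen. A condensed sketch of that shape; bar_* is a placeholder driver, not stm32 code:

#include <linux/clk.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

struct bar_priv {
	struct device *dev;
	struct clk *clk;
};

static void bar_remove(struct platform_device *pdev)
{
	struct bar_priv *priv = platform_get_drvdata(pdev);
	int ret = pm_runtime_get_sync(priv->dev);

	/* ... unregister algs, pm_runtime_disable() + put_noidle() ... */

	if (ret >= 0)
		clk_disable(priv->clk);	/* enabled only if resume worked */
	clk_unprepare(priv->clk);	/* prepare count is always held */
}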
index f095f0065428a9c5bade69d802690d58a9d2934b..c3cbc2673338d29fc799160ea3c6fc4e174d4f0c 100644 (file)
@@ -2084,17 +2084,12 @@ err_rst:
        return ret;
 }
 
-static int stm32_cryp_remove(struct platform_device *pdev)
+static void stm32_cryp_remove(struct platform_device *pdev)
 {
        struct stm32_cryp *cryp = platform_get_drvdata(pdev);
        int ret;
 
-       if (!cryp)
-               return -ENODEV;
-
-       ret = pm_runtime_resume_and_get(cryp->dev);
-       if (ret < 0)
-               return ret;
+       ret = pm_runtime_get_sync(cryp->dev);
 
        if (cryp->caps->aeads_support)
                crypto_engine_unregister_aeads(aead_algs, ARRAY_SIZE(aead_algs));
@@ -2109,9 +2104,8 @@ static int stm32_cryp_remove(struct platform_device *pdev)
        pm_runtime_disable(cryp->dev);
        pm_runtime_put_noidle(cryp->dev);
 
-       clk_disable_unprepare(cryp->clk);
-
-       return 0;
+       if (ret >= 0)
+               clk_disable_unprepare(cryp->clk);
 }
 
 #ifdef CONFIG_PM
@@ -2148,7 +2142,7 @@ static const struct dev_pm_ops stm32_cryp_pm_ops = {
 
 static struct platform_driver stm32_cryp_driver = {
        .probe  = stm32_cryp_probe,
-       .remove = stm32_cryp_remove,
+       .remove_new = stm32_cryp_remove,
        .driver = {
                .name           = DRIVER_NAME,
                .pm             = &stm32_cryp_pm_ops,
index 2b2382d4332c531b8263e500f2737ef9162573d8..34e0d7e381a8c6373f902fe0706e7136665f494f 100644 (file)
@@ -1283,7 +1283,6 @@ static struct ahash_engine_alg algs_md5[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = MD5_HMAC_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1313,7 +1312,6 @@ static struct ahash_engine_alg algs_md5[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = MD5_HMAC_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1345,7 +1343,6 @@ static struct ahash_engine_alg algs_sha1[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA1_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1375,7 +1372,6 @@ static struct ahash_engine_alg algs_sha1[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA1_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1407,7 +1403,6 @@ static struct ahash_engine_alg algs_sha224[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA224_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1437,7 +1432,6 @@ static struct ahash_engine_alg algs_sha224[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA224_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1469,7 +1463,6 @@ static struct ahash_engine_alg algs_sha256[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA256_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1499,7 +1492,6 @@ static struct ahash_engine_alg algs_sha256[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA256_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1531,7 +1523,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA384_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1561,7 +1552,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA384_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1590,7 +1580,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA512_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1620,7 +1609,6 @@ static struct ahash_engine_alg algs_sha384_sha512[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA512_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1652,7 +1640,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_224_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1682,7 +1669,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_224_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1711,7 +1697,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_256_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1741,7 +1726,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_256_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1770,7 +1754,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_384_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1800,7 +1783,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_384_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1829,7 +1811,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_512_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
@@ -1859,7 +1840,6 @@ static struct ahash_engine_alg algs_sha3[] = {
                                        CRYPTO_ALG_KERN_DRIVER_ONLY,
                                .cra_blocksize = SHA3_512_BLOCK_SIZE,
                                .cra_ctxsize = sizeof(struct stm32_hash_ctx),
-                               .cra_alignmask = 3,
                                .cra_init = stm32_hash_cra_sha3_hmac_init,
                                .cra_exit = stm32_hash_cra_exit,
                                .cra_module = THIS_MODULE,
index 4ca4fbd227bce135502657e4aef4d3ae9f647870..511ddcb0efd4b4b513e4025aa7f51a6c9826b1bc 100644 (file)
@@ -2119,13 +2119,14 @@ static int ahash_finup(struct ahash_request *areq)
 
 static int ahash_digest(struct ahash_request *areq)
 {
-       struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
-       struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);
-
-       ahash->init(areq);
-       req_ctx->last = 1;
+       ahash_init(areq);
+       return ahash_finup(areq);
+}
 
-       return ahash_process_req(areq, areq->nbytes);
+static int ahash_digest_sha224_swinit(struct ahash_request *areq)
+{
+       ahash_init_sha224_swinit(areq);
+       return ahash_finup(areq);
 }
 
 static int ahash_export(struct ahash_request *areq, void *out)
@@ -3136,7 +3137,7 @@ static int hw_supports(struct device *dev, __be32 desc_hdr_template)
        return ret;
 }
 
-static int talitos_remove(struct platform_device *ofdev)
+static void talitos_remove(struct platform_device *ofdev)
 {
        struct device *dev = &ofdev->dev;
        struct talitos_private *priv = dev_get_drvdata(dev);
@@ -3170,8 +3171,6 @@ static int talitos_remove(struct platform_device *ofdev)
        tasklet_kill(&priv->done_task[0]);
        if (priv->irq[1])
                tasklet_kill(&priv->done_task[1]);
-
-       return 0;
 }
 
 static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
@@ -3242,6 +3241,8 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
                    (!strcmp(alg->cra_name, "sha224") ||
                     !strcmp(alg->cra_name, "hmac(sha224)"))) {
                        t_alg->algt.alg.hash.init = ahash_init_sha224_swinit;
+                       t_alg->algt.alg.hash.digest =
+                               ahash_digest_sha224_swinit;
                        t_alg->algt.desc_hdr_template =
                                        DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU |
                                        DESC_HDR_SEL0_MDEUA |
@@ -3259,7 +3260,7 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
                alg->cra_priority = t_alg->algt.priority;
        else
                alg->cra_priority = TALITOS_CRA_PRIORITY;
-       if (has_ftr_sec1(priv))
+       if (has_ftr_sec1(priv) && t_alg->algt.type != CRYPTO_ALG_TYPE_AHASH)
                alg->cra_alignmask = 3;
        else
                alg->cra_alignmask = 0;
@@ -3559,7 +3560,7 @@ static struct platform_driver talitos_driver = {
                .of_match_table = talitos_match,
        },
        .probe = talitos_probe,
-       .remove = talitos_remove,
+       .remove_new = talitos_remove,
 };
 
 module_platform_driver(talitos_driver);
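
The talitos hash change above removes the last use of the now-gone crypto_ahash::init callback by building digest out of the driver's own entry points: a digest is an init followed by a finup over the full request, with a second variant bound at registration time for the sha224 software-init case. The composition itself is generic; as a sketch, with tmpl_init()/tmpl_finup() standing in for a driver's real callbacks:

#include <crypto/hash.h>

/* The driver's own init/finup callbacks, assumed defined elsewhere. */
int tmpl_init(struct ahash_request *req);
int tmpl_finup(struct ahash_request *req);

static int tmpl_digest(struct ahash_request *req)
{
	int err = tmpl_init(req);	/* reset per-request state */

	/* then hash and finalize all req->nbytes in one pass */
	return err ?: tmpl_finup(req);
}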
index 50a0a18f35da380a8b455ce3dc1f2a54b55fe2ae..f729589d792eab8c31589d73fa337fb79a848b8b 100644 (file)
@@ -132,11 +132,12 @@ rcon:
 .long  0x1b000000, 0x1b000000, 0x1b000000, 0x1b000000  ?rev
 .long  0x0d0e0f0c, 0x0d0e0f0c, 0x0d0e0f0c, 0x0d0e0f0c  ?rev
 .long  0,0,0,0                                         ?asis
+.long  0x0f102132, 0x43546576, 0x8798a9ba, 0xcbdcedfe
 Lconsts:
        mflr    r0
        bcl     20,31,\$+4
        mflr    $ptr     #vvvvv "distance between . and rcon
-       addi    $ptr,$ptr,-0x48
+       addi    $ptr,$ptr,-0x58
        mtlr    r0
        blr
        .long   0
@@ -2495,6 +2496,17 @@ _aesp8_xts_encrypt6x:
        li              $x70,0x70
        mtspr           256,r0
 
+       xxlor           2, 32+$eighty7, 32+$eighty7
+       vsldoi          $eighty7,$tmp,$eighty7,1        # 0x010101..87
+       xxlor           1, 32+$eighty7, 32+$eighty7
+
+       # Load XOR Lconsts.
+       mr              $x70, r6
+       bl              Lconsts
+       lxvw4x          0, $x40, r6             # load XOR contents
+       mr              r6, $x70
+       li              $x70,0x70
+
        subi            $rounds,$rounds,3       # -4 in total
 
        lvx             $rndkey0,$x00,$key1     # load key schedule
@@ -2537,69 +2549,77 @@ Load_xts_enc_key:
        ?vperm          v31,v31,$twk5,$keyperm
        lvx             v25,$x10,$key_          # pre-load round[2]
 
+       # Switch to the following code, using 0x010101..87 to generate the tweak.
+       #     eighty7 = 0x010101..87
+       # vsrab         tmp, tweak, seven       # next tweak value, right shift 7 bits
+       # vand          tmp, tmp, eighty7       # last byte with carry
+       # vaddubm       tweak, tweak, tweak     # left shift 1 bit (x2)
+       # xxlor         vsx, 0, 0
+       # vpermxor      tweak, tweak, tmp, vsx
+
         vperm          $in0,$inout,$inptail,$inpperm
         subi           $inp,$inp,31            # undo "caller"
        vxor            $twk0,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
        vand            $tmp,$tmp,$eighty7
         vxor           $out0,$in0,$twk0
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in1, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in1
 
         lvx_u          $in1,$x10,$inp
        vxor            $twk1,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in1,$in1,$in1,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out1,$in1,$twk1
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in2, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in2
 
         lvx_u          $in2,$x20,$inp
         andi.          $taillen,$len,15
        vxor            $twk2,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in2,$in2,$in2,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out2,$in2,$twk2
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in3, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in3
 
         lvx_u          $in3,$x30,$inp
         sub            $len,$len,$taillen
        vxor            $twk3,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in3,$in3,$in3,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out3,$in3,$twk3
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in4, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in4
 
         lvx_u          $in4,$x40,$inp
         subi           $len,$len,0x60
        vxor            $twk4,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in4,$in4,$in4,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out4,$in4,$twk4
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in5, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in5
 
         lvx_u          $in5,$x50,$inp
         addi           $inp,$inp,0x60
        vxor            $twk5,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in5,$in5,$in5,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out5,$in5,$twk5
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in0, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in0
 
        vxor            v31,v31,$rndkey0
        mtctr           $rounds
@@ -2625,6 +2645,8 @@ Loop_xts_enc6x:
        lvx             v25,$x10,$key_          # round[4]
        bdnz            Loop_xts_enc6x
 
+       xxlor           32+$eighty7, 1, 1       # 0x010101..87
+
        subic           $len,$len,96            # $len-=96
         vxor           $in0,$twk0,v31          # xor with last round key
        vcipher         $out0,$out0,v24
@@ -2634,7 +2656,6 @@ Loop_xts_enc6x:
         vaddubm        $tweak,$tweak,$tweak
        vcipher         $out2,$out2,v24
        vcipher         $out3,$out3,v24
-        vsldoi         $tmp,$tmp,$tmp,15
        vcipher         $out4,$out4,v24
        vcipher         $out5,$out5,v24
 
@@ -2642,7 +2663,8 @@ Loop_xts_enc6x:
         vand           $tmp,$tmp,$eighty7
        vcipher         $out0,$out0,v25
        vcipher         $out1,$out1,v25
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in1, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in1
        vcipher         $out2,$out2,v25
        vcipher         $out3,$out3,v25
         vxor           $in1,$twk1,v31
@@ -2653,13 +2675,13 @@ Loop_xts_enc6x:
 
        and             r0,r0,$len
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
        vcipher         $out0,$out0,v26
        vcipher         $out1,$out1,v26
         vand           $tmp,$tmp,$eighty7
        vcipher         $out2,$out2,v26
        vcipher         $out3,$out3,v26
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in2, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in2
        vcipher         $out4,$out4,v26
        vcipher         $out5,$out5,v26
 
@@ -2673,7 +2695,6 @@ Loop_xts_enc6x:
         vaddubm        $tweak,$tweak,$tweak
        vcipher         $out0,$out0,v27
        vcipher         $out1,$out1,v27
-        vsldoi         $tmp,$tmp,$tmp,15
        vcipher         $out2,$out2,v27
        vcipher         $out3,$out3,v27
         vand           $tmp,$tmp,$eighty7
@@ -2681,7 +2702,8 @@ Loop_xts_enc6x:
        vcipher         $out5,$out5,v27
 
        addi            $key_,$sp,$FRAME+15     # rewind $key_
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in3, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in3
        vcipher         $out0,$out0,v28
        vcipher         $out1,$out1,v28
         vxor           $in3,$twk3,v31
@@ -2690,7 +2712,6 @@ Loop_xts_enc6x:
        vcipher         $out2,$out2,v28
        vcipher         $out3,$out3,v28
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
        vcipher         $out4,$out4,v28
        vcipher         $out5,$out5,v28
        lvx             v24,$x00,$key_          # re-pre-load round[1]
@@ -2698,7 +2719,8 @@ Loop_xts_enc6x:
 
        vcipher         $out0,$out0,v29
        vcipher         $out1,$out1,v29
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in4, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in4
        vcipher         $out2,$out2,v29
        vcipher         $out3,$out3,v29
         vxor           $in4,$twk4,v31
@@ -2708,14 +2730,14 @@ Loop_xts_enc6x:
        vcipher         $out5,$out5,v29
        lvx             v25,$x10,$key_          # re-pre-load round[2]
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
 
        vcipher         $out0,$out0,v30
        vcipher         $out1,$out1,v30
         vand           $tmp,$tmp,$eighty7
        vcipher         $out2,$out2,v30
        vcipher         $out3,$out3,v30
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in5, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in5
        vcipher         $out4,$out4,v30
        vcipher         $out5,$out5,v30
         vxor           $in5,$twk5,v31
@@ -2725,7 +2747,6 @@ Loop_xts_enc6x:
        vcipherlast     $out0,$out0,$in0
         lvx_u          $in0,$x00,$inp          # load next input block
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
        vcipherlast     $out1,$out1,$in1
         lvx_u          $in1,$x10,$inp
        vcipherlast     $out2,$out2,$in2
@@ -2738,7 +2759,10 @@ Loop_xts_enc6x:
        vcipherlast     $out4,$out4,$in4
         le?vperm       $in2,$in2,$in2,$leperm
         lvx_u          $in4,$x40,$inp
-        vxor           $tweak,$tweak,$tmp
+        xxlor          10, 32+$in0, 32+$in0
+        xxlor          32+$in0, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in0
+        xxlor          32+$in0, 10, 10
        vcipherlast     $tmp,$out5,$in5         # last block might be needed
                                                # in stealing mode
         le?vperm       $in3,$in3,$in3,$leperm
@@ -2771,6 +2795,8 @@ Loop_xts_enc6x:
        mtctr           $rounds
        beq             Loop_xts_enc6x          # did $len-=96 borrow?
 
+       xxlor           32+$eighty7, 2, 2       # 0x010101..87
+
        addic.          $len,$len,0x60
        beq             Lxts_enc6x_zero
        cmpwi           $len,0x20
@@ -3147,6 +3173,17 @@ _aesp8_xts_decrypt6x:
        li              $x70,0x70
        mtspr           256,r0
 
+       xxlor           2, 32+$eighty7, 32+$eighty7
+       vsldoi          $eighty7,$tmp,$eighty7,1        # 0x010101..87
+       xxlor           1, 32+$eighty7, 32+$eighty7
+
+       # Load XOR Lconsts.
+       mr              $x70, r6
+       bl              Lconsts
+       lxvw4x          0, $x40, r6             # load XOR contents
+       mr              r6, $x70
+       li              $x70,0x70
+
        subi            $rounds,$rounds,3       # -4 in total
 
        lvx             $rndkey0,$x00,$key1     # load key schedule
@@ -3194,64 +3231,64 @@ Load_xts_dec_key:
        vxor            $twk0,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
        vand            $tmp,$tmp,$eighty7
         vxor           $out0,$in0,$twk0
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in1, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in1
 
         lvx_u          $in1,$x10,$inp
        vxor            $twk1,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in1,$in1,$in1,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out1,$in1,$twk1
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in2, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in2
 
         lvx_u          $in2,$x20,$inp
         andi.          $taillen,$len,15
        vxor            $twk2,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in2,$in2,$in2,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out2,$in2,$twk2
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in3, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in3
 
         lvx_u          $in3,$x30,$inp
         sub            $len,$len,$taillen
        vxor            $twk3,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in3,$in3,$in3,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out3,$in3,$twk3
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in4, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in4
 
         lvx_u          $in4,$x40,$inp
         subi           $len,$len,0x60
        vxor            $twk4,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in4,$in4,$in4,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out4,$in4,$twk4
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in5, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in5
 
         lvx_u          $in5,$x50,$inp
         addi           $inp,$inp,0x60
        vxor            $twk5,$tweak,$rndkey0
        vsrab           $tmp,$tweak,$seven      # next tweak value
        vaddubm         $tweak,$tweak,$tweak
-       vsldoi          $tmp,$tmp,$tmp,15
         le?vperm       $in5,$in5,$in5,$leperm
        vand            $tmp,$tmp,$eighty7
         vxor           $out5,$in5,$twk5
-       vxor            $tweak,$tweak,$tmp
+       xxlor           32+$in0, 0, 0
+       vpermxor        $tweak, $tweak, $tmp, $in0
 
        vxor            v31,v31,$rndkey0
        mtctr           $rounds
@@ -3277,6 +3314,8 @@ Loop_xts_dec6x:
        lvx             v25,$x10,$key_          # round[4]
        bdnz            Loop_xts_dec6x
 
+       xxlor           32+$eighty7, 1, 1       # 0x010101..87
+
        subic           $len,$len,96            # $len-=96
         vxor           $in0,$twk0,v31          # xor with last round key
        vncipher        $out0,$out0,v24
@@ -3286,7 +3325,6 @@ Loop_xts_dec6x:
         vaddubm        $tweak,$tweak,$tweak
        vncipher        $out2,$out2,v24
        vncipher        $out3,$out3,v24
-        vsldoi         $tmp,$tmp,$tmp,15
        vncipher        $out4,$out4,v24
        vncipher        $out5,$out5,v24
 
@@ -3294,7 +3332,8 @@ Loop_xts_dec6x:
         vand           $tmp,$tmp,$eighty7
        vncipher        $out0,$out0,v25
        vncipher        $out1,$out1,v25
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in1, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in1
        vncipher        $out2,$out2,v25
        vncipher        $out3,$out3,v25
         vxor           $in1,$twk1,v31
@@ -3305,13 +3344,13 @@ Loop_xts_dec6x:
 
        and             r0,r0,$len
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
        vncipher        $out0,$out0,v26
        vncipher        $out1,$out1,v26
         vand           $tmp,$tmp,$eighty7
        vncipher        $out2,$out2,v26
        vncipher        $out3,$out3,v26
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in2, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in2
        vncipher        $out4,$out4,v26
        vncipher        $out5,$out5,v26
 
@@ -3325,7 +3364,6 @@ Loop_xts_dec6x:
         vaddubm        $tweak,$tweak,$tweak
        vncipher        $out0,$out0,v27
        vncipher        $out1,$out1,v27
-        vsldoi         $tmp,$tmp,$tmp,15
        vncipher        $out2,$out2,v27
        vncipher        $out3,$out3,v27
         vand           $tmp,$tmp,$eighty7
@@ -3333,7 +3371,8 @@ Loop_xts_dec6x:
        vncipher        $out5,$out5,v27
 
        addi            $key_,$sp,$FRAME+15     # rewind $key_
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in3, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in3
        vncipher        $out0,$out0,v28
        vncipher        $out1,$out1,v28
         vxor           $in3,$twk3,v31
@@ -3342,7 +3381,6 @@ Loop_xts_dec6x:
        vncipher        $out2,$out2,v28
        vncipher        $out3,$out3,v28
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
        vncipher        $out4,$out4,v28
        vncipher        $out5,$out5,v28
        lvx             v24,$x00,$key_          # re-pre-load round[1]
@@ -3350,7 +3388,8 @@ Loop_xts_dec6x:
 
        vncipher        $out0,$out0,v29
        vncipher        $out1,$out1,v29
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in4, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in4
        vncipher        $out2,$out2,v29
        vncipher        $out3,$out3,v29
         vxor           $in4,$twk4,v31
@@ -3360,14 +3399,14 @@ Loop_xts_dec6x:
        vncipher        $out5,$out5,v29
        lvx             v25,$x10,$key_          # re-pre-load round[2]
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
 
        vncipher        $out0,$out0,v30
        vncipher        $out1,$out1,v30
         vand           $tmp,$tmp,$eighty7
        vncipher        $out2,$out2,v30
        vncipher        $out3,$out3,v30
-        vxor           $tweak,$tweak,$tmp
+        xxlor          32+$in5, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in5
        vncipher        $out4,$out4,v30
        vncipher        $out5,$out5,v30
         vxor           $in5,$twk5,v31
@@ -3377,7 +3416,6 @@ Loop_xts_dec6x:
        vncipherlast    $out0,$out0,$in0
         lvx_u          $in0,$x00,$inp          # load next input block
         vaddubm        $tweak,$tweak,$tweak
-        vsldoi         $tmp,$tmp,$tmp,15
        vncipherlast    $out1,$out1,$in1
         lvx_u          $in1,$x10,$inp
        vncipherlast    $out2,$out2,$in2
@@ -3390,7 +3428,10 @@ Loop_xts_dec6x:
        vncipherlast    $out4,$out4,$in4
         le?vperm       $in2,$in2,$in2,$leperm
         lvx_u          $in4,$x40,$inp
-        vxor           $tweak,$tweak,$tmp
+        xxlor          10, 32+$in0, 32+$in0
+        xxlor          32+$in0, 0, 0
+        vpermxor       $tweak, $tweak, $tmp, $in0
+        xxlor          32+$in0, 10, 10
        vncipherlast    $out5,$out5,$in5
         le?vperm       $in3,$in3,$in3,$leperm
         lvx_u          $in5,$x50,$inp
@@ -3421,6 +3462,8 @@ Loop_xts_dec6x:
        mtctr           $rounds
        beq             Loop_xts_dec6x          # did $len-=96 borrow?
 
+       xxlor           32+$eighty7, 2, 2       # 0x010101..87
+
        addic.          $len,$len,0x60
        beq             Lxts_dec6x_zero
        cmpwi           $len,0x20
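
The perlasm rework above folds the old vsldoi/vand/vxor sequence into a single vpermxor against the new 0x010101..87 constant, but the quantity it computes is unchanged: the standard XTS tweak update, i.e. multiplication of the 128-bit tweak by x in GF(2^128) modulo x^128 + x^7 + x^2 + x + 1. A scalar C sketch of that update for reference (the kernel's gf128mul helpers implement the same operation):

#include <stdint.h>

/* tweak <- tweak * x in GF(2^128), little-endian XTS convention */
static void xts_mul_x(uint64_t t[2])
{
	uint64_t carry = t[1] >> 63;	/* bit 127, shifted out at the top */

	t[1] = (t[1] << 1) | (t[0] >> 63);
	t[0] <<= 1;
	t[0] ^= carry * 0x87;		/* reduce: x^128 = x^7 + x^2 + x + 1 */
}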
index ce335578b759ed9531a35fe54b4e61bb16b8ca58..3c205324b22b676be150d44957c52d299fdd3a5d 100644 (file)
@@ -421,12 +421,10 @@ err_engine:
        return err;
 }
 
-static int zynqmp_aes_aead_remove(struct platform_device *pdev)
+static void zynqmp_aes_aead_remove(struct platform_device *pdev)
 {
        crypto_engine_exit(aes_drv_ctx.engine);
        crypto_engine_unregister_aead(&aes_drv_ctx.alg.aead);
-
-       return 0;
 }
 
 static const struct of_device_id zynqmp_aes_dt_ids[] = {
@@ -437,7 +435,7 @@ MODULE_DEVICE_TABLE(of, zynqmp_aes_dt_ids);
 
 static struct platform_driver zynqmp_aes_driver = {
        .probe  = zynqmp_aes_aead_probe,
-       .remove = zynqmp_aes_aead_remove,
+       .remove_new = zynqmp_aes_aead_remove,
        .driver = {
                .name           = "zynqmp-aes",
                .of_match_table = zynqmp_aes_dt_ids,
index 426bf1a72ba66b478d9764c05d036f34d68a03ea..1bcec6f46c9c755506a6c5af35ec7615d719ec35 100644 (file)
@@ -182,7 +182,6 @@ static struct zynqmp_sha_drv_ctx sha3_drv_ctx = {
                                     CRYPTO_ALG_NEED_FALLBACK,
                        .cra_blocksize = SHA3_384_BLOCK_SIZE,
                        .cra_ctxsize = sizeof(struct zynqmp_sha_tfm_ctx),
-                       .cra_alignmask = 3,
                        .cra_module = THIS_MODULE,
                }
        }
@@ -238,20 +237,18 @@ err_shash:
        return err;
 }
 
-static int zynqmp_sha_remove(struct platform_device *pdev)
+static void zynqmp_sha_remove(struct platform_device *pdev)
 {
        sha3_drv_ctx.dev = platform_get_drvdata(pdev);
 
        dma_free_coherent(sha3_drv_ctx.dev, ZYNQMP_DMA_ALLOC_FIXED_SIZE, ubuf, update_dma_addr);
        dma_free_coherent(sha3_drv_ctx.dev, SHA3_384_DIGEST_SIZE, fbuf, final_dma_addr);
        crypto_unregister_shash(&sha3_drv_ctx.sha3_384);
-
-       return 0;
 }
 
 static struct platform_driver zynqmp_sha_driver = {
        .probe = zynqmp_sha_probe,
-       .remove = zynqmp_sha_remove,
+       .remove_new = zynqmp_sha_remove,
        .driver = {
                .name = "zynqmp-sha3-384",
        },
index 3731c93f8f953f6fb20a1b03c999d79d54563cb7..c7338ac6a5bbe61aeedea4564bde53524b2b2a9e 100644 (file)
@@ -39,7 +39,6 @@
 
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/crypto.h>
 #include <linux/skbuff.h>
 #include <linux/rtnetlink.h>
 #include <linux/highmem.h>
@@ -49,7 +48,6 @@
 #include <net/esp.h>
 #include <net/xfrm.h>
 #include <crypto/aes.h>
-#include <crypto/algapi.h>
 #include <crypto/hash.h>
 #include <crypto/sha1.h>
 #include <crypto/sha2.h>
index 1d110d2edd64c8a62532a48d875c1e0b4b276cca..0d42e7d15714bc86374ccc1a8214bd6f857a7839 100644 (file)
@@ -4,7 +4,6 @@
 #ifndef __CHCR_IPSEC_H__
 #define __CHCR_IPSEC_H__
 
-#include <crypto/algapi.h>
 #include "t4_hw.h"
 #include "cxgb4.h"
 #include "t4_msg.h"
index 62f62bff74a5fd98265644d97908f6d1a7571dbb..7ff82b6778bad1d3b28f148306ddc911c597d86f 100644 (file)
@@ -7,7 +7,6 @@
 #define __CHTLS_H__
 
 #include <crypto/aes.h>
-#include <crypto/algapi.h>
 #include <crypto/hash.h>
 #include <crypto/sha1.h>
 #include <crypto/sha2.h>
index 4956f0499c198d0ab5fe1cb0cb7d42bcfbe86062..f89581b5e8cbe14f5040396ccf787e19f977182f 100644 (file)
@@ -12,9 +12,9 @@
 
 #include <crypto/blake2s.h>
 #include <crypto/chacha20poly1305.h>
+#include <crypto/utils.h>
 
 #include <net/ipv6.h>
-#include <crypto/algapi.h>
 
 void wg_cookie_checker_init(struct cookie_checker *checker,
                            struct wg_device *wg)
index dc09b75a32485c2f7e57700a4e451d19a69841b0..e220d761b1f27aa31eab3ad1b9211ddfe55eaabd 100644 (file)
@@ -15,7 +15,7 @@
 #include <linux/if.h>
 #include <net/genetlink.h>
 #include <net/sock.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
 
 static struct genl_family genl_family;
 
index 720952b92e784c5e94dadb5e0e70fa16ee53f6a5..202a33af5a721f2216ad0815e7b63a28d2bf5888 100644 (file)
@@ -15,7 +15,7 @@
 #include <linux/bitmap.h>
 #include <linux/scatterlist.h>
 #include <linux/highmem.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
 
 /* This implements Noise_IKpsk2:
  *
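
The wireguard include churn here (and in the fscrypt and ubifs hunks below) is one cleanup applied tree-wide: constant-time helpers such as crypto_memneq() and crypto_xor() now live in <crypto/utils.h>, so code that only compares or XORs buffers no longer has to pull in the algorithm-registration internals of <crypto/algapi.h>. Typical use, as a sketch:

#include <crypto/utils.h>
#include <linux/types.h>

/* Compare two MACs without leaking the mismatch position via timing. */
static bool mac_matches(const u8 *a, const u8 *b, size_t len)
{
	return crypto_memneq(a, b, len) == 0;
}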
index a10710bc81230f47ef745c228ec5bcfdf3023381..cf3b58ec32ccec393b1c2096bb5d4c2399502107 100644 (file)
@@ -20,8 +20,8 @@
  *    managed alongside the master keys in the filesystem-level keyring)
  */
 
-#include <crypto/algapi.h>
 #include <crypto/skcipher.h>
+#include <crypto/utils.h>
 #include <keys/user-type.h>
 #include <linux/hashtable.h>
 #include <linux/scatterlist.h>
index 0065f191b54b7af8f69e19f52b29f04509a99e61..001513806fc0d87907c8f7b028c2a73c5097fee8 100644 (file)
@@ -1,3 +1,11 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 1998, 2000 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc2478#section-3.2.1
+-- https://www.rfc-editor.org/rfc/rfc2743#section-3.1
+
 GSSAPI ::=
        [APPLICATION 0] IMPLICIT SEQUENCE {
                thisMech
index 1151933e7b9c581bd00e5c29cbcd146c504f4776..797e485d57f1ed292fc97aa23fb7f0d26a9326ca 100644 (file)
@@ -1,3 +1,10 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 1998 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc2478#section-3.2.1
+
 GSSAPI ::=
        CHOICE {
                negTokenInit
index e564d5ff87816380ccf65712c99bf8fa9c9bf4ee..0d561ecb686943e7063c167ba096030bb95b8e3d 100644 (file)
@@ -9,10 +9,9 @@
  * This file implements various helper functions for UBIFS authentication support
  */
 
-#include <linux/crypto.h>
 #include <linux/verification.h>
 #include <crypto/hash.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
 #include <keys/user-type.h>
 #include <keys/asymmetric-type.h>
 
index 4211e4456b1e72af149f579e77d0aecd55895a67..c59d47fe79396f6f20d120516e7187b92f12196f 100644 (file)
@@ -23,7 +23,6 @@
 #include "ubifs.h"
 #include <linux/list_sort.h>
 #include <crypto/hash.h>
-#include <crypto/algapi.h>
 
 /**
  * struct replay_entry - replay list entry.
index 62633816d7d045d0f3feb09d0a1db7e924c99f65..3916dc4f30caa65072eb90b3d78762f41d56c82a 100644 (file)
@@ -31,7 +31,7 @@
 #include <linux/completion.h>
 #include <crypto/hash_info.h>
 #include <crypto/hash.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
 
 #include <linux/fscrypt.h>
 
index 35e45b854a6fa4940f099eadde9228c1b3a1105d..51382befbe37abcd73cbce1d5f6d0ca53b7af841 100644 (file)
@@ -217,6 +217,18 @@ static inline void crypto_free_aead(struct crypto_aead *tfm)
        crypto_destroy_tfm(tfm, crypto_aead_tfm(tfm));
 }
 
+/**
+ * crypto_has_aead() - Search for the availability of an aead.
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ *           aead
+ * @type: specifies the type of the aead
+ * @mask: specifies the mask for the aead
+ *
+ * Return: true when the aead is known to the kernel crypto API; false
+ *        otherwise
+ */
+int crypto_has_aead(const char *alg_name, u32 type, u32 mask);
+
 static inline const char *crypto_aead_driver_name(struct crypto_aead *tfm)
 {
        return crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm));
index 670508f1dca19ce59d57e0880f6caa74b2fe6276..31c111bebb68883437e8bf5360986c0e44ce8fe4 100644 (file)
@@ -382,7 +382,7 @@ static inline int crypto_akcipher_decrypt(struct akcipher_request *req)
  * @tfm:       AKCIPHER tfm handle allocated with crypto_alloc_akcipher()
  * @src:       source buffer
  * @slen:      source length
- * @dst:       destinatino obuffer
+ * @dst:       destination output buffer
  * @dlen:      destination length
  *
  * Return: zero on success; error code in case of error
@@ -400,7 +400,7 @@ int crypto_akcipher_sync_encrypt(struct crypto_akcipher *tfm,
  * @tfm:       AKCIPHER tfm handle allocated with crypto_alloc_akcipher()
  * @src:       source buffer
  * @slen:      source length
- * @dst:       destinatino obuffer
+ * @dst:       destination output buffer
  * @dlen:      destination length
  *
  * Return: Output length on success; error code in case of error
index ca86f4c6ba4394184459d918718814b5cc889154..7a4a71af653fa84b80f888ec695c9deacc91e77b 100644 (file)
@@ -195,11 +195,6 @@ static inline void *crypto_tfm_ctx_align(struct crypto_tfm *tfm,
        return PTR_ALIGN(crypto_tfm_ctx(tfm), align);
 }
 
-static inline void *crypto_tfm_ctx_aligned(struct crypto_tfm *tfm)
-{
-       return crypto_tfm_ctx_align(tfm, crypto_tfm_alg_alignmask(tfm) + 1);
-}
-
 static inline unsigned int crypto_dma_align(void)
 {
        return CRYPTO_DMA_ALIGN;
index 2835069c5997eed8fc7dfadcbb217b6592d4a507..545dbefe3e13c6b790174c82590170100a6ae56d 100644 (file)
@@ -78,7 +78,7 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
                                                       bool retry_support,
                                                       int (*cbk_do_batch)(struct crypto_engine *engine),
                                                       bool rt, int qlen);
-int crypto_engine_exit(struct crypto_engine *engine);
+void crypto_engine_exit(struct crypto_engine *engine);
 
 int crypto_engine_register_aead(struct aead_engine_alg *alg);
 void crypto_engine_unregister_aead(struct aead_engine_alg *alg);
index f7c2a22cd776daa3eb42c6310ddd8bd69ee175be..c7bdbece27ccbc4dce6581b50654c35e20ead9d2 100644 (file)
@@ -250,16 +250,7 @@ struct shash_alg {
 #undef HASH_ALG_COMMON_STAT
 
 struct crypto_ahash {
-       int (*init)(struct ahash_request *req);
-       int (*update)(struct ahash_request *req);
-       int (*final)(struct ahash_request *req);
-       int (*finup)(struct ahash_request *req);
-       int (*digest)(struct ahash_request *req);
-       int (*export)(struct ahash_request *req, void *out);
-       int (*import)(struct ahash_request *req, const void *in);
-       int (*setkey)(struct crypto_ahash *tfm, const u8 *key,
-                     unsigned int keylen);
-
+       bool using_shash; /* Underlying algorithm is shash, not ahash */
        unsigned int statesize;
        unsigned int reqsize;
        struct crypto_tfm base;
@@ -342,12 +333,6 @@ static inline const char *crypto_ahash_driver_name(struct crypto_ahash *tfm)
        return crypto_tfm_alg_driver_name(crypto_ahash_tfm(tfm));
 }
 
-static inline unsigned int crypto_ahash_alignmask(
-       struct crypto_ahash *tfm)
-{
-       return crypto_tfm_alg_alignmask(crypto_ahash_tfm(tfm));
-}
-
 /**
  * crypto_ahash_blocksize() - obtain block size for cipher
  * @tfm: cipher handle
@@ -519,10 +504,7 @@ int crypto_ahash_digest(struct ahash_request *req);
  *
  * Return: 0 if the export was successful; < 0 if an error occurred
  */
-static inline int crypto_ahash_export(struct ahash_request *req, void *out)
-{
-       return crypto_ahash_reqtfm(req)->export(req, out);
-}
+int crypto_ahash_export(struct ahash_request *req, void *out);
 
 /**
  * crypto_ahash_import() - import message digest state
@@ -535,15 +517,7 @@ static inline int crypto_ahash_export(struct ahash_request *req, void *out)
  *
  * Return: 0 if the import was successful; < 0 if an error occurred
  */
-static inline int crypto_ahash_import(struct ahash_request *req, const void *in)
-{
-       struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-
-       if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-               return -ENOKEY;
-
-       return tfm->import(req, in);
-}
+int crypto_ahash_import(struct ahash_request *req, const void *in);
 
 /**
  * crypto_ahash_init() - (re)initialize message digest handle
@@ -556,36 +530,7 @@ static inline int crypto_ahash_import(struct ahash_request *req, const void *in)
  *
  * Return: see crypto_ahash_final()
  */
-static inline int crypto_ahash_init(struct ahash_request *req)
-{
-       struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-
-       if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-               return -ENOKEY;
-
-       return tfm->init(req);
-}
-
-static inline struct crypto_istat_hash *hash_get_stat(
-       struct hash_alg_common *alg)
-{
-#ifdef CONFIG_CRYPTO_STATS
-       return &alg->stat;
-#else
-       return NULL;
-#endif
-}
-
-static inline int crypto_hash_errstat(struct hash_alg_common *alg, int err)
-{
-       if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
-               return err;
-
-       if (err && err != -EINPROGRESS && err != -EBUSY)
-               atomic64_inc(&hash_get_stat(alg)->err_cnt);
-
-       return err;
-}
+int crypto_ahash_init(struct ahash_request *req);
 
 /**
  * crypto_ahash_update() - add data to message digest for processing
@@ -598,16 +543,7 @@ static inline int crypto_hash_errstat(struct hash_alg_common *alg, int err)
  *
  * Return: see crypto_ahash_final()
  */
-static inline int crypto_ahash_update(struct ahash_request *req)
-{
-       struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-       struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
-
-       if (IS_ENABLED(CONFIG_CRYPTO_STATS))
-               atomic64_add(req->nbytes, &hash_get_stat(alg)->hash_tlen);
-
-       return crypto_hash_errstat(alg, tfm->update(req));
-}
+int crypto_ahash_update(struct ahash_request *req);
 
 /**
  * DOC: Asynchronous Hash Request Handle
@@ -798,12 +734,6 @@ static inline const char *crypto_shash_driver_name(struct crypto_shash *tfm)
        return crypto_tfm_alg_driver_name(crypto_shash_tfm(tfm));
 }
 
-static inline unsigned int crypto_shash_alignmask(
-       struct crypto_shash *tfm)
-{
-       return crypto_tfm_alg_alignmask(crypto_shash_tfm(tfm));
-}
-
 /**
  * crypto_shash_blocksize() - obtain block size for cipher
  * @tfm: cipher handle
@@ -952,10 +882,7 @@ int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
  * Context: Any context.
  * Return: 0 if the export creation was successful; < 0 if an error occurred
  */
-static inline int crypto_shash_export(struct shash_desc *desc, void *out)
-{
-       return crypto_shash_alg(desc->tfm)->export(desc, out);
-}
+int crypto_shash_export(struct shash_desc *desc, void *out);
 
 /**
  * crypto_shash_import() - import operational state
@@ -969,15 +896,7 @@ static inline int crypto_shash_export(struct shash_desc *desc, void *out)
  * Context: Any context.
  * Return: 0 if the import was successful; < 0 if an error occurred
  */
-static inline int crypto_shash_import(struct shash_desc *desc, const void *in)
-{
-       struct crypto_shash *tfm = desc->tfm;
-
-       if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
-               return -ENOKEY;
-
-       return crypto_shash_alg(tfm)->import(desc, in);
-}
+int crypto_shash_import(struct shash_desc *desc, const void *in);
 
 /**
  * crypto_shash_init() - (re)initialize message digest
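
For reference, a hedged sketch of how the export/import pair is typically used to checkpoint a hash midstream; the tfm/request setup is assumed to have happened already and error handling is trimmed:

	u8 state[HASH_MAX_STATESIZE];

	crypto_ahash_init(req);
	crypto_ahash_update(req);		/* hash the data attached to req */
	crypto_ahash_export(req, state);	/* freeze the partial state */
	/* ... later, possibly on another request for the same tfm ... */
	crypto_ahash_import(req2, state);	/* resume from the saved state */
	crypto_ahash_final(req2);		/* emit the digest */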
index dd4f067850493b46c48d0bb512c57bf5deee2339..d6927739f8b2c9c35b3994240b188838cb6a0f69 100644 (file)
@@ -10,6 +10,7 @@
 
 #include <crypto/sha1.h>
 #include <crypto/sha2.h>
+#include <crypto/sha3.h>
 #include <crypto/md5.h>
 #include <crypto/streebog.h>
 
index cf65676e45f4d4ee57be1fa864e2fa271006bc81..59c707e4dea467be1320ef458c51ddbd1568cd71 100644 (file)
@@ -18,15 +18,13 @@ struct crypto_hash_walk {
        char *data;
 
        unsigned int offset;
-       unsigned int alignmask;
+       unsigned int flags;
 
        struct page *pg;
        unsigned int entrylen;
 
        unsigned int total;
        struct scatterlist *sg;
-
-       unsigned int flags;
 };
 
 struct ahash_instance {
@@ -269,11 +267,6 @@ static inline struct crypto_shash *crypto_spawn_shash(
        return crypto_spawn_tfm2(&spawn->base);
 }
 
-static inline void *crypto_shash_ctx_aligned(struct crypto_shash *tfm)
-{
-       return crypto_tfm_ctx_aligned(&tfm->base);
-}
-
 static inline struct crypto_shash *__crypto_shash_cast(struct crypto_tfm *tfm)
 {
        return container_of(tfm, struct crypto_shash, base);
index fb3d9e899f526b08e9b456418860e49b6309f206..7ae42afdcf3ed6c8d3965dd14534b7fba9e7f668 100644 (file)
@@ -36,10 +36,25 @@ struct skcipher_instance {
        };
 };
 
+struct lskcipher_instance {
+       void (*free)(struct lskcipher_instance *inst);
+       union {
+               struct {
+                       char head[offsetof(struct lskcipher_alg, co.base)];
+                       struct crypto_instance base;
+               } s;
+               struct lskcipher_alg alg;
+       };
+};
+
 struct crypto_skcipher_spawn {
        struct crypto_spawn base;
 };
 
+struct crypto_lskcipher_spawn {
+       struct crypto_spawn base;
+};
+
 struct skcipher_walk {
        union {
                struct {
@@ -80,6 +95,12 @@ static inline struct crypto_instance *skcipher_crypto_instance(
        return &inst->s.base;
 }
 
+static inline struct crypto_instance *lskcipher_crypto_instance(
+       struct lskcipher_instance *inst)
+{
+       return &inst->s.base;
+}
+
 static inline struct skcipher_instance *skcipher_alg_instance(
        struct crypto_skcipher *skcipher)
 {
@@ -87,11 +108,23 @@ static inline struct skcipher_instance *skcipher_alg_instance(
                            struct skcipher_instance, alg);
 }
 
+static inline struct lskcipher_instance *lskcipher_alg_instance(
+       struct crypto_lskcipher *lskcipher)
+{
+       return container_of(crypto_lskcipher_alg(lskcipher),
+                           struct lskcipher_instance, alg);
+}
+
 static inline void *skcipher_instance_ctx(struct skcipher_instance *inst)
 {
        return crypto_instance_ctx(skcipher_crypto_instance(inst));
 }
 
+static inline void *lskcipher_instance_ctx(struct lskcipher_instance *inst)
+{
+       return crypto_instance_ctx(lskcipher_crypto_instance(inst));
+}
+
 static inline void skcipher_request_complete(struct skcipher_request *req, int err)
 {
        crypto_request_complete(&req->base, err);
@@ -101,21 +134,36 @@ int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,
                         struct crypto_instance *inst,
                         const char *name, u32 type, u32 mask);
 
+int crypto_grab_lskcipher(struct crypto_lskcipher_spawn *spawn,
+                         struct crypto_instance *inst,
+                         const char *name, u32 type, u32 mask);
+
 static inline void crypto_drop_skcipher(struct crypto_skcipher_spawn *spawn)
 {
        crypto_drop_spawn(&spawn->base);
 }
 
-static inline struct skcipher_alg *crypto_skcipher_spawn_alg(
-       struct crypto_skcipher_spawn *spawn)
+static inline void crypto_drop_lskcipher(struct crypto_lskcipher_spawn *spawn)
+{
+       crypto_drop_spawn(&spawn->base);
+}
+
+static inline struct lskcipher_alg *crypto_lskcipher_spawn_alg(
+       struct crypto_lskcipher_spawn *spawn)
 {
-       return container_of(spawn->base.alg, struct skcipher_alg, base);
+       return container_of(spawn->base.alg, struct lskcipher_alg, co.base);
 }
 
-static inline struct skcipher_alg *crypto_spawn_skcipher_alg(
+static inline struct skcipher_alg_common *crypto_spawn_skcipher_alg_common(
        struct crypto_skcipher_spawn *spawn)
 {
-       return crypto_skcipher_spawn_alg(spawn);
+       return container_of(spawn->base.alg, struct skcipher_alg_common, base);
+}
+
+static inline struct lskcipher_alg *crypto_spawn_lskcipher_alg(
+       struct crypto_lskcipher_spawn *spawn)
+{
+       return crypto_lskcipher_spawn_alg(spawn);
 }
 
 static inline struct crypto_skcipher *crypto_spawn_skcipher(
@@ -124,6 +172,12 @@ static inline struct crypto_skcipher *crypto_spawn_skcipher(
        return crypto_spawn_tfm2(&spawn->base);
 }
 
+static inline struct crypto_lskcipher *crypto_spawn_lskcipher(
+       struct crypto_lskcipher_spawn *spawn)
+{
+       return crypto_spawn_tfm2(&spawn->base);
+}
+
 static inline void crypto_skcipher_set_reqsize(
        struct crypto_skcipher *skcipher, unsigned int reqsize)
 {
@@ -144,6 +198,13 @@ void crypto_unregister_skciphers(struct skcipher_alg *algs, int count);
 int skcipher_register_instance(struct crypto_template *tmpl,
                               struct skcipher_instance *inst);
 
+int crypto_register_lskcipher(struct lskcipher_alg *alg);
+void crypto_unregister_lskcipher(struct lskcipher_alg *alg);
+int crypto_register_lskciphers(struct lskcipher_alg *algs, int count);
+void crypto_unregister_lskciphers(struct lskcipher_alg *algs, int count);
+int lskcipher_register_instance(struct crypto_template *tmpl,
+                               struct lskcipher_instance *inst);
+
 int skcipher_walk_done(struct skcipher_walk *walk, int err);
 int skcipher_walk_virt(struct skcipher_walk *walk,
                       struct skcipher_request *req,
@@ -166,6 +227,11 @@ static inline void *crypto_skcipher_ctx(struct crypto_skcipher *tfm)
        return crypto_tfm_ctx(&tfm->base);
 }
 
+static inline void *crypto_lskcipher_ctx(struct crypto_lskcipher *tfm)
+{
+       return crypto_tfm_ctx(&tfm->base);
+}
+
 static inline void *crypto_skcipher_ctx_dma(struct crypto_skcipher *tfm)
 {
        return crypto_tfm_ctx_dma(&tfm->base);
@@ -191,41 +257,6 @@ static inline u32 skcipher_request_flags(struct skcipher_request *req)
        return req->base.flags;
 }
 
-static inline unsigned int crypto_skcipher_alg_min_keysize(
-       struct skcipher_alg *alg)
-{
-       return alg->min_keysize;
-}
-
-static inline unsigned int crypto_skcipher_alg_max_keysize(
-       struct skcipher_alg *alg)
-{
-       return alg->max_keysize;
-}
-
-static inline unsigned int crypto_skcipher_alg_walksize(
-       struct skcipher_alg *alg)
-{
-       return alg->walksize;
-}
-
-/**
- * crypto_skcipher_walksize() - obtain walk size
- * @tfm: cipher handle
- *
- * In some cases, algorithms can only perform optimally when operating on
- * multiple blocks in parallel. This is reflected by the walksize, which
- * must be a multiple of the chunksize (or equal if the concern does not
- * apply)
- *
- * Return: walk size in bytes
- */
-static inline unsigned int crypto_skcipher_walksize(
-       struct crypto_skcipher *tfm)
-{
-       return crypto_skcipher_alg_walksize(crypto_skcipher_alg(tfm));
-}
-
 /* Helpers for simple block cipher modes of operation */
 struct skcipher_ctx_simple {
        struct crypto_cipher *cipher;   /* underlying block cipher */
@@ -249,5 +280,24 @@ static inline struct crypto_alg *skcipher_ialg_simple(
        return crypto_spawn_cipher_alg(spawn);
 }
 
+static inline struct crypto_lskcipher *lskcipher_cipher_simple(
+       struct crypto_lskcipher *tfm)
+{
+       struct crypto_lskcipher **ctx = crypto_lskcipher_ctx(tfm);
+
+       return *ctx;
+}
+
+struct lskcipher_instance *lskcipher_alloc_instance_simple(
+       struct crypto_template *tmpl, struct rtattr **tb);
+
+static inline struct lskcipher_alg *lskcipher_ialg_simple(
+       struct lskcipher_instance *inst)
+{
+       struct crypto_lskcipher_spawn *spawn = lskcipher_instance_ctx(inst);
+
+       return crypto_lskcipher_spawn_alg(spawn);
+}
+
 #endif /* _CRYPTO_INTERNAL_SKCIPHER_H */
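
Taken together, these helpers let a simple mode template be expressed as a thin wrapper; a hedged sketch, assuming the mode's encrypt/decrypt callbacks are filled in before registration:

	static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
	{
		struct lskcipher_instance *inst;

		inst = lskcipher_alloc_instance_simple(tmpl, tb);
		if (IS_ERR(inst))
			return PTR_ERR(inst);

		/* set inst->alg.encrypt/.decrypt for the mode here */

		return lskcipher_register_instance(tmpl, inst);
	}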
 
index 641b4714c448dc780d042014fd62a0b0ecf6794a..d25186bb2be31d46b850b8c342e9a2dce772aacb 100644 (file)
@@ -79,7 +79,7 @@ int crypto_sig_maxsize(struct crypto_sig *tfm);
  * @tfm:       signature tfm handle allocated with crypto_alloc_sig()
  * @src:       source buffer
  * @slen:      source length
- * @dst:       destinatino obuffer
+ * @dst:       destination buffer
  * @dlen:      destination length
  *
  * Return: zero on success; error code in case of error
index 080d1ba3611d8d924f82178c2fc4cad4d54b425a..ea18af48346b157c195a721bb35f83666e3baec8 100644 (file)
@@ -49,6 +49,10 @@ struct crypto_sync_skcipher {
        struct crypto_skcipher base;
 };
 
+struct crypto_lskcipher {
+       struct crypto_tfm base;
+};
+
 /*
  * struct crypto_istat_cipher - statistics for cipher algorithm
  * @encrypt_cnt:       number of encrypt requests
@@ -65,6 +69,43 @@ struct crypto_istat_cipher {
        atomic64_t err_cnt;
 };
 
+#ifdef CONFIG_CRYPTO_STATS
+#define SKCIPHER_ALG_COMMON_STAT struct crypto_istat_cipher stat;
+#else
+#define SKCIPHER_ALG_COMMON_STAT
+#endif
+
+/*
+ * struct skcipher_alg_common - common properties of skcipher_alg
+ * @min_keysize: Minimum key size supported by the transformation. This is the
+ *              smallest key length supported by this transformation algorithm.
+ *              This must be set to one of the pre-defined values as this is
+ *              not hardware specific. Possible values for this field can be
+ *              found via git grep "_MIN_KEY_SIZE" include/crypto/
+ * @max_keysize: Maximum key size supported by the transformation. This is the
+ *              largest key length supported by this transformation algorithm.
+ *              This must be set to one of the pre-defined values as this is
+ *              not hardware specific. Possible values for this field can be
+ *              found via git grep "_MAX_KEY_SIZE" include/crypto/
+ * @ivsize: IV size applicable for transformation. The consumer must provide an
+ *         IV of exactly that size to perform the encrypt or decrypt operation.
+ * @chunksize: Equal to the block size, except for stream ciphers such as
+ *            CTR (whose block size is one), where it is set to the
+ *            underlying cipher's block size.
+ * @stat: Statistics for cipher algorithm
+ * @base: Definition of a generic crypto algorithm.
+ */
+#define SKCIPHER_ALG_COMMON {          \
+       unsigned int min_keysize;       \
+       unsigned int max_keysize;       \
+       unsigned int ivsize;            \
+       unsigned int chunksize;         \
+                                       \
+       SKCIPHER_ALG_COMMON_STAT        \
+                                       \
+       struct crypto_alg base;         \
+}
+struct skcipher_alg_common SKCIPHER_ALG_COMMON;
+
 /**
  * struct skcipher_alg - symmetric key cipher definition
  * @min_keysize: Minimum key size supported by the transformation. This is the
@@ -120,6 +161,7 @@ struct crypto_istat_cipher {
  *           in parallel. Should be a multiple of chunksize.
  * @stat: Statistics for cipher algorithm
  * @base: Definition of a generic crypto algorithm.
+ * @co: see struct skcipher_alg_common
  *
  * All fields except @ivsize are mandatory and must be filled.
  */
@@ -131,17 +173,55 @@ struct skcipher_alg {
        int (*init)(struct crypto_skcipher *tfm);
        void (*exit)(struct crypto_skcipher *tfm);
 
-       unsigned int min_keysize;
-       unsigned int max_keysize;
-       unsigned int ivsize;
-       unsigned int chunksize;
        unsigned int walksize;
 
-#ifdef CONFIG_CRYPTO_STATS
-       struct crypto_istat_cipher stat;
-#endif
+       union {
+               struct SKCIPHER_ALG_COMMON;
+               struct skcipher_alg_common co;
+       };
+};
 
-       struct crypto_alg base;
+/**
+ * struct lskcipher_alg - linear symmetric key cipher definition
+ * @setkey: Set key for the transformation. This function is used to either
+ *         program a supplied key into the hardware or store the key in the
+ *         transformation context for programming it later. Note that this
+ *         function does modify the transformation context. This function can
+ *         be called multiple times during the existence of the transformation
+ *         object, so one must make sure the key is properly reprogrammed into
+ *         the hardware. This function is also responsible for checking the key
+ *         length for validity. In case a software fallback was put in place in
+ *         the @cra_init call, this function might need to use the fallback if
+ *         the algorithm doesn't support all of the key sizes.
+ * @encrypt: Encrypt a number of bytes. This function is used to encrypt
+ *          the supplied data.  This function shall not modify
+ *          the transformation context, as this function may be called
+ *          in parallel with the same transformation object.  Data
+ *          may be left over if length is not a multiple of blocks
+ *          and there is more to come (final == false).  The number of
+ *          left-over bytes should be returned in case of success.
+ * @decrypt: Decrypt a number of bytes. This is a reverse counterpart to
+ *          @encrypt and the conditions are exactly the same.
+ * @init: Initialize the cryptographic transformation object. This function
+ *       is used to initialize the cryptographic transformation object.
+ *       This function is called only once at the instantiation time, right
+ *       after the transformation context was allocated.
+ * @exit: Deinitialize the cryptographic transformation object. This is a
+ *       counterpart to @init, used to remove various changes set in
+ *       @init.
+ * @co: see struct skcipher_alg_common
+ */
+struct lskcipher_alg {
+       int (*setkey)(struct crypto_lskcipher *tfm, const u8 *key,
+                     unsigned int keylen);
+       int (*encrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+                      u8 *dst, unsigned len, u8 *iv, bool final);
+       int (*decrypt)(struct crypto_lskcipher *tfm, const u8 *src,
+                      u8 *dst, unsigned len, u8 *iv, bool final);
+       int (*init)(struct crypto_lskcipher *tfm);
+       void (*exit)(struct crypto_lskcipher *tfm);
+
+       struct skcipher_alg_common co;
 };
 
 #define MAX_SYNC_SKCIPHER_REQSIZE      384
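
Because the common fields now live in an anonymous union, they are reachable both directly and through the @co member; an illustrative compile-time check of that aliasing (not code from this series):

	static_assert(offsetof(struct skcipher_alg, min_keysize) ==
		      offsetof(struct skcipher_alg, co.min_keysize));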
@@ -213,12 +293,36 @@ struct crypto_skcipher *crypto_alloc_skcipher(const char *alg_name,
 struct crypto_sync_skcipher *crypto_alloc_sync_skcipher(const char *alg_name,
                                              u32 type, u32 mask);
 
+
+/**
+ * crypto_alloc_lskcipher() - allocate linear symmetric key cipher handle
+ * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
+ *           lskcipher
+ * @type: specifies the type of the cipher
+ * @mask: specifies the mask for the cipher
+ *
+ * Allocate a cipher handle for an lskcipher. The returned struct
+ * crypto_lskcipher is the cipher handle that is required for any subsequent
+ * API invocation for that lskcipher.
+ *
+ * Return: allocated cipher handle in case of success; IS_ERR() is true in case
+ *        of an error, PTR_ERR() returns the error code.
+ */
+struct crypto_lskcipher *crypto_alloc_lskcipher(const char *alg_name,
+                                               u32 type, u32 mask);
+
 static inline struct crypto_tfm *crypto_skcipher_tfm(
        struct crypto_skcipher *tfm)
 {
        return &tfm->base;
 }
 
+static inline struct crypto_tfm *crypto_lskcipher_tfm(
+       struct crypto_lskcipher *tfm)
+{
+       return &tfm->base;
+}
+
 /**
  * crypto_free_skcipher() - zeroize and free cipher handle
  * @tfm: cipher handle to be freed
@@ -235,6 +339,17 @@ static inline void crypto_free_sync_skcipher(struct crypto_sync_skcipher *tfm)
        crypto_free_skcipher(&tfm->base);
 }
 
+/**
+ * crypto_free_lskcipher() - zeroize and free cipher handle
+ * @tfm: cipher handle to be freed
+ *
+ * If @tfm is a NULL or error pointer, this function does nothing.
+ */
+static inline void crypto_free_lskcipher(struct crypto_lskcipher *tfm)
+{
+       crypto_destroy_tfm(tfm, crypto_lskcipher_tfm(tfm));
+}
+
 /**
  * crypto_has_skcipher() - Search for the availability of an skcipher.
  * @alg_name: is the cra_name / name or cra_driver_name / driver name of the
@@ -253,6 +368,19 @@ static inline const char *crypto_skcipher_driver_name(
        return crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm));
 }
 
+static inline const char *crypto_lskcipher_driver_name(
+       struct crypto_lskcipher *tfm)
+{
+       return crypto_tfm_alg_driver_name(crypto_lskcipher_tfm(tfm));
+}
+
+static inline struct skcipher_alg_common *crypto_skcipher_alg_common(
+       struct crypto_skcipher *tfm)
+{
+       return container_of(crypto_skcipher_tfm(tfm)->__crt_alg,
+                           struct skcipher_alg_common, base);
+}
+
 static inline struct skcipher_alg *crypto_skcipher_alg(
        struct crypto_skcipher *tfm)
 {
@@ -260,9 +388,11 @@ static inline struct skcipher_alg *crypto_skcipher_alg(
                            struct skcipher_alg, base);
 }
 
-static inline unsigned int crypto_skcipher_alg_ivsize(struct skcipher_alg *alg)
+static inline struct lskcipher_alg *crypto_lskcipher_alg(
+       struct crypto_lskcipher *tfm)
 {
-       return alg->ivsize;
+       return container_of(crypto_lskcipher_tfm(tfm)->__crt_alg,
+                           struct lskcipher_alg, co.base);
 }
 
 /**
@@ -276,7 +406,7 @@ static inline unsigned int crypto_skcipher_alg_ivsize(struct skcipher_alg *alg)
  */
 static inline unsigned int crypto_skcipher_ivsize(struct crypto_skcipher *tfm)
 {
-       return crypto_skcipher_alg(tfm)->ivsize;
+       return crypto_skcipher_alg_common(tfm)->ivsize;
 }
 
 static inline unsigned int crypto_sync_skcipher_ivsize(
@@ -285,6 +415,21 @@ static inline unsigned int crypto_sync_skcipher_ivsize(
        return crypto_skcipher_ivsize(&tfm->base);
 }
 
+/**
+ * crypto_lskcipher_ivsize() - obtain IV size
+ * @tfm: cipher handle
+ *
+ * The size of the IV for the lskcipher referenced by the cipher handle is
+ * returned. This IV size may be zero if the cipher does not need an IV.
+ *
+ * Return: IV size in bytes
+ */
+static inline unsigned int crypto_lskcipher_ivsize(
+       struct crypto_lskcipher *tfm)
+{
+       return crypto_lskcipher_alg(tfm)->co.ivsize;
+}
+
 /**
  * crypto_skcipher_blocksize() - obtain block size of cipher
  * @tfm: cipher handle
@@ -301,10 +446,20 @@ static inline unsigned int crypto_skcipher_blocksize(
        return crypto_tfm_alg_blocksize(crypto_skcipher_tfm(tfm));
 }
 
-static inline unsigned int crypto_skcipher_alg_chunksize(
-       struct skcipher_alg *alg)
+/**
+ * crypto_lskcipher_blocksize() - obtain block size of cipher
+ * @tfm: cipher handle
+ *
+ * The block size for the lskcipher referenced with the cipher handle is
+ * returned. The caller may use that information to allocate appropriate
+ * memory for the data returned by the encryption or decryption operation.
+ *
+ * Return: block size of cipher
+ */
+static inline unsigned int crypto_lskcipher_blocksize(
+       struct crypto_lskcipher *tfm)
 {
-       return alg->chunksize;
+       return crypto_tfm_alg_blocksize(crypto_lskcipher_tfm(tfm));
 }
 
 /**
@@ -321,7 +476,24 @@ static inline unsigned int crypto_skcipher_alg_chunksize(
 static inline unsigned int crypto_skcipher_chunksize(
        struct crypto_skcipher *tfm)
 {
-       return crypto_skcipher_alg_chunksize(crypto_skcipher_alg(tfm));
+       return crypto_skcipher_alg_common(tfm)->chunksize;
+}
+
+/**
+ * crypto_lskcipher_chunksize() - obtain chunk size
+ * @tfm: cipher handle
+ *
+ * The block size is set to one for ciphers such as CTR.  However,
+ * you still need to provide incremental updates in multiples of
+ * the underlying block size as the IV does not have sub-block
+ * granularity.  This is known in this API as the chunk size.
+ *
+ * Return: chunk size in bytes
+ */
+static inline unsigned int crypto_lskcipher_chunksize(
+       struct crypto_lskcipher *tfm)
+{
+       return crypto_lskcipher_alg(tfm)->co.chunksize;
 }
 
 static inline unsigned int crypto_sync_skcipher_blocksize(
@@ -336,6 +508,12 @@ static inline unsigned int crypto_skcipher_alignmask(
        return crypto_tfm_alg_alignmask(crypto_skcipher_tfm(tfm));
 }
 
+static inline unsigned int crypto_lskcipher_alignmask(
+       struct crypto_lskcipher *tfm)
+{
+       return crypto_tfm_alg_alignmask(crypto_lskcipher_tfm(tfm));
+}
+
 static inline u32 crypto_skcipher_get_flags(struct crypto_skcipher *tfm)
 {
        return crypto_tfm_get_flags(crypto_skcipher_tfm(tfm));
@@ -371,6 +549,23 @@ static inline void crypto_sync_skcipher_clear_flags(
        crypto_skcipher_clear_flags(&tfm->base, flags);
 }
 
+static inline u32 crypto_lskcipher_get_flags(struct crypto_lskcipher *tfm)
+{
+       return crypto_tfm_get_flags(crypto_lskcipher_tfm(tfm));
+}
+
+static inline void crypto_lskcipher_set_flags(struct crypto_lskcipher *tfm,
+                                              u32 flags)
+{
+       crypto_tfm_set_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
+static inline void crypto_lskcipher_clear_flags(struct crypto_lskcipher *tfm,
+                                                u32 flags)
+{
+       crypto_tfm_clear_flags(crypto_lskcipher_tfm(tfm), flags);
+}
+
 /**
  * crypto_skcipher_setkey() - set key for cipher
  * @tfm: cipher handle
@@ -396,16 +591,47 @@ static inline int crypto_sync_skcipher_setkey(struct crypto_sync_skcipher *tfm,
        return crypto_skcipher_setkey(&tfm->base, key, keylen);
 }
 
+/**
+ * crypto_lskcipher_setkey() - set key for cipher
+ * @tfm: cipher handle
+ * @key: buffer holding the key
+ * @keylen: length of the key in bytes
+ *
+ * The caller provided key is set for the lskcipher referenced by the cipher
+ * handle.
+ *
+ * Note, the key length determines the cipher variant. Many block ciphers
+ * support several key sizes, such as AES-128 vs. AES-192 vs. AES-256. When
+ * providing a 16 byte key for an AES cipher handle, AES-128 is performed.
+ *
+ * Return: 0 if the setting of the key was successful; < 0 if an error occurred
+ */
+int crypto_lskcipher_setkey(struct crypto_lskcipher *tfm,
+                           const u8 *key, unsigned int keylen);
+
 static inline unsigned int crypto_skcipher_min_keysize(
        struct crypto_skcipher *tfm)
 {
-       return crypto_skcipher_alg(tfm)->min_keysize;
+       return crypto_skcipher_alg_common(tfm)->min_keysize;
 }
 
 static inline unsigned int crypto_skcipher_max_keysize(
        struct crypto_skcipher *tfm)
 {
-       return crypto_skcipher_alg(tfm)->max_keysize;
+       return crypto_skcipher_alg_common(tfm)->max_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_min_keysize(
+       struct crypto_lskcipher *tfm)
+{
+       return crypto_lskcipher_alg(tfm)->co.min_keysize;
+}
+
+static inline unsigned int crypto_lskcipher_max_keysize(
+       struct crypto_lskcipher *tfm)
+{
+       return crypto_lskcipher_alg(tfm)->co.max_keysize;
 }
 
 /**
@@ -457,6 +683,42 @@ int crypto_skcipher_encrypt(struct skcipher_request *req);
  */
 int crypto_skcipher_decrypt(struct skcipher_request *req);
 
+/**
+ * crypto_lskcipher_encrypt() - encrypt plaintext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ *      by crypto_lskcipher_ivsize
+ *
+ * Encrypt plaintext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful; if positive,
+ *        that many bytes were left unprocessed;
+ *        < 0 if an error occurred
+ */
+int crypto_lskcipher_encrypt(struct crypto_lskcipher *tfm, const u8 *src,
+                            u8 *dst, unsigned len, u8 *iv);
+
+/**
+ * crypto_lskcipher_decrypt() - decrypt ciphertext
+ * @tfm: lskcipher handle
+ * @src: source buffer
+ * @dst: destination buffer
+ * @len: number of bytes to process
+ * @iv: IV for the cipher operation which must comply with the IV size defined
+ *      by crypto_lskcipher_ivsize
+ *
+ * Decrypt ciphertext data using the lskcipher handle.
+ *
+ * Return: >=0 if the cipher operation was successful; if positive,
+ *        that many bytes were left unprocessed;
+ *        < 0 if an error occurred
+ */
+int crypto_lskcipher_decrypt(struct crypto_lskcipher *tfm, const u8 *src,
+                            u8 *dst, unsigned len, u8 *iv);
+
 /**
  * DOC: Symmetric Key Cipher Request Handle
  *
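
Putting the new entry points together, a linear-buffer consumer could look like the following sketch; "cbc(aes)", the 16-byte key, and the src/dst/len names are illustrative assumptions:

	struct crypto_lskcipher *tfm;
	u8 iv[16] = {};	/* must match crypto_lskcipher_ivsize(tfm) */
	int err;

	tfm = crypto_alloc_lskcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_lskcipher_setkey(tfm, key, 16);
	if (!err)
		/* a positive return would mean trailing bytes were not consumed */
		err = crypto_lskcipher_encrypt(tfm, src, dst, len, iv);

	crypto_free_lskcipher(tfm);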
index 31f6fee0c36c6448c00358679301db7538bf307a..b164da5e129e82aac45cc65fb7042fdbdfbf4fa7 100644 (file)
@@ -24,6 +24,7 @@
 #define CRYPTO_ALG_TYPE_CIPHER         0x00000001
 #define CRYPTO_ALG_TYPE_COMPRESS       0x00000002
 #define CRYPTO_ALG_TYPE_AEAD           0x00000003
+#define CRYPTO_ALG_TYPE_LSKCIPHER      0x00000004
 #define CRYPTO_ALG_TYPE_SKCIPHER       0x00000005
 #define CRYPTO_ALG_TYPE_AKCIPHER       0x00000006
 #define CRYPTO_ALG_TYPE_SIG            0x00000007
@@ -35,8 +36,6 @@
 #define CRYPTO_ALG_TYPE_SHASH          0x0000000e
 #define CRYPTO_ALG_TYPE_AHASH          0x0000000f
 
-#define CRYPTO_ALG_TYPE_HASH_MASK      0x0000000e
-#define CRYPTO_ALG_TYPE_AHASH_MASK     0x0000000e
 #define CRYPTO_ALG_TYPE_ACOMPRESS_MASK 0x0000000e
 
 #define CRYPTO_ALG_LARVAL              0x00000010
  *       crypto_aead_walksize() (with the remainder going at the end), no chunk
  *       can cross a page boundary or a scatterlist element boundary.
  *    ahash:
- *     - The result buffer must be aligned to the algorithm's alignmask.
  *     - crypto_ahash_finup() must not be used unless the algorithm implements
  *       ->finup() natively.
  */
@@ -279,18 +277,20 @@ struct compress_alg {
  * @cra_ctxsize: Size of the operational context of the transformation. This
  *              value informs the kernel crypto API about the memory size
  *              needed to be allocated for the transformation context.
- * @cra_alignmask: Alignment mask for the input and output data buffer. The data
- *                buffer containing the input data for the algorithm must be
- *                aligned to this alignment mask. The data buffer for the
- *                output data must be aligned to this alignment mask. Note that
- *                the Crypto API will do the re-alignment in software, but
- *                only under special conditions and there is a performance hit.
- *                The re-alignment happens at these occasions for different
- *                @cra_u types: cipher -- For both input data and output data
- *                buffer; ahash -- For output hash destination buf; shash --
- *                For output hash destination buf.
- *                This is needed on hardware which is flawed by design and
- *                cannot pick data from arbitrary addresses.
+ * @cra_alignmask: For cipher, skcipher, lskcipher, and aead algorithms this is
+ *                1 less than the alignment, in bytes, that the algorithm
+ *                implementation requires for input and output buffers.  When
+ *                the crypto API is invoked with buffers that are not aligned
+ *                to this alignment, the crypto API automatically utilizes
+ *                appropriately aligned temporary buffers to comply with what
+ *                the algorithm needs.  (For scatterlists this happens only if
+ *                the algorithm uses the skcipher_walk helper functions.)  This
+ *                misalignment handling carries a performance penalty, so it is
+ *                preferred that algorithms do not set a nonzero alignmask.
+ *                Also, crypto API users may wish to allocate buffers aligned
+ *                to the alignmask of the algorithm being used, in order to
+ *                avoid the API having to realign them.  Note: the alignmask is
+ *                not supported for hash algorithms and is always 0 for them.
  * @cra_priority: Priority of this transformation implementation. In case
  *               multiple transformations with same @cra_name are available to
  *               the Crypto API, the kernel will use the one with highest
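
In practice, API users that want to avoid the realignment copies can over-allocate and align up front; a minimal sketch, assuming tfm is an existing skcipher handle and the NULL check is elided:

	unsigned int mask = crypto_skcipher_alignmask(tfm);
	u8 *raw = kmalloc(len + mask, GFP_KERNEL);
	u8 *buf = PTR_ALIGN(raw, mask + 1);	/* aligned view for src/dst */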
index 39fbfb4be944bb0fcb24b134658860302b15e310..ddc7ebb705234c321d2cd7af0047f8a9c32c38ba 100644 (file)
@@ -144,6 +144,13 @@ enum qm_vf_state {
        QM_NOT_READY,
 };
 
+enum qm_misc_ctl_bits {
+       QM_DRIVER_REMOVING = 0x0,
+       QM_RST_SCHED,
+       QM_RESETTING,
+       QM_MODULE_PARAM,
+};
+
 enum qm_cap_bits {
        QM_SUPPORT_DB_ISOLATION = 0x0,
        QM_SUPPORT_FUNC_QOS,
@@ -269,6 +276,7 @@ struct hisi_qm_poll_data {
        struct hisi_qm *qm;
        struct work_struct work;
        u16 *qp_finish_id;
+       u16 eqe_num;
 };
 
 /**
@@ -285,6 +293,18 @@ struct qm_err_isolate {
        struct list_head qm_hw_errs;
 };
 
+struct qm_rsv_buf {
+       struct qm_sqc *sqc;
+       struct qm_cqc *cqc;
+       struct qm_eqc *eqc;
+       struct qm_aeqc *aeqc;
+       dma_addr_t sqc_dma;
+       dma_addr_t cqc_dma;
+       dma_addr_t eqc_dma;
+       dma_addr_t aeqc_dma;
+       struct qm_dma qcdma;
+};
+
 struct hisi_qm {
        enum qm_hw_ver ver;
        enum qm_fun_type fun_type;
@@ -317,6 +337,7 @@ struct hisi_qm {
        dma_addr_t cqc_dma;
        dma_addr_t eqe_dma;
        dma_addr_t aeqe_dma;
+       struct qm_rsv_buf xqc_buf;
 
        struct hisi_qm_status status;
        const struct hisi_qm_err_ini *err_ini;
@@ -471,6 +492,20 @@ static inline void hisi_qm_init_list(struct hisi_qm_list *qm_list)
        mutex_init(&qm_list->lock);
 }
 
+static inline void hisi_qm_add_list(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+{
+       mutex_lock(&qm_list->lock);
+       list_add_tail(&qm->list, &qm_list->list);
+       mutex_unlock(&qm_list->lock);
+}
+
+static inline void hisi_qm_del_list(struct hisi_qm *qm, struct hisi_qm_list *qm_list)
+{
+       mutex_lock(&qm_list->lock);
+       list_del(&qm->list);
+       mutex_unlock(&qm_list->lock);
+}
+
 int hisi_qm_init(struct hisi_qm *qm);
 void hisi_qm_uninit(struct hisi_qm *qm);
 int hisi_qm_start(struct hisi_qm *qm);
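
The new list helpers are meant to bracket a device's algorithm lifetime; a hedged sketch of the expected pairing ("example_devices" is an illustrative hisi_qm_list):

	hisi_qm_add_list(qm, &example_devices);	/* in probe, before registering algs */
	/* ... */
	hisi_qm_del_list(qm, &example_devices);	/* in remove, after unregistering algs */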
@@ -516,8 +551,8 @@ int hisi_qm_alloc_qps_node(struct hisi_qm_list *qm_list, int qp_num,
 void hisi_qm_free_qps(struct hisi_qp **qps, int qp_num);
 void hisi_qm_dev_shutdown(struct pci_dev *pdev);
 void hisi_qm_wait_task_finish(struct hisi_qm *qm, struct hisi_qm_list *qm_list);
-int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list);
-void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list);
+int hisi_qm_alg_register(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard);
+void hisi_qm_alg_unregister(struct hisi_qm *qm, struct hisi_qm_list *qm_list, int guard);
 int hisi_qm_resume(struct device *dev);
 int hisi_qm_suspend(struct device *dev);
 void hisi_qm_pm_uninit(struct hisi_qm *qm);
index 8a3115516a1ba9a6462f73f40c0109c998097904..136e9842120e88dd6db0406c9cf8aa5892501825 100644 (file)
@@ -63,5 +63,6 @@ extern void hwrng_unregister(struct hwrng *rng);
 extern void devm_hwrng_unregister(struct device *dve, struct hwrng *rng);
 
 extern long hwrng_msleep(struct hwrng *rng, unsigned int msecs);
+extern long hwrng_yield(struct hwrng *rng);
 
 #endif /* LINUX_HWRANDOM_H_ */
index f86a08ba0207ee1dd1bb704b6380d55b2d3feb1f..3921fbed0b28685cccfa0251e5b2d1cd4785324c 100644 (file)
  *       build_OID_registry.pl to generate the data for look_up_OID().
  */
 enum OID {
-       OID_id_dsa_with_sha1,           /* 1.2.840.10030.4.3 */
        OID_id_dsa,                     /* 1.2.840.10040.4.1 */
        OID_id_ecPublicKey,             /* 1.2.840.10045.2.1 */
        OID_id_prime192v1,              /* 1.2.840.10045.3.1.1 */
        OID_id_prime256v1,              /* 1.2.840.10045.3.1.7 */
-       OID_id_ecdsa_with_sha1,         /* 1.2.840.10045.4.1 */
        OID_id_ecdsa_with_sha224,       /* 1.2.840.10045.4.3.1 */
        OID_id_ecdsa_with_sha256,       /* 1.2.840.10045.4.3.2 */
        OID_id_ecdsa_with_sha384,       /* 1.2.840.10045.4.3.3 */
@@ -30,10 +28,6 @@ enum OID {
 
        /* PKCS#1 {iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-1(1)} */
        OID_rsaEncryption,              /* 1.2.840.113549.1.1.1 */
-       OID_md2WithRSAEncryption,       /* 1.2.840.113549.1.1.2 */
-       OID_md3WithRSAEncryption,       /* 1.2.840.113549.1.1.3 */
-       OID_md4WithRSAEncryption,       /* 1.2.840.113549.1.1.4 */
-       OID_sha1WithRSAEncryption,      /* 1.2.840.113549.1.1.5 */
        OID_sha256WithRSAEncryption,    /* 1.2.840.113549.1.1.11 */
        OID_sha384WithRSAEncryption,    /* 1.2.840.113549.1.1.12 */
        OID_sha512WithRSAEncryption,    /* 1.2.840.113549.1.1.13 */
@@ -49,11 +43,6 @@ enum OID {
        OID_smimeCapabilites,           /* 1.2.840.113549.1.9.15 */
        OID_smimeAuthenticatedAttrs,    /* 1.2.840.113549.1.9.16.2.11 */
 
-       /* {iso(1) member-body(2) us(840) rsadsi(113549) digestAlgorithm(2)} */
-       OID_md2,                        /* 1.2.840.113549.2.2 */
-       OID_md4,                        /* 1.2.840.113549.2.4 */
-       OID_md5,                        /* 1.2.840.113549.2.5 */
-
        OID_mskrb5,                     /* 1.2.840.48018.1.2.2 */
        OID_krb5,                       /* 1.2.840.113554.1.2.2 */
        OID_krb5u2u,                    /* 1.2.840.113554.1.2.2.3 */
@@ -75,7 +64,6 @@ enum OID {
        OID_PKU2U,                      /* 1.3.5.1.5.2.7 */
        OID_Scram,                      /* 1.3.6.1.5.5.14 */
        OID_certAuthInfoAccess,         /* 1.3.6.1.5.5.7.1.1 */
-       OID_sha1,                       /* 1.3.14.3.2.26 */
        OID_id_ansip384r1,              /* 1.3.132.0.34 */
        OID_sha256,                     /* 2.16.840.1.101.3.4.2.1 */
        OID_sha384,                     /* 2.16.840.1.101.3.4.2.2 */
@@ -141,6 +129,17 @@ enum OID {
        OID_TPMImportableKey,           /* 2.23.133.10.1.4 */
        OID_TPMSealedData,              /* 2.23.133.10.1.5 */
 
+       /* CSOR FIPS-202 SHA-3 */
+       OID_sha3_256,                           /* 2.16.840.1.101.3.4.2.8 */
+       OID_sha3_384,                           /* 2.16.840.1.101.3.4.2.9 */
+       OID_sha3_512,                           /* 2.16.840.1.101.3.4.2.10 */
+       OID_id_ecdsa_with_sha3_256,             /* 2.16.840.1.101.3.4.3.10 */
+       OID_id_ecdsa_with_sha3_384,             /* 2.16.840.1.101.3.4.3.11 */
+       OID_id_ecdsa_with_sha3_512,             /* 2.16.840.1.101.3.4.3.12 */
+       OID_id_rsassa_pkcs1_v1_5_with_sha3_256, /* 2.16.840.1.101.3.4.3.14 */
+       OID_id_rsassa_pkcs1_v1_5_with_sha3_384, /* 2.16.840.1.101.3.4.3.15 */
+       OID_id_rsassa_pkcs1_v1_5_with_sha3_512, /* 2.16.840.1.101.3.4.3.16 */
+
        OID__NR
 };
 
index 2793a41e73a2b6c36a14bd9a245e582e5c4ae560..ff1bd6b5f5b372449102ed50fa7edacc47d60c19 100644 (file)
 #define MICROWATT_PER_MILLIWATT        1000UL
 #define MICROWATT_PER_WATT     1000000UL
 
+#define BYTES_PER_KBIT         (KILO / BITS_PER_BYTE)
+#define BYTES_PER_MBIT         (MEGA / BITS_PER_BYTE)
+#define BYTES_PER_GBIT         (GIGA / BITS_PER_BYTE)
+
 #define ABSOLUTE_ZERO_MILLICELSIUS -273150
 
 static inline long milli_kelvin_to_millicelsius(long t)
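
Since KILO is 1000 and BITS_PER_BYTE is 8, these constants reduce link-rate conversions to one multiplication; illustrative arithmetic:

	u64 rate = 100 * BYTES_PER_MBIT;	/* 100 Mbit/s = 100 * 125000 = 12500000 bytes/s */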
index f34e50ebcf60a4377cbcafbc8960d898265b9a9c..cb2d47f2809103d2d4be272885d630517c26f312 100644 (file)
@@ -8,6 +8,7 @@
 #ifndef _LINUX_VERIFICATION_H
 #define _LINUX_VERIFICATION_H
 
+#include <linux/errno.h>
 #include <linux/types.h>
 
 /*
index 74a8609fcb4d38e3ac23a1c389f69d6f903bdb27..0af23ec196d8f709483321b8dd4af873b6147ea8 100644 (file)
@@ -35,6 +35,9 @@ enum hash_algo {
        HASH_ALGO_SM3_256,
        HASH_ALGO_STREEBOG_256,
        HASH_ALGO_STREEBOG_512,
+       HASH_ALGO_SHA3_256,
+       HASH_ALGO_SHA3_384,
+       HASH_ALGO_SHA3_512,
        HASH_ALGO__LAST
 };
 
index 33a2e991f6081471ab51885abcce00076367d34d..0ea1b2970a23b544cd6a91c0182fa63c9c98a02d 100644 (file)
@@ -236,14 +236,6 @@ choice
          possible to load a signed module containing the algorithm to check
          the signature on that module.
 
-config MODULE_SIG_SHA1
-       bool "Sign modules with SHA-1"
-       select CRYPTO_SHA1
-
-config MODULE_SIG_SHA224
-       bool "Sign modules with SHA-224"
-       select CRYPTO_SHA256
-
 config MODULE_SIG_SHA256
        bool "Sign modules with SHA-256"
        select CRYPTO_SHA256
@@ -256,16 +248,29 @@ config MODULE_SIG_SHA512
        bool "Sign modules with SHA-512"
        select CRYPTO_SHA512
 
+config MODULE_SIG_SHA3_256
+       bool "Sign modules with SHA3-256"
+       select CRYPTO_SHA3
+
+config MODULE_SIG_SHA3_384
+       bool "Sign modules with SHA3-384"
+       select CRYPTO_SHA3
+
+config MODULE_SIG_SHA3_512
+       bool "Sign modules with SHA3-512"
+       select CRYPTO_SHA3
+
 endchoice
 
 config MODULE_SIG_HASH
        string
        depends on MODULE_SIG || IMA_APPRAISE_MODSIG
-       default "sha1" if MODULE_SIG_SHA1
-       default "sha224" if MODULE_SIG_SHA224
        default "sha256" if MODULE_SIG_SHA256
        default "sha384" if MODULE_SIG_SHA384
        default "sha512" if MODULE_SIG_SHA512
+       default "sha3-256" if MODULE_SIG_SHA3_256
+       default "sha3-384" if MODULE_SIG_SHA3_384
+       default "sha3-512" if MODULE_SIG_SHA3_512
 
 choice
        prompt "Module compression mode"
index 222d60195de66fec5721dace5d2b304936d2a937..179fb1518070c21f028e201ab32ae3dd53e23357 100644 (file)
@@ -202,7 +202,7 @@ int padata_do_parallel(struct padata_shell *ps,
                *cb_cpu = cpu;
        }
 
-       err =  -EBUSY;
+       err = -EBUSY;
        if ((pinst->flags & PADATA_RESET))
                goto out;
 
@@ -1102,12 +1102,16 @@ EXPORT_SYMBOL(padata_alloc_shell);
  */
 void padata_free_shell(struct padata_shell *ps)
 {
+       struct parallel_data *pd;
+
        if (!ps)
                return;
 
        mutex_lock(&ps->pinst->lock);
        list_del(&ps->list);
-       padata_free_pd(rcu_dereference_protected(ps->pd, 1));
+       pd = rcu_dereference_protected(ps->pd, 1);
+       if (refcount_dec_and_test(&pd->refcnt))
+               padata_free_pd(pd);
        mutex_unlock(&ps->pinst->lock);
 
        kfree(ps);
index f1a9fc0012f0987e7b5ec358594df09644422ecf..5f2f97de295eb65fbdf2d042e35fc25d8a39369f 100644 (file)
 
 #include <linux/debugfs.h>
 #include <linux/scatterlist.h>
-#include <linux/crypto.h>
 #include <crypto/aes.h>
-#include <crypto/algapi.h>
 #include <crypto/hash.h>
 #include <crypto/kpp.h>
+#include <crypto/utils.h>
 
 #include <net/bluetooth/bluetooth.h>
 #include <net/bluetooth/hci_core.h>
index d09a39ff2cf0419db2e66a34b61738a2fd98fc3b..f8ec60e1aba3a112aaa024c235f0117297b9bf70 100644 (file)
@@ -733,8 +733,6 @@ static int setup_crypto(struct ceph_connection *con,
                return ret;
        }
 
-       WARN_ON((unsigned long)session_key &
-               crypto_shash_alignmask(con->v2.hmac_tfm));
        ret = crypto_shash_setkey(con->v2.hmac_tfm, session_key,
                                  session_key_len);
        if (ret) {
@@ -816,8 +814,6 @@ static int hmac_sha256(struct ceph_connection *con, const struct kvec *kvecs,
                goto out;
 
        for (i = 0; i < kvec_cnt; i++) {
-               WARN_ON((unsigned long)kvecs[i].iov_base &
-                       crypto_shash_alignmask(con->v2.hmac_tfm));
                ret = crypto_shash_update(desc, kvecs[i].iov_base,
                                          kvecs[i].iov_len);
                if (ret)
index 015c0f4ec5ba9f8cea4a308286c96c9fe016ea01..a2e6e1fdf82be44c15daefa2a423967ccd8999f7 100644 (file)
@@ -1,8 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0-only
 #define pr_fmt(fmt) "IPsec: " fmt
 
-#include <crypto/algapi.h>
 #include <crypto/hash.h>
+#include <crypto/utils.h>
 #include <linux/err.h>
 #include <linux/module.h>
 #include <linux/slab.h>
@@ -27,9 +27,7 @@ static void *ah_alloc_tmp(struct crypto_ahash *ahash, int nfrags,
 {
        unsigned int len;
 
-       len = size + crypto_ahash_digestsize(ahash) +
-             (crypto_ahash_alignmask(ahash) &
-              ~(crypto_tfm_ctx_alignment() - 1));
+       len = size + crypto_ahash_digestsize(ahash);
 
        len = ALIGN(len, crypto_tfm_ctx_alignment());
 
@@ -46,10 +44,9 @@ static inline u8 *ah_tmp_auth(void *tmp, unsigned int offset)
        return tmp + offset;
 }
 
-static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
-                            unsigned int offset)
+static inline u8 *ah_tmp_icv(void *tmp, unsigned int offset)
 {
-       return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+       return tmp + offset;
 }
 
 static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
@@ -129,7 +126,7 @@ static void ah_output_done(void *data, int err)
        int ihl = ip_hdrlen(skb);
 
        iph = AH_SKB_CB(skb)->tmp;
-       icv = ah_tmp_icv(ahp->ahash, iph, ihl);
+       icv = ah_tmp_icv(iph, ihl);
        memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
 
        top_iph->tos = iph->tos;
@@ -182,7 +179,7 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
        if (!iph)
                goto out;
        seqhi = (__be32 *)((char *)iph + ihl);
-       icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+       icv = ah_tmp_icv(seqhi, seqhi_len);
        req = ah_tmp_req(ahash, icv);
        sg = ah_req_sg(ahash, req);
        seqhisg = sg + nfrags;
@@ -279,7 +276,7 @@ static void ah_input_done(void *data, int err)
 
        work_iph = AH_SKB_CB(skb)->tmp;
        auth_data = ah_tmp_auth(work_iph, ihl);
-       icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+       icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);
 
        err = crypto_memneq(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
        if (err)
@@ -374,7 +371,7 @@ static int ah_input(struct xfrm_state *x, struct sk_buff *skb)
 
        seqhi = (__be32 *)((char *)work_iph + ihl);
        auth_data = ah_tmp_auth(seqhi, seqhi_len);
-       icv = ah_tmp_icv(ahash, auth_data, ahp->icv_trunc_len);
+       icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);
        req = ah_tmp_req(ahash, icv);
        sg = ah_req_sg(ahash, req);
        seqhisg = sg + nfrags;
index 24b73268f362f0c779a9379ac3aafc961b0a2a18..dc2cc579416095b4a1e55c3a81eb2e764e4f2e94 100644 (file)
@@ -1,3 +1,11 @@
+-- SPDX-License-Identifier: BSD-3-Clause
+--
+-- Copyright (C) 1990, 2002 IETF Trust and the persons identified as authors
+-- of the code
+--
+-- https://www.rfc-editor.org/rfc/rfc1157#section-4
+-- https://www.rfc-editor.org/rfc/rfc3416#section-3
+
 Message ::=
        SEQUENCE {
                version
index 01005035ad1018c3c704743bfedc09350b31481f..2016e90e6e1d21a49696c9933f1b77320cc71953 100644 (file)
@@ -13,8 +13,8 @@
 
 #define pr_fmt(fmt) "IPv6: " fmt
 
-#include <crypto/algapi.h>
 #include <crypto/hash.h>
+#include <crypto/utils.h>
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <net/ip.h>
@@ -51,9 +51,7 @@ static void *ah_alloc_tmp(struct crypto_ahash *ahash, int nfrags,
 {
        unsigned int len;
 
-       len = size + crypto_ahash_digestsize(ahash) +
-             (crypto_ahash_alignmask(ahash) &
-              ~(crypto_tfm_ctx_alignment() - 1));
+       len = size + crypto_ahash_digestsize(ahash);
 
        len = ALIGN(len, crypto_tfm_ctx_alignment());
 
@@ -75,10 +73,9 @@ static inline u8 *ah_tmp_auth(u8 *tmp, unsigned int offset)
        return tmp + offset;
 }
 
-static inline u8 *ah_tmp_icv(struct crypto_ahash *ahash, void *tmp,
-                            unsigned int offset)
+static inline u8 *ah_tmp_icv(void *tmp, unsigned int offset)
 {
-       return PTR_ALIGN((u8 *)tmp + offset, crypto_ahash_alignmask(ahash) + 1);
+       return tmp + offset;
 }
 
 static inline struct ahash_request *ah_tmp_req(struct crypto_ahash *ahash,
@@ -299,7 +296,7 @@ static void ah6_output_done(void *data, int err)
 
        iph_base = AH_SKB_CB(skb)->tmp;
        iph_ext = ah_tmp_ext(iph_base);
-       icv = ah_tmp_icv(ahp->ahash, iph_ext, extlen);
+       icv = ah_tmp_icv(iph_ext, extlen);
 
        memcpy(ah->auth_data, icv, ahp->icv_trunc_len);
        memcpy(top_iph, iph_base, IPV6HDR_BASELEN);
@@ -362,7 +359,7 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
 
        iph_ext = ah_tmp_ext(iph_base);
        seqhi = (__be32 *)((char *)iph_ext + extlen);
-       icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+       icv = ah_tmp_icv(seqhi, seqhi_len);
        req = ah_tmp_req(ahash, icv);
        sg = ah_req_sg(ahash, req);
        seqhisg = sg + nfrags;
@@ -468,7 +465,7 @@ static void ah6_input_done(void *data, int err)
 
        work_iph = AH_SKB_CB(skb)->tmp;
        auth_data = ah_tmp_auth(work_iph, hdr_len);
-       icv = ah_tmp_icv(ahp->ahash, auth_data, ahp->icv_trunc_len);
+       icv = ah_tmp_icv(auth_data, ahp->icv_trunc_len);
 
        err = crypto_memneq(icv, auth_data, ahp->icv_trunc_len) ? -EBADMSG : 0;
        if (err)
@@ -576,7 +573,7 @@ static int ah6_input(struct xfrm_state *x, struct sk_buff *skb)
 
        auth_data = ah_tmp_auth((u8 *)work_iph, hdr_len);
        seqhi = (__be32 *)(auth_data + ahp->icv_trunc_len);
-       icv = ah_tmp_icv(ahash, seqhi, seqhi_len);
+       icv = ah_tmp_icv(seqhi, seqhi_len);
        req = ah_tmp_req(ahash, icv);
        sg = ah_req_sg(ahash, req);
        seqhisg = sg + nfrags;
index e120e961645406551be353fd661cdb58dee299b3..a4f3c27f0309f9000435d83b41b199067b31f341 100644 (file)
@@ -9,8 +9,8 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/netdevice.h>
-#include <crypto/algapi.h>
 #include <crypto/sha2.h>
+#include <crypto/utils.h>
 #include <net/sock.h>
 #include <net/inet_common.h>
 #include <net/inet_hashtables.h>
index 9734e1d9f991f85df5157c7580daddf732bfa258..d2b02710ab0709dfc92b4ce8e1bc0d892016594e 100644 (file)
@@ -34,9 +34,9 @@
  * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
  */
 
-#include <crypto/algapi.h>
 #include <crypto/hash.h>
 #include <crypto/skcipher.h>
+#include <crypto/utils.h>
 #include <linux/err.h>
 #include <linux/types.h>
 #include <linux/mm.h>
index 4fbc50a0a2c4bfd36d1068534c68372ee63c9f4d..ef0e6af9fc959c217867f469100d035339ca7342 100644 (file)
  * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
  */
 
-#include <crypto/algapi.h>
 #include <linux/types.h>
 #include <linux/jiffies.h>
 #include <linux/sunrpc/gss_krb5.h>
-#include <linux/crypto.h>
 
 #include "gss_krb5_internal.h"
 
index 3adf31a83a79a27a29d8a99e6724d51de5e2ab25..d7b16f2c23e968bf3980e925e67db78462d826c9 100644 (file)
@@ -15,6 +15,7 @@ config XFRM_ALGO
        tristate
        select XFRM
        select CRYPTO
+       select CRYPTO_AEAD
        select CRYPTO_HASH
        select CRYPTO_SKCIPHER
 
index 094734fbec9675053c45f71eda6162d22b4564d1..41533c631431493882a7fa427d393c4b6a753e74 100644 (file)
@@ -5,6 +5,7 @@
  * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
  */
 
+#include <crypto/aead.h>
 #include <crypto/hash.h>
 #include <crypto/skcipher.h>
 #include <linux/module.h>
@@ -644,38 +645,33 @@ static inline int calg_entries(void)
 }
 
 struct xfrm_algo_list {
+       int (*find)(const char *name, u32 type, u32 mask);
        struct xfrm_algo_desc *algs;
        int entries;
-       u32 type;
-       u32 mask;
 };
 
 static const struct xfrm_algo_list xfrm_aead_list = {
+       .find = crypto_has_aead,
        .algs = aead_list,
        .entries = ARRAY_SIZE(aead_list),
-       .type = CRYPTO_ALG_TYPE_AEAD,
-       .mask = CRYPTO_ALG_TYPE_MASK,
 };
 
 static const struct xfrm_algo_list xfrm_aalg_list = {
+       .find = crypto_has_ahash,
        .algs = aalg_list,
        .entries = ARRAY_SIZE(aalg_list),
-       .type = CRYPTO_ALG_TYPE_HASH,
-       .mask = CRYPTO_ALG_TYPE_HASH_MASK,
 };
 
 static const struct xfrm_algo_list xfrm_ealg_list = {
+       .find = crypto_has_skcipher,
        .algs = ealg_list,
        .entries = ARRAY_SIZE(ealg_list),
-       .type = CRYPTO_ALG_TYPE_SKCIPHER,
-       .mask = CRYPTO_ALG_TYPE_MASK,
 };
 
 static const struct xfrm_algo_list xfrm_calg_list = {
+       .find = crypto_has_comp,
        .algs = calg_list,
        .entries = ARRAY_SIZE(calg_list),
-       .type = CRYPTO_ALG_TYPE_COMPRESS,
-       .mask = CRYPTO_ALG_TYPE_MASK,
 };
 
 static struct xfrm_algo_desc *xfrm_find_algo(
@@ -696,8 +692,7 @@ static struct xfrm_algo_desc *xfrm_find_algo(
                if (!probe)
                        break;
 
-               status = crypto_has_alg(list[i].name, algo_list->type,
-                                       algo_list->mask);
+               status = algo_list->find(list[i].name, 0, 0);
                if (!status)
                        break;
 
index ff9a939dad8e42d83bff26150fe9b283fdf53494..894570fe39bc530262abff3b0989679da7dfeef4 100644 (file)
@@ -14,7 +14,6 @@
 #define pr_fmt(fmt) "EVM: "fmt
 
 #include <linux/init.h>
-#include <linux/crypto.h>
 #include <linux/audit.h>
 #include <linux/xattr.h>
 #include <linux/integrity.h>
@@ -25,7 +24,7 @@
 
 #include <crypto/hash.h>
 #include <crypto/hash_info.h>
-#include <crypto/algapi.h>
+#include <crypto/utils.h>
 #include "evm.h"
 
 int evm_initialized;
index 1e313982af02a56a40732eceb3a2b21314413ef1..8af2136069d239129c2994e5ee0f3e9b696ed7ea 100644 (file)
 #include <linux/scatterlist.h>
 #include <linux/ctype.h>
 #include <crypto/aes.h>
-#include <crypto/algapi.h>
 #include <crypto/hash.h>
 #include <crypto/sha2.h>
 #include <crypto/skcipher.h>
+#include <crypto/utils.h>
 
 #include "encrypted.h"
 #include "ecryptfs_format.h"
index 37e813175642fd137208cd7f77a06691c485c5dd..a807df0f059740350f79926d226604479f95cb11 100644 (file)
@@ -8,6 +8,7 @@
  */
 
 #include <assert.h>
+#include <errno.h>
 #include <string.h>
 #include <sys/ioctl.h>
 
@@ -22,16 +23,14 @@ int get_nonce(int fd, void *nonce_out, void *signature)
        struct dbc_user_nonce tmp = {
                .auth_needed = !!signature,
        };
-       int ret;
 
        assert(nonce_out);
 
        if (signature)
                memcpy(tmp.signature, signature, sizeof(tmp.signature));
 
-       ret = ioctl(fd, DBCIOCNONCE, &tmp);
-       if (ret)
-               return ret;
+       if (ioctl(fd, DBCIOCNONCE, &tmp))
+               return errno;
        memcpy(nonce_out, tmp.nonce, sizeof(tmp.nonce));
 
        return 0;
@@ -47,7 +46,9 @@ int set_uid(int fd, __u8 *uid, __u8 *signature)
        memcpy(tmp.uid, uid, sizeof(tmp.uid));
        memcpy(tmp.signature, signature, sizeof(tmp.signature));
 
-       return ioctl(fd, DBCIOCUID, &tmp);
+       if (ioctl(fd, DBCIOCUID, &tmp))
+               return errno;
+       return 0;
 }
 
 int process_param(int fd, int msg_index, __u8 *signature, int *data)
@@ -63,10 +64,10 @@ int process_param(int fd, int msg_index, __u8 *signature, int *data)
 
        memcpy(tmp.signature, signature, sizeof(tmp.signature));
 
-       ret = ioctl(fd, DBCIOCPARAM, &tmp);
-       if (ret)
-               return ret;
+       if (ioctl(fd, DBCIOCPARAM, &tmp))
+               return errno;
 
        *data = tmp.param;
+       memcpy(signature, tmp.signature, sizeof(tmp.signature));
        return 0;
 }
index 3f6a825ffc9e4e25e67d926f3ff0d4f8c3b8c369..2b91415b19407402e1997230e7d781a31ccdf5ec 100644 (file)
@@ -27,8 +27,7 @@ lib = ctypes.CDLL("./dbc_library.so", mode=ctypes.RTLD_GLOBAL)
 
 
 def handle_error(code):
-    val = code * -1
-    raise OSError(val, os.strerror(val))
+    raise OSError(code, os.strerror(code))
 
 
 def get_nonce(device, signature):
@@ -58,7 +57,8 @@ def process_param(device, message, signature, data=None):
     if type(message) != tuple:
         raise ValueError("Expected message tuple")
     arg = ctypes.c_int(data if data else 0)
-    ret = lib.process_param(device.fileno(), message[0], signature, ctypes.pointer(arg))
+    sig = ctypes.create_string_buffer(signature, len(signature))
+    ret = lib.process_param(device.fileno(), message[0], ctypes.pointer(sig), ctypes.pointer(arg))
     if ret:
         handle_error(ret)
-    return arg, signature
+    return arg.value, sig.value
index 998bb3e3cd040900ef1a691131f3d292290e6e85..79de3638a01abeef61f733d148704684091b4549 100755 (executable)
@@ -4,6 +4,12 @@ import unittest
 import os
 import time
 import glob
+import fcntl
+try:
+    import ioctl_opt as ioctl
+except ImportError:
+    ioctl = None
+    pass
 from dbc import *
 
 # Artificial delay between set commands
@@ -27,8 +33,8 @@ def system_is_secured() -> bool:
 class DynamicBoostControlTest(unittest.TestCase):
     def __init__(self, data) -> None:
         self.d = None
-        self.signature = "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
-        self.uid = "1111111111111111"
+        self.signature = b"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
+        self.uid = b"1111111111111111"
         super().__init__(data)
 
     def setUp(self) -> None:
@@ -64,13 +70,16 @@ class TestInvalidIoctls(DynamicBoostControlTest):
     def setUp(self) -> None:
         if not os.path.exists(DEVICE_NODE):
             self.skipTest("system is unsupported")
+        if not ioctl:
+            self.skipTest("unable to test IOCTLs without ioctl_opt")
+
         return super().setUp()
 
     def test_invalid_nonce_ioctl(self) -> None:
         """tries to call get_nonce ioctl with invalid data structures"""
 
         # 0x1 (get nonce), and invalid data
-        INVALID1 = IOWR(ord("D"), 0x01, invalid_param)
+        INVALID1 = ioctl.IOWR(ord("D"), 0x01, invalid_param)
         with self.assertRaises(OSError) as error:
             fcntl.ioctl(self.d, INVALID1, self.data, True)
         self.assertEqual(error.exception.errno, 22)
@@ -79,7 +88,7 @@ class TestInvalidIoctls(DynamicBoostControlTest):
         """tries to call set_uid ioctl with invalid data structures"""
 
         # 0x2 (set uid), and invalid data
-        INVALID2 = IOW(ord("D"), 0x02, invalid_param)
+        INVALID2 = ioctl.IOW(ord("D"), 0x02, invalid_param)
         with self.assertRaises(OSError) as error:
             fcntl.ioctl(self.d, INVALID2, self.data, True)
         self.assertEqual(error.exception.errno, 22)
@@ -88,7 +97,7 @@ class TestInvalidIoctls(DynamicBoostControlTest):
         """tries to call set_uid ioctl with invalid data structures"""
 
         # 0x2 as RW (set uid), and invalid data
-        INVALID3 = IOWR(ord("D"), 0x02, invalid_param)
+        INVALID3 = ioctl.IOWR(ord("D"), 0x02, invalid_param)
         with self.assertRaises(OSError) as error:
             fcntl.ioctl(self.d, INVALID3, self.data, True)
         self.assertEqual(error.exception.errno, 22)
@@ -96,7 +105,7 @@ class TestInvalidIoctls(DynamicBoostControlTest):
     def test_invalid_param_ioctl(self) -> None:
         """tries to call param ioctl with invalid data structures"""
         # 0x3 (param), and invalid data
-        INVALID4 = IOWR(ord("D"), 0x03, invalid_param)
+        INVALID4 = ioctl.IOWR(ord("D"), 0x03, invalid_param)
         with self.assertRaises(OSError) as error:
             fcntl.ioctl(self.d, INVALID4, self.data, True)
         self.assertEqual(error.exception.errno, 22)
@@ -104,7 +113,7 @@ class TestInvalidIoctls(DynamicBoostControlTest):
     def test_invalid_call_ioctl(self) -> None:
         """tries to call the DBC ioctl with invalid data structures"""
         # 0x4, and invalid data
-        INVALID5 = IOWR(ord("D"), 0x04, invalid_param)
+        INVALID5 = ioctl.IOWR(ord("D"), 0x04, invalid_param)
         with self.assertRaises(OSError) as error:
             fcntl.ioctl(self.d, INVALID5, self.data, True)
         self.assertEqual(error.exception.errno, 22)
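
Note: the bare IOW()/IOWR() names these tests relied on are not defined anywhere in the standard library; taking them from the third-party ioctl_opt package (and skipping the ioctl tests when it is missing) makes the dependency explicit. For reference, a sketch of what ioctl_opt.IOWR computes, assuming the generic Linux request encoding from asm-generic/ioctl.h (architectures may override these widths):

    import ctypes

    # Generic Linux ioctl request layout: number, type, size, direction.
    _IOC_NRBITS, _IOC_TYPEBITS, _IOC_SIZEBITS = 8, 8, 14
    _IOC_NRSHIFT = 0
    _IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS        # 8
    _IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS    # 16
    _IOC_DIRSHIFT = _IOC_SIZESHIFT + _IOC_SIZEBITS     # 30
    _IOC_WRITE, _IOC_READ = 1, 2

    def IOWR(ty, nr, argtype):
        # ty is an integer such as ord("D"); the payload size is taken
        # from the ctypes type describing the argument structure.
        size = ctypes.sizeof(argtype)
        return ((_IOC_READ | _IOC_WRITE) << _IOC_DIRSHIFT) | \
               (ty << _IOC_TYPESHIFT) | (nr << _IOC_NRSHIFT) | \
               (size << _IOC_SIZESHIFT)

    print(hex(IOWR(ord("D"), 0x01, ctypes.c_int)))     # same scheme the tests use
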
@@ -183,12 +192,12 @@ class TestUnFusedSystem(DynamicBoostControlTest):
         # SOC power
         soc_power_max = process_param(self.d, PARAM_GET_SOC_PWR_MAX, self.signature)
         soc_power_min = process_param(self.d, PARAM_GET_SOC_PWR_MIN, self.signature)
-        self.assertGreater(soc_power_max.parameter, soc_power_min.parameter)
+        self.assertGreater(soc_power_max[0], soc_power_min[0])
 
         # fmax
         fmax_max = process_param(self.d, PARAM_GET_FMAX_MAX, self.signature)
         fmax_min = process_param(self.d, PARAM_GET_FMAX_MIN, self.signature)
-        self.assertGreater(fmax_max.parameter, fmax_min.parameter)
+        self.assertGreater(fmax_max[0], fmax_min[0])
 
         # cap values
         keys = {
@@ -199,7 +208,7 @@ class TestUnFusedSystem(DynamicBoostControlTest):
         }
         for k in keys:
             result = process_param(self.d, keys[k], self.signature)
-            self.assertGreater(result.parameter, 0)
+            self.assertGreater(result[0], 0)
 
     def test_get_invalid_param(self) -> None:
         """fetch an invalid parameter"""
@@ -217,17 +226,17 @@ class TestUnFusedSystem(DynamicBoostControlTest):
         original = process_param(self.d, PARAM_GET_FMAX_CAP, self.signature)
 
         # set the fmax
-        target = original.parameter - 100
+        target = original[0] - 100
         process_param(self.d, PARAM_SET_FMAX_CAP, self.signature, target)
         time.sleep(SET_DELAY)
         new = process_param(self.d, PARAM_GET_FMAX_CAP, self.signature)
-        self.assertEqual(new.parameter, target)
+        self.assertEqual(new[0], target)
 
         # revert back to current
-        process_param(self.d, PARAM_SET_FMAX_CAP, self.signature, original.parameter)
+        process_param(self.d, PARAM_SET_FMAX_CAP, self.signature, original[0])
         time.sleep(SET_DELAY)
         cur = process_param(self.d, PARAM_GET_FMAX_CAP, self.signature)
-        self.assertEqual(cur.parameter, original.parameter)
+        self.assertEqual(cur[0], original[0])
 
     def test_set_power_cap(self) -> None:
         """get/set power cap limit"""
@@ -235,17 +244,17 @@ class TestUnFusedSystem(DynamicBoostControlTest):
         original = process_param(self.d, PARAM_GET_PWR_CAP, self.signature)
 
         # set the power cap
-        target = original.parameter - 10
+        target = original[0] - 10
         process_param(self.d, PARAM_SET_PWR_CAP, self.signature, target)
         time.sleep(SET_DELAY)
         new = process_param(self.d, PARAM_GET_PWR_CAP, self.signature)
-        self.assertEqual(new.parameter, target)
+        self.assertEqual(new[0], target)
 
         # revert back to current
-        process_param(self.d, PARAM_SET_PWR_CAP, self.signature, original.parameter)
+        process_param(self.d, PARAM_SET_PWR_CAP, self.signature, original[0])
         time.sleep(SET_DELAY)
         cur = process_param(self.d, PARAM_GET_PWR_CAP, self.signature)
-        self.assertEqual(cur.parameter, original.parameter)
+        self.assertEqual(cur[0], original[0])
 
     def test_set_3d_graphics_mode(self) -> None:
         """set/get 3d graphics mode"""