This update includes the following changes:

Merge tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Avoid unnecessary copying in scomp for trivial SG lists

  Algorithms:
   - Optimise NEON CCM implementation on ARM64

  Drivers:
   - Add queue stop/query debugfs support in hisilicon/qm
   - Intel qat updates and cleanups"

* tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (79 commits)
  Revert "crypto: remove CONFIG_CRYPTO_STATS"
  crypto: scomp - remove memcpy if sg_nents is 1 and pages are lowmem
  crypto: tcrypt - add ffdhe2048(dh) test
  crypto: iaa - fix the missing CRYPTO_ALG_ASYNC in cra_flags
  crypto: hisilicon/zip - fix the missing CRYPTO_ALG_ASYNC in cra_flags
  hwrng: hisi - use dev_err_probe
  MAINTAINERS: Remove T Ambarus from few mchp entries
  crypto: iaa - Fix comp/decomp delay statistics
  crypto: iaa - Fix async_disable descriptor leak
  dt-bindings: rng: atmel,at91-trng: add sam9x7 TRNG
  dt-bindings: crypto: add sam9x7 in Atmel TDES
  dt-bindings: crypto: add sam9x7 in Atmel SHA
  dt-bindings: crypto: add sam9x7 in Atmel AES
  crypto: remove CONFIG_CRYPTO_STATS
  crypto: dh - Make public key test FIPS-only
  crypto: rockchip - fix to check return value
  crypto: jitter - fix CRYPTO_JITTERENTROPY help text
  crypto: qat - make ring to service map common for QAT GEN4
  crypto: qat - fix ring to service map for dcc in 420xx
  crypto: qat - fix ring to service map for dcc in 4xxx
  ...
This commit is contained in: commit c8e7699616
@ -81,3 +81,29 @@ Description: (RO) Read returns, for each Acceleration Engine (AE), the number

		<N>: Number of Compress and Verify (CnV) errors and type
		     of the last CnV error detected by Acceleration
		     Engine N.

What:		/sys/kernel/debug/qat_<device>_<BDF>/heartbeat/inject_error
Date:		March 2024
KernelVersion:	6.8
Contact:	qat-linux@intel.com
Description:	(WO) Write to inject an error that simulates a heartbeat
		failure. This is to be used for testing purposes.

		After writing to this file, the driver stops arbitration on a
		random engine and disables the fetching of heartbeat counters.
		If a workload is running on the device, a job submitted to the
		accelerator might not get a response and a read of the
		`heartbeat/status` attribute might report -1, i.e. device
		unresponsive.
		The error is unrecoverable, thus the device must be restarted to
		restore its functionality.

		This attribute is available only when the kernel is built with
		CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION=y.

		A write of 1 enables error injection.

		The following example shows how to enable error injection::

			# cd /sys/kernel/debug/qat_<device>_<BDF>
			# echo 1 > heartbeat/inject_error
@ -111,6 +111,28 @@ Description: QM debug registers(regs) read hardware register value. This
		node is used to show the change of the qm register values. This
		node can help users to check the change of register values.

What:		/sys/kernel/debug/hisi_hpre/<bdf>/qm/qm_state
Date:		Jan 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Dump the state of the device.
		0: busy, 1: idle.
		Only available for PF, and has no other effect on HPRE.

What:		/sys/kernel/debug/hisi_hpre/<bdf>/qm/dev_timeout
Date:		Feb 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Set the wait time used when stopping a queue fails. Available
		for both PF and VF, and has no other effect on HPRE.
		0: do not wait (default); other values: wait
		dev_timeout * 20 microseconds.

What:		/sys/kernel/debug/hisi_hpre/<bdf>/qm/dev_state
Date:		Feb 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Dump the stop queue status of the QM. The default value is 0.
		If dev_timeout is set and stopping the queue fails, dev_state
		returns a non-zero value. Available for both PF and VF, and has
		no other effect on HPRE.

What:		/sys/kernel/debug/hisi_hpre/<bdf>/hpre_dfx/diff_regs
Date:		Mar 2022
Contact:	linux-crypto@vger.kernel.org
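As an illustration (not part of the ABI file itself; paths and values are taken from the descriptions above, with <bdf> as a placeholder)::

	# cd /sys/kernel/debug/hisi_hpre/<bdf>/qm
	# cat qm_state            # 0: busy, 1: idle (PF only)
	# echo 100 > dev_timeout  # on stop failure, wait 100 * 20 microseconds
	# cat dev_state           # non-zero once stopping a queue has failed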
@ -91,6 +91,28 @@ Description: QM debug registers(regs) read hardware register value. This
		node is used to show the change of the qm register values. This
		node can help users to check the change of register values.

What:		/sys/kernel/debug/hisi_sec2/<bdf>/qm/qm_state
Date:		Jan 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Dump the state of the device.
		0: busy, 1: idle.
		Only available for PF, and has no other effect on SEC.

What:		/sys/kernel/debug/hisi_sec2/<bdf>/qm/dev_timeout
Date:		Feb 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Set the wait time used when stopping a queue fails. Available
		for both PF and VF, and has no other effect on SEC.
		0: do not wait (default); other values: wait
		dev_timeout * 20 microseconds.

What:		/sys/kernel/debug/hisi_sec2/<bdf>/qm/dev_state
Date:		Feb 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Dump the stop queue status of the QM. The default value is 0.
		If dev_timeout is set and stopping the queue fails, dev_state
		returns a non-zero value. Available for both PF and VF, and has
		no other effect on SEC.

What:		/sys/kernel/debug/hisi_sec2/<bdf>/sec_dfx/diff_regs
Date:		Mar 2022
Contact:	linux-crypto@vger.kernel.org
@ -104,6 +104,28 @@ Description: QM debug registers(regs) read hardware register value. This
		node is used to show the change of the qm register values. This
		node can help users to check the change of register values.

What:		/sys/kernel/debug/hisi_zip/<bdf>/qm/qm_state
Date:		Jan 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Dump the state of the device.
		0: busy, 1: idle.
		Only available for PF, and has no other effect on ZIP.

What:		/sys/kernel/debug/hisi_zip/<bdf>/qm/dev_timeout
Date:		Feb 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Set the wait time used when stopping a queue fails. Available
		for both PF and VF, and has no other effect on ZIP.
		0: do not wait (default); other values: wait
		dev_timeout * 20 microseconds.

What:		/sys/kernel/debug/hisi_zip/<bdf>/qm/dev_state
Date:		Feb 2024
Contact:	linux-crypto@vger.kernel.org
Description:	Dump the stop queue status of the QM. The default value is 0.
		If dev_timeout is set and stopping the queue fails, dev_state
		returns a non-zero value. Available for both PF and VF, and has
		no other effect on ZIP.

What:		/sys/kernel/debug/hisi_zip/<bdf>/zip_dfx/diff_regs
Date:		Mar 2022
Contact:	linux-crypto@vger.kernel.org
@ -141,3 +141,23 @@ Description:
		64

		This attribute is only available for qat_4xxx devices.

What:		/sys/bus/pci/devices/<BDF>/qat/auto_reset
Date:		March 2024
KernelVersion:	6.8
Contact:	qat-linux@intel.com
Description:	(RW) Reports the current state of the auto reset feature
		for a QAT device.

		Write to the attribute to enable or disable device auto reset.

		Device auto reset is disabled by default.

		The values are:

		* 1/Yy/on: auto reset enabled. If the device encounters an
		  unrecoverable error, it will be reset automatically.
		* 0/Nn/off: auto reset disabled. If the device encounters an
		  unrecoverable error, it will not be reset.

		This attribute is only available for qat_4xxx devices.
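For illustration (not part of the ABI file; values as documented above, <BDF> is a placeholder)::

	# echo 1 > /sys/bus/pci/devices/<BDF>/qat/auto_reset
	# cat /sys/bus/pci/devices/<BDF>/qat/auto_reset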
@ -12,7 +12,11 @@ maintainers:

properties:
  compatible:
    const: atmel,at91sam9g46-aes
    oneOf:
      - const: atmel,at91sam9g46-aes
      - items:
          - const: microchip,sam9x7-aes
          - const: atmel,at91sam9g46-aes

  reg:
    maxItems: 1
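In a board devicetree this means a sam9x7 instance carries both strings in the order given by the items list, e.g. compatible = "microchip,sam9x7-aes", "atmel,at91sam9g46-aes"; (an illustrative sketch; the SHA and TDES bindings below follow the same pattern).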
@ -12,7 +12,11 @@ maintainers:

properties:
  compatible:
    const: atmel,at91sam9g46-sha
    oneOf:
      - const: atmel,at91sam9g46-sha
      - items:
          - const: microchip,sam9x7-sha
          - const: atmel,at91sam9g46-sha

  reg:
    maxItems: 1
@ -12,7 +12,11 @@ maintainers:

properties:
  compatible:
    const: atmel,at91sam9g46-tdes
    oneOf:
      - const: atmel,at91sam9g46-tdes
      - items:
          - const: microchip,sam9x7-tdes
          - const: atmel,at91sam9g46-tdes

  reg:
    maxItems: 1
@ -14,6 +14,7 @@ properties:
  items:
    - enum:
        - qcom,sa8775p-inline-crypto-engine
        - qcom,sc7180-inline-crypto-engine
        - qcom,sm8450-inline-crypto-engine
        - qcom,sm8550-inline-crypto-engine
        - qcom,sm8650-inline-crypto-engine
@ -45,6 +45,7 @@ properties:
  - items:
      - enum:
          - qcom,sc7280-qce
          - qcom,sm6350-qce
          - qcom,sm8250-qce
          - qcom,sm8350-qce
          - qcom,sm8450-qce
@ -21,6 +21,10 @@ properties:
      - enum:
          - microchip,sama7g5-trng
      - const: atmel,at91sam9g45-trng
      - items:
          - enum:
              - microchip,sam9x7-trng
          - const: microchip,sam9x60-trng

  clocks:
    maxItems: 1
MAINTAINERS
@ -10377,12 +10377,17 @@ M: Nayna Jain <nayna@linux.ibm.com>
M:	Paulo Flabiano Smorigo <pfsmorigo@gmail.com>
L:	linux-crypto@vger.kernel.org
S:	Supported
F:	drivers/crypto/vmx/Kconfig
F:	drivers/crypto/vmx/Makefile
F:	drivers/crypto/vmx/aes*
F:	drivers/crypto/vmx/ghash*
F:	drivers/crypto/vmx/ppc-xlate.pl
F:	drivers/crypto/vmx/vmx.c
F:	arch/powerpc/crypto/Kconfig
F:	arch/powerpc/crypto/Makefile
F:	arch/powerpc/crypto/aes.c
F:	arch/powerpc/crypto/aes_cbc.c
F:	arch/powerpc/crypto/aes_ctr.c
F:	arch/powerpc/crypto/aes_xts.c
F:	arch/powerpc/crypto/aesp8-ppc.*
F:	arch/powerpc/crypto/ghash.c
F:	arch/powerpc/crypto/ghashp8-ppc.pl
F:	arch/powerpc/crypto/ppc-xlate.pl
F:	arch/powerpc/crypto/vmx.c

IBM ServeRAID RAID DRIVER
S:	Orphan

@ -12456,7 +12461,6 @@ F: drivers/*/*/*pasemi*
F:	drivers/*/*pasemi*
F:	drivers/char/tpm/tpm_ibmvtpm*
F:	drivers/crypto/nx/
F:	drivers/crypto/vmx/
F:	drivers/i2c/busses/i2c-opal.c
F:	drivers/net/ethernet/ibm/ibmveth.*
F:	drivers/net/ethernet/ibm/ibmvnic.*

@ -14304,7 +14308,6 @@ F: drivers/misc/xilinx_tmr_manager.c

MICROCHIP AT91 DMA DRIVERS
M:	Ludovic Desroches <ludovic.desroches@microchip.com>
M:	Tudor Ambarus <tudor.ambarus@linaro.org>
L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L:	dmaengine@vger.kernel.org
S:	Supported

@ -14353,9 +14356,8 @@ F: Documentation/devicetree/bindings/media/microchip,csi2dc.yaml
F:	drivers/media/platform/microchip/microchip-csi2dc.c

MICROCHIP ECC DRIVER
M:	Tudor Ambarus <tudor.ambarus@linaro.org>
L:	linux-crypto@vger.kernel.org
S:	Maintained
S:	Orphan
F:	drivers/crypto/atmel-ecc.*

MICROCHIP EIC DRIVER

@ -14460,9 +14462,8 @@ S: Maintained
F:	drivers/mmc/host/atmel-mci.c

MICROCHIP NAND DRIVER
M:	Tudor Ambarus <tudor.ambarus@linaro.org>
L:	linux-mtd@lists.infradead.org
S:	Supported
S:	Orphan
F:	Documentation/devicetree/bindings/mtd/atmel-nand.txt
F:	drivers/mtd/nand/raw/atmel/*
@ -24,8 +24,8 @@

#include "sha256_glue.h"

asmlinkage void sha256_block_data_order(u32 *digest, const void *data,
					unsigned int num_blks);
asmlinkage void sha256_block_data_order(struct sha256_state *state,
					const u8 *data, int num_blks);

int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
			     unsigned int len)

@ -33,23 +33,20 @@ int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *data,
	/* make sure casting to sha256_block_fn() is safe */
	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);

	return sha256_base_do_update(desc, data, len,
				(sha256_block_fn *)sha256_block_data_order);
	return sha256_base_do_update(desc, data, len, sha256_block_data_order);
}
EXPORT_SYMBOL(crypto_sha256_arm_update);

static int crypto_sha256_arm_final(struct shash_desc *desc, u8 *out)
{
	sha256_base_do_finalize(desc,
				(sha256_block_fn *)sha256_block_data_order);
	sha256_base_do_finalize(desc, sha256_block_data_order);
	return sha256_base_finish(desc, out);
}

int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data,
			    unsigned int len, u8 *out)
{
	sha256_base_do_update(desc, data, len,
			      (sha256_block_fn *)sha256_block_data_order);
	sha256_base_do_update(desc, data, len, sha256_block_data_order);
	return crypto_sha256_arm_final(desc, out);
}
EXPORT_SYMBOL(crypto_sha256_arm_finup);
@ -25,27 +25,25 @@ MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha384-arm");
MODULE_ALIAS_CRYPTO("sha512-arm");

asmlinkage void sha512_block_data_order(u64 *state, u8 const *src, int blocks);
asmlinkage void sha512_block_data_order(struct sha512_state *state,
					u8 const *src, int blocks);

int sha512_arm_update(struct shash_desc *desc, const u8 *data,
		      unsigned int len)
{
	return sha512_base_do_update(desc, data, len,
				(sha512_block_fn *)sha512_block_data_order);
	return sha512_base_do_update(desc, data, len, sha512_block_data_order);
}

static int sha512_arm_final(struct shash_desc *desc, u8 *out)
{
	sha512_base_do_finalize(desc,
				(sha512_block_fn *)sha512_block_data_order);
	sha512_base_do_finalize(desc, sha512_block_data_order);
	return sha512_base_finish(desc, out);
}

int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
		     unsigned int len, u8 *out)
{
	sha512_base_do_update(desc, data, len,
			      (sha512_block_fn *)sha512_block_data_order);
	sha512_base_do_update(desc, data, len, sha512_block_data_order);
	return sha512_arm_final(desc, out);
}
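Both glue hunks above follow the same pattern: the asmlinkage prototype is changed to take a struct sha256_state / sha512_state pointer so that it matches the sha256_block_fn / sha512_block_fn type exactly, letting the (sha*_block_fn *) casts at the call sites be dropped. The BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0) that remains documents why the state struct is layout-compatible with the digest pointer the assembly actually dereferences; avoiding indirect calls through mismatched function-pointer types is presumably the motivation, since such calls trip kernel CFI.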
@ -268,6 +268,7 @@ config CRYPTO_AES_ARM64_CE_CCM
	depends on ARM64 && KERNEL_MODE_NEON
	select CRYPTO_ALGAPI
	select CRYPTO_AES_ARM64_CE
	select CRYPTO_AES_ARM64_CE_BLK
	select CRYPTO_AEAD
	select CRYPTO_LIB_AES
	help
@ -1,8 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * aesce-ccm-core.S - AES-CCM transform for ARMv8 with Crypto Extensions
 * aes-ce-ccm-core.S - AES-CCM transform for ARMv8 with Crypto Extensions
 *
 * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
 * Copyright (C) 2013 - 2017 Linaro Ltd.
 * Copyright (C) 2024 Google LLC
 *
 * Author: Ard Biesheuvel <ardb@kernel.org>
 */

#include <linux/linkage.h>
@ -11,211 +14,129 @@
	.text
	.arch	armv8-a+crypto

	/*
	 * u32 ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes,
	 *			    u32 macp, u8 const rk[], u32 rounds);
	 */
SYM_FUNC_START(ce_aes_ccm_auth_data)
	ld1	{v0.16b}, [x0]			/* load mac */
	cbz	w3, 1f
	sub	w3, w3, #16
	eor	v1.16b, v1.16b, v1.16b
0:	ldrb	w7, [x1], #1			/* get 1 byte of input */
	subs	w2, w2, #1
	add	w3, w3, #1
	ins	v1.b[0], w7
	ext	v1.16b, v1.16b, v1.16b, #1	/* rotate in the input bytes */
	beq	8f				/* out of input? */
	cbnz	w3, 0b
	eor	v0.16b, v0.16b, v1.16b
1:	ld1	{v3.4s}, [x4]			/* load first round key */
	prfm	pldl1strm, [x1]
	cmp	w5, #12				/* which key size? */
	add	x6, x4, #16
	sub	w7, w5, #2			/* modified # of rounds */
	bmi	2f
	bne	5f
	mov	v5.16b, v3.16b
	b	4f
2:	mov	v4.16b, v3.16b
	ld1	{v5.4s}, [x6], #16		/* load 2nd round key */
3:	aese	v0.16b, v4.16b
	aesmc	v0.16b, v0.16b
4:	ld1	{v3.4s}, [x6], #16		/* load next round key */
	aese	v0.16b, v5.16b
	aesmc	v0.16b, v0.16b
5:	ld1	{v4.4s}, [x6], #16		/* load next round key */
	subs	w7, w7, #3
	aese	v0.16b, v3.16b
	aesmc	v0.16b, v0.16b
	ld1	{v5.4s}, [x6], #16		/* load next round key */
	bpl	3b
	aese	v0.16b, v4.16b
	subs	w2, w2, #16			/* last data? */
	eor	v0.16b, v0.16b, v5.16b		/* final round */
	bmi	6f
	ld1	{v1.16b}, [x1], #16		/* load next input block */
	eor	v0.16b, v0.16b, v1.16b		/* xor with mac */
	bne	1b
6:	st1	{v0.16b}, [x0]			/* store mac */
	beq	10f
	adds	w2, w2, #16
	beq	10f
	mov	w3, w2
7:	ldrb	w7, [x1], #1
	umov	w6, v0.b[0]
	eor	w6, w6, w7
	strb	w6, [x0], #1
	subs	w2, w2, #1
	beq	10f
	ext	v0.16b, v0.16b, v0.16b, #1	/* rotate out the mac bytes */
	b	7b
8:	cbz	w3, 91f
	mov	w7, w3
	add	w3, w3, #16
9:	ext	v1.16b, v1.16b, v1.16b, #1
	adds	w7, w7, #1
	bne	9b
91:	eor	v0.16b, v0.16b, v1.16b
	st1	{v0.16b}, [x0]
10:	mov	w0, w3
	ret
SYM_FUNC_END(ce_aes_ccm_auth_data)
	.macro	load_round_keys, rk, nr, tmp
	sub	w\tmp, \nr, #10
	add	\tmp, \rk, w\tmp, sxtw #4
	ld1	{v10.4s-v13.4s}, [\rk]
	ld1	{v14.4s-v17.4s}, [\tmp], #64
	ld1	{v18.4s-v21.4s}, [\tmp], #64
	ld1	{v3.4s-v5.4s}, [\tmp]
	.endm

	/*
	 * void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u8 const rk[],
	 *			 u32 rounds);
	 */
SYM_FUNC_START(ce_aes_ccm_final)
	ld1	{v3.4s}, [x2], #16		/* load first round key */
	ld1	{v0.16b}, [x0]			/* load mac */
	cmp	w3, #12				/* which key size? */
	sub	w3, w3, #2			/* modified # of rounds */
	ld1	{v1.16b}, [x1]			/* load 1st ctriv */
	bmi	0f
	bne	3f
	mov	v5.16b, v3.16b
	b	2f
0:	mov	v4.16b, v3.16b
1:	ld1	{v5.4s}, [x2], #16		/* load next round key */
	aese	v0.16b, v4.16b
	aesmc	v0.16b, v0.16b
	aese	v1.16b, v4.16b
	aesmc	v1.16b, v1.16b
2:	ld1	{v3.4s}, [x2], #16		/* load next round key */
	aese	v0.16b, v5.16b
	aesmc	v0.16b, v0.16b
	aese	v1.16b, v5.16b
	aesmc	v1.16b, v1.16b
3:	ld1	{v4.4s}, [x2], #16		/* load next round key */
	subs	w3, w3, #3
	aese	v0.16b, v3.16b
	aesmc	v0.16b, v0.16b
	aese	v1.16b, v3.16b
	aesmc	v1.16b, v1.16b
	bpl	1b
	aese	v0.16b, v4.16b
	aese	v1.16b, v4.16b
	/* final round key cancels out */
	eor	v0.16b, v0.16b, v1.16b		/* en-/decrypt the mac */
	st1	{v0.16b}, [x0]			/* store result */
	ret
SYM_FUNC_END(ce_aes_ccm_final)
	.macro	dround, va, vb, vk
	aese	\va\().16b, \vk\().16b
	aesmc	\va\().16b, \va\().16b
	aese	\vb\().16b, \vk\().16b
	aesmc	\vb\().16b, \vb\().16b
	.endm

	.macro	aes_encrypt, va, vb, nr
	tbz	\nr, #2, .L\@
	dround	\va, \vb, v10
	dround	\va, \vb, v11
	tbz	\nr, #1, .L\@
	dround	\va, \vb, v12
	dround	\va, \vb, v13
.L\@:	.irp	v, v14, v15, v16, v17, v18, v19, v20, v21, v3
	dround	\va, \vb, \v
	.endr
	aese	\va\().16b, v4.16b
	aese	\vb\().16b, v4.16b
	.endm

	.macro	aes_ccm_do_crypt,enc
	cbz	x2, 5f
	ldr	x8, [x6, #8]			/* load lower ctr */
	load_round_keys	x3, w4, x10

	ld1	{v0.16b}, [x5]			/* load mac */
	cbz	x2, ce_aes_ccm_final
	ldr	x8, [x6, #8]			/* load lower ctr */
CPU_LE(	rev	x8, x8			)	/* keep swabbed ctr in reg */
0:	/* outer loop */
	ld1	{v1.8b}, [x6]			/* load upper ctr */
	prfm	pldl1strm, [x1]
	add	x8, x8, #1
	rev	x9, x8
	cmp	w4, #12				/* which key size? */
	sub	w7, w4, #2			/* get modified # of rounds */
	ins	v1.d[1], x9			/* no carry in lower ctr */
	ld1	{v3.4s}, [x3]			/* load first round key */
	add	x10, x3, #16
	bmi	1f
	bne	4f
	mov	v5.16b, v3.16b
	b	3f
1:	mov	v4.16b, v3.16b
	ld1	{v5.4s}, [x10], #16		/* load 2nd round key */
2:	/* inner loop: 3 rounds, 2x interleaved */
	aese	v0.16b, v4.16b
	aesmc	v0.16b, v0.16b
	aese	v1.16b, v4.16b
	aesmc	v1.16b, v1.16b
3:	ld1	{v3.4s}, [x10], #16		/* load next round key */
	aese	v0.16b, v5.16b
	aesmc	v0.16b, v0.16b
	aese	v1.16b, v5.16b
	aesmc	v1.16b, v1.16b
4:	ld1	{v4.4s}, [x10], #16		/* load next round key */
	subs	w7, w7, #3
	aese	v0.16b, v3.16b
	aesmc	v0.16b, v0.16b
	aese	v1.16b, v3.16b
	aesmc	v1.16b, v1.16b
	ld1	{v5.4s}, [x10], #16		/* load next round key */
	bpl	2b
	aese	v0.16b, v4.16b
	aese	v1.16b, v4.16b

	aes_encrypt	v0, v1, w4

	subs	w2, w2, #16
	bmi	6f				/* partial block? */
	bmi	ce_aes_ccm_crypt_tail
	ld1	{v2.16b}, [x1], #16		/* load next input block */
	.if	\enc == 1
	eor	v2.16b, v2.16b, v5.16b		/* final round enc+mac */
	eor	v1.16b, v1.16b, v2.16b		/* xor with crypted ctr */
	eor	v6.16b, v1.16b, v2.16b		/* xor with crypted ctr */
	.else
	eor	v2.16b, v2.16b, v1.16b		/* xor with crypted ctr */
	eor	v1.16b, v2.16b, v5.16b		/* final round enc */
	eor	v6.16b, v2.16b, v5.16b		/* final round enc */
	.endif
	eor	v0.16b, v0.16b, v2.16b		/* xor mac with pt ^ rk[last] */
	st1	{v1.16b}, [x0], #16		/* write output block */
	st1	{v6.16b}, [x0], #16		/* write output block */
	bne	0b
CPU_LE(	rev	x8, x8			)
	st1	{v0.16b}, [x5]			/* store mac */
	str	x8, [x6, #8]			/* store lsb end of ctr (BE) */
5:	ret

6:	eor	v0.16b, v0.16b, v5.16b		/* final round mac */
	eor	v1.16b, v1.16b, v5.16b		/* final round enc */
	cbnz	x7, ce_aes_ccm_final
	st1	{v0.16b}, [x5]			/* store mac */
	add	w2, w2, #16			/* process partial tail block */
7:	ldrb	w9, [x1], #1			/* get 1 byte of input */
	umov	w6, v1.b[0]			/* get top crypted ctr byte */
	umov	w7, v0.b[0]			/* get top mac byte */
	.if	\enc == 1
	eor	w7, w7, w9
	eor	w9, w9, w6
	.else
	eor	w9, w9, w6
	eor	w7, w7, w9
	.endif
	strb	w9, [x0], #1			/* store out byte */
	strb	w7, [x5], #1			/* store mac byte */
	subs	w2, w2, #1
	beq	5b
	ext	v0.16b, v0.16b, v0.16b, #1	/* shift out mac byte */
	ext	v1.16b, v1.16b, v1.16b, #1	/* shift out ctr byte */
	b	7b
	ret
	.endm

SYM_FUNC_START_LOCAL(ce_aes_ccm_crypt_tail)
	eor	v0.16b, v0.16b, v5.16b		/* final round mac */
	eor	v1.16b, v1.16b, v5.16b		/* final round enc */

	add	x1, x1, w2, sxtw		/* rewind the input pointer (w2 < 0) */
	add	x0, x0, w2, sxtw		/* rewind the output pointer */

	adr_l	x8, .Lpermute			/* load permute vectors */
	add	x9, x8, w2, sxtw
	sub	x8, x8, w2, sxtw
	ld1	{v7.16b-v8.16b}, [x9]
	ld1	{v9.16b}, [x8]

	ld1	{v2.16b}, [x1]			/* load a full block of input */
	tbl	v1.16b, {v1.16b}, v7.16b	/* move keystream to end of register */
	eor	v7.16b, v2.16b, v1.16b		/* encrypt partial input block */
	bif	v2.16b, v7.16b, v22.16b		/* select plaintext */
	tbx	v7.16b, {v6.16b}, v8.16b	/* insert output from previous iteration */
	tbl	v2.16b, {v2.16b}, v9.16b	/* copy plaintext to start of v2 */
	eor	v0.16b, v0.16b, v2.16b		/* fold plaintext into mac */

	st1	{v7.16b}, [x0]			/* store output block */
	cbz	x7, 0f

SYM_INNER_LABEL(ce_aes_ccm_final, SYM_L_LOCAL)
	ld1	{v1.16b}, [x7]			/* load 1st ctriv */

	aes_encrypt	v0, v1, w4

	/* final round key cancels out */
	eor	v0.16b, v0.16b, v1.16b		/* en-/decrypt the mac */
0:	st1	{v0.16b}, [x5]			/* store result */
	ret
SYM_FUNC_END(ce_aes_ccm_crypt_tail)

	/*
	 * void ce_aes_ccm_encrypt(u8 out[], u8 const in[], u32 cbytes,
	 *			   u8 const rk[], u32 rounds, u8 mac[],
	 *			   u8 ctr[]);
	 *			   u8 ctr[], u8 const final_iv[]);
	 * void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes,
	 *			   u8 const rk[], u32 rounds, u8 mac[],
	 *			   u8 ctr[]);
	 *			   u8 ctr[], u8 const final_iv[]);
	 */
SYM_FUNC_START(ce_aes_ccm_encrypt)
	movi	v22.16b, #255
	aes_ccm_do_crypt	1
SYM_FUNC_END(ce_aes_ccm_encrypt)

SYM_FUNC_START(ce_aes_ccm_decrypt)
	movi	v22.16b, #0
	aes_ccm_do_crypt	0
SYM_FUNC_END(ce_aes_ccm_decrypt)

	.section ".rodata", "a"
	.align	6
	.fill	15, 1, 0xff
.Lpermute:
	.byte	0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7
	.byte	0x8, 0x9, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf
	.fill	15, 1, 0xff
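In outline, the rewrite above replaces the per-block key-schedule walk with load_round_keys, which pulls the expanded key into fixed NEON registers (v10-v21 and v3-v5) once per call; the aes_encrypt macro then selects 10, 12 or 14 rounds with two tbz bit tests on the round count instead of the old compare-and-branch sequence, and the byte-at-a-time partial-block loop is replaced by ce_aes_ccm_crypt_tail, which uses the .Lpermute table with tbl/tbx to process the tail as a single full-width vector operation. The final MAC encryption becomes the local ce_aes_ccm_final label, entered with the original IV pointer passed in x7 as the new final_iv argument.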
@ -1,8 +1,11 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
 * aes-ccm-glue.c - AES-CCM transform for ARMv8 with Crypto Extensions
 * aes-ce-ccm-glue.c - AES-CCM transform for ARMv8 with Crypto Extensions
 *
 * Copyright (C) 2013 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
 * Copyright (C) 2013 - 2017 Linaro Ltd.
 * Copyright (C) 2024 Google LLC
 *
 * Author: Ard Biesheuvel <ardb@kernel.org>
 */

#include <asm/neon.h>
@ -15,6 +18,8 @@

#include "aes-ce-setkey.h"

MODULE_IMPORT_NS(CRYPTO_INTERNAL);

static int num_rounds(struct crypto_aes_ctx *ctx)
{
	/*
@ -27,19 +32,17 @@ static int num_rounds(struct crypto_aes_ctx *ctx)
	return 6 + ctx->key_length / 4;
}

asmlinkage u32 ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes,
				    u32 macp, u32 const rk[], u32 rounds);
asmlinkage u32 ce_aes_mac_update(u8 const in[], u32 const rk[], int rounds,
				 int blocks, u8 dg[], int enc_before,
				 int enc_after);

asmlinkage void ce_aes_ccm_encrypt(u8 out[], u8 const in[], u32 cbytes,
				   u32 const rk[], u32 rounds, u8 mac[],
				   u8 ctr[]);
				   u8 ctr[], u8 const final_iv[]);

asmlinkage void ce_aes_ccm_decrypt(u8 out[], u8 const in[], u32 cbytes,
				   u32 const rk[], u32 rounds, u8 mac[],
				   u8 ctr[]);

asmlinkage void ce_aes_ccm_final(u8 mac[], u8 const ctr[], u32 const rk[],
				 u32 rounds);
				   u8 ctr[], u8 const final_iv[]);

static int ccm_setkey(struct crypto_aead *tfm, const u8 *in_key,
		      unsigned int key_len)
@ -94,6 +97,41 @@ static int ccm_init_mac(struct aead_request *req, u8 maciv[], u32 msglen)
	return 0;
}

static u32 ce_aes_ccm_auth_data(u8 mac[], u8 const in[], u32 abytes,
				u32 macp, u32 const rk[], u32 rounds)
{
	int enc_after = (macp + abytes) % AES_BLOCK_SIZE;

	do {
		u32 blocks = abytes / AES_BLOCK_SIZE;

		if (macp == AES_BLOCK_SIZE || (!macp && blocks > 0)) {
			u32 rem = ce_aes_mac_update(in, rk, rounds, blocks, mac,
						    macp, enc_after);
			u32 adv = (blocks - rem) * AES_BLOCK_SIZE;

			macp = enc_after ? 0 : AES_BLOCK_SIZE;
			in += adv;
			abytes -= adv;

			if (unlikely(rem)) {
				kernel_neon_end();
				kernel_neon_begin();
				macp = 0;
			}
		} else {
			u32 l = min(AES_BLOCK_SIZE - macp, abytes);

			crypto_xor(&mac[macp], in, l);
			in += l;
			macp += l;
			abytes -= l;
		}
	} while (abytes > 0);

	return macp;
}

static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
{
	struct crypto_aead *aead = crypto_aead_reqtfm(req);
@ -101,7 +139,7 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
	struct __packed { __be16 l; __be32 h; u16 len; } ltag;
	struct scatter_walk walk;
	u32 len = req->assoclen;
	u32 macp = 0;
	u32 macp = AES_BLOCK_SIZE;

	/* prepend the AAD with a length tag */
	if (len < 0xff00) {
@ -125,16 +163,11 @@ static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
			scatterwalk_start(&walk, sg_next(walk.sg));
			n = scatterwalk_clamp(&walk, len);
		}
		n = min_t(u32, n, SZ_4K); /* yield NEON at least every 4k */
		p = scatterwalk_map(&walk);

		macp = ce_aes_ccm_auth_data(mac, p, n, macp, ctx->key_enc,
					    num_rounds(ctx));

		if (len / SZ_4K > (len - n) / SZ_4K) {
			kernel_neon_end();
			kernel_neon_begin();
		}
		len -= n;

		scatterwalk_unmap(p);
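The SZ_4K clamp and the kernel_neon_end()/kernel_neon_begin() pair can leave this walk because the NEON-yield policy now lives in the ce_aes_ccm_auth_data() wrapper added above: whenever the assembly routine returns early with blocks still outstanding (rem != 0), the wrapper briefly closes and reopens the NEON section before continuing.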
@ -149,7 +182,7 @@ static int ccm_encrypt(struct aead_request *req)
	struct crypto_aes_ctx *ctx = crypto_aead_ctx(aead);
	struct skcipher_walk walk;
	u8 __aligned(8) mac[AES_BLOCK_SIZE];
	u8 buf[AES_BLOCK_SIZE];
	u8 orig_iv[AES_BLOCK_SIZE];
	u32 len = req->cryptlen;
	int err;
@ -158,42 +191,55 @@ static int ccm_encrypt(struct aead_request *req)
		return err;

	/* preserve the original iv for the final round */
	memcpy(buf, req->iv, AES_BLOCK_SIZE);
	memcpy(orig_iv, req->iv, AES_BLOCK_SIZE);

	err = skcipher_walk_aead_encrypt(&walk, req, false);
	if (unlikely(err))
		return err;

	kernel_neon_begin();

	if (req->assoclen)
		ccm_calculate_auth_mac(req, mac);

	while (walk.nbytes) {
	do {
		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
		bool final = walk.nbytes == walk.total;
		const u8 *src = walk.src.virt.addr;
		u8 *dst = walk.dst.virt.addr;
		u8 buf[AES_BLOCK_SIZE];
		u8 *final_iv = NULL;

		if (final)
		if (walk.nbytes == walk.total) {
			tail = 0;
			final_iv = orig_iv;
		}

		ce_aes_ccm_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
				   walk.nbytes - tail, ctx->key_enc,
				   num_rounds(ctx), mac, walk.iv);
		if (unlikely(walk.nbytes < AES_BLOCK_SIZE))
			src = dst = memcpy(&buf[sizeof(buf) - walk.nbytes],
					   src, walk.nbytes);

		if (!final)
			kernel_neon_end();
		err = skcipher_walk_done(&walk, tail);
		if (!final)
			kernel_neon_begin();
	}
		ce_aes_ccm_encrypt(dst, src, walk.nbytes - tail,
				   ctx->key_enc, num_rounds(ctx),
				   mac, walk.iv, final_iv);

	ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
		if (unlikely(walk.nbytes < AES_BLOCK_SIZE))
			memcpy(walk.dst.virt.addr, dst, walk.nbytes);

		if (walk.nbytes) {
			err = skcipher_walk_done(&walk, tail);
		}
	} while (walk.nbytes);

	kernel_neon_end();

	if (unlikely(err))
		return err;

	/* copy authtag to end of dst */
	scatterwalk_map_and_copy(mac, req->dst, req->assoclen + req->cryptlen,
				 crypto_aead_authsize(aead), 1);

	return err;
	return 0;
}

static int ccm_decrypt(struct aead_request *req)
@ -203,7 +249,7 @@ static int ccm_decrypt(struct aead_request *req)
	unsigned int authsize = crypto_aead_authsize(aead);
	struct skcipher_walk walk;
	u8 __aligned(8) mac[AES_BLOCK_SIZE];
	u8 buf[AES_BLOCK_SIZE];
	u8 orig_iv[AES_BLOCK_SIZE];
	u32 len = req->cryptlen - authsize;
	int err;
@ -212,34 +258,44 @@ static int ccm_decrypt(struct aead_request *req)
		return err;

	/* preserve the original iv for the final round */
	memcpy(buf, req->iv, AES_BLOCK_SIZE);
	memcpy(orig_iv, req->iv, AES_BLOCK_SIZE);

	err = skcipher_walk_aead_decrypt(&walk, req, false);
	if (unlikely(err))
		return err;

	kernel_neon_begin();

	if (req->assoclen)
		ccm_calculate_auth_mac(req, mac);

	while (walk.nbytes) {
	do {
		u32 tail = walk.nbytes % AES_BLOCK_SIZE;
		bool final = walk.nbytes == walk.total;
		const u8 *src = walk.src.virt.addr;
		u8 *dst = walk.dst.virt.addr;
		u8 buf[AES_BLOCK_SIZE];
		u8 *final_iv = NULL;

		if (final)
		if (walk.nbytes == walk.total) {
			tail = 0;
			final_iv = orig_iv;
		}

		ce_aes_ccm_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
				   walk.nbytes - tail, ctx->key_enc,
				   num_rounds(ctx), mac, walk.iv);
		if (unlikely(walk.nbytes < AES_BLOCK_SIZE))
			src = dst = memcpy(&buf[sizeof(buf) - walk.nbytes],
					   src, walk.nbytes);

		if (!final)
			kernel_neon_end();
		err = skcipher_walk_done(&walk, tail);
		if (!final)
			kernel_neon_begin();
	}
		ce_aes_ccm_decrypt(dst, src, walk.nbytes - tail,
				   ctx->key_enc, num_rounds(ctx),
				   mac, walk.iv, final_iv);

	ce_aes_ccm_final(mac, buf, ctx->key_enc, num_rounds(ctx));
		if (unlikely(walk.nbytes < AES_BLOCK_SIZE))
			memcpy(walk.dst.virt.addr, dst, walk.nbytes);

		if (walk.nbytes) {
			err = skcipher_walk_done(&walk, tail);
		}
	} while (walk.nbytes);

	kernel_neon_end();
@ -247,11 +303,11 @@ static int ccm_decrypt(struct aead_request *req)
		return err;

	/* compare calculated auth tag with the stored one */
	scatterwalk_map_and_copy(buf, req->src,
	scatterwalk_map_and_copy(orig_iv, req->src,
				 req->assoclen + req->cryptlen - authsize,
				 authsize, 0);

	if (crypto_memneq(mac, buf, authsize))
	if (crypto_memneq(mac, orig_iv, authsize))
		return -EBADMSG;
	return 0;
}
@ -290,6 +346,6 @@ module_init(aes_mod_init);
module_exit(aes_mod_exit);

MODULE_DESCRIPTION("Synchronous AES in CCM mode using ARMv8 Crypto Extensions");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_AUTHOR("Ard Biesheuvel <ardb@kernel.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("ccm(aes)");
@ -1048,6 +1048,7 @@ unregister_ciphers:

#ifdef USE_V8_CRYPTO_EXTENSIONS
module_cpu_feature_match(AES, aes_init);
EXPORT_SYMBOL_NS(ce_aes_mac_update, CRYPTO_INTERNAL);
#else
module_init(aes_init);
EXPORT_SYMBOL(neon_aes_ecb_encrypt);
@ -137,4 +137,24 @@ config CRYPTO_POLY1305_P10
	  - Power10 or later
	  - Little-endian

config CRYPTO_DEV_VMX
	bool "Support for VMX cryptographic acceleration instructions"
	depends on PPC64 && VSX
	help
	  Support for VMX cryptographic acceleration instructions.

config CRYPTO_DEV_VMX_ENCRYPT
	tristate "Encryption acceleration support on P8 CPU"
	depends on CRYPTO_DEV_VMX
	select CRYPTO_AES
	select CRYPTO_CBC
	select CRYPTO_CTR
	select CRYPTO_GHASH
	select CRYPTO_XTS
	default m
	help
	  Support for VMX cryptographic acceleration instructions on Power8 CPU.
	  This module supports acceleration for AES and GHASH in hardware. If you
	  choose 'M' here, this module will be called vmx-crypto.

endmenu
@ -16,6 +16,7 @@ obj-$(CONFIG_CRYPTO_VPMSUM_TESTER) += crc-vpmsum_test.o
obj-$(CONFIG_CRYPTO_AES_GCM_P10) += aes-gcm-p10-crypto.o
obj-$(CONFIG_CRYPTO_CHACHA20_P10) += chacha-p10-crypto.o
obj-$(CONFIG_CRYPTO_POLY1305_P10) += poly1305-p10-crypto.o
obj-$(CONFIG_CRYPTO_DEV_VMX_ENCRYPT) += vmx-crypto.o

aes-ppc-spe-y := aes-spe-core.o aes-spe-keys.o aes-tab-4k.o aes-spe-modes.o aes-spe-glue.o
md5-ppc-y := md5-asm.o md5-glue.o

@ -27,14 +28,29 @@ crct10dif-vpmsum-y := crct10dif-vpmsum_asm.o crct10dif-vpmsum_glue.o
aes-gcm-p10-crypto-y := aes-gcm-p10-glue.o aes-gcm-p10.o ghashp10-ppc.o aesp10-ppc.o
chacha-p10-crypto-y := chacha-p10-glue.o chacha-p10le-8x.o
poly1305-p10-crypto-y := poly1305-p10-glue.o poly1305-p10le_64.o
vmx-crypto-objs := vmx.o aesp8-ppc.o ghashp8-ppc.o aes.o aes_cbc.o aes_ctr.o aes_xts.o ghash.o

ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
override flavour := linux-ppc64le
else
ifdef CONFIG_PPC64_ELF_ABI_V2
override flavour := linux-ppc64-elfv2
else
override flavour := linux-ppc64
endif
endif

quiet_cmd_perl = PERL    $@
      cmd_perl = $(PERL) $< $(if $(CONFIG_CPU_LITTLE_ENDIAN), linux-ppc64le, linux-ppc64) > $@
      cmd_perl = $(PERL) $< $(flavour) > $@

targets += aesp10-ppc.S ghashp10-ppc.S
targets += aesp10-ppc.S ghashp10-ppc.S aesp8-ppc.S ghashp8-ppc.S

$(obj)/aesp10-ppc.S $(obj)/ghashp10-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
	$(call if_changed,perl)

$(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
	$(call if_changed,perl)

OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y
OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y
OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
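The net effect of the $(flavour) block is to extend the old inline little-endian/big-endian choice with a linux-ppc64-elfv2 target for big-endian ELFv2 builds, and the relocated vmx perlasm sources (aesp8-ppc.pl, ghashp8-ppc.pl) are now generated through the same if_changed,perl rule as the p10 ones.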
@ -1269,10 +1269,11 @@ config CRYPTO_JITTERENTROPY

	  A non-physical non-deterministic ("true") RNG (e.g., an entropy source
	  compliant with NIST SP800-90B) intended to provide a seed to a
	  deterministic RNG (e.g. per NIST SP800-90C).
	  deterministic RNG (e.g., per NIST SP800-90C).
	  This RNG does not perform any cryptographic whitening of the generated
	  random numbers.

	  See https://www.chronox.de/jent.html
	  See https://www.chronox.de/jent/

if CRYPTO_JITTERENTROPY
if CRYPTO_FIPS && EXPERT
@ -618,6 +618,16 @@ int crypto_has_ahash(const char *alg_name, u32 type, u32 mask)
}
EXPORT_SYMBOL_GPL(crypto_has_ahash);

static bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
{
	struct crypto_alg *alg = &halg->base;

	if (alg->cra_type == &crypto_shash_type)
		return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

	return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
}

struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
{
	struct hash_alg_common *halg = crypto_hash_alg_common(hash);

@ -760,16 +770,5 @@ int ahash_register_instance(struct crypto_template *tmpl,
}
EXPORT_SYMBOL_GPL(ahash_register_instance);

bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
{
	struct crypto_alg *alg = &halg->base;

	if (alg->cra_type == &crypto_shash_type)
		return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

	return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
}
EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
@ -28,7 +28,7 @@ static int pefile_parse_binary(const void *pebuf, unsigned int pelen,
	const struct pe32plus_opt_hdr *pe64;
	const struct data_directory *ddir;
	const struct data_dirent *dde;
	const struct section_header *secs, *sec;
	const struct section_header *sec;
	size_t cursor, datalen = pelen;

	kenter("");

@ -110,7 +110,7 @@ static int pefile_parse_binary(const void *pebuf, unsigned int pelen,
	ctx->n_sections = pe->sections;
	if (ctx->n_sections > (ctx->header_size - cursor) / sizeof(*sec))
		return -ELIBBAD;
	ctx->secs = secs = pebuf + cursor;
	ctx->secs = pebuf + cursor;

	return 0;
}
crypto/dh.c
@ -106,6 +106,12 @@ err_clear_ctx:
 */
static int dh_is_pubkey_valid(struct dh_ctx *ctx, MPI y)
{
	MPI val, q;
	int ret;

	if (!fips_enabled)
		return 0;

	if (unlikely(!ctx->p))
		return -EINVAL;

@ -125,41 +131,36 @@ static int dh_is_pubkey_valid(struct dh_ctx *ctx, MPI y)
	 *
	 * For the safe-prime groups q = (p - 1)/2.
	 */
	if (fips_enabled) {
		MPI val, q;
		int ret;

		val = mpi_alloc(0);
		if (!val)
			return -ENOMEM;

		q = mpi_alloc(mpi_get_nlimbs(ctx->p));
		if (!q) {
			mpi_free(val);
			return -ENOMEM;
		}

		/*
		 * ->p is odd, so no need to explicitly subtract one
		 * from it before shifting to the right.
		 */
		mpi_rshift(q, ctx->p, 1);

		ret = mpi_powm(val, y, q, ctx->p);
		mpi_free(q);
		if (ret) {
			mpi_free(val);
			return ret;
		}

		ret = mpi_cmp_ui(val, 1);
	val = mpi_alloc(0);
	if (!val)
		return -ENOMEM;

	q = mpi_alloc(mpi_get_nlimbs(ctx->p));
	if (!q) {
		mpi_free(val);

		if (ret != 0)
			return -EINVAL;
		return -ENOMEM;
	}

	/*
	 * ->p is odd, so no need to explicitly subtract one
	 * from it before shifting to the right.
	 */
	mpi_rshift(q, ctx->p, 1);

	ret = mpi_powm(val, y, q, ctx->p);
	mpi_free(q);
	if (ret) {
		mpi_free(val);
		return ret;
	}

	ret = mpi_cmp_ui(val, 1);

	mpi_free(val);

	if (ret != 0)
		return -EINVAL;

	return 0;
}
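In symbols, the check that now runs whenever fips_enabled is set: for a safe-prime group with

$$q = \frac{p-1}{2}, \qquad y^{q} \bmod p = 1,$$

which is exactly the mpi_powm(val, y, q, ctx->p) computation followed by the comparison against 1, confirming that the public key $y$ lies in the subgroup of prime order $q$.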
@ -71,7 +71,7 @@ static int crypto_pcbc_encrypt(struct skcipher_request *req)

	err = skcipher_walk_virt(&walk, req, false);

	while ((nbytes = walk.nbytes)) {
	while (walk.nbytes) {
		if (walk.src.virt.addr == walk.dst.virt.addr)
			nbytes = crypto_pcbc_encrypt_inplace(req, &walk,
							     cipher);

@ -138,7 +138,7 @@ static int crypto_pcbc_decrypt(struct skcipher_request *req)

	err = skcipher_walk_virt(&walk, req, false);

	while ((nbytes = walk.nbytes)) {
	while (walk.nbytes) {
		if (walk.src.virt.addr == walk.dst.virt.addr)
			nbytes = crypto_pcbc_decrypt_inplace(req, &walk,
							     cipher);
crypto/rsa.c
@ -24,14 +24,38 @@ struct rsa_mpi_key {
	MPI qinv;
};

static int rsa_check_payload(MPI x, MPI n)
{
	MPI n1;

	if (mpi_cmp_ui(x, 1) <= 0)
		return -EINVAL;

	n1 = mpi_alloc(0);
	if (!n1)
		return -ENOMEM;

	if (mpi_sub_ui(n1, n, 1) || mpi_cmp(x, n1) >= 0) {
		mpi_free(n1);
		return -EINVAL;
	}

	mpi_free(n1);
	return 0;
}

/*
 * RSAEP function [RFC3447 sec 5.1.1]
 * c = m^e mod n;
 */
static int _rsa_enc(const struct rsa_mpi_key *key, MPI c, MPI m)
{
	/* (1) Validate 0 <= m < n */
	if (mpi_cmp_ui(m, 0) < 0 || mpi_cmp(m, key->n) >= 0)
	/*
	 * Even though (1) in RFC3447 only requires 0 <= m <= n - 1, we are
	 * slightly more conservative and require 1 < m < n - 1. This is in line
	 * with SP 800-56Br2, Section 7.1.1.
	 */
	if (rsa_check_payload(m, key->n))
		return -EINVAL;

	/* (2) c = m^e mod n */

@ -50,8 +74,12 @@ static int _rsa_dec_crt(const struct rsa_mpi_key *key, MPI m_or_m1_or_h, MPI c)
	MPI m2, m12_or_qh;
	int ret = -ENOMEM;

	/* (1) Validate 0 <= c < n */
	if (mpi_cmp_ui(c, 0) < 0 || mpi_cmp(c, key->n) >= 0)
	/*
	 * Even though (1) in RFC3447 only requires 0 <= c <= n - 1, we are
	 * slightly more conservative and require 1 < c < n - 1. This is in line
	 * with SP 800-56Br2, Section 7.1.2.
	 */
	if (rsa_check_payload(c, key->n))
		return -EINVAL;

	m2 = mpi_alloc(0);
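The tightened bound $1 < m < n-1$ (and likewise for $c$) also excludes the fixed points of RSA: modulo $n$, $0^e = 0$, $1^e = 1$, and since the public exponent $e$ is odd, $(n-1)^e \equiv (-1)^e \equiv n-1 \pmod{n}$, so those payloads would encrypt to themselves.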
@ -117,6 +117,7 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
	struct crypto_scomp *scomp = *tfm_ctx;
	void **ctx = acomp_request_ctx(req);
	struct scomp_scratch *scratch;
	void *src, *dst;
	unsigned int dlen;
	int ret;

@ -134,13 +135,25 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
	scratch = raw_cpu_ptr(&scomp_scratch);
	spin_lock(&scratch->lock);

	scatterwalk_map_and_copy(scratch->src, req->src, 0, req->slen, 0);
	if (dir)
		ret = crypto_scomp_compress(scomp, scratch->src, req->slen,
					    scratch->dst, &req->dlen, *ctx);
	if (sg_nents(req->src) == 1 && !PageHighMem(sg_page(req->src))) {
		src = page_to_virt(sg_page(req->src)) + req->src->offset;
	} else {
		scatterwalk_map_and_copy(scratch->src, req->src, 0,
					 req->slen, 0);
		src = scratch->src;
	}

	if (req->dst && sg_nents(req->dst) == 1 && !PageHighMem(sg_page(req->dst)))
		dst = page_to_virt(sg_page(req->dst)) + req->dst->offset;
	else
		ret = crypto_scomp_decompress(scomp, scratch->src, req->slen,
					      scratch->dst, &req->dlen, *ctx);
		dst = scratch->dst;

	if (dir)
		ret = crypto_scomp_compress(scomp, src, req->slen,
					    dst, &req->dlen, *ctx);
	else
		ret = crypto_scomp_decompress(scomp, src, req->slen,
					      dst, &req->dlen, *ctx);
	if (!ret) {
		if (!req->dst) {
			req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);

@ -152,8 +165,17 @@ static int scomp_acomp_comp_decomp(struct acomp_req *req, int dir)
			ret = -ENOSPC;
			goto out;
		}
		scatterwalk_map_and_copy(scratch->dst, req->dst, 0, req->dlen,
					 1);
		if (dst == scratch->dst) {
			scatterwalk_map_and_copy(scratch->dst, req->dst, 0,
						 req->dlen, 1);
		} else {
			int nr_pages = DIV_ROUND_UP(req->dst->offset + req->dlen, PAGE_SIZE);
			int i;
			struct page *dst_page = sg_page(req->dst);

			for (i = 0; i < nr_pages; i++)
				flush_dcache_page(dst_page + i);
		}
	}
out:
	spin_unlock(&scratch->lock);
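This is the scomp fast path called out in the pull summary: when a scatterlist has a single entry whose page sits in lowmem, the buffer is addressed directly via page_to_virt() and the copy through the per-CPU scratch buffer is skipped entirely; the flush_dcache_page() loop on the direct-destination branch keeps aliased cache lines coherent for pages that were written without going through the scatterwalk helpers.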
@ -1851,6 +1851,9 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
		ret = min(ret, tcrypt_test("cbc(aria)"));
		ret = min(ret, tcrypt_test("ctr(aria)"));
		break;
	case 193:
		ret = min(ret, tcrypt_test("ffdhe2048(dh)"));
		break;
	case 200:
		test_cipher_speed("ecb(aes)", ENCRYPT, sec, NULL, 0,
				  speed_template_16_24_32);
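tcrypt test modes are conventionally run by loading the module with the mode number, e.g. modprobe tcrypt mode=193 for the new ffdhe2048(dh) case; note that, as usual for tcrypt, the module load reports an error code even when the tests pass, so that the module never stays resident.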
@ -5720,14 +5720,6 @@ static const struct alg_test_desc alg_test_descs[] = {
		}
	}, {
#endif
		.alg = "xts4096(paes)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "xts512(paes)",
		.test = alg_test_null,
		.fips_allowed = 1,
	}, {
		.alg = "xxhash64",
		.test = alg_test_hash,
		.fips_allowed = 1,
@ -89,10 +89,8 @@ static int hisi_rng_probe(struct platform_device *pdev)
	rng->rng.read = hisi_rng_read;

	ret = devm_hwrng_register(&pdev->dev, &rng->rng);
	if (ret) {
		dev_err(&pdev->dev, "failed to register hwrng\n");
		return ret;
	}
	if (ret)
		return dev_err_probe(&pdev->dev, ret, "failed to register hwrng\n");

	return 0;
}
@ -611,13 +611,13 @@ config CRYPTO_DEV_QCOM_RNG
	  To compile this driver as a module, choose M here. The
	  module will be called qcom-rng. If unsure, say N.

config CRYPTO_DEV_VMX
	bool "Support for VMX cryptographic acceleration instructions"
	depends on PPC64 && VSX
	help
	  Support for VMX cryptographic acceleration instructions.

source "drivers/crypto/vmx/Kconfig"
#config CRYPTO_DEV_VMX
#	bool "Support for VMX cryptographic acceleration instructions"
#	depends on PPC64 && VSX
#	help
#	  Support for VMX cryptographic acceleration instructions.
#
#source "drivers/crypto/vmx/Kconfig"

config CRYPTO_DEV_IMGTEC_HASH
	tristate "Imagination Technologies hardware hash accelerator"
@ -42,7 +42,7 @@ obj-$(CONFIG_CRYPTO_DEV_SL3516) += gemini/
obj-y += stm32/
obj-$(CONFIG_CRYPTO_DEV_TALITOS) += talitos.o
obj-$(CONFIG_CRYPTO_DEV_VIRTIO) += virtio/
obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/
#obj-$(CONFIG_CRYPTO_DEV_VMX) += vmx/
obj-$(CONFIG_CRYPTO_DEV_BCM_SPU) += bcm/
obj-$(CONFIG_CRYPTO_DEV_SAFEXCEL) += inside-secure/
obj-$(CONFIG_CRYPTO_DEV_ARTPEC6) += axis/
@ -362,7 +362,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
		digestsize = SHA512_DIGEST_SIZE;

	/* the padding could be up to two blocks. */
	buf = kzalloc(bs * 2, GFP_KERNEL | GFP_DMA);
	buf = kcalloc(2, bs, GFP_KERNEL | GFP_DMA);
	if (!buf) {
		err = -ENOMEM;
		goto theend;
@ -118,9 +118,16 @@ int psp_send_platform_access_msg(enum psp_platform_access_msg msg,
		goto unlock;
	}

	/* Store the status in request header for caller to investigate */
	/*
	 * Read status from PSP. If status is non-zero, it indicates an error
	 * occurred during "processing" of the command.
	 * If status is zero, it indicates the command was "processed"
	 * successfully, but the result of the command is in the payload.
	 * Return both cases to the caller as -EIO to investigate.
	 */
	cmd_reg = ioread32(cmd);
	req->header.status = FIELD_GET(PSP_CMDRESP_STS, cmd_reg);
	if (FIELD_GET(PSP_CMDRESP_STS, cmd_reg))
	req->header.status = FIELD_GET(PSP_CMDRESP_STS, cmd_reg);
	if (req->header.status) {
		ret = -EIO;
		goto unlock;
@ -156,11 +156,14 @@ static unsigned int psp_get_capability(struct psp_device *psp)
	}
	psp->capability = val;

	/* Detect if TSME and SME are both enabled */
	/* Detect TSME and/or SME status */
	if (PSP_CAPABILITY(psp, PSP_SECURITY_REPORTING) &&
	    psp->capability & (PSP_SECURITY_TSME_STATUS << PSP_CAPABILITY_PSP_SECURITY_OFFSET) &&
	    cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
		dev_notice(psp->dev, "psp: Both TSME and SME are active, SME is unnecessary when TSME is active.\n");
	    psp->capability & (PSP_SECURITY_TSME_STATUS << PSP_CAPABILITY_PSP_SECURITY_OFFSET)) {
		if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
			dev_notice(psp->dev, "psp: Both TSME and SME are active, SME is unnecessary when TSME is active.\n");
		else
			dev_notice(psp->dev, "psp: TSME enabled\n");
	}

	return 0;
}
@ -24,6 +24,8 @@
#define QM_DFX_QN_SHIFT			16
#define QM_DFX_CNT_CLR_CE		0x100118
#define QM_DBG_WRITE_LEN		1024
#define QM_IN_IDLE_ST_REG		0x1040e4
#define QM_IN_IDLE_STATE		0x1

static const char * const qm_debug_file_name[] = {
	[CURRENT_QM] = "current_qm",

@ -81,6 +83,30 @@ static const struct debugfs_reg32 qm_dfx_regs[] = {
	{"QM_DFX_FF_ST5            ", 0x1040dc},
	{"QM_DFX_FF_ST6            ", 0x1040e0},
	{"QM_IN_IDLE_ST            ", 0x1040e4},
	{"QM_CACHE_CTL             ", 0x100050},
	{"QM_TIMEOUT_CFG           ", 0x100070},
	{"QM_DB_TIMEOUT_CFG        ", 0x100074},
	{"QM_FLR_PENDING_TIME_CFG  ", 0x100078},
	{"QM_ARUSR_MCFG1           ", 0x100088},
	{"QM_AWUSR_MCFG1           ", 0x100098},
	{"QM_AXI_M_CFG_ENABLE      ", 0x1000B0},
	{"QM_RAS_CE_THRESHOLD      ", 0x1000F8},
	{"QM_AXI_TIMEOUT_CTRL      ", 0x100120},
	{"QM_AXI_TIMEOUT_STATUS    ", 0x100124},
	{"QM_CQE_AGGR_TIMEOUT_CTRL ", 0x100144},
	{"ACC_RAS_MSI_INT_SEL      ", 0x1040fc},
	{"QM_CQE_OUT               ", 0x104100},
	{"QM_EQE_OUT               ", 0x104104},
	{"QM_AEQE_OUT              ", 0x104108},
	{"QM_DB_INFO0              ", 0x104180},
	{"QM_DB_INFO1              ", 0x104184},
	{"QM_AM_CTRL_GLOBAL        ", 0x300000},
	{"QM_AM_CURR_PORT_STS      ", 0x300100},
	{"QM_AM_CURR_TRANS_RETURN  ", 0x300150},
	{"QM_AM_CURR_RD_MAX_TXID   ", 0x300154},
	{"QM_AM_CURR_WR_MAX_TXID   ", 0x300158},
	{"QM_AM_ALARM_RRESP        ", 0x300180},
	{"QM_AM_ALARM_BRESP        ", 0x300184},
};

static const struct debugfs_reg32 qm_vf_dfx_regs[] = {
@ -1001,6 +1027,30 @@ static int qm_diff_regs_show(struct seq_file *s, void *unused)
}
DEFINE_SHOW_ATTRIBUTE(qm_diff_regs);

static int qm_state_show(struct seq_file *s, void *unused)
{
	struct hisi_qm *qm = s->private;
	u32 val;
	int ret;

	/* If the device is suspended, return the idle state directly. */
	ret = hisi_qm_get_dfx_access(qm);
	if (!ret) {
		val = readl(qm->io_base + QM_IN_IDLE_ST_REG);
		hisi_qm_put_dfx_access(qm);
	} else if (ret == -EAGAIN) {
		val = QM_IN_IDLE_STATE;
	} else {
		return ret;
	}

	seq_printf(s, "%u\n", val);

	return 0;
}

DEFINE_SHOW_ATTRIBUTE(qm_state);

static ssize_t qm_status_read(struct file *filp, char __user *buffer,
			      size_t count, loff_t *pos)
{
@ -1062,6 +1112,7 @@ DEFINE_DEBUGFS_ATTRIBUTE(qm_atomic64_ops, qm_debugfs_atomic64_get,
void hisi_qm_debug_init(struct hisi_qm *qm)
{
	struct dfx_diff_registers *qm_regs = qm->debug.qm_diff_regs;
	struct qm_dev_dfx *dev_dfx = &qm->debug.dev_dfx;
	struct qm_dfx *dfx = &qm->debug.dfx;
	struct dentry *qm_d;
	void *data;
@ -1072,6 +1123,9 @@ void hisi_qm_debug_init(struct hisi_qm *qm)

	/* only show this in PF */
	if (qm->fun_type == QM_HW_PF) {
		debugfs_create_file("qm_state", 0444, qm->debug.qm_d,
				    qm, &qm_state_fops);

		qm_create_debugfs_file(qm, qm->debug.debug_root, CURRENT_QM);
		for (i = CURRENT_Q; i < DEBUG_FILE_NUM; i++)
			qm_create_debugfs_file(qm, qm->debug.qm_d, i);
@ -1087,6 +1141,10 @@ void hisi_qm_debug_init(struct hisi_qm *qm)

	debugfs_create_file("status", 0444, qm->debug.qm_d, qm,
			    &qm_status_fops);

	debugfs_create_u32("dev_state", 0444, qm->debug.qm_d, &dev_dfx->dev_state);
	debugfs_create_u32("dev_timeout", 0644, qm->debug.qm_d, &dev_dfx->dev_timeout);

	for (i = 0; i < ARRAY_SIZE(qm_dfx_files); i++) {
		data = (atomic64_t *)((uintptr_t)dfx + qm_dfx_files[i].offset);
		debugfs_create_file(qm_dfx_files[i].name,
@ -440,7 +440,7 @@ MODULE_PARM_DESC(vfs_num, "Number of VFs to enable(1-63), 0(default)");

struct hisi_qp *hpre_create_qp(u8 type)
{
	int node = cpu_to_node(smp_processor_id());
	int node = cpu_to_node(raw_smp_processor_id());
	struct hisi_qp *qp = NULL;
	int ret;
@@ -236,6 +236,12 @@

#define QM_DEV_ALG_MAX_LEN		256

/* abnormal status value for stopping queue */
#define QM_STOP_QUEUE_FAIL		1
#define QM_DUMP_SQC_FAIL		3
#define QM_DUMP_CQC_FAIL		4
#define QM_FINISH_WAIT			5

#define QM_MK_CQC_DW3_V1(hop_num, pg_sz, buf_sz, cqe_sz) \
	(((hop_num) << QM_CQ_HOP_NUM_SHIFT) | \
	((pg_sz) << QM_CQ_PAGE_SIZE_SHIFT) | \

@@ -312,6 +318,7 @@ static const struct hisi_qm_cap_info qm_cap_info_comm[] = {
	{QM_SUPPORT_DB_ISOLATION, 0x30, 0, BIT(0), 0x0, 0x0, 0x0},
	{QM_SUPPORT_FUNC_QOS, 0x3100, 0, BIT(8), 0x0, 0x0, 0x1},
	{QM_SUPPORT_STOP_QP, 0x3100, 0, BIT(9), 0x0, 0x0, 0x1},
	{QM_SUPPORT_STOP_FUNC, 0x3100, 0, BIT(10), 0x0, 0x0, 0x1},
	{QM_SUPPORT_MB_COMMAND, 0x3100, 0, BIT(11), 0x0, 0x0, 0x1},
	{QM_SUPPORT_SVA_PREFETCH, 0x3100, 0, BIT(14), 0x0, 0x0, 0x1},
};

@@ -1674,6 +1681,11 @@ unlock:
	return ret;
}

static int qm_drain_qm(struct hisi_qm *qm)
{
	return hisi_qm_mb(qm, QM_MB_CMD_FLUSH_QM, 0, 0, 0);
}

static int qm_stop_qp(struct hisi_qp *qp)
{
	return hisi_qm_mb(qp->qm, QM_MB_CMD_STOP_QP, 0, qp->qp_id, 0);

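Each capability row above follows one fixed layout: a feature bit, a register offset, a shift, a mask, and per-hardware-version fallback values. A hedged sketch of how such a row is typically consumed is below; the struct and function names are assumptions made for illustration, not the driver's actual definitions.

/*
 * Hypothetical field names for the table layout shown above:
 * {type, offset, shift, mask, v1_val, v2_val, v3_val}.
 */
struct qm_cap_row {
	u32 type;		/* capability bit to set in qm->caps */
	u32 offset;		/* register holding the feature flags */
	u32 shift;
	u32 mask;		/* e.g. BIT(10) for QM_SUPPORT_STOP_FUNC */
	u32 v1_val, v2_val, v3_val;	/* per-HW-version fallbacks */
};

static void qm_apply_cap_row(struct hisi_qm *qm, const struct qm_cap_row *row)
{
	u32 val = readl(qm->io_base + row->offset) >> row->shift;

	if (val & row->mask)
		set_bit(row->type, &qm->caps);
}
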
@@ -2031,43 +2043,25 @@ static void qp_stop_fail_cb(struct hisi_qp *qp)
	}
}

/**
 * qm_drain_qp() - Drain a qp.
 * @qp: The qp we want to drain.
 *
 * Determine whether the queue is cleared by judging the tail pointers of
 * sq and cq.
 */
static int qm_drain_qp(struct hisi_qp *qp)
static int qm_wait_qp_empty(struct hisi_qm *qm, u32 *state, u32 qp_id)
{
	struct hisi_qm *qm = qp->qm;
	struct device *dev = &qm->pdev->dev;
	struct qm_sqc sqc;
	struct qm_cqc cqc;
	int ret, i = 0;

	/* No need to judge if master OOO is blocked. */
	if (qm_check_dev_error(qm))
		return 0;

	/* Kunpeng930 supports drain qp by device */
	if (test_bit(QM_SUPPORT_STOP_QP, &qm->caps)) {
		ret = qm_stop_qp(qp);
		if (ret)
			dev_err(dev, "Failed to stop qp(%u)!\n", qp->qp_id);
		return ret;
	}

	while (++i) {
		ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp->qp_id, 1);
		ret = qm_set_and_get_xqc(qm, QM_MB_CMD_SQC, &sqc, qp_id, 1);
		if (ret) {
			dev_err_ratelimited(dev, "Failed to dump sqc!\n");
			*state = QM_DUMP_SQC_FAIL;
			return ret;
		}

		ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp->qp_id, 1);
		ret = qm_set_and_get_xqc(qm, QM_MB_CMD_CQC, &cqc, qp_id, 1);
		if (ret) {
			dev_err_ratelimited(dev, "Failed to dump cqc!\n");
			*state = QM_DUMP_CQC_FAIL;
			return ret;
		}

@@ -2076,8 +2070,9 @@ static int qm_drain_qp(struct hisi_qp *qp)
			break;

		if (i == MAX_WAIT_COUNTS) {
			dev_err(dev, "Fail to empty queue %u!\n", qp->qp_id);
			return -EBUSY;
			dev_err(dev, "Fail to empty queue %u!\n", qp_id);
			*state = QM_STOP_QUEUE_FAIL;
			return -ETIMEDOUT;
		}

		usleep_range(WAIT_PERIOD_US_MIN, WAIT_PERIOD_US_MAX);

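qm_wait_qp_empty() above follows the driver's usual bounded-poll shape: re-check hardware state, sleep between samples, and give up with a distinct status code after a fixed number of rounds. A generic illustration under those assumptions (the function name and idle predicate are hypothetical, not part of the driver):

/*
 * Generic sketch of the bounded-poll idiom: a stuck queue cannot hang
 * the caller because the loop bails out after MAX_WAIT_COUNTS rounds.
 */
static int poll_until_idle(struct hisi_qm *qm,
			   bool (*queue_idle)(struct hisi_qm *qm))
{
	int i = 0;

	while (++i) {
		if (queue_idle(qm))
			return 0;

		if (i == MAX_WAIT_COUNTS)
			return -ETIMEDOUT;

		usleep_range(WAIT_PERIOD_US_MIN, WAIT_PERIOD_US_MAX);
	}

	return 0;
}
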
@@ -2086,9 +2081,53 @@ static int qm_drain_qp(struct hisi_qp *qp)
	return 0;
}

static int qm_stop_qp_nolock(struct hisi_qp *qp)
/**
 * qm_drain_qp() - Drain a qp.
 * @qp: The qp we want to drain.
 *
 * If the device does not support stopping queue by sending mailbox,
 * determine whether the queue is cleared by judging the tail pointers of
 * sq and cq.
 */
static int qm_drain_qp(struct hisi_qp *qp)
{
	struct device *dev = &qp->qm->pdev->dev;
	struct hisi_qm *qm = qp->qm;
	struct hisi_qm *pf_qm = pci_get_drvdata(pci_physfn(qm->pdev));
	u32 state = 0;
	int ret;

	/* No need to judge if master OOO is blocked. */
	if (qm_check_dev_error(pf_qm))
		return 0;

	/* HW V3 supports drain qp by device */
	if (test_bit(QM_SUPPORT_STOP_QP, &qm->caps)) {
		ret = qm_stop_qp(qp);
		if (ret) {
			dev_err(&qm->pdev->dev, "Failed to stop qp!\n");
			state = QM_STOP_QUEUE_FAIL;
			goto set_dev_state;
		}
		return ret;
	}

	ret = qm_wait_qp_empty(qm, &state, qp->qp_id);
	if (ret)
		goto set_dev_state;

	return 0;

set_dev_state:
	if (qm->debug.dev_dfx.dev_timeout)
		qm->debug.dev_dfx.dev_state = state;

	return ret;
}

static void qm_stop_qp_nolock(struct hisi_qp *qp)
{
	struct hisi_qm *qm = qp->qm;
	struct device *dev = &qm->pdev->dev;
	int ret;

	/*

@@ -2099,39 +2138,36 @@ static int qm_stop_qp_nolock(struct hisi_qp *qp)
	 */
	if (atomic_read(&qp->qp_status.flags) != QP_START) {
		qp->is_resetting = false;
		return 0;
		return;
	}

	atomic_set(&qp->qp_status.flags, QP_STOP);

	ret = qm_drain_qp(qp);
	if (ret)
		dev_err(dev, "Failed to drain out data for stopping!\n");
	/* V3 supports direct stop function when FLR prepare */
	if (qm->ver < QM_HW_V3 || qm->status.stop_reason == QM_NORMAL) {
		ret = qm_drain_qp(qp);
		if (ret)
			dev_err(dev, "Failed to drain out data for stopping qp(%u)!\n", qp->qp_id);
	}

	flush_workqueue(qp->qm->wq);
	flush_workqueue(qm->wq);
	if (unlikely(qp->is_resetting && atomic_read(&qp->qp_status.used)))
		qp_stop_fail_cb(qp);

	dev_dbg(dev, "stop queue %u!", qp->qp_id);

	return 0;
}

/**
 * hisi_qm_stop_qp() - Stop a qp in qm.
 * @qp: The qp we want to stop.
 *
 * This function is reverse of hisi_qm_start_qp. Return 0 if successful.
 * This function is reverse of hisi_qm_start_qp.
 */
int hisi_qm_stop_qp(struct hisi_qp *qp)
void hisi_qm_stop_qp(struct hisi_qp *qp)
{
	int ret;

	down_write(&qp->qm->qps_lock);
	ret = qm_stop_qp_nolock(qp);
	qm_stop_qp_nolock(qp);
	up_write(&qp->qm->qps_lock);

	return ret;
}
EXPORT_SYMBOL_GPL(hisi_qm_stop_qp);

@@ -2309,7 +2345,31 @@ static int hisi_qm_uacce_start_queue(struct uacce_queue *q)

static void hisi_qm_uacce_stop_queue(struct uacce_queue *q)
{
	hisi_qm_stop_qp(q->priv);
	struct hisi_qp *qp = q->priv;
	struct hisi_qm *qm = qp->qm;
	struct qm_dev_dfx *dev_dfx = &qm->debug.dev_dfx;
	u32 i = 0;

	hisi_qm_stop_qp(qp);

	if (!dev_dfx->dev_timeout || !dev_dfx->dev_state)
		return;

	/*
	 * After the queue fails to be stopped,
	 * wait for a period of time before releasing the queue.
	 */
	while (++i) {
		msleep(WAIT_PERIOD);

		/* Since dev_timeout maybe modified, check i >= dev_timeout */
		if (i >= dev_dfx->dev_timeout) {
			dev_err(&qm->pdev->dev, "Stop q %u timeout, state %u\n",
				qp->qp_id, dev_dfx->dev_state);
			dev_dfx->dev_state = QM_FINISH_WAIT;
			break;
		}
	}
}

static int hisi_qm_is_q_updated(struct uacce_queue *q)

@@ -3054,25 +3114,18 @@ static int qm_restart(struct hisi_qm *qm)
}

/* Stop started qps in reset flow */
static int qm_stop_started_qp(struct hisi_qm *qm)
static void qm_stop_started_qp(struct hisi_qm *qm)
{
	struct device *dev = &qm->pdev->dev;
	struct hisi_qp *qp;
	int i, ret;
	int i;

	for (i = 0; i < qm->qp_num; i++) {
		qp = &qm->qp_array[i];
		if (qp && atomic_read(&qp->qp_status.flags) == QP_START) {
		if (atomic_read(&qp->qp_status.flags) == QP_START) {
			qp->is_resetting = true;
			ret = qm_stop_qp_nolock(qp);
			if (ret < 0) {
				dev_err(dev, "Failed to stop qp%d!\n", i);
				return ret;
			}
			qm_stop_qp_nolock(qp);
		}
	}

	return 0;
}

/**

@@ -3112,21 +3165,31 @@ int hisi_qm_stop(struct hisi_qm *qm, enum qm_stop_reason r)

	down_write(&qm->qps_lock);

	qm->status.stop_reason = r;
	if (atomic_read(&qm->status.flags) == QM_STOP)
		goto err_unlock;

	/* Stop all the request sending at first. */
	atomic_set(&qm->status.flags, QM_STOP);
	qm->status.stop_reason = r;

	if (qm->status.stop_reason == QM_SOFT_RESET ||
	    qm->status.stop_reason == QM_DOWN) {
	if (qm->status.stop_reason != QM_NORMAL) {
		hisi_qm_set_hw_reset(qm, QM_RESET_STOP_TX_OFFSET);
		ret = qm_stop_started_qp(qm);
		if (ret < 0) {
			dev_err(dev, "Failed to stop started qp!\n");
			goto err_unlock;
		/*
		 * When performing soft reset, the hardware will no longer
		 * do tasks, and the tasks in the device will be flushed
		 * out directly since the master ooo is closed.
		 */
		if (test_bit(QM_SUPPORT_STOP_FUNC, &qm->caps) &&
		    r != QM_SOFT_RESET) {
			ret = qm_drain_qm(qm);
			if (ret) {
				dev_err(dev, "failed to drain qm!\n");
				goto err_unlock;
			}
		}

		qm_stop_started_qp(qm);

		hisi_qm_set_hw_reset(qm, QM_RESET_STOP_RX_OFFSET);
	}

@@ -3141,6 +3204,7 @@ int hisi_qm_stop(struct hisi_qm *qm, enum qm_stop_reason r)
	}

	qm_clear_queues(qm);
	qm->status.stop_reason = QM_NORMAL;

err_unlock:
	up_write(&qm->qps_lock);

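The reworked hisi_qm_stop() flow drains through the mailbox only when the hardware advertises QM_SUPPORT_STOP_FUNC and the stop is not a soft reset, since closing master OOO during a soft reset already flushes outstanding tasks. A hedged one-liner distilling that branch, with a hypothetical name:

/*
 * Hypothetical predicate capturing the decision above: a mailbox drain
 * is only useful when the device supports it and it is not already
 * being soft-reset.
 */
static bool qm_needs_mb_drain(struct hisi_qm *qm, enum qm_stop_reason r)
{
	return test_bit(QM_SUPPORT_STOP_FUNC, &qm->caps) && r != QM_SOFT_RESET;
}
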
@@ -118,7 +118,7 @@ struct sec_aead {
};

/* Get an en/de-cipher queue cyclically to balance load over queues of TFM */
static inline int sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req)
static inline u32 sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req)
{
	if (req->c_req.encrypt)
		return (u32)atomic_inc_return(&ctx->enc_qcyclic) %

@@ -485,8 +485,7 @@ static void sec_alg_resource_free(struct sec_ctx *ctx,
		sec_free_mac_resource(dev, qp_ctx->res);
}

static int sec_alloc_qp_ctx_resource(struct hisi_qm *qm, struct sec_ctx *ctx,
				     struct sec_qp_ctx *qp_ctx)
static int sec_alloc_qp_ctx_resource(struct sec_ctx *ctx, struct sec_qp_ctx *qp_ctx)
{
	u16 q_depth = qp_ctx->qp->sq_depth;
	struct device *dev = ctx->dev;

@@ -541,8 +540,7 @@ static void sec_free_qp_ctx_resource(struct sec_ctx *ctx, struct sec_qp_ctx *qp_
	kfree(qp_ctx->req_list);
}

static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx,
			     int qp_ctx_id, int alg_type)
static int sec_create_qp_ctx(struct sec_ctx *ctx, int qp_ctx_id)
{
	struct sec_qp_ctx *qp_ctx;
	struct hisi_qp *qp;

@@ -561,7 +559,7 @@ static int sec_create_qp_ctx(struct hisi_qm *qm, struct sec_ctx *ctx,
	idr_init(&qp_ctx->req_idr);
	INIT_LIST_HEAD(&qp_ctx->backlog);

	ret = sec_alloc_qp_ctx_resource(qm, ctx, qp_ctx);
	ret = sec_alloc_qp_ctx_resource(ctx, qp_ctx);
	if (ret)
		goto err_destroy_idr;

@@ -614,7 +612,7 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
	}

	for (i = 0; i < sec->ctx_q_num; i++) {
		ret = sec_create_qp_ctx(&sec->qm, ctx, i, 0);
		ret = sec_create_qp_ctx(ctx, i);
		if (ret)
			goto err_sec_release_qp_ctx;
	}

@@ -750,9 +748,7 @@ static void sec_skcipher_uninit(struct crypto_skcipher *tfm)
	sec_ctx_base_uninit(ctx);
}

static int sec_skcipher_3des_setkey(struct crypto_skcipher *tfm, const u8 *key,
				    const u32 keylen,
				    const enum sec_cmode c_mode)
static int sec_skcipher_3des_setkey(struct crypto_skcipher *tfm, const u8 *key, const u32 keylen)
{
	struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
	struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;

@@ -843,7 +839,7 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,

	switch (c_alg) {
	case SEC_CALG_3DES:
		ret = sec_skcipher_3des_setkey(tfm, key, keylen, c_mode);
		ret = sec_skcipher_3des_setkey(tfm, key, keylen);
		break;
	case SEC_CALG_AES:
	case SEC_CALG_SM4:

@@ -1371,7 +1367,7 @@ static int sec_skcipher_bd_fill_v3(struct sec_ctx *ctx, struct sec_req *req)
	sec_sqe3->bd_param = cpu_to_le32(bd_param);

	sec_sqe3->c_len_ivin |= cpu_to_le32(c_req->c_len);
	sec_sqe3->tag = cpu_to_le64(req);
	sec_sqe3->tag = cpu_to_le64((unsigned long)req);

	return 0;
}

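Routing the pointer through `unsigned long` before cpu_to_le64() matters because converting a pointer straight to a 64-bit integer draws cast warnings from tools such as sparse on configurations where pointers are narrower than 64 bits; `unsigned long` is always pointer-sized in the kernel, so the two-step conversion is well-defined and quiet. A minimal illustration of the pattern (the helper is hypothetical):

#include <linux/types.h>
#include <asm/byteorder.h>

/*
 * Pointer -> pointer-sized integer -> little-endian 64-bit value.
 * The intermediate unsigned long keeps the cast warning-free on
 * 32-bit builds.
 */
static __le64 tag_from_ptr(void *ptr)
{
	return cpu_to_le64((unsigned long)ptr);
}
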
@@ -2145,8 +2141,8 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
	return sec_skcipher_crypto(sk_req, false);
}

#define SEC_SKCIPHER_GEN_ALG(sec_cra_name, sec_set_key, sec_min_key_size, \
	sec_max_key_size, ctx_init, ctx_exit, blk_size, iv_size)\
#define SEC_SKCIPHER_ALG(sec_cra_name, sec_set_key, \
	sec_min_key_size, sec_max_key_size, blk_size, iv_size)\
{\
	.base = {\
		.cra_name = sec_cra_name,\

@@ -2158,8 +2154,8 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
		.cra_ctxsize = sizeof(struct sec_ctx),\
		.cra_module = THIS_MODULE,\
	},\
	.init = ctx_init,\
	.exit = ctx_exit,\
	.init = sec_skcipher_ctx_init,\
	.exit = sec_skcipher_ctx_exit,\
	.setkey = sec_set_key,\
	.decrypt = sec_skcipher_decrypt,\
	.encrypt = sec_skcipher_encrypt,\

@@ -2168,11 +2164,6 @@ static int sec_skcipher_decrypt(struct skcipher_request *sk_req)
	.ivsize = iv_size,\
}

#define SEC_SKCIPHER_ALG(name, key_func, min_key_size, \
	max_key_size, blk_size, iv_size) \
	SEC_SKCIPHER_GEN_ALG(name, key_func, min_key_size, max_key_size, \
	sec_skcipher_ctx_init, sec_skcipher_ctx_exit, blk_size, iv_size)

static struct sec_skcipher sec_skciphers[] = {
	{
		.alg_msk = BIT(0),

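With SEC_SKCIPHER_GEN_ALG folded away, each entry supplies only the name, setkey handler, key bounds, block size and IV size; the init/exit callbacks are now baked into the macro. A hedged usage sketch (the concrete argument values and the .alg field name are illustrative, not copied from the driver):

/* Illustrative sec_skciphers[] entry; values are examples only. */
static struct sec_skcipher sec_skcipher_example = {
	.alg_msk = BIT(0),
	.alg = SEC_SKCIPHER_ALG("ecb(aes)", sec_setkey_aes_ecb,
				AES_MIN_KEY_SIZE, AES_MAX_KEY_SIZE,
				AES_BLOCK_SIZE, 0),
};
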
@@ -282,6 +282,11 @@ static const struct debugfs_reg32 sec_dfx_regs[] = {
	{"SEC_BD_SAA6 ", 0x301C38},
	{"SEC_BD_SAA7 ", 0x301C3C},
	{"SEC_BD_SAA8 ", 0x301C40},
	{"SEC_RAS_CE_ENABLE ", 0x301050},
	{"SEC_RAS_FE_ENABLE ", 0x301054},
	{"SEC_RAS_NFE_ENABLE ", 0x301058},
	{"SEC_REQ_TRNG_TIME_TH ", 0x30112C},
	{"SEC_CHANNEL_RNG_REQ_THLD ", 0x302110},
};

/* define the SEC's dfx regs region and region length */

@@ -374,7 +379,7 @@ void sec_destroy_qps(struct hisi_qp **qps, int qp_num)

struct hisi_qp **sec_create_qps(void)
{
	int node = cpu_to_node(smp_processor_id());
	int node = cpu_to_node(raw_smp_processor_id());
	u32 ctx_num = ctx_q_num;
	struct hisi_qp **qps;
	int ret;

@@ -591,6 +591,7 @@ static struct acomp_alg hisi_zip_acomp_deflate = {
	.base = {
		.cra_name = "deflate",
		.cra_driver_name = "hisi-deflate-acomp",
		.cra_flags = CRYPTO_ALG_ASYNC,
		.cra_module = THIS_MODULE,
		.cra_priority = HZIP_ALG_PRIORITY,
		.cra_ctxsize = sizeof(struct hisi_zip_ctx),

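Advertising CRYPTO_ALG_ASYNC matters because callers filter on it: the crypto API's type/mask pair lets a user exclude asynchronous implementations entirely, which only works if async providers set the flag. A small sketch of that caller-side behaviour:

#include <crypto/acompress.h>

/*
 * With type = 0 and mask = CRYPTO_ALG_ASYNC, only synchronous
 * "deflate" providers match, so hardware engines that (now correctly)
 * set CRYPTO_ALG_ASYNC in cra_flags are skipped.
 */
static struct crypto_acomp *get_sync_deflate(void)
{
	return crypto_alloc_acomp("deflate", 0, CRYPTO_ALG_ASYNC);
}
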
@@ -454,7 +454,7 @@ MODULE_DEVICE_TABLE(pci, hisi_zip_dev_ids);
int zip_create_qps(struct hisi_qp **qps, int qp_num, int node)
{
	if (node == NUMA_NO_NODE)
		node = cpu_to_node(smp_processor_id());
		node = cpu_to_node(raw_smp_processor_id());

	return hisi_qm_alloc_qps_node(&zip_devices, qp_num, 0, node, qps);
}

@@ -59,10 +59,8 @@ struct iaa_device_compression_mode {
	const char *name;

	struct aecs_comp_table_record *aecs_comp_table;
	struct aecs_decomp_table_record *aecs_decomp_table;

	dma_addr_t aecs_comp_table_dma_addr;
	dma_addr_t aecs_decomp_table_dma_addr;
};

/* Representation of IAA device with wqs, populated by probe */

@@ -107,23 +105,6 @@ struct aecs_comp_table_record {
	u32 reserved_padding[2];
} __packed;

/* AECS for decompress */
struct aecs_decomp_table_record {
	u32 crc;
	u32 xor_checksum;
	u32 low_filter_param;
	u32 high_filter_param;
	u32 output_mod_idx;
	u32 drop_init_decomp_out_bytes;
	u32 reserved[36];
	u32 output_accum_data[2];
	u32 out_bits_valid;
	u32 bit_off_indexing;
	u32 input_accum_data[64];
	u8 size_qw[32];
	u32 decomp_state[1220];
} __packed;

int iaa_aecs_init_fixed(void);
void iaa_aecs_cleanup_fixed(void);

@@ -136,9 +117,6 @@ struct iaa_compression_mode {
	int ll_table_size;
	u32 *d_table;
	int d_table_size;
	u32 *header_table;
	int header_table_size;
	u16 gen_decomp_table_flags;
	iaa_dev_comp_init_fn_t init;
	iaa_dev_comp_free_fn_t free;
};

@@ -148,9 +126,6 @@ int add_iaa_compression_mode(const char *name,
			     int ll_table_size,
			     const u32 *d_table,
			     int d_table_size,
			     const u8 *header_table,
			     int header_table_size,
			     u16 gen_decomp_table_flags,
			     iaa_dev_comp_init_fn_t init,
			     iaa_dev_comp_free_fn_t free);

@@ -78,7 +78,6 @@ int iaa_aecs_init_fixed(void)
				       sizeof(fixed_ll_sym),
				       fixed_d_sym,
				       sizeof(fixed_d_sym),
				       NULL, 0, 0,
				       init_fixed_mode, NULL);
	if (!ret)
		pr_debug("IAA fixed compression mode initialized\n");

@@ -258,16 +258,14 @@ static void free_iaa_compression_mode(struct iaa_compression_mode *mode)
	kfree(mode->name);
	kfree(mode->ll_table);
	kfree(mode->d_table);
	kfree(mode->header_table);

	kfree(mode);
}

/*
 * IAA Compression modes are defined by an ll_table, a d_table, and an
 * optional header_table. These tables are typically generated and
 * captured using statistics collected from running actual
 * compress/decompress workloads.
 * IAA Compression modes are defined by an ll_table and a d_table.
 * These tables are typically generated and captured using statistics
 * collected from running actual compress/decompress workloads.
 *
 * A module or other kernel code can add and remove compression modes
 * with a given name using the exported @add_iaa_compression_mode()

@@ -315,9 +313,6 @@ EXPORT_SYMBOL_GPL(remove_iaa_compression_mode);
 * @ll_table_size: The ll table size in bytes
 * @d_table: The d table
 * @d_table_size: The d table size in bytes
 * @header_table: Optional header table
 * @header_table_size: Optional header table size in bytes
 * @gen_decomp_table_flags: Otional flags used to generate the decomp table
 * @init: Optional callback function to init the compression mode data
 * @free: Optional callback function to free the compression mode data
 *

@@ -330,9 +325,6 @@ int add_iaa_compression_mode(const char *name,
			     int ll_table_size,
			     const u32 *d_table,
			     int d_table_size,
			     const u8 *header_table,
			     int header_table_size,
			     u16 gen_decomp_table_flags,
			     iaa_dev_comp_init_fn_t init,
			     iaa_dev_comp_free_fn_t free)
{

@@ -370,16 +362,6 @@ int add_iaa_compression_mode(const char *name,
		mode->d_table_size = d_table_size;
	}

	if (header_table) {
		mode->header_table = kzalloc(header_table_size, GFP_KERNEL);
		if (!mode->header_table)
			goto free;
		memcpy(mode->header_table, header_table, header_table_size);
		mode->header_table_size = header_table_size;
	}

	mode->gen_decomp_table_flags = gen_decomp_table_flags;

	mode->init = init;
	mode->free = free;

@@ -420,10 +402,6 @@ static void free_device_compression_mode(struct iaa_device *iaa_device,
	if (device_mode->aecs_comp_table)
		dma_free_coherent(dev, size, device_mode->aecs_comp_table,
				  device_mode->aecs_comp_table_dma_addr);
	if (device_mode->aecs_decomp_table)
		dma_free_coherent(dev, size, device_mode->aecs_decomp_table,
				  device_mode->aecs_decomp_table_dma_addr);

	kfree(device_mode);
}

@@ -440,73 +418,6 @@ static int check_completion(struct device *dev,
			    bool compress,
			    bool only_once);

static int decompress_header(struct iaa_device_compression_mode *device_mode,
			     struct iaa_compression_mode *mode,
			     struct idxd_wq *wq)
{
	dma_addr_t src_addr, src2_addr;
	struct idxd_desc *idxd_desc;
	struct iax_hw_desc *desc;
	struct device *dev;
	int ret = 0;

	idxd_desc = idxd_alloc_desc(wq, IDXD_OP_BLOCK);
	if (IS_ERR(idxd_desc))
		return PTR_ERR(idxd_desc);

	desc = idxd_desc->iax_hw;

	dev = &wq->idxd->pdev->dev;

	src_addr = dma_map_single(dev, (void *)mode->header_table,
				  mode->header_table_size, DMA_TO_DEVICE);
	dev_dbg(dev, "%s: mode->name %s, src_addr %llx, dev %p, src %p, slen %d\n",
		__func__, mode->name, src_addr, dev,
		mode->header_table, mode->header_table_size);
	if (unlikely(dma_mapping_error(dev, src_addr))) {
		dev_dbg(dev, "dma_map_single err, exiting\n");
		ret = -ENOMEM;
		return ret;
	}

	desc->flags = IAX_AECS_GEN_FLAG;
	desc->opcode = IAX_OPCODE_DECOMPRESS;

	desc->src1_addr = (u64)src_addr;
	desc->src1_size = mode->header_table_size;

	src2_addr = device_mode->aecs_decomp_table_dma_addr;
	desc->src2_addr = (u64)src2_addr;
	desc->src2_size = 1088;
	dev_dbg(dev, "%s: mode->name %s, src2_addr %llx, dev %p, src2_size %d\n",
		__func__, mode->name, desc->src2_addr, dev, desc->src2_size);
	desc->max_dst_size = 0; // suppressed output

	desc->decompr_flags = mode->gen_decomp_table_flags;

	desc->priv = 0;

	desc->completion_addr = idxd_desc->compl_dma;

	ret = idxd_submit_desc(wq, idxd_desc);
	if (ret) {
		pr_err("%s: submit_desc failed ret=0x%x\n", __func__, ret);
		goto out;
	}

	ret = check_completion(dev, idxd_desc->iax_completion, false, false);
	if (ret)
		dev_dbg(dev, "%s: mode->name %s check_completion failed ret=%d\n",
			__func__, mode->name, ret);
	else
		dev_dbg(dev, "%s: mode->name %s succeeded\n", __func__,
			mode->name);
out:
	dma_unmap_single(dev, src_addr, 1088, DMA_TO_DEVICE);

	return ret;
}

static int init_device_compression_mode(struct iaa_device *iaa_device,
					struct iaa_compression_mode *mode,
					int idx, struct idxd_wq *wq)

@@ -529,24 +440,11 @@ static int init_device_compression_mode(struct iaa_device *iaa_device,
	if (!device_mode->aecs_comp_table)
		goto free;

	device_mode->aecs_decomp_table = dma_alloc_coherent(dev, size,
			&device_mode->aecs_decomp_table_dma_addr, GFP_KERNEL);
	if (!device_mode->aecs_decomp_table)
		goto free;

	/* Add Huffman table to aecs */
	memset(device_mode->aecs_comp_table, 0, sizeof(*device_mode->aecs_comp_table));
	memcpy(device_mode->aecs_comp_table->ll_sym, mode->ll_table, mode->ll_table_size);
	memcpy(device_mode->aecs_comp_table->d_sym, mode->d_table, mode->d_table_size);

	if (mode->header_table) {
		ret = decompress_header(device_mode, mode, wq);
		if (ret) {
			pr_debug("iaa header decompression failed: ret=%d\n", ret);
			goto free;
		}
	}

	if (mode->init) {
		ret = mode->init(device_mode);
		if (ret)

@@ -1324,7 +1222,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,

	*compression_crc = idxd_desc->iax_completion->crc;

	if (!ctx->async_mode)
	if (!ctx->async_mode || disable_async)
		idxd_free_desc(wq, idxd_desc);
out:
	return ret;

@@ -1570,7 +1468,7 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,

	*dlen = req->dlen;

	if (!ctx->async_mode)
	if (!ctx->async_mode || disable_async)
		idxd_free_desc(wq, idxd_desc);

	/* Update stats */

@@ -1596,6 +1494,7 @@ static int iaa_comp_acompress(struct acomp_req *req)
	u32 compression_crc;
	struct idxd_wq *wq;
	struct device *dev;
	u64 start_time_ns;
	int order = -1;

	compression_ctx = crypto_tfm_ctx(tfm);

@@ -1669,8 +1568,10 @@ static int iaa_comp_acompress(struct acomp_req *req)
		" req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
		req->dst, req->dlen, sg_dma_len(req->dst));

	start_time_ns = iaa_get_ts();
	ret = iaa_compress(tfm, req, wq, src_addr, req->slen, dst_addr,
			   &req->dlen, &compression_crc, disable_async);
	update_max_comp_delay_ns(start_time_ns);
	if (ret == -EINPROGRESS)
		return ret;

@@ -1717,6 +1618,7 @@ static int iaa_comp_adecompress_alloc_dest(struct acomp_req *req)
	struct iaa_wq *iaa_wq;
	struct device *dev;
	struct idxd_wq *wq;
	u64 start_time_ns;
	int order = -1;

	cpu = get_cpu();

@@ -1773,8 +1675,10 @@ alloc_dest:
	dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
		" req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
		req->dst, req->dlen, sg_dma_len(req->dst));
	start_time_ns = iaa_get_ts();
	ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
			     dst_addr, &req->dlen, true);
	update_max_decomp_delay_ns(start_time_ns);
	if (ret == -EOVERFLOW) {
		dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
		req->dlen *= 2;

@@ -1805,6 +1709,7 @@ static int iaa_comp_adecompress(struct acomp_req *req)
	int nr_sgs, cpu, ret = 0;
	struct iaa_wq *iaa_wq;
	struct device *dev;
	u64 start_time_ns;
	struct idxd_wq *wq;

	if (!iaa_crypto_enabled) {

@@ -1864,8 +1769,10 @@ static int iaa_comp_adecompress(struct acomp_req *req)
		" req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
		req->dst, req->dlen, sg_dma_len(req->dst));

	start_time_ns = iaa_get_ts();
	ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
			     dst_addr, &req->dlen, false);
	update_max_decomp_delay_ns(start_time_ns);
	if (ret == -EINPROGRESS)
		return ret;

@@ -1916,6 +1823,7 @@ static struct acomp_alg iaa_acomp_fixed_deflate = {
	.base = {
		.cra_name = "deflate",
		.cra_driver_name = "deflate-iaa",
		.cra_flags = CRYPTO_ALG_ASYNC,
		.cra_ctxsize = sizeof(struct iaa_compression_ctx),
		.cra_module = THIS_MODULE,
		.cra_priority = IAA_ALG_PRIORITY,

@@ -22,8 +22,6 @@ static u64 total_decomp_calls;
static u64 total_sw_decomp_calls;
static u64 max_comp_delay_ns;
static u64 max_decomp_delay_ns;
static u64 max_acomp_delay_ns;
static u64 max_adecomp_delay_ns;
static u64 total_comp_bytes_out;
static u64 total_decomp_bytes_in;
static u64 total_completion_einval_errors;

@@ -92,26 +90,6 @@ void update_max_decomp_delay_ns(u64 start_time_ns)
		max_decomp_delay_ns = time_diff;
}

void update_max_acomp_delay_ns(u64 start_time_ns)
{
	u64 time_diff;

	time_diff = ktime_get_ns() - start_time_ns;

	if (time_diff > max_acomp_delay_ns)
		max_acomp_delay_ns = time_diff;
}

void update_max_adecomp_delay_ns(u64 start_time_ns)
{
	u64 time_diff;

	time_diff = ktime_get_ns() - start_time_ns;

	if (time_diff > max_adecomp_delay_ns)
		max_adecomp_delay_ns = time_diff;
}

void update_wq_comp_calls(struct idxd_wq *idxd_wq)
{
	struct iaa_wq *wq = idxd_wq_get_private(idxd_wq);

@@ -151,8 +129,6 @@ static void reset_iaa_crypto_stats(void)
	total_sw_decomp_calls = 0;
	max_comp_delay_ns = 0;
	max_decomp_delay_ns = 0;
	max_acomp_delay_ns = 0;
	max_adecomp_delay_ns = 0;
	total_comp_bytes_out = 0;
	total_decomp_bytes_in = 0;
	total_completion_einval_errors = 0;

@@ -275,17 +251,11 @@ int __init iaa_crypto_debugfs_init(void)
		return -ENODEV;

	iaa_crypto_debugfs_root = debugfs_create_dir("iaa_crypto", NULL);
	if (!iaa_crypto_debugfs_root)
		return -ENOMEM;

	debugfs_create_u64("max_comp_delay_ns", 0644,
			   iaa_crypto_debugfs_root, &max_comp_delay_ns);
	debugfs_create_u64("max_decomp_delay_ns", 0644,
			   iaa_crypto_debugfs_root, &max_decomp_delay_ns);
	debugfs_create_u64("max_acomp_delay_ns", 0644,
			   iaa_crypto_debugfs_root, &max_comp_delay_ns);
	debugfs_create_u64("max_adecomp_delay_ns", 0644,
			   iaa_crypto_debugfs_root, &max_decomp_delay_ns);
	debugfs_create_u64("total_comp_calls", 0644,
			   iaa_crypto_debugfs_root, &total_comp_calls);
	debugfs_create_u64("total_decomp_calls", 0644,

@@ -15,8 +15,6 @@ void update_total_sw_decomp_calls(void);
void update_total_decomp_bytes_in(int n);
void update_max_comp_delay_ns(u64 start_time_ns);
void update_max_decomp_delay_ns(u64 start_time_ns);
void update_max_acomp_delay_ns(u64 start_time_ns);
void update_max_adecomp_delay_ns(u64 start_time_ns);
void update_completion_einval_errs(void);
void update_completion_timeout_errs(void);
void update_completion_comp_buf_overflow_errs(void);

@@ -26,6 +24,8 @@ void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n);
void update_wq_decomp_calls(struct idxd_wq *idxd_wq);
void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n);

static inline u64 iaa_get_ts(void) { return ktime_get_ns(); }

#else
static inline int iaa_crypto_debugfs_init(void) { return 0; }
static inline void iaa_crypto_debugfs_cleanup(void) {}

@@ -37,8 +37,6 @@ static inline void update_total_sw_decomp_calls(void) {}
static inline void update_total_decomp_bytes_in(int n) {}
static inline void update_max_comp_delay_ns(u64 start_time_ns) {}
static inline void update_max_decomp_delay_ns(u64 start_time_ns) {}
static inline void update_max_acomp_delay_ns(u64 start_time_ns) {}
static inline void update_max_adecomp_delay_ns(u64 start_time_ns) {}
static inline void update_completion_einval_errs(void) {}
static inline void update_completion_timeout_errs(void) {}
static inline void update_completion_comp_buf_overflow_errs(void) {}

@@ -48,6 +46,8 @@ static inline void update_wq_comp_bytes(struct idxd_wq *idxd_wq, int n) {}
static inline void update_wq_decomp_calls(struct idxd_wq *idxd_wq) {}
static inline void update_wq_decomp_bytes(struct idxd_wq *idxd_wq, int n) {}

static inline u64 iaa_get_ts(void) { return 0; }

#endif // CONFIG_CRYPTO_DEV_IAA_CRYPTO_STATS

#endif

@@ -106,3 +106,17 @@ config CRYPTO_DEV_QAT_C62XVF

	  To compile this as a module, choose M here: the module
	  will be called qat_c62xvf.

config CRYPTO_DEV_QAT_ERROR_INJECTION
	bool "Support for Intel(R) QAT Devices Heartbeat Error Injection"
	depends on CRYPTO_DEV_QAT
	depends on DEBUG_FS
	help
	  Enables a mechanism that allows to inject a heartbeat error on
	  Intel(R) QuickAssist devices for testing purposes.

	  This is intended for developer use only.
	  If unsure, say N.

	  This functionality is available via debugfs entry of the Intel(R)
	  QuickAssist device

@@ -361,53 +361,6 @@ static u32 get_ena_thd_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
	}
}

static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
{
	enum adf_cfg_service_type rps[RP_GROUP_COUNT] = { };
	const struct adf_fw_config *fw_config;
	u16 ring_to_svc_map;
	int i, j;

	fw_config = get_fw_config(accel_dev);
	if (!fw_config)
		return 0;

	for (i = 0; i < RP_GROUP_COUNT; i++) {
		switch (fw_config[i].ae_mask) {
		case ADF_AE_GROUP_0:
			j = RP_GROUP_0;
			break;
		case ADF_AE_GROUP_1:
			j = RP_GROUP_1;
			break;
		default:
			return 0;
		}

		switch (fw_config[i].obj) {
		case ADF_FW_SYM_OBJ:
			rps[j] = SYM;
			break;
		case ADF_FW_ASYM_OBJ:
			rps[j] = ASYM;
			break;
		case ADF_FW_DC_OBJ:
			rps[j] = COMP;
			break;
		default:
			rps[j] = 0;
			break;
		}
	}

	ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
			  rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
			  rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
			  rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT;

	return ring_to_svc_map;
}

static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
				const char * const fw_objs[], int num_objs)
{

@@ -433,6 +386,20 @@ static const char *uof_get_name_420xx(struct adf_accel_dev *accel_dev, u32 obj_n
	return uof_get_name(accel_dev, obj_num, adf_420xx_fw_objs, num_fw_objs);
}

static int uof_get_obj_type(struct adf_accel_dev *accel_dev, u32 obj_num)
{
	const struct adf_fw_config *fw_config;

	if (obj_num >= uof_get_num_objs(accel_dev))
		return -EINVAL;

	fw_config = get_fw_config(accel_dev);
	if (!fw_config)
		return -EINVAL;

	return fw_config[obj_num].obj;
}

static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
{
	const struct adf_fw_config *fw_config;

@@ -496,12 +463,13 @@ void adf_init_hw_data_420xx(struct adf_hw_device_data *hw_data, u32 dev_id)
	hw_data->fw_mmp_name = ADF_420XX_MMP;
	hw_data->uof_get_name = uof_get_name_420xx;
	hw_data->uof_get_num_objs = uof_get_num_objs;
	hw_data->uof_get_obj_type = uof_get_obj_type;
	hw_data->uof_get_ae_mask = uof_get_ae_mask;
	hw_data->get_rp_group = get_rp_group;
	hw_data->get_ena_thd_mask = get_ena_thd_mask;
	hw_data->set_msix_rttable = adf_gen4_set_msix_default_rttable;
	hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
	hw_data->get_ring_to_svc_map = get_ring_to_svc_map;
	hw_data->get_ring_to_svc_map = adf_gen4_get_ring_to_svc_map;
	hw_data->disable_iov = adf_disable_sriov;
	hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
	hw_data->enable_pm = adf_gen4_enable_pm;

@@ -320,53 +320,6 @@ static u32 get_ena_thd_mask_401xx(struct adf_accel_dev *accel_dev, u32 obj_num)
	}
}

static u16 get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
{
	enum adf_cfg_service_type rps[RP_GROUP_COUNT];
	const struct adf_fw_config *fw_config;
	u16 ring_to_svc_map;
	int i, j;

	fw_config = get_fw_config(accel_dev);
	if (!fw_config)
		return 0;

	for (i = 0; i < RP_GROUP_COUNT; i++) {
		switch (fw_config[i].ae_mask) {
		case ADF_AE_GROUP_0:
			j = RP_GROUP_0;
			break;
		case ADF_AE_GROUP_1:
			j = RP_GROUP_1;
			break;
		default:
			return 0;
		}

		switch (fw_config[i].obj) {
		case ADF_FW_SYM_OBJ:
			rps[j] = SYM;
			break;
		case ADF_FW_ASYM_OBJ:
			rps[j] = ASYM;
			break;
		case ADF_FW_DC_OBJ:
			rps[j] = COMP;
			break;
		default:
			rps[j] = 0;
			break;
		}
	}

	ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
			  rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
			  rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
			  rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT;

	return ring_to_svc_map;
}

static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
				const char * const fw_objs[], int num_objs)
{

@@ -399,6 +352,20 @@ static const char *uof_get_name_402xx(struct adf_accel_dev *accel_dev, u32 obj_n
	return uof_get_name(accel_dev, obj_num, adf_402xx_fw_objs, num_fw_objs);
}

static int uof_get_obj_type(struct adf_accel_dev *accel_dev, u32 obj_num)
{
	const struct adf_fw_config *fw_config;

	if (obj_num >= uof_get_num_objs(accel_dev))
		return -EINVAL;

	fw_config = get_fw_config(accel_dev);
	if (!fw_config)
		return -EINVAL;

	return fw_config[obj_num].obj;
}

static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
{
	const struct adf_fw_config *fw_config;

@@ -479,11 +446,12 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
		break;
	}
	hw_data->uof_get_num_objs = uof_get_num_objs;
	hw_data->uof_get_obj_type = uof_get_obj_type;
	hw_data->uof_get_ae_mask = uof_get_ae_mask;
	hw_data->get_rp_group = get_rp_group;
	hw_data->set_msix_rttable = adf_gen4_set_msix_default_rttable;
	hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
	hw_data->get_ring_to_svc_map = get_ring_to_svc_map;
	hw_data->get_ring_to_svc_map = adf_gen4_get_ring_to_svc_map;
	hw_data->disable_iov = adf_disable_sriov;
	hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
	hw_data->enable_pm = adf_gen4_enable_pm;

@@ -53,3 +53,5 @@ intel_qat-$(CONFIG_PCI_IOV) += adf_sriov.o adf_vf_isr.o adf_pfvf_utils.o \
				adf_pfvf_pf_msg.o adf_pfvf_pf_proto.o \
				adf_pfvf_vf_msg.o adf_pfvf_vf_proto.o \
				adf_gen2_pfvf.o adf_gen4_pfvf.o

intel_qat-$(CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION) += adf_heartbeat_inject.o

@@ -248,6 +248,7 @@ struct adf_hw_device_data {
	void (*set_msix_rttable)(struct adf_accel_dev *accel_dev);
	const char *(*uof_get_name)(struct adf_accel_dev *accel_dev, u32 obj_num);
	u32 (*uof_get_num_objs)(struct adf_accel_dev *accel_dev);
	int (*uof_get_obj_type)(struct adf_accel_dev *accel_dev, u32 obj_num);
	u32 (*uof_get_ae_mask)(struct adf_accel_dev *accel_dev, u32 obj_num);
	int (*get_rp_group)(struct adf_accel_dev *accel_dev, u32 ae_mask);
	u32 (*get_ena_thd_mask)(struct adf_accel_dev *accel_dev, u32 obj_num);

@@ -332,6 +333,7 @@ struct adf_accel_vf_info {
	struct ratelimit_state vf2pf_ratelimit;
	u32 vf_nr;
	bool init;
	bool restarting;
	u8 vf_compat_ver;
};

@@ -401,6 +403,7 @@ struct adf_accel_dev {
	struct adf_error_counters ras_errors;
	struct mutex state_lock; /* protect state of the device */
	bool is_vf;
	bool autoreset_on_error;
	u32 accel_id;
};
#endif

@@ -7,8 +7,15 @@
#include <linux/delay.h>
#include "adf_accel_devices.h"
#include "adf_common_drv.h"
#include "adf_pfvf_pf_msg.h"

struct adf_fatal_error_data {
	struct adf_accel_dev *accel_dev;
	struct work_struct work;
};

static struct workqueue_struct *device_reset_wq;
static struct workqueue_struct *device_sriov_wq;

static pci_ers_result_t adf_error_detected(struct pci_dev *pdev,
					   pci_channel_state_t state)

@@ -26,6 +33,19 @@ static pci_ers_result_t adf_error_detected(struct pci_dev *pdev,
		return PCI_ERS_RESULT_DISCONNECT;
	}

	set_bit(ADF_STATUS_RESTARTING, &accel_dev->status);
	if (accel_dev->hw_device->exit_arb) {
		dev_dbg(&pdev->dev, "Disabling arbitration\n");
		accel_dev->hw_device->exit_arb(accel_dev);
	}
	adf_error_notifier(accel_dev);
	adf_pf2vf_notify_fatal_error(accel_dev);
	adf_dev_restarting_notify(accel_dev);
	adf_pf2vf_notify_restarting(accel_dev);
	adf_pf2vf_wait_for_restarting_complete(accel_dev);
	pci_clear_master(pdev);
	adf_dev_down(accel_dev, false);

	return PCI_ERS_RESULT_NEED_RESET;
}

@@ -37,6 +57,13 @@ struct adf_reset_dev_data {
	struct work_struct reset_work;
};

/* sriov dev data */
struct adf_sriov_dev_data {
	struct adf_accel_dev *accel_dev;
	struct completion compl;
	struct work_struct sriov_work;
};

void adf_reset_sbr(struct adf_accel_dev *accel_dev)
{
	struct pci_dev *pdev = accel_to_pci_dev(accel_dev);

@@ -82,29 +109,57 @@ void adf_dev_restore(struct adf_accel_dev *accel_dev)
	}
}

static void adf_device_sriov_worker(struct work_struct *work)
{
	struct adf_sriov_dev_data *sriov_data =
		container_of(work, struct adf_sriov_dev_data, sriov_work);

	adf_reenable_sriov(sriov_data->accel_dev);
	complete(&sriov_data->compl);
}

static void adf_device_reset_worker(struct work_struct *work)
{
	struct adf_reset_dev_data *reset_data =
		container_of(work, struct adf_reset_dev_data, reset_work);
	struct adf_accel_dev *accel_dev = reset_data->accel_dev;
	unsigned long wait_jiffies = msecs_to_jiffies(10000);
	struct adf_sriov_dev_data sriov_data;

	adf_dev_restarting_notify(accel_dev);
	if (adf_dev_restart(accel_dev)) {
		/* The device hanged and we can't restart it so stop here */
		dev_err(&GET_DEV(accel_dev), "Restart device failed\n");
		if (reset_data->mode == ADF_DEV_RESET_ASYNC)
		if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
		    completion_done(&reset_data->compl))
			kfree(reset_data);
		WARN(1, "QAT: device restart failed. Device is unusable\n");
		return;
	}

	sriov_data.accel_dev = accel_dev;
	init_completion(&sriov_data.compl);
	INIT_WORK(&sriov_data.sriov_work, adf_device_sriov_worker);
	queue_work(device_sriov_wq, &sriov_data.sriov_work);
	if (wait_for_completion_timeout(&sriov_data.compl, wait_jiffies))
		adf_pf2vf_notify_restarted(accel_dev);

	adf_dev_restarted_notify(accel_dev);
	clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status);

	/* The dev is back alive. Notify the caller if in sync mode */
	if (reset_data->mode == ADF_DEV_RESET_SYNC)
		complete(&reset_data->compl);
	else
	/*
	 * The dev is back alive. Notify the caller if in sync mode
	 *
	 * If device restart will take a more time than expected,
	 * the schedule_reset() function can timeout and exit. This can be
	 * detected by calling the completion_done() function. In this case
	 * the reset_data structure needs to be freed here.
	 */
	if (reset_data->mode == ADF_DEV_RESET_ASYNC ||
	    completion_done(&reset_data->compl))
		kfree(reset_data);
	else
		complete(&reset_data->compl);
}

static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,

@@ -137,8 +192,9 @@ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
			dev_err(&GET_DEV(accel_dev),
				"Reset device timeout expired\n");
			ret = -EFAULT;
		} else {
			kfree(reset_data);
		}
		kfree(reset_data);
		return ret;
	}
	return 0;

@@ -147,14 +203,25 @@ static int adf_dev_aer_schedule_reset(struct adf_accel_dev *accel_dev,
static pci_ers_result_t adf_slot_reset(struct pci_dev *pdev)
{
	struct adf_accel_dev *accel_dev = adf_devmgr_pci_to_accel_dev(pdev);
	int res = 0;

	if (!accel_dev) {
		pr_err("QAT: Can't find acceleration device\n");
		return PCI_ERS_RESULT_DISCONNECT;
	}
	if (adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_SYNC))

	if (!pdev->is_busmaster)
		pci_set_master(pdev);
	pci_restore_state(pdev);
	pci_save_state(pdev);
	res = adf_dev_up(accel_dev, false);
	if (res && res != -EALREADY)
		return PCI_ERS_RESULT_DISCONNECT;

	adf_reenable_sriov(accel_dev);
	adf_pf2vf_notify_restarted(accel_dev);
	adf_dev_restarted_notify(accel_dev);
	clear_bit(ADF_STATUS_RESTARTING, &accel_dev->status);
	return PCI_ERS_RESULT_RECOVERED;
}

@@ -171,11 +238,62 @@ const struct pci_error_handlers adf_err_handler = {
};
EXPORT_SYMBOL_GPL(adf_err_handler);

int adf_dev_autoreset(struct adf_accel_dev *accel_dev)
{
	if (accel_dev->autoreset_on_error)
		return adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_ASYNC);

	return 0;
}

static void adf_notify_fatal_error_worker(struct work_struct *work)
{
	struct adf_fatal_error_data *wq_data =
		container_of(work, struct adf_fatal_error_data, work);
	struct adf_accel_dev *accel_dev = wq_data->accel_dev;
	struct adf_hw_device_data *hw_device = accel_dev->hw_device;

	adf_error_notifier(accel_dev);

	if (!accel_dev->is_vf) {
		/* Disable arbitration to stop processing of new requests */
		if (accel_dev->autoreset_on_error && hw_device->exit_arb)
			hw_device->exit_arb(accel_dev);
		if (accel_dev->pf.vf_info)
			adf_pf2vf_notify_fatal_error(accel_dev);
		adf_dev_autoreset(accel_dev);
	}

	kfree(wq_data);
}

int adf_notify_fatal_error(struct adf_accel_dev *accel_dev)
{
	struct adf_fatal_error_data *wq_data;

	wq_data = kzalloc(sizeof(*wq_data), GFP_ATOMIC);
	if (!wq_data)
		return -ENOMEM;

	wq_data->accel_dev = accel_dev;
	INIT_WORK(&wq_data->work, adf_notify_fatal_error_worker);
	adf_misc_wq_queue_work(&wq_data->work);

	return 0;
}

int adf_init_aer(void)
{
	device_reset_wq = alloc_workqueue("qat_device_reset_wq",
					  WQ_MEM_RECLAIM, 0);
	return !device_reset_wq ? -EFAULT : 0;
	if (!device_reset_wq)
		return -EFAULT;

	device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0);
	if (!device_sriov_wq)
		return -EFAULT;

	return 0;
}

void adf_exit_aer(void)

@@ -183,4 +301,8 @@ void adf_exit_aer(void)
	if (device_reset_wq)
		destroy_workqueue(device_reset_wq);
	device_reset_wq = NULL;

	if (device_sriov_wq)
		destroy_workqueue(device_sriov_wq);
	device_sriov_wq = NULL;
}

@@ -49,5 +49,6 @@
	ADF_ETRMGR_BANK "%d" ADF_ETRMGR_CORE_AFFINITY
#define ADF_ACCEL_STR "Accelerator%d"
#define ADF_HEARTBEAT_TIMER "HeartbeatTimer"
#define ADF_SRIOV_ENABLED "SriovEnabled"

#endif

@@ -83,6 +83,9 @@ static int measure_clock(struct adf_accel_dev *accel_dev, u32 *frequency)
	}

	delta_us = timespec_to_us(&ts3) - timespec_to_us(&ts1);
	if (!delta_us)
		return -EINVAL;

	temp = (timestamp2 - timestamp1) * ME_CLK_DIVIDER * 10;
	temp = DIV_ROUND_CLOSEST_ULL(temp, delta_us);
	/*

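The new check guards the DIV_ROUND_CLOSEST_ULL() call against a zero divisor when both timestamps land in the same microsecond. A minimal, self-contained sketch of the same pattern (the helper name is illustrative):

#include <linux/errno.h>
#include <linux/math.h>

/*
 * Divide with round-to-nearest, refusing a zero divisor instead of
 * tripping a divide error. Mirrors the guard added above.
 */
static int safe_div_closest(u64 numerator, u64 divisor, u64 *result)
{
	if (!divisor)
		return -EINVAL;

	*result = DIV_ROUND_CLOSEST_ULL(numerator, divisor);
	return 0;
}
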
@@ -16,7 +16,6 @@

#define CNV_ERR_INFO_MASK		GENMASK(11, 0)
#define CNV_ERR_TYPE_MASK		GENMASK(15, 12)
#define CNV_SLICE_ERR_MASK		GENMASK(7, 0)
#define CNV_SLICE_ERR_SIGN_BIT_INDEX	7
#define CNV_DELTA_ERR_SIGN_BIT_INDEX	11

@@ -40,6 +40,7 @@ enum adf_event {
	ADF_EVENT_SHUTDOWN,
	ADF_EVENT_RESTARTING,
	ADF_EVENT_RESTARTED,
	ADF_EVENT_FATAL_ERROR,
};

struct service_hndl {

@@ -60,6 +61,8 @@ int adf_dev_restart(struct adf_accel_dev *accel_dev);

void adf_devmgr_update_class_index(struct adf_hw_device_data *hw_data);
void adf_clean_vf_map(bool);
int adf_notify_fatal_error(struct adf_accel_dev *accel_dev);
void adf_error_notifier(struct adf_accel_dev *accel_dev);
int adf_devmgr_add_dev(struct adf_accel_dev *accel_dev,
		       struct adf_accel_dev *pf);
void adf_devmgr_rm_dev(struct adf_accel_dev *accel_dev,

@@ -84,12 +87,14 @@ int adf_ae_stop(struct adf_accel_dev *accel_dev);
extern const struct pci_error_handlers adf_err_handler;
void adf_reset_sbr(struct adf_accel_dev *accel_dev);
void adf_reset_flr(struct adf_accel_dev *accel_dev);
int adf_dev_autoreset(struct adf_accel_dev *accel_dev);
void adf_dev_restore(struct adf_accel_dev *accel_dev);
int adf_init_aer(void);
void adf_exit_aer(void);
int adf_init_arb(struct adf_accel_dev *accel_dev);
void adf_exit_arb(struct adf_accel_dev *accel_dev);
void adf_update_ring_arb(struct adf_etr_ring_data *ring);
int adf_disable_arb_thd(struct adf_accel_dev *accel_dev, u32 ae, u32 thr);

int adf_dev_get(struct adf_accel_dev *accel_dev);
void adf_dev_put(struct adf_accel_dev *accel_dev);

@@ -188,6 +193,7 @@ bool adf_misc_wq_queue_delayed_work(struct delayed_work *work,
#if defined(CONFIG_PCI_IOV)
int adf_sriov_configure(struct pci_dev *pdev, int numvfs);
void adf_disable_sriov(struct adf_accel_dev *accel_dev);
void adf_reenable_sriov(struct adf_accel_dev *accel_dev);
void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev, u32 vf_mask);
void adf_disable_all_vf2pf_interrupts(struct adf_accel_dev *accel_dev);
bool adf_recv_and_handle_pf2vf_msg(struct adf_accel_dev *accel_dev);

@@ -208,6 +214,10 @@ static inline void adf_disable_sriov(struct adf_accel_dev *accel_dev)
{
}

static inline void adf_reenable_sriov(struct adf_accel_dev *accel_dev)
{
}

static inline int adf_init_pf_wq(void)
{
	return 0;

@@ -60,10 +60,10 @@ static int adf_get_vf_real_id(u32 fake)

/**
 * adf_clean_vf_map() - Cleans VF id mapings
 *
 * Function cleans internal ids for virtual functions.
 * @vf: flag indicating whether mappings is cleaned
 *	for vfs only or for vfs and pfs
 *
 * Function cleans internal ids for virtual functions.
 */
void adf_clean_vf_map(bool vf)
{

@@ -4,6 +4,7 @@
#include "adf_accel_devices.h"
#include "adf_cfg_services.h"
#include "adf_common_drv.h"
#include "adf_fw_config.h"
#include "adf_gen4_hw_data.h"
#include "adf_gen4_pm.h"

@@ -398,6 +399,9 @@ int adf_gen4_init_thd2arb_map(struct adf_accel_dev *accel_dev)
			  ADF_GEN4_ADMIN_ACCELENGINES;

	if (srv_id == SVC_DCC) {
		if (ae_cnt > ICP_QAT_HW_AE_DELIMITER)
			return -EINVAL;

		memcpy(thd2arb_map, thrd_to_arb_map_dcc,
		       array_size(sizeof(*thd2arb_map), ae_cnt));
		return 0;

@@ -430,3 +434,58 @@ int adf_gen4_init_thd2arb_map(struct adf_accel_dev *accel_dev)
	return 0;
}
EXPORT_SYMBOL_GPL(adf_gen4_init_thd2arb_map);

u16 adf_gen4_get_ring_to_svc_map(struct adf_accel_dev *accel_dev)
{
	struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
	enum adf_cfg_service_type rps[RP_GROUP_COUNT] = { };
	unsigned int ae_mask, start_id, worker_obj_cnt, i;
	u16 ring_to_svc_map;
	int rp_group;

	if (!hw_data->get_rp_group || !hw_data->uof_get_ae_mask ||
	    !hw_data->uof_get_obj_type || !hw_data->uof_get_num_objs)
		return 0;

	/* If dcc, all rings handle compression requests */
	if (adf_get_service_enabled(accel_dev) == SVC_DCC) {
		for (i = 0; i < RP_GROUP_COUNT; i++)
			rps[i] = COMP;
		goto set_mask;
	}

	worker_obj_cnt = hw_data->uof_get_num_objs(accel_dev) -
			 ADF_GEN4_ADMIN_ACCELENGINES;
	start_id = worker_obj_cnt - RP_GROUP_COUNT;

	for (i = start_id; i < worker_obj_cnt; i++) {
		ae_mask = hw_data->uof_get_ae_mask(accel_dev, i);
		rp_group = hw_data->get_rp_group(accel_dev, ae_mask);
		if (rp_group >= RP_GROUP_COUNT || rp_group < RP_GROUP_0)
			return 0;

		switch (hw_data->uof_get_obj_type(accel_dev, i)) {
		case ADF_FW_SYM_OBJ:
			rps[rp_group] = SYM;
			break;
		case ADF_FW_ASYM_OBJ:
			rps[rp_group] = ASYM;
			break;
		case ADF_FW_DC_OBJ:
			rps[rp_group] = COMP;
			break;
		default:
			rps[rp_group] = 0;
			break;
		}
	}

set_mask:
	ring_to_svc_map = rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
			  rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
			  rps[RP_GROUP_0] << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
			  rps[RP_GROUP_1] << ADF_CFG_SERV_RING_PAIR_3_SHIFT;

	return ring_to_svc_map;
}
EXPORT_SYMBOL_GPL(adf_gen4_get_ring_to_svc_map);

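The packing alternates the two ring-pair groups across the four ring pairs: ring pairs 0 and 2 take group 0's service, ring pairs 1 and 3 take group 1's. A hedged worked example follows; only the shift macro names appear in the code above, and the per-field width is an assumption for illustration.

/*
 * Hypothetical worked example, assuming the ring-pair fields are 3 bits
 * wide so the shifts are 0, 3, 6 and 9. With group 0 = SYM and
 * group 1 = ASYM the map encodes SYM/ASYM/SYM/ASYM.
 */
static u16 example_ring_to_svc_map(void)
{
	enum adf_cfg_service_type g0 = SYM, g1 = ASYM;

	return g0 << ADF_CFG_SERV_RING_PAIR_0_SHIFT |
	       g1 << ADF_CFG_SERV_RING_PAIR_1_SHIFT |
	       g0 << ADF_CFG_SERV_RING_PAIR_2_SHIFT |
	       g1 << ADF_CFG_SERV_RING_PAIR_3_SHIFT;
}
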
@@ -235,5 +235,6 @@ int adf_gen4_ring_pair_reset(struct adf_accel_dev *accel_dev, u32 bank_number);
void adf_gen4_set_msix_default_rttable(struct adf_accel_dev *accel_dev);
void adf_gen4_set_ssm_wdtimer(struct adf_accel_dev *accel_dev);
int adf_gen4_init_thd2arb_map(struct adf_accel_dev *accel_dev);
u16 adf_gen4_get_ring_to_svc_map(struct adf_accel_dev *accel_dev);

#endif

@@ -1007,8 +1007,7 @@ static bool adf_handle_spppar_err(struct adf_accel_dev *accel_dev,
static bool adf_handle_ssmcpppar_err(struct adf_accel_dev *accel_dev,
				     void __iomem *csr, u32 iastatssm)
{
	u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SSMCPPERR);
	u32 bits_num = BITS_PER_REG(reg);
	u32 reg, bits_num = BITS_PER_REG(reg);
	bool reset_required = false;
	unsigned long errs_bits;
	u32 bit_iterator;

@@ -1106,8 +1105,7 @@ static bool adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
static bool adf_handle_ser_err_ssmsh(struct adf_accel_dev *accel_dev,
				     void __iomem *csr, u32 iastatssm)
{
	u32 reg = ADF_CSR_RD(csr, ADF_GEN4_SER_ERR_SSMSH);
	u32 bits_num = BITS_PER_REG(reg);
	u32 reg, bits_num = BITS_PER_REG(reg);
	bool reset_required = false;
	unsigned long errs_bits;
	u32 bit_iterator;

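Initialising bits_num from reg before reg is assigned looks wrong at first glance, but it is well-defined as long as BITS_PER_REG only hands its argument to sizeof, which never evaluates its operand. The assumed shape of the macro, for illustration only (not quoted from the driver):

/*
 * Assumed definition: sizeof() does not evaluate its operand, so
 * `u32 reg, bits_num = BITS_PER_REG(reg);` reads no indeterminate
 * value, and the register itself can be read later, after the checks.
 */
#define BITS_PER_REG(reg) (sizeof(reg) * BITS_PER_BYTE)
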
@@ -23,12 +23,6 @@

#define ADF_HB_EMPTY_SIG 0xA5A5A5A5

/* Heartbeat counter pair */
struct hb_cnt_pair {
	__u16 resp_heartbeat_cnt;
	__u16 req_heartbeat_cnt;
};

static int adf_hb_check_polling_freq(struct adf_accel_dev *accel_dev)
{
	u64 curr_time = adf_clock_get_current_time();

@@ -211,6 +205,19 @@ static int adf_hb_get_status(struct adf_accel_dev *accel_dev)
	return ret;
}

static void adf_heartbeat_reset(struct adf_accel_dev *accel_dev)
{
	u64 curr_time = adf_clock_get_current_time();
	u64 time_since_reset = curr_time - accel_dev->heartbeat->last_hb_reset_time;

	if (time_since_reset < ADF_CFG_HB_RESET_MS)
		return;

	accel_dev->heartbeat->last_hb_reset_time = curr_time;
	if (adf_notify_fatal_error(accel_dev))
		dev_err(&GET_DEV(accel_dev), "Failed to notify fatal error\n");
}

void adf_heartbeat_status(struct adf_accel_dev *accel_dev,
			  enum adf_device_heartbeat_status *hb_status)
{

@@ -235,6 +242,7 @@ void adf_heartbeat_status(struct adf_accel_dev *accel_dev,
			"Heartbeat ERROR: QAT is not responding.\n");
		*hb_status = HB_DEV_UNRESPONSIVE;
		hb->hb_failed_counter++;
		adf_heartbeat_reset(accel_dev);
		return;
	}

@@ -13,17 +13,26 @@ struct dentry;
#define ADF_CFG_HB_TIMER_DEFAULT_MS 500
#define ADF_CFG_HB_COUNT_THRESHOLD 3

#define ADF_CFG_HB_RESET_MS 5000

enum adf_device_heartbeat_status {
	HB_DEV_UNRESPONSIVE = 0,
	HB_DEV_ALIVE,
	HB_DEV_UNSUPPORTED,
};

/* Heartbeat counter pair */
struct hb_cnt_pair {
	__u16 resp_heartbeat_cnt;
	__u16 req_heartbeat_cnt;
};

struct adf_heartbeat {
	unsigned int hb_sent_counter;
	unsigned int hb_failed_counter;
	unsigned int hb_timer;
	u64 last_hb_check_time;
	u64 last_hb_reset_time;
	bool ctrs_cnt_checked;
	struct hb_dma_addr {
		dma_addr_t phy_addr;

@@ -35,6 +44,9 @@ struct adf_heartbeat {
		struct dentry *cfg;
		struct dentry *sent;
		struct dentry *failed;
#ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION
		struct dentry *inject_error;
#endif
	} dbgfs;
};

@@ -51,6 +63,15 @@ void adf_heartbeat_status(struct adf_accel_dev *accel_dev,
			  enum adf_device_heartbeat_status *hb_status);
void adf_heartbeat_check_ctrs(struct adf_accel_dev *accel_dev);

#ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION
int adf_heartbeat_inject_error(struct adf_accel_dev *accel_dev);
#else
static inline int adf_heartbeat_inject_error(struct adf_accel_dev *accel_dev)
{
	return -EPERM;
}
#endif

#else
static inline int adf_heartbeat_init(struct adf_accel_dev *accel_dev)
{

@@ -155,6 +155,44 @@ static const struct file_operations adf_hb_cfg_fops = {
 	.write = adf_hb_cfg_write,
 };

+static ssize_t adf_hb_error_inject_write(struct file *file,
+					 const char __user *user_buf,
+					 size_t count, loff_t *ppos)
+{
+	struct adf_accel_dev *accel_dev = file->private_data;
+	char buf[3];
+	int ret;
+
+	/* last byte left as string termination */
+	if (*ppos != 0 || count != 2)
+		return -EINVAL;
+
+	if (copy_from_user(buf, user_buf, count))
+		return -EFAULT;
+	buf[count] = '\0';
+
+	if (buf[0] != '1')
+		return -EINVAL;
+
+	ret = adf_heartbeat_inject_error(accel_dev);
+	if (ret) {
+		dev_err(&GET_DEV(accel_dev),
+			"Heartbeat error injection failed with status %d\n",
+			ret);
+		return ret;
+	}
+
+	dev_info(&GET_DEV(accel_dev), "Heartbeat error injection enabled\n");
+
+	return count;
+}
+
+static const struct file_operations adf_hb_error_inject_fops = {
+	.owner = THIS_MODULE,
+	.open = simple_open,
+	.write = adf_hb_error_inject_write,
+};
+
 void adf_heartbeat_dbgfs_add(struct adf_accel_dev *accel_dev)
 {
 	struct adf_heartbeat *hb = accel_dev->heartbeat;
@@ -171,6 +209,17 @@ void adf_heartbeat_dbgfs_add(struct adf_accel_dev *accel_dev)
					     &hb->hb_failed_counter, &adf_hb_stats_fops);
 	hb->dbgfs.cfg = debugfs_create_file("config", 0600, hb->dbgfs.base_dir,
					     accel_dev, &adf_hb_cfg_fops);
+
+	if (IS_ENABLED(CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION)) {
+		struct dentry *inject_error __maybe_unused;
+
+		inject_error = debugfs_create_file("inject_error", 0200,
+						   hb->dbgfs.base_dir, accel_dev,
+						   &adf_hb_error_inject_fops);
+#ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION
+		hb->dbgfs.inject_error = inject_error;
+#endif
+	}
 }
 EXPORT_SYMBOL_GPL(adf_heartbeat_dbgfs_add);

@@ -189,6 +238,10 @@ void adf_heartbeat_dbgfs_rm(struct adf_accel_dev *accel_dev)
 	hb->dbgfs.failed = NULL;
 	debugfs_remove(hb->dbgfs.cfg);
 	hb->dbgfs.cfg = NULL;
+#ifdef CONFIG_CRYPTO_DEV_QAT_ERROR_INJECTION
+	debugfs_remove(hb->dbgfs.inject_error);
+	hb->dbgfs.inject_error = NULL;
+#endif
 	debugfs_remove(hb->dbgfs.base_dir);
 	hb->dbgfs.base_dir = NULL;
 }

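The inject_error write handler above deliberately accepts only a two-byte write whose first byte is '1' - exactly what `echo 1` produces ("1\n") - and rejects offset or partial writes through *ppos, NUL-terminating the copy in a 3-byte buffer. A hedged userspace sketch of driving it; the device portion of the path is a placeholder, the real location being /sys/kernel/debug/qat_<device>_<BDF>/heartbeat/inject_error:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* placeholder path: substitute your device name and BDF */
	const char *path =
		"/sys/kernel/debug/qat_4xxx_0000:6b:00.0/heartbeat/inject_error";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "1\n", 2) != 2)	/* the handler insists on count == 2 */
		perror("write");
	close(fd);
	return 0;
}
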
@@ -0,0 +1,76 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright(c) 2023 Intel Corporation */
+#include <linux/random.h>
+
+#include "adf_admin.h"
+#include "adf_common_drv.h"
+#include "adf_heartbeat.h"
+
+#define MAX_HB_TICKS 0xFFFFFFFF
+
+static int adf_hb_set_timer_to_max(struct adf_accel_dev *accel_dev)
+{
+	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+
+	accel_dev->heartbeat->hb_timer = 0;
+
+	if (hw_data->stop_timer)
+		hw_data->stop_timer(accel_dev);
+
+	return adf_send_admin_hb_timer(accel_dev, MAX_HB_TICKS);
+}
+
+static void adf_set_hb_counters_fail(struct adf_accel_dev *accel_dev, u32 ae,
+				     u32 thr)
+{
+	struct hb_cnt_pair *stats = accel_dev->heartbeat->dma.virt_addr;
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+	const size_t max_aes = hw_device->get_num_aes(hw_device);
+	const size_t hb_ctrs = hw_device->num_hb_ctrs;
+	size_t thr_id = ae * hb_ctrs + thr;
+	u16 num_rsp = stats[thr_id].resp_heartbeat_cnt;
+
+	/*
+	 * Inject live.req != live.rsp and live.rsp == last.rsp
+	 * to trigger the heartbeat error detection
+	 */
+	stats[thr_id].req_heartbeat_cnt++;
+	stats += (max_aes * hb_ctrs);
+	stats[thr_id].resp_heartbeat_cnt = num_rsp;
+}
+
+int adf_heartbeat_inject_error(struct adf_accel_dev *accel_dev)
+{
+	struct adf_hw_device_data *hw_device = accel_dev->hw_device;
+	const size_t max_aes = hw_device->get_num_aes(hw_device);
+	const size_t hb_ctrs = hw_device->num_hb_ctrs;
+	u32 rand, rand_ae, rand_thr;
+	unsigned long ae_mask;
+	int ret;
+
+	ae_mask = hw_device->ae_mask;
+
+	do {
+		/* Ensure we have a valid ae */
+		get_random_bytes(&rand, sizeof(rand));
+		rand_ae = rand % max_aes;
+	} while (!test_bit(rand_ae, &ae_mask));
+
+	get_random_bytes(&rand, sizeof(rand));
+	rand_thr = rand % hb_ctrs;
+
+	/* Increase the heartbeat timer to prevent FW updating HB counters */
+	ret = adf_hb_set_timer_to_max(accel_dev);
+	if (ret)
+		return ret;
+
+	/* Configure worker threads to stop processing any packet */
+	ret = adf_disable_arb_thd(accel_dev, rand_ae, rand_thr);
+	if (ret)
+		return ret;
+
+	/* Change HB counters memory to simulate a hang */
+	adf_set_hb_counters_fail(accel_dev, rand_ae, rand_thr);
+
+	return 0;
+}

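The injector above relies on the heartbeat DMA buffer holding two back-to-back arrays of hb_cnt_pair - the live counters first, then the last-read snapshot - each max_aes * hb_ctrs entries long, so `stats += max_aes * hb_ctrs` hops from a live entry to its snapshot twin. Making live.req run ahead of live.rsp while live.rsp equals last.rsp is exactly the signature the detector treats as a hung thread. A standalone sketch of the index math (array sizes are made up for the demo):

#include <stdint.h>
#include <stdio.h>

struct hb_cnt_pair {
	uint16_t resp_heartbeat_cnt;
	uint16_t req_heartbeat_cnt;
};

int main(void)
{
	enum { MAX_AES = 4, HB_CTRS = 2 };		/* demo values only */
	struct hb_cnt_pair stats[2 * MAX_AES * HB_CTRS] = { 0 };
	unsigned int ae = 1, thr = 1;
	unsigned int thr_id = ae * HB_CTRS + thr;	/* same formula as the driver */
	struct hb_cnt_pair *live = stats;
	struct hb_cnt_pair *last = stats + MAX_AES * HB_CTRS;

	/* live.req != live.rsp and live.rsp == last.rsp => looks hung */
	live[thr_id].req_heartbeat_cnt++;
	last[thr_id].resp_heartbeat_cnt = live[thr_id].resp_heartbeat_cnt;

	printf("thr_id=%u live.req=%u live.rsp=%u last.rsp=%u\n", thr_id,
	       live[thr_id].req_heartbeat_cnt, live[thr_id].resp_heartbeat_cnt,
	       last[thr_id].resp_heartbeat_cnt);
	return 0;
}
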
@@ -103,3 +103,28 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev)
 		csr_ops->write_csr_ring_srv_arb_en(csr, i, 0);
 }
 EXPORT_SYMBOL_GPL(adf_exit_arb);
+
+int adf_disable_arb_thd(struct adf_accel_dev *accel_dev, u32 ae, u32 thr)
+{
+	void __iomem *csr = accel_dev->transport->banks[0].csr_addr;
+	struct adf_hw_device_data *hw_data = accel_dev->hw_device;
+	const u32 *thd_2_arb_cfg;
+	struct arb_info info;
+	u32 ae_thr_map;
+
+	if (ADF_AE_STRAND0_THREAD == thr || ADF_AE_STRAND1_THREAD == thr)
+		thr = ADF_AE_ADMIN_THREAD;
+
+	hw_data->get_arb_info(&info);
+	thd_2_arb_cfg = hw_data->get_arb_mapping(accel_dev);
+	if (!thd_2_arb_cfg)
+		return -EFAULT;
+
+	/* Disable scheduling for this particular AE and thread */
+	ae_thr_map = *(thd_2_arb_cfg + ae);
+	ae_thr_map &= ~(GENMASK(3, 0) << (thr * BIT(2)));
+
+	WRITE_CSR_ARB_WT2SAM(csr, info.arb_offset, info.wt2sam_offset, ae,
+			     ae_thr_map);
+	return 0;
+}

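In adf_disable_arb_thd() above, each AE's 32-bit wt2sam word packs one 4-bit enable field per thread, and `thr * BIT(2)` is just thr * 4, so the mask statement clears exactly thread thr's nibble. A standalone sketch:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t map = 0xFFFFFFFFu;	/* all thread nibbles enabled */
	unsigned int thr = 3;

	/* same effect as: map &= ~(GENMASK(3, 0) << (thr * BIT(2))); */
	map &= ~(0xFu << (thr * 4));

	printf("map = 0x%08X\n", map);	/* prints 0xFFFF0FFF */
	return 0;
}
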
@@ -433,6 +433,18 @@ int adf_dev_restarted_notify(struct adf_accel_dev *accel_dev)
 	return 0;
 }

+void adf_error_notifier(struct adf_accel_dev *accel_dev)
+{
+	struct service_hndl *service;
+
+	list_for_each_entry(service, &service_table, list) {
+		if (service->event_hld(accel_dev, ADF_EVENT_FATAL_ERROR))
+			dev_err(&GET_DEV(accel_dev),
+				"Failed to send error event to %s.\n",
+				service->name);
+	}
+}
+
 static int adf_dev_shutdown_cache_cfg(struct adf_accel_dev *accel_dev)
 {
 	char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};

@@ -139,8 +139,13 @@ static bool adf_handle_ras_int(struct adf_accel_dev *accel_dev)

 	if (ras_ops->handle_interrupt &&
 	    ras_ops->handle_interrupt(accel_dev, &reset_required)) {
-		if (reset_required)
+		if (reset_required) {
 			dev_err(&GET_DEV(accel_dev), "Fatal error, reset required\n");
+			if (adf_notify_fatal_error(accel_dev))
+				dev_err(&GET_DEV(accel_dev),
+					"Failed to notify fatal error\n");
+		}

 		return true;
 	}
@@ -272,7 +277,7 @@ static int adf_isr_alloc_msix_vectors_data(struct adf_accel_dev *accel_dev)
 	if (!accel_dev->pf.vf_info)
 		msix_num_entries += hw_data->num_banks;

-	irqs = kzalloc_node(msix_num_entries * sizeof(*irqs),
+	irqs = kcalloc_node(msix_num_entries, sizeof(*irqs),
			     GFP_KERNEL, dev_to_node(&GET_DEV(accel_dev)));
 	if (!irqs)
 		return -ENOMEM;
@@ -375,8 +380,6 @@ EXPORT_SYMBOL_GPL(adf_isr_resource_alloc);
 /**
  * adf_init_misc_wq() - Init misc workqueue
  *
- * Function init workqueue 'qat_misc_wq' for general purpose.
- *
  * Return: 0 on success, error code otherwise.
  */
 int __init adf_init_misc_wq(void)

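The kzalloc_node() to kcalloc_node() switch above is the standard overflow-hardening move: the array form takes count and element size separately and can refuse a product that would wrap, instead of silently allocating a too-small buffer. A standalone sketch of the kind of check the calloc-style API performs (illustrative, not the kernel implementation):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void *checked_alloc_array(size_t n, size_t size)
{
	if (size && n > SIZE_MAX / size)
		return NULL;		/* n * size would overflow */
	return calloc(n, size);		/* zeroed, like kcalloc */
}

int main(void)
{
	void *ok = checked_alloc_array(8, sizeof(uint64_t));
	void *bad = checked_alloc_array(SIZE_MAX / 2, 16);

	printf("ok=%p bad=%p\n", ok, bad);	/* bad is NULL */
	free(ok);
	return 0;
}
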
@@ -99,6 +99,8 @@ enum pf2vf_msgtype {
 	ADF_PF2VF_MSGTYPE_RESTARTING = 0x01,
 	ADF_PF2VF_MSGTYPE_VERSION_RESP = 0x02,
 	ADF_PF2VF_MSGTYPE_BLKMSG_RESP = 0x03,
+	ADF_PF2VF_MSGTYPE_FATAL_ERROR = 0x04,
+	ADF_PF2VF_MSGTYPE_RESTARTED = 0x05,
 	/* Values from 0x10 are Gen4 specific, message type is only 4 bits in Gen2 devices. */
 	ADF_PF2VF_MSGTYPE_RP_RESET_RESP = 0x10,
 };
@@ -112,6 +114,7 @@ enum vf2pf_msgtype {
 	ADF_VF2PF_MSGTYPE_LARGE_BLOCK_REQ = 0x07,
 	ADF_VF2PF_MSGTYPE_MEDIUM_BLOCK_REQ = 0x08,
 	ADF_VF2PF_MSGTYPE_SMALL_BLOCK_REQ = 0x09,
+	ADF_VF2PF_MSGTYPE_RESTARTING_COMPLETE = 0x0a,
 	/* Values from 0x10 are Gen4 specific, message type is only 4 bits in Gen2 devices. */
 	ADF_VF2PF_MSGTYPE_RP_RESET = 0x10,
 };
@@ -124,8 +127,10 @@ enum pfvf_compatibility_version {
 	ADF_PFVF_COMPAT_FAST_ACK = 0x03,
 	/* Ring to service mapping support for non-standard mappings */
 	ADF_PFVF_COMPAT_RING_TO_SVC_MAP = 0x04,
+	/* Fallback compat */
+	ADF_PFVF_COMPAT_FALLBACK = 0x05,
 	/* Reference to the latest version */
-	ADF_PFVF_COMPAT_THIS_VERSION = 0x04,
+	ADF_PFVF_COMPAT_THIS_VERSION = 0x05,
 };

 /* PF->VF Version Response */

@@ -1,21 +1,83 @@
 // SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
 /* Copyright(c) 2015 - 2021 Intel Corporation */
+#include <linux/delay.h>
 #include <linux/pci.h>
 #include "adf_accel_devices.h"
 #include "adf_pfvf_msg.h"
 #include "adf_pfvf_pf_msg.h"
+#include "adf_pfvf_pf_proto.h"
+
+#define ADF_PF_WAIT_RESTARTING_COMPLETE_DELAY	100
+#define ADF_VF_SHUTDOWN_RETRY			100

 void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev)
 {
 	struct adf_accel_vf_info *vf;
 	struct pfvf_message msg = { .type = ADF_PF2VF_MSGTYPE_RESTARTING };
 	int i, num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev));

+	dev_dbg(&GET_DEV(accel_dev), "pf2vf notify restarting\n");
 	for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) {
-		if (vf->init && adf_send_pf2vf_msg(accel_dev, i, msg))
+		vf->restarting = false;
+		if (!vf->init)
+			continue;
+		if (adf_send_pf2vf_msg(accel_dev, i, msg))
 			dev_err(&GET_DEV(accel_dev),
 				"Failed to send restarting msg to VF%d\n", i);
+		else if (vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK)
+			vf->restarting = true;
 	}
 }
+
+void adf_pf2vf_wait_for_restarting_complete(struct adf_accel_dev *accel_dev)
+{
+	int num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev));
+	int i, retries = ADF_VF_SHUTDOWN_RETRY;
+	struct adf_accel_vf_info *vf;
+	bool vf_running;
+
+	dev_dbg(&GET_DEV(accel_dev), "pf2vf wait for restarting complete\n");
+	do {
+		vf_running = false;
+		for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++)
+			if (vf->restarting)
+				vf_running = true;
+		if (!vf_running)
+			break;
+		msleep(ADF_PF_WAIT_RESTARTING_COMPLETE_DELAY);
+	} while (--retries);
+
+	if (vf_running)
+		dev_warn(&GET_DEV(accel_dev), "Some VFs are still running\n");
+}
+
+void adf_pf2vf_notify_restarted(struct adf_accel_dev *accel_dev)
+{
+	struct pfvf_message msg = { .type = ADF_PF2VF_MSGTYPE_RESTARTED };
+	int i, num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev));
+	struct adf_accel_vf_info *vf;
+
+	dev_dbg(&GET_DEV(accel_dev), "pf2vf notify restarted\n");
+	for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) {
+		if (vf->init && vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK &&
+		    adf_send_pf2vf_msg(accel_dev, i, msg))
+			dev_err(&GET_DEV(accel_dev),
+				"Failed to send restarted msg to VF%d\n", i);
+	}
+}
+
+void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev)
+{
+	struct pfvf_message msg = { .type = ADF_PF2VF_MSGTYPE_FATAL_ERROR };
+	int i, num_vfs = pci_num_vf(accel_to_pci_dev(accel_dev));
+	struct adf_accel_vf_info *vf;
+
+	dev_dbg(&GET_DEV(accel_dev), "pf2vf notify fatal error\n");
+	for (i = 0, vf = accel_dev->pf.vf_info; i < num_vfs; i++, vf++) {
+		if (vf->init && vf->vf_compat_ver >= ADF_PFVF_COMPAT_FALLBACK &&
+		    adf_send_pf2vf_msg(accel_dev, i, msg))
+			dev_err(&GET_DEV(accel_dev),
+				"Failed to send fatal error msg to VF%d\n", i);
+	}
+}

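adf_pf2vf_wait_for_restarting_complete() above is a bounded poll: it rescans every VF's restarting flag, sleeps ADF_PF_WAIT_RESTARTING_COMPLETE_DELAY ms between rounds, and gives up after ADF_VF_SHUTDOWN_RETRY rounds - with the values above, roughly ten seconds worst case before the "still running" warning. A standalone sketch of the shape, with the busy[] flags standing in for vf->restarting:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define POLL_DELAY_MS 100
#define MAX_RETRIES   100

int main(void)
{
	bool busy[3] = { true, false, false };
	int retries = MAX_RETRIES;
	bool any_busy;

	do {
		any_busy = false;
		for (int i = 0; i < 3; i++)
			if (busy[i])
				any_busy = true;
		if (!any_busy)
			break;
		usleep(POLL_DELAY_MS * 1000);
		busy[0] = false;	/* simulate the VF acknowledging */
	} while (--retries);

	puts(any_busy ? "some VFs still running" : "all VFs done");
	return 0;
}
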
@@ -5,7 +5,28 @@

 #include "adf_accel_devices.h"

+#if defined(CONFIG_PCI_IOV)
 void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev);
+void adf_pf2vf_wait_for_restarting_complete(struct adf_accel_dev *accel_dev);
+void adf_pf2vf_notify_restarted(struct adf_accel_dev *accel_dev);
+void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev);
+#else
+static inline void adf_pf2vf_notify_restarting(struct adf_accel_dev *accel_dev)
+{
+}
+
+static inline void adf_pf2vf_wait_for_restarting_complete(struct adf_accel_dev *accel_dev)
+{
+}
+
+static inline void adf_pf2vf_notify_restarted(struct adf_accel_dev *accel_dev)
+{
+}
+
+static inline void adf_pf2vf_notify_fatal_error(struct adf_accel_dev *accel_dev)
+{
+}
+#endif

 typedef int (*adf_pf2vf_blkmsg_provider)(struct adf_accel_dev *accel_dev,
					  u8 *buffer, u8 compat);

@@ -291,6 +291,14 @@ static int adf_handle_vf2pf_msg(struct adf_accel_dev *accel_dev, u8 vf_nr,
 			vf_info->init = false;
 		}
 		break;
+	case ADF_VF2PF_MSGTYPE_RESTARTING_COMPLETE:
+		{
+		dev_dbg(&GET_DEV(accel_dev),
+			"Restarting Complete received from VF%d\n", vf_nr);
+		vf_info->restarting = false;
+		vf_info->init = false;
+		}
+		break;
 	case ADF_VF2PF_MSGTYPE_LARGE_BLOCK_REQ:
 	case ADF_VF2PF_MSGTYPE_MEDIUM_BLOCK_REQ:
 	case ADF_VF2PF_MSGTYPE_SMALL_BLOCK_REQ:

@@ -308,6 +308,12 @@ static bool adf_handle_pf2vf_msg(struct adf_accel_dev *accel_dev,

 		adf_pf2vf_handle_pf_restarting(accel_dev);
 		return false;
+	case ADF_PF2VF_MSGTYPE_RESTARTED:
+		dev_dbg(&GET_DEV(accel_dev), "Restarted message received from PF\n");
+		return true;
+	case ADF_PF2VF_MSGTYPE_FATAL_ERROR:
+		dev_err(&GET_DEV(accel_dev), "Fatal error received from PF\n");
+		return true;
 	case ADF_PF2VF_MSGTYPE_VERSION_RESP:
 	case ADF_PF2VF_MSGTYPE_BLKMSG_RESP:
 	case ADF_PF2VF_MSGTYPE_RP_RESET_RESP:

@@ -788,6 +788,24 @@ static void clear_sla(struct adf_rl *rl_data, struct rl_sla *sla)
 	sla_type_arr[node_id] = NULL;
 }

+static void free_all_sla(struct adf_accel_dev *accel_dev)
+{
+	struct adf_rl *rl_data = accel_dev->rate_limiting;
+	int sla_id;
+
+	mutex_lock(&rl_data->rl_lock);
+
+	for (sla_id = 0; sla_id < RL_NODES_CNT_MAX; sla_id++) {
+		if (!rl_data->sla[sla_id])
+			continue;
+
+		kfree(rl_data->sla[sla_id]);
+		rl_data->sla[sla_id] = NULL;
+	}
+
+	mutex_unlock(&rl_data->rl_lock);
+}
+
 /**
  * add_update_sla() - handles the creation and the update of an SLA
  * @accel_dev: pointer to acceleration device structure
@@ -1155,7 +1173,7 @@ void adf_rl_stop(struct adf_accel_dev *accel_dev)
 		return;

 	adf_sysfs_rl_rm(accel_dev);
-	adf_rl_remove_sla_all(accel_dev, true);
+	free_all_sla(accel_dev);
 }

 void adf_rl_exit(struct adf_accel_dev *accel_dev)

@@ -60,7 +60,6 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev)
 		/* This ptr will be populated when VFs will be created */
 		vf_info->accel_dev = accel_dev;
 		vf_info->vf_nr = i;
-		vf_info->vf_compat_ver = 0;

 		mutex_init(&vf_info->pf2vf_lock);
 		ratelimit_state_init(&vf_info->vf2pf_ratelimit,
@@ -84,6 +83,32 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev)
 	return pci_enable_sriov(pdev, totalvfs);
 }

+void adf_reenable_sriov(struct adf_accel_dev *accel_dev)
+{
+	struct pci_dev *pdev = accel_to_pci_dev(accel_dev);
+	char cfg[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
+	unsigned long val = 0;
+
+	if (adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
+				    ADF_SRIOV_ENABLED, cfg))
+		return;
+
+	if (!accel_dev->pf.vf_info)
+		return;
+
+	if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, ADF_NUM_CY,
+					&val, ADF_DEC))
+		return;
+
+	if (adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, ADF_NUM_DC,
+					&val, ADF_DEC))
+		return;
+
+	set_bit(ADF_STATUS_CONFIGURED, &accel_dev->status);
+	dev_dbg(&pdev->dev, "Re-enabling SRIOV\n");
+	adf_enable_sriov(accel_dev);
+}
+
 /**
  * adf_disable_sriov() - Disable SRIOV for the device
  * @accel_dev:  Pointer to accel device.
@@ -103,6 +128,7 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev)
 		return;

 	adf_pf2vf_notify_restarting(accel_dev);
+	adf_pf2vf_wait_for_restarting_complete(accel_dev);
 	pci_disable_sriov(accel_to_pci_dev(accel_dev));

 	/* Disable VF to PF interrupts */
@@ -115,8 +141,10 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev)
 	for (i = 0, vf = accel_dev->pf.vf_info; i < totalvfs; i++, vf++)
 		mutex_destroy(&vf->pf2vf_lock);

-	kfree(accel_dev->pf.vf_info);
-	accel_dev->pf.vf_info = NULL;
+	if (!test_bit(ADF_STATUS_RESTARTING, &accel_dev->status)) {
+		kfree(accel_dev->pf.vf_info);
+		accel_dev->pf.vf_info = NULL;
+	}
 }
 EXPORT_SYMBOL_GPL(adf_disable_sriov);

@@ -194,6 +222,10 @@ int adf_sriov_configure(struct pci_dev *pdev, int numvfs)
 	if (ret)
 		return ret;

+	val = 1;
+	adf_cfg_add_key_value_param(accel_dev, ADF_GENERAL_SEC, ADF_SRIOV_ENABLED,
+				    &val, ADF_DEC);
+
 	return numvfs;
 }
 EXPORT_SYMBOL_GPL(adf_sriov_configure);

@@ -204,6 +204,42 @@ static ssize_t pm_idle_enabled_store(struct device *dev, struct device_attribute
 }
 static DEVICE_ATTR_RW(pm_idle_enabled);

+static ssize_t auto_reset_show(struct device *dev, struct device_attribute *attr,
+			       char *buf)
+{
+	char *auto_reset;
+	struct adf_accel_dev *accel_dev;
+
+	accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+	if (!accel_dev)
+		return -EINVAL;
+
+	auto_reset = accel_dev->autoreset_on_error ? "on" : "off";
+
+	return sysfs_emit(buf, "%s\n", auto_reset);
+}
+
+static ssize_t auto_reset_store(struct device *dev, struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	struct adf_accel_dev *accel_dev;
+	bool enabled = false;
+	int ret;
+
+	ret = kstrtobool(buf, &enabled);
+	if (ret)
+		return ret;
+
+	accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
+	if (!accel_dev)
+		return -EINVAL;
+
+	accel_dev->autoreset_on_error = enabled;
+
+	return count;
+}
+static DEVICE_ATTR_RW(auto_reset);
+
 static DEVICE_ATTR_RW(state);
 static DEVICE_ATTR_RW(cfg_services);

@@ -291,6 +327,7 @@ static struct attribute *qat_attrs[] = {
 	&dev_attr_pm_idle_enabled.attr,
 	&dev_attr_rp2srv.attr,
 	&dev_attr_num_rps.attr,
+	&dev_attr_auto_reset.attr,
 	NULL,
 };

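The auto_reset attribute above parses its input with kstrtobool(), so the usual boolean spellings are accepted ("1"/"0", "y"/"n", "on"/"off"), and reads report "on" or "off". Assuming it lands in the same attribute group as its neighbours in qat_attrs[], enabling it would look like `echo on > /sys/bus/pci/devices/<BDF>/qat/auto_reset`; the autoreset_on_error flag it sets is consulted by the fatal-error path so the device can be reset without user intervention.
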
@@ -293,8 +293,6 @@ EXPORT_SYMBOL_GPL(adf_flush_vf_wq);
 /**
  * adf_init_vf_wq() - Init workqueue for VF
  *
- * Function init workqueue 'adf_vf_stop_wq' for VF.
- *
  * Return: 0 on success, error code otherwise.
  */
 int __init adf_init_vf_wq(void)

@@ -13,15 +13,6 @@
 #include "qat_compression.h"
 #include "qat_algs_send.h"

-#define QAT_RFC_1950_HDR_SIZE 2
-#define QAT_RFC_1950_FOOTER_SIZE 4
-#define QAT_RFC_1950_CM_DEFLATE 8
-#define QAT_RFC_1950_CM_DEFLATE_CINFO_32K 7
-#define QAT_RFC_1950_CM_MASK 0x0f
-#define QAT_RFC_1950_CM_OFFSET 4
-#define QAT_RFC_1950_DICT_MASK 0x20
-#define QAT_RFC_1950_COMP_HDR 0x785e
-
 static DEFINE_MUTEX(algs_lock);
 static unsigned int active_devs;

@@ -105,8 +105,8 @@ struct qat_crypto_instance *qat_crypto_get_instance_node(int node)
 }

 /**
- * qat_crypto_vf_dev_config()
- *     create dev config required to create crypto inst.
+ * qat_crypto_vf_dev_config() - create dev config required to create
+ * crypto inst.
  *
  * @accel_dev: Pointer to acceleration device.
  *

@@ -371,6 +371,11 @@ static int rk_crypto_probe(struct platform_device *pdev)
 	}

 	crypto_info->engine = crypto_engine_alloc_init(&pdev->dev, true);
+	if (!crypto_info->engine) {
+		err = -ENOMEM;
+		goto err_crypto;
+	}
+
 	crypto_engine_start(crypto_info->engine);
 	init_completion(&crypto_info->complete);

@@ -225,11 +225,11 @@ static int __virtio_crypto_akcipher_do_req(struct virtio_crypto_akcipher_request
 	struct virtio_crypto *vcrypto = ctx->vcrypto;
 	struct virtio_crypto_op_data_req *req_data = vc_req->req_data;
 	struct scatterlist *sgs[4], outhdr_sg, inhdr_sg, srcdata_sg, dstdata_sg;
-	void *src_buf = NULL, *dst_buf = NULL;
+	void *src_buf, *dst_buf = NULL;
 	unsigned int num_out = 0, num_in = 0;
 	int node = dev_to_node(&vcrypto->vdev->dev);
 	unsigned long flags;
-	int ret = -ENOMEM;
+	int ret;
 	bool verify = vc_akcipher_req->opcode == VIRTIO_CRYPTO_AKCIPHER_VERIFY;
 	unsigned int src_len = verify ? req->src_len + req->dst_len : req->src_len;

@@ -240,7 +240,7 @@ static int __virtio_crypto_akcipher_do_req(struct virtio_crypto_akcipher_request
 	/* src data */
 	src_buf = kcalloc_node(src_len, 1, GFP_KERNEL, node);
 	if (!src_buf)
-		goto err;
+		return -ENOMEM;

 	if (verify) {
 		/* for verify operation, both src and dst data work as OUT direction */
@@ -255,7 +255,7 @@ static int __virtio_crypto_akcipher_do_req(struct virtio_crypto_akcipher_request
 	/* dst data */
 	dst_buf = kcalloc_node(req->dst_len, 1, GFP_KERNEL, node);
 	if (!dst_buf)
-		goto err;
+		goto free_src;

 	sg_init_one(&dstdata_sg, dst_buf, req->dst_len);
 	sgs[num_out + num_in++] = &dstdata_sg;
@@ -278,9 +278,9 @@ static int __virtio_crypto_akcipher_do_req(struct virtio_crypto_akcipher_request
 	return 0;

 err:
-	kfree(src_buf);
 	kfree(dst_buf);
-
+free_src:
+	kfree(src_buf);
 	return -ENOMEM;
 }

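The reworked error path above is the canonical goto-unwind: one label per acquired resource, jumped to in reverse order of acquisition, so a failed dst_buf allocation frees only src_buf and a failed src_buf allocation just returns. A standalone sketch of the pattern (illustrative names):

#include <stdio.h>
#include <stdlib.h>

static int do_req(size_t src_len, size_t dst_len)
{
	void *src_buf, *dst_buf;

	src_buf = calloc(src_len, 1);
	if (!src_buf)
		return -1;		/* nothing acquired yet: plain return */

	dst_buf = calloc(dst_len, 1);
	if (!dst_buf)
		goto free_src;		/* unwind only what already exists */

	/* ... use the buffers ... */

	free(dst_buf);
	free(src_buf);
	return 0;

free_src:
	free(src_buf);
	return -1;
}

int main(void)
{
	printf("do_req -> %d\n", do_req(16, 16));
	return 0;
}
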
@@ -42,8 +42,6 @@ static void virtcrypto_ctrlq_callback(struct virtqueue *vq)
 			virtio_crypto_ctrlq_callback(vc_ctrl_req);
 			spin_lock_irqsave(&vcrypto->ctrl_lock, flags);
 		}
-		if (unlikely(virtqueue_is_broken(vq)))
-			break;
 	} while (!virtqueue_enable_cb(vq));
 	spin_unlock_irqrestore(&vcrypto->ctrl_lock, flags);
 }

@@ -1,3 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-aesp8-ppc.S
-ghashp8-ppc.S
@@ -1,14 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-config CRYPTO_DEV_VMX_ENCRYPT
-	tristate "Encryption acceleration support on P8 CPU"
-	depends on CRYPTO_DEV_VMX
-	select CRYPTO_AES
-	select CRYPTO_CBC
-	select CRYPTO_CTR
-	select CRYPTO_GHASH
-	select CRYPTO_XTS
-	default m
-	help
-	  Support for VMX cryptographic acceleration instructions on Power8 CPU.
-	  This module supports acceleration for AES and GHASH in hardware. If you
-	  choose 'M' here, this module will be called vmx-crypto.
@@ -1,23 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_CRYPTO_DEV_VMX_ENCRYPT) += vmx-crypto.o
-vmx-crypto-objs := vmx.o aesp8-ppc.o ghashp8-ppc.o aes.o aes_cbc.o aes_ctr.o aes_xts.o ghash.o
-
-ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
-override flavour := linux-ppc64le
-else
-ifdef CONFIG_PPC64_ELF_ABI_V2
-override flavour := linux-ppc64-elfv2
-else
-override flavour := linux-ppc64
-endif
-endif
-
-quiet_cmd_perl = PERL $@
-      cmd_perl = $(PERL) $< $(flavour) > $@
-
-targets += aesp8-ppc.S ghashp8-ppc.S
-
-$(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
-	$(call if_changed,perl)
-
-OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y

@@ -1,231 +0,0 @@
-#!/usr/bin/env perl
-# SPDX-License-Identifier: GPL-2.0
-
-# PowerPC assembler distiller by <appro>.
-
-my $flavour = shift;
-my $output = shift;
-open STDOUT,">$output" || die "can't open $output: $!";
-
-my %GLOBALS;
-my $dotinlocallabels=($flavour=~/linux/)?1:0;
-my $elfv2abi=(($flavour =~ /linux-ppc64le/) or ($flavour =~ /linux-ppc64-elfv2/))?1:0;
-my $dotfunctions=($elfv2abi=~1)?0:1;
-
-################################################################
-# directives which need special treatment on different platforms
-################################################################
-my $globl = sub {
-	my $junk = shift;
-	my $name = shift;
-	my $global = \$GLOBALS{$name};
-	my $ret;
-
-	$name =~ s|^[\.\_]||;
-
-	SWITCH: for ($flavour) {
-		/aix/	&& do { $name = ".$name";
-				last;
-			      };
-		/osx/	&& do { $name = "_$name";
-				last;
-			      };
-		/linux/
-			&& do { $ret = "_GLOBAL($name)";
-				last;
-			      };
-	}
-
-	$ret = ".globl	$name\n.align 5\n$name:" if (!$ret);
-	$$global = $name;
-	$ret;
-};
-my $text = sub {
-	my $ret = ($flavour =~ /aix/) ? ".csect\t.text[PR],7" : ".text";
-	$ret = ".abiversion 2\n".$ret if ($elfv2abi);
-	$ret;
-};
-my $machine = sub {
-	my $junk = shift;
-	my $arch = shift;
-	if ($flavour =~ /osx/)
-	{	$arch =~ s/\"//g;
-		$arch = ($flavour=~/64/) ? "ppc970-64" : "ppc970" if ($arch eq "any");
-	}
-	".machine	$arch";
-};
-my $size = sub {
-	if ($flavour =~ /linux/)
-	{	shift;
-		my $name = shift; $name =~ s|^[\.\_]||;
-		my $ret = ".size	$name,.-".($dotfunctions?".":"").$name;
-		$ret .= "\n.size	.$name,.-.$name" if ($dotfunctions);
-		$ret;
-	}
-	else
-	{	"";	}
-};
-my $asciz = sub {
-	shift;
-	my $line = join(",",@_);
-	if ($line =~ /^"(.*)"$/)
-	{	".byte	" . join(",",unpack("C*",$1),0) . "\n.align	2";	}
-	else
-	{	"";	}
-};
-my $quad = sub {
-	shift;
-	my @ret;
-	my ($hi,$lo);
-	for (@_) {
-		if (/^0x([0-9a-f]*?)([0-9a-f]{1,8})$/io)
-		{	$hi=$1?"0x$1":"0"; $lo="0x$2";	}
-		elsif (/^([0-9]+)$/o)
-		{	$hi=$1>>32; $lo=$1&0xffffffff;	} # error-prone with 32-bit perl
-		else
-		{	$hi=undef; $lo=$_;	}
-
-		if (defined($hi))
-		{	push(@ret,$flavour=~/le$/o?".long\t$lo,$hi":".long\t$hi,$lo");	}
-		else
-		{	push(@ret,".quad	$lo");	}
-	}
-	join("\n",@ret);
-};
-
-################################################################
-# simplified mnemonics not handled by at least one assembler
-################################################################
-my $cmplw = sub {
-	my $f = shift;
-	my $cr = 0; $cr = shift if ($#_>1);
-	# Some out-of-date 32-bit GNU assembler just can't handle cmplw...
-	($flavour =~ /linux.*32/) ?
-	"	.long	".sprintf "0x%x",31<<26|$cr<<23|$_[0]<<16|$_[1]<<11|64 :
-	"	cmplw	".join(',',$cr,@_);
-};
-my $bdnz = sub {
-	my $f = shift;
-	my $bo = $f=~/[\+\-]/ ? 16+9 : 16;	# optional "to be taken" hint
-	"	bc	$bo,0,".shift;
-} if ($flavour!~/linux/);
-my $bltlr = sub {
-	my $f = shift;
-	my $bo = $f=~/\-/ ? 12+2 : 12;	# optional "not to be taken" hint
-	($flavour =~ /linux/) ?	# GNU as doesn't allow most recent hints
-	"	.long	".sprintf "0x%x",19<<26|$bo<<21|16<<1 :
-	"	bclr	$bo,0";
-};
-my $bnelr = sub {
-	my $f = shift;
-	my $bo = $f=~/\-/ ? 4+2 : 4;	# optional "not to be taken" hint
-	($flavour =~ /linux/) ?	# GNU as doesn't allow most recent hints
-	"	.long	".sprintf "0x%x",19<<26|$bo<<21|2<<16|16<<1 :
-	"	bclr	$bo,2";
-};
-my $beqlr = sub {
-	my $f = shift;
-	my $bo = $f=~/-/ ? 12+2 : 12;	# optional "not to be taken" hint
-	($flavour =~ /linux/) ?	# GNU as doesn't allow most recent hints
-	"	.long	".sprintf "0x%X",19<<26|$bo<<21|2<<16|16<<1 :
-	"	bclr	$bo,2";
-};
-# GNU assembler can't handle extrdi rA,rS,16,48, or when sum of last two
-# arguments is 64, with "operand out of range" error.
-my $extrdi = sub {
-	my ($f,$ra,$rs,$n,$b) = @_;
-	$b = ($b+$n)&63; $n = 64-$n;
-	"	rldicl	$ra,$rs,$b,$n";
-};
-my $vmr = sub {
-	my ($f,$vx,$vy) = @_;
-	"	vor	$vx,$vy,$vy";
-};
-
-# Some ABIs specify vrsave, special-purpose register #256, as reserved
-# for system use.
-my $no_vrsave = ($elfv2abi);
-my $mtspr = sub {
-	my ($f,$idx,$ra) = @_;
-	if ($idx == 256 && $no_vrsave) {
-		"	or	$ra,$ra,$ra";
-	} else {
-		"	mtspr	$idx,$ra";
-	}
-};
-my $mfspr = sub {
-	my ($f,$rd,$idx) = @_;
-	if ($idx == 256 && $no_vrsave) {
-		"	li	$rd,-1";
-	} else {
-		"	mfspr	$rd,$idx";
-	}
-};
-
-# PowerISA 2.06 stuff
-sub vsxmem_op {
-	my ($f, $vrt, $ra, $rb, $op) = @_;
-	"	.long	".sprintf "0x%X",(31<<26)|($vrt<<21)|($ra<<16)|($rb<<11)|($op*2+1);
-}
-# made-up unaligned memory reference AltiVec/VMX instructions
-my $lvx_u	= sub { vsxmem_op(@_, 844); };	# lxvd2x
-my $stvx_u	= sub { vsxmem_op(@_, 972); };	# stxvd2x
-my $lvdx_u	= sub { vsxmem_op(@_, 588); };	# lxsdx
-my $stvdx_u	= sub { vsxmem_op(@_, 716); };	# stxsdx
-my $lvx_4w	= sub { vsxmem_op(@_, 780); };	# lxvw4x
-my $stvx_4w	= sub { vsxmem_op(@_, 908); };	# stxvw4x
-
-# PowerISA 2.07 stuff
-sub vcrypto_op {
-	my ($f, $vrt, $vra, $vrb, $op) = @_;
-	"	.long	".sprintf "0x%X",(4<<26)|($vrt<<21)|($vra<<16)|($vrb<<11)|$op;
-}
-my $vcipher	= sub { vcrypto_op(@_, 1288); };
-my $vcipherlast	= sub { vcrypto_op(@_, 1289); };
-my $vncipher	= sub { vcrypto_op(@_, 1352); };
-my $vncipherlast= sub { vcrypto_op(@_, 1353); };
-my $vsbox	= sub { vcrypto_op(@_, 0, 1480); };
-my $vshasigmad	= sub { my ($st,$six)=splice(@_,-2); vcrypto_op(@_, $st<<4|$six, 1730); };
-my $vshasigmaw	= sub { my ($st,$six)=splice(@_,-2); vcrypto_op(@_, $st<<4|$six, 1666); };
-my $vpmsumb	= sub { vcrypto_op(@_, 1032); };
-my $vpmsumd	= sub { vcrypto_op(@_, 1224); };
-my $vpmsubh	= sub { vcrypto_op(@_, 1096); };
-my $vpmsumw	= sub { vcrypto_op(@_, 1160); };
-my $vaddudm	= sub { vcrypto_op(@_, 192); };
-my $vadduqm	= sub { vcrypto_op(@_, 256); };
-
-my $mtsle	= sub {
-	my ($f, $arg) = @_;
-	"	.long	".sprintf "0x%X",(31<<26)|($arg<<21)|(147*2);
-};
-
-print "#include <asm/ppc_asm.h>\n" if $flavour =~ /linux/;
-
-while($line=<>) {
-
-	$line =~ s|[#!;].*$||;	# get rid of asm-style comments...
-	$line =~ s|/\*.*\*/||;	# ... and C-style comments...
-	$line =~ s|^\s+||;	# ... and skip white spaces in beginning...
-	$line =~ s|\s+$||;	# ... and at the end
-
-	{
-		$line =~ s|\b\.L(\w+)|L$1|g;	# common denominator for Locallabel
-		$line =~ s|\bL(\w+)|\.L$1|g	if ($dotinlocallabels);
-	}
-
-	{
-		$line =~ s|^\s*(\.?)(\w+)([\.\+\-]?)\s*||;
-		my $c = $1; $c = "\t" if ($c eq "");
-		my $mnemonic = $2;
-		my $f = $3;
-		my $opcode = eval("\$$mnemonic");
-		$line =~ s/\b(c?[rf]|v|vs)([0-9]+)\b/$2/g if ($c ne "." and $flavour !~ /osx/);
-		if (ref($opcode) eq 'CODE') { $line = &$opcode($f,split(',',$line)); }
-		elsif ($mnemonic) { $line = $c.$mnemonic.$f."\t".$line; }
-	}
-
-	print $line if ($line);
-	print "\n";
-}
-
-close STDOUT;

@@ -231,7 +231,10 @@ static int zynqmp_handle_aes_req(struct crypto_engine *engine,
 		err = zynqmp_aes_aead_cipher(areq);
 	}

+	local_bh_disable();
 	crypto_finalize_aead_request(engine, areq, err);
+	local_bh_enable();
+
 	return 0;
 }

@@ -87,8 +87,6 @@ static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
 		!(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
 }

-bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg);
-
 int crypto_grab_ahash(struct crypto_ahash_spawn *spawn,
		       struct crypto_instance *inst,
		       const char *name, u32 type, u32 mask);

@@ -10,6 +10,7 @@
 #ifndef _LINUX_PUBLIC_KEY_H
 #define _LINUX_PUBLIC_KEY_H

+#include <linux/errno.h>
 #include <linux/keyctl.h>
 #include <linux/oid_registry.h>

@@ -43,6 +43,7 @@
 #define QM_MB_CMD_CQC_BT		0x5
 #define QM_MB_CMD_SQC_VFT_V2		0x6
 #define QM_MB_CMD_STOP_QP		0x8
+#define QM_MB_CMD_FLUSH_QM		0x9
 #define QM_MB_CMD_SRC			0xc
 #define QM_MB_CMD_DST			0xd

@@ -151,6 +152,7 @@ enum qm_cap_bits {
 	QM_SUPPORT_DB_ISOLATION = 0x0,
 	QM_SUPPORT_FUNC_QOS,
 	QM_SUPPORT_STOP_QP,
+	QM_SUPPORT_STOP_FUNC,
 	QM_SUPPORT_MB_COMMAND,
 	QM_SUPPORT_SVA_PREFETCH,
 	QM_SUPPORT_RPM,
@@ -161,6 +163,11 @@ struct qm_dev_alg {
 	const char *alg;
 };

+struct qm_dev_dfx {
+	u32 dev_state;
+	u32 dev_timeout;
+};
+
 struct dfx_diff_registers {
 	u32 *regs;
 	u32 reg_offset;
@@ -189,6 +196,7 @@ struct qm_debug {
 	struct dentry *debug_root;
 	struct dentry *qm_d;
 	struct debugfs_file files[DEBUG_FILE_NUM];
+	struct qm_dev_dfx dev_dfx;
 	unsigned int *qm_last_words;
 	/* ACC engines recording last regs */
 	unsigned int *last_words;
@@ -523,7 +531,7 @@ void hisi_qm_uninit(struct hisi_qm *qm);
 int hisi_qm_start(struct hisi_qm *qm);
 int hisi_qm_stop(struct hisi_qm *qm, enum qm_stop_reason r);
 int hisi_qm_start_qp(struct hisi_qp *qp, unsigned long arg);
-int hisi_qm_stop_qp(struct hisi_qp *qp);
+void hisi_qm_stop_qp(struct hisi_qp *qp);
 int hisi_qp_send(struct hisi_qp *qp, const void *msg);
 void hisi_qm_debug_init(struct hisi_qm *qm);
 void hisi_qm_debug_regs_clear(struct hisi_qm *qm);

@@ -138,12 +138,14 @@ class TestInvalidSignature(DynamicBoostControlTest):

     def test_authenticated_nonce(self) -> None:
         """fetch authenticated nonce"""
+        get_nonce(self.d, None)
         with self.assertRaises(OSError) as error:
             get_nonce(self.d, self.signature)
-        self.assertEqual(error.exception.errno, 1)
+        self.assertEqual(error.exception.errno, 22)

     def test_set_uid(self) -> None:
         """set uid"""
+        get_nonce(self.d, None)
         with self.assertRaises(OSError) as error:
             set_uid(self.d, self.uid, self.signature)
         self.assertEqual(error.exception.errno, 1)
@@ -152,13 +154,13 @@ class TestInvalidSignature(DynamicBoostControlTest):
         """fetch a parameter"""
         with self.assertRaises(OSError) as error:
             process_param(self.d, PARAM_GET_SOC_PWR_CUR, self.signature)
-        self.assertEqual(error.exception.errno, 1)
+        self.assertEqual(error.exception.errno, 11)

     def test_set_param(self) -> None:
         """set a parameter"""
        with self.assertRaises(OSError) as error:
             process_param(self.d, PARAM_SET_PWR_CAP, self.signature, 1000)
-        self.assertEqual(error.exception.errno, 1)
+        self.assertEqual(error.exception.errno, 11)


 class TestUnFusedSystem(DynamicBoostControlTest):